Archive for the 'Thinking' Category

I hate Tom Cruise



I gave a five-minute Ignite talk about gesture interfaces at Foo Camp a couple of weeks ago. It was an attempt to distill my thinking about the difficulties and opportunities of mid-air gesture input into a single serving. I wrote about 7,000 words and whittled them down to 700 over the course of a week. Fun, but five minutes can contain only so much hatred, and I have oh so much more.

Some thoughts on where computing is headed

This is largely a response to this video, informed by this other video:

I think that despite all his calls for “out of the box” thinking, Scott Jenson’s thinking is as bounded as the thinking he decries. I agree, apps suck, and yes, I love the idea of the browser as operating system, but I also think the idea of phones themselves as interfaces sucks. They are the apps of the physical world. We won’t need a Google for ranking the sensor-enabled objects around us because they exist in three-dimensional space just as we do. The whole point of physical computing is to eliminate screens as go-betweens.

To use his (kind of lame) example, if I want to interact with my stereo, I shouldn’t have to go to my phone. He just got done telling me how much it sucks that there needs to be an app for that, and then he tells me I can tap through a list of objects around me on my phone to interact with them. How about I look at my stereo? Or talk to it? Or point at it? Or think about it?

What we’re seeing is the dying of a computing metaphor. We have always had to go to computers and speak to them in their language. At first, hundreds of us flocked to massive computers and spoke to them in punchcard, an entirely human-unintelligible language. Then a revolution: one man, one computer. The graphical user interface, handmaiden to this revolution, allowed us to speak to the computer in a way we could comprehend, though it still required us to learn how to manipulate its appendages to accomplish the tasks we wanted performed. Now we’re in a world where each person has multiple, increasingly tactile computers. And as processor speeds grow and prices drop, it seems likely that the computer-to-person ratio will continue to increase.

The desktop metaphor, with its graphically nested menus and multiple windows, won’t survive. It didn’t translate well onto the pocket-sized screens of smartphones, and Siri is the first peal of its death knell. Siri eliminates the physical analog of a desktop/button pad altogether and replaces it with a schema-less model in which I can use a computer without learning anything about how it works.

Couple that with the increasing physical awareness and falling cost of networked devices equipped with cameras and sensors, and what you end up with is not a small computer we can carry with us to interact with the world around us but a giant computer which we inhabit, and which treats us and what we do as input.

What’s tricky about this is imagining the output. With each jump in computing, the new modes did not replace the old modes. They overlapped a bit, but mostly they expanded the possibilities of computing and the number of computable operations. No one programming on the command line imagined that a computer would one day be great for editing films. The command line is still very much in use today as it is still the best method of doing many things, but the GUI has greatly expanded the computable universe. Likewise, while it’s relatively easy to imagine the region where a physical user interface (PUI?) intersects the GUI (advancing slides in a keynote presentation without a remote, for instance), it’s much harder to imagine those tasks we’ve never even thought of as within the reach of computability.

[Figure: Computing Paradigms Bubble Chart]

And that’s what I’m really interested in, the film editing scenarios. Context and object awareness won’t require phones to rank nearby objects as we’ll be able to interact with them with minimal or no perceptible interfaces. We’ve slowly watched consumerization turn sophisticated operating systems into shiny idiot-proof button pads. There’s no reason to believe the trend won’t continue spreading into the backend, turning programming itself into a consumer behavior. At Google we’re obsessed with machine learning, but it seems to me the future may be its converse—human teaching. If people can tell their computers exactly what they want without having to learn C or Java, then they can start to ignore their computers entirely.

That’s the ultimate goal: invisible computing. After all, how often do you think about how your light switch works when you go to turn on the lights in a dark room?

Everything I know about interaction design I learned by making a scratch-n-sniff television

My favorite thing about my Scratch-n-Sniff TV is the conversations it spawns. I showed it recently at Maker Faire NY, and as at previous showings at ITP and at Greylock Arts, reactions were divided. About 70% of people were totally incredulous until they tried it, and then were delighted and had to find out how it worked. Of the remaining 30%, half looked at it suspiciously and rebuffed invitations to try it and the other half tried to predict how it worked before using it and then complained that the smells weren’t “accurate.” All of these reactions reveal an underlying attitude towards technology and its possibilities: the first, marvel—the what will they think of next effect; the second, suspicion—this has got to be a trick; the third, which shares elements of the second, a need to establish that we control technology—not the other way around.

[Image: the Scratch-n-Sniff TV]

Smell is subjective, it’s ephemeral, and it’s not binary. What smells like citrus to one person smells like air freshener to another; smells can’t be turned on and off, they waft, so getting people to believe that their actions resulted in equal and opposite smell reactions required some clever sleight of nose. First of all, I gave people clear visual cues. When you scratch a picture of chocolate, you’re much more likely to interpret the resulting smell as chocolate. I also made the screen respond to being scratched by fading, just as scratch-n-sniff stickers do after vigorous scratching. This tie-in to a direct physical analogue was key: people were much more likely to smell the screen where they’d scratched it, and the one-to-one correspondence between action and reaction primed people to smell. A couple of times I ran out of scents, and several people still swore they’d smelled scents that simply weren’t there!

HOW IT WORKS

[Image: a puff of atomized scent]

  1. I found that the transistor-based model of the Glade Flameless Candle automatic air freshener would fire once approximately every two seconds when powered for 500 milliseconds (as opposed to the earlier version, which relies on a resonant circuit and requires ten seconds before firing), so I hooked up its battery terminals to an Arduino, and voila! Controllable atomization of non-oil-based scents! (A rough control sketch appears after this list.)

[Image: the Arduino wired in place]

  2. Trying to create an effective scent disperser from scratch is madness. One of the benefits of piggybacking on all of Glade’s hard work is that it’s easy to refill the provided smell canisters with other scents. I got most of mine from the nice folks at Demeter.

[Image: the scents]

  3. I aligned the scent dispensers under a touchscreen that sends touch coordinates to the Arduino via a Processing sketch. Thanks to the hydrostatic properties of the fine-particle mist, when emitted, it flows up the screen and across it, sticking to it until the scent evaporates a few seconds later.

[Image: the screen and dispensers]
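For the technically curious, the control logic is simple enough to sketch out. What follows is a minimal reconstruction rather than the project’s actual code (the pin assignments, the four-dispenser count, and the one-byte serial protocol are all illustrative assumptions): the Processing sketch scales each scratch’s x-coordinate to a byte and writes it to the serial port, and the Arduino maps that byte to a dispenser and powers its battery terminals for 500 milliseconds, respecting the unit’s roughly two-second firing interval.

    // Minimal Arduino sketch, reconstructed as an illustration.
    // Assumptions: pins 2-5 drive transistors switching each dispenser's
    // battery terminals; Processing sends the touch x-coordinate scaled
    // to 0-255 as a single serial byte.
    const int NUM_DISPENSERS = 4;
    const int dispenserPins[NUM_DISPENSERS] = {2, 3, 4, 5};
    const unsigned long PULSE_MS = 500;     // ~500 ms of power fires the atomizer once
    const unsigned long COOLDOWN_MS = 2000; // each unit fires at most once every ~2 s

    unsigned long lastFired[NUM_DISPENSERS];

    void setup() {
      Serial.begin(9600);
      for (int i = 0; i < NUM_DISPENSERS; i++) {
        pinMode(dispenserPins[i], OUTPUT);
        digitalWrite(dispenserPins[i], LOW);
        lastFired[i] = 0;
      }
    }

    void loop() {
      if (Serial.available() > 0) {
        int x = Serial.read();                 // scratch x-coordinate, 0-255
        int idx = (x * NUM_DISPENSERS) / 256;  // which scent sits under the scratch
        unsigned long now = millis();
        if (now - lastFired[idx] >= COOLDOWN_MS) {
          digitalWrite(dispenserPins[idx], HIGH); // power the battery terminals
          delay(PULSE_MS);                        // hold ~500 ms so it fires once
          digitalWrite(dispenserPins[idx], LOW);
          lastFired[idx] = now;
        }
      }
    }

The Processing side then just has to detect a scratch, fade the pixels beneath it, and send something like map(mouseX, 0, width, 0, 255) down the serial port.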

If nothing is new under the sun, then why bother with today’s news?

[Image: palinpatra]

Some thoughts on the future of journalistic “content”—for a related project, check out my paywalls.

What is the future (if any) of professional, unfree, editor-refereed journalism? Setting aside for a second the usual economic arguments (why pay for something you can get for free, why wait for something you can get immediately, why all the news that’s fit to print when all the news fits and print is an afterthought), how might journalism turn all the granular data and digital resources now at its disposal into something worth paying for?

What brought all this about was a recent trip to Asia where I had, at the disposal of my itchy remote finger, 24-hour news from all over the world. I know it’s hardly news, but seeing sensationalist CNN next to the more staid but still alarmingly populist BBC World next to the multiple bland iterations of China’s CCTV really brought home the extent to which “news” stories are a manufactured product that responds in near real time to audience demand (for entertainment in the West, for “reassurance” in the East). The result in the West is hours of programming devoted to the information equivalent of dandruff: book burners and rednecks and science deniers and slutty heiresses and other subjects you’d think should be relegated to dark corners of the web but instead are broadcast globally and legitimized to an extent that would have appalled even the editors of the National Enquirer just twenty years ago. China’s crop yield statistics and endless political meetings and amazing traffic accidents are, in addition to being meaningless, mind-numbingly dull.

When left to free market forces, the news tends towards tabloidization. And this is not just a television news phenomenon. Note the Huffington Post‘s descent into celebrity gossip and Murdochian headline hyperbole. But here’s the really interesting thing about online media—while TV and print rely on largely fuzzy audience data and asynchronous adjustments and tweaks, online media can in real time and with absolute resolution determine what stories are getting clicked, linked, emailed, and tweeted and by whom, and rearrange themselves accordingly, minute to minute. News becomes a democracy where every mouse gets a vote. It’s hard to imagine that editors aren’t letting these statistics at least in part influence the types of stories they’re running. Even the august editors of the New York Times must be paying attention to the most emailed articles—that’s ad revenue.

That’s a grim prospect. Many serious newspapers agree and have either erected or are planning paywalls. But I’m still not convinced that asking people to pay for online news will end up netting any real gains for them. The subscription money they collect might equal the advertising money they scare away, but becoming unsearchable and unindexable will drive away good writers. The other possibility, the one the New York Times is considering, is allowing paywalled sites to appear in search results. That seems like a recipe for disaster, as it will encourage people to rely on either Google News or some new aggregating web service. So what can the future possibly hold? Maybe that question contains the answer.

TOWARDS A PRIORI JOURNALISM

Recorded Future is a startup that searches for trends and patterns in current events that may be imperceptible to readers but are obvious to computers. They’ve figured out a way to algorithmically parse the nature of news stories and rank their import, essentially developing a language that abstracts specific events into generic types. Their subscribers and investors, who include Google and many government agencies and Wall Street organizations of differing levels of nefariousness, can then look for patterns and trends that may indicate an oncoming event. It’s scenario planning with a statistical backbone. The military has for years been obsessed with the idea of reducing battles to a series of determining factors and then computationally predicting their outcomes.
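To make “abstracting specific events into generic types” concrete, here’s a toy sketch of the idea. It is emphatically not Recorded Future’s algorithm; the event types and keyword rules below are invented for illustration, and a real system would use statistical language models rather than keyword matching. But the shape of the computation is the same: specific headlines go in, generic event types and their frequencies come out.

    // Toy illustration: reduce specific headlines to generic event types.
    // Not Recorded Future's method; the rules below are invented.
    #include <cctype>
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // Lowercase keyword -> generic event type.
    const std::vector<std::pair<std::string, std::string>> kRules = {
        {"protest", "CivilUnrest"},        {"riot", "CivilUnrest"},
        {"merger", "CorporateAction"},     {"acquisition", "CorporateAction"},
        {"earthquake", "NaturalDisaster"}, {"flood", "NaturalDisaster"},
        {"election", "PoliticalTransition"}, {"resigns", "PoliticalTransition"},
    };

    std::string classify(std::string headline) {
      for (char &c : headline) c = std::tolower(static_cast<unsigned char>(c));
      for (const auto &rule : kRules)
        if (headline.find(rule.first) != std::string::npos) return rule.second;
      return "Other";
    }

    int main() {
      const std::vector<std::string> headlines = {
          "Protests spread to the capital", "Tech giant announces acquisition",
          "Floods displace thousands",      "Prime minister resigns"};
      std::map<std::string, int> counts; // event type -> frequency
      for (const auto &h : headlines) counts[classify(h)]++;
      for (const auto &kv : counts)
        std::cout << kv.first << ": " << kv.second << "\n";
    }

Once events are reduced to types with timestamps, “looking for patterns that may indicate an oncoming event” becomes ordinary time-series analysis.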

With computers reaching unprecedented speeds and processing power, it’s now conceivable to model incredibly complex systems. Add to that the ability to teach computers to interpret texts and classify their contents, and you approach a not-too-distant future in which people know the probable news weeks in advance! This isn’t a crazy idea; it’s already happening on a very small scale. Take, for instance, Muckrack.com, a site that aggregates journalists’ tweets, in a sense getting the news as it is made but before it goes to press. If you want to know what a particular columnist is going to be writing about, pay attention to the sorts of leads he’s soliciting.

Imagine for a second what the world would be like if the paid-for, professional news were not a running account of what had happened, but a forecast of what was probably going to happen.

In such a future, a news organization’s most valuable asset is its archives. Years ago, the New York Times tried and failed to charge for access to its archives, probably because that was too literal an approach, akin to charging for a collection of reporters’ notes rather than a finished newspaper. It’s the interpretation that adds value. You could have a news source whose focus was on historically significant news, determined a priori. Based on comparisons to older news, computer-aided journalists would be able to identify the beginnings of revolutions years before they occurred, distinguish hit movies from duds as soon as they were greenlighted, and weigh in on the importance of leaders on the eve of their election. Today’s stories could be chosen on the basis of their future historical importance, given their similarities to past stories. There are already a bunch of services that find connections in news stories. Though they’re still in an incipient phase, it’s easy to imagine how, combined with a computable semiotics of events, they might lend themselves to untangling the web of historical cause and effect and putting it at the service of the future.

You could similarly imagine a news source that focused on unprecedented news, on stories that fit no known patterns, really pushing their newness. The converse, a source devoted to describing just how old each piece of news really is by digging up an exact analogue from the historical record, might keep cynics reading even after the paper they swore they’d never abandon is replaced by screens or projection or some other digital means of delivery.

In either case, the news again becomes worth money, as the information it provides is “actionable,” and the role of the professional, paid journalist is preserved, though transformed. Pattern matching is the province of computers, but I suspect the human mind will always retain its primacy in the fields of analogy and metaphor. Finding the future in the past is a poetic task and having a class of highly visible, professional introspectors of a poetic bent might not be a bad idea—regardless of the possible future significance of any of the other ideas I’ve expressed.

DID YOU KNOW I JUST STOLE YOUR VIDEO?

Plentiful bandwidth, virtually free storage, and internet-connected cameras have translated into a glut of online video. When anyone can upload to the online panopticon, it’s only a matter of time before people start exploiting the web’s massive audience to crowdsource moonwalks, personal interpretations of the Mos Eisley Cantina scene, ads, or homemade porn—for fun and for profit.

Well, guess what? I don’t want to see your videos. Not the ones you’ve uploaded at least.

The proliferation of cameras everywhere makes it more and more likely that you are being recorded and uploaded the minute you do something remotely interesting. See, for instance, Hong Kong Bus Uncle, the infamous “don’t tase me, bro” (which I find so distasteful that I refuse to link to it), Chinese Airport Woman, and el niñato de Valencia. But again, these are actions performed in public—the operating assumption has to be that someone is recording. And with sites that make live broadcasting as easy as hitting a button on your phone (UStream, for instance) popping up like nefarious little mushrooms, it’s entirely possible that your public meltdown will be captured and transmitted live and from several different angles. Totally unscripted reality TV: it’s like your real life, only more interesting.

But not to me. I’m more interested, at least for the purposes of this argument, in recording deviously, either in secret or with unacknowledged intentions. At some point in the future, it’s conceivable that there will be no place where one is legally protected from being filmed and/or photographed. Or there will be so many people and devices filming and uploading so many things that prosecuting them all will be impossible, which is functionally equivalent. It is from said future that the ideas that follow come.

What if I created an iPhone app that requires you to hold the device up to your ear as if you were talking on the phone (or that records while you’re actually talking, on a phone with an open source platform), entirely as a pretext to upload video that the camera on the back of the phone is capturing without your knowledge? There would probably be a lot of hands in the way, but hand-blocked frames would be easy to filter out in software. You’d never be in the video, so it would be hard to definitively identify it as yours.

A slightly more elaborate variation on that theme would be to build cameras into other devices. One of the big payoffs for me of the Eternal Moonwalk mentioned above is that the majority of people tend to moonwalk across their living rooms, so you get to see the insides of people’s homes all over the world. What if everyone who bought a Roomba were unwittingly inviting an autonomous, wirelessly streaming surveillance camera into their home? The easiest way I can think of to do this is embedding cameras into particularly nice pieces of furniture left out on New York City sidewalks.

Page scraping and iframes offer another interesting alternative video source, one that might actually be much less illegal, since technically the video never moves from its original location. You find video content, preferably unembeddable proprietary stuff, and use a web script to strip away any surrounding material and reproduce the video someplace else, without ever copying it.

My favorite approach, though, is simply to lie about your intentions. It might be as simple as creating a video high-score board for an online game, where instead of their initials, people leave a ten-second taunt for the players they’ve just displaced. A database filled with video taunts has many potential uses. Or it might be more complicated: say, an online application that uses face detection to perform some non-camera-related function—shaking your head to pan an image back and forth, for instance—so that when the application requests access to the user’s web camera, he thinks nothing of pressing “OK,” never suspecting that his face is being displayed on a billboard somewhere across the globe with the supertitle “Did you know that 1 in 3 people has genital herpes?”

Or, as I discovered in the process of writing this post, offer some sort of online video conversion. Video formats are confusing as hell. Put up an all-in-one converter, make it look slick, and simply “keep a backup copy” of people’s videos when they upload them!

Trust Fund Thursdays

Design an experiment to take you out of your comfort zone in terms of how you relate to your body and space.

That was a no-brainer for me. I hate taking pictures of people I don’t know. I’m not sure why it makes me uncomfortable, but it does. I’ve lived in some of the most bizarre and photogenic (and crowded) places in the world but have frightfully little photographic evidence of anything other than their architecture. I set out on a beautiful spring day to photograph random New Yorkers. My initial idea was to stop in front of people, plant my feet, raise my camera, and, without saying anything, take their picture and walk away. This quickly stopped being uncomfortable, especially since no one seemed all that surprised or upset. My body is unexceptional in the space of New York—I’m one of the crowd. In Asia, my face instantly identified me as a foreigner, and it was the motivations I imagined my subjects could attribute to my picture-taking, combined with the total unpredictability of their reactions, that made me so hesitant to uncap my lens.

[Photos: DSC_0788, DSC_0785, DSC_0778]

I considered forcing an interaction. It might make me uncomfortable to act strangely in front of these anonymous New Yorkers. I could express some emotion or attitude when taking the picture (disgust came to mind: grimacing once the picture was taken and shaking my head sadly as I walked away). That seemed unnecessarily confrontational, and with the current state of my back, I didn’t want to risk a scuffle. The threat of bodily harm, whether real or perceived, is a whole different universe of discomfort, and I’m no Marina Abramovic. I could also go the other way and be extremely friendly and use the element of surprise to my advantage. This seemed like a better idea, so I stopped random people and asked to take their picture.

[Photo: DSC_0797]

That didn’t make me uncomfortable at all and really didn’t involve my body and space so much as it did my mind. I had a prop (the camera), it was a beautiful day, and the force of my delivery made people almost universally acquiesce to my request. Having a purpose emboldened me to overcome the discomfort of getting close to a stranger, looking him/her in the eyes, and making a request. It was like asking for directions. The only no I got was from a couple of European tourists who seemed to think I was running a scam.

I couldn’t exactly replicate the conditions of the initial discomfort I set out to overcome in New York, so I abandoned the idea and enjoyed the sun.

Several days later, I was lying on a grassy hill in Central Park, reading a magazine after a doctor’s appointment in Columbus Circle. There weren’t a lot of people around. A couple was lying on a blanket to my right, calling frantically to their two dachshunds, Dottie and Dixie, whom the combination of sun and open space to explore had apparently rendered deaf and impervious to a proffered frisbee as they disappeared down the hill, their collars clinking madly. To my left, two Puerto Rican girls in tight jeans and big sneakers discussed the probable futures of their classmates. Somewhere behind me, a woman with a voice hoarse from a late loud night exclaimed, “That’s soooo fucking funny! Today is sooo fucking trust fund Thursday,” which was met with clapping and hooting laughter. I didn’t turn around.

I read until the people to my right finally corralled their dogs and packed up their blankets and the Puerto Rican girls had run out of classmates with prognosticateable futures and turned separately to attend to their cell phones. Another explosion of laughter from behind me caused me to look for its source, which I discovered was three hipsterish white women and a slim and, even from a distance, obviously gay black man. I found myself disapproving of them, if for no other reason than that they seemed a New York cliché—the girls in leggings, ratty tee-shirts, and oversized sunglasses, the guy snarking comments that would set them all off laughing—and they were having much more fun than the afternoon and their surroundings warranted. I realized I’d found my discomfort.

I stood up, brushed the wet grass off my pants, and walked over to them. “Can I join you?” To my surprise, they burst out laughing. “Of course! We’ve had designs on you all afternoon. Sit, sit. Here, have some champagne.” In addition to a couple bottles of champagne, they were drinking something green out of plastic cups. “Are those mojitos?” Apparently, the Mexican man who had come by an hour earlier selling ice-cold water and Gatorade had an unadvertised happy hour special hidden in the rolling suitcase he trundled behind him. I sat with them for over an hour, earning the epithet “Ambitious Alex” in the process, receiving several hugs, handshakes, and a lot of playful innuendo that made me terrifyingly uncomfortable while thrilling me at the same time.

I don’t like to be noticed in unfamiliar social situations until I’m confident I understand their dynamics. It took me almost six months to make my first post to the ITP student list. In this situation, however, I made myself the center of attention. I had to talk about myself without a good idea of what sort of tone to adopt or what reaction to expect. I had to shape my first impression, as I could not rely on having months of repeated interactions over which to hone it. I wanted Sam, Michelle, Deb, and Stefan, whom I liked as soon as I sat down, to like me back.

I only performed this experiment once, but I plan to try it again the next time an opportunity presents itself. I learned that what makes me uncomfortable is not how I relate to my own body and space but how my body relates to other bodies. Which is what led me to the idea of measuring those parts that vary most from body to body for my embodiment object.

Designers for Development


So-called “design for development” straddles the line between humanitarianism and paternalism, as tends to happen in any situation where there is a significant power/wealth disparity. The problems of economic disparity on a geopolitical scale are larger and more complex than can be solved in a single human lifetime and, as such, seem overwhelming and irresolvable, as do the environmental crisis and nearly every domestic question that appears repeatedly in the New York Times. Last week, Despina argued that though such problems (she was referring specifically to the environmental crisis) are in a sense unsolvable—at least by us—it becomes our responsibility to change attitudes and model appropriate behaviors to ensure that the following generation is not hampered by the political resistance that hogtied us. What does that mean in terms of design?

At least based on this week’s readings, it seems like no one is too sure. Drawing on Amartya Sen’s formulation of welfare economics, Martha Nussbaum lays out a philosophical framework for thinking about people’s basic needs separately from any cultural considerations. She identifies ten different basic “capabilities” as prerequisites for a full and dignified life. Designers and development professionals wax equally enthusiastic about the capabilities model’s applicability in developing world contexts—especially as an alternative to more value-laden functional and economic approaches that lead to many children with one laptop but no food. The universal applicability of this particular approach, however, comes at the cost of vagueness. Economic and functional models are prescriptive: make a bicycle generator to save on electricity costs; build a water pump to cut the time spent collecting water from five hours a day to just two. Makes perfect sense. Justifying the same technology in terms of its user’s dignity is a little trickier.

In the case of the water pump, for instance, who’s to say that our hypothetical user doesn’t enjoy the five hours spent collecting water, that those five hours provide him/her with a great sense of accomplishment through physical exertion and communal collaboration, all of it lost to the pump, which landed in the community like the Coke bottle in The Gods Must Be Crazy, an alien artifact that disrupts years of harmonious if hardscrabble living? To someone living in the West, it makes intuitive sense that something that is hard to do and takes five hours is less desirable than an alternative approach that accomplishes the same result with less effort in two hours. Increased productivity! More time to do more things!

But is productivity, or should it be, a universal goal? It was the search for increased productivity and decreased effort that led to mercantilism, colonialism, and most of the developing world’s problems to begin with. Well-intentioned and empathetic professionals devote themselves to the arduous and often thankless task of “developing” their less fortunate brethren only to then fantasize in writing and in film about a return to an Edenic state of nature. Avatar is just the latest in a long tradition of white man’s guilt stories in which a nature-worshiping indigenous race collides with an advanced techno-capitalistic race that threatens to obliterate it until one of its more enlightened members switches sides and fights to save it or at least delay its destruction. But this is the same sort of hubris that I objected to in environmental discourse last week: the market can’t fix what the market caused.

Design is not inherently capitalistic. We have been inventors for our entire history. People use and make objects and tools in every culture. Making tools that work well and objects that are beautiful (whatever the local definitions of good and beautiful) is a natural species-wide compulsion. The reasons for doing so, however, are culturally (or economically or politically or socially) determined and have changed—placating angry gods, saving time, increasing market share, signaling luxury. As many of the readings we looked at this week noted, the world’s wealthiest countries are the principal consumers of contemporary design, so design education is tailored to addressing their needs. The market reduces design to applied aesthetics on the low end and conceptual misanthropy on the high end (talk to any MIT student who’s lived in Steven Holl’s Simmons Hall).

That kind of design does not work in the Chinese countryside or in a remote Bolivian town or in an African village, for the same reason that democracy will never work in Afghanistan. That doesn’t mean that designers shouldn’t work to improve the capabilities of people in developing nations, any more than it means that statesmen should abandon seeking peaceful relations between nations. It does mean, however, that both should be more sensitive to the problems they are seeking to solve from the point of view of the people who will ultimately be living with their solutions.

DESIGNING THE DESIGNER

Design for development. I readily admit that each assignment has caused me mild insomnia and that the constant thinking stymied my ability to make anything until the day or two before its due date, but never as badly as this week. I lived in the developing world for nearly seven years, several of which I spent creating products for developing markets, but I couldn’t think of a single need I might address in a manner consonant with all my grandstanding above. I thought about creating passive solar heaters for the parts of China where Mao’s enlightened policies dictated there be no central heating, but then I remembered that southern Chinese think it’s unhealthy to be warm when it’s cold outside. I thought about anti-corruption devices of different kinds, but realized that no sane Chinese person would ever believe that a whistle-blowing device was not part of an elaborate entrapment ploy. Nussbaum’s capabilities were little help in generating other ideas.

But I easily came up with a handful of products almost certain to fly off the shelves in Chinese supermarkets. Why? Because I spent years talking to Chinese consumers, working to understand their needs, no matter how bizarre they seemed to me. Nabisco, for instance, was not selling nearly enough Oreos to meet its targets. The reason, it turns out, was simple: the Chinese have never eaten cookies. There is simply no occasion for eating them. They’re too sweet and fatty to eat for breakfast, too heavy to snack on, and too cheap to serve to guests for dessert. Once it realized this, Nabisco created a snack-sized Oreo wafer that was less sweet and much more familiar in form to Chinese snackers. It was an instant hit.

Everyone seems to agree that economic development is the road to, if not salvation, then at least improvement, for former colonial countries. I’m not sure that in the long run encouraging local entrepreneurship through micro-finance and other community-based economic schemes is anything more than a way to polarize communities and delegate the problems of doing business in developing economies to local lieutenants, especially in resource-rich countries we will need to plunder in the future. But I do know that the tools of capitalism—of marketing in particular—are the most efficient way of uncovering people’s “unmet needs” and tapping into their desire to own and use things.

My question then shifted from how I can possibly design for development when I have no idea what problems I’m trying to solve to how anybody can. The answer, most of the time, is that NGOers solve problems they take as givens, and those solutions aren’t adopted. Mosquito nets get used on parents’ beds or put away and saved because, according to local lore, children don’t need them. What if, instead of prescribing solutions, designers focused on using their critical thinking to uncover the problems?

Rather than a specific product or design to solve a problem established a priori, I am proposing using marketing principles to design a process that uncovers problems with a likelihood of successful solution, guarantees community involvement and investment in the outcome, and can be carried out by design teams with minimal local experience in relatively short (six- to ten-week) timeframes.

A couple of important pointers:
– Small, multi-functional groups work better, both on your side and the “client” side.
– Momentum is crucial. Work as hard as you can as fast as you can.
– This process will be trying and tiring. It’s a Navy SEALs approach to local innovation.

STEP 1: IMMERSION

Be them, with them, about them

Be them, with them, about them is the credo of consumer insight. It basically boils down to “do your research” but proves especially useful in developing world contexts. Talk to people who have some perspective on your challenge. If you’re trying to help African farmers, don’t just talk to experts on small-scale African farming—if you talk to the same people as everyone else, you’re going to have the same ideas as everyone else. Learn about farming and growing things. Talk to farmers of all scales in different climates and conditions, to African urbanites who’ve never farmed, to agronomists, to the ladies at a garden club, to Africa scholars, to grizzled Africa hands, to meteorologists, to entomologists, to local food wholesalers, to children and their grandparents. Visit greenhouses and farms and markets and irrigation projects.

Once you’ve got some idea of the context of these farmers, learn to see the world as they do. Spend time farming in a variety of settings. Plant a garden. Use the tools they use. Live with them and shadow them until they stop paying attention to you. Try to sleep as they sleep, work as they work, bathe as they bathe, eat as they eat, and drink as they drink, and, holiest of holies, think how they think. Ask questions constantly. Don’t assume anything. Laugh at your own ignorance. This sort of research, if conducted with an open mind and a genuine desire to experience another’s world, leads to a volume of insight that’s simply not available to people who go home to their air-conditioned tents at night. It also lays the groundwork and relationships for the next step.

STEP 2: IDENTIFYING THE PROBLEM

Having spent the bulk of your time being, shadowing, and talking to your target “consumers,” you should have a good idea of the things they consider troublesome or problematic. You should also have identified the people within the community who will make good brainstormers and thoughtful discussers, as well as the stakeholders whose buy-in will be necessary to ensure any sort of lasting solution.

When you’re ready, make a bit of a spectacle when they’re around, signal that something out of the ordinary is happening, and recount the troubles you’ve observed while living with them (long hours, back-breaking lifting, not enough coffee), soliciting input and encouraging them to discuss how to solve your problems. Since you are an outsider, they will consider all sorts of solutions to your problems that they wouldn’t normally entertain, and more often than not they will start bringing their own experiences to bear. It’s at this point that the addressable problem you’re looking for should boil to the surface.

STEP 3: ADDRESSING THE PROBLEM

Now comes the familiar iterative design process. Try to use locally available materials and techniques that are within reach of the community in which you’re working. Enlist local helpers. Credit them with all progress and good ideas. Try out your design and get them to try it out. What don’t they like about it? What could be better? Iterate. Repeat.

Once a working solution is in place, see if you can get your community to show it off to another nearby community. Once they do, the idea has become theirs and you can leave, knowing that knowledge has been transferred and that you’ve earned a hot shower.