Some thoughts on where computing is headed

This is largely a response to this video, informed by this other video:

I think that despite all his calls for “out of the box” thinking, Scott Jenson’s thinking is as bounded as the thinking he decries. I agree, apps suck, and yes, I love the idea of the browser as operating system, but I also think the idea of phones themselves as interfaces sucks. They are the apps of the physical world. We won’t need a Google for ranking the sensor-enabled objects around us because they exist in three-dimensional space just as we do. The whole point of physical computing is to eliminate screens as go-betweens.

To use his (kind of lame) example, if I want to interact with my stereo, I shouldn’t have to go to my phone. He just got done telling me how much it sucks that there needs to be an app for that, and then he tells me I can tap through a list of objects around me on my phone to interact with them. How about I look at my stereo? Or I talk to it? Or I point at it? Or I think about it?

What we’re seeing is the dying of a computing metaphor. We have always had to go to computers and speak to them in their language. At first, hundreds of us flocked to massive computers and spoke to them in punchcard, an entirely human-unintelligible language. Then a revolution: one man, one computer. The graphical user interface, handmaiden to this revolution, allowed us to speak to the computer in a way we could comprehend, though it still required us to learn how to manipulate its appendages to accomplish the tasks we wanted performed. Now we’re in a world where each person has multiple, increasingly tactile computers. And as processor speeds grow and prices drop, it seems likely that the computer-to-person ratio will continue to increase.

The desktop metaphor, with its graphically nested menus and multiple windows, won’t survive. It didn’t translate well onto the pocket-sized screens of smartphones, and Siri is the first peal of its death knell. Siri eliminates the physical analog of a desktop/button pad altogether and replaces it with a schema-less model where I can use a computer without learning anything about how it works.

Couple that with the increasing physical awareness and falling cost of networked devices equipped with cameras and sensors, and what you end up with is not a small computer we can carry with us to interact with the world around us but a giant computer which we inhabit, and which treats us and what we do as input.

What’s tricky about this is imagining the output. With each jump in computing, the new modes did not replace the old modes. They overlapped a bit, but mostly they expanded the possibilities of computing and the number of computable operations. No one programming on the command line imagined that a computer would one day be great for editing films. The command line is still very much in use today as it is still the best method of doing many things, but the GUI has greatly expanded the computable universe. Likewise, while it’s relatively easy to imagine the region where a physical user interface (PUI?) intersects the GUI (advancing slides in a keynote presentation without a remote, for instance), it’s much harder to imagine those tasks we’ve never even thought of as within the reach of computability.

[Figure: Computing Paradigms Bubble Chart]

And that’s what I’m really interested in, the film editing scenarios. Context and object awareness won’t require phones to rank nearby objects as we’ll be able to interact with them with minimal or no perceptible interfaces. We’ve slowly watched consumerization turn sophisticated operating systems into shiny idiot-proof button pads. There’s no reason to believe the trend won’t continue spreading into the backend, turning programming itself into a consumer behavior. At Google we’re obsessed with machine learning, but it seems to me the future may be its converse—human teaching. If people can tell their computers exactly what they want without having to learn C or Java, then they can start to ignore their computers entirely.

That’s the ultimate goal: invisible computing. After all, how often do you think about how your light switch works when you go turn on the lights in a dark room?