Archive for the 'Spring 2009' Category


Toys, Working!

AL-gorithm

ALgorithm

AL-gorithm is a completely analog text munging algorithm in three parts.

AL-gorithm Closeup

The first is an interactive version of a passage from All the King’s Men, created by cutting out all but a single character from laser prints of the quote. I intended to letterpress the quotes to explore the text’s physicality in full (though conceptual completeness probably dictates that I should have cut my own font) but decided to temper my art with a little reason. I did, however, preserve my intention with my choice of font: 24-point Bodoni was the typeface I would have used had I laid out the type and printed it on a press. The physical algorithmic process is documented here.

AL-gorithm installed

The second is a bag filled with the discarded bits of text. In digital space, memory registers that held the initial text can be overwritten. Clearing memory in the analog world is a little more complicated.

AL-gorithm closeup

O Template

The third is a visualization that’s intended to emphasize both the physical origin of text and its arbitrariness. The cutout for each letter produces a unique pattern. I’ve used this pattern to generate visualizations that highlight the letter’s frequency and distribution while also serving as a symbol for the letter itself. Taken together, the pattern visualizations form a new abstracted alphabet, in which I’ve rewritten the quote. Certain letters—q, x, j, z—don’t appear in the quote and are thus not part of the new alphabet. My intention here is to make the levels of abstraction that underlie programming languages easily comprehensible. In this example, if the alphabet is machine language, then these patterns are written in a language that’s one level up. Remember, we are the machines in this analogy, so we can’t understand the higher levels of abstraction.

Here’s the text, recursively generated by reinserting the letter-frequency patterns into the original text (“recursively” makes it sound like it was done programmatically—I actually created the pattern files by hand in Illustrator and then used them to create a font).
Recursive Penn Warren

REMEMBERING THE TEXT

There is definitely something liberating about reducing Shakespeare and Austen and Whitman and Frost to algorithmic plasticine. It’s healthy occasionally to check our reverence of texts lest the objects usurp the meaning they contain, but at the same time, it’s important to recognize their status as objects, as discrete entities with physical texture and context. Digital text is infinitely malleable, yes, but it is also ephemeral. Close a book, and the words remain; turn off the screen, and they’re gone.

I’ve spent an entire semester slicing up texts using a variety of digital methodologies and philosophies, moving from grep’s graceful julienne on the command line to much more vigorous and grammatically aware Java frappés. My goal throughout: forcing text to perform all manner of cruel contortions for my amusement and edification, compressing a novel into a few lines or stretching out sentences into languid visualizations strung with looping semantic threads. I can’t help feeling that in the process I lost something, and it’s taken me all semester to figure out that it’s a sense of text’s increasingly atavistic physical nature.

This project was born of the late-night liaison of two ideas.

  1. What happens to all the text that an algorithm discards? Writing text-munging algorithms is relatively painless, so much so that it’s easy to forget the text entirely. I wanted to rediscover text munging from the algorithm’s perspective. I documented the entire process of becoming an algorithm.
  2. Moving text off the screen/page and into three-dimensional space. I have been thinking recently about designing interactive narratives that someone can experience architecturally, literally walking through a story. My initial sketches were based on “physical” interactions with stories in a digital game environment, but that raised the question, “What would an architectural text look like?”

BodyWorlds Cross-section

Brian Dettmer’s Book Autopsies provide one pretty good answer. Their pages are static, however, transformed by his nimble cutting into something other than book pages, something beautiful but unreadable and, if we believe his nomenclature, something dead. That reminded me of some of the cross-sections of human cadavers exhibited in Vesalian glory at the Body Worlds expositions—you can still recognize the body even though it’s been unnaturally dismantled. I wanted to keep the text alive but still allow you to move through it. Here are a bunch of other three-dimensional text sculptures.

25 Bond Street Facade

On mornings when the sun is out, I like to walk to school along cobblestoned Bond Street and look into the gallery windows, scoff at Herzog & de Meuron’s ridiculously splashy facade, and marvel at the understated, multi-faceted building at No. 25, which provided the final piece of inspiration. I had already started cutting out letters when I realized that the stacked patterns looked a lot like the building’s facade. Architectural text indeed!

I Have Become Java, Destroyer of Words

Everything happens for a reason.

It’s not my place to ask why.

I take the sheets as they come to me, one at a time, and make my marks in pencil, two along each edge. I join the marks to create a frame. I measure 5mm from the bottom of each line and then 2mm above that and mark them off as well. I place a ruler along the first line and cut along its edge with an Xacto knife, mindful of the Target.

Sometimes, the Target is easy to spot. If it’s a letter with an ascender or descender, p for instance, it catches my eye like a little hook. I lift the knife before I reach it, skip over it, and then continue cutting. But some Targets like to hide. They play tricks on the eye. H is a gregarious Target; it mingles easily with commoners such as t and e and hangs on the words of sophisticates like g and p. Such a Target is smooth to the eye, like a polished pebble. Even when I read the line to myself I sometimes miss it. Such a Target is trouble.

It’s OK to miss a Target once. Each Target is held by strips running along its top and bottom. Once I’ve cut both, I turn the page on its side and make short cuts along the edges and around the Target, lifting out the freed bits carefully with the point of the Xacto and storing them in a bag. One never knows when one might need a letter. If I miss a Target twice, top and bottom, and it is cut free, that is a Bug and there is nothing I can do but start over.

At first, the process was novel and filled with discovery. I learned to score the paper before cutting it and, inspired by Adam Smith and Frederick Taylor, I sought to divide my tasks by type—scanning, measuring, marking, cutting, vertical, horizontal—and thereby increase my efficiency. This, however, proved so boring that I lost interest and quickly found my work infested with Bugs. I returned to the original loop: measure, mark, cut, turn, cut, free, and repeat.

I worked for 23 hours, so you might think that I could recite the words I was cutting around by heart. The truth is that not once did I read what I was working on. I couldn’t tell you what the text was about or even whether it was a text at all. All I can tell you is that when I found a Target, I cut around it.

Intention lives outside of the ALgorithm.

Making a sound-activated iPhone camera app

SoundSnapp Initial Sketches

Apple’s camera interface sucks. It sucks for the user, who has to hit a button on the screen (thereby shaking the camera and virtually guaranteeing a blurry result), and it sucks for the developer, who is forced to use a modal UIImagePicker view to access the phone’s camera.


For my iPhone development class, I wanted to create a minimal sound-activated camera interface. Since my knowledge of Objective-C is pretty rudimentary, I broke the task up into a series of subtasks:

  1. Accessing the camera without using UIImagePicker: Scouring the web for information on circumventing Apple’s camera interface led me to the approach that many jailbroken apps use: accessing the camera’s hidden classes through a private framework and then saving the pixels of the preview as a UIImage. After an aborted attempt to create a toolchain, I gave up and eventually found Norio Nomura’s extremely helpful GitHub repository, which includes a class called CameraTest that provides three different ways of capturing photos by invoking classes from the private framework dynamically at runtime—that means no toolchain and no weird compilation requirements.

  2. Metering and displaying sound levels: I played with SpeakHere, Apple’s example app that does just this, and almost cried. It is the most horrendously complicated thing I’ve ever seen—OpenGL, eight classes, tons of C. I suspect that Stephen Celis, the creator of the extremely simple and helpfully documented SCListener class, must have been inspired in part by a similar sense of despair. SCListener outputs the peak and average levels on the microphone and requires nothing but the AudioToolbox framework. I linked the peak to an animated UISlider and presto, a sound meter.

  3. Creating an intuitive way to set the sound threshold: One of the benefits of using a UISlider as the sound meter is that it is also a slider! When the user touches the slider, it stops monitoring the sound and responds to his finger. Wherever he releases it, that’s the new threshold. I still need to add some persistent feedback, possibly a small colored bar or other subtle indication of the threshold’s current value. Even without it, it works pretty well.

  4. Having a countdown timer option: For hands-free operation or self-portraits. This was a trivial matter of setting an NSTimer. The most difficult thing was figuring out how to create a tab bar button that would change its image with every user touch.

  5. Creating an automatic email feature: This seemed like it was going to require a complicated notification system and PHP scripts on a web server, but Apple is opening up apps’ access to email in the forthcoming 3.0 SDK, so this feature is on hold until then.

Out of Bounds

Since my workload escalated beyond manageability last week, I’ve been worrying about the future of Game Studies. When undone tasks loom so large that they block out the sun, rather than turning to face them like a man, I scurry into the shadows like an insect. A videogame-playing insect.

Videogames are for me principally a form of procrastination, and I suspect I’m not alone in this. I play games rather than do what I should be doing, especially when whatever that is involves concentrated mental effort. Games engage me on a level that allows me to turn my mind off. As Jonathan Blow said at his talk last week, part of games’ appeal may rest on their construction of a space where our roles and goals are clearly delimited and defined—in contrast to the rest of our lives. I play games so I don’t have to think.

This is not to suggest that games aren’t worth attention and study; that’s not what I’m saying at all. I’m saying that the close readings and analysis commonly employed in other critical scholarship may prove too high-level, given that we interface with videogames at an almost autonomic level. They might add a layer of metaphorical cultural significance to our experience of games, but they won’t help us “understand” the experience itself any better because, ultimately, the experience is outside of our understanding. It makes me think of Freud and the glaring fallacy of psychoanalysis—that one can rationally analyze the irrational.

This may also have been what Jonathan Blow was referring to when he discussed “ethical” game design. If what I’ve said above is true, then videogame designers have unwittingly let themselves in through the brain’s backdoor, gaining wholesale access to neural nether regions that non-digital games only access in a much more limited and sporadic fashion and that other activities barely access at all. Can we then really talk about meaning in games the same way we do in books and movies?

“No,” says the gamer’s glazed look, his reflexive button pushing, and his worried parents who accuse him of not “using” his brain. The way we’ve been talking about games is the way Freud talked about dreams—using analogies. People write about writing, make films about film, and paint about painting. Where’s the game that plays about games?

Game Over?

In his essay on game mods in Gaming: Essays on Algorithmic Culture, Alexander Galloway draws parallels between the principles of Godard’s countercinema and what he calls aestheticized gaming or countergaming, but he doesn’t discuss what seems to me the most interesting and problematic question that game mods raise: namely, when is a game finished? I don’t mean from the creator’s point of view. The majority of films, books, artworks, and games are effectively “finished” when they’re “released” into the world—director’s cuts, remasters, reissues, sequels, and new editions notwithstanding. So-called “mods” reprise the work but I would argue are not part of it—the infamous Nude Raider, the string of posthumous Ludlum-branded Bourne novels, and movies such as Pimento, my friend Dave Fisher’s Memento-inspired reediting of Pee-Wee’s Big Adventure, form part of the cultural response to a work, not of the work itself.

What I’m interested in is the point at which a videogame finishes from a player’s perspective. For the most part, books are finished when every word has been read, movies when every frame has been seen, music when every note has been played. They may be revisited endlessly, resulting in new meanings and interpretations, but the content is unchanging. Much of a game’s content, however, is generated as it is played. When is all that content exhausted? Is it when the game is beaten? Players continue playing even after they’ve “beaten” a game. We discussed speed runs and other “high-level” forms of play last week, players of vintage arcade games compete to get to the “kill screen” and play beyond it, and in his essay, Galloway talks about games that come bundled with level editors, inviting infinite extension. Maybe, then, “game over” signifies the end of a game in that it signals the end of a player’s input. But this is problematic: “game over,” as in “the game is finished,” lends itself just as easily to the imperative reading “play the game over.”

You’re thinking, “this is silly, a game for me is finished when I stop playing.” From a performative perspective, yes. But not formally. A movie turned off halfway is not finished, nor is a book that is abandoned two chapters in. Though I haven’t fully wrapped my head around all this, I keep on coming back to the theater and the difference between a written play and a particular performance. I have spent much of this semester in Plato’s dimly lit cave, staring at shadows on the wall. I keep hoping that once the outlines become clear, once I can confidently identify the beginning and end of a shape, I’ll finally know what the hell it is I’m looking at.

Augmenting Reality

I know, I know, augmented reality (AR) is so ten minutes ago. But I still don’t think people are using it to its full potential. Saqoosha’s FLARToolKit is now well documented enough to allow even ActionScript noobs to get fabulous 3D images to display on their webcams.


When I get the chance, I’m going to work on making AR more interactive, but in the meantime, try out my Living Room, a prototype for a DIY interior design application that’s in the video above:

  1. Download and print out this image
  2. Make sure you have a webcam hooked up
  3. Launch the demo here
  4. Flash will ask for permission to use your webcam but you won’t be able to click on the button for about fifteen seconds while all the assets load (I need to add a loading bar)
  5. Hold the printed symbol up in front of your webcam and enjoy!


People are constantly finding applications that take better advantage of AR’s unique characteristics rather than using it as eye candy (a trap I tried, not entirely successfully, to avoid below). AR allows you to insert 3D objects into the viewer’s space, objects the viewer can manipulate and interact with. That’s pretty cool. You could, for instance:

  • type a message on the back of a 3D shape so that only someone who knew to shake the marker a certain way could turn it around and read the message;
  • link different buildings or city blocks to unique markers and then try various urban layouts, writing traffic algorithms that simulate flows through the space and update in real time according to the layout;
  • use an AR sticker to generate a 3D mask that would follow your head when you turned it;
  • put an AR sticker on your forehead and then track it to produce responsive panning in a 3D environment.

People have made AR tee-shirts, and Topps has used the technology to enhance its baseball cards.

And because virtually anything works as a marker, you could in theory detect something ubiquitous, say a black square, and use it to project advertising behind a user a la 1-800-54-GIANT behind the batter’s box at Fenway.

The hardest part of getting AR working in Flash is getting all the various pieces to play nicely with each other. FLARToolKit was developed in Japan, and English documentation is scarce. I had the most trouble importing 3D models and getting them to display correctly. What ended up working for me was finding assets I liked in the Google 3D Warehouse, opening them in SketchUp, and centering them before exporting them as .kmz (Google Earth) files. I then placed them in the same folder as my ActionScript file and used the following code to import them. Note that this method is memory- and time-intensive. If you’re loading a bunch of big KMZ files, you’ll have to increase the time Flash will allow a script to run before timing out.

     //This goes at the top with the other import statements

     import org.papervision3d.scenes.Scene3D;
     import org.papervision3d.view.BasicView;
     import org.papervision3d.view.layer.util.ViewportLayerSortMode;
     import org.papervision3d.render.BasicRenderEngine;
     import org.papervision3d.objects.parsers.Sketchup;
     import org.papervision3d.objects.parsers.KMZ;
     import org.papervision3d.events.FileLoadEvent;

     //This goes in the class declaration
     private var _carpet:KMZ;

     //This goes in the onInit() function
     this._carpet = new KMZ();  //Initializes my carpet object
     this._carpet.load("carpet.kmz");  //Loads in the carpet from the KMZ file
     this._carpet.scale = 3;   //Scales the carpet
     this._carpet.rotationY = 90;
     this._carpet.z = 60;
     this._carpet.x = 0;
     this._carpet.y = -60;

     //Add the carpet to the list of objects that follow the marker
     this._baseNode.addChild(this._carpet);



Here are some of the most helpful sites for getting started:

Mental Blocks

I want to rework the basic block. I like proven interfaces, and I also like blocks. Imagine you’re holding one such block in your hand. It’s a two-and-a-half-inch cube made of translucent squishy plastic. And it’s lit up from the center by an RGB LED, which is currently glowing blue. You turn the cube onto another of its faces and the light coming from within changes to yellow. Turn it again and the light turns red. I mocked up the basic effect with tilt switches.
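Here is roughly what that tilt-switch mockup boils down to, written as an Arduino-style sketch. The pin numbers, the two-switch wiring (with external pull-down resistors), and the color mapping are my assumptions for illustration, not a record of the actual circuit.

     //Rough sketch of the tilt-switch mockup: two tilt switches select one of
     //four colors on a common-cathode RGB LED. Pins and wiring are assumed.

     const int TILT_A = 2;                      //tilt switch 1 (external pull-down)
     const int TILT_B = 4;                      //tilt switch 2 (external pull-down)
     const int RED = 9, GREEN = 10, BLUE = 11;  //PWM pins driving the RGB LED

     void setColor(int r, int g, int b) {
       analogWrite(RED, r);
       analogWrite(GREEN, g);
       analogWrite(BLUE, b);
     }

     void setup() {
       pinMode(TILT_A, INPUT);
       pinMode(TILT_B, INPUT);
     }

     void loop() {
       //Each face the cube rests on closes a different combination of switches
       bool a = digitalRead(TILT_A) == HIGH;
       bool b = digitalRead(TILT_B) == HIGH;
       if (a && b)      setColor(255, 0, 0);    //red
       else if (a)      setColor(0, 0, 255);    //blue
       else if (b)      setColor(255, 200, 0);  //yellow
       else             setColor(160, 0, 160);  //purple
       delay(50);                               //crude debounce
     }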

I also cast a prototype block in silicone. It looks great.


Colors are fun, but what I’m really interested in is how several of these blocks interact, and even more importantly, how kids perceive and interpret that interaction. Little kids don’t understand things logically; they understand them intuitively, and I want to create a toy that lets them imagine and figure out relationships that would otherwise be too abstract for them to grasp. Say you’re still holding your blue block and I have one in my hand that’s red. We bring them to within a certain distance of each other (ideally 1-2 feet) and suddenly they both turn the same shade of purple (or they both start to blink, or one becomes the color of the other, or they switch colors…). The point is, when the blocks are within a certain distance, they affect each other.

Now imagine that you’re playing with ten such blocks.

Obviously, you can build towers with them, throw them around, and do all the other things you’d normally do with blocks. But you can also hide your blue block somewhere and have me walk around with my red block till it comes within range of yours and something happens to it. You can also slowly move the blocks to their awareness thresholds—a little tap one way and they’ll change color, a tap the other way and they’ll change back. I’m thinking of giving different blocks different personalities—one freezes all the other blocks, one changes them all to its color, one makes them flash…
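To make those “personalities” concrete, here is a tiny C++ sketch of the kind of rule table I have in mind. The enum names, the rules, and the mixing behavior are invented for illustration; none of this is actual firmware.

     //A toy model of block "personalities": when another block comes within
     //range, my next state depends on its personality and color.

     enum Color { RED, YELLOW, BLUE, PURPLE };
     enum Personality { MIXER, FREEZER, PAINTER, STROBE };

     struct Block {
       Color color;
       Personality personality;
       bool frozen;
       bool blinking;
     };

     //Called whenever a neighbor is detected within the awareness threshold
     void onNeighbor(Block& me, const Block& other) {
       if (me.frozen) return;                          //a frozen block ignores everyone
       switch (other.personality) {
         case FREEZER: me.frozen = true; break;        //locks my current color
         case PAINTER: me.color = other.color; break;  //paints its neighbors its color
         case STROBE:  me.blinking = true; break;      //makes its neighbors flash
         case MIXER:                                   //red + blue make purple
           if ((me.color == RED && other.color == BLUE) ||
               (me.color == BLUE && other.color == RED))
             me.color = PURPLE;
           break;
       }
     }

     int main() {
       Block mine  = {BLUE, MIXER,   false, false};
       Block yours = {RED,  FREEZER, false, false};
       onNeighbor(mine, yours);   //mine is now frozen blue
       onNeighbor(yours, mine);   //yours turns purple (blue + red)
     }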

Anyway, it’s not hard to imagine ways of playing with them; what’s hard is figuring out how to build them. I’ve enlisted Rob Carlsen as a partner in this undertaking because he’s good at lots of stuff I’m not good at and shares with me the belief that this is a cool idea and that there must be a simple and relatively dumb way of getting these blocks to know about each other and converse. It’s just that neither of us has figured it out yet.

Right now the front-runner is infrared. Rob has hacked up an emitter/decoder pair that uses the Sony TV remote protocol, which he found on a site about TV-B-Gone. Apparently the blocks are talking to each other, but there are some issues with interference between the PWM and the ATmega’s clocks when coding/decoding. Tom Igoe suggested we use a 555 timer, which we’re exploring now.
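For reference, here is a minimal Arduino-style sketch of the kind of Sony-protocol (SIRC) transmitter this approach is built around: a 40 kHz carrier, a 2.4 ms header, then twelve bits sent LSB-first as 1.2 ms (“1”) or 0.6 ms (“0”) bursts separated by 0.6 ms gaps. The pin number, the bit-banged carrier timing, and the example code value are my assumptions, not Rob’s actual circuit or code.

     //Minimal SIRC-style IR transmitter sketch. IR LED (through a resistor) on
     //pin 3; pin choice, timing constants, and the code value are assumptions.

     const int IR_PIN = 3;

     //Emit a roughly 40 kHz carrier for the given number of microseconds
     void carrier(unsigned long us) {
       unsigned long end = micros() + us;
       while (micros() < end) {
         digitalWrite(IR_PIN, HIGH);
         delayMicroseconds(10);   //about half of a 25 us period; tune per board,
         digitalWrite(IR_PIN, LOW);
         delayMicroseconds(10);   //since digitalWrite itself adds a few microseconds
       }
     }

     //Send one 12-bit SIRC frame: 2.4 ms header, then bits LSB-first
     //(1.2 ms burst = 1, 0.6 ms burst = 0), each followed by a 0.6 ms gap
     void sendSIRC(unsigned int code) {
       carrier(2400);
       delayMicroseconds(600);
       for (int i = 0; i < 12; i++) {
         carrier(((code >> i) & 1) ? 1200 : 600);
         delayMicroseconds(600);
       }
     }

     void setup() {
       pinMode(IR_PIN, OUTPUT);
     }

     void loop() {
       sendSIRC(0x010);   //arbitrary "I am the red block" ID (made up)
       delay(45);         //SIRC frames are normally repeated every ~45 ms
     }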

Historical Note: I was set on RFID, but after reading a couple of books, looking at costs and effective ranges, and running into lingering questions about how the blocks would communicate their particular states to each other, I’m no longer convinced. We’ve also talked about magnets, XBees, photosensors, and the possibility of each block transmitting sound, actually “talking” to other blocks, but that raises questions of annoyance (lots of constant high-pitched beeping) and possible obstructions. Silicone is a great sound insulator. It would also mean writing our own communication protocol, unless we used DTMF or a modem protocol, but again, annoying. Unless we used ultrasonic sound (and then dogs would hate us). We’ve even considered plain old radio. I don’t know much about it, but I know receivers and transmitters are cheap.
