Archive for the 'Hybrid' Category

AL-Gorithm at Maker Faire

Editor's Choice 2011

I just finished up a fun but exhausting weekend at the Bay Area Maker Faire, where after the hassle and technical headaches associated with showing Scratch-n-Sniff last fall, I decided to show AL-Gorithm, my favorite electricity-less project (and an Editor’s Choice winner!). For this installment, I swapped the metal backing for wood, placed a tap light behind the text for added drama, and replaced the iron dowels I hung the piece on last time with an actual file cabinet armature.

As always, I had a number of really good conversations about the abstractions that underlie computing, including one specifically about analog computers with Dwight Elvey of the Computer History Museum, who actually had an analog computer set up right across from my table. One woman compared my hanging pages to a slide rule, which I really liked, while it reminded several people of the skewer-like physical sorting mechanism from the days of punch cards.

Algorithm Close up

The zen drudgery of mechanical repetition—as I learned while creating this piece—often leads to insight, and the repetition of presenting this project to hundreds of people, whose technical understanding ranged from “Unix? I thought they weren’t allowed to castrate boys any longer” to the knowing chuckle of “Yes, I’m familiar with grep,” was no exception.

I tried a number of narratives to explain why I thought it worthwhile to painstakingly cut out hundreds of paper strips. Experiencing text as a computer does resonated only with coders, who were definitely in the minority of my visitors. Forgetting how to read, while zingy, added too little substance. In the end, the explanation that clicked for both the technically oriented and the overwhelmed families that composed the majority of my audience was the power tools analogy: if all you’ve ever used is a power saw, you’re never mindful of the difficulties and characteristics of sawing, nor can you fully appreciate the materials you’re cutting. Likewise, if you only ever work with digital text, you can’t fully appreciate its texture or what it means to alter, filter, cut, or rearrange it.

Meet Eliza, the Flashiest Phone Bot Around!

Eliza sits at her desk in her office. She completes ordinary office tasks—she checks her email, she drinks her coffee, she gets up to go photocopy something or talk to a colleague, and once in a while she checks out the New York Times. Little does she know, she’s being livestreamed to the whole world over the web. If someone calls, she picks up. Sometimes she recognizes the caller, sometimes she does not, and sometimes the connection is so bad that she hangs up and calls back.

Eliza lives on a screen in an eddy of a high-traffic area, say an out-of-the-way elevator lobby in a busy building. A user sees her, and after a couple of minutes his curiosity gets the best of him: he succumbs to the flashing invitation and calls. To his surprise, after a couple of rings Eliza picks up. Phone conversations are ritualized to begin with, and the added awkwardness of voyeurism and of conversing with a stranger creates the ideal situation for Eliza’s black-belt phone jujitsu, which, with minimal effort, wrests control of the conversation from her interlocutor. It’s a bit like a good dancer foxtrotting and waltzing an overwhelmed novice around the floor.

The prototype is rough, but it works. Because of Flash’s arcane and draconian cross-domain security measures, though, I can only run it locally through the Flash IDE or stream from my machine using a personal broadcasting service like Ustream or Livestream (for it to work properly on the web, I’d have to host all the components I enumerate below on a single box, something I have neither the hardware nor the inclination to do). The main problem is that I’m making XML socket connections from Flash; if I used a PHP intermediary, I could probably get it working, but again, the inclination is missing, and the whole contraption is already mindbogglingly complicated. Maybe at some point in the future. The following video demonstration will have to do in the meantime.
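
For the intrepid: the canonical workaround for the XMLSocket restriction is to serve a Flash socket policy file from port 843 of whatever box the SWF connects to. Below is a minimal sketch of what that could look like in PHP; I haven't actually wired it into Eliza's setup, and the 9001 in the policy is just a stand-in for wherever the socket server really listens.

<?php
// Minimal Flash socket policy server sketch. Assumptions: it runs as a CLI
// script on the same host as the socket server, and 9001 is a placeholder
// for the real port. Binding to port 843 normally requires root.
$policy = '<?xml version="1.0"?>'
        . '<cross-domain-policy>'
        . '<allow-access-from domain="*" to-ports="9001"/>'
        . '</cross-domain-policy>' . "\0";

$server = stream_socket_server('tcp://0.0.0.0:843', $errno, $errstr);
if ($server === false) {
    die("Could not bind to port 843: $errstr ($errno)\n");
}

while ($conn = stream_socket_accept($server, -1)) {
    // Flash sends "<policy-file-request/>\0"; the contents don't matter here.
    fread($conn, 1024);
    fwrite($conn, $policy);
    fclose($conn);
}

Flash asks for this policy before it will open the data connection, so something like this would have to run alongside the Java socket server described below.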

SO HOW DOES IT WORK?

Warning: this is not for the faint of heart.

Eliza has a ton of moving parts:

  1. The Asterisk script: A simple program that answers phone calls and hands them to a PHP script, which connects via socket to the main SWF (see the AGI sketch just after this list).
  2. Various PHP scripts: One to handle connections from Asterisk, one to reset connections from Asterisk after a call ends, and one to initiate callbacks when required.
  3. A simple Java socket server: Adapted from Dan Shiffman’s example, this program runs in the background on the Asterisk server, waiting for connections (phone calls). When a call comes in, it accepts the connection and broadcasts call events (new call, hangup, button press, etc.) to the PHP scripts and the main SWF, letting them talk to each other.
  4. The main SWF: This is the brains of the operation. It loads the movies of Eliza and controls the logic of their looping as well as the logic of the audio (via socket connection back to PHP and then to Asterisk via AGI).
  5. The looping movie files (not completely smooth in this prototype, notice the moving phone and the changing light conditions!): These live in the same directory as the main SWF, which streams them as needed (for a web deployment, they’d probably have to be pre-loaded and played back).
  6. The sound files: These live on the Asterisk box (completely separate from the movies) and are played back over the phone, not the web.
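
To make the handoff in steps 1 through 3 a bit more concrete, here's a bare-bones sketch of what the AGI side could look like in PHP. The host, port, event format, and sound file name are placeholders of mine, not what Eliza actually uses:

#!/usr/bin/php -q
<?php
// Bare-bones AGI handler sketch: host, port, event format, and sound file
// name are placeholders, not the production script.

// Asterisk hands us the AGI environment on stdin, one "key: value" per
// line, terminated by a blank line.
$agi = array();
while (($line = trim(fgets(STDIN))) !== '') {
    $parts = explode(':', $line, 2);
    if (count($parts) === 2) {
        $agi[trim($parts[0])] = trim($parts[1]);
    }
}

// Tell the socket server (and, through it, the main SWF) that a call arrived.
$callerid = isset($agi['agi_callerid']) ? $agi['agi_callerid'] : 'unknown';
$sock = @fsockopen('127.0.0.1', 9001, $errno, $errstr, 5);
if ($sock) {
    fwrite($sock, "NEWCALL $callerid\n");
    fclose($sock);
}

// AGI commands go out on stdout; Asterisk's "200 result=..." replies come
// back on stdin. Answer the call and play a (hypothetical) greeting file.
echo "ANSWER\n";
fgets(STDIN);
echo "STREAM FILE eliza-greeting \"\"\n";
fgets(STDIN);

Hangups and button presses would flow the same way: Asterisk events in one side, socket messages out the other.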

NEXT STEPS
UPDATE: I’m presenting Eliza at Astricon in DC in October, so I should have some interesting observations to report soon. There are several things I’d really like to do. First, I’d like to get this working somewhere where I can observe lots of unsuspecting folks interacting with Eliza. I never really got to watch someone who didn’t know the backstory call in, partly because I was exhausted from thesis when I had the chance to show it and partly because lingering bugs I hadn’t yet located occasionally caused the whole thing to stop working—there are so many things on so many separate machines that can go wrong that it took quite a while to troubleshoot. A larger sample of reactions would let me rework the conversations so that they’re more disorientingly convincing—better pause timing, more realistic intonation, and responses that anticipate repeat callers’ stratagems for testing whether Eliza is real. I could then reshoot the video so it is completely seamless. That would require monitors, good lighting, laser levels, an OCD continuity editor, and several days of shooting.

If you know of an easy way to overcome the cross-domain headaches, leave me a comment! If you want to fund such an undertaking, please do get in touch! Otherwise, enjoy the video above.

Black Hole Box

Black Hole Box

I was supposed to create something that responded to my relationship with energy. I use energy, selfishly. Like any other creature, I think about my needs, not about how those needs impact the larger systems of which I’m a part. I wanted to make an unnatural, inorganic living thing, an exceptionalist machine.

Black Hole Box is a black box connected to the internet that uses up its batteries by continually checking how much charge they have left. When the charge drops below a certain threshold, the onboard microprocessor orders more batteries online from a local supplier; they arrive within four days, and the Black Hole Box’s owner must change them. The system’s survival depends on money that it doesn’t earn, energy it doesn’t produce, and processes it can’t control.


L.A.M.P.

Web app screen shot

I like word games, I won’t lie, so I was pretty chuffed when I came up with the idea of creating a lamp that runs LAMP (Linux, Apache, MySQL, and PHP—one of the predominant acronyms behind the scenes on the web, both because of its robustness and its appealing freeness).

Puns aside, the idea is simple: I wanted to give people (especially strangers) remote control over a physical object in my house. My initial goal was to implement a RESTful interface for as many different channels of user interaction as possible, and to that end, I built a PHP script that accepts input from as many sources as I could think of and a single web front-end that displays the results.

--INSTRUCTIONS--

Open http://chinaalbino.com/UN/light.php

1) Scenario: You're in my house
     Use the light switch!

2) Scenario: You're on the internet
    -Run the Processing sketch that allows you to switch the light.
    -Use the web interface directly.
    -Send dengdengalex[at]gmail an email with 0, 1, or 2 in the body.
          0 turns the light off,
          1 turns the light on,
          2 tells you the light's status

3) Scenario: You're on a mobile device
     -Send an email as above.
     -Text "alexlight" followed by a:
          0 to turn the light off,
          1 to turn the light on,
          2 to get the light's status

Following this fabulous tutorial, I built an Arduino-controllable power outlet. Though I chose a lamp, the system can accommodate anything that can be controlled either with an on/off switch or a relay.

There is a PHP script, triggered every couple of seconds by the Arduino, that records the state of the switch connected to the outlet; another script changes that state when it receives input (via web, text message, Processing, or email) and displays the state information on a web page. A final script runs in the background on the server, polling for new email.
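
To give a flavor of how little state is actually involved, here's a rough sketch that collapses the recording and changing scripts into one; the file path and the cmd parameter are placeholders of mine, not what the live scripts use:

<?php
// Rough sketch of the shared light state (path and parameter are placeholders).
// In this sketch, the Arduino would fetch this URL every couple of seconds
// and flip the relay to match; the web, email, and SMS front ends write to it.
$stateFile = '/tmp/light_state.txt';

// Read the current state, defaulting to off if nothing is stored yet.
$state = file_exists($stateFile) ? trim(file_get_contents($stateFile)) : '0';

if (isset($_GET['cmd'])) {
    $cmd = $_GET['cmd'];
    if ($cmd === '0' || $cmd === '1') {
        // 0 = off, 1 = on: record the requested state.
        file_put_contents($stateFile, $cmd);
        $state = $cmd;
    }
    // cmd=2 (or anything else) just falls through to the status report.
}

// Plain-text status: easy for the Arduino to parse and for the page to show.
echo ($state === '1') ? 'The light is on' : 'The light is off';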

There are a couple of little fixes that I probably won’t get around to but will mention so I don’t forget them, the most significant being the meta refresh method I’m currently using to check for the light’s status on the web page. I know I could call a PHP script in the background using AJAX; I just haven’t figured out how yet, so in the interim I’m reloading the whole page every two seconds. Because it’s so small, the user probably won’t notice, at least not until his browser crashes.

The other major problem is email. There’s a bit of a lag. If it weren’t for my sucky hosting company, I’d be able to run a cron job on the server to check for new email every five seconds or so, but my host limits me to running jobs every fifteen minutes or on a specific minute of each hour. I tried several workarounds (putting fifteen minutes’ worth of looping in the script so that it runs the entire time before it is next called => crashed the server; adding the same job 60 times, one for each minute => the server ends up synchronizing the jobs and calling about a quarter of them every fifteen minutes).
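
For what it’s worth, the polling pass itself is trivial; the hard part is getting the host to run it often enough. A sketch of a single pass (mailbox, credentials, and the cmd parameter are all made up, and it needs PHP’s IMAP extension):

<?php
// One polling pass: find unread mail, pull the first 0/1/2 out of each body,
// and hand it to the light's state script. Credentials and the cmd parameter
// are placeholders.
$imap = imap_open('{imap.gmail.com:993/imap/ssl}INBOX',
                  'dengdengalex@gmail.com', 'not-my-real-password');
if ($imap === false) {
    exit("Could not open mailbox\n");
}

$unread = imap_search($imap, 'UNSEEN');
if ($unread !== false) {
    foreach ($unread as $msgno) {
        $body = imap_body($imap, $msgno); // fetching the body marks it as read
        if (preg_match('/[012]/', $body, $match)) {
            // Forward the command to the state script.
            file_get_contents('http://chinaalbino.com/UN/light.php?cmd=' . $match[0]);
        }
    }
}

imap_close($imap);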

DoorSob

DoorSob is a door that doesn’t want you to leave a room. A Processing sketch plays back, on a screen, a human face’s progression from ecstatic happiness to utter misery; the playback is controlled by a potentiometer turned by a doorknob. Depending on the state of the face (and, by extension, the potentiometer), a voice repeats either “yes” or “no” more or less emphatically. The volume of the voice and the brightness of the face are governed by the amount of ambient light falling on a photoresistor. My intention is to install the photo sensor next to the doorknob so that when someone puts a hand on the knob, it blocks the light and brightens the screen, making the video visible and the sound audible. The pot is moved by the knob, so as a person starts to turn the knob to open the door, the face reacts, getting more and more distraught the closer the person comes to opening the door (and leaving the room).

A week reading about the location of consciousness (apparently behind the eyes, according to most people, with a minority locating it in their upper chest) and about our dubious awareness of our own perceptual and cognitive shortcomings has left me scratching my head. I haven’t done huge amounts of reading in the cognitive sciences, but I’ve done enough to feel that Julian Jaynes’s arguments against the necessity of consciousness in “The Origin of Consciousness in the Breakdown of the Bicameral Mind” and Dan Ariely’s TED talk about the limits of free will are a series of cleverly erected straw men. I’ve never heard anyone claim that consciousness is as ubiquitous and constant as implied in Jaynes’s refutation, nor do I buy Ariely’s claims that people’s laziness and susceptibility to influence constitute proof of sensory and cognitive deficiencies. The self-awareness and introspection that these men refer to as consciousness seem to me a response to complicated social structures, essential not to the survival of the individual but to the survival of the group. It’s no wonder, then, that it tends to lag a little when considered in conjunction with the senses.

And it was while thinking about the conniving, scheming, backroom dealing, weighing, and planning to which consciousness presumably emerged as a response that I started thinking about all the unconscious social and physical cues that US Weekly body-language experts and NLP practitioners are constantly harping on. We like it when people laugh at our jokes and praise us; we don’t, for the most part, like making people unhappy or getting yelled at. How would we feel if everyday objects called our attention to the actions we perform unconsciously?