Archive for the 'Hardware' Category

Everything I know about interaction design I learned by making a scratch-n-sniff television

My favorite thing about my Scratch-n-Sniff TV is the conversations it spawns. I showed it recently at Maker Faire NY, and as at previous showings at ITP and at Greylock Arts, reactions were divided. About 70% of people were totally incredulous until they tried it, and then were delighted and had to find out how it worked. Of the remaining 30%, half looked at it suspiciously and rebuffed invitations to try it and the other half tried to predict how it worked before using it and then complained that the smells weren’t “accurate.” All of these reactions reveal an underlying attitude towards technology and its possibilities: the first, marvel—the what will they think of next effect; the second, suspicion—this has got to be a trick; the third, which shares elements of the second, a need to establish that we control technology—not the other way around.

heroShot

Smell is subjective, ephemeral, and not binary. What smells like citrus to one person smells like air freshener to another, and smells can’t be turned on and off; they waft. Getting people to believe that their actions resulted in equal and opposite smell reactions therefore required some clever sleight of nose. First of all, I gave people clear visual cues: when you scratch a picture of chocolate, you’re much more likely to interpret the resulting smell as chocolate. I also made the screen respond to being scratched by fading, just as scratch-n-sniff stickers do after vigorous scratching. This tie-in to a direct physical analogue was key: people were much more likely to smell the screen where they’d scratched it, and the one-to-one correspondence between action and reaction primed them to smell. A couple of times I ran out of scents, and several people still swore they’d smelled scents that simply weren’t there!

HOW IT WORKS

puff

  1. I found that the transistor-based model of the Glade Flameless Candle automatic air freshener would fire once approximately every two seconds if powered for 500 milliseconds (as opposed to the earlier version that relies on a resonant circuit that requires ten seconds before firing), so I hooked up its battery terminals to an Arduino, and voila! Controllable atomization of non-oil based scents!

arduinoinplace

  2. Trying to create an effective scent disperser from scratch is madness. One of the benefits of piggybacking on all of Glade’s hard work is that it’s easy to fill the provided smell canisters with other scents. I got most of mine from the nice folks at Demeter.

scents

  3. I aligned the scent dispensers under a touchscreen that sends touch coordinates to the Arduino via a Processing sketch. Thanks to the hydrostatic properties of the fine particle mist, each puff flows up and across the screen, sticking to it until the scent evaporates a few seconds later.

screenanddispensers
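The timing rules above are the heart of the trick: each dispenser owns a region of the screen, fires for 500 milliseconds, and can’t fire again for about two seconds. The real build is an Arduino sketch fed coordinates by Processing; this is just a hedged Python sketch of that logic, with class and constant names of my own invention:

```python
PULSE_MS = 500        # power the atomizer's battery terminals for 500 ms
REFRACTORY_S = 2.0    # the Glade unit fires at most once every ~2 seconds

class Dispenser:
    """One scent canister covering a horizontal band of the touchscreen."""

    def __init__(self, x_min, x_max):
        self.x_min, self.x_max = x_min, x_max   # screen region for this scent
        self.last_fired = -REFRACTORY_S         # allow an immediate first fire

    def covers(self, x):
        return self.x_min <= x < self.x_max

    def maybe_fire(self, now):
        """Fire only if the refractory period has elapsed; True if fired.
        In hardware this is where the control pin goes HIGH for PULSE_MS."""
        if now - self.last_fired >= REFRACTORY_S:
            self.last_fired = now
            return True
        return False

def handle_touch(dispensers, x, now):
    """Route a scratch at screen coordinate x to the dispenser under it."""
    for d in dispensers:
        if d.covers(x):
            return d.maybe_fire(now)
    return False
```

Scratching the same picture twice in quick succession simply does nothing, which is exactly what a depleted scratch-n-sniff sticker would do.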

Meet Eliza, the Flashiest Phone Bot Around!

Eliza sits at her desk in her office. She completes ordinary office tasks—she checks her email, she drinks her coffee, she gets up to go photocopy something or talk to a colleague, and once in a while she checks out the New York Times. Little does she know, she’s being livestreamed to the whole world over the web. If someone calls, she picks up. Sometimes she recognizes the caller, sometimes she does not, and sometimes the connection is so bad that she hangs up and calls back.

Eliza lives on a screen in an eddy of a high-traffic area, say an out-of-the-way elevator lobby in a busy building. A user sees her, and after a couple of minutes his curiosity gets the best of him: he succumbs to the flashing invitation and calls. To his surprise, after a couple of rings Eliza picks up. Phone conversations are ritualized in the first place, and the added awkwardness of voyeurism and of conversing with a stranger creates the ideal situation for Eliza’s black-belt phone jujitsu, which, with minimal effort, wrests control of the conversation from her interlocutor. It’s a bit like a good dancer foxtrotting and waltzing an overwhelmed novice around the floor.

The prototype is rough, but it works. Because of Flash’s arcane and draconian cross-domain security measures, though, I can only run it locally through the Flash IDE or stream from my machine using a personal broadcasting service like ustream or livestream (for it to work properly on the web, I’d have to host all the components I enumerate below on a single box, something I have neither the hardware nor the inclination to do). The main problem is that I’m making XML socket connections from Flash; if I used a PHP intermediary, I could probably get it working, but again, the whole inclination thing is missing, and the thing is already mind-bogglingly complicated. Maybe at some point in the future. The following video demonstration will have to do in the meantime.

SO HOW DOES IT WORK?

Warning: this is not for the faint of heart.

Eliza has a ton of moving parts:

  1. The Asterisk script: A simple program that answers phone calls and hands them to a PHP script, which connects via socket to the main SWF.
  2. Various PHP scripts: One to handle connections from Asterisk, one to reset connections from Asterisk after a call ends, and one to initiate callbacks when required.
  3. A simple Java socket server: Adapted from Dan Shiffman’s example, this program runs in the background on the Asterisk server, waiting for connections (phone calls). When a call comes in, it connects it and broadcasts call events (new call, hangup, button press, etc) to the PHP scripts and the main SWF and allows them to talk to each other.
  4. The main SWF: This is the brains of the operation. It loads the movies of Eliza and controls the logic of their looping as well as the logic of the audio (via socket connection back to PHP and then to Asterisk via AGI).
  5. The looping movie files (not completely smooth in this prototype, notice the moving phone and the changing light conditions!): These live in the same directory as the main SWF, which streams them as needed (for a web deployment, they’d probably have to be pre-loaded and played back).
  6. The sound files: These live on the Asterisk box (completely separate from the movies) and are played back over the phone, not the web.
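The glue in the list above is the socket server: it sits between Asterisk’s PHP scripts and the main SWF and simply rebroadcasts call events so the two sides can talk. The real version is Java (adapted from Dan Shiffman’s example); here is a minimal Python sketch of the relay pattern, with an in-memory inbox standing in for real socket connections and all names my own:

```python
class EventRelay:
    """Rebroadcasts events from any connected client to all the others,
    the way the background server bridges Asterisk/PHP and the SWF."""

    def __init__(self):
        self.clients = {}          # client name -> list of received messages

    def connect(self, name):
        """Register a client (a phone-call handler or the main SWF)."""
        self.clients[name] = []

    def broadcast(self, sender, event):
        """Forward a call event (new call, hangup, button press, ...) to
        every connected client except the one that sent it."""
        for name, inbox in self.clients.items():
            if name != sender:
                inbox.append(event)
```

When a call comes in, the PHP side broadcasts a "new call" event, the SWF swaps in the pick-up-the-phone movie, and every subsequent button press or hangup flows through the same channel.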

NEXT STEPS
UPDATE: I’m presenting Eliza at Astricon in DC in October, so I should have some interesting observations to report soon.

There are several things I’d really like to do. First, I’d like to get this working somewhere where I can observe lots of unsuspecting folks interacting with Eliza. I never really got to watch someone who didn’t know the backstory call in, partly because I was exhausted from thesis when I had the chance to show it and partly because there were lingering bugs I hadn’t yet located that occasionally caused the whole thing to stop working; there are so many things on so many separate machines that can go wrong, it took quite a while to troubleshoot. A larger sample of reactions would allow me to rework the conversations so that they’re more disorientingly convincing: better pause timing, more realistic intonation, and anticipation of repeat callers’ stratagems for testing whether Eliza is real. I could then reshoot the video so it is completely seamless. That would require monitors, good lighting, laser levels, an OCD continuity editor, and several days of shooting.

If you know of an easy way to overcome the cross-domain headaches, leave me a comment! If you want to fund such an undertaking, please do get in touch! Otherwise, enjoy the video above.

Black Hole Box

Black Hole Box

I was supposed to create something that responded to my relationship with energy. I use energy, selfishly. Like any other creature, I think about my needs, not about how those needs impact the larger systems of which I’m a part. I wanted to make an unnatural, inorganic living thing, an exceptionalist machine.

Black Hole Box is a black box connected to the internet that uses up its batteries by continually checking how much charge they have left. When the charge drops below a certain threshold, the onboard microprocessor orders more batteries online. The batteries, which it orders from a local supplier, arrive within four days, and the Black Hole Box’s owner must change them. The system’s survival depends on money it doesn’t earn, energy it doesn’t produce, and processes it can’t control.
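The whole organism fits in one self-defeating loop: each measurement drains the very batteries being measured, and the only response to decline is to spend someone else’s money. A hedged Python sketch of the microprocessor’s logic, where the threshold, the product name, and the function names are all my assumptions:

```python
LOW_BATTERY_V = 1.1   # assumed per-cell voltage below which it reorders

def check_and_order(read_voltage, place_order, ordered_already):
    """One tick of the Black Hole Box's life. read_voltage() samples the
    batteries (itself consuming a little charge); place_order() submits
    an online order to the local supplier. Returns whether an order is
    now pending, so the box doesn't order twice for one battery set."""
    v = read_voltage()
    if v < LOW_BATTERY_V and not ordered_already:
        place_order("AA battery 4-pack")   # arrives within ~4 days
        return True
    return ordered_already
```

The flag resets only when the owner changes the batteries, completing a metabolism the box neither funds nor controls.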

Black Hole Box

L.A.M.P.

Web app screen shot

I like word games, I won’t lie, so I was pretty chuffed when I came up with the idea of creating a lamp that runs LAMP (Linux, Apache, MySQL, and PHP—one of the predominant software stacks behind the scenes on the web, prized both for its robustness and for its appealing freeness).

Puns aside, the idea is simple: I wanted to give people (especially strangers) remote control over a physical object in my house. My initial goal was to implement a RESTful interface for as many different channels of user interaction as possible, so I built a PHP script that accepts input from every source I could think of and a single web front-end that displays the results.

--INSTRUCTIONS--

Open http://chinaalbino.com/UN/light.php

1) Scenario: You're in my house
     Use the light switch!

2) Scenario: You're on the internet
    -Run the Processing sketch that
      allows you to switch the light.
    -Use the web interface directly.
    -Send dengdengalex[at]gmail an
     email with 0, 1, or 2 in the body.
          0 turns the light off,
          1 turns the light on,
          2 tells you the light's status

3) Scenario: You're on a mobile device
     -Send an email as above.
     -Text "alexlight" followed by a:
          0 to turn the light off,
          1 to turn the light on,
          2 to get the light's status

Following this fabulous tutorial, I built an Arduino-controllable power outlet. Though I chose a lamp, the system can accommodate anything that can be controlled either with an on/off switch or a relay.

There are three PHP scripts. The first is polled every couple of seconds by the Arduino and records the state of the switch connected to the outlet. The second changes that state when it receives input (via web, text message, Processing, or email) and displays the state information on a web page. The final script runs in the background on the server, polling for new email.
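Whatever the channel, every input ultimately reduces to the same three commands from the instructions above: 0 for off, 1 for on, 2 for status. The real handlers are PHP; this Python sketch of the shared command logic uses names of my own choosing:

```python
def handle_command(body, state):
    """Interpret a message body (from web, SMS, email, or Processing)
    against the light's state dict, e.g. {'on': False}. Returns the
    reply the user would get back."""
    cmd = body.strip()
    if cmd == "0":
        state["on"] = False          # turn the light off
        return "light off"
    if cmd == "1":
        state["on"] = True           # turn the light on
        return "light on"
    if cmd == "2":                   # status query, no state change
        return "light is " + ("on" if state["on"] else "off")
    return "unrecognized command"
```

Funneling all the channels into one handler is what made adding each new input source (email, then SMS) cheap: only the transport changes.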

There are a couple of little fixes that I probably won’t get around to but will mention so I don’t forget them, the most significant being the meta refresh method I’m currently using to check the light’s status on the web page. I know I could call a PHP script in the background using AJAX; I just haven’t figured out how yet, so in the interim, I’m reloading the whole page every two seconds. Because the page is so small, the user probably won’t notice, at least not until his browser crashes.

The other major problem is email: there’s a bit of a lag. If it weren’t for my sucky hosting company, I’d be able to run a cron job on the server to check for new email every five seconds or so, but my host limits me to running jobs every fifteen minutes or on a specific minute of each hour. I tried several workarounds: putting fifteen minutes’ worth of looping in the script so that it runs the entire time before it is next called (this crashed the server), and adding the same job 60 times, one for each minute (the server ended up synchronizing the jobs and calling about a quarter of them every fifteen minutes).
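The first workaround amounts to a bounded polling loop, and with throttling and a hard runtime cap it might have been kinder to the server than my attempt was. A Python sketch of the pattern (the real scripts are PHP; the function names and intervals here are assumptions):

```python
import time

def poll_for_mail(check_mail, interval_s=5, run_for_s=14 * 60,
                  clock=time.monotonic, sleep=time.sleep):
    """Call check_mail() every interval_s seconds, then exit before the
    next fifteen-minute cron invocation would start (hence the 14-minute
    cap), so only one copy is ever running. Returns the number of
    checks performed. clock/sleep are injectable for testing."""
    start = clock()
    checks = 0
    while clock() - start < run_for_s:
        check_mail()
        checks += 1
        sleep(interval_s)
    return checks
```

Exiting a minute early leaves slack so a slow mail check never overlaps the next cron run.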

DoorSob

door

DoorSob is a door that doesn’t want you to leave a room. A Processing sketch plays back, on a screen, a human face’s progression from ecstatic happiness to utter misery; the playback is controlled by a potentiometer turned by a doorknob. Depending on the state of the face (and, by extension, the potentiometer), a voice repeats either “yes” or “no” more or less emphatically. The volume of the voice and the brightness of the face are governed by the amount of ambient light falling on a photoresistor. My intention is to install the photo sensor next to the doorknob so that when someone puts a hand on the knob, it blocks the light and brightens the screen, making the video visible and the sound audible. The pot is coupled to the knob, so as a person starts to turn the knob to open the door, the face reacts, getting more and more distraught the closer the person comes to opening the door (and leaving the room).
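The two sensor mappings above are simple linear ones: knob angle selects a frame in the happy-to-miserable sequence, and less light means a brighter, louder face. The real piece is a Processing sketch reading an Arduino; this Python sketch of the mappings uses ranges and frame counts that are assumptions, not measurements from the actual build:

```python
def knob_to_frame(pot, pot_max=1023, n_frames=120):
    """Map the doorknob's potentiometer reading (assumed 0..1023) to a
    frame index in the happiness-to-misery video sequence."""
    pot = max(0, min(pot, pot_max))            # clamp noisy readings
    return round(pot / pot_max * (n_frames - 1))

def light_to_intensity(photo, dark=100, bright=900):
    """Map the photoresistor reading to screen brightness / voice volume
    in 0.0..1.0: a hand over the knob darkens the sensor, which should
    make the face brighter and the voice louder."""
    photo = max(dark, min(photo, bright))
    return (bright - photo) / (bright - dark)
```

Both mappings clamp their inputs, since analog readings from a pot and a photoresistor jitter at the extremes.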

A week reading about the location of consciousness (apparently behind the eyes, according to most people, with a minority locating it in their upper chest) and about our dubious awareness of our own perceptual and cognitive shortcomings has left me scratching my head. I haven’t done huge amounts of reading in the cognitive sciences, but I’ve done enough to feel that Julian Jaynes’s arguments against the necessity of consciousness in “The Origin of Consciousness in the Breakdown of the Bicameral Mind” and Dan Ariely’s TED talk about the limits of free will are a series of cleverly erected straw men. I’ve never heard anyone claim that consciousness is as ubiquitous and constant as implied in Jaynes’s refutation, nor do I buy Ariely’s claim that people’s laziness and susceptibility to influence constitute proof of sensory and cognitive deficiencies. The self-awareness and introspection that these men refer to as consciousness seem to me a response to complicated social structures, essential not to the survival of the individual but to the survival of the group. It’s no wonder, then, that it tends to lag a little when considered in conjunction with the senses.

It was while thinking about the conniving, scheming, backroom dealing, weighing, and planning to which consciousness presumably emerged as a response that I started thinking about all the unconscious social and physical cues that US Weekly body language experts and NLP practitioners are constantly harping on about. We like it when people laugh at our jokes and praise us; for the most part, we don’t like making people unhappy or getting yelled at. How would we feel if everyday objects called our attention to the actions we perform unconsciously?