Archive for October, 2009

Celebrity Site-ing

For my Dynamic Web midterm, I built a database to log and map celebrity sightings. I implemented sorting and pagination (and now understand why there are frameworks built for dealing with such things), session variables, user accounts, and tracking by hand. I also used Google’s Static Maps API, which requires no JavaScript; you just pass all your parameters in the URL.
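Since the whole request is GET parameters, a map with a marker is a single URL, along these lines (the values here are illustrative, not the actual ones from the site):

    http://maps.googleapis.com/maps/api/staticmap?center=40.7306,-73.9866&zoom=13&size=400x400&markers=color:red|40.7306,-73.9866&sensor=false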

I apologize in advance to any celebrities who might stumble on this site when self-Googling: I saw all of you; the dates, however, might be a tad shaky.

THE TABLES

Overall Structure:

[schema screenshot]

Sightings:

[table screenshots]

Linking Tables:

[table screenshot]

Users:

[table screenshot]

EyeR

I’m working with a small side-looking infrared emitter/receiver pair that I got from Sparkfun to see if I can detect blinks. My theory is that the surfaces of my cornea and of my eyelid will reflect (detectably) different amounts of IR light, thus allowing me to sense blinks. I’m getting awesome values from the pair using a 10k resistor: readings from 25 to 1000, almost the full 0-1023 range.
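For reference, reading the pair is just the receiver sitting in a voltage divider with the 10k resistor feeding an analog pin; a minimal Arduino loop (the pin number is a placeholder):

    // Minimal read of the IR receiver through the 10k divider.
    // The analog pin number is a placeholder, not the actual wiring.
    const int irPin = 0;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      Serial.println(analogRead(irPin)); // roughly 25-1000 in this setup
      delay(10);
    }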

There’s not much reliable research online about the effects of long-term IR exposure. Some people say that because it doesn’t cause the iris to contract as bright visible light does, it will blind you if you look at it too long. Other people say that’s nonsense, claiming that the IR light in question would have to be much brighter than an LED to cause any damage. Still others (my favorite group) post frantically to medical forums after having spent hours staring at their remote controls while pushing the buttons (?!), suddenly panicked that they may have caused themselves irreparable damage.


I tested various brightnesses using a digital camera while varying the resistance, and I’m using a highly directional light that will be aimed at the side of my cornea rather than directly at my retina, so I’m not too worried.

To detect reflection, the emitter and receiver need to be mounted completely flush with the same surface and about 10mm apart. I get good readings when holding them up to my eye: a variation of between 50 and 100, good enough to work with.
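With a swing of 50 to 100 between eyelid and cornea, blink detection could be as simple as flagging any big jump away from a slowly tracked baseline. A sketch under those assumptions (the pin and the 40-count threshold are guesses):

    // Flag a blink whenever the reading jumps away from a running
    // baseline. Pin and threshold are guesses, not the real values.
    const int irPin = 0;
    const int blinkDelta = 40; // a guess, given the observed 50-100 swing

    float baseline = 0;

    void setup() {
      Serial.begin(9600);
      baseline = analogRead(irPin);
    }

    void loop() {
      int reading = analogRead(irPin);
      if (abs(reading - baseline) > blinkDelta) {
        Serial.println("blink");
      }
      // Track drift slowly so the baseline follows eyes-open reflectance.
      baseline = 0.99 * baseline + 0.01 * reading;
      delay(5);
    }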

One hour later…

Mounted on the glasses, it doesn’t really work. There’s too much infrared variation in ambient light. I may need to use a camera. And my eye feels like I’ve been staring into the sun for too long. It might be fine to stare at an IR LED from a distance, but right up against your eye, it starts to feel not so good after very little time. I am nixing this particular plan.

New York Single-Handed

I noticed last winter that New Yorkers lose an inordinate number of gloves. I started counting them on my way to and from school every day, lying in piles of slush or thoughtlessly trampled underboot, and gave up sometime after about thirty. This winter I decided to do something about it.

NYC Glove Orphanage

The New York City Glove Orphanage is a collection of gloves found on the streets of New York. The site was an exercise in PHP and MySQL backend building for a class assignment, but the idea lives on. Initially the idea was to let people create dating-site-style profiles for their remaining gloves (“looking for my other half”) or to use the site as a clearinghouse for mismatched pairs, but both of those require getting other people to come to the site. Too much work. In the spring, when the gloves come off, I hope to have a boxful that I can make something with: a tree of blooming fingers and palms I can “plant” somewhere along my daily route so that people can be reunited with their long-lost gloves.

I have a couple of rules. I don’t pick up latex gloves or any other disposable glove (I count work gloves, of which I have one representative sample, as disposables because they are interchangeable, ubiquitous, and more often than not filthy). If I see the owner of a glove drop it, I return it immediately. I carry around a plastic bag for quick hygienic scoop-ups of gloves of questionable origin.

How Data Gets to Asia

[Asian data routes visualization]

Regardless of where I start, it always takes me forever to get to Asia, though given a choice, I prefer flying overland from Western Europe. Data packets seem to make the same trip pretty effortlessly, so I was curious to see what route they take.

I selected thirty Asian websites and, using Tellurian.net’s handy traceroute script, traced the route data takes from my computer in New York to reach them (I’m behind an NYU firewall that makes tracerouting directly pretty difficult). The results were pretty crappy: half the time, the trace timed out before it even reached Asia.

I ended up using this nifty visual traceroute tool, which uses Google Maps to plot an approximate route, to figure out how packets get from here to there. I discovered a number of interesting things:

  • Most data heads to China from the Los Angeles area, though interestingly enough, Baidu always seems to go through Mexico. I’m guessing this has something to do with the wires it favors.
  • The latency between hops once the data reaches China jumps from between 4 and 40ms to well over 200ms (an effect of the Great Firewall, I assume).
  • Because of this, most data bound for Asian destinations other than China tends to avoid China, with the notable exception of SK Telecom’s website, which is routed through Suide, a Chinese city I’d never heard of.
  • The majority of the data lines belong either to Verizon or to AT&T, though other providers, such as Cogentco, also pop up occasionally.
  • Many of the Indian and Vietnamese sites I looked up are hosted in the US, so they didn’t make it onto my visualization.
  • Traceroutes are not all that reliable.

Interesting stuff. I’ll do the same exercise from Shanghai the next time I’m in China just for comparison’s sake.

Painting Pong

For my networked pong game controller, I thought I’d have a go at using an accelerometer and a paint roller. Instinctively, everyone knows how to use a roller, so it seemed like a natural interface for a paddle that moves either up and down or left and right. I initially thought I was going to use the accelerometer to measure movement. Turns out that accelerometers measure acceleration (gravity included) rather than position, which makes them kind of sucky for anything but determining orientation in space. Plan B was to use a photo sensor, an LED, and some black tape to make a rotary encoder.

To get my encoder working, I counted the transitions from light to dark and dark to light and timed them, figuring that the longer of the two would represent the larger piece of tape and thus tell me which direction we were moving in. It sort of works.
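A rough sketch of that timing idea, with placeholder pin and threshold values rather than the ones from the real build:

    // Time each light and dark stretch: a longer stretch means a wider
    // piece of tape just passed the sensor. Pin and threshold are
    // placeholders; tune per setup.
    const int sensorPin = 0;   // photo sensor in a voltage divider
    const int threshold = 512; // light/dark cutoff

    bool wasDark = false;
    unsigned long lastTransition = 0;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      bool isDark = analogRead(sensorPin) < threshold;
      if (isDark != wasDark) {
        unsigned long now = millis();
        Serial.print(wasDark ? "dark for " : "light for ");
        Serial.println(now - lastTransition); // stretch length in ms
        lastTransition = now;
        wasDark = isDark;
      }
    }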

The Arduino Code

I need to sit on this for a while. I'm sure there are plenty of documented ways of doing this (Tom Gerhardt used this method with his awesome spinning plates synthesizer) but I sort of want to figure it out on my own. I can easily get the orientation of the roller using the accelerometer but I might not be able to get the direction of its rolling using the method I'm using. Some sort of rotary switch attached along the actual axis of rotation would probably do the trick. I like the idea of doing it with light, though, so I'm going to keep on thinking about this, though I may give in to the Google soon.
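The orientation part, for reference, is just a matter of seeing which axis gravity is sitting on; something like this (the pins and the 512 no-tilt midpoint are assumptions about the particular breakout):

    // Decide how the roller is being held by checking which
    // accelerometer axis gravity has loaded. Pins and the 512
    // midpoint are assumptions about the particular breakout.
    const int xPin = 0;
    const int yPin = 1;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      int x = analogRead(xPin) - 512; // ~0 when this axis is level
      int y = analogRead(yPin) - 512;
      if (abs(x) > abs(y)) Serial.println("vertical: paddle moves up/down");
      else                 Serial.println("horizontal: paddle moves left/right");
      delay(100);
    }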

In-class Visualization

After the debacle at The Smith last weekend, where my shoddy GSR sensor crapped out on me, I had a chance to rethink what I was trying to sense, how, and why. Accelerometers, I learned, aren’t so good at measuring a turning head (though a turn does register slightly on one axis), so I had to consciously tilt my head to one side when looking to the left and to the other when looking to the right to ensure I got good readings. Which meant I was conscious at all times of which way I was looking. Not ideal. It also meant that I could discard the readings from two of the three accelerometer axes and focus on how the GSR readings matched up with where (at whom) I was looking. To that end, I made a new GSR sensor, which more than makes up in robustness for what it lacks in subtlety.

My short-lived attempt last week was enough to establish that there is no direct correlation (at least not one my setup can detect) between my feelings towards a person and my micro-sweating. So instead, this week I re-hot-glued my glasses and attempted to measure my engagement in the discussion going on during this week’s class. I thought a bunch about how I wanted my data to look and decided the visualization should graphically represent what I was actually measuring, as opposed to a more abstract rising and falling line. The eyeballs approximate where I was looking and the size of the mouth represents my GSR. I would have liked to have the eyes grow wider at local maxima and blink at local minima, but I couldn’t figure out how to pick those points out of the data in code. I would also have liked to give the viewer control over the playback, but that too proved too daunting a programming task.
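One way the local maxima and minima could work: smooth the samples a little, then flag any point that beats (or is beaten by) both of its neighbors. A toy sketch of that idea (the sample values are made up):

    // Find local maxima/minima in logged GSR readings: smooth with a
    // 3-point moving average, then compare each point to its neighbors.
    // The sample values are made up for illustration.
    const int N = 10;
    int gsr[N] = {300, 310, 350, 340, 330, 360, 355, 340, 345, 350};
    int smoothed[N];

    void setup() {
      Serial.begin(9600);
      for (int i = 0; i < N; i++) {
        int lo = max(i - 1, 0), hi = min(i + 1, N - 1);
        smoothed[i] = (gsr[lo] + gsr[i] + gsr[hi]) / 3;
      }
      for (int i = 1; i < N - 1; i++) {
        if (smoothed[i] > smoothed[i-1] && smoothed[i] > smoothed[i+1])
          Serial.println("local max: widen the eyes here");
        if (smoothed[i] < smoothed[i-1] && smoothed[i] < smoothed[i+1])
          Serial.println("local min: blink here");
      }
    }

    void loop() {}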

I’m not sure I can derive any solid conclusions other than that I spent a lot of time looking at Dan O, and that I’m apparently obsessed with changing facial expressions. Here’s a sample of the output: