Archive for the 'Random' Category


The iSmell: every five years, like clockwork

Ophone

It’s that time again. Yup, computer scent peripheral. This incarnation requires users to stick their noses right up against the disturbingly organic end of a little scent stick for a personalized whiff “symphony.” The interaction, while bizarre, is definitely an improvement over more pervasively ambient approaches, but imagining someone sniffing at a plastic cylinder while absorbed in a screen makes me think of the people I sit next to on the bus who have the off-putting habit of smelling their fingers while they read. That or one of those old Vicks eucalyptus inhalers.

Timer reset. Sigh.

I Told You So: New York Times Paywall has Arrived

Finally: the Times announces that it will institute the subscription model it first announced last year. Maybe my thesis will get some press! Check out some alternative paywalls while we wait to see whether people will cough up the cash, old-school print journalism will be saved, and I’ll finally get that TED invite I’ve been waiting for.

Protected: Google Gas


Eliza’s Astriconversations


Astricon in DC a couple of weeks ago was my first trade show as an exhibitor, and I had a fabulous time. John Todd, Digium’s Asterisk Open Source Community Director, invited me to attend and show off Eliza, my video chatterbot. The conference took place at the gargantuan Gaylord National Resort and Convention Center in the altogether bizarre and otherworldly National Harbor development on the banks of the Potomac.

My table was in the little open-source corner of the hall, tucked between some very fancy commercial exhibitors and the constantly rotating cornucopia of caffeinated beverages and high-calorie snacks. Eliza was set up between Astlinux, a custom Linux distribution centered around Asterisk, and the rowdy Atlanta Asterisk Users Group. I was also within spitting distance of the OpenBTS project (roll your own GSM cell tower), of which I’m a big fan, and Areski Belaid, a developer with a finger in numerous telephony pies, including Star2Billing, which essentially allows anyone to become a long-distance phone company. Really interesting stuff.


The most surprising thing about the whole experience, other than the incredible amounts of cookies and sweets, was the communityness of the Asterisk community. Everyone seemed to know everyone, most people over a certain age were way into ham radio, there was nary a GUI in sight, and everyone seemed genuinely interested in everyone else’s projects, including mine.

I spoke for nearly an hour with Tim Panton from PhoneFromHere, a company that integrates voice and chat services into existing websites so businesses can interact directly with their customers over the web. He suggested I cut Flash out of Eliza by using HTTP Live Streaming, which made me realize that I might also be able to ditch the socket server and use HTML5 WebSockets!

Mark Spencer, the boffin responsible for Asterisk, stopped by and seemed genuinely pleased to see that a couple of years on, ITPers are still playing with his baby, making it contort in unexpected ways.

The folks at LumenVox (speech recognition) and GM Voices (speech synthesis and lightning-turnaround voice recording) generously offered to help robustify Eliza for her next iteration.

Also enthusiastic were Jason Goecke and Ben Klang, the principal movers behind Adhearsion, a Ruby framework that reskins Asterisk in a slick, modern web way. They’re also involved with Tropo, by far the best cloud-hosted Asterisk service I’ve seen: write scripts in a variety of languages, host them yourself or on Tropo’s servers, debug them through a web interface, take advantage of the built-in speech recognition, and integrate seamlessly with AGI. Best of all, it’s free for development; you only pay when you’re looking to cash in. They turned me onto this interactive phone/video piece, which got me thinking.

ELIZA 2.0

For her next iteration, Eliza’s going to be on the web, hopefully in gloriously standards-compliant HTML5. Instead of canned conversations, she’ll rely on silence detection and Markov chains to generate much more dynamic conversations. The GM Voices people told me that they often record vocabularies: phrases in a variety of intonations, so that you can do text-to-speech with real voices rather than those slightly Scandinavian-sounding canned computer voices. I’ll be posting my progress soon.
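For the curious, the Markov-chain part is conceptually very simple. Here’s a minimal word-level sketch in Java, purely to illustrate the technique (it isn’t Eliza’s actual code): train on some conversation text, then walk the word-to-followers table to babble new replies.

```java
import java.util.*;

// Minimal word-level Markov chain, just to illustrate the idea; not Eliza's actual code.
// Train on transcripts, then generate replies by walking the follower table.
public class MarkovBabbler {
    private final Map<String, List<String>> followers = new HashMap<>();
    private final Random random = new Random();

    // Record which word follows which in the training text.
    public void train(String text) {
        String[] words = text.toLowerCase().split("\\s+");
        for (int i = 0; i < words.length - 1; i++) {
            followers.computeIfAbsent(words[i], k -> new ArrayList<>()).add(words[i + 1]);
        }
    }

    // Start from a seed word and keep picking a random recorded follower.
    public String babble(String seed, int maxWords) {
        StringBuilder out = new StringBuilder(seed);
        String current = seed.toLowerCase();
        for (int i = 0; i < maxWords; i++) {
            List<String> next = followers.get(current);
            if (next == null || next.isEmpty()) break;
            current = next.get(random.nextInt(next.size()));
            out.append(' ').append(current);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        MarkovBabbler m = new MarkovBabbler();
        m.train("how do you feel about your mother and how do you feel about your job");
        System.out.println(m.babble("how", 12));
    }
}
```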

Bleep.tv: Sensorship


I was just thinking about how much I’m looking forward to starting classes again. For some reason, though, I can’t seem to think anything positive for very long. No matter that this past year and a half has been the happiest and most productive of my life or that even the worst day at ITP is better than the best day at any job I’ve ever had, immediately my mind looks for the cracks (minute though they might be), pisses in them, and waits patiently for temperatures to drop below freezing.

So the one thing I really don’t like in class is when people decide that they have something to say and must say it immediately, patient protocol-abiding handraisers be damned. I’m all for impetuousness, just not during class discussions when what you say is no more important than what I say (and certainly not more important than what the instructor says). There are several chronic offenders who are responsible for a fair amount of collective arm strain and lip biting, and it is to them I dedicate CensorMe.

CensorMe is a little Processing app that uses the OpenCV library’s face tracking methods to superimpose a black bar over any eyes it finds in the frame. It also emits a loud beep any time it detects speech. If the government does it, that’s a no-no, but self-censorship has always seemed very much de rigueur in the US.
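The face-bar half of the idea boils down to very little code. Here’s a minimal Processing sketch of it, assuming the gab.opencv “OpenCV for Processing” wrapper (an assumption on my part; it’s not necessarily the OpenCV library CensorMe uses, and the beep-on-speech part is left out):

```java
// Minimal Processing sketch of the face-bar idea, using the gab.opencv
// "OpenCV for Processing" wrapper. The beep-on-speech detection is omitted.
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  opencv.loadImage(cam);
  fill(0);
  noStroke();
  for (Rectangle face : opencv.detect()) {
    // Rough heuristic: the eyes sit in the upper third of a frontal-face
    // detection, so drop a black bar across that region.
    rect(face.x, face.y + face.height * 0.25f, face.width, face.height * 0.2f);
  }
}
```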

I’d like to encourage a number of my classmates to download it.

(I’m working on an online version, but I may have to port it to Flash, as doing video capture in Java over the web is kind of a nightmare.)

tweepetry

My father-in-law is obsessed with Twitter poetry, or tweepetry, as he calls it. As someone incapable of succinct expression, I’m a great admirer of the masters of the 140-character quip, foremost among them the inimitable Anderson Miller and the by now overfollowed Shitmydadsays. I’m not sure anyone really knows what the point of Twitter is, but it does lend itself to recording felicitous turns of phrase, so I applaud my pop-in-law’s efforts to up the literary ante.

As a gift last year, I made him a site that aggregates tweepetry using Seaofclouds’s nifty little Twitter-scraping JavaScript. The concept was super simple: tag any tweet with #tweepetry and it automatically appears on tweepetry.com. I added a couple of snippets to the code to strip out the hashtag, and done!

It worked great for a couple of weeks. Then the bits of tweepetry that had appeared on the site began disappearing. After three weeks, they were all gone. I did some research into the Twitter Search API and it turns out that only the last 10 days or so are indexed. Anything older than that can no longer be fetched dynamically using search. So tweepetry.com sat empty for six months gathering internet dust.

After three months of database programming, I realized I’d amassed the tools to resolve this problem, so I revisited the site, rewriting everything but its visual elements from scratch.

Some recent tweeps

HOW IT WORKS

The site is still dynamic, but the dynamism is all backstage.

There’s now a PHP script running on my server that polls Twitter every 10 minutes (searches from one IP are limited to 150 per 24 hours). It talks to a MySQL database with two very simple tables: the first records the id, username, time, and text of each tweet, and the second keeps track of the id of the last tweet. I used two tables to keep my database queries fast. Every time the script runs, it searches Twitter for tweets that include the #tweepetry hashtag. If it finds some, it compares the id of the latest with the id stored in the second table. If they’re the same, nothing happens; if they’re different, each new tweet is stored in the database until the id of the last recorded tweet is reached, at which point the stored id is updated to reflect the new additions. Then there’s a simple script that fetches all this information and displays it. That’s it!
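The bookkeeping is the only even slightly subtle part, so here’s a sketch of it. The real thing is a PHP script talking to MySQL; this illustration is in Java, with the Twitter search and the two tables replaced by hypothetical in-memory stand-ins, just so the last-seen-id logic is easy to see:

```java
import java.util.*;

// Sketch of the "store new tweets until we hit the last-seen id" bookkeeping
// described above. The real version is a PHP script talking to MySQL; here the
// search results and the two tables are hypothetical in-memory stand-ins.
public class TweepetryPoller {

    static class Tweet {
        final long id;
        final String user;
        final String text;
        Tweet(long id, String user, String text) {
            this.id = id; this.user = user; this.text = text;
        }
    }

    private final List<Tweet> storedTweets = new ArrayList<>(); // stands in for table 1
    private long lastSeenId = 0;                                // stands in for table 2

    // newestFirst plays the role of a "#tweepetry" search result, newest tweet first.
    public void poll(List<Tweet> newestFirst) {
        if (newestFirst.isEmpty() || newestFirst.get(0).id == lastSeenId) {
            return; // nothing new since the last run
        }
        for (Tweet t : newestFirst) {
            if (t.id == lastSeenId) break; // reached tweets we already stored
            String cleaned = t.text.replaceAll("(?i)#tweepetry", "").trim(); // strip the tag
            storedTweets.add(new Tweet(t.id, t.user, cleaned));
        }
        lastSeenId = newestFirst.get(0).id; // remember the newest id for next time
    }
}
```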

AL-gorithm all over the web.

Extreme closeup

I’m nearly famous.

Here.
And here.
And on Cool Hunting.

Mounted

SoundSnapp: A sound-activated iPhone camera app

SoundSnapp Overview
After struggling to take a picture of myself with my iPhone a couple of months ago, I had the idea of creating an iPhone camera app that doesn’t require pushing buttons on the screen to take a picture. The simplest way to do this, it seemed to me, was to use the microphone as a shutter release. [Apparently, I wasn’t alone (warning: annoying iTunes link).] Set a sound threshold and an optional timer, make some noise, and when the volume crosses the threshold, a picture gets taken and saved to the phone’s Camera Roll. Below are some screenshots from the working application. The full documentation is here.
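The trigger logic is about as simple as it sounds. Here’s the shape of it, sketched in plain Java with hypothetical micLevel() and takePhoto() stand-ins; the real app runs on the iPhone against the platform audio and camera APIs, so this only illustrates the flow, not the actual implementation:

```java
// The shape of SoundSnapp's trigger loop, sketched with hypothetical
// micLevel() and takePhoto() stand-ins: threshold, optional timer, shoot.
public class SoundTrigger {

    private final double threshold;  // input level (0.0 to 1.0) that fires the shutter
    private final long delayMillis;  // optional timer between the trigger and the shot

    public SoundTrigger(double threshold, long delayMillis) {
        this.threshold = threshold;
        this.delayMillis = delayMillis;
    }

    public void run() throws InterruptedException {
        while (true) {
            if (micLevel() >= threshold) {   // the noise crossed the threshold
                Thread.sleep(delayMillis);   // optional countdown before the shot
                takePhoto();                 // snap and save to the Camera Roll
            }
            Thread.sleep(50);                // poll the level about 20 times a second
        }
    }

    // Hypothetical stand-ins for the platform audio and camera calls.
    private double micLevel() { return Math.random(); }
    private void takePhoto()  { System.out.println("click"); }

    public static void main(String[] args) throws InterruptedException {
        new SoundTrigger(0.8, 2000).run();
    }
}
```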
