Physical Computing: Wheredipuddit? RFID Inventory Boxes

For my final project in physical computing, I wanted to follow through on one of the pre-ITP goals I outlined in an older blog post.


I wanted to build an inventory system that used RFID/wifi/whatever to check stuff into boxes, so that when I needed to find something, I could pull it up on my phone or in a browser, ask where it was, and the box containing it would glow with LED light.  At the same time, these boxes and things would become individuals and traits, respectively, which I could add or subtract to create objects with personalities.

Because I hate making people wait for the payoff, here are some videos showing it in action:

Pinging an object from the Wheredipuddit interface:

Pinging an object via QR code (I would show this via a cellphone but I didn’t have another camera):

Checking emotions into a box:

Some Inspirations for This Project

This was an idea somewhat fleshed out in Cory Doctorow’s book “Makers”.  The book involves two hackers who work together in a junkyard to produce lots of low-tech but highly ingenious inventions and gadgets that end up making modestly large amounts of money.  Serial tinkerer-capitalists or something.  Some of the relevant text, from Cory Doctorow’s free (!) online version of “Makers”:

Tjan opened the door with a flourish and she stepped in and stopped short. When she’d left, the place had been a reflection of their jumbled lives: gizmos, dishes, parts, tools and clothes strewn everywhere in a kind of joyful, eye-watering hyper-mess, like an enormous kitchen junk-drawer.

Now the place was *spotless* — and what’s more, it was *minimalist*. The floor was not only clean, it was visible. Lining the walls were translucent white plastic tubs stacked to the ceiling.

“You like it?”

“It’s amazing,” she said. “Like Ikea meets *Barbarella*. What happened here?”

Tjan did a little two-step. “It was Lester’s idea. Have a look in the boxes.”

She pulled a couple of the tubs out. They were jam-packed with books, tools, cruft and crud — all the crap that had previously cluttered the shelves and the floor and the sofa and the coffee table.

“Watch this,” he said. He unvelcroed a wireless keyboard from the side of the TV and began to type: T-H-E C-O. . . The field autocompleted itself: THE COUNT OF MONTE CRISTO, and brought up a picture of a beaten-up paperback along with links to web-stores, reviews, and the full text. Tjan gestured with his chin and she saw that the front of one of the tubs was pulsing with a soft blue glow. Tjan went and pulled open the tub and fished for a second before producing the book.

“Try it,” he said, handing her the keyboard. She began to type experimentally: U-N and up came UNDERWEAR (14). “No way,” she said.

“Way,” Tjan said, and hit return, bringing up a thumbnail gallery of fourteen pairs of underwear. He tabbed over each, picked out a pair of Simpsons boxers, and hit return. A different tub started glowing.

“Lester finally found a socially beneficial use for RFIDs. We’re going to get rich!”

“I don’t think I understand,” she said.

“Come on,” he said. “Let’s get to the junkyard. Lester explains this really well.”

He did, too, losing all of the shyness she remembered, his eyes glowing, his sausage-thick fingers dancing.

“Have you ever alphabetized your hard drive? I mean, have you ever spent any time concerning yourself with where on your hard drive your files are stored, which sectors contain which files? Computers abstract away the tedious, physical properties of files and leave us with handles that we use to persistently refer to them, regardless of which part of the hard drive currently holds those particular bits. So I thought, with RFIDs, you could do this with the real world, just tag everything and have your furniture keep track of where it is.

“One of the big barriers to roommate harmony is the correct disposition of stuff. When you leave your book on the sofa, I have to move it before I can sit down and watch TV. Then you come after me and ask me where I put your book. Then we have a fight. There’s stuff that you don’t know where it goes, and stuff that you don’t know where it’s been put, and stuff that has nowhere to put it. But with tags and a smart chest of drawers, you can just put your stuff wherever there’s room and ask the physical space to keep track of what’s where from moment to moment.

“There’s still the problem of getting everything tagged and described, but that’s a service business opportunity, and where you’ve got other shared identifiers like ISBNs you could use a cameraphone to snap the bar-codes and look them up against public databases. The whole thing could be coordinated around ‘spring cleaning’ events where you go through your stuff and photograph it, tag it, describe it — good for your insurance and for forensics if you get robbed, too.”

He stopped and beamed, folding his fingers over his belly. “So, that’s it, basically.”

Perry slapped him on the shoulder and Tjan drummed his forefingers like a heavy-metal drummer on the side of the workbench they were gathered around.

They were all waiting for her. “Well, it’s very cool,” she said, at last. “But, the whole white-plastic-tub thing. It makes your apartment look like an Ikea showroom. Kind of inhumanly minimalist. We’re Americans, we like celebrating our stuff.”

“Well, OK, fair enough,” Lester said, nodding. “You don’t have to put everything away, of course. And you can still have all the decor you want. This is about clutter control.”

“Exactly,” Perry said. “Come check out Lester’s lab.”

“OK, this is pretty perfect,” Suzanne said. The clutter was gone, disappeared into the white tubs that were stacked high on every shelf, leaving the work-surfaces clear. But Lester’s works-in-progress, his keepsakes, his sculptures and triptychs were still out, looking like venerated museum pieces in the stark tidiness that prevailed otherwise.

Tjan took her through the spreadsheets. “There are ten teams that do closet-organizing in the network, and a bunch of shippers, packers, movers, and storage experts. A few furniture companies. We adopted the interface from some free software inventory-management apps that were built for illiterate service employees. Lots of big pictures and autocompletion. And we’ve bought a hundred RFID printers from a company that was so grateful for a new customer that they’re shipping us 150 of them, so we can print these things at about a million per hour. The plan is to start our sales through the consultants at the same time as we start showing at trade-shows for furniture companies. We’ve already got a huge order from a couple of local old-folks’ homes.”

I kind of read into the book a post-Apple world, where the production process has become so hyper and quick, in order to account for gadgetphiles’ fickle tastes, that smaller ideas are put into mass production and grand visions are no longer marketable on a soon-enough timeframe.  What we’re seeing today is the democratization of hardware, following in the shadow of software’s reign over the last 30 years or so.  With lots of small shops now selling microcontrollers, Radio Shack retooling its stores to sell circuitry components once again, and the advent of the internet of things and sensor-based objects learning how to sense the world around them, our world is going autonomous.  Think military drones, but on smaller scales and for more everyday applications.

Having boxes talk made me think of the TED Talk by the MIT student who built Siftables: small toy blocks with screens, accelerometers, and sensors to detect tilting, proximity to other blocks, etc., which could be configured on the fly to play games directed by a nearby computer:

My classmate Mark Breneman was telling me to look into near-field communication, or NFC.  It’s included in the Nexus S phone for Android:

This will probably obsolete RFID, but right now it’s not quite cheap enough to use the way the ID-12 and other RFID readers are.  Its security is an improvement on RFID’s, though, so it will likely win out for more complicated applications.  I’d love to continue doing projects related to presence, identification, and communication using these technologies.

Another classmate (and make: contributor) Matt Richardson sent me this project, “Doh”, which uses RFID and Arduino to help you remember your wallet and keys before heading out the door.

Planning and Ordering Stuff

Here is a sketch I drew for what I want to build, along with some of the components I already ordered to make it work.

“EL tape” in the top-right should read “digital RGB LED strip”.  I bought both but ended up just using the digital RGB LED strip.

I wanted the Indiana Jones-y guy at the bottom to be significantly jowly, as he is in the film, but I think I just ended up messing up his whole head.  I love that quote though.  “We have top men working on it right now.”  “Who?”  “TOP. MEN.”

I’d love to have a tour of some of the tech behind Walmart’s and UPS’s logistics systems, which reportedly make use of RFID to help with real-time inventory.  The breadth of data and the information that they must be able to extract from it all is staggering to think about.

My system ended up with only two working boxes, though, since the costs add up quickly: wifi-capable microcontrollers (I got Diamondbacks from cutedigi), RFID kits, some RFID stickers, digital RGB LED strips (to make the tupperware boxes glow), 4 AA battery packs from Radio Shack to power the microcontrollers, and other assorted power connectors.  I also got some LED screens in case I wanted to do some interface stuff.  The tupperware I bought at KMart.  I had to order plastic PVC ID cards online, as they are surprisingly hard to find in town (Staples didn’t even have them).  Maybe they are a somewhat controlled item because people use them to make fake IDs?

Now, having never played with any of these things, and having never done a physical computing project of this magnitude, I was fully expecting that some pieces wouldn’t work with each other (I was worried about the cards/tags/stickers being compatible with the RFID readers), and that I might have to buy MORE stuff.  I chose this project because I thought it’d be doable, given my experience and the capability of the hardware.

What I wanted to do was just prototype a simple inventory system where each box has a scanner that lets me “check in/out” objects via an RFID reader, which then passes the data over wifi to my server, which then displays a nice PHP page showing where stuff is.  If I want to find something, the PHP script asks the box that contains it to light up.  That’s about it.  And maybe, if I had enough time, I could let the boxes talk to each other in some silly way.  If I had time.  And if things didn’t blow up in my face.

Well, things kind of blew up in my face.  I gave myself plenty of time, but I ran into tons of problems.  Here’s the process:


Sparkfun RFID Starter Kit

I bought a couple RFID starter kits from Sparkfun, each of which includes two ID-card-sized 125 kHz cards, an ID-12 RFID module, and an RFID reader breakout board.  I tested the readers by plugging in a mini-USB to USB cable I bought from cutedigi; they read fine, displaying the RFID card codes in ZTerm, a modem comms app for OS X.  With that working, I then tried to get the reader talking to my Arduino Uno, using the code posted by Nick Gammon on the Arduino forum.  I soldered wires to the VCC (5V), GND (ground), and TX pins, making sure the TX wire wasn’t attached to the Arduino’s RX pin while uploading the sketch.  Then, success!  RFID numbers read out in Arduino’s serial monitor!
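For anyone curious, the tag numbers arrive over serial in a fixed frame format.  Here’s a minimal sketch of pulling the tag ID out of one raw frame, written as plain C++ so it can be tested off-device (the function name and sample tag are my own inventions; the 16-byte layout is per the ID-12 datasheet):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Pull the 10-character tag ID out of one raw ID-12 frame. Per the
// ID-12 datasheet, a frame is 16 bytes: STX (0x02), 10 ASCII tag
// characters, 2 ASCII checksum characters, CR, LF, ETX (0x03).
// This sketch carries the checksum characters but does not verify them.
std::string parseId12Frame(const unsigned char* frame, std::size_t len) {
    if (len != 16 || frame[0] != 0x02) return "";   // wrong size or no STX
    if (frame[13] != '\r' || frame[14] != '\n' || frame[15] != 0x03)
        return "";                                  // missing CR/LF/ETX trailer
    return std::string(reinterpret_cast<const char*>(frame) + 1, 10);
}
```

On the Arduino side you’d accumulate serial bytes into a buffer and hand each complete frame to something like this; verifying the two checksum characters would be the obvious next step.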

RGB LED Strips

My next task was to get my LPD8806 digital RGB LED strips to work with my Arduino Uno.

Several other students had worked with the strips for their physical computing projects, so I got some good tips on what transistors, shields, etc. to use for EL tape and EL wire, and what pages to look at.  Lady Ada’s guide was invaluable.

It looked like the RGB LED strips had the potential to draw way too much power for a portable solution.  I was worried I might have to fall back on EL tape or wire, which provide less visual feedback to the user (the RGB LED strips have individually addressable LEDs that you can set to any color, and thus give the user visual cues).  At that point, I also started considering what other components could provide feedback.  I wasn’t too happy about using individual LEDs — I thought that might look sloppy.  And a wave shield for playing sounds has its own host of problems: you have to solder the kit yourself, the shield would sit inside the box so the sound would be muffled, and so on.

I had ordered 2 meters of the LPD8806.  I unsoldered the first meter from the second as instructed by Limor’s (Lady Ada) guide, by peeling the strips apart as I heated them up with a soldering iron.  Then I soldered wires onto the input connections on the strip.

Then I wired everything up according to this diagram, also on the guide:

I had a massive problem understanding the power requirements, as the product page said that you’d need a 2 amp power supply in order to run the strips at full white output.  Well, I tried 1 meter strips with a 4 AA battery pack (with wire-only connectors; bought at Radio Shack).  Those worked great with the strandtest sketch from the LPD’s library I downloaded off Lady Ada’s site.  Success!:

I was most nervous about getting the LED stuff to work, but they already came with a library with examples to command the LEDs how I wanted; the hard part was figuring out the power and wiring.  4 AA batteries worked fine, when connected with the Arduino and running whites on each LED (though I wouldn’t want to keep it that way, lest it burn out or  be underpowered).  Where I ran into trouble later was when I wanted to run the RFID reader as well; the power draw coming off the LEDs while also running the wifi and RFID code was too much — not enough juice?  I was excited about creating color moods for the containers to represent their feelings, and I was looking forward to a beautiful fade-in, fade-out cyan for when a box says it contains an object I want.
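To make sense of the product page’s 2-amp warning, the arithmetic is simple enough to sketch.  These are typical datasheet-style figures, not measurements I took:

```cpp
#include <cassert>

// Back-of-the-envelope power budget for an LPD8806 strip at full white.
// Assumed figures (typical datasheet numbers, not measured): 32 LEDs per
// meter, 3 color channels per LED, roughly 20 mA per channel.
int fullWhiteMilliamps(int meters) {
    const int ledsPerMeter   = 32;
    const int channelsPerLed = 3;
    const int mAPerChannel   = 20;
    return meters * ledsPerMeter * channelsPerLed * mAPerChannel;
}
```

One meter at full white is pushing 2 A, which would explain why 4 AA alkalines were fine for the strandtest patterns but started starving things once the RFID reader and wifi shared the rail.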


Diamondback Wifi Boards

On the recommendation of my classmate Gavin and my prof, Scott Fitzgerald, I bought two Diamondbacks, which are basically ATmega328 chips on Arduino Duemilanove boards with on-board wifi (supporting open networks, WEP, WPA, and WPA2…a key bonus for accessing the multitude of networks out there).

I was a little worried about getting this part working because wifi is still a little sketchy and undocumented in Arduino-land.  I spent hours upon hours downloading different people’s libraries and looking through the linksprite and asynclabs forums and docs.  I even looked through the Chinese linksprite files, which was painful.

But the example sketches I was trying wouldn’t compile!  I asked Gavin, a classmate, for help and he mentioned that he was using Arduino version 0022, whereas I was using the latest version, 0023 (RC beta 3).  I downloaded 0022 and the sketches started compiling!  Not too long later, I managed to connect my little Diamondback to my WPA2 router!

So here’s what I learned:

  • Get a git clone of the user-contributed WiShield Arduino library from asynclabs.  Install it to your Arduino sketch folder in “libraries/”.
  • Edit apps-conf.h so that only one “APP_” define is uncommented.  For the client-based example sketches (like SimpleClient), uncomment “#define APP_WEBCLIENT”.
  • If things don’t compile and you get weird errors, try a different Arduino version.  I had to install Arduino 0022 to get my Diamondback to work.
  • Opening .pde sketches in Arduino 0023 will cause them to be renamed to .ino, which 0022 won’t see.  So you’ll have to go back and rename the sketch and re-open Arduino 0022 to see it again.
  • I think I had to go track down wire.h on the interwebs and save it as wiring.h because it was missing for some reason, giving me a compiler error message.
  • Make sure you set a static IP on your router for the wifi device to connect to — it can’t handle DHCP.
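To make those bullets concrete, here’s roughly what the two config spots look like.  This is from my memory of the asynclabs examples, so treat the names as approximate; the IPs, SSID, and passphrase below are placeholders:

```cpp
/* apps-conf.h: leave exactly one APP_* define uncommented.
   For the client examples (e.g. SimpleClient): */
#define APP_WEBCLIENT

/* Top of the sketch: hardcoded network settings. The library can't do
   DHCP, so use the static IP you reserved on your router. */
unsigned char local_ip[]    = {192, 168, 1, 10};  // placeholder static IP
unsigned char gateway_ip[]  = {192, 168, 1, 1};
unsigned char subnet_mask[] = {255, 255, 255, 0};
const prog_char ssid[] PROGMEM = {"my-network"};  // max 32 bytes
unsigned char security_type = 3;                  // 0=open, 1=WEP, 2=WPA, 3=WPA2
const prog_char security_passphrase[] PROGMEM = {"my-passphrase"};
```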


What a headache to get all that working!  The forums were not very descriptive or helpful for getting this to work.

I had to combine that code (which lets me GET a string of variables to a PHP web page, which can then pass them into a MySQL database) with two other pieces of code: code to read in the RFID ID # via the serial port (which I did above), and code to poll the web server to see whether it’s instructing the box to light up its LEDs.  I was really, really, really hoping this would be a smooth process.
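As a concrete sketch of the GET-a-string-of-variables part: the Arduino just needs to assemble a query string and request it.  The endpoint and parameter names below are my own stand-ins, not the ones my scripts actually used:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>

// Build the GET path that reports a scan to the PHP page. The endpoint
// name ("/checkin.php") and parameter names are hypothetical examples.
void buildCheckinPath(char* out, std::size_t outLen, int boxId, const char* tagId) {
    std::snprintf(out, outLen, "/checkin.php?box=%d&tag=%s", boxId, tagId);
}
```

The PHP page on the other end just reads its $_GET variables and does the INSERT.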

Front- and Back-Ends

I set up the database and front-end on the web server.  I set up a few tables: one for my list of things, one for my list of containers (I made things and containers separate because I’m thinking some things might also be containers), and one for a log tracking what events have happened.

The front-end went smoother than I thought.  I used jQuery and jQueryUI to easily build in a working interface.  It took me the most time to figure out how to correctly encode the MySQL pull into JSON via PHP so that I could access it via JavaScript and jQuery for my autocomplete search functions.  But it helped me to better understand how to navigate the DOM and inter-operate between the different languages.

Now I’ve got a pretty slick interface, though I might need to restructure the MySQL pulls into PHP classes instead of one-offs.  I also might need to restructure the data so that not everything is given away in my source’s JSON.  There’s a ton of work I need to do adding functionality for things like having the web server tell the Arduino to turn on its LEDs, etc.  But the basic layout is done!

I even added quests and recipes.  The quests seen above: OCD is unlocked when you put all the objects of the same type in the same box.  Fort Knox is when you check in your wallet, your checkbook, and a small box into the same container.  Love at First Light was going to be the coup de grace of my demo: both boxes would have “love” checked into them, and then they’d face each other (both have a reader and an RFID attached to their fronts) to talk, and then the small box would be checked into Eve.  All these conditionals would give birth to a new container in my database, called “Caintainer”.  So I wanted to create life for my class demo.  It didn’t work. :/

You can try out the Wheredipuddit interface on my web server.  It’s not connected to anything though.


These pieces all worked fine on their own.  Then it came time to put them all together.  This is when I started running into issues.  First, I was wrestling with the code to send the GET request.  The sample Arduino code that connected to a weather database worked.  My PHP script to read an HTTP URL into a MySQL database worked.  But when I tried to modify the Arduino sketch to insert a different RFID number depending on which tag was scanned, and then send the request from the Arduino, I’d often get Arduino resets.  My professor suggested that I stop trying to mess around with char pointers (char *) and arrays, and just hardcode the URLs with if..then checks for which RFID was scanned.

This worked with one example, but with 20 it also reset the board.  I suspected I was filling up the memory on the Arduino or something.  I reduced the number of if..else if..then checks to just 3 different RFID tags, and that seemed to work okay.
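My best guess, in hindsight, is that this was the ATmega328’s 2 KB of SRAM filling up: by default, every string literal in those if..else branches lives in RAM.  Here’s the table-driven shape I’d try instead, as plain C++ with made-up tag IDs and paths; on the actual board you’d declare the table with PROGMEM (or wrap literals in the F() macro) so the strings stay in flash:

```cpp
#include <cassert>
#include <cstring>

// Table-driven stand-in for a long if..else chain of hardcoded URLs.
// Tag IDs and paths are hypothetical examples. On an ATmega328 (2 KB of
// SRAM), string literals normally live in RAM; on the real board this
// table would be declared with PROGMEM to keep the strings in flash.
struct TagRoute {
    const char* tagId;
    const char* path;
};

static const TagRoute kRoutes[] = {
    {"0415AB89CD", "/checkin.php?box=1&tag=0415AB89CD"},
    {"04E2C01177", "/checkin.php?box=2&tag=04E2C01177"},
};

const char* pathForTag(const char* tagId) {
    for (const TagRoute& r : kRoutes)
        if (std::strcmp(r.tagId, tagId) == 0) return r.path;
    return nullptr;  // unknown tag: send nothing
}
```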

I then added the digital RGB LED strips.  With the way I wired it up, everything was drawing from the same power, and the Arduino would work fine, but when I scanned the RFID reader, it would click (not enough power, or a short circuit), or the LEDs wouldn’t come on.  I ended up getting another 4 AA battery pack and connecting it to the Arduino + RFID, with the LEDs getting their own pack (but sharing ground with the Arduino).  I’m not sure if this circuit caused later problems — I don’t think it did, since I didn’t get any more issues power-wise.  Save for the flimsy battery pack wires.  If I had more time, I’d definitely solder the wires to header pins so they’d hold up better to being moved.  The wires popped out pretty easily without pins.

I added another GETrequest at the end of the loop, which checked whether the server had changed the box’s mood.  The server would just output a single integer on the ping.php page, which the Arduino would read and then display on the LEDs as the corresponding color for the new mood.

I blew out an RFID reader after being frustrated and trying a different power setup.  I guess it didn’t like being hooked up along a connection tied to a 12V battery pack.  Oops.  First time I’ve blown out a component.

Maybe this was caused by the bug — or butterfly — in my Arduino.  I guess the butterfly was from a classmate’s butterfly project (she apparently had butterflies mailed to her).

I had some problems understanding the callback function that was coded into the example sketch.  It basically said: if you receive data in the serial buffer, run this function to process it.  Its parameters were already fixed, so those were my constraints.  I didn’t understand the underlying code enough to either scrap it or modify it.  I spent tons of time trying to figure out how to read my HTTP request’s response (which, in this sketch, includes the entire HTTP header — a pain in the ass because that’s a lot of extra characters to deal with in the serial buffer).  I tried many examples and tried writing some failed C.  I ended up, though, with this:

// Callback that processes data returned by the server.
// Note: the data is not null-terminated, may be broken up into
// smaller packets, and includes the full HTTP header.
void printData(char* data, int len) {
  while (len-- > 0) {
    if (*data == '*') {        // '*' is our beginning character
      startRead = true;        // ready to start reading the payload
    }
    if (startRead) {
      if (*data == '0') {      // payload '0': set the mood color
        dither(strip.Color(0, 127, 127), 20);
      }
      Serial.print(*(data++)); // echo and advance to the next character
    } else {
      data++;                  // skip header bytes until we see '*'
    }
  }
}

The Serial.print(*(data++)) inside the startRead conditional is crucial: it advances the pointer so the next character gets read.  I did try a serial.flush() after this callback function, but I think it was interfering with the other GETrequest, so I removed it.  I suspect more must be done here to make it a clean read…

I have a feeling having two GETrequests could have contributed to one of my major issues, which seemed to be flooding the serial buffer.  I’d often get the LED on pin 13 stuck on, as if the board was being flooded with data.  I did try to serial.flush() my serial connections, but that seemed to destroy communications — nothing was logged in my database as having been touched.  Other times, I would scan stuff in or try to ping objects from my browser (pinging from the browser would tell the Arduino, on its next connection, that it needed to light up to show the user that the requested object was inside), and nothing would happen!

This is actually exactly what happened during my class presentation.  Nothing worked.  I felt bad about it because I prepared for the project well, bought stuff early, put in early test work, and then did all-nighters for almost two weeks trying to figure the whole thing out.  I felt like I tried hard, picked a project I could accomplish, and put in the time.  And it still didn’t work during the demo, though I KNOW some parts of it work well…sometimes. Here’s a video clip of me giving my presentation (may have been edited for time):


So there you go.  It was a highly disappointing ending to a final project.  I wondered if working alone caused my problems, or not asking others for enough help.  My lessons learned:

  • When doing serial communications, make sure you try to understand EVERYTHING being passed across.  Make sure to always use the serial monitor, just to be sure you’re getting expected results.
  • When doing communications between client and server, always build tons of logging into your code, even at first when you’re just scaffolding the project.  You will need to do small unit testing on each case individually to make sure it works, before trying to put everything together.  The worst is when you can’t figure out where your problems are being caused, because there’s too much going on and too many points of failure.  Keep the network traffic thin so there’s more margin for error.  TEST EVERYTHING INDIVIDUALLY.
  • I would have bought an extra Arduino and the new WiShield 2.0 shields, or waited for the new Arduinos with built-in wifi.  The problem I had with the Diamondback was that I ended up using poorly documented code that didn’t do what I wanted it to and didn’t seem very configurable, and operated far less usefully than Arduino’s example code in the ethernet library.  The WiShield 2.0 code seemed far more user-friendly.


What interests me about this is the long-term application.  What will it be like when objects can tell you if they’re missing parts, or they can report to you on their health, or they can take out some of the daily logistics planning that we send to our brains’ subroutines every day while we try to get other stuff done?  What will the world be like when things start talking to each other outside of the internet?  Can I get my boxes to talk to each other while they’re near each other?  Can I build recipes, like scan a bunch of characteristics (honesty, humor, good looks) into the boxes so that, if the recipe is right for both, they “fall in love”?  In talking with another classmate, Tak, we realized that if there were, say, 50 boxes piled against a wall, you could turn them into interactive pixels, controllable via Processing sketches!  And as I talked to yet another classmate, Christie, I realized, what if there will be a job in the future for creative storytellers where their job is to imbue objects with personalities?  Think of just observing everyday objects talk to each other, all being coded by different people, all with unpredictable and surprising interactive behaviors, with companies competing to hire the most creative people to give their products signature anthropomorphized personalities.

I didn’t get my project to work.  I thought I’d be able to do it.  But I ended up encountering issues at almost every step of the way, with five new obstacles popping up each time I solved one.

I definitely need a mental break from school now that the semester’s over.  I want this idea to work, but I’m going to have to accept moving on from it because I’ll have other class projects to do.  It’s very hard for me to leave a problem unsolved, though, so it’s been gnawing at me.  I’ll be using the break to forget about it and resist coming back to figure out what’s wrong with it.  At least for a while, or until I can use the project for another class or application.  Sigh.

My Arduino code is below the jump:


Comm Lab: Web, ProbablyGonna

Our first class of comm lab: web.  We went over some HTML and forms.  Our homework was to produce two documents, one with a recipe, and the other with a basic email form.  I have a feeling we’ll be building upon these documents since we were told to mark them up clearly and as much as possible.  This will make styling easier once we start doing some CSS.  I went ahead and added more IDs and classes, and threw in some jQuery as well for the form validation.  The documents are valid under HTML5.

For my recipe page, I wanted to make a recipe for making “A Good Human”.  A flawed recipe, I’m sure, but it’s good enough for now.  I’m hoping it may tie in at some point.

My email form was a preview for something I’m hoping to generate through the course of the class, ProbablyGonna.  One thing I hate is being out on the town but finishing early, and wanting to still go out and have more fun.  I know some friends are out and about, but it’s too hard at some point to meet up with people once they’re already having fun.  So I wanted to make a service that lets you put what you’re probably gonna do.

Like, you might be, “I’m probably gonna go dancing this Saturday in Adams Morgan, so if anyone’s in the area…”  Or, “I’m probably gonna go hiking some weekend, is anyone possibly down?”  The person who opens the ProbablyGonna item is the pioneer of fun.  But the pioneer always needs the validators, the next few who make the event a “go”.  Then everyone else piles on.  But you need the initial sparks, and you need to nurture the kindling until it catches fire.

The thing is, with events, most people don’t plan well in advance.  Facebook has events and our email gets spammed with e-vites, but really, don’t you just end up at 4PM on Friday or Saturday wondering what the hell you really want to do?

Most plans happen at the last minute.  Some even happen on the fly, when you’re in the area.  Who has enough active friends on Foursquare to use it to locate a place nearby to go?  Who checks in regularly on Foursquare?

It’d be nice if someone could put out an open invite for friends (or even strangers) to converge on one spot — at their own leisure.  This also gives context to a location’s events during a specific time.  Look, there are these parties going on at this venue; once you get there, of course, the parties dissolve into one mass of people enjoying the location.  (It would also be nice to have a site where you could do wrap-ups of how cool a party was the night before, and the cool stories that happened that you just HAVE to tell.)

I want to do this because I hate when people blow their Friday and Saturday nights not knowing where the fun is.  I want to do this because I think it’d be cool to have a job where you just ensure that people attend one kickass party that they’ll talk about forever.

So with ProbablyGonna, you just need to enter in a rough when/where/activity entry, and see if others will join.

I’m also thinking there’ll be reputations, developed over time, with how reliable someone is for actually making an event happen after declaring it on ProbablyGonna.  That is, if someone posts 5 invites but bails on all 5, he’ll get a 0% reputation, whereas someone who’s solid will signal to others that it’s safe to make the trek to that venue because the host is definitely going to have made it happen.

So this is what I’ll be working on, hopefully in Sinatra/Ruby so I can learn that stack.  If that doesn’t work out, I’ll switch back to PHP.

ICM: Final Project, Genetic Crossing

For my final project in Intro to Computational Media, I intend to improve upon my genetic crossing sketches. The sketch already has some rudimentary functions to combine traits from two parents to create offspring. I want to expand the procreation to include an averaging of the parents’ trait values, then adding in randomized “luck” which may skew offspring’s traits quite a bit. Ideally, I also want to implement age, which affects traits over time as well, but this means I’ll also have to code in the phenomenon of death. I might try to make height and weight fluctuate, and have certain traits improve or worsen over time.
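The procreation rule described above is simple to state as code.  Here’s a hypothetical stand-in (the real sketch is in Processing; the function name and the luck range are arbitrary choices of mine):

```cpp
#include <cassert>
#include <cstdlib>

// Crossover rule: a child's trait is the average of its parents'
// values, plus a random "luck" skew drawn uniformly from
// [-luckRange, +luckRange]. The range is an arbitrary example value.
int crossTrait(int parentA, int parentB, int luckRange) {
    int average = (parentA + parentB) / 2;
    int luck = std::rand() % (2 * luckRange + 1) - luckRange;
    return average + luck;
}
```

Seeding the random generator during testing makes runs reproducible, which helps when checking that offspring stay within the expected bounds.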

Screenshot from the last iteration I worked on

I talked to my professor, Heather Dewey-Hagborg, and she recommended that I look into genetic algorithms to push myself further in this project.  This led me to Professor Daniel Shiffman’s “Nature of Code” class, which does deal with genetic algorithms and has sample sketches with functions named darwin() and classes named DNA, and deals with the concept of “fitness” for evolution.

I am not sure I want to implement “fitness” but I think I understand what it’s for when doing these sorts of sketches.  The ratios of influences upon the development of people (and things) fascinate me.  So I am very interested in how a person’s makeup of traits combines with environment and profound events (breakups, parents divorcing, personal trauma, etc.) and one’s innate personality to lead to how people become successful, mature adults.

I want to implement, to a larger degree, specific balances of traits within the equations and algorithms for defining people, since that directly relates to my eventual thesis project, an ecosystem for identity and reputation.  I would like the individuals to have loyalties to countries and to communities, and have that reflected to some degree by gravity, pulling them in certain directions.

I would like to have a pan-able interface where you can drag around the screen, like moving around on Google Maps.  Also I’d like to be able to zoom in and out to find interesting areas.  Heather and I talked about visualizing my entities/data better, perhaps changing from the differently-sized circles around a central circle, to more of an amoeba/cell-like look where the traits are contained within the cell’s walls/cytoplasm.

This led me to look at how others are doing data visualization in Processing.  I found Jer Thorp’s guide, which was very useful.  It made me think about whether I could make the visualizations look more like chromosome staining/karyograms.


Instead of connecting to a MySQL database, which may get tricky, I might dump a comma-separated value document from the MySQL database and then read that into my sketch.  The upside of this is that it’ll force me to be more proactive on the data side of things; the downside is I don’t want to try to do too much with the limited time I have to complete this project.
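The CSV route is easy enough that it's worth sketching. Here's roughly what reading such a dump looks like, in Python for illustration (Processing would read the file line by line and split on commas); the column names below are made up:

```python
import csv
import io

# A dump like the one a MySQL CSV export might produce
dump = """name,height,charisma
Ada,165,70
Ben,180,40
"""

# Parse rows into dicts keyed by the header row
people = list(csv.DictReader(io.StringIO(dump)))

# CSV gives back strings, so convert the numeric traits
for person in people:
    person["height"] = float(person["height"])
    person["charisma"] = float(person["charisma"])

print(people[0]["name"])  # Ada
```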

Part of my primary challenge is to visualize the data in a manner that makes at least some sense to a viewer, even with context pop-ups and menus, while still using a flexible data input source.  All of this within a framework that accurately demonstrates procreation, attraction, and movement according to human gravitational rules to create some signs of mass-scale life and A.I.

Adding Processing Sketches to WordPress Posts

Since people have a lot of problems getting their Processing sketches to work when they try to post their work online, here’s a guide I made.

First off, I don’t use the Processing plugins for WordPress.  I know that some people do, but I couldn’t get them to work.  Plus, I think using the embed code from the sketch-hosting site is more consistent.

Next thing you need to know is that there are two key modes when you edit a WordPress post, “Visual” and “HTML”, tabs attached to the upper-right corner of this edit window. (see the purple)  Visual is the simple mode where you just type in your text and it will be automagically formatted for you.  But if you need to insert HTML markup or paste in Arduino or Processing raw code, you need to use HTML mode.

So if you switch to HTML mode, you will see non-formatted text in HTML format.  Go to where you want to paste your code (to document your work!) and then type

<pre class="brush: java;">

And then paste your code, with

</pre>

after the code ends.  See the image above for an example of how it looks.

This will actually format your code in a fixed-width font, and it will also highlight your code according to a specific language’s syntax rules, IF you have the Easy Google Syntax Highlighter plugin installed.  The install is automatic if you search for it via the “Add New” menu under “Plugins” on the left side of your administration menu, instead of trying to download the code to your host yourself.

Having class="brush: java;" or class="brush: php;" or whatever will give you prettified code (and line numbers!) if you have the Easy Google Syntax Highlighter installed:

By the way, you’ll see I have Blackbird Pie and Viper’s Video Quicktags installed.  These let me embed YouTube and Vimeo videos just by pasting in the URL to the box that pops up after I click on the icons:

Blackbird Pie lets me display tweets just by entering in a tweet’s unique URL:

You’re not done yet.  You’ll have to make sure that any left and right angle brackets (< and >) in your code are translated into HTML entities first: < becomes &lt; and > becomes &gt;.  The reason is that the editor treats a raw < as the start of an HTML tag and tries to parse everything after it as markup.

So it’ll look like this in your HTML-mode:

If you don’t do this, then your code may get cut off/truncated once you post it.
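If you have a lot of code, doing the substitution by hand is tedious. Here's a tiny Python sketch of it (Python's built-in html.escape does the same job); note that ampersands have to be escaped first, or the new &lt; and &gt; entities would get double-escaped:

```python
def escape_for_wordpress(code: str) -> str:
    """Convert &, <, and > to HTML entities so the editor doesn't eat the code."""
    return (code.replace("&", "&amp;")   # must come first
                .replace("<", "&lt;")
                .replace(">", "&gt;"))

print(escape_for_wordpress("if (x < 10 && y > 2) {"))
# if (x &lt; 10 &amp;&amp; y &gt; 2) {
```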

Next you probably want to post your sketch.  I have found that embedded code almost always works unless your sketch uses some extra included library.  The hosting site has a pretty helpful interface to help you upload your applet.  So yes, you will have to export from Processing in “Standard” mode first, in order to get an “applet” folder.

Then go to the site to upload your code.

You’ll see instructions to compress your “applet” folder, which won’t exist until you export it in “Standard” mode from within Processing.

In OS X, you just right-click the “applet” folder and choose “Compress “applet””, which will create an “applet.zip” file.  This is what the upload form is looking for.

Once you upload (and name your project), you’re not done yet!  You will be taken to a verification screen to check if your Processing sketch actually works:

You must verify by clicking on the button in green in order for your upload to be complete and so people can find your sketch. Also, don’t link to the verification page — link to the resultant project page.

Then your project page will have a button where you can get the embed code for the sketch, which you can paste in HTML mode into WordPress.  However, if you want faster load times for your blog post, I would recommend only linking to the sketch instead of embedding it: when people read your blog, their browser has to load your Java applet as well, which can bog down a system, especially if your blog displays several sketches at a time.  One way to reduce this problem is to use the “More” button in your WordPress post editor (see the blue below):

This will, on the main blog page, only display the part of the post above the “More” tag.  Someone will have to click to go to the full post in order to see the rest.  But this is good for hiding lengthy code or videos or Java that will slow down someone’s load time.

Let me know if there’s anything I should add to this guide.

Physical Computing Group Project: Your Tweet Has Been Scent

For my physical computing class’s media controller group project, I worked with Gavin, Michael, and Yucef.  Our task was to build a controller that connected a human interface with media.  For our project, we chose to work with web interaction, whereas other groups chose to work with video or sound or light.

Your Tweet Has Been Scent/Odoruino (unofficial name)

Our team has built a device that sprays out certain scents based on which friend or family member sends you a tweet.


Originally our team’s idea was to build a game.  We were thinking it might be like the prisoner’s dilemma game theory examples, where two people would go into a room they’d not seen the layout of and then try to figure out puzzles in order to escape.  Each person would have someone outside the room who would use a computer to interface with the person inside the room, via the web and Arduino.  The person in the room would not be able to see much feedback at all, especially not from his teammate, and he might find that the person outside the room was not his partner but in fact the opponent’s partner.  We had all kinds of crazy ideas, like the person inside the room having to use an Indiana Jones-like sceptre on a model temple to figure out where to go next.

In the end, though, we could never figure out a way to make a project out of this.  We wanted an Arduino to unlock walls in the maze or puzzle that the web user would need in order to progress, and to unlock items in the room for the person there.  But it just never progressed.

Working with Smell

Yucef came up with an idea to use smell, where if you received a tweet from someone you knew, you would smell the presence of their message for you, so you could employ your sense of smell in case you were using your other senses for other projects.  Or if you came home, the room would have a lingering smell that reminded you of someone close to you.  Yucef liked the idea that smells are so powerfully linked to memory.

We were far quicker to turn this idea into something we knew how to build.  Gavin had worked with the Twitter API for his first project, and I had familiarity with the Twitter API and PHP.

We found the Glade Sense & Spray device in the store.  The device is actually pretty clever.  It has a motion sensor inside that detects movement and sprays when it senses something.  The canister top is depressed by a geartrain in the back of the housing, spraying out a scent very quickly.  It operates on 2 AA batteries.

By the end, we chopped off the front of the housing after removing the front lid, so we’d have more room to house 4 different diffusers.  We opened the device up and cut the wires leading to the mini breadboard with the motion sensor on it.  Then we soldered the toy motor (which powers the plastic geartrain) to our power and ground wires to plug in to our own breadboard to be powered with a 4-AA battery pack.

Here’s someone else’s teardown of the device.

I originally wrote a jQuery script to post values to a web page that the Arduino and its ethernet shield could connect to and parse, in order to figure out which spray diffusers to activate.  The jQuery script would parse JSON results from Twitter’s URL-based search.  The problem, which I of course only realized after finishing it, was that the Arduino doesn’t execute JavaScript: it just fetches the raw page source, so it was trying to parse the JS code itself rather than the rendered values.  Duh!  So I rewrote the script in PHP so that only the exact values we wanted to output would show in the page.

This worked fairly well because PHP makes it just as easy to turn JSON into an array it can use.  But I ended up having to put in a last-checked checker, because we didn’t want the Odoruino to see ALL past tweets, just the ones that were new since the last time it checked.  It took me a long time of debugging to figure out why my checker wasn’t working right.  Originally my for loop was just a simple test of whether any new tweets had come in at all, so individual tweets were never compared against the last time-check; the fix was to test each tweet’s timestamp against it.  I’m guessing that makes zero sense to anyone.
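The fixed check is easier to show than to explain. A minimal Python sketch (the field names here are invented for illustration, not Twitter's actual JSON schema):

```python
# Hypothetical tweets, each with a Unix timestamp
tweets = [
    {"user": "woz", "created": 100},
    {"user": "mom", "created": 205},
    {"user": "gf",  "created": 310},
]

last_checked = 200

# The fix: test EVERY tweet against the last check,
# not just whether any new tweet exists at all.
new_tweets = [t for t in tweets if t["created"] > last_checked]

print([t["user"] for t in new_tweets])  # ['mom', 'gf']
```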

Our code checked for an initial token, “*”, and an end token, “b” (just chosen randomly).  In between were our four digits, showing binary results.  For instance, “0101” meant that the 2nd and 4th diffusers had new data and would activate.
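Our actual parsing ran on the Arduino in C, but the packet logic is simple enough to sketch in Python:

```python
def parse_packet(packet: str) -> list:
    """Parse a '*DDDDb' packet into the list of diffuser numbers to fire."""
    if not (packet.startswith("*") and packet.endswith("b")):
        return []  # ignore garbage or partial reads
    bits = packet[1:-1]
    # '1' in position i means diffuser i+1 has new data and should spray
    return [i + 1 for i, bit in enumerate(bits) if bit == "1"]

print(parse_packet("*0101b"))  # [2, 4] -> the 2nd and 4th diffusers activate
```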

First Demo

We demo’d our project in class with just two scents (we could only find “bed linen” and “apple & cinnamon” scents in stores).  We made Twitter accounts for the user (“Dano”, i.e. Dan O’Sullivan, an ITP badass who was rumored to have patched into local Manhattan cable access TV) and for his mother and “Woz”, as in Steve Wozniak.  His mother would smell like linens while Woz would of course smell like Apple (he looks like he smells like an apple, doesn’t he?  so furry!).

@ Hey can you fix my QuickTime VR on my Mac? I broke it :-(

Final Demo

Adding two more diffusers was problematic: we hadn’t increased the numSensors variable in our Arduino code, which broke things until we found it, plus we had issues with power.  We tried 2 AAs and 4 AAs and found that 2 didn’t supply enough current, while 4 probably pushed too much, but we ended up using 4 AAs.  We experimented with multiple resistors till we found that a 10-ohm resistor was the highest resistance we could add and still get the diffusers powered correctly.

Our circuits were basically thus: a diffuser would be linked to a resistor and battery pack on one side, while an output pin controlled it, with a diode to keep the toy motor’s voltage spike from flowing back into the circuit and burning anything out.

We also found that the diffusers would not work well once we sawed off the fronts, because the lower part of the diffuser no longer had any screw threading.  So Gavin and Michael had to wrap the bottom of the two sides of the casing tightly so that the plastic geartrain would not come loose when activated.


We were thinking of using a big blue plushy Angry Bird, since the official Twitter plushies were too small to house our diffusers and the Arduino + breadboard.  Then we would cut off the tail of a plushy skunk and stick the diffusers in the bird’s butt, so that it would tweet-spray out of its butt.  Here’s a quick Photoshop of what it might have looked like.

But we ended up using a bird/owl lunchpack which was actually a tighter, more snug fit for our device.  We took a thin piece of wood and nailed some foam board onto one side of it, to provide a base for the diffusers above and a compartment below to hold the Arduino + ethernet shield and breadboard.  We cut away the top of the lunchpack to make an opening for the diffusers to spray.  We wanted it to be a neater job up top but there was barely enough room to fit all 4 diffusers so we ended up cutting the whole thing away.

We added two more accounts, GF (girlfriend) and Gram (grandma), which were the scents of Hawai’ian Breeze and Vanilla & Lavender, which I found at K-Mart.  So we had Dano being tweeted by Woz, his Mom, his GF, and his Gram.

You’ll notice that this is probably not something we’ll want to leave on the subway.

Finally, here’s the video of the device/Odoruino, taken in the ITP workshop:


Code is below:

Read More »

Curation of Some of My Favorite ITP Stuff So Far – Fall ’11

My school program, NYU’s Interactive Telecommunications Program, generates so many creative and innovative projects that we have a severe internal problem keeping track of them all and transferring knowledge.  Either people don’t share via blogs and videos and photos what they’re doing (though we’re encouraged to document our work thoroughly), or people don’t have time to go look at others’ stuff.  The best way is an oldie — just being around each other physically in a workshop-like space.  Density and physicality are still the most powerful enablers.  But curators can fill in some of the gap.

I wanted to look through my classmates’ blogs and note the stuff I liked the most.  It’s not intended to be exclusive, but to highlight things that struck me about people’s work.  If you saw something really awesome, let me know about it.  Mostly I tried to look through everyone’s blogs, but if it wasn’t posted, I couldn’t find it!

My Intro to Computational Media classmate, Engin Ayez, who went to Stanford for civil engineering and architectural design, upped the ante with his Processing sketch “Disappearing Ideas”:

“Disappearing Ideas is a layered interactive sketch that the user can create, one that erases itself subtly over time if the user stops interacting with it. It is intended to have a calm look that contrasts with its fast pace of random shape generation. I see the colored rectangles as ideas that slowly vanish in our minds, almost like moments of brilliance lost in misplaced post-its.”

What I liked about this was that it was beautifully geometric, but also represented the ephemeral quality of memory.  As F. Scott Fitzgerald (for whom I assume my physical computing professor Scott Fitzgerald is named, and who is also my favorite author) wrote in “This Side of Paradise”, “I’m not sentimental–I’m as romantic as you are. The idea, you know, is that the sentimental person thinks things will last–the romantic person has a desperate confidence that they won’t.”

Engin also worked with Kaitlin Till-Landry on PetMusic, which uses Arduino and conductive fabric to play music when you pet fur:

“PetMusic consists of a touch-sensitive white imitation-fur, which, when caressed by an actor, plays  Moonlight Sonata by Beethoven, until the caressing is over. We see PetMusic as a portable, therapeutic artifact that can be used for meditation and relaxation. Rather than just pressing the “play” button on an mp3 player, PetMusic demands minimal, but repetitive maintenance activity from the user, creating a positive feedback loop and leading to an experience of escape.”

Kojo Opuni, also in my ICM class, made this gem, “Night Fever”.  It’s missing the Bee Gees music though. :(

Yucef Merhi, who seems to me to have a real artist’s personality, put together his “Archetypes”, and made this Processing sketch of one of them.  Beautiful.

Hyeyoung Yoon doesn’t say much in class, but she volunteered to show her work one day and it represented to her the loss of those close to her in her family.  A lot of her Processing work so far has focused on a ghostly theme of absence or voids or disappearance.

Christie Leece made a “Hybrid Animals” sketch in Processing to explore animal husbandry for hybrids like zorses.

The best part about the video and sound communications lab was that the final project is a video.  Here’s a video by Robbie Tilton, Yoonjo Choi, Mick Hondlik, and Johann Diedrick, featuring Tony, possibly the funniest guy in our class year.

From my class, Sara Al-Bassam, Angela Bond, Tiffany (Hsiao-Wen) Choo, and Zena Koo made a video about romance in Washington Square Park.  Great use of color, made by the most ethnically diverse team ever?

Gavin Hackeling’s New Yorker imitation is priceless in this short film about I <3 New York (w/ Chris Egervary, Bona Kim, and Federico Zannier):

Phil Groman, Jee Won Kim, Annelie Berner, and Michelle Boisson made this video about being stood up.  PHILLLLL!  NOOOOOOO!  DAMN YOU DANNE!!!

One of our physical computing main assignments was to create a stupid pet trick.  There’s a tumblr with a lot of them, along with vids and documentation.

Matt Richardson, who works with make:, …

… made this cat-dog topsy-turvy pet picture.

Christie Leece’s bullseye-bow’n’arrow Stupid Pet Trick was pretty cool!

Ben Light’s massive LED glasses that blink. This challenged me to think about doing something more ambitious for future projects. Thanks for the kick in the butt, Ben. Bens unite.

Atif Ahmad, total New York baller and clothier, made a Make It Rain machine for his Stupid Pet Trick.

Comm Lab: Video & Sound, Final Video

First, here is the final cut of our group video:

My class spent an entire class period looking at each group’s video and then critiquing the hell out of it.  Our professor, as I mentioned in the last comm lab post, taught us what to look for when revising our videos.

My group decided to do some more shooting, since our video jumped from one vignette to the next without establishing much context.  I was late to the group meeting because I remembered the wrong time, so I apologized for that.  But then we went out to the NYU-Washington Square Park area again to film.  This time we used a tripod, which made our new footage much smoother.  We actually brought the tripod along with us the first time, but I think we were still so unused to the equipment and to filming in general that we didn’t think it through, even though our professor told us to always use a tripod!  It’s funny how someone can tell you things from his own experience, but you won’t listen until you learn the hard way yourself.

We also paid more attention to lighting this time, avoiding direct sunlight and areas with varying sunlight, so we wouldn’t have blown-out footage to deal with later when editing the video.

We took long shots of our characters walking or skateboarding or eating, with The Asshole walking as well, to set up the convergence of his Assholish event.  The footage came out a lot better, though I think some of my zooming was still kind of rough.

When we were done, we went back to school to edit.  Final Cut Pro X went a LOT smoother this time around.  We took turns editing, so we’d all be able to watch each other and learn how to do certain operations.  While this can be more laborious, it 1) lets everyone learn what everyone else knows and 2) retains continuity since everyone is involved in the same editing process.

We were not planning to use any audio from the footage but when we ran the first cut with music, we found that the sound effects could add context to the video without taking away from the effect of the music, in carefully-placed spots.  I think I had wanted originally to use Denis Leary’s “Asshole Song” for our soundtrack, …

… thinking it was perfectly suited, but my team rightly kept seeking ideas and we found Crystal Castles.  The song worked great, and I think the rest of the group was right that Denis Leary’s lyrics would have distracted too much from the video.  The audio editing was fun, raising the sound effect levels in some areas and dampening the street sounds in others.  We lined up some scenes with changes in the song.

Color balancing helped a lot.  Final Cut Pro X has a color matching function which lets you choose the colors from one frame and map them onto another, so you get more continuity.  We had footage from one day of shooting where it was cloudy, and from another where it was sunny, so we minimized the disparity using this tool, plus some of Danne’s color editing skills to clean it up.

Thanks to Stefanie, Michael, and Danne.  And to our professor, Marianne Petit, for the wonderful instruction!

PComp: Serial Lab

[Update to the Serial Lab Part 2 exercise at the bottom of this post.]

Observation Exercise

Our observation exercise blog assignment involved picking a piece of interactive technology in public, used by multiple people, and then describing the context in which it’s being used.  We would then watch people use it, preferably without them knowing they’re being observed, taking notes on how they use it, what they do differently, what appear to be the difficulties, what appear to be the easiest parts.  Then we would record what takes the longest, what takes the least amount of time, and how long the whole transaction takes.

Okay cool, so I could have done an ATM machine or something.  And I will, later.  I promise.  But I just HAD to write something about that god damn elevator in my building.  See, it’s not that the elevator is broken, or that it is so barely functional that you’d be better walking up the 7 flights of stairs of the small building.  The problem is that it’s just annoying ENOUGH to make you think about it incessantly.  I’m not even OCD and I swear, using this thing is like dealing with some ornery cat or with a freaking mule at a water crossing.  I mean, don’t you hate it when you’re trying to drag your fucking mule across the river?

I’ll get in the elevator during non-peak times and it’ll work fine.  I was warned by the woman I rent from NOT to touch the gate.  Here I am thinking that it’s going to trap me like in Alien or like that Tom Selleck movie where the robot spiders are after him.  DON’T TOUCH THE GATE.  The elevator has a gate!  That’s enough to make you stop and pause because it was probably made back before they had building codes against crushing tenants in faulty coal mine-era gate technology.

So you press the button to pick your floor and the gate closes on its own.  You’re in a small box.  More than 3 people fitting inside?  Nope.  2 people with bags?  Forget it.  I’m not really sure there’s a ceiling panel to get out of the damn thing if it stops between floors.  No John McClane exits.

Fair enough.  Things start to go wrong when multiple people are involved.  I’ve been puzzling over the internal software logic in the thing.  Did someone actually code this?  I’ve gone from the 1st floor, pressing 7, and then the elevator stops at 5 and someone is there waiting to go down.  I’m not sure that it’s going up to 7 after because I had to hammer the button while on 1, and it didn’t light up to give me feedback about whether it was pressed or not.  So I’ll get out at 5 and walk up the 2 flights of stairs, only to see the person who got in at 5 up there because the elevator kept going up.  Sometimes the elevator will stop at a random floor, like 4, and no one will be there.  Was anyone there to begin with?  Did they just figure they’d walk down instead?  Or was it a poltergeist?

See?  I accept that it’s an old elevator.  But I’m not sure I can comprehend some engineer working out the logic and saying, “You know what’d be good?  If anyone on any floor could interrupt the elevator’s direction at ANY time.”  The other part is that the floors are not far enough apart that walking up and down is completely out of the realm of possibility.  So when you and someone else are trying to figure out which floor the elevator will go to next, you’re feeling guilty about being a lazy son of a bitch who didn’t want to walk two flights of stairs extra.

I heard a tale about the elevator (it even has tales!) that it broke one time and they closed it for a week because it wasn’t up to code.  Apparently when it re-opened, the tenants could only see that they added wooden paneling inside, and no larger changes for the whole shaft.  I should have asked what’s behind the panels — metal spikes?

Here’s the video.  The elevator knew it was on camera unfortunately and decided to work just fine.  Bastard.

Look!  The gate is held together at one place with a couple plastic zip ties!  Spray-painted black!

Alright, so I also remember reading about a Wells Fargo ATM design a while back, and the write-up (now gone, but saved on the Wayback Machine) was brilliant.

What I love about it is that it addressed all those dumb concerns we have at the ATM.  Different heights see the arrows on the sides in different locations, so a taller person has to stoop over to make sure he’s pressing the right side button.  Then the button design is often too colorful and distracting, and you feel like you’re Indy Jones in The Last Crusade stepping on the trapped letter stones to spell Jehovah in Latin.

Touchscreens and hardware durability have improved the process considerably.  While some people are just not inclined to use ATM machines, tech-savvy folks like me should not have difficulty using them.  Ideally, no one will have difficulty using them, and I think the design team did a good job by making the buttons larger and simpler, and using the whole screen for possible inputs while keeping the interface clean.

The thing about elevators and ATM machines is that they exist in countless hardware configurations and can’t all be changed uniformly at once.  Technology is a gradual process in the real world, while online it’s a luxury taken for granted that web apps can be deployed immediately and (with improving standardization of protocols and webdev kits) without large variation across browsers and operating systems in many cases.  But tech and hardware, especially for such oft-used things as elevators and ATM machines, must be improved gradually.

Serial Lab

For my serial lab I wanted to turn the serial reader graph into a beach wave, with Mr. Wilson the volleyball from Cast Away.  It would have taken some wrangling, though, to keep Mr. Wilson riding a wave, being drawn across the screen, without leaving a trail of Mr. Wilsons, so I didn’t follow through on it.  The fact that anything drawn via serialEvent() gets wiped out by draw() (when it redraws the background) complicates matters.

Below is a video I made of a crude mouse-like device from the example, employing a potentiometer, a switch, and an FSR.  If you notice the readings, the switch’s reading would not change from 0, so I didn’t bother pressing the switch during the video.  The code was supposed to fill the moving circle on screen when the switch was pressed.  The readings came through just fine in an Arduino serial test, but when I moved over to Processing, it was like the value was getting truncated off the end of the line.

Update: I sort of solved my problem with why Processing was not picking up the value of the 3rd sensor, the switch.  In my Arduino sketch, I divided the other two sensors’ analog values by 4, so they’d fit within 1 byte (10 bits cover 0 to 1023, while 8 bits cover 0 to 255).  I suspected that the last value (the switch’s digital value) was being truncated and thus wasn’t being picked up by Processing.  The math was a bit of a mystery to me at first, but I think I figured it out.  Serial.println(“1024,1024,1”) sends 13 bytes per call (11 characters plus the carriage return and newline), and at 9600 baud the line only moves roughly 960 bytes per second, since each byte costs 10 bits once you count the start and stop bits.  So if the loop sends faster than that, the output buffer backs up and lines get cut off.  Still a mystery: if I send “1024,1024,1”, why would Arduino show it fine in the serial monitor while Processing wouldn’t receive the same result?

I just know that when I divided the values by 4 in Arduino, then the switch’s values of 1 or 0 showed up in my System.out in Processing.
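The divide-by-4 mapping itself is simple; a quick Python sketch of what the Arduino side is doing:

```python
def to_byte(analog_value: int) -> int:
    """Scale a 10-bit ADC reading (0-1023) down to a single byte (0-255)."""
    return analog_value // 4

# A 10-bit reading divided by 4 always fits in one byte...
print(to_byte(1023))  # 255
# ...whereas the raw value would need two bytes, and gets mangled
# when the receiving side assumes one byte per value.
assert all(0 <= to_byte(v) <= 255 for v in range(1024))
```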

On #OccupyWallStreet

For last night’s Red Burns – Applications class, one of the group presentations addressed #OccupyWallStreet.  I don’t usually speak in large groups anyway, mainly because I really, really hate fighting for my turn to speak, but #OWS is tough because it really makes me seethe to listen to people talk about it.  It’s the same thing on Twitter.  Even the group itself, which often went to Zuccotti Park (I don’t believe in calling it Liberty Park till it has a breakthrough, and I think it fitting that they are occupying a private park named after a real estate mogul) in order to document the experience and take part in it, was split on how it felt about the protest.  A few classmates stood up to say that they are tired of protests and don’t believe anything will ever change as a result of them, having seen it before where they used to live (San Francisco, the Philippines, youth activist efforts).  One said that they didn’t go because they didn’t understand what #OWS wants.

Maddening.  I kept myself distracted by wrestling with a re-install of MySQL on OS X Lion, which I’d fucked up earlier in the day.

Here’s the problem I have with my fellow Americans with regards to #OWS.  These same people, who spoke glowingly of the noble, courageous efforts being undertaken by the downtrodden working classes in the Arab Spring become strangely silent when it comes to American-born protests.  On Twitter, I follow a lot of security people and people you would consider to be in the consulting, intellectual, and management classes.  While they tweeted up a storm on the Arab Spring, presumably because of its implications for regional security, when fellow Americans voiced their disapproval with current conditions (which are, I think it’s still under-appreciated, historically bad in terms of income inequality and prejudiced against the public good, reaching levels only seen before the Great Depression), these people mocked them or ignored them.

It was the same during the summer and fall of Tea Party movements in 2009.  And during the anti-war protests.  Mocking or ignoring.

I went to most of the Tea Party rallies in DC.  I was in DC during the inauguration, inauguration concert, OBL killing, government shutdown crisis, and other massive rallies (immigration, gay rights).  The tenor of the city has definitely changed since Obama came in.

While I disagreed with the Tea Party (mostly I think they do not understand the role of public policy at all) and thought its invocations of history were bankrupt (read my blog posts here and here), I do think they were symbols that the Jacksonian school of thought is alive and well in America, and I felt sad that fellow Americans saw conditions as being so bad.  The stupid two-party system, which has existed for, what, two centuries or something?, is now infused with corporate money and shadow organizations, introducing into our political DNA a pernicious political rift that only pits Americans against each other.

I do think that #OWS directly addresses the chief problems within the American system today.  We are fortunate enough in this country that we do not have one simple demand, which is what people seem to be looking for.  In other countries, this “simple demand” might be the removal of a corrupt dictator.  That is the danger of singular cause movements.  It focuses, essentially, on one person, or one group/class of people.

Saying you don’t understand what #OWS wants sounds to me exactly like when people say, “I don’t understand computers/science/math, it’s too hard.”  Were you just supposed to know it intuitively?  No, you have to go read about it and study it and research it.  There are two billion articles about why #OWS doesn’t have or want a simple list of demands.  There are plenty of theories about whether they should pursue specific issues later or just try to organize at this point.  It should be noted that the Tea Party kind of fell off the rails once it was co-opted by politicians, was confronted over its fringe elements, and came up with its specific list of demands.  Specific demands alienate people who were on board with some ideals but not others.

So saying you don’t understand #OWS is an intellectual cop-out.  I am disappointed that master’s-level students would use this argument.

Issues of The Occupied Wall Street Journal
Another point which some of the group members who presented brought up was that while #OWS may not amount to much, it is still important in itself.  The beauty of seeing the General Assembly, of seeing humans together, figuring out systems of hand gestures for communication or innovating low-tech solutions, forming working groups, blending internet viral activism with Occupied Wall Street Journal newspaper tactics; this is really important stuff.  If I had kids, I would want them to see that shit.  I would want them to see humans self-organizing, sharing, communicating, seeking a shared future.  When I went to Zuccotti Park and to the Washington Square Park protests, that was the real deal fucking Holyfield, seeing humans do what they do best: transfer information.

Some people are waiting for heroes to emerge to lead the movement.  They inevitably bring up MLK Jr. The problem is, if you’ve ever read about MLK Jr., most of his life was spent extremely frustrated with the impact of his work.  He was often thrown into fits of despair when he would attempt to organize and galvanize people and it wouldn’t work.  It wouldn’t produce the results he wanted, either in getting the majority out into the streets or in achieving political results.  It wasn’t until things magically came together at some of the larger national protests that his voice took root, and now the legend has taken over.  But MLK Jr. was depressed for much of the Civil Rights Movement.  Being a “hero” is often a lonely experience.  It ended in his assassination, and in JFK’s, and in RFK’s.  If that is not a somber message about the role of heroes, I don’t know what is.

Counter of People Worldwide Offering Support
Anonymous has been interesting.  It plays a fringe role in the Occupy movement, while it was pretty much center-stage on its own a year ago because of Wikileaks.  I doubt many people actually saw V for Vendetta (like they didn’t see the Tea Party rallies or #OWS events either) which is the basis for the Guy Fawkes Anonymous masks that you see more and more these days.  I think one of the most poignant scenes in that film is when the girl in the Anonymous mask is killed after she is caught putting up graffiti. After that, the social contract between the public and government breaks down, and the movie concludes with a sea of Anonymous masks converging in London, eventually overrunning the police and unmasking themselves to return to their real identities.  Anonymous is some kind of Hobbesian manifestation that bothers people in that it reminds them that the strongest, most powerful man is still just a man, able to be brought down by the weakest, least important of men.

Anonymous/Guy Fawkes
During a decade of post-9/11 hysteria, with all the stupid regulations from TSA, the newly-authorized secret spying on Americans based on mere suspicions, the corporate-endorsed wiretapping of internet service providers, anti-Muslim sentiment, and overseas military/intelligence/covert adventures, the anti-war movement barely registered.  It was kept at bay by a respect for the warfighters and their tasks.  It was kept at bay by apathy.

But most of all, it was kept at bay because very few Americans have ever actually participated in the armed services or known someone very close to them who has.  Military bases are positioned well outside the normal paths and travels of most people, so unless military blood is in your family, you’ve probably never seen the sprawl outside Fort Benning or seen the old World War 2-era barracks or even seen many people in uniform except at sporting events and in Grand Central Station.

So these servicemembers have lived a silent decade, where friends have died, some have lost limbs, others have lost their minds. They can’t talk about it in public, because 1) no one will understand or 2) they will be put on display like they’re in a zoo.  There is no shared sacrifice among the American public for military service.  Just imagine how much more insular it is within AmeriCorps, Peace Corps, and other large civil service programs!

The people who didn’t protest out of respect for the troops, I question their logic.  Had they shared the sacrifice, it would have been their necks on the line.  Servicemembers aren’t really going to protest war — they volunteered to do service for the government.  It requires attentive citizenries to defend the usage of servicemembers in warfighting for appropriate contexts.  That civilians have abdicated their responsibilities towards the servicemembers of the U.S. is the ultimate slap in the face.  But perhaps it also says a lot about the military, that it will continue to do its job professionally.  The Army keeps rolling along.

So pardon me if I question peoples’ stated intentions for participating or not participating.  Or maybe it’s not so much that, but the dismissiveness those people show toward OTHER people who give a damn about something.

I think that might be the only time I really get pissed.  When people denigrate the efforts of others.  When they put down or make fun of people who are trying to do something, anything, to make things better.  That is the worst kind of cowardice, whether it’s born out of elitism or out of past failures or out of being afraid of future failures.  Negative people tend to be reacting out of their own insecurities.

Maybe what topped the class off was sensing the disparity between the divided class and Red Burns, the mater familias of our program and a woman who’s changed the way thousands of people (at least) see the world.  She is old and feeble now, and hard to hear, and definitely stubborn, but she obviously thinks #OWS is something important for us to pay attention to.  But students a third or a quarter her age ignore her hints that #OWS is a gamechanger.  She has a keen eye and, most importantly, experience that is directly relevant to our futures, and yet I feel like she was being dismissed.

Back when I was at Georgetown, I was picked by my program’s staff to represent the program at the Achievement Summit, which brought together people like George Lucas, Bill Russell, Desmond Tutu, Sylvia Earle, Michael Dell, and many others, to talk to us grad students about what to do with the rest of our lives.

The overwhelming message was not that we should pat ourselves on our backs, but that we had a deep responsibility towards society.  Given all these opportunities, privileges, and advantages, our role in life was to be leaders and to always look to better the lives of those less fortunate.  We were told that our job was to take the hands of others and help them succeed as well.  Our job was to use our talents and creativity and personal security to try things that were extremely risky, knowing they might fail, hoping to build something wonderful for the world.  If we, the privileged few, were not going to take risks or to look out for others, then who would?

That resonated with me but I’m not sure how much it gets to others.

When I was in the Army, I had my clearance temporarily suspended (but got it back later, since nothing was officially wrong with it) because I was blogging and taking photos of my experience in Iraq.  There weren’t the chilling effect regs there are in place now, which have stifled almost any word coming out of our overseas theaters. When I studied and worked in DC, my classmates and friends would refuse to use Facebook or other social networking because, mostly, they were afraid that employers would find out!

Now I would have trouble hiring someone who DIDN’T appear on social media, but I get that privacy is a big deal.  You should still show up SOMEWHERE though.

Anyway, here’s the kicker. America is indeed the land of opportunity.  You can come here as a poor immigrant and build a pretty good life for yourself and your family and your kids.  But you will probably have to keep your head down and stay out of trouble.  You will most likely not be able to have an opinion, or to campaign for the ridiculous idea of equal rights for all people.  You will have to act in your professional life like you never drink or party, that you never have a controversial opinion.  You will have to get slapped in the face and take it, because you understand that that’s what it takes to just get by and raise your family.

And my generation feels that tension at the higher levels.  You better have a squeaky clean resume if you want to go into finance.  All your outward correspondence had better relate to your work (how many Twitter folks do you know who ONLY tweet about their work stuff? it’s kind of sad sometimes).  You better fit in or else you’re not going to get paid.  You won’t “succeed” in life.

I worry that kids coming out of higher education are ready to subvert their entire personalities just to get a job.  One problem with income inequality is that it narrows your choices. Instead of being able to find employment in a variety of services or goods production or data analysis or entrepreneurial endeavors, you have to pick health care or finance or business or law.  Or you work as a barista.  There’s not as much in between, particularly outside of the large cities.  It hollows out society.  And thus you have to jump through more hoops to reach the higher echelons.  You have to keep your head down, calm down and carry on.

I don’t think any of us want to see the U.S. become a place where the calculus changes such that people would much rather set themselves on fire or stand off against the military because they have no hope of jobs, families, or future.  Right now most people still have options (though with vastly increasing structural unemployment, I worry this will change).

I don’t fault the companies so much.  They are doing what they should be doing: making money wherever they can, sending lobbyists to live lavishly in DC to represent their core interests, avoiding taxes as best they legally can.  They are winning the policy war in DC.

Mostly it is government failing to assert itself as a balancer of public, private, security, and innovation interests.  We have a complete failure of political leadership.  And while government protectionism of business stymies much of the public’s ability to organize and voice its own opinion (since business employs much of the public), I still do blame the citizenry at least a little for not drawing the line somewhere.

We are just all too busy fighting for our own little Americas, instead of building a new inclusive American Dream.  I don’t think any of us were raised to be overtly tribal, but the system rewards those who are.

This is the prism through which the nation sees the #OWS movement.  It’s a depressing state of affairs.  People are arguing for separation of corporation and state, for denying corporate personhood, for removing private shadow financing of political campaigns, for increased enforcement by government agencies tasked to do what they currently are not doing, for balancing out the business-government-public-media equation so that they are all properly warring against each other.  These are not crazy concepts.  These systemic problems have been identified and much has been written on the subjects.

I don’t think my point is that everyone must participate in #OWS, but that those who don’t should not condemn it or dismiss it, for whatever reason.  It has fringe.  Yes, of course.  Everything inclusive has fringe.  Fellow Americans are taking part and we owe it to ourselves to understand it.  We owe it to ourselves to care about SOMETHING in this life outside of ourselves and the kids, products, whatever we leave behind.  How sad is it to see our best and brightest, our graduate students and creatives and intellectuals, saying they are “tired of protest” or “don’t think anything will change”?  Have they given up on life already?  Don’t we want to see our children be proud of what we have accomplished as human beings?  I want my kids to look at me as someone who gave a shit about something and stuck to it.  Or, to be, as Bill the Butcher says, “the only man I ever killed worth remembering.”

So here’s a challenge I guess.  Who do you want to end up being?  Will you end up defending your own interests at the expense of a majority’s?  What is going to end up being so important to you that you will do dumb things and sacrifice your time and humiliate yourself for, just because you believe in it?  And do you want to be the kind of person who puts down the beliefs of others?  Will you try to work with them to build something better?  Or will you keep your head down?  Are you your fucking khakis?

A final note.  Our guest speaker, Frank Migliorelli, an ITP alumnus from way back, was a great speaker, the kind of person you want to work with.  All the junky debate and politicking that people get engaged in melts away when around someone like this.  He’s passionate about education, about the opportunities and cool new niches that one could do the next project in.  You forget about all the other stuff and you just want to make cool things.  I’m happy that he ended the class on a good note, and I hope that he and others like him win the good fight.


ICM, Genetic Crossing Part 2

[UPDATED for 11-10-27 homework: I intend to expand upon this sketch for my final project.  I’ve added functionality for offspring taking an average of the parents’ traits, but of course this is only one small aspect of where we believe we get our genes/traits/habits from.  While we may be born to smart parents, we may end up dumb.  So maybe weighing genetics vs. luck vs. environment vs. whatever will need to be the preferred future model.  Which is great, because it will dovetail nicely into my plans to create “evolutions”, or formulae hypothesizing which characteristics people are made up of.  So someone’s numbers may be range-bound via genetics but may leap way out of bounds with other factors playing a part.  What is the role of the country one’s parents are identified with in how one turns out?  I also need to add aging, and tracking of offspring from parents, perhaps seeing how the code determines procreation likelihood.

I would also like to patch the Processing sketch into my existing MySQL user database.  I suspect Processing might have some nice data visualization libraries I can use to feltronize the data I already have.]

For our homework this week, we were supposed to write some Processing code that creates multiple instances of an object. I took my genetic crossing homework from last week and expanded it.

Most notably, all the “persons” were turned into an array of Person objects.  I hashed out the sex() function more, but didn’t do a blend of all the characteristics, leaving it a randomized process for the time being.  But offspring of offspring could now reproduce via the flirt(), chemistry(), and sex() functions that calculate whether two random male-female pairs (sorry, I didn’t make adopt() or insemination() functions yet) find an attraction and mate.
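Since the code itself is below the jump, here’s a minimal sketch of the mating pipeline described above, written in Python rather than Processing purely for illustration.  The names flirt(), chemistry(), and sex() come from the post; everything inside them is my own guess at the described behavior (randomized attraction, trait-averaging offspring), not the actual sketch code.

```python
import random

class Person:
    def __init__(self, is_male, intelligence):
        self.is_male = is_male
        self.intelligence = intelligence  # example trait, 1-10 scale

def flirt(a, b):
    # Only male-female pairs are considered, per the post.
    return a.is_male != b.is_male

def chemistry(a, b):
    # Attraction is left as a randomized process for now, as in the sketch.
    return random.random() < 0.5

def sex(a, b):
    # The homework update mentions offspring taking an average
    # of the parents' traits.
    child_trait = (a.intelligence + b.intelligence) / 2.0
    return Person(random.choice([True, False]), child_trait)
```

In the real sketch these would run over random pairs drawn from the Person array each frame; here they are plain functions so the logic is easy to see.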

I also added a new class for nations, creating the US, China, and EU.  I thought about checking each person object to see if it would be more attracted to a certain nation and thus gravitate towards it, but I was having problems both with how to move the persons accurately (weighting between preferences for the three nations) and with how to keep them from bunching up together so much that they become unreadable.  Everything is there, though, except for the requisite movement functions.  The nations draw squares based on the rating (1-10 scale) of their security, innovation, jobOpportunity, and immigrationPolicy.  Obviously this doesn’t include all the variables, just enough to test whether it works.  I wanted the person-nation attraction to be based on intelligence vs. innovation (which I realize is not always a very strong relationship).
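As a thought experiment on the movement problem above, one way to handle the weighting is to have each person drift toward a weighted average of the nations’ positions, with weights based on how closely the person’s intelligence matches each nation’s innovation rating.  A sketch in Python (every name here is hypothetical, not from the original code):

```python
def attraction(person_intelligence, nation_innovation):
    # Toy affinity: the closer the person's intelligence is to the
    # nation's innovation rating, the stronger the pull (max 1.0).
    return 1.0 / (1.0 + abs(person_intelligence - nation_innovation))

def target_position(person_intelligence, nations):
    # nations: list of (x, y, innovation) tuples.
    # Returns the weighted-average (x, y) the person should move toward.
    weights = [attraction(person_intelligence, inn) for (_, _, inn) in nations]
    total = sum(weights)
    x = sum(w * nx for w, (nx, _, _) in zip(weights, nations)) / total
    y = sum(w * ny for w, (_, ny, _) in zip(weights, nations)) / total
    return x, y
```

Because the target is an average rather than a single nation’s position, persons with mixed preferences settle between nations instead of piling onto one, which also helps a little with the bunching-up problem.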

Pressing the space bar stops the looping so you can get a look at the non-moving sketch.  Mousing over the different person nodes pops up a box with that node’s precise data.
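In Processing, pausing the draw loop is typically done by toggling noLoop()/loop() from keyPressed(), and the mouseover popup is a distance-based hit test against each node.  A Python stand-in for that hit test (names are illustrative, not from the sketch):

```python
import math

def node_under_mouse(nodes, mx, my, radius=8.0):
    # nodes: list of (x, y, data) tuples; (mx, my) is the mouse position.
    # Returns the data of the first node within `radius` pixels, else None.
    for x, y, data in nodes:
        if math.hypot(mx - x, my - y) <= radius:
            return data
    return None
```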

Demo and code below the jump:

Read More »