Physical Computing: Wheredipuddit? RFID Inventory Boxes

For my final project in physical computing, I wanted to follow through on one of my pre-ITP goals for the program, which I outlined in an older blog post.

Proposal

I wanted to build an inventory system which used RFID/wifi/whatever to check stuff into boxes, so that when I needed to find something, I could pull it up on my phone or in a browser and ask it where it was, and the box it was in would glow with LED light.  At the same time, these boxes and things would become individuals and traits, respectively, that I could add to or subtract from to create objects with personalities.

Because I hate making people wait for the payoff, here are some videos showing it in action:

Pinging an object from the Wheredipuddit interface:

Pinging an object via QR code (I would show this via a cellphone but I didn’t have another camera):


Checking emotions into a box:


Some Inspirations for This Project

This was an idea somewhat fleshed out in Cory Doctorow’s book “Makers”.  The book involves two hackers who work together in a junkyard to produce lots of low-tech but highly ingenious inventions and gadgets that end up making modestly large amounts of money.  Serial tinkerer-capitalists or something.  Some of the relevant text, from Cory Doctorow’s free (!) online version of “Makers”:

Tjan opened the door with a flourish and she stepped in and stopped short. When she’d left, the place had been a reflection of their jumbled lives: gizmos, dishes, parts, tools and clothes strewn everywhere in a kind of joyful, eye-watering hyper-mess, like an enormous kitchen junk-drawer.

Now the place was *spotless* — and what’s more, it was *minimalist*. The floor was not only clean, it was visible. Lining the walls were translucent white plastic tubs stacked to the ceiling.

“You like it?”

“It’s amazing,” she said. “Like Ikea meets *Barbarella*. What happened here?”

Tjan did a little two-step. “It was Lester’s idea. Have a look in the boxes.”

She pulled a couple of the tubs out. They were jam-packed with books, tools, cruft and crud — all the crap that had previously cluttered the shelves and the floor and the sofa and the coffee table.

“Watch this,” he said. He unvelcroed a wireless keyboard from the side of the TV and began to type: T-H-E C-O. . . The field autocompleted itself: THE COUNT OF MONTE CRISTO, and brought up a picture of a beaten-up paperback along with links to web-stores, reviews, and the full text. Tjan gestured with his chin and she saw that the front of one of the tubs was pulsing with a soft blue glow. Tjan went and pulled open the tub and fished for a second before producing the book.

“Try it,” he said, handing her the keyboard. She began to type experimentally: U-N and up came UNDERWEAR (14). “No way,” she said.

“Way,” Tjan said, and hit return, bringing up a thumbnail gallery of fourteen pairs of underwear. He tabbed over each, picked out a pair of Simpsons boxers, and hit return. A different tub started glowing.

“Lester finally found a socially beneficial use for RFIDs. We’re going to get rich!”

“I don’t think I understand,” she said.

“Come on,” he said. “Let’s get to the junkyard. Lester explains this really well.”

He did, too, losing all of the shyness she remembered, his eyes glowing, his sausage-thick fingers dancing.

“Have you ever alphabetized your hard drive? I mean, have you ever spent any time concerning yourself with where on your hard drive your files are stored, which sectors contain which files? Computers abstract away the tedious, physical properties of files and leave us with handles that we use to persistently refer to them, regardless of which part of the hard drive currently holds those particular bits. So I thought, with RFIDs, you could do this with the real world, just tag everything and have your furniture keep track of where it is.

“One of the big barriers to roommate harmony is the correct disposition of stuff. When you leave your book on the sofa, I have to move it before I can sit down and watch TV. Then you come after me and ask me where I put your book. Then we have a fight. There’s stuff that you don’t know where it goes, and stuff that you don’t know where it’s been put, and stuff that has nowhere to put it. But with tags and a smart chest of drawers, you can just put your stuff wherever there’s room and ask the physical space to keep track of what’s where from moment to moment.

“There’s still the problem of getting everything tagged and described, but that’s a service business opportunity, and where you’ve got other shared identifiers like ISBNs you could use a cameraphone to snap the bar-codes and look them up against public databases. The whole thing could be coordinated around ‘spring cleaning’ events where you go through your stuff and photograph it, tag it, describe it — good for your insurance and for forensics if you get robbed, too.”

He stopped and beamed, folding his fingers over his belly. “So, that’s it, basically.”

Perry slapped him on the shoulder and Tjan drummed his forefingers like a heavy-metal drummer on the side of the workbench they were gathered around.

They were all waiting for her. “Well, it’s very cool,” she said, at last. “But, the whole white-plastic-tub thing. It makes your apartment look like an Ikea showroom. Kind of inhumanly minimalist. We’re Americans, we like celebrating our stuff.”

“Well, OK, fair enough,” Lester said, nodding. “You don’t have to put everything away, of course. And you can still have all the decor you want. This is about clutter control.”

“Exactly,” Perry said. “Come check out Lester’s lab.”

“OK, this is pretty perfect,” Suzanne said. The clutter was gone, disappeared into the white tubs that were stacked high on every shelf, leaving the work-surfaces clear. But Lester’s works-in-progress, his keepsakes, his sculptures and triptychs were still out, looking like venerated museum pieces in the stark tidiness that prevailed otherwise.

Tjan took her through the spreadsheets. “There are ten teams that do closet-organizing in the network, and a bunch of shippers, packers, movers, and storage experts. A few furniture companies. We adopted the interface from some free software inventory-management apps that were built for illiterate service employees. Lots of big pictures and autocompletion. And we’ve bought a hundred RFID printers from a company that was so grateful for a new customer that they’re shipping us 150 of them, so we can print these things at about a million per hour. The plan is to start our sales through the consultants at the same time as we start showing at trade-shows for furniture companies. We’ve already got a huge order from a couple of local old-folks’ homes.”

I kind of read into the book a post-Apple world, where the production process has become so frenetic and quick, in order to keep up with gadgetphiles' fickle tastes, that smaller ideas get put into mass production and grand visions are no longer marketable on a short enough timeframe.  What we're seeing today is the democratization of hardware, following in the shadow of software's reign, which has dominated the last 30 years or so.  With lots of small shops now selling microcontrollers, Radio Shack retooling its stores to sell circuitry components once again, and the advent of the internet of things and objects that are learning to sense the world around them, our world is going autonomous.  Think military drones, but on smaller scales and for more everyday applications.

Having boxes talk made me think of the TED Talk by the MIT student who built Siftables, small toy blocks with screens on them, which had accelerometers and sensors to detect tilting, proximity to other blocks, etc., and could be configured on the fly to play games orchestrated by a nearby computer:

My classmate Mark Breneman was telling me to look into near-field communication, or NFC.  It’s included in the Nexus S phone for Android:


NFC will probably obsolete RFID eventually, but right now it's not quite cheap enough to use the way the ID-12 and other RFID readers are.  Its security is an improvement over RFID, though, so it will likely win out for more complicated applications.  I'd love to continue doing projects related to presence, identification, and communication using these techs.

Another classmate (and make: contributor) Matt Richardson sent me this project, “Doh”, which uses RFID and Arduino to help you remember your wallet and keys before heading out the door.

Planning and Ordering Stuff

Here is a sketch I drew for what I want to build, along with some of the components I already ordered to make it work.

“EL tape” in the top-right should read “digital RGB LED strip”.  I bought both but ended up just using the digital RGB LED strip.

I wanted the Indiana Jones-y guy at the bottom to be significantly jowly, as he is in the film, but I think I just ended up messing up his whole head.  I love that quote though.  “We have top men working on it right now.”  “Who?”  “TOP. MEN.”


I’d love to have a tour of some of the tech behind Walmart’s and UPS’s logistics systems, which reportedly make use of RFID to help with real-time inventory.  The breadth of data and the information that they must be able to extract from it all is staggering to think about.

My system was composed of only two working boxes, though, since it gets expensive quickly to buy wifi-capable microcontrollers (I got Diamondbacks from cutedigi), RFID kits, some RFID stickers, digital RGB LED strips (to make the tupperware boxes glow), 4 AA battery packs from Radio Shack to power the microcontrollers, and other assorted power connectors.  I also got some LED screens in case I wanted to do some interface stuff.  The tupperware I bought at KMart.  I had to order plastic PVC ID cards off Amazon.com, as they are surprisingly hard to find in town (Staples didn't even have them).  Maybe they're a somewhat controlled item because people use them to make fake IDs?

Now, having never played with any of these things, and having never done a physical computing project of this magnitude, I fully expected that some pieces wouldn't work with each other (I was worried about the cards/tags/stickers being compatible with the RFID readers), and that I might have to buy MORE stuff.  I chose this project because I thought it'd be doable, given my experience and the capability of the hardware.

What I wanted to do was just prototype a simple inventory system where each box has a scanner that lets me "check in/out" objects on an RFID reader, which then passes the data over wifi to my server, which then displays a nice PHP page showing where stuff is.  If I want to find something, the PHP script asks the box that contains it to light up.  That's about it.  And maybe, if I had enough time, I could let the boxes talk to each other in some silly way.  If I had time.  And if things didn't blow up in my face.

Well, things kind of blew up in my face.  I gave myself plenty of time, but I ran into tons of problems.  Here’s the process:

Documentation

Sparkfun RFID Starter Kit

I bought a couple of RFID starter kits from Sparkfun, which include two ID-card-sized 125 kHz cards, an ID-12 RFID module, and an RFID reader breakout board.  I tested the RFID readers by just plugging in a mini-USB-to-USB cable I bought from cutedigi.  Those read fine, displaying the RFID card codes in ZTerm, a modem comms app for OS X.  That working, I then tried to get the reader to work connected to my Arduino Uno, using the code posted by Nick Gammon on the Arduino forum.  I soldered wires to the VCC (5V), GND (ground), and TX pins, and made sure not to have the TX wire attached to the RX pin on the Arduino while uploading the sketch.  Then, success!  Successful reading of RFID numbers in the Arduino serial monitor!
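For anyone trying this, the parsing boils down to pulling the 10 tag characters out of the ID-12's 16-byte ASCII frame.  Here's a minimal sketch of that logic in plain C++ (it skips checksum verification; the frame layout is from the ID-12 datasheet as I remember it, so double-check yours):

```cpp
#include <string>

// Extract the 10-character tag ID from a raw ID-12 frame.
// The ID-12's ASCII output format is: STX (0x02), 10 tag digits,
// 2 checksum digits, CR, LF, ETX (0x03) -- 16 bytes total.
// Returns an empty string if the frame is malformed.
std::string extractTagId(const std::string& frame) {
    if (frame.size() != 16) return "";
    if (frame.front() != 0x02 || frame.back() != 0x03) return "";
    return frame.substr(1, 10);  // the tag digits follow the STX byte
}
```

On the Arduino itself you'd accumulate bytes from the serial port into a buffer and hand the completed frame to something like this.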

RGB LED Strips

My next task was to get my LPD8806 digital RGB LED strips to work with my Arduino Uno.


Several other students had worked with the strips for their physical computing projects, so I got some good tips on what transistors, shields, etc. to use for EL tape and EL wire, and what pages to look at.  Lady Ada’s guide was invaluable.

It looked like the RGB LED strips had the potential to draw way too much power for a portable solution.  I was worried I might have to fall back on EL tape or wire, which provide less visual feedback to the user (the RGB LED strips have individually addressable LEDs that you can set to any color, giving the user visual cues).  At that point, I also started considering what other components could provide feedback.  I wasn't too happy about using individual LEDs — I thought that might look sloppy.  And a wave shield for playing sounds has its own host of problems: you have to solder the kit yourself, the shield would sit inside the box so the sound would be muffled, etc.

I had ordered 2 meters of the LPD8806.  I unsoldered the first meter from the second as instructed by Limor’s (Lady Ada) guide, by peeling the strips apart as I heated them up with a soldering iron.  Then I soldered wires onto the input connections on the strip.

Then I wired everything up according to this diagram, also on the guide:

I had a massive problem understanding the power requirements, as the product page said you'd need a 2 amp power supply to run the strips at full white output.  Well, I tried a 1-meter strip with a 4 AA battery pack (with wire-only connectors, bought at Radio Shack).  That worked great with the strandtest sketch from the LPD8806 library I downloaded off Lady Ada's site.  Success!


I was most nervous about getting the LED stuff to work, but the strips came with a library full of examples to command the LEDs however I wanted; the hard part was figuring out the power and wiring.  4 AA batteries worked fine when connected with the Arduino and running white on each LED (though I wouldn't want to keep it that way, lest it burn out or be underpowered).  Where I ran into trouble later was when I wanted to run the RFID reader as well; the power draw from the LEDs while also running the wifi and RFID code was too much — not enough juice?  I was excited about creating color moods for the containers to represent their feelings, and I was looking forward to a beautiful fade-in, fade-out cyan for when a box says it contains an object I want.
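As a sanity check on the product page's 2 amp figure, the back-of-the-envelope math (assuming the common 32-LED-per-meter strip density and the oft-quoted ~60 mA per pixel at full white; check your strip's datasheet for real numbers) works out like this:

```cpp
// Rough current draw for one meter of LPD8806 strip at full white.
// ledsPerMeter and mAPerLed are assumptions, not measured values.
int fullWhiteDrawMilliamps(int ledsPerMeter, int mAPerLed) {
    return ledsPerMeter * mAPerLed;  // e.g. 32 * 60 = 1920 mA, ~2 A
}
```

Which makes it obvious why a 4 AA pack only survives because you rarely run every pixel at full white.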

Networking

I bought two Diamondbacks, which are basically ATmega328 chips on Arduino Duemilanove boards with on-board wifi (supporting open networks, WPA, WPA2, and WEP…a key bonus for accessing the multitude of networks out there) on recommendation from my classmate Gavin and my prof, Scott Fitzgerald.

I was a little worried about getting this part working because wifi is still a little sketchy and undocumented in Arduino-land.  I spent hours upon hours downloading different people's libraries and looking through the Arduino.cc, linksprite, and asynclabs forums and docs.  I even looked through the Chinese linksprite files, which was painful.

But the example sketches I was trying wouldn't compile!  I asked Gavin for help, and he mentioned that he was using Arduino version 0022, whereas I was using the latest version, 0023 (RC beta 3).  I downloaded 0022 and the sketches started compiling!  Not long after, I managed to connect my little Diamondback to my WPA2 router!

So here’s what I learned:

  • Get a git clone of the user-contributed WiShield Arduino library from asynclabs.  Install it to your Arduino sketch folder in “libraries/”.
  • Edit apps-conf.h so that only one “APP_” define is uncommented.  For the client-based example sketches (like SimpleClient), uncomment “#define APP_WEBCLIENT”.
  • If things don’t compile and you get weird errors, try a different Arduino version.  I had to install Arduino 0022 to get my Diamondback to work.
  • Opening .pde sketches in Arduino 0023 causes them to be renamed to .ino, which 0022 won't see.  So you'll have to rename the sketch back and re-open Arduino 0022 to see it again.
  • I think I had to go track down wire.h on the interwebs and save it as wiring.h because it was missing for some reason, giving me a compiler error message.
  • Make sure you set a static IP on your router for the wifi device to connect to — it can’t handle DHCP.
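To make the apps-conf.h step concrete, here's roughly what that section looks like once edited for a client sketch.  The define names are from my memory of the asynclabs library, so check them against your copy:

```cpp
/* apps-conf.h: leave exactly ONE APP_* define uncommented.
   For client-based examples like SimpleClient, that's APP_WEBCLIENT. */
//#define APP_WEBSERVER
#define APP_WEBCLIENT
//#define APP_SOCKAPP
//#define APP_UDPAPP
//#define APP_WISERVER
```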


What a headache to get all that working!  The forums were not very descriptive or helpful for getting this to work.

I had to combine that code (which lets me GET a string of variables to a PHP web page, which can then pass them into a MySQL database) with two other pieces of code: code to read in the RFID ID # via the serial port (done above), and code to poll the web server to see if it was instructing the box to light up its LEDs.  I was really, really, really hoping this would be a smooth process.

Front- and Back- Ends

I set up the database and front-end for the web server.  I created a few tables: one for my list of things, one for my list of containers (I made things and containers separate because some things might also be containers), and one for a log tracking what events have happened.

The front-end went more smoothly than I expected.  I used jQuery and jQueryUI to quickly build a working interface.  The part that took the most time was figuring out how to correctly encode the MySQL pull into JSON via PHP so that I could access it from JavaScript and jQuery for my autocomplete search functions.  But it helped me better understand how to navigate the DOM and inter-operate between the different languages.

Now I’ve got a pretty slick interface, though I might need to restructure the MySQL pulls into PHP classes instead of one-offs.  I also might need to restructure the data so that not everything is given away in my source’s JSON.  There’s a ton of work I need to do adding functionality for things like having the web server tell the Arduino to turn on its LEDs, etc.  But the basic layout is done!

I even added quests and recipes.  The quests seen above: OCD is unlocked when you put all the objects of the same type in the same box.  Fort Knox is when you check in your wallet, your checkbook, and a small box into the same container.  Love at First Light was going to be the coup de grâce of my demo: both boxes would have "love" checked into them, then they'd face each other (both have a reader and an RFID tag attached to their fronts) to talk, and then the small box would be checked into Eve.  All these conditionals would give birth to a new container in my database, called "Caintainer".  So I wanted to create life for my class demo.  It didn't work. :/

You can try out the Wheredipuddit interface on my web server.  It’s not connected to anything though.

Problems

These pieces all worked fine on their own.  Then it came time to put them all together, and that's when I started running into issues.  First, I wrestled with the code to send the GET request.  The sample Arduino code that connected to a weather database worked.  My PHP script to read an HTTP URL into a MySQL database worked.  But when I modified the Arduino sketch to request a different URL based on which RFID tag was scanned, I'd often get Arduino resets.  My professor suggested that I stop messing around with char pointers (char *) and arrays, and just hardcode the URLs behind if..then checks for which RFID was scanned.

This worked with one tag, but with 20 it also reset the board.  I suspected I was filling up the Arduino's memory or something.  I reduced the if..else if..then checks to just 3 different RFID tags, and that seemed to work okay.
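In hindsight, a lookup table would have been a saner alternative to the growing if..else chain, which duplicates string literals all over the sketch and eats into the ATmega328's 2 KB of SRAM.  A sketch of the idea in plain C++, with made-up tag values:

```cpp
#include <cstring>

// Keep the known tags in one table and return the matching index,
// or -1 if the tag is unknown.  These tag values are invented for
// illustration -- use your own scanned IDs.
const char* const knownTags[] = {
    "0F02E52A8D", "0F02E53B9E", "0F02E54CAF",
};
const int numTags = sizeof(knownTags) / sizeof(knownTags[0]);

int lookupTag(const char* scanned) {
    for (int i = 0; i < numTags; i++) {
        if (strcmp(scanned, knownTags[i]) == 0) return i;
    }
    return -1;  // unknown tag
}
```

The index can then select a hardcoded URL (or feed a single snprintf), so adding tag #20 doesn't mean another branch.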

I then added the digital RGB LED strips.  The way I wired it up, everything drew from the same power; the Arduino would work fine, but when I scanned a tag on the RFID reader, it would click (not enough power, or a short circuit), or the LEDs wouldn't come on.  I ended up getting another 4 AA battery pack and connecting it to the Arduino + RFID, with the LEDs getting their own (but sharing ground with the Arduino).  I'm not sure if this circuit caused later problems — I don't think it did, since I didn't get any more power-related issues.  Save for the flimsy battery pack wires.  If I had more time, I'd definitely solder the wires to header pins so they'd survive being moved.  The wires popped out pretty easily without pins.

I added another GETrequest at the end of the loop, which checked to see if the server had changed the box’s mood.  The server would just output 1 integer on the ping.php page, which the Arduino would read and then display on the LEDs the corresponding color for the new mood.

I blew out an RFID reader after being frustrated and trying a different power setup.  I guess it didn’t like being hooked up along a connection tied to a 12V battery pack.  Oops.  First time I’ve blown out a component.

Maybe this was caused by the bug — or butterfly — in my Arduino.  I guess the butterfly was from a classmate’s butterfly project (she apparently had butterflies mailed to her).

I had some problems understanding the callback function coded into the example sketch.  It basically said: if you receive data in the serial buffer, run this function to process it.  Its parameters were already fixed, so those were my constraints, and I didn't understand the underlying code well enough to either scrap it or modify it.  I spent tons of time trying to figure out how to read my HTTP request's response (which, in this sketch, includes the full HTTP headers — a pain in the ass, because that's a lot of extra characters to deal with in my serial buffer).  I tried many examples and wrote some failed C.  I ended up, though, with this:

// Function that processes data returned by the server.
// The data is not null-terminated, may be broken up into smaller
// packets, and includes the HTTP header, so scan every character
// for our '*' marker and treat the character right after it as
// the mood code.  startRead is a global bool, so a '*' at the end
// of one packet still pairs with a digit at the start of the next.
void printData(char* data, int len) {
  while (len-- > 0) {
    char c = *data++;
    Serial.print(c);
    if (startRead) {             // previous character was '*'
      startRead = false;
      if (c == '0') {            // mood code 0: ping color
        dither(strip.Color(0, 127, 127), 20);
      }
      // ... other mood codes handled here
    }
    if (c == '*') {              // '*' is our beginning character
      startRead = true;          // interpret the next character
    }
  }
}

The crucial bit is reading the character immediately after the '*' marker, since that carries the actual payload.  I did try a Serial.flush() after this callback function, but I think it was interfering with the other GETrequest, so I removed it.  I suspect more must be done here to make it a clean read…

I have a feeling having two GETrequests could have contributed to one of my major issues, which seemed to be flooding the serial buffer.  I'd often get the pin-13 LED stuck on, as if the board were being flooded with data.  I tried Serial.flush() on my serial connections, but that seemed to destroy communications — nothing was logged in my database as having been touched.  Other times, I would scan stuff in or try to ping it from my browser (pinging from the browser would tell the Arduino, on its next connection, to light up and show the user that the requested object was inside), and nothing would happen!

This is actually exactly what happened during my class presentation.  Nothing worked.  I felt bad about it because I prepared for the project well, bought stuff early, put in early test work, and then did all-nighters for almost two weeks trying to figure the whole thing out.  I felt like I tried hard, picked a project I could accomplish, and put in the time.  And it still didn’t work during the demo, though I KNOW some parts of it work well…sometimes. Here’s a video clip of me giving my presentation (may have been edited for time):

Conclusion

So there you go.  It was a highly disappointing ending to a final project.  I wondered if working alone caused my problems, or not asking others for enough help.  My lessons learned:

  • When doing serial communications, make sure you try to understand EVERYTHING being passed across.  Make sure to always use the serial monitor, just to be sure you’re getting expected results.
  • When doing communications between client and server, always build tons of logging into your code, even at first when you’re just scaffolding the project.  You will need to do small unit testing on each case individually to make sure it works, before trying to put everything together.  The worst is when you can’t figure out where your problems are being caused, because there’s too much going on and too many points of failure.  Keep the network traffic thin so there’s more margin for error.  TEST EVERYTHING INDIVIDUALLY.
  • I would have bought an extra Arduino and the new WiShield 2.0 shields, or waited for the new Arduinos with built-in wifi.  The problem with the Diamondback was that I ended up using poorly documented code that didn't do what I wanted, didn't seem very configurable, and was far less useful than Arduino's example code in the ethernet library.  The WiShield 2.0 code seemed far more user-friendly.


What interests me about this is the long-term application.  What will it be like when objects can tell you if they're missing parts, or report to you on their health, or take over some of the daily logistics planning we send to our brains' subroutines while we try to get other stuff done?  What will the world be like when things start talking to each other outside of the internet?  Can I get my boxes to talk to each other while they're near each other?  Can I build recipes, like scanning a bunch of characteristics (honesty, humor, good looks) into the boxes so that, if the recipe is right for both, they "fall in love"?

Talking with another classmate, Tak, we realized that if there were, say, 50 boxes piled against a wall, you could turn them into interactive pixels, controllable via Processing sketches!  And talking to yet another classmate, Christie, I wondered: what if there's a job in the future for creative storytellers whose work is to imbue objects with personalities?  Imagine observing everyday objects talking to each other, all coded by different people, all with unpredictable and surprising interactive behaviors, with companies competing to hire the most creative people to give their products signature anthropomorphized personalities.

I didn’t get my project to work.  I thought I’d be able to do it.  But I ended up encountering issues at almost every step of the way, with five new obstacles popping up each time I solved one.

I definitely need a mental break from school now that the semester’s over.  I want this idea to work, but I’m going to have to accept that I need to move on from it, because I’ll have other class projects to do.  It’s very hard for me to leave a problem unsolved, though, so it’s been gnawing at me.  I’ll be using the break to forget about it and try not to keep picking at what’s wrong with it.  At least for a while, or until I can use the project for another class or application.  Sigh.

My Arduino code is below the jump:


Physical Computing Group Project: Your Tweet Has Been Scent

For my physical computing class’s media controller group project, I worked with Gavin, Michael, and Yucef.  Our task was to build a controller that interacted with human interface and media.  For our project, we chose to work with web interaction, whereas other groups chose to work with video or sound or light.

Your Tweet Has Been Scent/Odoruino (unofficial name)

Our team has built a device that sprays out certain scents based on which friend or family member sends you a tweet.

Origin

Originally our team’s idea was to build a game.  We were thinking it might be like the prisoner’s dilemma game theory examples, where two people would go into a room they’d not seen the layout of and then try to figure out puzzles in order to escape.  Each person would have someone outside the room who would use a computer to interface with the person inside the room, via the web and Arduino.  The person in the room would not be able to see much feedback at all, especially not from his teammate, and he might find that the person outside the room was not his partner but in fact the opponent’s partner.  We had all kinds of crazy ideas, like the person inside the room having to use an Indiana Jones-like sceptre on a model temple to figure out where to go next.

In the end though, we could never figure out a way to make a project out of this.  We wanted an Arduino unlocking walls in the maze or puzzle so the web user could progress, and web-triggered unlocks of items in the room for the person inside.  But it just never progressed.

Working with Smell

Yucef came up with the idea to use smell: if you received a tweet from someone you knew, you would smell the presence of their message, leaving your sense of smell free even when your other senses were occupied.  Or if you came home, the room would have a lingering smell that reminded you of someone close to you.  Yucef liked the idea that smells are so powerfully linked to memory.

We were far quicker to turn this idea into something we knew how to build.  Gavin had worked with the Twitter API for his first project, and I had familiarity with the Twitter API and PHP.

We found the Glade Sense & Spray device in the store.  The device is actually pretty clever.  It has a motion sensor inside that detects movement and sprays when it senses something.  The canister top is depressed by a geartrain in the back of the housing, spraying out a scent very quickly.  It operates on 2 AA batteries.

By the end, we chopped off the front of the housing after removing the front lid, so we’d have more room to house 4 different diffusers.  We opened the device up and cut the wires leading to the mini breadboard with the motion sensor on it.  Then we soldered the toy motor (which powers the plastic geartrain) to our power and ground wires to plug in to our own breadboard to be powered with a 4-AA battery pack.

Here’s someone else’s teardown of the device.

Originally I wrote a jQuery script to post values to a web page that the Arduino and its ethernet shield could fetch and parse, in order to figure out which spray diffusers to activate.  The jQuery script would parse JSON results from Twitter’s URL-based search.  The problem, which I of course realized only after finishing it, was that the Arduino doesn’t render JavaScript — it was trying to parse the raw JS code.  Duh!  So I rewrote the script in PHP so that only the exact values we wanted output would show on the page.

This worked fairly well, because PHP makes it just as easy to turn JSON into an array it can use.  But I ended up having to put in a last-checked check, because we didn’t want the Odoruino to see ALL past tweets, just the ones that were new since the last time it checked.  It took me a long time of debugging to figure out why the check wasn’t working right: originally my for..loop just tested whether any new tweets had come in at all, instead of comparing each individual tweet against the last time-check, so individual tweets slipped through unchecked.

Our code checked for an initial token, “*”, and an end token, “b” (just chosen randomly).  In between were our four digits, showing binary results.  For instance, “0101” meant that the 2nd and 4th diffusers had new data and would activate.
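On the Arduino side, decoding that payload boils down to scanning for the '*' and reading the four digits before the 'b'.  A plain-C++ sketch of that logic (the real sketch read characters off the serial buffer rather than out of a string):

```cpp
#include <string>

// Parse the Odoruino payload: '*' start token, four binary digits
// (one per diffuser), 'b' end token, e.g. "*0101b".
// Returns a 4-bit mask (bit 0 = first diffuser), or -1 on a bad frame.
// The payload may be embedded in a longer response, so scan for '*'.
int parseDiffuserMask(const std::string& response) {
    size_t start = response.find('*');
    if (start == std::string::npos || start + 5 >= response.size()) return -1;
    if (response[start + 5] != 'b') return -1;
    int mask = 0;
    for (int i = 0; i < 4; i++) {
        char c = response[start + 1 + i];
        if (c != '0' && c != '1') return -1;
        if (c == '1') mask |= (1 << i);
    }
    return mask;
}
```

So "0101" yields a mask with bits 1 and 3 set, i.e. the 2nd and 4th diffusers fire.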

First Demo

We demo’d our project in class with just two scents (we could only find “bed linen” and “apple & cinnamon” in stores).  We made Twitter accounts for the user (“Dano”, i.e. Dan O’Sullivan, an ITP badass who was rumored to have patched into local Manhattan cable-access TV), for his mother, and for “Woz”, as in Steve Wozniak.  His mother would smell like linens while Woz would of course smell like apple (he looks like he smells like an apple, doesn’t he?  so furry!).

A sample tweet from @ITP_Smell_Woz:

@ Hey can you fix my QuickTime VR on my Mac? I broke it :-(

Final Demo

Adding two more diffusers was problematic because we didn’t increase the numSensors variable in our Arduino code, which took a while to track down.  We also had power issues: with 2 AA’s the diffusers couldn’t draw enough current, while 4 AA’s were probably supplying a bit too much.  We ended up using 4 AA’s anyway, and experimented with resistors until we found that 10Ω was the highest resistance we could add and still get the diffusers powered correctly.

Our circuits were basically this: each diffuser’s toy motor was wired to a resistor and the battery pack on one side and controlled by an Arduino output pin, with a diode in place to keep current from flowing backward through the motor and burning anything out.

We also found that the diffusers would not work well once we sawed off the fronts, because the lower part of the diffuser no longer had any screw threading.  So Gavin and Michael had to wrap the bottom of the two sides of the casing tightly so that the plastic geartrain would not come loose when activated.

Design

We were thinking of using a big blue plushy Angry Bird, since the official Twitter plushies were too small to house our diffusers and the Arduino + breadboard.  Then we would cut off the tail of a plushy skunk and stick the diffusers in the bird’s butt, so that it would tweet-spray out of its butt.  Here’s a quick Photoshop of what it might have looked like.

But we ended up using a bird/owl lunchpack which was actually a tighter, more snug fit for our device.  We took a thin piece of wood and nailed some foam board onto one side of it, to provide a base for the diffusers above and a compartment below to hold the Arduino + ethernet shield and breadboard.  We cut away the top of the lunchpack to make an opening for the diffusers to spray.  We wanted it to be a neater job up top but there was barely enough room to fit all 4 diffusers so we ended up cutting the whole thing away.

We added two more accounts, GF (girlfriend) and Gram (grandma), with the scents of Hawaiian Breeze and Vanilla & Lavender, which I found at K-Mart.  So we had Dano being tweeted at by Woz, his Mom, his GF, and his Gram.

You’ll notice that this is probably not something we’ll want to leave on the subway.

Finally, here’s the video of the device/Odoruino, taken in the ITP workshop:

YouTube Preview Image

Nerdcode

Code is below:


PComp: Serial Lab

[Update to the Serial Lab Part 2 exercise at the bottom of this post.]

Observation Exercise

Our observation exercise blog assignment involved picking a piece of interactive technology in public, used by multiple people, and then describing the context in which it’s being used.  We would then watch people use it, preferably without them knowing they’re being observed, taking notes on how they use it, what they do differently, what appear to be the difficulties, what appear to be the easiest parts.  Then we would record what takes the longest, what takes the least amount of time, and how long the whole transaction takes.

Okay cool, so I could have done an ATM machine or something.  And I will, later.  I promise.  But I just HAD to write something about that god damn elevator in my building.  See, it’s not that the elevator is broken, or that it is so barely functional that you’d be better off walking up the 7 flights of stairs of the small building.  The problem is that it’s just annoying ENOUGH to make you think about it incessantly.  I’m not even OCD and I swear, using this thing is like dealing with some ornery cat or a freaking mule at a water crossing.  I mean, don’t you hate it when you’re trying to drag your fucking mule across the river?

I’ll get in the elevator during non-peak times and it’ll work fine.  I was warned by the woman I rent from NOT to touch the gate.  Here I am thinking that it’s going to trap me like in Alien or like that Tom Selleck movie where the robot spiders are after him.  DON’T TOUCH THE GATE.  The elevator has a gate!  That’s enough to make you stop and pause because it was probably made back before they had building codes against crushing tenants in faulty coal mine-era gate technology.

So you press the button to pick your floor and the gate closes on its own.  You’re in a small box.  More than 3 people fitting inside?  Nope.  2 people with bags?  Forget it.  I’m not really sure there’s a ceiling panel to get out of the damn thing if it stops between floors.  No John McClane exits.

Fair enough.  Things start to go wrong when multiple people are involved.  I’ve been puzzling over the internal software logic in the thing.  Did someone actually code this?  I’ve gone from the 1st floor, pressing 7, and then the elevator stops at 5 where someone is waiting to go down.  I’m never sure it will continue up to 7 afterward, because I had to hammer the button on 1 and it didn’t light up to give me any feedback about whether it was pressed.  So I’ll get out at 5 and walk up the 2 flights of stairs, only to see the person who got in at 5 up there because the elevator kept going up.  Sometimes the elevator will stop at a random floor, like 4, and no one will be there.  Was anyone there to begin with?  Did they just figure they’d walk down instead?  Or was it a poltergeist?

See?  I accept that it’s an old elevator.  But I’m not sure I can comprehend some engineer working out the logic and saying, “You know what’d be good?  If anyone on any floor could interrupt the elevator’s direction at ANY time.”  The other part is that the floors are not far enough apart that walking up and down is completely out of the realm of possibility.  So when you and someone else are trying to figure out which floor the elevator will go to next, you’re feeling guilty about being a lazy son of a bitch who didn’t want to walk two flights of stairs extra.

I heard a tale about the elevator (it even has tales!) that it broke one time and they closed it for a week because it wasn’t up to code.  Apparently when it re-opened, the tenants could only see that they added wooden paneling inside, and no larger changes for the whole shaft.  I should have asked what’s behind the panels — metal spikes?

Here’s the video.  The elevator knew it was on camera unfortunately and decided to work just fine.  Bastard.

YouTube Preview Image

Look!  The gate is held together at one place with a couple plastic zip ties!  Spray-painted black!

Alright, so also I remember reading about a Wells Fargo ATM design a while back and the write-up (now gone, but saved on Wayback Machine) was brilliant.

What I love about it is that it addressed all those dumb concerns we have at the ATM.  People of different heights see the on-screen arrows lining up with the side buttons in different places, so a taller person has to stoop over to make sure he’s pressing the right one.  Then the button design is often so colorful and distracting that you feel like you’re Indy Jones in The Last Crusade, stepping on the lettered stones to spell Jehovah in Latin.

Touchscreens and hardware durability have improved the process considerably.  While some people are just not inclined to use ATMs, tech-savvy folks like me should not have difficulty using them.  Ideally, no one will, and I think the design team did a good job by making the buttons larger and simpler, and by using the whole screen for input while keeping the interface clean.

The thing about elevators and ATMs is that they come in countless hardware configurations and can’t all be changed at once.  Technology is a gradual process in the real world, while online it’s a luxury taken for granted: web apps can be deployed immediately and (with improving standardization of protocols and webdev kits) without large variation across browsers and operating systems in many cases.  But physical hardware, especially for such oft-used things as elevators and ATMs, must be improved gradually.

Serial Lab

For my serial lab I wanted to turn the serial reader graph into a beach wave, with Mr. Wilson the volleyball from Cast Away.  It would have taken some wrangling, though, to keep Mr. Wilson riding a wave drawn across the screen without leaving a trail of Mr. Wilsons, so I didn’t follow through on it.  The fact that anything drawn in serialEvent() gets painted over when draw() redraws the background complicates matters.

Below is a video I made of a crude mouse-like device from the example, employing a potentiometer, a switch, and an FSR.  If you watch the readings, the switch’s value never changes from 0, so I didn’t bother pressing the switch during the video.  The code was supposed to fill the moving circle on screen when the switch was pressed, and the readings came through just fine in an Arduino serial test; but when I moved over to Processing, it was as if the value was getting truncated off the end of the line.

YouTube Preview Image

Update: I sort of solved the problem of why Processing was not picking up the value of the 3rd sensor, the switch.  In my Arduino sketch, I divided the other two sensors’ analog values by 4 so they’d fit within 1 byte (10 bits covers 0 to 1023, while 8 bits covers 0 to 255).  I suspected that the last value (the switch’s digital value) was being truncated and thus wasn’t being picked up by Processing.  The math was a bit of a mystery to me at first.  Serial.println("1024,1024,1") sends a total of 13 bytes (11 characters plus "\r\n"), and 9600 baud means 9600 bits per second on the wire; with a start and stop bit wrapped around each byte, that works out to about 960 bytes per second, or roughly 73 of those 13-byte lines.  Print faster than that and the output buffer backs up, clipping lines.  Still a mystery: if I send "1024,1024,1", why would Arduino show it fine in the serial monitor and Processing wouldn’t receive the same result?
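To put rough numbers on it, here’s a back-of-the-envelope calculation, assuming standard 8N1 serial framing (one start bit and one stop bit around every 8 data bits); the function name is mine:

```cpp
#include <cstring>

// How many full "lines" of a given message can a serial link move per
// second?  With 8N1 framing, each byte costs 10 bit-times on the wire.
int maxLinesPerSecond(int baud, const char* line) {
    int bytesPerSecond = baud / 10;                 // 9600 baud -> 960 bytes/s
    return bytesPerSecond / (int)std::strlen(line); // whole lines per second
}
```

For "1024,1024,1\r\n" (13 bytes) at 9600 baud, that comes out to about 73 lines per second, so a tight Arduino loop printing every iteration can easily outrun the link.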

I just know that when I divided the values by 4 in Arduino, then the switch’s values of 1 or 0 showed up in my System.out in Processing.

PComp: Stupid Pet Trick

For our first big physical computing project, we were required to come up with a “stupid pet trick”, demonstrating what we’ve learned on the Arduino board and about building circuits so far.

I immediately latched onto the idea of building a combination lock for security.  This made me think of one of my favorite movies of all time, Sneakers, a film that is also up there in the man canon of “Best Hacking Movies Ever”.  By the way, lest you ever unfortunately remind yourself of the Sandra Bullock movie “The Net”, where she stars as a “hacker”: while looking for help installing Etherpad on OS X, I actually found a site with a Guy Fawkes Anonymous mask outline in the bottom right corner.  So somehow that film penetrated our consciousness. :(

Anyway, Sneakers.  Here’s the trailer.

YouTube Preview Image

Keep in mind that this was a film made in 1992 about hacking (!), that somehow attracted Dan Aykroyd, Robert Redford, Sidney Poitier, Ben Kingsley, and, get this (there must have been a lot of coke involved), River freaking Phoenix.  I don’t know how this film got greenlit but I thank Hollywood for it.

So the film is about a bunch of hackers who stumble upon the existence of a chip that can decode any encrypted data, including on infrastructure networks and for governments.  Naturally the NSA is interested.  So Redford and his crew have to go find it.  A lot of cool stuff happens.

YouTube Preview Image

I loved the scene when the crew is trying to figure out a scrambled message by using Scrabble tiles.

I wanted a puzzle theme for my project.  I love the look of the Arduino and breadboard and how it’s a puzzle waiting for you to figure it out.  So, given my limited knowledge of building things, I wanted to present the Arduino as a puzzle with a rudimentary interface, similar to a bomb to be defused or some Cloak and Dagger (remember that film?  with Dabney Coleman?) / Sneakers-like device.

My device has no directions and uses potentiometers and switches for its inputs.  The feedback comes through simple lights.  Much of it is intuitive, but it’ll take some Morse code skills to finish the puzzle.
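The whole interface reduces to splitting the potentiometer’s 0–1023 range into thirds to pick one of three LEDs. A host-testable sketch of that mapping, mirroring the thresholds the code below uses (the function name is mine, not from the sketch):

```cpp
// Maps a 0-1023 potentiometer reading onto one of the three pattern
// LEDs (1, 2 or 3): lower third, middle third, upper third.
int potToChoice(int analogValue) {
    if (analogValue < 333) return 1;
    if (analogValue < 666) return 2;
    return 3;
}
```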

My classmate Matt Richardson sent me this cool Nokia N900 unboxing video later, showing a hackable box that must be opened before you can get at the phone itself:

YouTube Preview Image

Here is the circuit diagram (thanks fritzing.org):

Here’s the code:

// Ben Turner
// NYU-ITP, PComp Fall 2011

// Stupid human pet trick code:  SETEC ASTRONOMY
// Requires watching the Serial Monitor.

// With much love to everyone who worked on Sneakers:
// http://www.imdb.com/title/tt0105435/

// lights showing which tests have been completed
const int test1LED = 2;
const int test2LED = 3;
const int test3LED = 4;

// booleans for each of 3 completed tests
boolean test1Complete = false;
boolean test2Complete = false;
boolean test3Complete = false;

// three red lights for displaying patterns/interface
const int pattern1LED = 5;
const int pattern2LED = 6;
const int pattern3LED = 7;

float analogNumVal = 0.0;

// test1 vars

// for randomized pattern used in test1, plus user's answers
int test1Num1 = 0;
int test1Num2 = 0;
int test1Num3 = 0;
int test1Answers[3];

// test2 vars

// morse alphabet and phrase converted to array for Arduino to understand
// gave up on the morse -.-. because of array/char problems. Used #s instead.
float ditOrDah = 0.0;
int morseCompletePhrase[26] = {2,1,1,1,0,1,1,0,1,1,1,0,1,1,1,1,0,2,2,2,0,1,2,2,1};
char morseCode[] = "";
char morseA[6] = ".-";
char morseB[6] = "-...";
char morseC[6] = "-.-.";
char morseD[6] = "-..";
char morseE[6] = ".";
char morseF[6] = "..-.";
char morseG[6] = "--.";
char morseH[6] = "....";
char morseI[6] = "..";
char morseJ[6] = ".---";
char morseK[6] = "-.-";
char morseL[6] = ".-..";
char morseM[6] = "--";
char morseN[6] = "-.";
char morseO[6] = "---";
char morseP[6] = ".--.";
char morseQ[6] = "--.-";
char morseR[6] = ".-.";
char morseS[6] = "...";
char morseT[6] = "-";
char morseU[6] = "..-";
char morseV[6] = "...-";
char morseW[6] = ".--";
char morseX[6] = "-..-";
char morseY[6] = "-.--";
char morseZ[6] = "--..";

int test2Answer = 2; // 2 is answer: (2) lights: "Martin?", (1) light: "Cosmo?"
int test2Guess = 0;
boolean awaitingAnswer = false;

// test3 vars

boolean fixedSwitch = false;
boolean awaitingAnswerTest3 = false;
char test3Answer[47] = "2022202220220120210212201110102121012101020111";
int test3Guess[47];

int n;

void setup() {
  Serial.begin(9600);
  pinMode(A0,INPUT);
  pinMode(12,INPUT); // second switch, read in test 3
  pinMode(13,INPUT);
  // test LEDs (2 yellow, 1 green) show which # test you're on
  pinMode(test1LED,OUTPUT);
  pinMode(test2LED,OUTPUT);
  pinMode(test3LED,OUTPUT);
  // 3 red LEDs for showing patterns
  pinMode(pattern1LED,OUTPUT);
  pinMode(pattern2LED,OUTPUT);
  pinMode(pattern3LED,OUTPUT);
}

void loop() {
  // need to randomize test 1 pattern
  randomSeed(analogRead(A5));
  test1Num1 = random(1,3);
  delay(50);
  randomSeed(analogRead(A5));
  test1Num2 = random(1,3);
  delay(50);
  randomSeed(analogRead(A5));
  test1Num3 = random(1,3);

  // test 1
  if (test1Complete == false && test2Complete == false && test3Complete == false) {

    // turns on 1st number in 3-part pattern
    switch (test1Num1) {
    case 1:
      digitalWrite(pattern1LED,HIGH);
      break;
    case 2:
      digitalWrite(pattern2LED,HIGH);
      break;
    case 3:
      digitalWrite(pattern3LED,HIGH);
      break;
    }
    // need this delay so LEDs are visible
    delay(1000);
    digitalWrite(pattern1LED,LOW);
    digitalWrite(pattern2LED,LOW);
    digitalWrite(pattern3LED,LOW);
    delay(50);

    // turns on 2nd number in 3-part pattern
    switch (test1Num2) {
    case 1:
      digitalWrite(pattern1LED,HIGH);
      break;
    case 2:
      digitalWrite(pattern2LED,HIGH);
      break;
    case 3:
      digitalWrite(pattern3LED,HIGH);
      break;
    }
    delay(1000);
    digitalWrite(pattern1LED,LOW);
    digitalWrite(pattern2LED,LOW);
    digitalWrite(pattern3LED,LOW);
    delay(50);

    // turns on 3rd number in 3-part pattern
    switch (test1Num3) {
    case 1:
      digitalWrite(pattern1LED,HIGH);
      break;
    case 2:
      digitalWrite(pattern2LED,HIGH);
      break;
    case 3:
      digitalWrite(pattern3LED,HIGH);
      break;
    }
    delay(1000);
    digitalWrite(pattern1LED,LOW);
    digitalWrite(pattern2LED,LOW);
    digitalWrite(pattern3LED,LOW);
    delay(3000);

    n = 0;

    // loop for allowing user to enter pattern
    while (n < 3) {
      analogNumVal = analogRead(A0);
      // turns on respective LED when potentiometer value is in its range
      if (analogNumVal < 333) {
        digitalWrite(pattern1LED,HIGH);
        digitalWrite(pattern2LED,LOW);
        digitalWrite(pattern3LED,LOW);
      }
      else if (analogNumVal >= 333 && analogNumVal < 666) {
        digitalWrite(pattern2LED,HIGH);
        digitalWrite(pattern1LED,LOW);
        digitalWrite(pattern3LED,LOW);
      }
      else {
        digitalWrite(pattern3LED,HIGH);
        digitalWrite(pattern1LED,LOW);
        digitalWrite(pattern2LED,LOW);
      }

      // user selects LED via the switch
      delay(100);
      if (digitalRead(13) == HIGH) {
        if (analogNumVal < 333) {
           test1Answers[n] = 1;
           delay(50);
           n++;
        }
        else if (analogNumVal >= 333 && analogNumVal < 666) {
          test1Answers[n] = 2;
          delay(50);
          n++;
        }
        else {
          test1Answers[n] = 3;
          delay(50);
          n++;
        } // end potentiometer scale
        delay(50);
      } // end user input

      // checks user's answers versus the random pattern
      if (test1Answers[0] == test1Num1 && test1Answers[1] == test1Num2 && test1Answers[2] == test1Num3) {
        test1Complete = true;
        digitalWrite(test1LED,HIGH);
        digitalWrite(pattern1LED,LOW);
        digitalWrite(pattern2LED,LOW);
        digitalWrite(pattern3LED,LOW);
      }
      else {
        digitalWrite(pattern1LED,LOW);
        digitalWrite(pattern2LED,LOW);
        digitalWrite(pattern3LED,LOW);
      }
    } // end while loop for allowing user to enter pattern
  } // end test1

  // test2: Read morse, understand context.
  if (test1Complete == true && test2Complete == false && test3Complete == false) {

    // displays morse pattern on LEDs
    Serial.println("This board is careful whom it associates with. Who does it favor?");
    Serial.println("1 light: Cosmo?");
    Serial.println("2 lights: Martin?");

    for (int i = 0; i<26; i++) {
      if (morseCompletePhrase[i] == 2) { // dah
        digitalWrite(pattern3LED,HIGH);
        digitalWrite(pattern2LED,HIGH);
        delay(1000);
      }
      else if (morseCompletePhrase[i] == 1) { // dit
        digitalWrite(pattern2LED,HIGH);
        delay(1000);
      }
      else if (morseCompletePhrase[i] == 0) { // space
        digitalWrite(pattern1LED,HIGH);
        delay(1000);
      }
      digitalWrite(pattern1LED,LOW);
      digitalWrite(pattern2LED,LOW);
      digitalWrite(pattern3LED,LOW);
      delay(1000);
    } // end for

    // keeps cycling internally to loop() until input received
    while (awaitingAnswer == false) {
      analogNumVal = analogRead(A0);
      // turns on respective LED when potentiometer value is in its range
      if (analogNumVal < 500) {
        digitalWrite(pattern2LED,HIGH);
        digitalWrite(pattern3LED,LOW);
      }
      else {
        digitalWrite(pattern2LED,HIGH);
        digitalWrite(pattern3LED,HIGH);
      }

      // user selects LED via the switch
      if (digitalRead(13) == HIGH) {
        delay(1000);
        if (analogNumVal < 500) {
          test2Guess = 1;
          delay(50);
        }
        else {
          test2Guess = 2;
          delay(50);
        } // end potentiometer scale
        delay(50);
      } // end user input

      // checks user's answers versus the random pattern
      if (test2Answer == test2Guess) {
        test2Complete = true;
        awaitingAnswer = true;
        digitalWrite(test2LED,HIGH);
        digitalWrite(pattern1LED,LOW);
        digitalWrite(pattern2LED,LOW);
        digitalWrite(pattern3LED,LOW);
      }
      else {
        digitalWrite(pattern1LED,LOW);
        digitalWrite(pattern2LED,LOW);
        digitalWrite(pattern3LED,LOW);
      }// end answer check
    } // end while

  } // end test2

  // test3: Respond in morse.  Is the breadboard wired correctly?
  if (test1Complete == true && test2Complete == true && test3Complete == false) {
    if (fixedSwitch == false) {
      while (fixedSwitch == false) {
        if (digitalRead(12) != HIGH) {
          Serial.println("ERR: NO INPUT DEVICE DETECTED.  FIX, THEN INPUT TO CONTINUE.");
        }
        else {
          fixedSwitch = true;
          delay(3000);
        }
      }
    }
    else { // if board is wired and input received on digitalRead(12)
      Serial.println("Scrabble? SETEC ASTRONOMY? Tell me what it means, Bishop.");
      int n = 0;

      while (awaitingAnswerTest3 == false) {
        analogNumVal = analogRead(A0);
        // turns on respective LED when potentiometer value is in its range
        if (analogNumVal > 333 && analogNumVal <= 666) {
          digitalWrite(pattern2LED,HIGH);
          digitalWrite(pattern3LED,LOW);
          digitalWrite(pattern1LED,LOW);
        }
        else if (analogNumVal <= 333) {
          digitalWrite(pattern1LED,HIGH);
          digitalWrite(pattern2LED,LOW);
          digitalWrite(pattern3LED,LOW);
        }
        else {
          digitalWrite(pattern2LED,HIGH);
          digitalWrite(pattern3LED,HIGH);
          digitalWrite(pattern1LED,LOW);
        }

        // user selects LED via the switch
        if (digitalRead(13) == HIGH) {
          delay(1000);
          if (analogNumVal <= 333) {
            test3Guess[n] = 0;
            Serial.println(test3Guess[n]);
            n++;
            delay(50);
          }
          else if (analogNumVal > 333 && analogNumVal <= 666) {
            test3Guess[n] = 1;
            Serial.println(test3Guess[n]);
            n++;
            delay(50);
          }
          else {
            test3Guess[n] = 2;
            Serial.println(test3Guess[n]);
            n++;
            delay(50);
          } // end potentiometer scale
          delay(50);
        }

        // finalizes response by pressing 2nd switch
        if (digitalRead(12) == HIGH) {
          delay(1000);
          // checks user's answers versus the random pattern
          int testIfArraysEqual = 0;
          for (int z=0;z<46;z++) {
            // test3Answer stores ASCII digits, so convert with - '0'
            // before comparing against the int guesses
            if (test3Answer[z] - '0' == test3Guess[z]) {
              testIfArraysEqual += 1;
            }
          } // end for
          if (testIfArraysEqual == 46) {
              test3Complete = true;
              awaitingAnswerTest3 = true; // exit the test3 input loop
              digitalWrite(test3LED,HIGH);
              digitalWrite(pattern1LED,LOW);
              digitalWrite(pattern2LED,LOW);
              digitalWrite(pattern3LED,LOW);
              Serial.println("Success!");
          }
          else {
            Serial.println("Nope. Try again.");
            n = 0;
          }
        } // end digitalRead if
      } // end while
    }
  } // end test3
} // end loop()
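One subtlety in the test3 check: test3Answer is a string of ASCII digits while test3Guess holds ints, so the characters have to be converted before comparing. Pulled out as a host-testable helper (the matches name is illustrative, not from the sketch):

```cpp
#include <cstddef>

// Counts matching positions between an ASCII-digit answer string and
// an int-array guess.  Note the - '0' conversion: comparing the raw
// character code ('2' is 50) against the int 2 would never match.
int matches(const char* answer, const int* guess, std::size_t len) {
    int hits = 0;
    for (std::size_t z = 0; z < len; z++) {
        if (answer[z] - '0' == guess[z]) hits++;
    }
    return hits;
}
```

The puzzle is solved when every one of the 46 digits matches.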

For the rest of how the puzzle works, go below the jump: SPOILERS.


PComp: Multimeters

For PComp class, I read Nørretranders’ The User Illusion, Chapter 6: The Bandwidth of Consciousness.  A good reference for research into how much bandwidth our consciousness has.  While our senses take in a huge amount of data, we can consciously process only a tiny fraction of it, perhaps 45 bits/second when just reading, and much less for other activities.  This is interesting to me because clearly some people cannot process more than one task without freaking out and freezing up, while others can handle a ton of data.  Different people have different pipes for bandwidth.  I also think difficult training and experience can give one fatter bandwidth pipes, but I’d say most of it is probably dependent on childhood habits and behavior learned from parents, and not so much on lack of training.  Nervous wrecks stay that way most of their lives.

The lab this week was intended to teach us how to use multimeters between different connections on a circuit.

There is not much to show except for my breadboard layouts for each part of the lab.  Below the jump.


PComp, Week #2 Homework

Lab

The homework for PComp involved wiring up two different analog sensors to an Arduino and then displaying serial output and LED brightness as the feedback changes.

Two screenshots from Professor Fitzgerald’s class, showing clean boards and wiring:

A photocell:

A potentiometer:

Here is my layout, showing both LEDs on.

The first part of the lab involved simply hooking up a potentiometer (in this case, a small blue volume-like knob) and then outputting both to Serial and to an LED.  Video below:

YouTube Preview Image

You can tell that in the Serial monitor, the value gets higher as the knob is turned to the right.  The LED, however, had only two states, on and off, when it was supposed to get brighter as the potentiometer was turned to the right.  I discovered in the next part of the lab that this was because the LED’s output pin needed to be one of the Arduino Uno’s PWM-capable pins, not a plain digital one.

The pins marked with a tilde by the number support PWM (pulse-width modulation) output: the board switches the pin on and off rapidly to approximate an analog voltage, so it can step the LED’s brightness.  A plain digital pin can only switch the LED fully on or off.
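Since analogRead() returns 0–1023 and analogWrite() takes a 0–255 duty cycle, the scaling is a single integer division. As a tiny host-testable helper (the name is mine; in the sketch the result would go straight into analogWrite()):

```cpp
// Scales a 10-bit analogRead() value (0-1023) down to the 8-bit
// (0-255) duty-cycle range that analogWrite() expects.
int toPwmDuty(int analogValue) {
    return analogValue / 4;
}
```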

In the next stage of the lab, another variable input was added, and since I had no other knobs, I used the force-sensing resistor, which detects pressure placed on it (for example, by squeezing it).  Both sensors were supposed to gradually dim or brighten the LED, depending on the analog input values.  Video below:

YouTube Preview Image

At first I was getting weird power bleeding errors where the FSR would work when I squeezed it (the value increased in the serial monitor) but when I turned up the blue knob, both the knob’s analog value and the FSR’s would go up.  This indicated that their circuits were connected somehow.

Eventually I realized (after some excellent troubleshooting from Roopa!) that I had power and ground hooked up to the same row on the breadboard, probably causing weird power-related issues.  Once I straightened that out, along with plugging the LEDs into PWM-capable pins, my circuits worked exactly as intended, with both the FSR and the knob varying on a scale from 0 to 1023.

For my creative project, I am planning to build an envelope that captures a wife’s kiss before she sends it off to her deployed husband: the magic of sealing an envelope with a kiss, unlocked only by the intended recipient.  It will use an FSR, which is not ideal, but I think it will convey the idea okay. :)

Sensor Walk

A list of the sensors I come across on my commute from Tompkins Square Park to the Tisch building:

  • Touchscreen on HTC Incredible Android phone when I turn off my alarm
  • HTC Incredible checks sunlight input via sensor and dims screen accordingly
  • CPU temperature sensor on PC desktop triggers CPU and case fans to spin faster or slower
  • Scrollwheel on mouse detects user input when scrolling through a page
  • Touchpad on MacBook Air detects multiple finger presses along with movement
  • Fridge sensor cools fridge when it gets too warm
  • Traffic lights detect when ambulances with strobe lights pass by
  • Police radar guns detect multi-band measures of target speeds
  • Swiping my NYU ID card at the gym allows me entry through the turnstile
  • Inserting ID card into copy machine at library and then adding money via bills
  • Sensors in faucets and for towel dispenser in bathroom detect motion
  • Bar code scanners at Walgreens for buying goods
  • Adjusting volume of speakers for computer in classroom
  • Using accelerometer on Nike+ to calculate running distance automatically

Fantasy Device

Glasses that let you tap into a darknet, similar to Daniel Suarez’s “Daemon” and “Freedom (TM)” books.  You would put on normal-looking glasses and they would verify your identity through retinal scan. The glasses would display your chosen “layers” of data layered onto your view.  So above everyone’s heads, you would see data about them.  People who were not on the darknet would be somewhat derisively called “NPCs”, or non-player characters, for the people who chose not to interface with this deeper level of integration.  The displays above people would display their reputations, perhaps their “soundtracks”, their openness to dating, their interest groups…whatever they chose to display there that wasn’t by default public.

The layering for people would probably need to be done by some form of direction- and range-finding, along with body recognition to identify people’s heads.  Data would be transmitted by something with a slightly better range than Bluetooth.  GPS would be too unreliable for this type of feature, since it requires outdoor line-of-sight and locking onto satellites over and over.  We would need something more lo-fi.

Obviously there would be data about objects, buildings, etc. in our heads-up displays as well, using similar broadcasting tech from those non-human objects.  By asking for directions to a place in your HUD, a green line would extend towards your destination in your HUD, so you just have to follow the line.

The problem with my fantasy device in the context of this discussion is that it is not really physically interactive.  So, while I would prefer that the glasses could recognize your hands and allow you to manipulate an interface with them, similar to something between Minority Report and the Kinect, then I guess for now I’d settle for a cellphone or iPad extended interface.

Objects and people would appear for manipulation on your screen, letting you add them as starred IDs or add the person as “followed” or add their contact info or whatever.  The device would also be something like Neal Stephenson’s primer from The Diamond Age, with an anonymized actor on the other end of the network interacting with you through the device to help you get to where you need to go, or to help children learn at their own pace.  An immediate application for all this would be a game in which you’re looking at augmented reality and trying to hunt down other people, or do scavenger hunts.

The screen would transmit data with the glasses, and would be operated by touchscreen and whatever knobs or buttons that would also need.

Mostly I am interested in having the glasses display people’s reputations and other data about them, so you can decipher these things when meeting someone or gauging your safety.  There’s something to be said for anonymity, but there’s also something to be said for an alternate system where those who opt in can enjoy layered augmented-reality data.

PComp, Week 1: Physical Interactivity

Part of our assignment for intro to physical computing class, other than the Arduino assignment, was to read the first two chapters of Chris Crawford’s “The Art of Interactive Design: An Euphonious and Illuminating Guide to Building Successful Software”, and to visit the Museum of Modern Art’s (MoMA) new “Talk to Me” exhibit, which I talked about a bit in a previous blog post.

Crawford says that in order for something to be interactive, like for instance a conversation, there are three components: listening, considering, and responding.  My immediate thought, intensely based on personal experience, was that men must be badly designed for interaction.  I had an ex who would constantly tell me I was not responding to her long complaint sessions in a helpful manner, saying I would just end up saying “that sucks” and “I’m sorry”.  I really did try my best.  I listened, tried to be helpful, tried to resist my male instincts to try to find ways to solve the problems.  Then I tried to just listen and be the shoulder.  None of it worked.  Eventually I just figured out I couldn’t win that game and that she really needed better girlfriends to deal with that crap.  Guys can only deal with so much.

So what is physical interactivity within the context of Crawford and the “Talk to Me” exhibit?  Well, to me, interactivity in its best form involves one input building on another, the two synthesized into something entirely new and unique.  In other words, when I have a good conversation with my friend Chris, he throws out an idea for a possible business, I add my ideas for how it could work, and together we’ve created a unique idea.  Whereas when I was talking with my ex, it was more like me watching The View on TV.  Only one input allowed, my eyes glazed over.

In our first class for Applications, Red Burns invited Vito Acconci to speak to us.  An artist and architect, Mr. Acconci described the progression of his art.  What strikes me about his work is that he focused intensely on one core idea at a time and tried to magnify and exaggerate it to its fullest.  So he started out following someone on the streets and taking photos of her, but then he felt too involved in the process, so he built an empty room with a ramp on one side, beneath which he hid.  Then he decided that he needed to leave the museum, since anything stuck in a museum is not very public and not very accessible, so he edged towards architecture.

I bring this up because when attending the “Talk to Me” exhibit, it was a bit like going to a zoo, with all the animals behind cages and glass walls.  An exhibit about the interactivity between human and computer, and there were plenty of TV screens, “Do Not Touch” signs, and glass cases.  Necessary to protect the artifacts, assuredly, but a constant reminder of the inaccessibility of artifacts to the masses.

The homeless person’s city folk map, for example, is not really interactive, and not very digital either, but it does add to the exhibit because it shows how iconography can be passed along as a language that is invisible (how often have you seen it?), easy to understand (even for those who are illiterate), and mutable.  I first learned about hobo code through a Mad Men episode, in which Don learns how to tell whether a house is safe to approach.

The Tweenbot, by ITP alumna Kacie Kinzer, is a little cardboard robot wheeling through Washington Square Park, requiring the help of passersby to adjust its course.  It seems more interactive, in that people must decode the Tweenbot’s intent and then can reposition it to send it to its ultimate stated destination (according to the flag on it).  While this may not fit my description of both inputs changing their own behaviors together to form something new, what seemed to make it more interactive was the author’s videotaping of it and the wide re-broadcast of those interactions to others.  Still, is that true physical interactivity?  If the Tweenbot, which is only designed to move forward via a motor, has no other potential behavior, does watching people interact with it (fixing its wheels, deliberating with each other to figure out what to do with it) count as physical interactivity?  Is this a Schrödinger’s robot situation, where interactivity requires observability by third parties?

YouTube Preview Image

It seems as though, even though the robot has no complex behaviors, by the end of the video you end up loving the dumb little thing, caring about its well-being, and attaching anthropomorphized feelings and emotions to it.  Would this work as well if you were guiding it on a screen and not physically guiding an actual robot?  I’m not so sure.

The SMSlingshot is a wooden slingshot that lets you enter a text message and then wrist-rocket it onto a screen projected on a building wall, where it lands as a blob of virtual paint containing the text message.

YouTube Preview Image

This seems physically interactive in that you are creating something unique together with the artifact, which is not merely a tool that lets you do something, but also transmits your information onto the projected screen.  The only problem is that nothing physical is created; the result is a projection on a physical surface, and the message will likely disappear when the system is reset, unless photographed, videotaped, or recorded in data logs.

The Feltron Report is an example of a good use of digital technology that is not really interactive.  It is a report of personal statistics, but it uses data visualizations, statistical analysis, logging, and other methods that have only become feasible in recent years for aggregating data and then prettifying and visualizing it.  It makes the data more accessible to humans while putting the data in perspective with everything else.  It is more human-centric in terms of “interface”, though it is not interactive, per se.

Returning to the idea of museums stifling interactivity: although I loved the exhibit, I did feel like I was wearing blinders.  Each piece had a QR code that you could scan with Google Goggles, but it only took you to a MoMA page with scant data in a poor interface, not optimized for mobile phones.  What I wanted was the ability to tweet that I was looking at a piece and wanted to share it with others.  There was no built-in way to do that.  Also, my data was not being saved in any way, so the only record of which pieces I liked enough to Google Goggle was my Goggles history; this seems like a perhaps unintended but beautiful behavior of the Goggles software: a record of your moments of piqued curiosity.

Thinking about design and interactivity makes me think of Apple’s products.  I’m not a fanboi of Apple, but I do appreciate how they’ve upped the game for system interfaces and for accessibility to beautiful artifacts.  The iPad is highly interactive, highly flexible and malleable, and instantly accessible to today’s digital kids.  The price point for those, and especially for mobiles worldwide, has gotten so low that even the poor invest in having a phone.  The iconography of the hobo code is echoed in Nokia pilot projects that develop system interfaces using pictures as menus instead of text, since many people in the world are still illiterate.

A key test of physical interactivity, then, to me, is whether the masses can access it and whether it creates something unique from its interaction with humans.

PComp, Week 1: Lab, 1st Arduino Program

For my NYU ITP intro to physical computing class, I was required to mount my breadboard and Arduino Uno together and then test a switch that would toggle off a red LED and turn on a green LED for as long as the switch was pressed.  After releasing the switch, the red LED would turn back on and the green LED would toggle off, returning both LEDs to their original states.

I purchased a Gorillapod for my videos, so I could mount my Flip Mino on an adjustable tripod.  The two combine to look like some type of mini-droid.  But I need a better light on my desk to get less grainy video footage.  The camera on my phone has a flash; I don’t know if I want to spend the money on a fancier camera yet.  Not until I have some better skills or show at least some talent.

I had some problems understanding how to hook up both sides of the breadboard.  I didn’t realize that the blue and red rails, which I thought were hard-wired on the breadboard to be ground and power, respectively, are just arbitrary conventions; which rail is which depends entirely on how you wire up your board.  So while I followed a diagram exactly and got the lab to work, I didn’t understand the underlying principles until then.

For the exercise, here are the documentation photos:

Arduino Uno and breadboard mounted together:

Arduino & breadboard, mounted

Testing red LED blinking with uploaded script:

Testing Blinking

Video:

YouTube Preview Image

Basic state for toggling red and green LEDs, red on/green off:

Arduino Switch Test

Switch pressed, green LED on, red LED off:

Arduino Switch Test

Video:

YouTube Preview Image

And finally, the code:

// Ben Turner
// NYU-ITP PComp, Scott Fitzgerald Thu 12:30-15:00
// Switch toggles red and green LEDs.

// Init states.
void setup() {
  pinMode(2, INPUT); // Set switch pin to input.
  pinMode(3, OUTPUT); // Set green LED pin to be output.
  pinMode(4, OUTPUT); // Set red LED pin to be output.
}

// Logic to control LED toggling.
void loop() {
  if (digitalRead(2) == HIGH) { // If switch pressed/closed, then:
    digitalWrite(3, HIGH); // Turn on green LED.
    digitalWrite(4, LOW); // Turn off red LED.
  }
  else { // If switch open/not pressed, then:
    digitalWrite(3, LOW); // Turn off green LED.
    digitalWrite(4, HIGH); // Turn on red LED.
  }
}