PComp, Week 1: Physical Interactivity

Part of our assignment for the intro to physical computing class, besides the Arduino assignment, was to read the first two chapters of Chris Crawford’s “The Art of Interactive Design: A Euphonious and Illuminating Guide to Building Successful Software”, and to visit the Museum of Modern Art’s (MoMA) new “Talk to Me” exhibit, which I talked about a bit in a previous blog post.

Crawford says that for something to be interactive, like for instance a conversation, three components are required: listening, considering, and responding.  My immediate thought, based intensely on personal experience, was that men must be badly designed for interaction.  I had an ex who would constantly tell me I wasn’t responding to her long complaint sessions in a helpful manner, complaining that I’d just end up saying “that sucks” and “I’m sorry”.  I really did try my best.  I listened, tried to be helpful, tried to resist my male instinct to find ways to solve the problems.  Then I tried to just listen and be the shoulder.  None of it worked.  Eventually I figured out I couldn’t win that game and that she really needed better girlfriends to deal with that crap.  Guys can only deal with so much.
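Since this is a physical computing class, I can’t resist translating Crawford’s listen-consider-respond loop into the most minimal Arduino sketch I can imagine.  Everything here (the button, the LED, the pin numbers) is my own made-up example, not anything Crawford prescribes:

```cpp
// A minimal sketch of Crawford's listen-consider-respond cycle.
// The "conversation" -- a button in, an LED out -- and the pin
// numbers are my own assumptions for illustration.

const int LISTEN_PIN = 2;   // pushbutton: the machine "listens"
const int SPEAK_PIN  = 13;  // LED: the machine "responds"

int pressCount = 0;

void setup() {
  pinMode(LISTEN_PIN, INPUT_PULLUP);
  pinMode(SPEAK_PIN, OUTPUT);
}

void loop() {
  // Listen: sense what the human just did.
  if (digitalRead(LISTEN_PIN) == LOW) {
    // Consider: the response depends on the history of the
    // exchange, not just the latest input.
    pressCount++;

    // Respond: blink once per press so far -- a reply that
    // changes as the "conversation" goes on.
    for (int i = 0; i < pressCount; i++) {
      digitalWrite(SPEAK_PIN, HIGH);
      delay(150);
      digitalWrite(SPEAK_PIN, LOW);
      delay(150);
    }
    while (digitalRead(LISTEN_PIN) == LOW) { }  // wait for release
    delay(50);                                  // crude debounce
  }
}
```

The point of the counter is that a reply which never changes (“that sucks”, “I’m sorry”) fails the “considering” step; even this dumb circuit responds differently as the exchange accumulates.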

So what is physical interactivity within the context of Crawford and the “Talk to Me” exhibit?  Well, to me, interactivity in its best form involves each party’s input being taken up by the other and synthesized into something entirely new and unique.  In other words, when I have a good conversation with my friend Chris, he throws out an idea for a possible business, I add my ideas for how it could work, and together we’ve created a unique idea.  Whereas when I was talking with my ex, it was more like me watching The View on TV.  Only one input allowed; my eyes glazed over.

In our first class for Applications, Red Burns invited Vito Acconci to speak to us.  An artist and architect, Mr. Acconci described the progression of his art.  What strikes me about his work is that he would focus intensely on one core idea for a while and try to magnify and exaggerate it to its fullest.  So he started out following someone on the streets and taking photos of her, but then he felt too involved in the process, so he built an empty room with a ramp on one side and hid beneath it.  Then he decided that he needed to leave the museum, since anything stuck in a museum is not very public and not very accessible, so he edged towards architecture.

I bring this up because attending the “Talk to Me” exhibit was a bit like going to a zoo, with all the animals behind cages and glass walls.  Here was an exhibit about the interactivity between human and computer, and yet there were plenty of TV screens, “Do Not Touch” signs, and glass cases.  Necessary to protect the artifacts, assuredly, but a constant reminder of their inaccessibility to the masses.

The homeless person’s city folk map, for example, is not really interactive, and not very digital either, but it does add to the exhibit because it shows how iconography can be passed along as a language that is invisible (how often have you seen it?), easy to understand (even for those who are illiterate), and mutable.  I first learned about hobo code through a Mad Men episode, in which Don learns how to tell whether a house is safe to approach.

The Tweenbot, by ITP alumna Kacie Kinzer, is a little cardboard robot that wheels through Washington Square Park and requires the help of passersby to adjust its course.  It seems more interactive, in that people must decode the Tweenbot’s intent and then reposition it to send it toward its stated destination (according to the flag on it).  While this may not fit my description of both inputs changing their own behaviors together to form something new, what seemed to make it more interactive was the author’s videotaping of it and the wide re-broadcast of those interactions to others.  Still, is that true physical interactivity?  If the Tweenbot is designed only to move forward via a motor and has no other potential behavior, does watching people interact with it, fixing its wheels and deliberating with one another about what to do with it, count as physical interactivity?  Is this a Schrödinger’s robot type of situation, where interactivity requires observability by third parties?
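For contrast with the sketch above, the Tweenbot’s entire “brain” could plausibly be something like this; the pin choice and the motor driver are my guesses, and Kinzer’s actual build may well differ:

```cpp
// A hypothetical guess at the Tweenbot's whole program: one motor,
// running forward, full stop. Pin and driver are my assumptions.

const int MOTOR_PIN = 9;  // PWM pin driving the motor through a transistor

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  analogWrite(MOTOR_PIN, 180);  // roll forward at a fixed speed, forever
}

void loop() {
  // Nothing. No sensing, no considering, no responding.
  // Every course correction has to come from a passerby's hands.
}
```

The emptiness of that loop is exactly the puzzle: all three of Crawford’s components live in the humans and the camera, not in the machine.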

Even though the robot has no complex behaviors, by the end of the video you end up loving the dumb little thing, caring about its well-being, and projecting anthropomorphic feelings and emotions onto it.  Would this work as well if you were guiding it on a screen and not physically guiding an actual robot?  I’m not so sure.

The SMSlingshot is a wooden slingshot that lets you enter a text message and then wrist-rocket it onto a screen projected on a building wall or whatever, where it lands as a blob of virtual paint…along with the text message.

This seems physically interactive in that you are creating something unique together with the artifact, which is not merely a tool that lets you do something, but also transmits your information onto a projected screen.  The only problem is that nothing real is created, just a projection upon a physical surface, and the message will likely disappear when the system is reset, unless it is photographed, videotaped, or recorded in data logs.
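If I had to guess at the sending side of such a device, it might be little more than the sketch below: detect the “shot” and hand the typed message off to whatever machine runs the projection.  The flick switch, the pin, and the serial hand-off are all my assumptions about how a system like this could work, not how the SMSlingshot actually does:

```cpp
// A speculative sketch of an SMSlingshot-style sender. Hardware
// details (flick switch on pin 4, serial link to the projection
// machine) are invented for illustration.

const int FLICK_PIN = 4;        // tilt/flick switch closes on release
String message = "hello wall";  // would come from the device's keypad

void setup() {
  pinMode(FLICK_PIN, INPUT_PULLUP);
  Serial.begin(9600);
}

void loop() {
  if (digitalRead(FLICK_PIN) == LOW) {
    // "Shoot": send the message to the projection machine,
    // which draws the paint blob and the text.
    Serial.println(message);
    delay(1000);  // don't turn one flick into many shots
  }
}
```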

The Feltron Report is an example of something that makes good use of digital technology but is not really interactive.  It is a report of personal statistics, built with data visualization, statistical analysis, logging, and other methods of aggregating data and then prettifying it that have only recently become feasible.  It makes the data more accessible to humans while putting it in perspective with everything else.  It is more human-centric in terms of “interface”, though it is not interactive, per se.

Returning to the idea of museums stifling interactivity: although I loved the exhibit, seeing it did feel like wearing blinders.  Each piece had a QR code that you could scan with Google Goggles, but it only took you to a MoMA page with scant data and a poor interface, not optimized for mobile phones.  What I wanted was the ability to tweet that I was looking at a piece and wanted to share it with others.  There was no built-in way to do that.  Also, MoMA wasn’t saving my data in any way, so the only record of which pieces I liked enough to Google Goggle was my Goggles history; this seems like a perhaps unintended but beautiful behavior of the Goggles software: a record of your moments of piqued curiosity.

Thinking about design and interactivity makes me think of Apple’s products.  I’m not an Apple fanboi, but I do appreciate how they’ve upped the game for system interfaces and for access to beautiful artifacts.  The iPad is highly interactive, highly flexible and malleable, and instantly accessible to today’s digital kids.  The price point for those devices, and especially for mobiles worldwide, has gotten so low that even the poor invest in having a phone.  Something like the iconography of the hobo code is being tried in Nokia pilot projects, which are developing system interfaces that use pictures as menus instead of text, since many people in the world are still illiterate.

A key test of physical interactivity, then, to me, is whether the masses can access it and whether it creates something unique from its interaction with humans.