The homework for PComp involved wiring up two different analog sensors to an Arduino and displaying their values both as serial output and as changing LED brightness.
Two screenshots from Professor Fitzgerald’s class, showing clean boards and wiring:
Here is my layout, showing both LEDs on.
The first part of the lab involved simply hooking up a potentiometer (in this case, a small blue volume-like knob) and then outputting both to Serial and to an LED. Video below:
You can tell that in the Serial monitor, the value gets higher as the knob is turned to the right. The LED, however, only had two states, on and off, when it was supposed to get gradually brighter as the potentiometer turned. I discovered in the next part of the lab that the reason was that the LED needed to be driven from one of the Arduino Uno's PWM-capable pins, not a plain digital pin.
The tilde (~) next to certain pin numbers marks them as PWM-capable: calling analogWrite() on one of those pins pulses the output on and off very quickly, and varying the duty cycle approximates an analog voltage. That is what lets the LED step through many brightness levels, whereas a purely digital pin can only be fully on or fully off.
In the next stage of the lab, another variable input was added, and since I had no other knobs, I used the force-sensing resistor, which detects pressure placed on it (for example, by squeezing it). Both sensors were supposed to gradually dim or brighten the LED, depending on the analog input values. Video below:
At first I was getting weird power-bleeding errors: the FSR would work when I squeezed it (its value increased in the serial monitor), but when I turned up the blue knob, both the knob's analog value and the FSR's would go up. This indicated that their circuits were connected somehow.
Eventually I realized (after some excellent troubleshooting from Roopa!) that I had power and ground hooked up to the same row on the breadboard, effectively shorting them together. Once I straightened that out, along with moving the LEDs to PWM pins, my circuits worked exactly as intended, with both the FSR and the knob reading on a scale from 0 to 1023.
For my creative project, I am planning on building an envelope that captures a wife's kiss before she sends it off to her deployed husband, preserving the magic of sealing an envelope with a kiss, unlocked only by the intended recipient. It will work with an FSR, which is not ideal, but I think it will convey the idea okay. :)
A list of the sensors I come across on my commute from Tompkins Square Park to the Tisch building:
- Touchscreen on HTC Incredible Android phone when I turn off my alarm
- HTC Incredible checks sunlight input via sensor and dims screen accordingly
- CPU temperature sensor on PC desktop triggers CPU and case fans to spin faster or slower
- Scrollwheel on mouse detects user input when scrolling through a page
- Touchpad on MacBook Air detects multiple finger presses along with movement
- Fridge sensor cools fridge when it gets too warm
- Traffic lights detect when ambulances with strobe lights pass by
- Police radar guns measure target speeds across multiple radar bands
- Swiping my NYU ID card at the gym allows me entry through the turnstile
- Inserting ID card into copy machine at library and then adding money via bills
- Sensors in faucets and for towel dispenser in bathroom detect motion
- Bar code scanners at Walgreens for buying goods
- Adjusting volume of speakers for computer in classroom
- Using accelerometer on Nike+ to calculate running distance automatically
Glasses that let you tap into a darknet, similar to Daniel Suarez’s “Daemon” and “Freedom (TM)” books. You would put on normal-looking glasses and they would verify your identity through a retinal scan, then display your chosen “layers” of data over your view. So above everyone’s heads, you would see data about them. People who were not on the darknet would be somewhat derisively called “NPCs”, or non-player characters, for choosing not to interface with this deeper level of integration. The overlays above people would show their reputations, perhaps their “soundtracks”, their openness to dating, their interest groups…whatever they chose to display there that wasn’t public by default.
The layering for people would probably need to be done by some form of direction- and range-finding, along with body recognition to identify people’s heads. Data would be transmitted by something with a slightly better range than Bluetooth. GPS would be too unreliable for this type of feature, since it would require outdoor use with line of sight, plus locking onto satellites over and over. We would need something more lo-fi.
Obviously there would be data about objects, buildings, etc. in our heads-up displays as well, using similar broadcasting tech from those non-human objects. By asking for directions to a place in your HUD, a green line would extend towards your destination in your HUD, so you just have to follow the line.
The problem with my fantasy device in the context of this discussion is that it is not really physically interactive. While I would prefer glasses that could recognize your hands and let you manipulate an interface with them, somewhere between Minority Report and the Kinect, for now I would settle for a cellphone or iPad as an extended interface.
Objects and people would appear for manipulation on your screen, letting you add them as starred IDs or add the person as “followed” or add their contact info or whatever. The device would also be something like Neal Stephenson’s primer from The Diamond Age, with an anonymized actor on the other end of the network interacting with you through the device to help you get to where you need to go, or to help children learn at their own pace. An immediate application for all this would be a game in which you’re looking at augmented reality and trying to hunt down other people, or do scavenger hunts.
The screen would exchange data with the glasses, and would be operated by touchscreen plus whatever knobs or buttons it would also need.
Mostly I am interested in having the glasses display people’s reputations and other data about them, so that you can decipher these things when meeting someone or gauging how safe you are. There’s something to be said for anonymity, but there’s also something to be said for an alternate system where those who opt in can enjoy layered augmented-reality data.