Nature of Code Midterm: Genetic Crossing with Verlet Physics

For my Nature of Code midterm, I added a physics world to my genetic crossing project from last semester.

A short video, sped up quite a bit because my MacBook Air + QuickTime screencap choked on it:

Here are some past posts on the project: http://blog.benturner.com/category/itp/icm/

Professor Shiffman’s notes from his Nature of Code book were invaluable in understanding vector- and particle-based systems, so by the time we got to Box2D and ToxicLibs, it was far easier to understand the underlying framework needed to make a world that reacts to physical forces.

My genetic crossing project, which allowed people to reproduce and carry characteristics from their parents and their religious and national environments, was a perfect project for last semester but also a perfect transition project into a physics system, because I was trying to study the natural flow and connections between many different objects with differing characteristics.

I chose to use ToxicLibs’ VerletPhysics2D library because all I had to do was have my Person, Nation, and Religion classes inherit from VerletParticle2D, then add them as particles with connecting springs so they would react to each other.

class Person extends VerletParticle2D {
  Hashtable trait = new Hashtable();
  String namePerson, gender;
  int parent1, parent2, uniqueID, pWeight, pHeight, age, mbti, lastBaby,
    nationality, religion, children, strength, intelligence, wisdom, charisma, stamina, wit,
    humor, education, creativity, responsibility, discipline, honesty, religiosity, entrepreneurialism,
    appearance, money, gracefulness, stress, health, luck, talentMath, talentArt, talentSports, happiness, employed;
  int[] colorBaseArray = new int[3]; // need base colors for each person
  float[][] traitPosArray = new float[numCharacteristics][2]; // save x,y for all characteristics
  float[][] traitPosArray_orig = new float[numCharacteristics][2]; // original copy of coords
  boolean alive;

  // constructor
  Person(int _uniqueID, int _mbti, int _stress, int _health, int _gracefulness, int _luck, int _talentMath, int _talentArt, int _talentSports, int _strength,
  int _intelligence, int _wisdom, int _charisma, int _stamina, int _wit, int _humor, int _pWeight, int _pHeight,
  int _education, int _age, int _creativity, int _responsibility, int _discipline, int _honesty, int _religiosity,
  int _entrepreneurialism, int _appearance, int _money, String _namePerson, String _gender, int _parent1,
  int _parent2, int _nationality, int _lastBaby, int _religion, int _children, int _employed, float x, float y) {
    super(x, y); // hand the position to VerletParticle2D so the physics world can move this person
    // ... field assignments omitted
  }
}

I found out two things pretty quickly.  First, I made this project with arrays and a Hashtable, but had I known better, I would have used ArrayList and no hashtables (just variables with integer values).  The other?  I had to rewrite my class’s xPos and yPos variables as x and y, the position fields inherited from VerletParticle2D, and use them directly instead of keeping my own separate variables, in order for the objects to be recognized in the physics world.  Before that, my objects would drop like a stone in a pond: they were “particles” in a world with gravity but weren’t actually connected to the system.
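Here’s a minimal sketch of that fix, stripped down to the pattern (my illustration, not the project code; it assumes the toxiclibs imports from Shiffman’s examples):

import toxi.geom.*;
import toxi.physics2d.*;

// The position lives in the x and y fields inherited from VerletParticle2D,
// so the physics engine actually moves this object every timestep.
class Ball extends VerletParticle2D {
  Ball(float x, float y) {
    super(x, y); // no separate xPos/yPos variables
  }

  void display() {
    ellipse(x, y, 16, 16); // draw at the physics-updated coordinates
  }
}

// And an ArrayList avoids the fixed-size array bookkeeping:
ArrayList<Ball> balls = new ArrayList<Ball>();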

I still have a problem with bizarre array counting when adding the huge number of springs that connect every person, nation, and religion to each other, with more added as each new person is born into the world.  I’m probably wasting tons of memory and overwriting certain springs because I didn’t make entirely unique arrays (see the sketch after the code below for one possible fix).

But the simulation is now operating within a 2D world, and I can go through the process of tweaking the relationships between the objects to more clearly match the characteristics/heredity/uniqueness that I coded into the simulation last semester.  God is now located in the center, instead of the bottom right, so he is a fixture within the world, with the nations and religions along the periphery of the map (no meaning behind that!) to space out the particles, which are tethered by their springs.

Relevant code:

  physics.addParticle(god);
  god.lock(); // fix God in place so he acts as the world's anchor

  // #PeopleChange init all person objects & nations
  for (int i=0;i<numPeople;i++) {
    physics.addParticle(person[i]);
  }

  // tether every person to God with a constrained spring
  for (int k=0;k<numPeople;k++) {
    springArray[k] = new VerletConstrainedSpring2D(god, person[k], godRL, godGravity);
    physics.addSpring(springArray[k]);
  }

  int z=0;
  for (int k=0;k<numPeople-1;k++) {
    if (z < numPeople) { // this guard cuts spring creation off after the first pass through the inner loop
      for (int l=0;l<numPeople;l++) { // and when l == k, this creates a self-spring
        newPersonSpringArray[z] = new VerletConstrainedSpring2D(person[l], person[k], int(random(250, 650)), random(0.001, 0.003));
        physics.addSpring(newPersonSpringArray[z]);
        z++;
      }
    }
  }
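For reference, here’s a sketch of one way I could clean this up (untested; it iterates over unique pairs so each spring gets created exactly once, and lets an ArrayList grow with the population instead of a pre-sized array):

  ArrayList<VerletSpring2D> personSprings = new ArrayList<VerletSpring2D>();

  for (int k = 0; k < numPeople; k++) {
    for (int l = k + 1; l < numPeople; l++) { // k < l: each pair once, no self-springs
      VerletConstrainedSpring2D s = new VerletConstrainedSpring2D(
        person[k], person[l], random(250, 650), random(0.001, 0.003));
      personSprings.add(s);
      physics.addSpring(s);
    }
  }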

Code is on GitHub.  I recommend running this in Processing since I have it set to something like 1200×700 pixels.

Mobile Web: 2 Classes Left

With a couple classes to go, here is the status of StreetEyes, the app made by Phil and me:

Mainly what we have left to do is add map markers for some dummy users so that we can send them a view request, pass that request along via the database so the dummy users can see it, and then have them accept it.  We need to make sure the HTML forms passing along the broadcaster’s submission as well as the viewer’s request are linked up.

Perhaps the most work needs to be done on making the beacon page viewable so that you can easily scroll down a list of updates sent by various people, with new updates added dynamically.

Redial Midterm Ideas

I’m not sure what to do for my Redial midterm at all.  We’re supposed to demonstrate our understanding of concepts we’ve learned in class, such as creating dialplans, routing within a dialplan, recording audio and playing it back, and interfacing with Ruby, PHP, the shell, and AJAX.

I think what interests me the most is doing some server-side scripting in the background to make the dial-in appear intelligent and interactive.  Technically and conceptually, I’m mostly hung up on not knowing how to do on-the-fly text-to-speech synthesis, which would streamline creating an interactive dialplan.  More than anything else, not having that functionality limits how much I experiment with Asterisk.

So far I’ve just made some quick example dialplans, though I did incorporate one into a puzzle I made, and I added an extension that gives you Jeremy Lin’s in-game points and assists by scraping ESPN’s web site and returning the numbers.

My favorite telephony-related project so far is Big Screams, created by an ITP alum, Elie Zananiri:

People call in to Big Screams and their phone numbers generate unique creatures who show up on a big row of screens.  The caller then has to scream into the phone, and the loudness repels other creatures on-screen.  If someone pushes you off the screen, you get hung up on and can dial back in.  Since the creature is generated from your phone number, you look the same the next time you dial in, too.

Here are the ideas I’ve come up with so far.  I’m not usually this clueless when coming up with project ideas, but I’m stumped on how to take advantage of what a phone is good at and how to build that within Asterisk/scripting.

  • Android app that lets you quickly and discreetly press a panic button so that it dials you in 5 minutes or whatever, so you can get out of an awkward situation.  “OH, LOOK, I HAVE A PHONECALL!  GOTTA GO!”
  • Something like Big Screams, where someone dials in, answers a series of questions, presses 0 through 9 (such as, rate yourself creativity-wise, 0-9), and then creates a character based on those entered variables, via Processing.  This might be more of a final project, in conjunction with my Nature of Code class.
  • Simon. Like the old toy that lit up in certain patterns that you had to repeat.  This could be done with tones and a dialpad.
  • Rock, paper, scissors, multi-player, having two players call in and the results displayed on a monitor.
  • System that, when you dial in, calls a random person in its database to link you two up.  It’s opt-in, and would be like a phone version of chatroulette.  Minus the penises.
  • Multi-User Domain (MUD) where you use the dialpad to move and use things.  So you can press 1 to move northwest.  5 to attack or use, # to block, * to exit.  Or whatever.  This one is pretty text-to-speech heavy, which discourages me.  Maybe it could move a character on a large monitor.  There could be an epic melee with multiple users?


That’s all I’ve got so far. :/

ITP Puzzles

So at ITP, it all started when Michael Colombo posted an email to the list about an as-yet unsolved puzzle he’d left up on his locker: a cardboard cutout taped to the door.  Here is a similar image of it from the web:

This led me to start posting my own puzzles on my locker.  Can you figure them out?  There are two in this photo (don’t count the “READING IS SEXY” sticker):

And the most recent one here:

Email me if you think you’ve solved any of them.

Nature of Code Week #4: Particle Systems

Nature of Code continues to be fascinating.  Professor Shiffman’s notes are incredibly well done and very easy to understand, as he explains pretty dense physics problems step by step; the forthcoming Kickstarter-funded Nature of Code book will be well worth its price.

This week we covered particle systems, inheritance, and polymorphism.  The particle systems go hand in hand with repelling forces, which were built upon applying forces to other objects, which in turn was built upon vector movement.  I’ve been having to re-read the notes for each chapter several times each week to make sure I understand them.
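As a quick recap of that chain in code, here is my own summary sketch of the idea (not the book’s example verbatim):

class Mover {
  PVector location = new PVector(100, 100);
  PVector velocity = new PVector(0, 0);
  PVector acceleration = new PVector(0, 0);
  float mass = 1;

  // a force becomes acceleration (a = F/m), accumulated over the frame
  void applyForce(PVector force) {
    acceleration.add(PVector.div(force, mass));
  }

  // acceleration changes velocity, velocity changes location
  void update() {
    velocity.add(acceleration);
    location.add(velocity);
    acceleration.mult(0); // reset so forces don't pile up across frames
  }
}

Repellers, springs, and particle systems all build on this same accumulate-and-update loop.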

Our homework:

“At this point we’re a bit deeper in the semester and approaching the midterm project. Feel free to simply start on a midterm idea or continue something you’ve been working on previously. If you would like to try an exercise related to particle systems, here are some suggestions:

  • Use a particle system in the design of a “Mover” object. In other words take, say, one of our earlier examples and instead of rendering a Mover object as a simple circle, emit particles from the mover’s location. Consider using the Asteroids example and emit particles from the ship when a thrust force is applied.
  • Create a particle system where the particles respond to each other via forces. For example, what if you connect the particles with spring forces? Or an attraction / repulsion force?
  • Model a specific visual effect using a particle system — fire, smoke, explosion, waterfall, etc.
  • Create a simulation of an object shattering into many pieces. How can you turn one large shape into thousands of small particles?
  • Create a particle system in which each particle responds to every other particle. (Note we’ll be doing this in detail in Week 6.)”


I immediately started playing with creating systems of systems of particles, using Prof. Shiffman’s example code.  I turned down the particle size, increased the constrained force distance, and increased the number of repelling objects to 60.  I had some particle systems randomly placed at sketch startup, but also made it so you could add more systems on mousePressed().  I tweaked the color and background to look more like fading embers or fire, then made the repeller objects very faint.  The particles gradually fade away until they die, using opacity as a measure of life strength: when opacity reaches 0, the particle dies and can be removed.  This helps preserve the framerate.

The result is that the system ends up looking like Dorsey-ish flows of traffic, or like watching dynamic internet traffic pass through the world’s backbones at night.  Feedback in class suggested I try adding repeller forces as geometric shapes, to guide the particles instead of generating random patterns.

Particle(PVector l) {
  acceleration = new PVector(0, 0);
  velocity = new PVector(random(-0.3, 0.3), random(-0.1, 0.1));
  location = l.get(); // copy the emitter's position
  lifespan = 200.0;   // doubles as the particle's opacity
  randColor = color(int(random(150, 230)), int(random(90, 130)), int(random(25, 90))); // ember palette
}

and

for (ParticleSystem2 psX: systems) {
  psX.run();         // update and draw every particle in this system
  psX.addParticle(); // keep emitting new particles
  for (int i = 0; i < repellers.length; i++) {
    psX.applyRepeller(repellers[i]); // each of the 60 repellers pushes on the system
  }
}
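The fade-to-death logic mentioned above boils down to something like these two methods on the Particle class (a sketch, assuming lifespan is passed as the alpha value wherever the particle is drawn):

void update() {
  velocity.add(acceleration);
  location.add(velocity);
  lifespan -= 1.0; // fade a little each frame
}

boolean isDead() {
  return lifespan <= 0.0; // fully transparent, so it can be removed from the system
}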

A video:

Code is at GitHub & OpenProcessing.org.

What I’m thinking now is that, for my NoC midterm and final, I can adapt my final from Intro to Computational Media last semester so that all my objects are essentially particles exerting forces on each other.

For a recap of my project, read this long blog post.  Simply put, the project was a simulation incorporating the traits of people, nations, and religions, creating offspring who are summations of their genetics and environments.

I might try to make the text labels for each person orbit the particles, but this might result in a massive drop in framerate.  Here’s a screenshot of what my final project looked like.
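If I do attempt the orbiting labels, the placement itself is cheap; it’s drawing text every frame that may hurt.  A hypothetical sketch (the radius and speed are made up):

void drawLabel(VerletParticle2D p, String name) {
  float angle = frameCount * 0.02; // roughly one revolution every 314 frames
  float r = 30;                    // orbit radius in pixels
  text(name, p.x + r * cos(angle), p.y + r * sin(angle));
}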

What’s good is that I didn’t attempt movement last semester, since I didn’t really know how to dynamically and smoothly move the “people” using vectors.  I am a little worried I’ll have to re-write some deep parts of the code in order to get this to work, but it’s a worthy project.

Dynamic Web Dev Week #4 Homework: Karaoke Flow

My Karaoke Flow project has now been moved over to Node.js Express.  What a joy!  Express is like a beautiful JavaScripty version of Sinatra for Ruby, and the node module nodemon is a perfect substitute for Sinatra’s shotgun.  This stuff makes webdev exciting again!

I pushed the new code up to GitHub; it uses a few views to display the (so far minimal) code and templated structure.  I’ve also deployed the app to Heroku; you can see it at http://karaokeflow.herokuapp.com/ but nothing’s really working there yet, since we haven’t learned how to hook into MongoDB.

Here are some screens of the app now:

Next steps: set up the MongoDB structure, hook into it, and figure out a way to implement a timer that starts once someone creates a flow.  Then people need to be able to join that flow, and everyone needs to be on the same timer.  I’m not sure how this will work; maybe I’ll just scale it back for now.  I’d also like to have random hiphop beats playing while people come up with their flows.  The production side, once someone starts performing, will have to come at the late stages, if it’s possible at all.

Mobile Web: Week #3 Homework

For the homework, we were supposed to go through some JavaScript exercises in the browser console to play with the Document Object Model and with events.  We were also supposed to post some screenshots of the first pages made for our mobile apps.  To avoid spamming my blog, the rest of the post is below the jump.


Mobile Web: Homework and Project Proposal

For this week’s mobile web homework, we were to open the default PhoneGap app project within Eclipse and then mess with the underlying HTML and CSS from the JavaScript developer console.  I did some simple selections, creating a variable “test” to store the collection of all the “H1” elements, using “document.getElementsByTagName”.  After that, I changed the innerHTML, style.color, and style.background to demonstrate how anything in the DOM can be manipulated using the console.

Screenshots:

HTML source

changing some H1 tags to black background

changing some H1 tags to red text

For our homework we also had to work on our final project proposals.  Phil and I are working together on LiveBeam.  We have some pencil wireframe photos below, which will be converted into the front end next week.  The plan follows.  We also entered LiveBeam into the NYU ITP Pitch Fest.

Project: LiveBeam

Team:

Phil Groman, Ben Turner

Summary:

LiveBeam is your eyes on the ground.  It coordinates people who want to see images or updates or video from a remote location, together with people with mobile phones who can provide that on-the-ground real-time information.

Core Functionality:

  1. There is a dynamic interface with a map locating active Broadcasters worldwide.  A “viewer” sees a “broadcaster” at a certain location on a digital map in his browser.  Broadcasters can update their profiles concerning events/situations happening around them.
  2. The viewer clicks on the broadcaster and requests multimedia from the broadcaster by way of sending a message through LiveBeam.
  3. The broadcaster can look at the requesting viewer’s profile, accept the request, capture the multimedia, and share it via LiveBeam.
  4. The viewer can chat with the broadcaster to direct the information, then leave feedback and rate the quality and relevance of the video feed.
  5. Levels of openness and privacy can be controlled by the broadcasters to only allow video requests from certain contact groups or individuals.
  6. The viewer confirms receipt of the multimedia, ending the transaction and registering appropriate credit to both parties automatically.


Potential Use Cases:

  1. Check conditions at specific locations (traffic / weather / waves / lines at restaurants)
  2. Share live sports / music events
  3. Watch / broadcast breaking news as it happens
  4. Stalk / spy on people
  5. Idle travel to interesting locations
  6. Live pornography
  7. News organisations can access a global network of amateur video journalists
  8. Friends can offer a request-based video feed in Facebook status updates — “Beautiful sunset over the East River — Join Me at LiveBeam”
  9. Allows anyone to become a Broadcaster and to build a live audience


Potential Revenue Streams:

  1. Advertising — banners and pre-rolls
  2. Subscription services — premium accounts with more viewing and broadcasting privileges and functionality
  3. Partnership with a news agency
  4. Sales of branded mobile handsets, optimized for broadcasting
  5. Promotion of top curators and broadcasters, having them pay a fee for premium accounts (see #2) while sharing revenue with them like YouTube does


Stage 1: Wireframing and Proposal (Due: 09 Feb 12)
Show wireframes of what various screens may look like.

  • Tasks:
    • Convert meeting notes into proposal and project plan
    • Make wireframes showing mockups of main screens used by broadcasters and viewers
    • Documentation for class
  • Questions / Pitfalls:
    • Are we staying focused on our core functionality to get that working first?
    • Will we have enough time to build core functionality within the course’s 7-week timeframe?
  • Resources:
    • Adobe Photoshop, meetings, Google Docs


Stage 2: Front-End Design (Due: 16 Feb 12)
Convert wireframes to a front-end with dynamic interface, using jQuery Mobile, jQuery, CSS, JavaScript.

  • Tasks:
    • Make main user interfaces for broadcasters and viewers
    • Decide on buttons and layout
  • Questions / Pitfalls:
    • Avoid making the interface too complex with too many features or buttons
  • Resources:
    • jQuery Mobile, jQuery, CSS, JavaScript, virtual web host, PhoneGap


Stage 3: Back-End Design (Due: 23 Feb 12)
Add a back-end database to save data, change front-end so it can interface with the database.

  • Tasks:
    • Link up database with appropriate search and data entry queries via PHP/MySQL
    • Add in privacy settings (Google+ circles possibly, or just add individuals)
  • Questions / Pitfalls:
    • May not know exactly how we want to structure our data or db interfaces
  • Resources:
    • PHP, MySQL, HTML


Stage 4: User Handshaking (Due: 23 Feb 12)
Test and strengthen robustness of handshaking and connection between viewer and broadcaster.  Add in GPS/geolocation/triangulation/manual location entry.

  • Tasks:
    • Test with dummy accounts the basic interactions between viewers and broadcasters
  • Questions / Pitfalls:
    • Keep the interactions simple, work iteratively so as to avoid deep-rooted bugs
    • Will we need a user authentication system or can we use fake accounts for a demo?
  • Resources:
    • Other students, PHP, MySQL, PHP code breakpoint and use case testing


Stage 5: Multimedia Input and Sharing (Due: 01 Mar 12)
Add in video streaming or use external source.  Share video, photos, audio, text through LiveBeam after evaluating best options for each.

  • Tasks:
    • Add in GPS or other geolocational sharing
    • Allow broadcaster to send or share video and images
    • Allow viewer to see chat, video, etc. in viewer’s window
  • Questions / Pitfalls:
    • How accurate can we rely on the geolocation to be, or do we need to work around it?
    • Can we capture video or photos into PhoneGap?
    • Can we share video and photos through the app or will we rely on third-parties?
    • Will we need to link up with other services’ APIs?
    • Will we need to provide links to other services to show multimedia?
  • Resources:
    • PhoneGap, virtual web server, data hosting


Stage 6: Use Cases and User Testing (Due: 01 Mar 12)
Test with multiple users and pre-populated testing accounts.  Conduct actual tests within Manhattan.

  • Tasks:
    • Test core functionality with other ITP students
    • Fix bugs and add in redundancy for potential failpoints
  • Questions / Pitfalls:
    • Will need a lot of time to conduct proper user testing
  • Resources:
    • Other ITP students, extensive note-taking, video recording, screen capturing?


Stage 7: Build a Presentation (Due: 08 Mar 12)
Prepare a slide presentation including research, explanation of core functionalities, potential use cases.

  • Tasks:
    • Make slidedecks for presentations, both technical and business
    • Competitor research
    • Market research
    • Results of user testing
    • Explanation of core functionalities
    • Explanation of potential use cases
  • Questions / Pitfalls:
    • May need to conduct this stage throughout in order to stay on track
    • May need to wait to build business until after successful user adoption
  • Resources:
    • Meet with entrepreneurship mentors and Stern folks for advice

Redial: Playing with Asterisk & Voicemail

For Redial this week, I’m supposed to give a short presentation.  I chose as my topic phone security, operational security, basic cellphone theory, and the burners used in The Wire.  Should be fun.  I didn’t prepare any notes or make a slide presentation.

This week’s homework for Redial involved building a simple voicemail dialplan.  Here’s mine:

[vt520_week2hw]
;exten => s,1,Wait(1)
;exten => s,n,SayDigits(${CALLERID(num)})
; same => n,Playback(vm-num-i-have)
; same => n,SayDigits(9725107983)
; use Archer's joke voicemail sound
;exten => s,n,Playback(demo-echotest)
;exten => s,n,Echo()
;exten => s,n,Playback(demo-echodone)
;exten => s,n,Hangup()

[vt520]
exten => s,1,Wait(1)
;exten => s,n,Playback(tt-weasels)
exten => s,n,Playback(/home/vt520/asterisk_sounds/archer_voicemail)
exten => s,n,Goto(vt520-vm,s,1)

[vt520-vm]
; vm-review: Press 1 to accept this recording. Press 2 to listen to it. Press 3 to re-record your message.
exten => s,1,Voicemail(112@vt520_voicemail,u)
;exten => s,n,Record(asterisk-recording%d:ulaw)
exten => s,n,Playback(vm-review)
;exten => s,n,Festival('Press 1 to continue or 2 to change your message')
exten => s,n,WaitExten(10)
exten => s,n,Hangup()
; voicemail will go to the a extension if * is hit during the Voicemail app
exten => a,1,VoiceMailMain(112@vt520_voicemail)
exten => a,n,Hangup()

exten => 1,1,Playback(queue-quantity1)
exten => 1,n,SayNumber(18)
exten => 1,n,Playback(queue-quantity2)
exten => 1,n,Hangup()

exten => 2,1,Playback(${RECORDED_FILE})
exten => 2,n,Playback(vm-review)
exten => 2,n,WaitExten(10)
exten => 2,n,Goto(vt520-vm,s,1)

Basically what is going on here is that when you dial in to my extension, it first plays back Sterling Archer’s elaborate voicemail hoax, as I’m not available to take the call:

I recorded the audio off a copy of the Archer episode using Fraps, trimmed the clip, then exported it to WAV via Adobe Premiere.  Then I had to convert the WAV to a .sln file using the command:

sox archer_voicemail.wav -t raw -r 8000 -s -w -c 1 archer_voicemail.sln

…which I learned about through voip-info.org’s helpful WAV conversion page.

I also found voip-info.org’s list of audio files and descriptions to be very useful for locating exactly which pre-recorded sound file would be appropriate.

Once the voicemail message plays, the system prompts the caller to record a message.  I wasn’t able to get recording to work here, though in other experiments I did manage to get it to attempt a recording and then email me the voicemail details.  But this dialplan WILL let you confirm the message you left, then move to another “extension” via a menu before eventually hanging up.

I used a Goto function between the initial voicemail message and the rest of the dialplan so that later I could make it loop back to recording the message again if necessary.

Shout-out to Archer and the Archer show crew and cast for the voicemail.