Today in our international negotiation class, one of the readings we looked at was Robert Axelrod’s highly influential article, “The Evolution of Cooperation”.

In it, he describes a computer tournament he ran based on the game-theory scenario known as the “prisoner’s dilemma”.  Basically, the game assumes two prisoners being questioned by the police separately.  If both stay mum about the crime they committed together, they both get a light punishment.  If both tattle on the other, they both get a fairly bad punishment.  But if one informs on the other while the other stays silent, the traitor gets the lightest punishment and the silent one gets a really horrible one.  So it would seem to be “safest” to inform on the other party, just to be sure.

Instead of analyzing a single round of the game, he extended it to run over many iterations, with each strategy able to respond to the other’s past moves, to find which approach produced the best long-run outcome for both parties.

What Axelrod concluded was that the best strategy in this game is to open with cooperation and then reciprocate the other party’s previous move.  That is, the first thing you should do is cooperate.  If the other party defects, you should reciprocate by defecting too, and keep mirroring their moves henceforth.  But your overall stance should be a willingness to cooperate: as soon as the other party returns to cooperation, so do you, forgiving past defections.

Basically, as our professor put it, this strategy is “be nice, retaliatory, and forgiving”.
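To make that concrete, here is a minimal sketch of the iterated game in Python. The payoff values are the standard ones Axelrod used (5 for defecting on a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for being the sucker); the function names and the ten-round match length are my own illustration, not something from the article.

```python
# Payoffs from (my_move, their_move) to my score; 'C' = cooperate, 'D' = defect.
PAYOFFS = {
    ('C', 'C'): 3,  # reward for mutual cooperation
    ('C', 'D'): 0,  # sucker's payoff
    ('D', 'C'): 5,  # temptation to defect
    ('D', 'D'): 1,  # punishment for mutual defection
}

def tit_for_tat(opponent_history):
    """Be nice, retaliatory, and forgiving: open with cooperation,
    then mirror whatever the opponent did last."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A purely exploitative strategy, for comparison."""
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Run an iterated match and return each side's total score."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(moves_b)  # each strategy sees the opponent's past moves
        b = strat_b(moves_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Notably, tit-for-tat never outscores its opponent in any single match, yet it won Axelrod’s tournament on total points, which is the sense in which being “nice, retaliatory, and forgiving” pays off in aggregate.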

There are flaws in this game, as my brilliant classmates pointed out:  parties are rarely equal in international negotiations (the US vs. Panama), the first defector ultimately has a slight advantage, and sometimes payoffs for cooperating/defecting are different for each party.

But I am interested in the overall message:  reciprocity.

If I am going to try to build a comprehensive reputation system, then it cannot just be a quantitative measure of your life-long statistics.  It must also incorporate social and qualitative measures, such as what other people think about you.  In other words, your reputation as we tend to think of it now.  It is an amalgamation of all the acts that other people know (and suspect) you’ve done.

If you are going to evaluate someone else’s honesty and trustworthiness, you want to know whether they’ve ever pulled a fast one on someone else.  Even then, would you still conduct commerce with them?  The system will allow people to post bad things about other people, something that’s fairly neutered online right now (for obvious reasons).  But to counter this, there must be a mechanism for reciprocation, both by the person aggrieved and by the system itself.

Yes, you can post something negative about me.  But the system will see this as a negative attack that should cost you something, and both my friends and I will have methods to attack you back.  All done in public.  So it’s a transparent system that holds people accountable for their actions.  It allows for reciprocity.
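As a thought experiment, the costly-attack mechanic might look something like the sketch below. Everything here, the class, the field names, and the point values, is my own hypothetical illustration of the idea, not a description of any existing system.

```python
from dataclasses import dataclass, field

FEEDBACK_COST = 2  # assumed: attacking someone spends your own reputation

@dataclass
class Member:
    name: str
    reputation: int = 10
    log: list = field(default_factory=list)  # public, append-only record

def post_negative(author: Member, target: Member, note: str):
    """Record a public negative report; the author pays to make it."""
    author.reputation -= FEEDBACK_COST  # negative feedback is never free
    target.reputation -= 1              # the target takes a smaller hit
    entry = (author.name, target.name, note)
    author.log.append(entry)            # the entry appears on both sides,
    target.log.append(entry)            # so every exchange stays in public view
```

The aggrieved party can reciprocate through the very same call, and because both logs carry the same entries, the whole exchange of attacks and counter-attacks remains transparent to everyone watching.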

Says Axelrod:

“Once the word gets out that reciprocity works – among nations or among individuals – it becomes the thing to do. If you expect others to reciprocate your defections as well as your cooperations, you will be wise to avoid starting any trouble. Moreover, you will be wise to respond appropriately after someone else defects, showing that you will not be exploited. Thus you too would be wise to use a strategy based upon reciprocity. So would everyone else. In this manner the appreciation of the value of reciprocity becomes self-reinforcing. Once it gets going, it gets stronger and stronger.  This is the essence of the ratchet effect: Once cooperation based upon reciprocity gets established in a population, it cannot be overcome even by a cluster of individuals who try to exploit the others. The establishment of stable cooperation can take a long time if it is based upon blind forces of evolution, or it can happen rather quickly if its operation can be appreciated by intelligent players.”

It will be interesting to see how different personalities approach this functionality.  Will some people defect on others more often?  Will some really come to value being trustworthy?  Will people become hypersensitive about any sort of negative attack, as with eBay’s buyer/seller ratings?  How can the system facilitate equitable outcomes?

These won’t be easy questions to answer, but my overall goal is to create a reputation management system that rewards cooperation and social capital while at the same time valuing negative feedback, both in the sense of finding it necessary and important, and in the sense that if you say something bad about someone, it should cost you something to take that course of action.

A last point I want to make is about the value such a system holds:  its permanence.  It establishes a long-term view and imposes accountability on all users.

Axelrod says, “For cooperation to prove stable, the future must have a sufficiently large shadow. This means that the importance of the next encounter between the same two individuals must be great enough to make defection an unprofitable strategy.”
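Axelrod formalizes that shadow as a discount parameter w, the weight (or probability) of the next encounter with the same party.  As I read his result, tit-for-tat resists invasion by defectors only once w clears a threshold determined by the payoffs; the helper below is my own sketch of that calculation, using the standard payoff values.

```python
# Standard payoffs: temptation, reward, punishment, sucker's payoff.
T, R, P, S = 5, 3, 1, 0

def stability_threshold(t, r, p, s):
    """Smallest discount parameter w at which tit-for-tat is stable,
    per Axelrod: w >= max((T-R)/(T-P), (T-R)/(R-S))."""
    return max((t - r) / (t - p), (t - r) / (r - s))

w_min = stability_threshold(T, R, P, S)
print(round(w_min, 3))  # 0.667 -> the next encounter must carry at least
                        # two-thirds the weight of this one
```

In other words, when future meetings are likely and weighty enough, defection stops being profitable, which is exactly the “sufficiently large shadow” the quote describes.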

Clay Shirky, who’s been thinking about reputation a lot lately, talks about the “shadow of the future” as well, drawing on Axelrod’s reciprocity.  You get a sense that Ayn Rand-ish selfishness and an every-man-for-himself attitude is only a short-term game.  Some might call this social Darwinism or survival of the fittest, or Malthusian or gladiatorial (as Wikipedia puts it in its article on Axelrod’s “The Evolution of Cooperation”).

But the shadow of the future, the collective and individual memory we hold of the past, allows us to play for the long term.  And this is where altruism and “mutual aid” come in.  It may serve our purposes better to work together than to play only for ourselves (the lesson of the prisoner’s dilemma).  We may need to be competitive to advance, but if we destroy each other in a zero-sum game, then we will all die.  The ecosystem we are all a part of thrives when we’re all there, participating.

So this system was not born of a cutthroat mentality where every person must compete with everyone else.  It is that ecosystem, made up of many levels and units and layers of biomass and data, and the more datamass there is, the more information we can learn from it in order to grow individually.  It is a network of trust and accountability that creates teeming supplies of social capital, so that we can conduct better business, social networking, and self-awareness.

Mike Neuenschwander wrote a post on how it’s good that social science is entering the internet security discussion (a field that feeds on reputation), which had been dominated by computer science in the past.  As he says,

“However, I’d like to add that Axelrod’s work is only a starting point—a portal into the discipline of what I now refer to as “social trust online.” Some of my “Laws of Relation” hark back to Axelrod. But Axelrod’s work on reciprocity isn’t sufficient for developing new pathways to trust on the Internet. In fact, filling in all the other applicable research on trust is the entire purpose of my contributions to this site.”

So, with that humbling thought in mind, there’s a lot more research to do before the massive, all-encompassing trust network for every human and thing on Earth becomes what I envision it to eventually be.

One Comment

  1. Thanks for the reference to my post! In addition to Axelrod, I think you’ll find the work of Elinor Ostrom enlightening. Her work on resolving social dilemmas in “common pool resources” is very applicable to Internet security. Judith Donath also has some fascinating work on signaling.
