The Evolution of Cooperation by Robert Axelrod

The Evolution of Cooperation was, somewhat surprisingly, a story about math: math that actually describes a lot of things in life. It’s the story of The Prisoner’s Dilemma.  What makes The Prisoner’s Dilemma interesting is that the players in the game have conflicting incentives.  You can be rewarded either for cooperating or for defecting.  Unlike most things we think of as “games”, it is not zero-sum: both players can win, and both players can lose.  Too often it seems like this possibility is forgotten.  The dilemma goes like this.

Two suspected accomplices are taken into custody for a crime and separately interrogated.  Each is pressured to rat out the other.  If neither of them squeals (they cooperate with each other), then both of them get short jail terms.  If both of them rat, they both get fairly long terms.  If only one of them gives in, and the other remains silent, then the fink gets off, and the honorable thief goes away for a long, long time.

Mathematically, there are four possible payoffs: for giving in to Temptation (T), for mutual Cooperation (C), for mutual Defection (D), and for being the Sucker (S).  For a situation to be a Prisoner’s Dilemma, you must have T > C > D > S.  Additionally, the payoff for mutual cooperation must be greater than the average of the Temptation and Sucker payoffs, i.e. 2C > T + S, so that the players cannot do better by simply taking turns exploiting each other.
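
To make that concrete, here is a minimal sketch (mine, not the book’s) that checks both conditions, using the payoff values Axelrod actually used in his tournaments (T=5, C=3, D=1, S=0):

```python
# A minimal sketch (my own, not from the book) of the two Prisoner's Dilemma
# conditions, using the text's notation. The example values T=5, C=3, D=1, S=0
# are the payoffs Axelrod used in his tournaments.

def is_prisoners_dilemma(T, C, D, S):
    """True if the payoffs form a Prisoner's Dilemma."""
    ordering = T > C > D > S        # Temptation > Cooperation > Defection > Sucker
    no_alternating = 2 * C > T + S  # mutual cooperation beats taking turns exploiting
    return ordering and no_alternating

print(is_prisoners_dilemma(T=5, C=3, D=1, S=0))   # True
print(is_prisoners_dilemma(T=10, C=3, D=1, S=0))  # False: alternating exploitation pays better
```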

The “correct” strategy in the game depends on what the other person is doing.  If you’re only going to play the game once, then the rational thing to do is defect.  But in the real world you often find yourself in situations analogous to the PD over and over again, potentially with the same players, so it’s more interesting to think about an iterated series of games, in which you can weigh the value of the future interactions you will have with the other player.  It turns out that the main factor determining whether or not cooperation can arise naturally is the present valuation of the future, or as Axelrod puts it, the shadow of the future.  When players highly value the future, cooperation is likely to emerge.  When the future is heavily discounted, there is little hope of cooperative behavior, because those future interactions are treated as worthless, essentially reducing the iterated PD to a series of isolated games.
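
To put some numbers on the shadow of the future: suppose each successive round is worth a fraction w of the one before it (or equivalently, the game continues with probability w).  Here is a rough sketch (my own, using Axelrod’s tournament payoffs, not a calculation from the book) comparing two extreme responses to a Tit-for-Tat partner: cooperating forever, versus defecting once and being punished from then on.

```python
# A rough sketch (mine, not from the book): discounted payoffs against a
# Tit-for-Tat partner, with each future round weighted by w ("the shadow of
# the future"). Payoff values are Axelrod's tournament numbers.

def discounted_payoffs(w, T=5, C=3, D=1, S=0):
    always_cooperate = C / (1 - w)       # C + w*C + w^2*C + ... forever
    always_defect = T + w * D / (1 - w)  # exploit once, then mutual defection forever
    return always_cooperate, always_defect

for w in (0.2, 0.6, 0.9):
    coop, defect = discounted_payoffs(w)
    better = "cooperate" if coop > defect else "defect"
    print(f"w = {w}: cooperate = {coop:.1f}, defect = {defect:.1f} -> {better}")
# With a short shadow of the future (w = 0.2) defection pays;
# when the future matters (w = 0.9) cooperation wins handily.
```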

The book looks at the PD in a few different contexts, using two computer tournaments run around 1980 as the backdrop.  Axelrod solicited dozens of submissions from game theorists, and eventually the computer-savvy public at large, to compete in two rounds of iterated games.  After the first round and before the second, he informed would-be contestants of the initial outcome: what did well, what didn’t, and why, so the second round was significantly more sophisticated.  Shockingly, one of the simplest strategies won both tournaments.  It turns out that Tit-for-Tat is almost unassailable.  Tit-for-Tat is a strategy which is initially cooperative (it’s “nice”), and which forever after simply does whatever the other player did in the last round.  It’s an eye for an eye.  Axelrod describes the strategy as “nice, retaliatory, forgiving, and clear”, and shows how these qualities lead to good, very robust performance.  Although Tit-for-Tat never does better than any individual opponent, it very effectively elicits cooperation.  This is true one-on-one, and in an “ecosystem” context, in which those strategies which score poorly eventually die out of the population.  It is capable of invading a pre-existing population of serial defectors, if it enters as a group, and once established it cannot be dislodged, so long as the shadow of the future is sufficiently long.
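
For anyone who wants to see this in action, here is a toy version of the setup (my own quick hack, not Axelrod’s tournament code): Tit-for-Tat playing 200 rounds against a copy of itself, an unconditional defector, and a coin-flipper.

```python
# A toy iterated Prisoner's Dilemma (my sketch, not Axelrod's code).
# Each strategy sees both histories and returns "C" or "D".

import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Nice: cooperate on the first move, then echo the opponent's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def random_strategy(my_history, their_history):
    return random.choice("CD")

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

for opponent in (tit_for_tat, always_defect, random_strategy):
    print(f"Tit-for-Tat vs {opponent.__name__}:", play(tit_for_tat, opponent))
# Notice that Tit-for-Tat never outscores the opponent it is currently facing,
# but it piles up points by settling into long runs of mutual cooperation.
```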

Two chapters look at real-world applications of these facts, the first examining trench warfare in WWI, and the second biological systems.  Sadly, the biology chapter, written with W.D. Hamilton, is not very clear, but Richard Dawkins did a great job of incorporating Axelrod and Hamilton’s ideas into the revised 2nd edition of The Selfish Gene.  The basic lessons, though, are that cooperation does not require foresight, altruism, friendship, or intelligence at all.  It can arise in any system that has at least a short-term memory (Tit-for-Tat only needs to remember one previous move), the ability to recognize agents with whom prior interactions have taken place, and a future that matters.

I think we need to have a discussion, as a civilization and as a species, about the wisdom of discounting the future.  Our economic system discounts the future fairly heavily, as if we do not in fact care about the value of our grandchildren’s lives.  Our mantra of perpetual exponential growth and ever greater material consumption is in many ways incompatible with a valuable future.  In the extreme case, Wall St. is almost congenitally incapable of thinking more than one quarterly report ahead.  Especially with the rate of technological and social change we have to contend with, it’s easy to understand how one might have trouble thinking intergenerationally.  Who could have imagined what the world would look like today when the Wright Brothers flew at Kitty Hawk?  How can we hope to predict today what the world will be like a century from now?  We can’t, but we can behave in ways that maximize the number of options open to the future.

I can’t help but wonder, though, if the best thing for our future prospects might not be, ironically, to somehow arrange for us as individuals to be there.  Regardless of what we might say, we are behaving as if we don’t care about our grandchildren, and that behavior means a lot more than what we say.  How would society behave if, instead of having a fairly predictable lifespan, we were more like radioactive nuclides, with a certain probability of dying each year and an otherwise indefinite lifespan?  This is exactly the setup Axelrod used in his iterated game, because a game with a known endpoint invites last-minute exploitation, and foreknowledge of that can allow cooperation to unravel all the way back to the present round.  This is the situation we would face if we were free of aging, and could only die through infectious disease or trauma.
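
As a small aside (my own illustration, not the book’s), the math behind that distinction is the memorylessness of a probabilistic ending: if the game continues after each round with probability w, then no matter how many rounds you have already played, the expected number of rounds still to come is the same, so there is never an identifiable last move to exploit.

```python
# A quick check (my own, not from the book) that a probabilistic ending is
# memoryless: the expected number of additional rounds is w / (1 - w) no matter
# how long the game has already run, so there is never a safe "last round".

import random

def average_further_rounds(w, trials=100_000):
    """Simulate how many more rounds occur, on average, after any given round."""
    total = 0
    for _ in range(trials):
        while random.random() < w:
            total += 1
    return total / trials

print(average_further_rounds(0.9))  # ~9.0, i.e. w / (1 - w), independent of past play
```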

We’re being called on, as a civilization and as a species, to deal with issues whose timescales are vastly larger than our own individual lives, and unsurprisingly we are failing abjectly.  Long-term nuclear waste storage facilities are supposed to remain intact and intelligible for 10,000 years… longer by far than any human civilization or edifice ever has.  Our impact on the atmosphere’s composition will take millennia to dissipate.  The species we have driven to extinction will not have their niches re-filled for 5-10 million years under natural evolution.  Some of the genes we will soon be designing and introducing into wild populations may very well last a billion years or more, if they are sufficiently clever (all Earthlings alive today carry genes that have been successfully replicating for billions of years).  We just haven’t evolved under pressure to plan long term.  A year surely, a generation perhaps, but millennia?

The last part of the book offers suggestions to participants in Prisoner’s Dilemma type scenarios, and to those with design and regulatory power who would like to either foster cooperation or, in the case of collusive business practices, discourage it.  Dawkins, in the foreword, suggests that every world leader and diplomat should be locked up with the book until they’ve read and understood it.  I’m not sure understanding is enough, given the difference in timescales we face, but it certainly can’t hurt.

