John Nash, winner of the Nobel Memorial Prize in Economic Sciences and the equally prestigious Abel Prize in mathematics, died alongside his wife in a tragic car accident this week. Nash rose to popular fame due to Russell Crowe's portrayal of his life in the Oscar-winning *A Beautiful Mind*, but his fame within economics needed no such Hollywood treatment to be assured.

Nash's contributions to economics are few in number, though enormously influential. He was a pure mathematician who took only one course in economics in his studies; more on this fortuitous course shortly. The contributions are simple to state: Nash founded the theory of non-cooperative games, and he instigated an important, though ultimately unsuccessful, literature on bargaining. Nash wrote essentially just two short papers on each topic, all four easy to follow for a modern reader, so I will generally discuss some background to the work rather than the well-known results directly.

First, non-cooperative games. Robert Leonard has written a very interesting intellectual history of the early days of game theory, the formal study of strategic interaction, which begins well before Nash (Leonard 2012). Many like to cite von Neumann's (1928) "Zur Theorie der Gesellschaftsspiele" ("A Theory of Parlor Games"), whence we have the minimax theorem, but Émile Borel in the early 1920s, and Ernst Zermelo with his eponymous theorem a decade earlier, surely form relevant prehistory as well. These earlier attempts, including von Neumann's book with Morgenstern (von Neumann and Morgenstern 1944), did not allow general investigation of what we now call non-cooperative games: strategic situations where players do not attempt to collude. The most famous situation of this type is the Prisoner's Dilemma, a simple example, yet a shocking one: competing agents, be they individuals, firms or countries, may (in a sense) rationally find themselves taking actions that both parties think are worse than some alternative. Given the US government's interest in how a world of mutual nuclear arsenals with the Soviets would play out, analysing situations of that type was not simply a "Gesellschaftsspiel" in the late 1940s; Nash himself was funded by the Atomic Energy Commission, and RAND, the site of a huge amount of important early game theory research, was linked to the military.

Nash's insight was, in retrospect, very simple. Consider a penalty kick in football, where the shooter's only options are to kick left or right, and the goalkeeper must simultaneously dive left or right. At first glance, it seems there can be no equilibrium: if the shooter will kick left, then the goalkeeper will dive to that side, in which case the shooter would prefer to shoot right, in which case the goalkeeper would prefer to switch as well, and so on. In real life, then, what do we expect to happen? Surely the shooter will sometimes shoot left and sometimes right, and likewise the goalkeeper will mix which way he dives. That is, instead of two strategies for each player, we have a continuum of mixed strategies, where a mixed strategy is simply a probability distribution over the pure strategies "left" and "right".
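To make this concrete, the equilibrium mix in a 2×2 zero-sum game like the penalty kick can be computed directly from the indifference conditions. Here is a minimal sketch in Python; the scoring probabilities are purely illustrative assumptions, not data.

```python
# Mixed-strategy equilibrium of a 2x2 zero-sum penalty-kick game.
# A[i][j] = probability the shooter scores when the shooter plays
# row i (0 = left, 1 = right) and the keeper plays column j.
# These scoring probabilities are illustrative assumptions.
A = [[0.50, 0.90],   # shooter left  vs keeper left/right
     [0.95, 0.60]]   # shooter right vs keeper left/right

def mixed_equilibrium(A):
    """Return (p, q, v): Pr(shooter kicks left), Pr(keeper dives left),
    and the equilibrium scoring probability, via indifference conditions."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # makes the keeper indifferent between columns
    q = (d - b) / denom          # makes the shooter indifferent between rows
    v = (a * d - b * c) / denom  # value of the game (scoring probability)
    return p, q, v

p, q, v = mixed_equilibrium(A)
print(f"shooter kicks left with p = {p:.3f}")
print(f"keeper  dives left with q = {q:.3f}")
print(f"goal probability in equilibrium: v = {v:.3f}")

# Sanity check: facing q, the shooter really is indifferent between rows.
row_left  = A[0][0] * q + A[0][1] * (1 - q)
row_right = A[1][0] * q + A[1][1] * (1 - q)
assert abs(row_left - row_right) < 1e-12
```

With these assumed payoffs, the shooter kicks left about 47% of the time and the keeper dives left 40% of the time; neither can gain by deviating, which is exactly the equilibrium property described above.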

Nash realised that the idea of mixed strategies, combined with a reasonable solution concept (each player optimises given beliefs about what the others will do, and those beliefs turn out to be correct), guaranteed an equilibrium in every finite-strategy game where players maximise expected utility. The essential idea of the proof is that mixed strategies 'convexify' the strategy space, so that fixed point theorems guarantee an equilibrium exists.[1] More importantly, the fixed point theorems Nash used to generate his equilibria are now so broadly applied that no respectable economist should get a PhD without understanding how they work.

Note the intriguing intellectual history of Nash's concept: game theory, as opposed to Walrasian/Marshallian economics, derives not from physics or other natural sciences, but rather from a program at the intersection of formal logic and mathematics, primarily in Germany, primarily in the early 20th century. In a sense, economics after Samuelson, von Neumann and Nash forms a rather continuous methodology of qualitative deduction, whereas it is our sister social sciences which, for a variety of reasons, go on writing papers without the powerful tools of modern logic and the mathematics that followed Hilbert. When Nash establishes the existence of equilibria via Brouwer, the mathematics is merely the structure holding up and extending ideas about the interaction of agents in non-cooperative systems, ideas that would have been totally familiar to earlier generations of economists who simply lacked tools like the fixed point theorems. In the same way, Samuelson and Houthakker's ideas on utility are no great break from earlier work, aside from their explicit incorporation of deduction on the basis of relational logic, a tool unknown until well into the 19th century.

Nash applied his theory to only one game: a simplified version of poker, due to his advisor, called Kuhn Poker. Broader applications of Nash's non-cooperative solution, at least to the types of applied situations where it is now commonplace, had to wait for a handful of modifications. In my reading of the intellectual history, non-cooperative game theory was a bit of a failure outside the realm of pure mathematics in its first 25 years because we still needed Harsanyi's purification theorem and Bayesian equilibria to understand what exactly was going on with mixed strategies, Reinhard Selten's idea of subgame perfection to reasonably analyse games with multiple stages, and the mechanism design of Gibbard, Vickrey, Hurwicz, Myerson, Maskin, and Satterthwaite (among many others) to make it possible to discuss how institutions affect outcomes which are determined in equilibrium. Nor is it simply economists that Nash directly or indirectly influenced: among many other fields, his work led to the evolutionary games of Maynard Smith and Price in biology and linguistics; the upper and lower values of his 1953 paper have been used to prove other mathematical results and to discuss what is meant by truth in philosophy; and Nash equilibrium is widespread in the analysis of voting behaviour in political science and international relations.

The bargaining solution is a trickier legacy. Recall Nash's sole economics course, which he took as an undergraduate. In that course, he wrote a term paper, eventually to appear in *Econometrica*, in which he attempted to axiomatise what will happen when two parties bargain over some outcome (Nash 1950b). The idea is simple. Whatever the bargaining outcome is, we want it to satisfy a handful of reasonable assumptions. First, since von Neumann-Morgenstern utility functions are only defined up to positive affine transformations, the bargaining outcome should not be affected by such transformations of either player's utility function. Second, the outcome should be Pareto optimal: the players would have to be mighty spiteful to throw away part of the pie rather than give it to at least one of them. Third, given their utility functions, players should be treated symmetrically. Fourth (and a bit controversially, as we will see), Nash insisted on independence of irrelevant alternatives, meaning that if f(T) is the 'fair bargain' when T is the set of all potential bargains, then if the potential set of bargains shrinks to some S strictly contained in T, with f(T) still in S, then f(T) must remain the bargaining outcome. It turns out that under these assumptions, there is a *unique* outcome which maximises (u(x)-u(d))(v(x)-v(d)), where u and v are the players' utility functions, x is the vector of payoffs under the eventual bargain, and d is the 'status-quo' payoff if no bargain is made. This is natural in many cases. For instance, if two identical agents are splitting a dollar, then 50-50 is the only Nash outcome. Uniqueness is not at all obvious: recall the Edgeworth box and you will see that individual rationality and Pareto optimality alone leave many potential equilibria. Nash's result is elegant and surprising, and it is no surprise that his grad school recommendation letter famously contained only one sentence: "This man is a genius."
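The dollar-splitting case can be checked in a few lines. Here is a minimal sketch, assuming linear (risk-neutral) utility so that maximising the Nash product over splits can be done by brute force over cents; the function name and setup are mine, not Nash's.

```python
# Nash bargaining over splitting 100 cents, assuming linear utility:
# u(x) = x for player 1 and v(x) = 100 - x for player 2.
# The Nash solution maximises (u - d1) * (v - d2) over feasible splits,
# where (d1, d2) are the disagreement ('status-quo') payoffs.

def nash_split(d1=0, d2=0, total=100):
    """Player 1's cents under the Nash bargaining solution (brute force)."""
    feasible = [x for x in range(total + 1) if x >= d1 and total - x >= d2]
    return max(feasible, key=lambda x: (x - d1) * (total - x - d2))

print(nash_split())       # symmetric agents, d = (0, 0): 50-50 split
print(nash_split(d1=30))  # player 1 falls back on 30 cents: 65-35 split
```

The second call shows how the solution tilts toward whichever player has the better status-quo payoff, a feature that becomes important below.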

There is one problem with Nash bargaining, however. There is a curiously split personality between the idea of Nash equilibrium and the idea of Nash bargaining: where exactly are threats in Nash's bargaining theory? That is, Nash bargaining as an idea belongs entirely to the cooperative theory of von Neumann and Morgenstern. Consider two identical agents splitting a dollar once more, but imagine that one of the agents already has 30 cents, so that only 70 cents are actually in the middle of the table. The Nash solution is that the person who starts with the 30 cents eventually winds up with 65 cents in total, and the other person with 35 cents. But play this out in your head.

*Player 1: "I, already having the 30 cents, should get half of what remains. It is only fair, and if you don't give me 65 I will walk away from this table and we will each get nothing more."*

*Player 2: "What does that have to do with it? The fair outcome is 50 cents each, which leaves you with more than your original 30, so you can take your threat and go jump off a bridge!"*

That is, 50/50 might be a reasonable solution here, right? This might make even more sense if we take a more concrete example: bargaining over wages. Imagine trying to hire a CEO. Two identical CEOs will generate $500,000 in value for the firm if hired. CEO Candidate One has no other job offer. CEO Candidate Two has an offer from a job with similar prestige and benefits, paying $175,000. Surely we can't believe that the second CEO will wind up with higher pay, right? It is a completely non-credible threat to take the $175,000 offer, hence it shouldn't affect the bargaining outcome.

Nash was quite aware of this, as can be seen in his 1953 *Econometrica* paper (Nash 1953), where he attempts to give a non-cooperative bargaining game that reaches the earlier axiomatic outcome. Indeed, this paper inspired an enormous research agenda, called the Nash Program, devoted to finding non-cooperative games that generate well-known or reasonable-sounding cooperative solution outcomes. In some sense, the idea of 'implementation' in mechanism design, where we investigate whether there exists a game that can generate socially or coalitionally preferred outcomes non-cooperatively, can be thought of as a successful modern branch of the Nash Program. Nash's 1953 non-cooperative game simply involves adding a bit of noise to the set of possible outcomes. Consider splitting a dollar again. Let a third party ask each player to name how many cents they want. If the joint requests are feasible, then the dollar is split (with any remainder thrown away); otherwise each player gets nothing. Clearly every split of the dollar on the Pareto frontier is a Nash equilibrium, as is each player requesting the full dollar and getting nothing. However, if there is a tiny bit of noise about whether there is exactly one dollar, or 99 cents, or 101 cents, and so on, then when deciding whether to ask for more money I have to weigh the higher payoff if the joint demand is feasible against the zero payoff if my increased demand makes the split infeasible and hence neither of us earns anything. In a rough sense, Nash shows that as the distribution of noise becomes degenerate around the true bargaining frontier, players will demand exactly their Nash bargaining outcome. It is of course interesting that there exists some bargaining game that generates the Nash solution, and the idea that we should study non-cooperative games which implement cooperative solution concepts is without doubt seminal, but this particular game seems very strange. What is the source of the noise? Why does it become degenerate? And so on.

On the shoulders of Nash, however, bargaining theory progressed a huge amount. Three papers in particular are worth your time, although hopefully you have seen these before: Kalai and Smorodinsky (1975), who retain the axiomatic approach but drop IIA; Rubinstein's famous 1982 *Econometrica* paper on non-cooperative bargaining with alternating offers (Rubinstein 1982); and Binmore, Rubinstein and Wolinsky on implementation of bargaining solutions, which deals with the idea of threats as in the example above (Binmore et al. 1986).

You can read all four Nash papers in their original form during your lunch hour; this seems to me a worthy way to tip your cap toward a man who helped make modern economics possible.

## References

Binmore, K, A Rubinstein and A Wolinsky (1986), “The Nash Bargaining Solution in Economic Modelling”, *The RAND Journal of Economics* 17(2), pp. 176-188.

Kalai, E and M Smorodinsky (1975), “Other Solutions to Nash’s Bargaining Problem”, *Econometrica* 43(3), pp. 513-518.

Leonard, R (2012), *Von Neumann, Morgenstern, and the Creation of Game Theory: From Chess to Social Science, 1900-1960*, Cambridge: Cambridge University Press.

Nash, J (1950a), “Equilibrium Points in n-Person Games”, *Proceedings of the National Academy of Sciences of the United States of America* 36(1), pp. 48-49.

Nash, J (1950b), “The Bargaining Problem”, *Econometrica* 18(2), pp. 155-162.

Nash, J (1951), “Non-Cooperative Games”, *The Annals of Mathematics* 54(2), pp. 286-295.

Nash, J (1953), “Two-Person Cooperative Games”, *Econometrica* 21(1), pp. 128-140.

Rubinstein, A (1982), “Perfect Equilibrium in a Bargaining Model”, *Econometrica* 50(1), pp. 97-109.

von Neumann, J (1928), “Zur Theorie der Gesellschaftsspiele”, *Mathematische Annalen* 100(1), pp. 295-300.

von Neumann, J and O Morgenstern (1944), *Theory of Games and Economic Behavior*, Princeton, NJ: Princeton University Press.

## Endnotes

[1] Nash used Kakutani's fixed point theorem in the initial short *PNAS* paper, written in his very first year of graduate school (Nash 1950a), and Brouwer's fixed point theorem in the *Annals of Mathematics* paper (Nash 1951), which lays out his non-cooperative theory more rigorously.