
Existence

Nash’s existence theorem

Nash proved that if mixed strategies (where a player chooses probabilities of using various pure strategies) are allowed, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium, which might be a pure strategy for each player or might be a probability distribution over strategies for each player.
A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold:

1. the player who did not change has no better strategy in the new circumstance;
2. the player who did change is now playing with a strictly worse strategy.
[13] The contribution of Nash in his 1951 article “Non-Cooperative Games” was to define a mixed-strategy Nash equilibrium for any game with a finite set of actions and prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game.
In a Nash equilibrium, each player is assumed to know the equilibrium strategies of the other players, and no one has anything to gain by changing only one’s own strategy.

Occurrence

If a game has a unique Nash equilibrium and is played among players under certain conditions, then the NE strategy set will be adopted.

However, a Nash equilibrium exists if the set of choices is compact with each player’s payoff continuous in the strategies of all the players.

If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed.

If we admit mixed strategies (where a pure strategy is chosen at random, subject to some fixed probability), then there are three Nash equilibria for the same case: two we have seen from the pure-strategy form, where the probabilities are (0%, 100%) for player one, (0%, 100%) for player two; and (100%, 0%) for player one, (100%, 0%) for player two respectively; and a third, mixed equilibrium in which each player chooses either strategy with probability 50%.
This idea was formalized by R. Aumann and A. Brandenburger (1995), “Epistemic Conditions for Nash Equilibrium”, Econometrica, 63, 1161–1180, who interpreted each player’s mixed strategy as a conjecture about the behaviour of other players and showed that if the game and the rationality of players are mutually known and these conjectures are commonly known, then the conjectures must be a Nash equilibrium (a common prior assumption is needed for this result in general, but not in the case of two players).
[2] If each player has chosen a strategy – an action plan based on what has happened so far in the game – and no one can increase one’s own expected payoff by changing one’s
strategy while the other players keep theirs unchanged, then the current set of strategy choices constitutes a Nash equilibrium. 
Strict/weak equilibrium

Suppose that in the Nash equilibrium, each player asks themselves: “Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a loss by changing my strategy?”
However, subsequent refinements and extensions of Nash equilibrium share the main insight on which Nash’s concept rests: the equilibrium is a set of strategies such that each
player’s strategy is optimal given the choices of the others. 
For example, with payoffs 10 meaning no crash and 0 meaning a crash, the coordination game can be defined with the following payoff matrix:

                 Drive left    Drive right
  Drive left       10, 10         0, 0
  Drive right       0, 0         10, 10

In this case there are two pure-strategy Nash equilibria: when both choose to drive on the left, and when both choose to drive on the right.
Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from the statistical distribution of their actions in the game.
They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions.

According to Nash, “an equilibrium point is an n-tuple such that each player’s mixed strategy maximizes his payoff if the strategies of the others are held fixed.

The modern concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible pure strategies (which
might put 100% of the probability on one pure strategy; such pure strategies are a subset of mixed strategies). 
For this purpose, it suffices to show that

  Gain_i(σ*, a) = 0  for all players i and all actions a ∈ A_i.

This simply states that each player gains no benefit by unilaterally changing their strategy, which is exactly the necessary condition for a Nash equilibrium.
It is unique and is called a strict Nash equilibrium if the inequality is strict, so that one strategy is the unique best response. The strategy set can be different for different players, and its elements can be a variety of mathematical objects.
Part of the definition of a game is a subset S of R^m such that the strategy tuple must be in S. This means that the actions of players may potentially be constrained based on the actions of other players.
If these cases are both met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium.

A second interpretation, which Nash referred to as the mass-action interpretation, is less demanding on players: “[i]t is unnecessary to assume that the participants have full knowledge of the total structure of the game, or the ability and inclination to go through any complex reasoning processes.”
Thus each player’s strategy is optimal against those of the others.”

A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE)[17] occurs when players cannot do better even if they are allowed to communicate and make “self-enforcing” agreements to deviate.
If either player changes their probabilities (which would neither benefit nor damage the expectation of the player who made the change, if the other player’s mixed strategy is still (50%, 50%)), then the other player immediately has a better strategy at either (0%, 100%) or (100%, 0%).
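This knife-edge property of the 50/50 mixed equilibrium can be checked numerically. The sketch below (the function name is illustrative, not from any source) uses the coordination payoffs of 10 for matching and 0 otherwise, and computes player two’s expected payoff for each pure reply when player one plays the first strategy with probability p:

```python
# Player two's expected payoff for each pure reply, given that player one
# plays the first strategy with probability p (coordination payoffs: 10 if
# the players match, 0 otherwise).
def replies(p):
    return {"first": 10 * p, "second": 10 * (1 - p)}

# At p = 0.5 player two is exactly indifferent, so (50%, 50%) is an
# equilibrium; any perturbation gives player two a strict best response.
print(replies(0.5))   # {'first': 5.0, 'second': 5.0}
print(replies(0.51))  # the first reply is now strictly better
```

Because any tremor in p hands the opponent a strictly better pure reply, the mixed equilibrium of this game is not stable in the sense described above.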
To prove the existence of a Nash equilibrium, let b_i(σ_−i) be the best response of player i to the strategies of all the other players.

(…) One interpretation is rationalistic: if we assume that players are rational, know the full structure of the game, the game is played just once, and there is just one
Nash equilibrium, then players will play according to that equilibrium. 
Nash equilibrium requires that one’s choices be consistent: no players wish to undo their decision given what the others are deciding.

As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory.

[19] Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions less than a specified size, k. CPNE is related to the theory of the
core. 
[16] If instead, for some player, there is exact equality between the strategy in Nash equilibrium and some other strategy that gives exactly the same payout (i.e., the player is indifferent between switching and not), then the equilibrium is classified as a weak Nash equilibrium.

Even if the equilibrium is unique, it might be weak: a player might be indifferent among several strategies given the other players’ choices.

In this case there is no particular reason for that player to adopt an equilibrium strategy.

• Each of two players chooses a real number strictly less than 5 and the winner is whoever has the biggest number; no biggest number strictly less than 5 exists (if the number
could equal 5, the Nash equilibrium would have both players choosing 5 and tying the game). 
The simple insight underlying Nash’s idea is that one cannot predict the choices of multiple decision makers if one analyzes those decisions in isolation.

Nash equilibrium may also have non-rational consequences in sequential games because players may “threaten” each other with threats they would not actually carry out.

In many cases, the third condition is not met because, even though the equilibrium must exist, it is unknown due to the complexity of the game, for instance in Chinese chess.

This is also the Nash equilibrium if the path between B and C is removed, which means that adding another possible route can decrease the efficiency of the system, a phenomenon
known as Braess’s paradox. 
Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information.

In game theory, the Nash equilibrium, named after the mathematician John Nash, is the most common way to define the solution of a non-cooperative game involving two or more players.
However, in games such as elections with many more players than possible outcomes, it can be more common than a stable equilibrium.

Definitions

Nash equilibrium

A strategy profile is a set of strategies, one for each player.

For instance if a player prefers “Yes”, then that set of strategies is not a Nash equilibrium.

Rationality

The Nash equilibrium may sometimes appear non-rational in a third-person perspective.

Alternate proof using the Brouwer fixed-point theorem

We have a game G = (N, A, u), where N is the number of players and A = A_1 × ⋯ × A_N is the action set for the players.

Let σ ∈ Σ be a strategy profile, a set consisting of one strategy for each player, where σ_−i denotes the strategies of all the players except i.

This said, the actual mechanics of finding equilibrium cells are straightforward: find the maximum of a column and check whether the second member of the pair is the maximum of the row.
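This cell-checking rule is easy to mechanize. Below is a minimal sketch (the function name and matrix encoding are my own, not from any particular library), applied to the driving coordination game from this article (payoff 10 for no crash, 0 for a crash):

```python
# Find pure-strategy Nash equilibria of a two-player bimatrix game by the
# cell-checking rule: a cell (r, c) is an equilibrium when the row player's
# payoff is maximal in its column and the column player's payoff is maximal
# in its row.

def pure_nash_equilibria(payoffs):
    """payoffs[r][c] = (row player's payoff, column player's payoff)."""
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(rows):
        for c in range(cols):
            row_best = all(payoffs[r][c][0] >= payoffs[i][c][0] for i in range(rows))
            col_best = all(payoffs[r][c][1] >= payoffs[r][j][1] for j in range(cols))
            if row_best and col_best:
                equilibria.append((r, c))
    return equilibria

# Coordination ("driving") game: strategies 0 = Left, 1 = Right.
driving = [[(10, 10), (0, 0)],
           [(0, 0), (10, 10)]]
print(pure_nash_equilibria(driving))  # [(0, 0), (1, 1)]
```

Both cells where the drivers coordinate pass the column-maximum and row-maximum checks, matching the two pure-strategy equilibria discussed above.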

For a mixed strategy σ ∈ Σ, we let the gain for player i on action a ∈ A_i be

  Gain_i(σ, a) = max(0, u_i(a, σ_−i) − u_i(σ)).

The gain function represents the benefit a player gets by unilaterally changing their strategy.

Now we claim that ∑_{a ∈ A_i} σ_i(a) (u_i(a, σ_−i) − u_i(σ)) = 0 for every player i, so that some action a with σ_i(a) > 0 must have Gain_i(σ, a) = 0. To see this, note that the sum equals u_i(σ) − u_i(σ) = 0 by linearity of the expected payoff, so not every term can be positive; and for any non-positive term, Gain_i(σ, a) = 0 by definition of the gain function.

Nash showed that there is a Nash equilibrium, possibly in mixed strategies, for every finite game.


Where the conditions are met

In his Ph.D. dissertation, John Nash proposed two interpretations of his equilibrium concept, with the objective of showing how equilibrium points can be connected with observable phenomena.
This game has a unique pure-strategy Nash equilibrium: both players choosing 0.

Although it would not fit the definition of a competition game, if the game is modified so that the two players win the named amount if they both choose the same number, and
otherwise win nothing, then there are 4 Nash equilibria: (0,0), (1,1), (2,2), and (3,3). 
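Those four equilibria can be confirmed by brute force over all strategy pairs. A small sketch (names are illustrative) of the modified game, where each player names a number in {0, 1, 2, 3} and both win that amount only if the numbers match:

```python
# Modified number-matching game: each player names a number in {0, 1, 2, 3};
# both win that amount if the numbers match, otherwise both win nothing.
def payoff(a, b):
    return (a, a) if a == b else (0, 0)

# A profile (a, b) is a pure Nash equilibrium if neither player can gain
# by unilaterally deviating.
equilibria = [
    (a, b)
    for a in range(4) for b in range(4)
    if all(payoff(x, b)[0] <= payoff(a, b)[0] for x in range(4))
    and all(payoff(a, y)[1] <= payoff(a, b)[1] for y in range(4))
]
print(equilibria)  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```

Every matching pair survives because a unilateral deviation breaks the match and pays the deviator nothing; note that (0,0) is only a weak equilibrium, since deviating from it neither gains nor loses.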
Or, the strategy set might be a finite set of conditional strategies responding to other players, e.g.

Thus, each strategy in a Nash equilibrium is a best response to the other players’ strategies in that equilibrium.

If either player changes their probabilities slightly, they will be both at a disadvantage, and their opponent will have no reason to change their strategy in turn.

[1] The principle of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to competing firms choosing outputs.

The concept of a mixed-strategy equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior, but their analysis was restricted to the special case of zero-sum games.
[14] Game theorists have discovered that in some circumstances Nash equilibrium makes invalid predictions or fails to make a unique prediction.

Examples

Coordination game

Main article: Coordination game

The coordination game is a classic two-player, two-strategy game, as shown in the example payoff matrix above.
Putting the problem in this framework allowed Nash to employ the Kakutani fixed-point theorem in his 1950 paper to prove existence of equilibria.

The players know the planned equilibrium strategy of all of the other players.

If every player’s answer is “Yes”, then the equilibrium is classified as a strict Nash equilibrium.

The key to Nash’s ability to prove existence far more generally than von Neumann lay in his definition of equilibrium.

Nash’s result refers to the special case in which each S_i is a simplex (representing all possible mixtures of pure strategies), and the payoff functions of all players are bilinear functions of the strategies.
We see that each Gain_i(σ, a) is a continuous function of σ. Next we define f: Σ → Σ by

  f(σ)_i(a) = (σ_i(a) + Gain_i(σ, a)) / (1 + ∑_{b ∈ A_i} Gain_i(σ, b)).

It is easy to see that each f(σ)_i is a valid mixed strategy in Σ_i.
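As a numeric sanity check on the gain function (a sketch with hypothetical helper names, using the coordination payoffs of 10 and 0 from this article), one can verify that at the 50/50 mixed profile every gain is zero, so neither player benefits from a unilateral change:

```python
# Gain function for player one in the 2x2 coordination game (payoffs 10/0).
# p = probability player one plays action 0; q = same for player two.
def u1(a, q):
    """Player one's expected payoff for pure action a against mix q."""
    return 10 * q if a == 0 else 10 * (1 - q)

def gains_player_one(p, q):
    """Gain_1(sigma, a) = max(0, u1(a, q) - u1(sigma)) for a in {0, 1}."""
    expected = p * u1(0, q) + (1 - p) * u1(1, q)
    return [max(0.0, u1(a, q) - expected) for a in (0, 1)]

print(gains_player_one(0.5, 0.5))  # [0.0, 0.0] -> no profitable deviation
print(gains_player_one(0.5, 0.6))  # action 0 now has a positive gain
```

A profile at which all gains vanish for every player is exactly a fixed point of the normalized map used in the proof.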

Therefore, if rational behavior can be expected by both parties the subgame perfect Nash equilibrium may be a more meaningful solution concept when such dynamic inconsistencies
arise. 
A common special case of the model is when S is a Cartesian product of convex sets S_1, …, S_N, such that the strategy of player i must be in S_i.

[3]

Nash equilibrium: a solution concept in game theory.
• Subset of: rationalizability, epsilon-equilibrium, correlated equilibrium
• Superset of: evolutionarily stable strategy, subgame perfect equilibrium, perfect Bayesian equilibrium, trembling hand perfect equilibrium, stable Nash equilibrium, strong Nash equilibrium, Cournot equilibrium
• Proposed by: John Forbes Nash Jr.
• Used for: all non-cooperative games

Applications

Game theorists use Nash equilibrium to analyze the outcome of the strategic interaction of several decision makers.
This game is used as an analogy for social cooperation, since much of the benefit that people gain in society depends upon people cooperating and implicitly trusting one another
to act in a manner corresponding with cooperation. 
[17] Formally, a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that
benefits all of its members. 
One particularly important issue is that some Nash equilibria may be based on threats that are not ‘credible’.

[18] However, the strong Nash concept is sometimes perceived as too “strong” in that the environment allows for unlimited private communication.

Researchers who apply game theory in these fields claim that strategies failing to maximize these payoffs for whatever reason will be competed out of the market or environment, which is ascribed the ability to test all strategies.
Works Cited
1. Osborne, Martin J.; Rubinstein, Ariel (12 Jul 1994). A Course in Game Theory. Cambridge, MA: MIT Press. p. 14. ISBN 9780262150415.
2. ^ Kreps D.M. (1987) “Nash Equilibrium.” In: Palgrave Macmillan (eds) The New Palgrave Dictionary of Economics. Palgrave
Macmillan, London.
3. ^ Nash, John F. (1950). “Equilibrium points in n-person games”. PNAS. 36 (1): 48–49. doi:10.1073/pnas.36.1.48. PMC 1063129.
4. ^ Schelling, Thomas, The Strategy of Conflict, copyright 1960, 1980, Harvard University Press,
ISBN 0674840313.
5. ^ De Fraja, G.; Oliveira, T.; Zanchi, L. (2010). “Must Try Harder: Evaluating the Role of Effort in Educational Attainment”. Review of Economics and Statistics. 92 (3): 577. doi:10.1162/REST_a_00013. hdl:2108/55644. S2CID
57072280.
6. ^ Ward, H. (1996). “Game Theory and the Politics of Global Warming: The State of Play and Beyond”. Political Studies. 44 (5): 850–871. doi:10.1111/j.1467-9248.1996.tb00338.x. S2CID 143728467.
7. ^ Thorpe, Robert B.; Jennings, Simon; Dolder, Paul J. (2017). “Risks and benefits of catching pretty good yield in multispecies mixed fisheries”. ICES Journal of Marine Science. 74 (8): 2097–2106. doi:10.1093/icesjms/fsx062.
8. ^ “Marketing Lessons from Dr. Nash – Andrew Frank”. 2015-05-25. Retrieved 2015-08-30.
9. ^ Chiappori, P. A.; Levitt, S.; Groseclose, T. (2002). “Testing Mixed-Strategy Equilibria when Players Are Heterogeneous: The Case of Penalty Kicks in Soccer” (PDF). American Economic Review. 92 (4): 1138. CiteSeerX 10.1.1.178.1646. doi:10.1257/00028280260344678.
10. ^ Djehiche, B.; Tcheukam, A.; Tembine, H. (2017). “A Mean-Field Game of Evacuation in Multilevel Building”. IEEE Transactions on Automatic Control. 62 (10): 5154–5169. doi:10.1109/TAC.2017.2679487. ISSN 0018-9286. S2CID 21850096.
11. ^ Djehiche, Boualem; Tcheukam, Alain; Tembine, Hamidou (2017-09-27). “Mean-Field-Type Games in Engineering”. AIMS Electronics and Electrical Engineering. 1: 18–73. arXiv:1605.03281. doi:10.3934/ElectrEng.2017.1.18. S2CID 16055840.
12. ^
Cournot A. (1838) Researches on the Mathematical Principles of the Theory of Wealth
13. ^ J. Von Neumann, O. Morgenstern, Theory of Games and Economic Behavior, copyright 1944, 1953, Princeton University Press
14. ^ Carmona, Guilherme; Podczeck,
Konrad (2009). “On the Existence of Pure Strategy Nash Equilibria in Large Games” (PDF). Journal of Economic Theory. 144 (3): 1300–1319. doi:10.1016/j.jet.2008.11.009. hdl:10362/11577. SSRN 882466.
15. ^ von Ahn, Luis. “Preliminaries of Game Theory” (PDF). Archived from the original (PDF) on 2011-10-18. Retrieved 2008-11-07.
16. ^ “Nash Equilibria”. hoylab.cornell.edu. Retrieved 2019-12-08.
17. ^ B. D. Bernheim; B. Peleg; M. D. Whinston (1987), “Coalition-Proof Equilibria I. Concepts”, Journal of Economic Theory, 42 (1): 1–12, doi:10.1016/0022-0531(87)90099-8.
18. ^ Aumann, R. (1959). “Acceptable points in general cooperative n-person games”. Contributions to the Theory of Games. Vol. IV. Princeton, N.J.: Princeton University Press. ISBN 9781400882168.
19. ^ D. Moreno; J. Wooders (1996), “Coalition-Proof Equilibrium” (PDF), Games and Economic Behavior, 17 (1): 80–112, doi:10.1006/game.1996.0095, hdl:10016/4408.
20. ^ MIT OpenCourseWare. 6.254:
Game Theory with Engineering Applications, Spring 2010. Lecture 6: Continuous and Discontinuous Games.
21. ^ Rosen, J. B. (1965). “Existence and Uniqueness of Equilibrium Points for Concave N-Person Games”. Econometrica. 33 (3): 520–534. doi:10.2307/1911749. hdl:2060/19650010164. ISSN 0012-9682.
22. ^ T. L. Turocy, B. Von Stengel, Game Theory, copyright 2001, Texas A&M University, London School of Economics, pages 141–144. Nash proved that a perfect NE exists for this type of finite extensive form game – it can be represented as a strategy complying with his original conditions for a game with a NE. Such games may not have unique NE, but at least one of the many equilibrium strategies would be played by hypothetical players having perfect knowledge of all 10^150 game trees.
23. ^ J. C. Cox, M. Walker, Learning to Play Cournot Duopoly Strategies Archived 2013-12-11 at the Wayback Machine, copyright 1997, Texas A&M University, University of Arizona, pages 141–144.
24. ^ Fudenberg, Drew; Tirole, Jean (1991). Game Theory. MIT Press. ISBN 9780262061414.
25. ^ Wilson, Robert (1971-07-01). “Computing Equilibria of N-Person Games”. SIAM Journal on Applied Mathematics. 21 (1): 80–87. doi:10.1137/0121011. ISSN 0036-1399.
26. ^ Harsanyi, J. C. (1973-12-01). “Oddness of the Number of Equilibrium Points: A New Proof”. International Journal of Game Theory. 2 (1): 235–250. doi:10.1007/BF01737572. ISSN 1432-1270. S2CID 122603890.