Game theory

This article discusses the mathematical modelling of incentive structures. For other games (and their theories) see Game (disambiguation). For the band named Game Theory, please see Game Theory (band).

Game theory is a branch of applied mathematics that uses models to study interactions with formalised incentive structures ("games"). It has applications in a variety of fields, including economics, international relations, evolutionary biology, political science, and military strategy. Game theorists study the predicted and actual behaviour of individuals in games, as well as optimal strategies. Seemingly different types of interaction can exhibit similar incentive structures and thus be instances of the same underlying game.

John von Neumann and Oskar Morgenstern first formalised the subject in 1944 in their book Theory of Games and Economic Behavior. Game theory has important applications in fields like operations research, economics, collective action, political science, psychology, and biology. It has close links with economics in that it seeks to find rational strategies in situations where the outcome depends not only on one's own strategy and "market conditions", but upon the strategies chosen by other players with possibly different or overlapping goals. Applications in military strategy drove some of the early development of game theory.

Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics, and computer scientists have used games to model interactive computations. Computability logic attempts to develop a comprehensive formal theory (logic) of interactive computational tasks and resources, formalising these entities as games between a computing agent and its environment.

Game theoretic analysis can apply to simple games of entertainment or to more significant aspects of life and society. The prisoner's dilemma, as popularized by mathematician Albert W. Tucker, furnishes an example of the application of game theory to real life; it has many implications for the nature of human co-operation, and has even been used as the basis of a game show called Friend or Foe?.

Biologists have used game theory to understand and predict certain outcomes of evolution, such as the concept of evolutionarily stable strategy introduced by John Maynard Smith and George R. Price in a 1973 paper in Nature (See also Maynard Smith 1982). See also evolutionary game theory and behavioral ecology.

Analysts of games commonly use other branches of mathematics, in particular probability, statistics and linear programming, in conjunction with game theory.


Mathematical definitions

There are several alternative definitions of the notion of a 'game'. Here we give a short introduction and say a few words about the relations between them.

Normal form game

A game in normal or strategic form specifies the set of possible strategies for each player and the payoff for each outcome. Let N be a set of players. For each player i \in \mathrm{N} there is a set of strategies \Sigma^i. The game is then a function:

\pi : \prod_{i \in \mathrm{N}} \Sigma^i \to \mathbb{R}^\mathrm{N}

Thus, given the tuple of strategies chosen by the players, the function assigns each player a payoff, a real number. A further generalization can be achieved by splitting the game into two functions: the normal form game, describing the way in which strategies determine outcomes, and a second function describing each player's preferences over the set of outcomes. Hence:

\pi : \prod_{i \in \mathrm{N}} \Sigma^i \to \Gamma

where \Gamma is the outcome set of the game, and for each player i \in \mathrm{N} there is a preference function

\nu^i : \Gamma \to \mathbb{R}.

A reduced normal form exists as well; it combines strategies that are associated with the same payoffs.
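
For a concrete illustration, the payoff function described above can be written out explicitly for a small finite game. The following Python sketch is only illustrative; the names, and the reuse of the zero-sum matrix from the example later in this article, are choices made here rather than part of the formal definition.

    # A small normal form game as a mapping from strategy profiles to payoff pairs,
    # mirroring the function from the product of the strategy sets to R^N above.
    players = ("Player 1", "Player 2")
    strategies = {
        "Player 1": ("1", "2"),
        "Player 2": ("A", "B", "C"),
    }

    # payoff[(s1, s2)] = (payoff to Player 1, payoff to Player 2)
    payoff = {
        ("1", "A"): (30, -30), ("1", "B"): (-10, 10), ("1", "C"): (20, -20),
        ("2", "A"): (10, -10), ("2", "B"): (20, -20), ("2", "C"): (-20, 20),
    }

    print(payoff[("2", "B")])  # (20, -20): Player 1 gains 20, Player 2 loses 20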

Extensive form game

(Main article: Extensive form game)

The normal form gives the mathematician an easy notation for the study of equilibrium problems, because it bypasses the question of how strategies are calculated, i.e. how the game is actually played. The convenient notation for dealing with these questions, more relevant to combinatorial game theory, is the extensive form of the game. This is given by a tree, in which each vertex is assigned to a player who chooses one of the outgoing edges.
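
To make the tree picture concrete, here is a minimal Python sketch (with hypothetical moves and payoffs) of an extensive form game as nested dictionaries, in which each internal vertex names the player to move and each leaf is a tuple of payoffs.

    # Each internal vertex records the player to move and maps edge labels to
    # subtrees; each leaf is a tuple of payoffs, one entry per player.
    game = {
        "player": 1,
        "moves": {
            "L": {"player": 2, "moves": {"l": (2, 1), "r": (0, 0)}},
            "R": (1, 3),  # leaf: Player 1 gets 1, Player 2 gets 3
        },
    }

    def outcome(node, path):
        """Follow a sequence of edge labels from the root and return the leaf reached."""
        for edge in path:
            node = node["moves"][edge]
        return node

    print(outcome(game, ["L", "r"]))  # (0, 0)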

Simple game

The normal form and the extensive form capture the essence of non-cooperative games. But in some games the formation of coalitions and the way cooperation develops are more important. For dealing with questions of cooperation, the notion of a simple game was developed.

Types of games and examples

Game theory classifies games into many categories that determine which particular methods one can apply to solving them (and indeed how one defines "solved" for a particular category). Common categories include:

Zero-sum and non-zero-sum games

In zero-sum games the total benefit to all players in the game, for every combination of strategies, always adds to zero (or, more informally put, a player benefits only at the expense of others). Go, chess and poker exemplify zero-sum games, because one wins exactly the amount one's opponents lose. Most real-world examples in business and politics, as well as the famous prisoner's dilemma, are non-zero-sum games, because some outcomes have net results greater or less than zero. Informally, a gain by one player does not necessarily correspond with a loss by another. For example, a business contract ideally involves a positive-sum outcome, where each side ends up better off than if they did not make the deal.

Note that zero-sum games are easier to analyse, and it turns out that one can transform any game into a zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings.
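
The transformation itself is mechanical: append a "board" player whose payoff is minus the sum of the others'. The sketch below uses standard prisoner's dilemma numbers purely as a hypothetical example.

    # Turn a finite game (strategy profiles -> payoff tuples) into a zero-sum game
    # by adding a dummy "board" player whose payoff offsets the players' total.
    def add_board_player(payoff):
        return {profile: p + (-sum(p),) for profile, p in payoff.items()}

    # Hypothetical non-zero-sum example: a prisoner's dilemma.
    pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    print(add_board_player(pd)[("C", "C")])  # (3, 3, -6): payoffs now sum to zero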

A payoff matrix is a convenient way of representing a game. Consider for example the two-player zero-sum game with the following matrix:

                             Player 2
                    Action A   Action B   Action C
Player 1  Action 1     30        -10         20
          Action 2     10         20        -20

This game proceeds as follows: the first player chooses one of the two actions 1 or 2, and the second player, unaware of the first player's choice, chooses one of the three actions A, B or C. Once the players have made their choices, the payoff is allocated according to the table; for instance, if the first player chose action 2 and the second player chose action B, then the first player gains 20 points and the second player loses 20 points. Both players know the payoff matrix and attempt to maximize the number of their points. What should they do? (Clearly, Player 2 will not pick action A, since action C gives him a better outcome than A no matter what Player 1 does.)

Player 1 could reason as follows: "with action 2, I could lose up to 20 points and can win only 20, while with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, player 2 would choose action C (negative numbers in the table represent gains for player 2). If both players take these actions, the first player will win 20 points. But what happens if player 2 anticipates the first player's reasoning and choice of action 1, and deviously goes for action B, so as to win 10 points? Or if the first player in turn anticipates this devious trick and goes for action 2, so as to win 20 points after all?

John von Neumann had the fundamental and surprising insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimise the maximum expected point loss independent of the opponent's strategy; this leads to a linear programming problem whose solution gives an optimal strategy for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that the first player should choose action 1 with probability 57% and action 2 with 43%, while the second player should assign the probabilities 0%, 57% and 43% to the three actions A, B and C. The first player will then win 20/7 (about 2.86) points on average per game.
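
These numbers can be reproduced by solving the linear programming problem directly. The sketch below assumes NumPy and SciPy are available; it maximises Player 1's guaranteed expected payoff over mixed strategies.

    # Solve the two-player zero-sum game above by linear programming: maximise v
    # subject to every opponent action yielding expected payoff at least v,
    # with the probabilities summing to one.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[30, -10, 20],    # Player 1's payoffs for action 1 against A, B, C
                  [10,  20, -20]])  # Player 1's payoffs for action 2 against A, B, C
    n_rows, n_cols = A.shape

    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                    # variables (x1, x2, v); minimise -v
    A_ub = np.hstack([-A.T, np.ones((n_cols, 1))])  # v - sum_i x_i * A[i, j] <= 0
    b_ub = np.zeros(n_cols)
    A_eq = np.array([[1.0] * n_rows + [0.0]])       # x1 + x2 = 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_rows + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print(res.x[:-1])  # approximately [0.571, 0.429], i.e. 57% and 43%
    print(res.x[-1])   # approximately 2.857, the value 20/7 of the game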

Co-operative games

A co-operative game is characterized by an enforceable contract. The theory of co-operative games provides justifications for plausible contracts; the plausibility of a contract is closely related to its stability.

Axiomatic bargaining

Two players may bargain over how much of the surplus from a contract each should receive. The theory of axiomatic bargaining says what division is reasonable. For example, the Nash bargaining solution demands that the division be fair and efficient (see an advanced textbook for the complete formal description).

However, a player may not be concerned with fairness and may demand more. How does the Nash bargaining solution deal with this problem? There is in fact a non-cooperative game of alternating offers (due to Rubinstein) that supports the Nash bargaining solution as its unique subgame perfect equilibrium outcome.
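
As a rough numerical sketch of the underlying idea (the surplus of 10 and the zero disagreement payoffs are hypothetical numbers chosen here), the Nash bargaining solution can be found by maximising the product of the players' gains over their disagreement payoffs.

    # Nash bargaining over splitting a surplus of 10 with disagreement point (0, 0):
    # the solution maximises the product of the two players' gains (grid search here).
    import numpy as np

    surplus = 10.0
    d1, d2 = 0.0, 0.0                       # payoffs if no agreement is reached
    shares = np.linspace(0.0, surplus, 10001)
    nash_product = (shares - d1) * ((surplus - shares) - d2)

    print(shares[np.argmax(nash_product)])  # 5.0: the even split is efficient and "fair"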

Characteristic function games

More than two players may cooperate to obtain a better outcome. Again, it is not clear how the total output should be shared among the players. The core gives a reasonable set of possible shares: an allocation is in the core if no subcoalition of players could obtain a higher total payoff on its own than its members receive under that allocation. If an allocation is not in the core, some members may be frustrated and may consider leaving the whole group to form a smaller coalition with other members.
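
A minimal sketch of that test, for a hypothetical three-player characteristic function chosen here only for illustration, checks efficiency and then every subcoalition.

    # Check whether an allocation lies in the core: the grand coalition's value is
    # fully distributed, and no coalition could earn more on its own than its
    # members receive under the allocation.
    from itertools import chain, combinations

    def coalitions(players):
        return chain.from_iterable(combinations(players, r) for r in range(1, len(players) + 1))

    def in_core(x, v, players, eps=1e-9):
        if abs(sum(x[i] for i in players) - v[tuple(players)]) > eps:
            return False                    # not efficient
        return all(sum(x[i] for i in S) + eps >= v[S] for S in coalitions(players))

    players = (1, 2, 3)
    v = {S: 0 for S in coalitions(players)}                  # hypothetical values
    v.update({(1, 2): 40, (1, 3): 40, (2, 3): 40, (1, 2, 3): 90})

    print(in_core({1: 30, 2: 30, 3: 30}, v, players))  # True
    print(in_core({1: 70, 2: 10, 3: 10}, v, players))  # False: {2, 3} could earn 40 > 20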

Games of complete information

In games of complete information each player has the same game-relevant information as every other player. Chess and the prisoner's dilemma exemplify complete-information games, while poker illustrates the opposite. Complete information games occur only rarely in the real world, and game theorists usually use them only as approximations of the actual game played.

Risk aversion

Main article: Risk aversion

For the above example to work, one must assume risk-neutral participants in the game. This means, for example, that they would place an equal value on a bet with a 50% chance of receiving 20 points and a certain gain of 10 points. However, in reality people often exhibit risk-averse behaviour and prefer the more certain outcome: they will only take a risk if the expected gain is large enough to compensate. Subjective expected utility theory explains how to derive a measure of utility with respect to which a player is risk-neutral, which can then serve as the measure of payoff in game theory.

Game shows often provide examples of risk aversion. For example, if a person has a 1 in 3 chance of winning $50,000, or can take a sure $10,000, many people will take the sure $10,000.
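
The numbers in that example can be made precise with a concave utility function. In the sketch below, the square-root utility is purely an assumption used for illustration.

    # The expected monetary value favours the gamble, but a risk-averse (concave,
    # here square-root) utility favours the sure $10,000, as described above.
    from math import sqrt

    p_win, prize, sure = 1.0 / 3.0, 50_000, 10_000

    expected_value = p_win * prize                 # about 16,667 > 10,000
    expected_utility_gamble = p_win * sqrt(prize)  # about 74.5
    utility_sure = sqrt(sure)                      # 100.0

    print(expected_value, expected_utility_gamble, utility_sure)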

Lotteries can show the opposite behaviour, risk seeking: for example, many people will risk $1 to buy a 1 in 14,000,000 chance of winning $7,000,000, even though the expected return is only $0.50. This illustrates the nature of people's preferences over risk: they tend to be risk-loving where the possible loss is small and risk averse where it is large, even if the potential gains scale up accordingly. People care less about a marginal dollar than about, say, a marginal $1,000; most people would not risk $1,000 for the same 1 in 14,000,000 chance of winning $7,000,000,000.

Games and numbers

John Conway developed a notation for certain complete information games and defined several operations on those games, originally in order to study Go endgames, though much of the analysis focused on Nim. This developed into combinatorial game theory.

In a surprising connection, he found that a certain subclass of these games can be used as numbers as described in his book On Numbers and Games, leading to the very general class of surreal numbers.

History

Though touched on by earlier mathematical results, modern game theory became a prominent branch of mathematics in the 1940s, especially after the 1944 publication of Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. This profound work contained the method, alluded to above, for finding optimal solutions for two-person zero-sum games.

Around 1950, John Nash developed a definition of an "optimum" strategy for multi-player games, where no such optimum had previously been defined; it is known as the Nash equilibrium. Reinhard Selten further refined this concept with his ideas of trembling hand perfect and subgame perfect equilibria. These men, along with John Harsanyi, who developed the analysis of games of incomplete information, won the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel (also known as the Nobel Prize in Economics) in 1994 for their work on game theory.

Research into game theory continues, and there remain games which produce counter-intuitive optimal strategies even under advanced analytical techniques like trembling hand equilibrium. One example of this occurs in the Centipede Game, where at every decision players have the option of increasing their opponents' payoff at some cost to their own.

Some experimental tests of games indicate that in many situations people respond instinctively by picking a 'reasonable' solution or by following a 'social norm' rather than adopting the strategy indicated by a rational analytic solution concept.

Conway's connection between games and numbers was discovered in the early 1970s.

