Center for Interdisciplinary Mathematics

# Noncooperative Games and Applications

August 15, 2013

A basic problem in optimization theory is to find the maximum value of a function \phi over a set X:

\max_{x\in X}~\phi(x)\,. (1)

One can think of X as the set of possible choices available to an individual, while \phi(x) is his corresponding payoff.

Game theory, on the other hand, is concerned with the more complex situation where two or more individuals, or "players", are present [13]. Each player can choose among his set of available options and seeks to maximize his own payoff. For simplicity, consider the case of two players.

Player 1 chooses a strategy x_1\in X_1 and seeks to maximize \phi_1(x_1, x_2). (2)
Player 2 chooses a strategy x_2\in X_2 and seeks to maximize \phi_2(x_1, x_2).

Here the catch is that the payoff of each player also depends on the actions of the other player. This is indeed what happens in most economics models, where the profit of one agent also depends on the decisions of the other agents.

In contrast with (1), it is clear that the problem (2) does not admit an "optimal" solution. Indeed, in general it will not be possible to find a pair (\bar x_1, \bar x_2)\in X_1\times X_2 which simultaneously maximizes the payoffs of both players, so that

\phi_1(\bar x_1, \bar x_2)~=~\max_{x_1,x_2}~ \phi_1( x_1, x_2)\,,\qquad\qquad \phi_2(\bar x_1, \bar x_2)~=~\max_{x_1,x_2}~ \phi_2( x_1, x_2)\,.

For this reason, various alternative concepts of solutions have been proposed in the literature. These can be relevant in different situations, depending on the information available to the players and their ability to cooperate.

For example, if the players have no means to talk to each other and do not cooperate, then an appropriate concept of solution is the Nash equilibrium [12], defined as a fixed point of the best reply map. In other words, (x_1^*, x_2^*) is a Nash equilibrium if

(i) the value x_1^*\in X_1 is the best choice for the first player, in reply to the strategy x_2^* adopted by the second player. Namely

\phi_1(x_1^*, x_2^*)~=~\max_{x_1\in X_1}~\phi_1(x_1, x_2^*),

(ii) the value x_2^*\in X_2 is the best choice for the second player, in reply to the strategy x_1^* adopted by the first player. Namely

\phi_2(x_1^*, x_2^*)~=~\max_{x_2\in X_2}~\phi_2(x_1^*, x_2).
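For a game with finitely many strategies, conditions (i)-(ii) can be checked directly. A minimal Python sketch, using hypothetical payoff matrices (here a Prisoner's Dilemma), searches for all pure-strategy fixed points of the best reply map:

```python
# Brute-force search for pure-strategy Nash equilibria in a finite
# two-player game.  The payoff matrices below are hypothetical.
import numpy as np

# phi1[i, j], phi2[i, j]: payoffs to Players 1 and 2 when Player 1
# plays strategy i and Player 2 plays strategy j (Prisoner's Dilemma).
phi1 = np.array([[3, 0],
                 [5, 1]])
phi2 = np.array([[3, 5],
                 [0, 1]])

def pure_nash(phi1, phi2):
    """Return all (i, j) such that i is a best reply to j and vice versa."""
    eq = []
    m, n = phi1.shape
    for i in range(m):
        for j in range(n):
            if phi1[i, j] == phi1[:, j].max() and phi2[i, j] == phi2[i, :].max():
                eq.append((i, j))
    return eq

print(pure_nash(phi1, phi2))  # → [(1, 1)]
```

As expected, mutual defection (1, 1) is the unique fixed point of the best reply map, even though both players would be better off at (0, 0).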

A different solution concept is relevant when one of the players takes a leading role, and announces his strategy in advance. This leads to the concept of Stackelberg equilibrium.

The situation modeled by (2) represents a static game, i.e. a "one-shot" game. Each player makes one single choice x_i\in X_i, and this completely determines the payoffs. In other relevant situations, the game takes place not instantaneously but over a whole interval of time. This leads to the study of dynamic games.

We recall that, in the standard model of control theory [5, 11], the state of a system is described by a variable x\in \mathbb{R}^n. This state evolves in time according to an ODE

\dot x(t)~=~f(t,\, x(t),\, u(t))\,. (3)
Here t\mapsto u(t)\in U is the control function, ranging within a set U of admissible control values.

Given an initial condition x(0)= x_0, a basic problem in optimal control is to find a control function u(\cdot) which maximizes the payoff

J~=~\psi(x(T)) -\int_0^T L(t, x(t),\, u(t))\, dt\,.

Here \psi is a terminal payoff, while L accounts for a running cost.
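After a time discretization, this payoff can be approximated by backward dynamic programming on a grid. The dynamics f, running cost L, terminal payoff \psi, and grids below are hypothetical choices, made only to keep the sketch self-contained:

```python
# Minimal dynamic-programming sketch for maximizing
#   J = psi(x(T)) - integral of L(t, x, u) dt,
# with hypothetical choices of f, L, psi on a discretized grid.
import numpy as np

T, N = 1.0, 50                      # horizon and number of time steps
dt = T / N
xs = np.linspace(-2.0, 2.0, 81)     # state grid
us = np.linspace(-1.0, 1.0, 21)     # admissible control values U

f = lambda x, u: u                  # dynamics x' = f(x, u)   (assumed)
L = lambda x, u: x**2 + 0.1 * u**2  # running cost             (assumed)
psi = lambda x: -abs(x - 1.0)       # terminal payoff          (assumed)

# At the final time the value function equals the terminal payoff.
V = psi(xs)
for _ in range(N):
    # One backward step: V(t, x) = max_u [ -L(x, u) dt + V(t+dt, x + f dt) ]
    cand = [-L(xs, u) * dt + np.interp(xs + f(xs, u) * dt, xs, V) for u in us]
    V = np.max(cand, axis=0)

print(V[np.argmin(np.abs(xs))])     # approximate value starting from x0 = 0
```

The same backward recursion, run on a product of grids, is what makes the multi-player extension below computationally hard: each player's step requires knowing the feedbacks of all others.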

Differential games provide a natural extension of this model to the case where two or more players are present, each with his own payoff [1, 2, 10]. In the case of two players, one thus considers a system whose state x\in \mathbb{R}^n evolves according to the ODE

\dot x(t)~=~f(t,\, x(t),\, u_1(t),\, u_2(t))\,. (4)
Here t\mapsto u_i(t)\in U_i, i=1,2, are the control functions implemented by the two players.

For a given initial condition, the goal of the i-th player is

\hbox{maximize:}\qquad J_i~=~\psi_i(x(T)) -\int_0^T L_i(t, x(t),\, u_1(t),\, u_2(t))\, dt\,. (5)

As in the case of one-shot games, various concepts of solution (Nash, Stackelberg, \ldots) can be considered. In addition, different types of strategies are available to players, depending on the amount of information they have. Namely, one may consider cases where

• The state x of the system is known only at the initial time t=0.
• The players can observe the current state of the system x(t) at each time t\in [0,T].
• Instead of the deterministic ODE (4), the evolution of the system is governed by a stochastic differential equation. The initial state x(0) is a random variable, with a given probability distribution.
• The functions \psi_i, L_i, determining the payoffs for each player, are not precisely known.

For example, agents buying and selling in a stock market have only partial information about the value of their assets, and hence about their eventual payoffs.

If players can observe the current state of the system, then they can adopt feedback strategies u_i=u_i(t,x). In other cases, they can only implement open-loop strategies u_i=u_i(t) depending exclusively on time.

This leads to a rich variety of mathematical models, most of which remain to be studied. An interesting (and difficult!) problem arises when different players have access to different amounts of information. In this case, the better informed agent seeks to make the most of his advantage, while less informed competitors will try to gain information by carefully observing the actions of the leading player.

An important class of differential games, currently receiving much attention, consists of games involving infinitely many players. In this case, no player can single-handedly affect the state x(t) of the system. The time evolution of the system is determined only by the average combined actions of all players.

The research of our group on game theory and applications is presently focused on two main directions:

- General theory of non-cooperative differential games.

- Analysis of specific models arising in economics and finance.

### 1. General theory of non-cooperative differential games

To understand whether the non-cooperative game described by (4)-(5) admits equilibrium solutions (in the sense of Nash, or Stackelberg), a basic approach calls for the study of the value functions V_i. Here V_i(\tau,\bar x) is the expected payoff for the i-th player, if the game were to start at time \tau from the initial state \bar x.

Under suitable regularity conditions, it can be shown that these functions V_i satisfy a system of Hamilton-Jacobi equations. By studying these PDEs, one can understand whether the game has a solution, and how to compute the optimal strategies for the various players [6].
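For the two-player game (4)-(5), a sketch of the form this system takes, assuming smooth value functions and that each pointwise maximization below admits a unique solution:

```latex
% Hamilton-Jacobi system for the value functions V_1, V_2 (a sketch).
\partial_t V_i + \nabla V_i \cdot f(t, x, u_1^*, u_2^*)
  \;=\; L_i(t, x, u_1^*, u_2^*), \qquad V_i(T, x) = \psi_i(x), \qquad i = 1, 2,

% where each feedback u_i^* maximizes the i-th Hamiltonian, the other
% player's feedback being held fixed; for instance
u_1^*(t, x) \;\in\; \arg\max_{u_1 \in U_1}
  \Bigl\{ \nabla V_1(t, x) \cdot f(t, x, u_1, u_2^*)
          - L_1(t, x, u_1, u_2^*) \Bigr\}.
```

The two equations are coupled through the optimal feedbacks u_1^*, u_2^*, which is the source of the ill-posedness phenomena discussed next.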

When players can implement strategies in feedback form, it often turns out that the system of PDEs determined by a Nash equilibrium is ill posed: solutions either do not exist, or are very sensitive to small changes in the data [7, 2]. In connection with Stackelberg equilibria, these issues have been recently studied in [8].

### Competitive bidding for a random incoming order

In this specific model, we assume that an external buyer asks for a random amount X>0 of a certain asset (say, a particular stock traded on the stock market). This external agent will buy the amount X at the lowest available price, as long as this price does not exceed a given upper bound \overline P. One or more sellers offer various quantities of this asset at different prices, competing to fulfill the incoming order, whose size is not known a priori.

Having observed the prices asked by his competitors, each seller must determine an optimal strategy, maximizing his expected payoff. Of course, when other sellers are present, asking a higher price for an asset reduces the probability of selling it.

This model was introduced in [3] and further studied in [4]. Sellers may differ from each other in various respects:

• Each agent assigns a different probability distribution to the random variable X, based on his own beliefs. An optimistic seller expects a large incoming order, which will fill most of the outstanding bids. A pessimistic seller will expect a small order, filling only the lowest-priced bids. In the following, we denote by
\psi_i(s)~=~\mathrm{Prob}\{X> s\}

the probability distribution assigned by the i-th seller to the random variable X.

• The i-th agent has a total amount q_i of assets to offer for sale, and assigns a different fundamental value p_i to these assets. By selling a unit amount at price p, he thus achieves a profit p - p_i.
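As a toy illustration of the trade-off between price and probability of sale, consider a single unit-size offer posted at price p: it earns p - p_i, but only sells if the random order X exceeds the amount offered by competitors at lower prices. The belief \psi and the competitors' profile \kappa below are hypothetical:

```python
# Expected profit of a single unit-size offer at price p: a simplified
# sketch of the bidding model.  kappa(p) is a hypothetical cumulative
# amount offered by competitors at prices below p.
import math

p_i = 1.0                             # seller's fundamental value
psi = lambda s: math.exp(-0.5 * s)    # seller's belief Prob{X > s}  (assumed)
kappa = lambda p: 2.0 * (p - p_i)     # competitors' offers below p  (assumed)

def expected_profit(p):
    """The offer sells only if the random order X exceeds kappa(p)."""
    return (p - p_i) * psi(kappa(p))

# Scan a price grid for the most profitable posting price.
grid = [p_i + 0.01 * k for k in range(1, 301)]
best = max(grid, key=expected_profit)
print(round(best, 2))  # → 2.0
```

Here the expected profit (p - 1)e^{-(p - 1)} is maximized at p = 2: raising the price further gains less from the margin than it loses in probability of sale.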

For this game-theoretical model it is natural to seek a Nash equilibrium, where each player optimizes his own expected payoff, given the strategies adopted by all other players. Three natural questions arise:

- Does a Nash equilibrium exist?

- Is it unique?

- How can it be computed?

The results proved in [3, 4] show that, for the existence of a Nash equilibrium, the crucial assumption is that the probability distributions \psi_i be log-convex, i.e.

\frac{d^2}{ds^2}\,\ln \psi_i(s)~\geq~0\,.
In several cases, one can show that this equilibrium solution is unique and can be determined by solving a two-point boundary value problem for a system of ODEs with discontinuous right-hand side [3, 9].
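The log-convexity condition can be checked numerically by testing the convexity of \ln\psi_i on a grid. The tail distributions below are hypothetical examples: exponential and Pareto-type tails satisfy the condition, while a Gaussian-type tail does not:

```python
# Numerical check of the log-convexity condition on psi_i(s) = Prob{X > s}:
# s -> ln psi_i(s) must be convex.  Three hypothetical tail distributions.
import numpy as np

s = np.linspace(0.1, 5.0, 200)              # uniform grid

psi_exp = np.exp(-0.7 * s)                  # exponential tail: ln psi is linear
psi_pareto = (1.0 + s) ** (-2.0)            # Pareto-type tail: ln psi is convex
psi_gauss = np.exp(-s**2 / 2.0)             # Gaussian-type tail: ln psi is concave

def is_log_convex(psi):
    """Second differences of ln psi on a uniform grid must be >= 0."""
    d2 = np.diff(np.log(psi), n=2)
    return bool(np.all(d2 >= -1e-12))       # small tolerance for rounding

print(is_log_convex(psi_exp))     # → True
print(is_log_convex(psi_pareto))  # → True
print(is_log_convex(psi_gauss))   # → False
```

In particular, an incoming order with Gaussian-type tail falls outside the existence theory of [3, 4], while heavy-tailed beliefs fit within it.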

Figure 1: An example of Nash equilibrium with four sellers. Here all agents regard p_0 as the fundamental value of the asset, while \kappa_1<\kappa_2<\kappa_3<\kappa_4 are the total amounts they can offer for sale. For any p>p_0, the amounts that the four agents put on sale at prices up to p correspond to the areas of the colored regions to the left of the vertical line through p. Notice that all players spread out their bids over a continuum of prices. In addition, Player 4 (blue) puts an amount \kappa_4-\kappa_3 of the asset for sale at the maximum price \overline P.

### References

1. T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory. Reprint of the second edition. SIAM, Philadelphia, 1999.
2. A. Bressan, Noncooperative differential games. Milan J. of Mathematics, 79 (2011), 357--427.
3. A. Bressan and G. Facchi, A bidding game in a continuum limit order book, SIAM J. Control Optim., to appear.
4. A. Bressan and G. Facchi, Discrete bidding strategies for a random incoming order, SIAM J. Financial Math., submitted.
5. A. Bressan and B. Piccoli, Introduction to the Mathematical Theory of Control, AIMS Series in Applied Mathematics, Springfield Mo. 2007.
6. A. Bressan and W. Shen, Small BV solutions of hyperbolic non-cooperative differential games, SIAM J. Control Optim. 43 (2004), 194-215.
7. A. Bressan and W. Shen, Semi-cooperative strategies for differential games, Intern. J. Game Theory 32 (2004), 561-593.
8. A. Bressan and D. Wei, Stackelberg solutions of feedback type for differential games with random initial data. Dynamic Games & Appl., to appear.
9. A. Bressan and D. Wei, A bidding game with heterogeneous players, submitted.
10. E. J. Dockner, S. Jorgensen, N. V. Long, and G. Sorger, Differential games in economics and management science. Cambridge University Press, 2000.
11. W. H. Fleming and R. W. Rishel, Deterministic and Stochastic Optimal Control, Springer-Verlag, New York, 1975.
12. J. Nash, Non-cooperative games, Ann. of Math. 54 (1951), 286-295.
13. J. Wang, The Theory of Games. Oxford University Press, 1988.