
The story of conflict and cooperation

Mehmet S. Ismail[1]

Version: 19 August 2018

Abstract

The story of conflict and cooperation began millions of years ago, and now it is everywhere: in biology, computer science, economics, humanities, law, philosophy, political science, and psychology. Wars, airline alliances, trade, oligopolistic cartels, the evolution of species and genes, and team sports are all examples of games of conflict and cooperation. However, Nash's (1951) noncooperative games, in which each player "acts independently without collaboration with any of the others," have become the dominant ideology in economics, game theory, and related fields. A simple falsification of this noncooperative theory is scientific publication: it is a rather competitive game, yet collaboration is widespread. In this paper, I propose a novel way to rationally play games of conflict and cooperation under the Principle of Free Will: players are free to cooperate to coordinate their actions or to act independently. Anyone with a basic game theory background will be familiar with the setup in this paper, which is based on simple game trees. In fact, one hardly needs any mathematics to follow the arguments.

Acknowledgment: Without prior work and many discussions with my colleagues and collaborators, especially with Steven Brams, I would never have been able to come up with the concepts that I propose in this paper. I would like to thank them.

[1] Department of Political Economy, King's College London, London, UK. E-mail: mehmet.s.ismail@gmail.com. Any comments on this draft would be more than welcome.

1. Introduction

The story of conflict and cooperation began millions of years ago, and now it is everywhere: in biology, computer science, economics, humanities, law, philosophy, political science, psychology, and so on. Drivers cooperate as well as compete in traffic to get from one place to another. Firms, and even countries, form cartels to cooperate among themselves and increase their market power while competing with outsiders. Trade is a game of conflict and cooperation: sellers want a higher price, whereas buyers want a lower price; yet many of them reach a settlement because it is mutually beneficial. Many elections are games of cooperation as well as competition. In a judicial process, we may have conflicting interests with the other side, but we also cooperate with our lawyer and possibly with officials. Global air transportation is a giant competitive market, but alliances among airline companies are common, Star Alliance being one of the biggest in the world. Many popular sports, such as association football, American football, basketball, cricket, and volleyball, involve competition as well as cooperation.[2]

However, Nash's (1951, p. 286) noncooperative games, in which each player "acts independently without collaboration with any of the others," have become the dominant ideology in economics, game theory, and related fields. In addition to the aforementioned examples, a simple falsification of this noncooperative theory is scientific publication: it is a rather competitive game, yet collaboration is widespread. This paper proposes a novel way to rationally play games of conflict and cooperation under the Principle of Free Will: players are free to cooperate to coordinate their actions or to act independently.

One of the big questions in science has been how cooperation evolved. Somehow, evolution has furnished species with an ability to collaborate and compete in order to survive and pass their genes on to the next generations. Conflict and cooperation are widespread among animals, including humans, and other living organisms. Genes, however selfish they may be, engage in cartels. Evolutionary biologist Richard Dawkins coined the term "The Selfish Cooperator" after noticing that the title of his earlier book, The Selfish Gene, might have given a wrong impression (Dawkins, 2000; 2006).

[2] For competition and cooperation among freight carriers, see, e.g., Krajewska et al. (2008); for more examples in multi-agent systems in computer science, Doran et al. (1997); for more applications of game theory, Binmore (2007); for sports, Barrow (2012).

Since the seminal work of Smith and Price (1973), game theory has been developed and extensively applied in the biological sciences (see, e.g., Hamilton, 1967; Smith, 1982; and Haigh and Cannings, 1989).

The First World War was fought between the Allied Powers and the Central Powers, two coalitions in which the members of each cooperated strategically to defeat the other. The coalition members at the beginning of the war were different from those at the end, and some even changed sides. Payoffs at the end of the war differed within and across coalitions: the Russian Empire, though one of the victorious Allied Powers, collapsed, as did the three losing empires. There were even coalitions within coalitions, in part because the larger coalition was not that stable. Some countries stayed neutral, which can be more beneficial for a player than being in either coalition: Switzerland, for example, has been neutral for over two centuries and has arguably benefited from this.

Chess grandmaster Mamedyarov (world no. 3) recently allegedly admitted that he and grandmaster Karjakin (world no. 10) have been involved in pre-arranged draws. This reveals that cooperation can take place even in a highly competitive game like chess. In chess tournaments, two players might benefit from drawing, and if other competitors realize this, then they can change their playing strategy towards the cooperating players. It seems that Magnus Carlsen, the current World Chess Champion, has already noticed such cooperation between the two grandmasters.[3]

I am not the first to study the connections between noncooperative games and cooperative games. In fact, von Neumann (1928) studied the maximin solution in a three-person game and noticed that two players may benefit from collaboration in zero-sum games. (Note von Neumann's incredible anticipation: the chess tournament mentioned above is an n-person zero-sum game!) In modern game theory, however, strategic games are studied under the framework of noncooperative games, in which players act independently and collaboration to coordinate actions is not possible (Nash, 1951). The framework for cooperative games was developed by von Neumann and Morgenstern (1944). Harsanyi's (1974) extension of this framework has recently led to a growing literature that incorporates elements of noncooperative games, such as farsightedness and backward induction, into cooperative games, which has greatly improved our understanding of both frameworks and their interrelations. There is a vast literature on coalitions in strategic and nonstrategic contexts; see, e.g., Bloch (1996), Brams et al. (2005), Herings et al. (2009), Ray and Vohra (2015), Petrosyan and Zaccour (2016), and Karos and Kasper (2018).

[3] Chess rankings are as of 31 July 2018. For more information, see: https://www.chess.com/news/view/norway-chess-anand-wins-mamedyarov-admits-pre-arranged-draws.

The main contribution of the present paper is the solution of games of conflict and cooperation, which is based on a unique procedure that combines backward induction and forward induction reasoning and in which players rationally collaborate or act independently.[4] Just as credible threats play a crucial role in noncooperative games (Schelling, 1980; Selten, 1965; Brams, 1994), they are indispensable in determining the stability of coalitions in this framework. I call a threat credible if the player or the coalition that makes it would rationally carry it out. Moreover, compared with more abstract settings, I work with extensive-form games in which the players, the timing of the game, and the strategies are specified in a simple game tree, as in Figure 1. In that sense, anyone with a basic game theory background will be familiar with the setup of games of conflict and cooperation. In fact, one hardly needs any mathematics to follow the arguments in this paper.

2. Illustrative examples

2.1. Is banning abortion effective?

It is commonly believed that in noncooperative games there is no external authority to enforce certain behavior. In fact, noncooperative games do require an authority to enforce noncooperative behavior: the authority must guarantee that players will not cooperate and coordinate their actions in any way.[5] If a player knows that others can collude against it, then this would potentially change its behavior, as the following example illustrates.

Consider the three-player sequential-move game presented in Figure 1, in which the government (Player 1) chooses between making abortion legal or illegal; an individual (Player 2), who is considering abortion, chooses between having an abortion (Y) or not (N); and an abortion clinic (Player 3) chooses between charging a High (H) or Low (L) price.

[4] Backward induction reasoning is based on the assumption that at any point in the game players make rational choices taking into account the future only, so they do not draw any conclusions from past choices. Forward induction reasoning generally assumes that past choices affect future behavior in a rational way. Unlike backward induction, forward induction does not have a unique definition in the literature. For more information, see, e.g., Perea (2012).

[5] For further discussion of cooperative versus noncooperative games, see, e.g., Serrano (2004).

Figure 1. A three-player sequential-move game in which the government moves first, the person who is considering abortion moves second, and the clinic moves last.

Figure 1 illustrates the players' actions and their preferences over the outcomes, which are represented by 1, 2, 3, and 4 from worst to best. I assume the following preferences in this rather stylized example:

1. The government prefers N to Y, and Legal to Illegal, in any situation.

2. The individual's worst outcome is when the choices are Illegal, Y, and H, whereas her best outcome is when the choices are Legal, Y, and L. Her second-most-preferred outcomes are when the choices are Illegal, Y, and L and when they are Legal, Y, and H, in which case I assume that she goes to an alternative clinic with a cheaper price, as abortion is legal. She receives a utility of 2 if she chooses N.

3. The clinic's worst outcome is when the individual does not have an abortion, whereas its best outcome is when the choices are Illegal, Y, and H. When the individual chooses Y and the price is L, the clinic prefers Legal to Illegal, so its utility is 3 and 2, respectively.

If we consider this game as a noncooperative game, its solution can be found by the backward induction procedure: the clinic would choose H at the left node (4 vs. 2) and L at the right node (1 vs. 3). Given the clinic's choices, the individual would choose N and Y at the left and right nodes, respectively. Anticipating these choices, the government would choose to make abortion illegal. So, the outcome of this solution would be (3, 2, 1).
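For concreteness, this computation can be replayed in a few lines of code. The following is a minimal sketch of standard backward induction, not code from the paper: the payoff vectors (government, individual, clinic) are my reconstruction from the preferences listed above, and the tree assumes, following the description of Figure 1, that the clinic moves only after the individual chooses Y.

```python
# Backward induction for the Example 1 tree. A node is either a terminal
# payoff vector or a decision node given as a player index plus a move map.
# Payoff vectors are (government, individual, clinic); they are reconstructed
# from the stated preferences, since Figure 1 itself is not reproduced here.

def backward_induction(node):
    """Return (payoff, path) reached when every player acts independently."""
    if "payoff" in node:                      # terminal node
        return node["payoff"], []
    player = node["player"]                   # 0 = government, 1 = individual, 2 = clinic
    best_payoff, best_path = None, None
    for action, child in node["moves"].items():
        payoff, path = backward_induction(child)
        if best_payoff is None or payoff[player] > best_payoff[player]:
            best_payoff, best_path = payoff, [action] + path
    return best_payoff, best_path

def terminal(payoff):
    return {"payoff": payoff}

game = {"player": 0, "moves": {
    "Illegal": {"player": 1, "moves": {
        "Y": {"player": 2, "moves": {"H": terminal((1, 1, 4)),
                                     "L": terminal((1, 3, 2))}},
        "N": terminal((3, 2, 1))}},
    "Legal": {"player": 1, "moves": {
        "Y": {"player": 2, "moves": {"H": terminal((2, 3, 1)),
                                     "L": terminal((2, 4, 3))}},
        "N": terminal((4, 2, 1))}},
}}

print(backward_induction(game))  # ((3, 2, 1), ['Illegal', 'N']), as in the text
```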

By making abortion illegal, the government relies on the assumption that the clinic and the individual will act independently and will not cooperate and coordinate their moves. However, after the government plays Illegal, the person and the clinic have an incentive to cooperate, because both prefer outcome (1, 3, 2) to (3, 2, 1). So, making abortion illegal does not prevent it from happening, because there is a mutually beneficial outcome. If the government anticipates that the individual and the clinic will coordinate their moves, then it would rather legalize abortion, in which case the outcome will be (2, 4, 3). In such cases, I will say that the cooperation between Player 2 and Player 3 is a credible threat: if the government chooses Illegal, then the threat will be carried out.

To give more examples, in some countries it is illegal to cooperate under certain circumstances: oligopolistic cartels, drug dealing, organ trade, and forming partnerships such as dating and same-sex marriage or civil partnership, just to name a few. In these games, and in other games played outside of restricted lab conditions, it is difficult and costly, if not impossible, to enforce that players will not exercise their free will to cooperate. In addition to encouraging competition, modern society is based on rules that in many ways facilitate cooperation, collaboration, and coordination among individuals. For example, citizens are free to make contracts (a simple e-mail can count as a binding agreement) and to engage in partnerships such as marriage, employer-employee relationships, the management team of a company, friendships, and relationships, all of which are based on formal or informal institutions.

With that in mind, I assume the Principle of Free Will: players are free to act independently or to form coalitions to coordinate their actions, which could be via formal or informal institutions, as mentioned earlier. But I do recognize that the right to exercise free will can be restricted and that it might be impossible to coordinate actions in certain reasonable situations. If there is an external authority that can enforce noncooperative behavior among some players, then this will be part of the model, so that all players rationally take it into account. In that sense, a game of conflict and cooperation is an extension of a noncooperative game.

2.2. Example 2: The effect of threats and counter-threats

Figure 2 illustrates a three-player game of conflict and cooperation in which Player 1 (P1) starts by choosing L or R. For convenience, I will use the pronouns "it" for P1, "she" for P2, and "he" for P3. If cooperation were not possible, then the standard backward induction outcome of this game would be (5, 5, 3), as illustrated by step (i) in Table 1. However, this is only the beginning of the analysis, because in this model players may join forces and form coalitions as long as it is mutually beneficial. Members of a coalition coordinate their actions as if they were a single player.

Figure 2. A three-player game of conflict and cooperation.

For simplicity, in this example I assume that a coalition prefers more egalitarian outcomes to less egalitarian ones; i.e., one outcome is preferred to another if the minimum utility that a member of the coalition receives from the former is greater than the minimum utility that a member receives from the latter. For example, a coalition of P1 and P3, if it forms, would prefer (6, 3, 5) to (1, 1, 6). (In Section 3, the model allows for more general coalitional preferences.)

I next solve this game intuitively, using a procedure based on backward and forward induction (BFI) reasoning. (I will define the BFI procedure precisely in Section 3.) The outcome that will be implemented if no further agreements are reached is called a reference point. The initial reference point is the backward induction outcome, (5, 5, 3).

Step (ii) illustrates the left subgame, in which it is P2's turn to make a choice. The backward induction (BI) path in this subgame is a and e, whose outcome is (5, 5, 3). But if P2 uses the following forward induction (FI) reasoning and finds a mutually beneficial outcome, then she might convince P3 to form a coalition, as together they would have full control of the outcome in this subgame. Note that both P2 and P3 prefer (1, 6, 4) to the reference point, (5, 5, 3). Thus, P2 and P3 will rationally get together and play b and h, because it is mutually beneficial. This overrides the backward induction outcome, and the reference point is therefore updated to (1, 6, 4).

But what if P1 anticipates that if it plays L, then P2 and P3 will collaborate against it? Then the initial BI action, L, may no longer be optimal (as we saw in Example 1), so P1 may need to update its decision based on the new reference point, at which it receives a payoff of 1. Indeed, if P1 chooses R, then the outcome would be (2, 2, 6), which is preferred to the last reference point, (1, 6, 4).

(i) The backward induction outcome is (5, 5, 3), which is also the first reference point.

(ii) 1st coalition forms: Looking forward, P2 forms a coalition with P3 to play b and h. This overrides the BI outcome, so the new reference point is (1, 6, 4).

Table 1. Steps (i) and (ii) are illustrated. Dashed lines represent individual independent best responses, whereas thick solid lines represent coordinated best responses. One can find the outcome at each step by following the arrows.

(iii) P1 anticipates the coalition of P2 and P3 and hence revises its strategy to R, as 2 > 1. The new reference point is (2, 2, 6).

(iv) Coalition {2, 3} breaks down, so new partnerships are sought. P1 and P2 cooperate to play L and a to receive 5 each, which is better than the 2 each they receive at the last reference point. The new reference point is, once again, (5, 5, 3).

Table 2. Steps (iii) and (iv) are illustrated.

Step (iii) illustrates this situation. Therefore, P1's new best response is R, and the new reference point becomes (2, 2, 6).[6]

Next, having figured out what would happen if it acts independently, P1 needs to check, using FI reasoning, whether there are any collaboration opportunities. Notice that P1 would like to form a coalition with P3 to obtain the outcome (6, 3, 5), but P3 would reject such an offer because he receives 6 at the current reference point, (2, 2, 6). P1 could threaten to play L, thereby decreasing P3's payoff; however, this threat would not be credible, because P1 would not rationally carry it out: it would receive 1 (as opposed to 2), as we concluded in step (ii). In fact, P3 would not be interested in forming a coalition with any player, because he receives his highest payoff at (2, 2, 6).

I next check whether P1 and P2 can do better by forming a coalition, which is illustrated in step (iv). Notice that if P1 and P2 get together and play L and a, then P3 would best respond to this choice with e. The resulting outcome, (5, 5, 3), is better for P1 and P2 than (2, 2, 6). Therefore, P1 and P2 form a coalition, which breaks down the alliance between P2 and P3 in the left subgame. The new reference point is, once more, (5, 5, 3).

But this is not the end of the analysis, because P3 anticipates that if he does not act, the outcome will be (5, 5, 3). Remember that P1 by itself could not credibly threaten P3; by forming a coalition with P2, however, P1 has sent P3 a credible threat, which brings P3 back to the bargaining table. P3 would seek to form a coalition with P1 to possibly obtain (6, 3, 5), which is mutually beneficial. The agreement would be that if P1 plays R, then P3 plays l; P2's choice would then be d, because she would receive 3, whereas if she played c, she would get at most 2. Step (v) in Figure 3 illustrates the backward induction outcome, (6, 3, 5), when P1 and P3 cooperate. This is the last reference point and the final outcome, because no other coalition, including the grand coalition, can do better.

A complete solution of this game can be summarized by a list of players, stable coalitions, and their strategies: [(1, 3), 2: R, {a, d}, {e, g, j, l}], in which P1 and P3 form a coalition, P1 chooses R, P2 chooses a and d, and P3 chooses e, g, j, and l, from left to right. The outcome of this solution is (6, 3, 5).

[6] Note that I have jumped one step. Specifically, I should also have checked whether P2 and P3 would form a coalition in the right subgame. This is not possible, because P3, by playing independently, already receives his greatest payoff in this subgame, so he would not be willing to form a coalition.

Figure 3. Step (v): A coalition breaks down once again. P1's forming a coalition with P2 was a credible threat, which brings P3 back to the bargaining table. Anticipating that his payoff will otherwise be 3, P3 forms a coalition with P1: P1 will play R and P3 will play l. P2 is not happy about this, but her best response against the coalition is d. As a result, (6, 3, 5) is the final outcome, because no other coalition, including the grand coalition, can do better.

It is notable that during the solution process all coalitions except the grand coalition ({1, 2}, {2, 3}, and {1, 3}) rationally formed at some point, though the only stable coalition turned out to be the one between P1 and P3.[7]

At the outset, it might be tempting to conclude, without running the BFI procedure, that P1 and P3 will obviously form a coalition to obtain (6, 3, 5). However, this conclusion would be false. To give an example, consider the game in Figure 2 in which, all else being equal, outcome (3, 1, 2) is replaced with outcome (3, 4, 3). This change seems to be irrelevant, because P1 and P3 can still form a coalition to obtain (6, 3, 5). However, the outcome of the new game under the same procedure would be (5, 5, 3), which is significantly different from the previous outcome. Why is this? It is because P2 now has a credible threat against the coalition of P1 and P3. Notice that if P1 plays R, then P2 will respond with c, knowing that the coalition would choose i, which leads to the more egalitarian outcome, (3, 4, 3), as I assumed in the example.[8]

[7] Of course, another interpretation could be that this process first occurs in the minds of the players, and then they form coalitions.

[8] In this example, for simplicity, I assume that payoff transfers are not possible; but even if they were, one could construct a similar example.

Because the reference point in step (iv) was (5, 5, 3), it would not be individually rational for P1 to collaborate with P3, given P2's credible threat. As a result, the solution of this game can be summarized as [(1, 2): L, {a, c}, {e, g, j, k}], in which P1 and P2 form a coalition, P1 chooses L, P2 chooses a and c, and P3 chooses e, g, j, and k, from left to right. The outcome of this solution is (5, 5, 3). A credible threat by P2 has prevented P3 from destabilizing the coalition of P1 and P2.

3. Games of conflict and cooperation

The framework is finite extensive-form games with perfect information.[9] An n-person game of conflict and cooperation is a game in extensive form that consists of a set of players, N = {1, 2, …, n}, their preferences, represented by von Neumann-Morgenstern utilities, a rooted game tree (as in Example 1), and two additional properties:

i. A set that describes which players cannot form coalitions with which players, consistent with the Principle of Free Will. The chance player, if any, cannot form any coalition.

ii. For each possible coalition, a value function, defined as a function of each member's utility, representing the von Neumann-Morgenstern preferences of the coalition.

I assume that the full description of the game is common knowledge. When a coalition forms at a node x in the game tree, the cooperators coordinate their actions in accordance with the preferences given by the value function of the coalition. The value function can also be interpreted as the von Neumann-Morgenstern utility of an auxiliary player who acts on behalf of the coalition. For illustrative purposes, I assumed in the example that a coalition weakly prefers one outcome to another if every member of the coalition weakly prefers the former to the latter. But in general the value function may capture very different coalitional preferences (with or without transferable payoffs).

[9] Any basic textbook on game theory covers extensive-form games with perfect information; see, e.g., Peters (2015). Perfect information means that at any point in the game players know what has happened in the past.
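To fix ideas, the two extra ingredients can be encoded explicitly on top of an ordinary game tree. The sketch below is illustrative only, and the names are mine, not the paper's; note also that property (i) lists forbidden pairings, which is encoded here, equivalently, as the set of coalitions players are free to form. The value function shown is the min-based egalitarian preference assumed in Example 2.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, FrozenSet, Tuple

Payoff = Tuple[int, ...]
Coalition = FrozenSet[int]

@dataclass
class ConflictCooperationGame:
    """An extensive-form game plus the two extra properties (i) and (ii)."""
    root: dict                        # the game tree, e.g. nested dicts as in Example 1
    n_players: int
    # Property (i), stored positively: the coalitions players are free to form
    # (every coalition outside this set is ruled out by an external authority).
    feasible_coalitions: FrozenSet[Coalition]
    # Property (ii): one value function per coalition, mapping an outcome's
    # payoff vector to the coalition's von Neumann-Morgenstern utility.
    value: Dict[Coalition, Callable[[Payoff], float]] = field(default_factory=dict)

def egalitarian_value(coalition: Coalition) -> Callable[[Payoff], float]:
    """The min-based coalitional preference of Example 2: an outcome is
    better if the worst-off member of the coalition is better off."""
    return lambda payoff: min(payoff[i] for i in coalition)

# A coalition of P1 and P3 (indices 0 and 2) prefers (6, 3, 5) to (1, 1, 6):
c13 = frozenset({0, 2})
assert egalitarian_value(c13)((6, 3, 5)) > egalitarian_value(c13)((1, 1, 6))
```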

As mentioned earlier, a game of conflict and cooperation is an extension of a game in extensive form in which I assume the Principle of Free Will: players may act independently, form coalitions to coordinate their actions, and decide which coalitions to form. If there is an external authority that can enforce that no one cooperates with anyone, which would be described in the set in (i), then a game of conflict and cooperation reduces to a noncooperative game in extensive form.

I say that a node is labeled if a utility vector is associated with it. In an extensive-form game, only the terminal nodes are labeled. As is standard, I define a subgame of a game at a node x as a game that includes x and all its successors. Note that a game is a subgame of itself. A coalition is called individually rational if every member of the coalition prefers being in the coalition to the reference point, which is implemented if that specific coalition does not form.

I next give a mathematical algorithm, or a logical procedure, that can be used to label all nodes, including the initial node (the root of the tree), when the finite procedure halts. I call this the Backward-Forward Induction (BFI), or Mixed Induction (MI), algorithm, because it is based on the recursive application of a combination of backward induction and forward induction reasoning until it returns an outcome.

Definition 1 (Backward-Forward Induction Algorithm). The BFI algorithm for solving a finite perfect-information game of conflict and cooperation in extensive form is defined by the following procedure.

1. Let i be the player who is active at an unlabeled node x whose immediate successors are all labeled. Define the reference point as the utility vector associated with the choice of Player i that maximizes i's utility at x.

a. If i's utility-maximizing choice leads to a terminal node, then label x with the reference point, and go to Step 3.

b. Otherwise, go to Step 2.

2. Given the reference point at node x, players look forward (i.e., down the tree) and seek ways to form a coalition with or against Player i.[10]

a. For every coalition containing i, apply a new BFI procedure to the subgame starting at node x under the condition that the coalition containing i forms. Denote the outcomes of these BFI procedures, ordered from worst to best for i, by c_1, c_2, …, c_m. Go to (b).

b. Find, if any, the smallest j such that c_j is individually rational for every member of its coalition with respect to the reference point. Then update the reference point to c_j, and repeat (b). If no such j exists, label x with the reference point, and go to Step 3.

3. Repeat Step 1 and Step 2 until all the nodes have been labeled, in which case stop the algorithm.

[10] Players know that if no more coalitions are formed, then the outcome will be the reference point, which can be interpreted as the disagreement point.

Step 1 involves backward induction reasoning, and Step 2 involves forward induction reasoning as well as a recursive sub-step in which the algorithm travels back and forth in the tree. Applying a new BFI procedure means running the same algorithm independently of the results obtained so far in the preceding algorithm. Just like the BI outcome, the BFI outcome need not be unique: there may be more than one choice that maximizes a player's or a coalition's payoff at a given node, in which case any of them can be chosen (ties broken arbitrarily). A sketch of Step 2(b) in code is given below.
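The following is a minimal sketch of the Step 2(b) loop, mine rather than the paper's: given candidate coalition outcomes ordered from worst to best for the active player, it repeatedly adopts the first one that is individually rational for every member of its coalition and restarts from the new reference point. As an assumption on my part, where the paper's statement is informal, each candidate is considered at most once, which guarantees termination.

```python
def update_reference_point(reference, candidates):
    """Step 2(b) sketch. `candidates` holds the coalition outcomes
    c_1, ..., c_m as (coalition, payoff) pairs, ordered from worst to best
    for the active player i; each coalition is a set of player indices.
    Returns the reference point left when no candidate is individually
    rational any more."""
    remaining = list(candidates)
    updated = True
    while updated:
        updated = False
        for k, (coalition, payoff) in enumerate(remaining):
            # Individually rational: every member gets strictly more than
            # at the current reference point.
            if all(payoff[m] > reference[m] for m in coalition):
                reference = payoff
                del remaining[k]  # consider each candidate at most once,
                updated = True    # which guarantees the loop terminates
                break             # then repeat (b) from the smallest j
    return reference

# Example 2, step (ii): in the left subgame the reference point is the BI
# outcome (5, 5, 3), and coalition {P2, P3} (indices 1, 2) can reach (1, 6, 4).
print(update_reference_point((5, 5, 3), [(frozenset({1, 2}), (1, 6, 4))]))
# -> (1, 6, 4), matching the updated reference point in the text
```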

I call a coalition stable if it survives the BFI procedure; otherwise I call it unstable. For example, the coalition {1, 2} in Example 2 is unstable, because the stable coalition {1, 3} forms against it, as I have shown. As defined earlier, a coalition is individually rational if every member of the coalition prefers being in the coalition to the reference point. A coalition may be individually rational but not stable: its members may be better off in the coalition than at the reference point, yet there may be another coalition opportunity that arises only if the former individually rational coalition forms, which destabilizes it.

A (BFI) equilibrium of a game of conflict and cooperation consists of the strategies of the players and the stable coalitions that give rise to the outcome obtained by the BFI procedure. Equilibrium strategies, stable coalitions, and their strategies can be identified by keeping a record of them while the original algorithm runs from the root of the game tree until the end. This notion is based on the equilibrium ideas of Cournot (1838), von Neumann (1928), von Neumann and Morgenstern (1944), Nash (1951), and Selten (1965).

The Backward-Forward Induction equilibrium concept is fundamentally different from the standard backward induction solution, or subgame perfect equilibrium, in at least one aspect. A central feature of subgame perfect equilibrium is that it constitutes an equilibrium in every subgame. A BFI equilibrium, however, does not constitute a BFI equilibrium in every subgame.[11] This is because it is based not only on backward induction reasoning but also on forward induction reasoning; as such, the agreements that have been made in the past affect the future plans of players and coalitions in a rational way.

Theorem 1. There exists a BFI outcome and equilibrium in every finite n-person perfect-information game of conflict and cooperation.

Proof. The BFI procedure is well-defined because the game is finite: there are finitely many players and finitely many pure strategies. It is therefore guaranteed to terminate, because all terminal nodes are labeled at the beginning. Step 1 is the standard backward induction procedure, so it does not need an explanation. Step 2 needs some elaboration. Note that this step is recursive in that it calls for a new BFI algorithm to be run and potentially repeated; the outcome of the new algorithm is then compared with the reference outcome. This recursive procedure also ends after finitely many steps, because there are finitely many players and, each time the BFI procedure reaches Step 2(a), the number of players (weakly) decreases due to coalition formation. Q.E.D.

Theorem 1 shows that every finite game of conflict and cooperation has an equilibrium in pure strategies and an associated outcome. Note that players cannot improve their payoffs by switching their strategies unilaterally, and coalitions are individually rational and stable: each member of a coalition gets more than he or she would get otherwise.

4. Conclusions

Games of conflict and cooperation include wars, airline alliances, and scientific publication.

[11] In Example 2, recall that P2 and P3 form a coalition in the left subgame, as described in step (ii), yet in the BFI equilibrium of the game the only stable coalition is the one between P1 and P3.

I propose a solution for such games, based on a unique procedure that combines backward and forward induction reasoning and in which players act independently or cooperate in a rational way. A Backward-Forward Induction (BFI) equilibrium is a list of strategies and stable coalitions such that independent players have no incentive to deviate unilaterally, and coalitions are individually rational and stable in the sense that their members prefer being in the coalition to being outside it. The credibility of threats and counter-threats by individuals and coalitions plays a crucial role in determining the stability of coalitions in this setting.

The BFI concept is fundamentally different from the standard backward induction solution, or subgame perfect equilibrium, in at least one aspect. Unlike a subgame perfect equilibrium, a BFI equilibrium does not constitute a BFI equilibrium in every subgame. This is because, in addition to backward induction reasoning, forward induction reasoning plays a key role in the BFI procedure: players and coalitions draw conclusions from the agreements that have been made, which rationally affect their future plans.

Traditionally, noncooperative games were extended first to imperfect-information games (Selten, 1965) and then to incomplete-information games (Harsanyi, 1967). I believe that the model in this paper and the associated Backward-Forward Induction concept can be extended to more general settings in an analogous way. But I feel that this is a nontrivial task, and it is beyond the scope of this text.

I believe that a number of fields may benefit from applications of the model in this paper. A political scientist may apply the model to conflict and cooperation among countries, a computer scientist to multi-agent systems, a biologist to the evolution of species and genes, an ethologist to animal behavior, an operations researcher to freight carriers, and an economist to examples such as airline alliances and oligopolistic cartels. Although Example 2 helps to illustrate several points about the setup and the solution, it is rather artificial. More insightful examples from these disciplines would be welcome.

References

Barrow, J. D. (2012). Mathletics: 100 Amazing Things You Didn't Know About the World of Sports. New York: W. W. Norton.

Binmore, K. (2007). Game Theory: A Very Short Introduction (Vol. 173). Oxford, UK: Oxford University Press.

Bloch, F. (1996). Sequential formation of coalitions in games with externalities and fixed payoff division. Games and Economic Behavior, 14(1), 90–123.

Brams, S. J. (1994). Theory of Moves. Cambridge, UK: Cambridge University Press.

Brams, S. J., Jones, M. A., & Kilgour, D. M. (2005). Forming stable coalitions: The process matters. Public Choice, 125(1-2), 67–94.

Cournot, A. A. (1838). Recherches sur les principes mathématiques de la théorie des richesses. Paris: Hachette.

Dawkins, R. (2000). Unweaving the Rainbow: Science, Delusion and the Appetite for Wonder. Boston: Houghton Mifflin.

Dawkins, R. (2006). The Selfish Gene: 30th Anniversary Edition. Oxford, UK: Oxford University Press.

Doran, J. E., Franklin, S. R. J. N., Jennings, N. R., & Norman, T. J. (1997). On cooperation in multi-agent systems. The Knowledge Engineering Review, 12(3), 309–314.

Haigh, J., & Cannings, C. (1989). The n-person war of attrition. Acta Applicandae Mathematica, 14(1-2), 59–74.

Hamilton, W. D. (1967). Extraordinary sex ratios. Science, 156(3774), 477–488.

Harsanyi, J. C. (1967). Games with incomplete information played by "Bayesian" players, I–III: Part I. The basic model. Management Science, 14(3), 159–182.

Harsanyi, J. C. (1974). An equilibrium-point interpretation of stable sets and a proposed alternative definition. Management Science, 20(11), 1472–1495.

Herings, P. J. J., Mauleon, A., & Vannetelbosch, V. (2009). Farsightedly stable networks. Games and Economic Behavior, 67(2), 526–541.

Karos, D., & Kasper, L. (2018). Farsighted rationality. Preprint.

Krajewska, M. A., Kopfer, H., Laporte, G., Ropke, S., & Zaccour, G. (2008). Horizontal cooperation among freight carriers: Request allocation and profit sharing. Journal of the Operational Research Society, 59(11), 1483–1491.

Nash, J. (1951). Non-cooperative games. Annals of Mathematics, 54(2), 286–295.

Perea, A. (2012). Epistemic Game Theory: Reasoning and Choice. Cambridge, UK: Cambridge University Press.

Peters, H. (2015). Game Theory: A Multi-Leveled Approach. Berlin: Springer-Verlag.

Petrosyan, L. A., & Zaccour, G. (2016). Cooperative differential games with transferable payoffs. In T. Başar & G. Zaccour (Eds.), Handbook of Dynamic Game Theory. Springer.

Ray, D., & Vohra, R. (2015). Coalition formation. In H. P. Young & S. Zamir (Eds.), Handbook of Game Theory with Economic Applications (Vol. 4, pp. 239–326). Elsevier.

Schelling, T. C. (1980). The Strategy of Conflict. Cambridge, MA: Harvard University Press.

Selten, R. (1965). Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit. Zeitschrift für die gesamte Staatswissenschaft, 121, 301–324 and 667–689.

Serrano, R. (2004). Fifty years of the Nash program, 1953–2003. https://ssrn.com/abstract=724233

Smith, J. M. (1982). Evolution and the Theory of Games. Cambridge, UK: Cambridge University Press.

Smith, J. M., & Price, G. R. (1973). The logic of animal conflict. Nature, 246(5427), 15–18.

Von Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1), 295–320.

Von Neumann, J., & Morgenstern, O. (1953). Theory of Games and Economic Behavior (3rd ed.). Princeton, NJ: Princeton University Press.