Watson - Strategy Solutions.

March 26, 2018 | Author: Raul Sanchez | Category: Game Theory, Calculus, Strategic Management, Lecture, Probability Theory



Strategy: An Introduction to Game Theory
Second Edition
Instructors' Manual
Joel Watson with Jesse Bull
April 2008

Strategy: An Introduction to Game Theory, Second Edition, Instructors' Manual, version 4/2008. Copyright 2008, 2002 by Joel Watson. This document is available, with the permission of W. W. Norton & Company, for use by textbook adopters in conjunction with the textbook. The authors thank Pierpaolo Battigalli, Takako Fujiwara-Greve, Michael Herron, David Miller, David Reinstein, Christopher Snyder, and Charles Wilson for pointing out some typographical errors in earlier versions of this manual. For instructors only; do not distribute.

This Instructors' Manual has four parts. Part I contains some notes on outlining and preparing a game theory course that is based on the textbook. Part II contains more detailed (but not overblown) materials that are organized by textbook chapter. Part III comprises solutions to all of the exercises in the textbook. Part IV contains some sample examination questions. Please report any typographical errors to Joel Watson ([email protected]). Also feel free to suggest new material to include in the instructors' manual or web site.

Contents

Part I: General Materials
Part II: Chapter-Specific Materials
1 Introduction
2 The Extensive Form
3 Strategies and the Normal Form
4 Beliefs, Mixed Strategies, and Expected Payoffs
5 General Assumptions and Methodology
6 Dominance and Best Response
7 Rationalizability and Iterated Dominance
8 Location and Partnership
9 Nash Equilibrium
10 Oligopoly, Tariffs, Crime, and Voting
11 Mixed-Strategy Nash Equilibrium
12 Strictly Competitive Games and Security Strategies
13 Contract, Law, and Enforcement in Static Settings
14 Details of the Extensive Form
15 Backward Induction and Subgame Perfection
16 Topics in Industrial Organization
17 Parlor Games
18 Bargaining Problems
19 Analysis of Simple Bargaining Games
20 Games with Joint Decisions; Negotiation Equilibrium
21 Unverifiable Investment, Hold Up, Options, and Ownership
22 Repeated Games and Reputation
23 Collusion, Trade Agreements, and Goodwill
24 Random Events and Incomplete Information
25 Risk and Incentives in Contracting
26 Bayesian Nash Equilibrium and Rationalizability
27 Lemons, Auctions, and Information Aggregation
28 Perfect Bayesian Equilibrium
29 Job-Market Signaling and Reputation
30 Appendices
Part III: Solutions to the Exercises (Chapters 2-29 and Appendix B, in the same order)
Part IV: Sample Examination Questions

Part I: General Materials

This part contains some notes on outlining and preparing a game theory course for those adopting Strategy: An Introduction to Game Theory.

Sample Syllabi

Most of the book can be covered in a semester-length (13-15 week) course. Here is a sample thirteen-week course outline:

A. Representing Games
  Week 1: Introduction, extensive form, strategies, and normal form (Chapters 1-3)
  Weeks 1-2: Beliefs and mixed strategies (Chapters 4-5)
B. Analysis of Static Settings
  Weeks 2-3: Best response, rationalizability, applications (Chapters 6-8)
  Weeks 3-4: Equilibrium, applications (Chapters 9-10)
  Week 5: Other equilibrium topics (Chapters 11-12)
  Week 5: Contract, law, and enforcement (Chapter 13)
C. Analysis of Dynamic Settings
  Week 6: Extensive form, backward induction, and subgame perfection (Chapters 14-15)
  Week 7: Examples and applications (Chapters 16-17)
  Week 8: Bargaining (Chapters 18-19)
  Week 9: Negotiation equilibrium and problems of contracting and investment (Chapters 20-21)
  Week 10: Repeated games, applications (Chapters 22-23)
D. Information
  Week 11: Random events and incomplete information (Chapter 24)
  Week 11: Risk and contracting (Chapter 25)
  Week 12: Bayesian equilibrium, applications (Chapters 26-27)
  Week 13: Perfect Bayesian equilibrium and applications (Chapters 28-29)

For example, any of the chapters devoted to applications (Chapters 8, 10, 16, 21, 23, 25, 27, and 29) can be covered selectively or skipped
without disrupting the flow of ideas and concepts. For this length of course, you can easily leave out (or simply not cover in class) some of the chapters: Chapters 12 and 17, for example, contain material that may be regarded as more esoteric than essential, and in each case one can easily have the students learn the material in these chapters on their own.

In a ten-week (quarter-system) course, most, but not all, of the book can be covered. Below is a sample ten-week course outline that is formed by trimming some of the applications from the thirteen-week outline. This is the outline that I use for my quarter-length game theory course. Depending on the pace of the course, I usually cover only one application from each of Chapters 8, 10, 16, and 27; I selectively cover Chapters 18, 28, and 29; and I skip Chapter 25. I also avoid some end-of-chapter advanced topics, such as the infinite-horizon alternating-offer bargaining game. Instructors who prefer not to cover contract can skip Chapters 13, 20, 21, and 25.

A. Representing Games
  Week 1: Introduction, extensive form, strategies, and normal form (Chapters 1-3)
  Weeks 1-2: Beliefs and mixed strategies (Chapters 4-5)
B. Analysis of Static Settings
  Weeks 2-3: Best response, rationalizability, applications (Chapters 6-8)
  Weeks 3-4: Equilibrium, applications (Chapters 9-10)
  Week 5: Other equilibrium topics (Chapters 11-12)
  Week 5: Contract, law, and enforcement (Chapter 13)
C. Analysis of Dynamic Settings
  Week 6: Backward induction, subgame perfection, and an application (Chapters 14-17)
  Week 7: Bargaining (Chapters 18-19)
  Weeks 7-8: Negotiation equilibrium and problems of contracting and investment (Chapters 20-21)
  Weeks 8-9: Repeated games, applications (Chapters 22-23)
D. Information
  Week 9: Random events and incomplete information (Chapter 24)
  Week 10: Bayesian equilibrium, application (Chapters 26-27)
  Week 10: Perfect Bayesian equilibrium and an application (Chapters 28-29)

Experiments and a Course Competition

In addition to assigning regular problem sets, it can be fun and instructive to run a course-long competition between the students. The competition consists of a series of challenges, classroom experiments, and bonus questions. Students receive points for participating and performing near the top of the class. Prizes can be awarded to the winning students at the end of the term. The competition is mainly for sharpening the students' skills and intuition, and thus the students' performance in the course competition should not count toward the course grades. Bonus questions can be sent by e-mail; some experiments can be done by e-mail as well. Some suggestions for classroom games and bonus questions appear in various places in this manual.

Level of Mathematics and Use of Calculus

Game theory is a technical subject, so the students should come into the course with the proper mathematics background. In particular, students should be very comfortable with set notation, algebraic manipulation, and basic probability theory. Appendix A in the textbook provides a review of mathematics at the level used in the book.

Some sections of the textbook benefit from the use of calculus; a few examples and applications can be analyzed most easily by calculating derivatives. In each case, the expressions requiring differentiation are simple polynomials (usually quadratics). Thus, only the most basic knowledge of differentiation suffices to follow the textbook derivations. You have two choices regarding the use of calculus.

First, you can avoid calculus altogether, either by providing the students with non-calculus methods to calculate maxima or by skipping the textbook examples that use calculus. Here is a list of the examples that are analyzed with calculus in the textbook:
• the partnership example in Chapters 8 and 9;
• the Cournot application in Chapter 10 (the tariff and crime applications in this chapter are also most easily analyzed using calculus, but the analysis is not done in the book);
• the advertising and limit capacity applications in Chapter 16 (they are based on the Cournot model);
• the dynamic oligopoly model in Chapter 23 (Cournot-based);
• the discussion of risk-aversion in Chapter 25 (in terms of the shape of a utility function);
• the Cournot example in Chapter 26; and
• the analysis of auctions in Chapter 27.
Each of these examples can be easily avoided, if you so choose. There are also some related exercises that you might avoid if you prefer that your students not deal with examples having continuous strategy spaces.

Second, you can make sure all of the students can differentiate simple polynomials. This can be accomplished by either (a) specifying calculus as a prerequisite or (b) asking the students to read Appendix A at the beginning of the course and then perhaps reinforcing this by holding an extra session in the early part of the term to review how to differentiate a simple polynomial. It takes only an hour or so to explain slope and the derivative and to give students the simple rule of thumb for calculating partial derivatives of simple polynomials. My feeling is that using a little bit of calculus is a good idea, even if calculus is not a prerequisite for the game theory course. Then one can easily cover some of the most interesting and historically important game theory applications, such as the Cournot model and auctions.
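The point that only basic differentiation is needed can be illustrated with a quick numeric check. The Cournot-style numbers below are illustrative assumptions, not taken from the textbook: a firm's profit is a quadratic in its own quantity, so the first-order condition gives the maximizer in one line, and a brute-force grid search confirms it.

```python
# Hypothetical Cournot-style example (illustrative numbers, not from the
# textbook): firm 1 chooses quantity q to maximize
#     pi(q) = (a - q - q2) * q - c * q,
# taking the rival quantity q2 as given.  Setting the derivative
#     pi'(q) = a - 2q - q2 - c
# to zero gives the best response q* = (a - q2 - c) / 2.

def best_response(a, c, q2):
    """Profit-maximizing quantity from the first-order condition."""
    return (a - q2 - c) / 2.0

def profit(q, a, c, q2):
    """Quadratic profit function of own quantity q."""
    return (a - q - q2) * q - c * q

a, c, q2 = 100.0, 10.0, 20.0
q_star = best_response(a, c, q2)                     # (100 - 20 - 10)/2 = 35.0

# Confirm the calculus answer by searching a grid of quantities.
grid = [i / 10.0 for i in range(0, 701)]
best_on_grid = max(grid, key=lambda q: profit(q, a, c, q2))
print(q_star, best_on_grid)
```

The grid search is exactly the kind of non-calculus method mentioned above, so this check doubles as a classroom demonstration that the two approaches agree.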
Part II: Chapter-Specific Materials

This part contains instructional materials that are organized according to the chapters in the textbook. For each textbook chapter, the following is provided:
• A brief overview of the material covered in the chapter;
• Lecture notes (including an outline); and
• Suggestions for classroom examples and/or experiments.
The lecture notes are merely suggestions for how to organize lectures of the textbook material. The notes do not represent any claim about the "right" way to lecture. Some instructors may find the guidelines herein to be in tune with their own teaching methods; these instructors may decide to use the lecture outlines without much modification. Others may have a very different style or intent for their courses; these instructors will probably find the lecture outlines of limited use, if at all. I hope this material will be of some use to you.

1 Introduction

This chapter introduces the concept of a game and encourages the reader to begin thinking about the formal analysis of strategic situations. The chapter contains a short history of game theory, followed by a description of "non-cooperative theory" (which the book emphasizes), a discussion of the notion of contract and the related use of "cooperative theory," and comments on the science and art of applied theoretical work. The chapter explains that the word "game" should be associated with any well-defined strategic situation, not just adversarial contests. Finally, the format and style of the book are described.

Lecture Notes
The non-administrative segment of a first lecture in game theory may run as follows.
• Definition of a strategic situation.
• Examples (have students suggest some): chess, poker, and other parlor games; tennis, football, and other sports; firm competition, international trade, international relations, firm/employee relations, biological competition, elections, and other standard economic examples.
• Competition and cooperation are both strategic topics. Game theory is a general methodology for studying strategic settings (which may have elements of both competition and cooperation).
• The elements of a formal game representation.
• A few simple examples of the extensive form representation (point out the basic components).

Examples and Experiments
1. Clap game. Ask the students to stand and then, if they comply, ask them to clap. (This is a silly game.) Show them how to diagram the strategic situation as an extensive form tree. The game starts with your decision about whether to ask them to stand. If you ask them to stand, then they (modeled as one player) have to choose between standing and staying in their seats. If they stand, then you decide between saying nothing and asking them to clap. If you ask them to clap, then they have to decide whether to clap.
Write the outcomes at terminal nodes in descriptive terms, such as "professor happy," "students confused," and so on. Then show how these outcomes can be converted into payoff numbers.

2. Auction the textbook. Many students will probably not have purchased the textbook by the first class meeting, and quite a few students will not know the price of the book. These students may be interested in purchasing the book from you, especially if they can get a good deal. Without announcing the bookstore's price, hold a sealed-bid, first-price auction (using real money). This is a common-value auction with incomplete information. The winning bid may exceed the bookstore's price, giving you an opportunity to talk about the "winner's curse" and to establish a fund to pay students in future classroom experiments.
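The clap game's tree can also be encoded in a few lines of code, which is handy for checking the diagrams students produce. This is a sketch: the node labels follow the description above, but the payoff numbers are illustrative assumptions (the manual only asks that descriptive outcomes be converted into some payoff numbers).

```python
# A minimal encoding of an extensive form as a tree of decision and
# terminal nodes, illustrated with the clap game described above.
# Payoff vectors (professor, students) are illustrative assumptions.

class Node:
    def __init__(self, player=None, payoffs=None):
        self.player = player      # who moves at this node (None = terminal)
        self.payoffs = payoffs    # payoff vector at a terminal node
        self.children = {}        # action label -> successor node

    def add(self, action, child):
        self.children[action] = child
        return child

def terminal_nodes(node):
    """Collect (action-path, payoffs) pairs for every terminal node."""
    if not node.children:
        return [((), node.payoffs)]
    out = []
    for action, child in node.children.items():
        for path, pay in terminal_nodes(child):
            out.append(((action,) + path, pay))
    return out

root = Node("Professor")
root.add("don't ask", Node(payoffs=(0, 0)))
ask = root.add("ask to stand", Node("Students"))
ask.add("sit", Node(payoffs=(-1, 0)))
stand = ask.add("stand", Node("Professor"))
stand.add("say nothing", Node(payoffs=(1, 0)))
clap = stand.add("ask to clap", Node("Students"))
clap.add("clap", Node(payoffs=(2, 1)))
clap.add("don't clap", Node(payoffs=(-1, 1)))

print(len(terminal_nodes(root)))   # 5 terminal nodes
```

Listing the action paths alongside the payoffs mirrors the classroom exercise of writing descriptive outcomes at the terminal nodes before converting them into numbers.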
2 The Extensive Form

This chapter introduces the basic components of the extensive form in a non-technical way. Since strategy is perhaps the most important concept in game theory, a good understanding of this concept makes a dramatic difference in each student's ability to progress. Students who learn about the extensive form at the beginning of a course are much better able to grasp the concept of a strategy than are students who are taught the normal form first. The chapter avoids the technical details of the extensive form representation in favor of emphasizing the basic components of games; the technical details are covered in Chapter 14.

Lecture Notes
The following may serve as an outline for a lecture.
• Basic components of the extensive form: nodes, branches. Nodes are where things happen. Branches are individual actions taken by the players.
• Types of nodes: initial, decision, terminal.
• Build trees by expanding, never converging back on themselves. At any place in a tree, you should always know exactly how you got there. Thus, the tree summarizes the strategic possibilities.
• Player and action labels. Try not to use the same label for different places where decisions are made.
• Information sets. Start by describing the tree as a diagram that an external observer creates to map out the possible sequences of decisions. Assume the external observer sees all of the players' actions. Then describe what it means for a player to not know what another player did. This is captured by dashed lines indicating that a player cannot distinguish between two or more nodes.
• We assume that the players know the game tree, but that a given player may not know where he is in the game when he must make any particular decision.
• An information set is a place where a decision is made.
• How to describe simultaneous moves.
• Outcomes and how payoff numbers represent preferences.
• Example of a game tree.

Examples and Experiments
Several examples should be used to explain the components of an extensive form. In addition to some standard economic examples (such as firm entry into an industry and entrant/incumbent competition), here are a few I routinely use:

1. Three-card poker. In this game, there is a dealer (player 1) and two potential bettors (players 2 and 3). There are three cards in the deck: a high card, a middle card, and a low card. At the beginning of the game, the dealer looks at the cards and gives one to each of the other players. Note that the dealer can decide which of the cards goes to player 2 and which of the cards goes to player 3. (There is no move by Nature in this game; the book does not deal with moves of Nature until Part IV. You can discuss moves of Nature at this point, but it is not necessary.) Player 2 does not observe the card dealt to player 3, nor does player 3 observe the card dealt to player 2. After the dealer's move, player 2 observes his card and then decides whether to bet or to fold. After player 2's decision, player 3 observes his own card and also whether player 2 folded or bet. Then player 3 must decide whether to fold or bet. After player 3's move, the game ends. Payoffs indicate that each player prefers winning to folding and folding to losing. Assume the dealer is indifferent between all of the outcomes (or specify some other preference ordering).

2. Let's Make a Deal game. This is the three-door guessing game that was made famous by Monty Hall and the television game show Let's Make a Deal. The game is played by Monty (player 1) and a contestant (player 2), and it runs as follows. First, Monty secretly places a prize (say, $1000) behind one of three doors. Call the doors a, b, and c. (You might write Monty's actions as a′, b′, and c′ to differentiate them from those of the contestant.) Then, without observing Monty's choice, the contestant selects one of the doors (by saying "a," "b," or "c"). After this, Monty must open one of the doors, but he is not allowed to open the door that is in front of the prize, nor is he allowed to open the door that the contestant selected. Note that Monty does not have a choice if the contestant chooses a different door than Monty chose for the prize. The contestant observes which door Monty opens; note that she will see no prize behind this door. The contestant then has the option of switching to the other unopened door (S for "switch") or staying with the door she originally selected (D for "don't switch"). Finally, the remaining doors are opened and the contestant wins the prize if it is behind the door she chose. The contestant obtains a
• Strategy profile: s ∈ S. The chapter concludes with a few comments on the comparison between the normal and extensive forms. • Refer to Appendix A for more on sets. individual strategy si ∈ Si . This leads to the definition of a payoff function. • Formal definition of strategy. • Notation: i and −i. Example: Si = {H.” A strategy prescribes an action at every information set. even those that would not be reached because of actions taken at other information sets. s = (si . • Examples of strategies. Lecture Notes The following may serve as an outline for a lecture. • Examples of strategies and implied payoffs. leading to a terminal node and payoff vector. The critical point is that strategies are more than just “plans.3 Strategies and the Normal Form As noted already. Chapter 3 starts with the formal definition of strategy. s−i ). • Describe how a strategy implies a path through the tree. is illustrated. The chapter then briefly describes seven classic normal form games. (2) it facilitates exploring how a player responds to his belief about what the other players will do. • Why we need to keep track of a complete contingent plan: (1) It allows the analysis of games from any information set. The chapter then defines the normal form representation as comprising a set of players. • Notation: strategy space Si . and (3) it prescribes a contingency plan if a player makes a mistake. finite games. Chapter 3 proceeds to the construction of the normal-form representation. • Discuss how finite and infinite strategy spaces can be described. L} and si = H. A student that does not understand the concept of a “complete contingent plan” will fail to grasp the sophisticated logic of dynamic rationality that is so critical to much of game theory. . Instructors' Manual for Strategy: An Introduction to Game Theory 19 Copyright 2002. do not distribute. 2008 by Joel Watson For instructors only. 
3 Strategies and the Normal Form

As noted already, introducing the extensive form representation at the beginning of a course helps the students appreciate the notion of a strategy. Chapter 3 starts with the formal definition of strategy, illustrated with some examples. The critical point is that strategies are more than just "plans": a strategy prescribes an action at every information set, even those that would not be reached because of actions taken at other information sets. A student who does not understand the concept of a "complete contingent plan" will fail to grasp the sophisticated logic of dynamic rationality that is so critical to much of game theory. Chapter 3 proceeds to the construction of the normal-form representation, starting with the observation that each strategy profile leads to a single terminal node (an outcome) via a path through the tree. This leads to the definition of a payoff function. The chapter then defines the normal form representation as comprising a set of players, strategy spaces for the players, and payoff functions. The matrix form, for two-player, finite games, is illustrated. The chapter then briefly describes seven classic normal form games and concludes with a few comments on the comparison between the normal and extensive forms.

Lecture Notes
The following may serve as an outline for a lecture.
• Formal definition of strategy.
• Examples of strategies.
• Notation: strategy space Si, individual strategy si ∈ Si. Example: Si = {H, L} and si = H.
• Notation: i and −i, s = (si, s−i).
• Strategy profile: s ∈ S, where S = S1 × S2 × · · · × Sn (product set).
• Refer to Appendix A for more on sets.
• Describe how a strategy implies a path through the tree, leading to a terminal node and payoff vector.
• Examples of strategies and implied payoffs.
• Why we need to keep track of a complete contingent plan: (1) it allows the analysis of games from any information set, (2) it facilitates exploring how a player responds to his belief about what the other players will do, and (3) it prescribes a contingency plan if a player makes a mistake.
• Discuss how finite and infinite strategy spaces can be described.
• Definition of payoff function, ui : S → R; ui(s). Refer to Appendix A for more on functions.
• Example: a matrix representation of players, strategies, and payoffs. (Use any abstract game.)
• Formal definition of the normal form.
• Note: The matrix representation is possible only for two-player, finite games. Otherwise, the game must be described by sets and equations.
• The classic normal form games and some stories. Note the different strategic issues represented: conflict, competition, coordination, cooperation.
• Comparing the normal and extensive forms (translating one to the other).

Examples and Experiments
1. The Princess Bride poison scene. Show the "poison" scene (and the few minutes leading to it) from the Rob Reiner movie The Princess Bride. In this scene, protagonist Wesley matches wits with the evil Vizzini. There are two goblets filled with wine. Away from Vizzini's view, Wesley puts poison into one of the goblets. Then Wesley sets the goblets on a table, one goblet near himself and the other near Vizzini. Vizzini must choose from which goblet to drink, and Wesley must drink from the other goblet. Several variations of this game can be diagrammed for the students, first in the extensive form and then in the normal form.

2. Ultimatum-offer bargaining game. Have students give instructions to others as to how to play the game. Those who play the role of "responder" will have to specify under what conditions to accept and under what conditions to reject the other player's offer. This helps solidify that a strategy is a complete contingent plan.

3. The centipede game (like the one in Figure 3.1(b) of the textbook). As with the bargaining game, have some students write their strategies on paper and give the strategies to other students, who will then play the game as their agents. Discuss mistakes as a reason for specifying a complete contingent plan. Then discuss how strategy specification helps us develop a theory about why players make particular decisions (looking ahead to what they would do at various information sets).

4. Any of the classic normal forms.

5. Repeated Prisoners' Dilemma. Describe the k-period, repeated prisoners' dilemma. For a bonus question, ask the students to compute the number of strategies for player 1 when k = 3. Challenge the students to find a mathematical expression for the number of strategies as a function of k.

6. A 3 × 3 dominance-solvable game, such as the following. (The payoff matrix is not reproduced in this copy.) The payoffs are in dollars. It is very useful to have the students play a game such as this before you lecture on dominance and best response. This will help them to begin thinking about rationality, and their behavior will serve as a reference point for formal analysis. Have the students write their strategies and their names on slips of paper. Collect the slips and randomly select a player 1 and a player 2. Pay these two students according to their strategy profile. Calculate the class distribution over the strategies, which you can later use when introducing dominance and iterated dominance.
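For the repeated prisoners' dilemma bonus question above, the count can be verified by direct computation. The sketch below assumes the standard setup in which both players observe the full history of play, so player 1 has one information set in period 1 and 4^(t-1) of them in period t (one per history), each with two available actions.

```python
# Counting complete contingent plans in a k-period repeated prisoners'
# dilemma, assuming both players observe the full history: a strategy
# chooses one of 2 actions at each of the (4**k - 1) / 3 information sets.

def num_information_sets(k):
    return sum(4 ** (t - 1) for t in range(1, k + 1))   # = (4**k - 1) / 3

def num_strategies(k):
    return 2 ** num_information_sets(k)

for k in (1, 2, 3):
    print(k, num_information_sets(k), num_strategies(k))
# k=1:  1 information set,          2 strategies
# k=2:  5 information sets,        32 strategies
# k=3: 21 information sets, 2,097,152 strategies
```

The explosive growth in the number of strategies is itself a useful talking point when motivating why we analyze repeated games with equilibrium concepts rather than by brute-force enumeration.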
4 Beliefs, Mixed Strategies, and Expected Payoffs

This chapter describes how a belief that a player has about another player's behavior is represented as a probability distribution. It then covers the idea of a mixed strategy, which is a similar probability distribution, and the appropriate notation is defined. The chapter defines expected payoff and gives some examples of how to compute it. At the end of the chapter, there are a few comments about cardinal versus ordinal utility (although it is not put in this language) and about how payoff numbers reflect preferences over uncertain outcomes.

Lecture Notes
The following may serve as an outline for a lecture.
• Example of a belief in words: player 1 might say "I think player 2 is very likely to play strategy L."
• Translate into probability numbers.
• Notation: µj ∈ ∆Sj, with µj(sj) ∈ [0, 1] and Σ µj(sj) = 1, summing over sj ∈ Sj.
• Examples and alternative ways of denoting a probability distribution: for Sj = {L, R} and µj ∈ ∆{L, R} defined by µj(L) = 1/3 and µj(R) = 2/3, we can write µj = (1/3, 2/3).
• Other examples of probabilities.
• Refer to Appendix A for more on probability distributions.
• Mixed strategy. Notation: σi ∈ ∆Si.
• Definition of expected value. Definition of expected payoff.
• Examples: computing expected payoffs.
• Briefly discuss how payoff numbers represent preferences over random outcomes; defer elaboration until later. Risk preferences are discussed in Chapter 25.
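The expected-payoff computation in the outline can be made concrete in a few lines. The belief (1/3, 2/3) is the one from the outline above; the payoff numbers for player 1 are illustrative assumptions, not taken from the textbook.

```python
# Expected payoff against the belief in the outline: player 2 plays L with
# probability 1/3 and R with probability 2/3.  Player 1's payoff numbers
# below are illustrative assumptions.

u1 = {("H", "L"): 3, ("H", "R"): 0,
      ("T", "L"): 0, ("T", "R"): 3}
belief = {"L": 1 / 3, "R": 2 / 3}

def expected_payoff(s1, belief):
    """Sum of payoffs weighted by the belief over player 2's strategies."""
    return sum(p * u1[(s1, s2)] for s2, p in belief.items())

print(expected_payoff("H", belief))   # 3*(1/3) + 0*(2/3) = 1.0
print(expected_payoff("T", belief))   # 0*(1/3) + 3*(2/3) = 2.0
```

The same function handles a mixed strategy for player 2, since a mixed strategy is represented by exactly the same kind of probability distribution as a belief.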
Examples and Experiments
1. Let's Make a Deal game again. For the class competition, you can ask the following two bonus questions: (a) Suppose that, at each of his information sets, Monty randomizes by choosing his actions with equal probability. Is it optimal for the contestant to select "switch" or "don't switch" when she has this choice? Why? (b) Are there conditions (a strategy for Monty) under which it is optimal for the contestant to make the other choice?

2. Randomization in sports. Many sports provide good examples of randomized strategies. Baseball pitchers may desire to randomize over their pitches, and batters may have probabilistic beliefs about which pitch will be thrown to them. Tennis serve and return play is another good example. (See M. Walker and J. Wooders, "Minimax Play at Wimbledon," American Economic Review 91 (2001): 1521-1538.)

5 General Assumptions and Methodology

This chapter contains notes on (a) the trade-off between simplicity and realism in formulating a game-theoretic model, (b) the basic idea and assumption of rationality, (c) the notion of common knowledge and the assumption that the game is commonly known by the players, and (d) a short overview of solution concepts that are discussed in the book. It is helpful to briefly discuss these items with the students during part of a lecture.

6 Dominance and Best Response

This chapter develops and compares the concepts of dominance and best response. The chapter begins with examples in which a strategy is dominated by another pure strategy, followed by an example of mixed-strategy dominance. After the formal definition of dominance, the chapter describes how to check for dominated strategies in any given game. The first strategic tension (the clash between individual and joint interests) is illustrated with reference to the prisoners' dilemma, and then the notion of efficiency is defined. Next comes the definition of best response and examples. The last section of the chapter contains analysis of the relation between the set of undominated strategies and the set of strategies that are best responses to some beliefs. An algorithm for calculating these sets is presented.
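The relation between dominance and best response can be demonstrated numerically as well as graphically. The 3 × 2 game below is hypothetical (the payoff numbers are my own, not the textbook's), constructed so that M is the best response to the belief that player 2 plays L, B is the best response to R, and T is dominated by the mixture (0, 1/2, 1/2), mirroring the structure of the example discussed in this chapter's notes.

```python
# Hypothetical 3x2 game for player 1 (rows T, M, B against columns L, R);
# illustrative payoffs chosen so that M is best against L, B is best
# against R, and T is dominated by the mixed strategy (0, 1/2, 1/2).

u1 = {"T": {"L": 2, "R": 2},
      "M": {"L": 6, "R": 0},
      "B": {"L": 0, "R": 6}}

def expected(row, p):
    """Expected payoff of a pure strategy when L has probability p."""
    return p * u1[row]["L"] + (1 - p) * u1[row]["R"]

def mix_expected(p):
    """Expected payoff of the mixture (0, 1/2, 1/2) on M and B."""
    return 0.5 * expected("M", p) + 0.5 * expected("B", p)

# The mixture beats T against both pure strategies of player 2 ...
assert mix_expected(1.0) > expected("T", 1.0)   # vs. L: 3 > 2
assert mix_expected(0.0) > expected("T", 0.0)   # vs. R: 3 > 2

# ... and T is never a best response: for every belief p on a grid,
# either M or B does strictly better than T.
for i in range(101):
    p = i / 100
    assert max(expected("M", p), expected("B", p)) > expected("T", p)
print("T is dominated by (0, 1/2, 1/2) and is never a best response")
```

Plotting `expected("T", p)`, `expected("M", p)`, and `expected("B", p)` against p reproduces the graph described in the notes: T's line lies below the upper envelope of M's and B's lines for every belief.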
Lecture Notes
The following may serve as an outline for a lecture.
• Optional introduction: analysis of a game played in class. If a 3 × 3 dominance-solvable game (such as the one suggested in the notes for Chapter 4) was played in class earlier, the game can be quickly analyzed to show the students what is to come.
• A simple example of a strategy dominated by another pure strategy. (Use a 2 × 2 game.)
• An example of a pure strategy dominated by a mixed strategy. (Use a 3 × 2 game.)
• Formal definition of strategy si being dominated. Set of undominated strategies for player i, UDi.
• Discuss how to search for dominated strategies. There is little need for the weak dominance concept here; working only with strict dominance helps avoid confusion (students sometimes interchange the weak and strong versions). The book does not discuss weak dominance until the analysis of the second-price auction in Chapter 27.
• The first strategic tension and the prisoners' dilemma.
• Definition of efficiency and an example.
• Best response examples. (Use simple games such as the prisoners' dilemma, the battle of the sexes, Cournot duopoly.)
• Formal definition of strategy si being a best response to belief µ−i. Set of best responses for player i, BRi(µ−i).
• Note that forming beliefs is the most important exercise in rational decision making.
• Set of player i's strategies that can be justified as best responses to some beliefs, Bi.
• Note: Remember that payoff numbers represent preferences over random outcomes.
• Example to show that Bi = UDi.
• Algorithm for calculating Bi = UDi in two-player games: (1) strategies that are best responses to simple (point mass) beliefs are in Bi; (2) strategies that are dominated by other pure strategies are not in Bi; (3) other strategies can be tested for mixed-strategy dominance to see whether they are in Bi. Step 3 amounts to checking whether a system of inequalities can hold.
• State formal results. Note that Appendix B contains more technical material on the relation between dominance and best response.

Examples and Experiments
1. Example of dominance and best response. To demonstrate the relation between dominance and best response, the following game can be used. (The payoff matrix is not reproduced in this copy.) First show that M is the best response to L, whereas B is the best response to R. Next show that T is dominated by player 1's strategy (0, 1/2, 1/2), which puts equal probability on M and B but zero probability on T. Then prove that there is no belief for which T is a best response. A simple graph will demonstrate this. On the graph, the x-axis is the probability p that player 1 believes player 2 will select L, and the y-axis is player 1's expected payoff of the various strategies. The line corresponding to the expected payoff of playing T is below at least one of the lines giving the payoffs of M and B, for every p.

2. The 70 percent game. This game can be played by everyone in the class, either by e-mail or during a class session. Each of the n students selects an integer between 1 and 100 and writes this number, along with his or her name, on a slip
The students make their selections simultaneously and independently. Ideally. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002.6 DOMINANCE AND BEST RESPONSE 27 of paper. 2008 by Joel Watson For instructors only. It is as though the information is publicly announced while the players are together. . order of deletion does not matter. ”). • Formally. • Common knowledge: information that each player knows. each player knows the others know. . highlighting hierarchies of beliefs (“player 1 knows that player 2 knows that player 1 will not select. The second strategic tension—strategic uncertainty—is explained. the logic of rationalizability and iterated dominance is demonstrated with an example. 2008 by Joel Watson For instructors only. let Rk be the set of strategy profiles that survives k rounds of iterated dominance. do not distribute. Instructors' Manual for Strategy: An Introduction to Game Theory 28 Copyright 2002.7 Rationalizability and Iterated Dominance This chapter follows naturally from Chapter 6. . For finite games. • The second strategic tension: strategic uncertainty (lack of coordination between beliefs and behavior). Lecture Notes The following may serve as an outline for a lecture. At the beginning of the chapter. It discusses the implications of combining the assumption that players best respond to beliefs with the assumption that this rationality is common knowledge between the players. and only implies. Then the rationalizable set R is the limit of Rk as k gets large. no more strategies will be deleted. . • Notes on how to compute R: algorithm. never playing dominated strategies) with common knowledge implies. . We call these the rationalizable strategies. . • Combining rationality (best response behavior. Then iterated dominance and rationalizability are defined more formally. • Example of iterated dominance. after some value of k. that players will play strategies that survive iterated dominance. 
each player knows the others know that they all know. yet the rationalizable set is quite difficult to compute. Note that one player’s beliefs about the strategies chosen by the other players is. tell the students that.7 RATIONALIZABILITY AND ITERATED DOMINANCE 29 Examples and Experiments 1. or it can be played over the Internet for bonus points. In this case. a very complicated probability distribution. after the students select their strategies. this always stimulates a lively discussion of rationality and common knowledge. . $2. n equals the number of students.00 or 25 points). Discuss why it may be rational to select a different number if common knowledge of rationality does not hold. In the second version of the game. do not distribute. In my experience. If all of the players selected A. In the game. tell the students that you will pay only a few of them (randomly chosen) but that their payoffs are determined by the strategies of everyone in the class. you will randomly choose two of them. then they each obtain a larger prize ($5. whose payoffs are determined by only each other’s strategies. n players simultaneously and independently write “A” or “B” on slips of paper. then you can show that the player’s best response must be strictly less than x (considering that the player believes at least one other player will select x with positive probability). n = 2. Analyze the game and show that the only rationalizable strategy is to select 1. because there is a sense in which strategic uncertainty is likely to increase (and players are more likely to choose B) with n. However. It is perhaps easier to demonstrate that 100 is never a best response. The students will readily agree that selecting 100 is a bad idea. If x > 1. It is a good example of a game that has a rationalizable solution. 2. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. 
a randomly drawn student who selected A gets the larger prize if and only if everyone else in the class also picked A. but it can be summarized by the “highest number that the player believes the others will play with positive probability. then those who chose A get nothing and those who chose B get a small prize (say.00 or 10 points). 2008 by Joel Watson For instructors only. The game can be used to demonstrate strategic uncertainty. In the first version. A good way to demonstrate strategic uncertainty is to play two versions of the game in class. showing that 100 is dominated can be quite difficult. The 70 percent game again. Generalized stag hunt. That is.” Call this number x. In this case. This game can be played in class by groups of different sizes. If any of the players selected B. in general. You will most likely see a much higher percentage of students selecting A in the first version than in the second. do not distribute. 2008 by Joel Watson For instructors only. • Player i’s belief about player j’s strategy can be complicated. 6. present the analysis of the Cournot duopoly game in class and let the students read the parallel analysis of the partnership game from the book). 7. Both of these require nontrivial analysis and lead to definitive conclusions. The following may serve as an outline for a lecture. 6. R1i = {2. The notion of strategic complementarity is briefly discussed in the context of the partnership game. R3i = {4. The rationalizable set is determined as the limit of an infinite sequence. 7}. The location and partnership examples in this chapter are excellent choices for presentation. 9}. Write the dominance condition ui (1. R4i = {5} = R. 5. This game has a unique rationalizable strategy profile. sj ) < ui (2. R2i = {3. but it too has a unique rationalizable strategy profile. 8}. 7. . • Describe the partnership game (or Cournot game. 4. but. only the average (mean) matters. 
Analysis of the partnership game coaches the reader on how to compute best responses for games with differentiable payoff functions and continuous strategy spaces. Thus. 8. • Repeat. • Applications of the location model. Lecture Notes Students should see the complete analysis of a few games that can be solved by iterated dominance. Instructors' Manual for Strategy: An Introduction to Game Theory 30 Copyright 2002. for the Cournot duopoly game. sj ). 5. they demonstrate the art of constructing game theory models. This gives the students exposure to two games that have continuous action spaces. It is useful to draw the extensive form. The partnership game has infinite strategy spaces. • Show that the end regions are dominated by the adjacent ones. and they guide the reader on how to calculate the set of rationalizable strategies. 5. 6}. The location game is a finite (nine location) version of Hotelling’s well-known model. BRi (qj ). etc. Si = {1. or other). for expected payoff calculations. • Describe the location game and draw a picture of the nine regions. 5. write BR1(y) or. 3. 4. The applications illustrate the power of proper game theoretic reasoning. 2.8 Location and Partnership This chapter presents two important applied models. Thus. • Sets of possible best responses (Bi ). 6. for example. It may be useful to substitute for the partnership game in a lecture (one can. • Differentiate (or argue by way of differences) to get the best response functions. 3. 4. you can have different pairs of students play in different rounds. In one version. 2008 by Joel Watson For instructors only. You can play different versions of the location game in class (see. do not distribute. Rather than have the students play repeatedly. Repeated play and convergence. simply invite two students to play a one-shot game. An interesting variant on the convergence experiment can be used to demonstrate that pre-play communication and/or mediation can align beliefs and behavior. 
The history of play can be recorded on the chalkboard. they can be allowed to communicate (agree to a self-enforced contract) before playing. you or a student can recommend a strategy profile to the players (but in this version. which is a nice transition to the material in Chapter 9. keep the players from communicating between themselves and separate them when they are to choose strategies). Probably the easiest way of running the experiment is to have just two students play a game like the following: A game like the one from Exercise 6 of Chapter 9 may also be worth trying. . 3. This gets the students thinking about an institution (historical precedent. R3i . Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. Location games. in this case) that helps align beliefs and behavior. Contract or mediation. You can motivate the game with a story.8 LOCATION AND PARTNERSHIP 31 • Restrictions: implications of common knowledge of rationality. 2. In a second version. although it takes time and prizes. but rather to see whether experimental play stabilizes on one particular strategy profile or subset of the strategy space. to engage your class in repeated play of a simple matrix game. Construct R1i . Indicate the limit R. • Concept of strategic complementarity. for instance. the variations in the Exercises section of the chapter). The point is not to discuss reputation. To avoid repeated-game issues. Examples and Experiments 1. It may be useful. R2i . the mathematical representation of some coordination between players’ beliefs and behavior. (b) pre-play communication (contracting). but we should better understand how. do not distribute. • An algorithm for finding Nash equilibria in matrix games. A weakly congruous strategy profile. there is an aside on behavioral game theory (experimental work). and congruous strategy sets. Discuss real examples of inefficient coordination in the world. • Stories: (a) repeated social play with a norm. 
• Institutions that alleviate strategic uncertainty: norms. Strict Nash equilibrium (a congruous strategy profile). The chapter begins with the concept of congruity. Then the chapter addresses the issue of coordination and welfare. the first and third strategic tensions still remain. where players’ beliefs and behavior are coordinated so there is some resolution of the second strategic tension. and (c) an outside mediator suggests strategies. This is the third strategic tension. leading to a description of the third strategic tension—the specter of inefficient coordination. • Represent as congruity. location. • Pareto coordination game shows the possibility of inefficient coordination. Various examples are furnished. Finally. which captures the absence of strategic uncertainty (as a single strategy profile). Instructors' Manual for Strategy: An Introduction to Game Theory 32 Copyright 2002. • Discuss strategic uncertainty (the second strategic tension). partnership. Lecture Notes The following may serve as an outline for a lecture. 2008 by Joel Watson For instructors only. Nash equilibrium is defined as a weakly congruous strategy profile. • Note that a institution may thus alleviate the second tension. Strategic certainty is discussed as the product of various social institutions. • Nash equilibrium (where there is no strategic uncertainty). best response complete. . communication.9 Nash Equilibrium This chapter provides a solid conceptual foundation for Nash equilibrium. etc. • Example of an abstract game with various sets that satisfy these definitions. rules. so they get the worst payoff profile. Define weakly congruous. Also. etc. based on (1) rationalizability and (2) strategic certainty. Illustrate with a game (such as the battle of the sexes) where the players’ beliefs and behavior are not coordinated. • Examples of Nash equilibrium: classic normal forms. 9 NASH EQUILIBRIUM 33 Examples and Experiments 1. Coordination experiment. 
To illustrate the third strategic tension, you can have students play a coordination game in the manner suggested in the previous chapter (see the repeated play, contract, and mediation experiments). For example, have two students play a Pareto coordination game with the recommendation that they select the inferior equilibrium. Or invite two students to play a complicated coordination game (with, say, ten strategies) in which the strategy names make an inferior equilibrium a focal point. 2. The first strategic tension and externality. Students may benefit from a discussion of how the first strategic tension (the clash between individual and joint interests) relates the classic economic notion of externality. This can be illustrated in equilibrium, by using any game whose equilibria are inefficient. An n-player prisoners’ dilemma or commons game can be played in class. You can discuss (and perhaps sketch a model of) common economic settings where a negative externality causes people to be more active than would be jointly optimal (pollution, fishing in common waters, housing development, arms races). 3. War of attrition. A simple war of attrition game (for example, one in discrete time) can be played in class for bonus points or money. A two-player game would be the easiest to run as an experiment. For example, you could try a game like that in Exercise 9 of Chapter 22 with x = 0. Students will hopefully think about mixed strategies (or at least, nondegenerate beliefs). You can present the “static” analysis of this game. To compute the mixed strategy equilibrium, explain that there is a stationary “continuation value,” which, in the game with x = 0, equals zero. If you predict that the analysis will confused the students, this example might be better placed later in the course (once students are thinking about sequential rationality). 
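The bullet "An algorithm for finding Nash equilibria in matrix games" and Chapter 7's iterated dominance procedure can both be made concrete with a short script. The following is a minimal sketch, not from the textbook (the helper names `pure_nash` and `iterated_dominance` are my own): it checks every cell of a two-player matrix game for mutual best response, and it iteratively deletes strategies strictly dominated by another pure strategy. Note that it implements only pure-strategy domination; the textbook's mixed-dominance test (step 3 of the Chapter 6 algorithm) would additionally require checking systems of inequalities.

```python
from itertools import product

def pure_nash(u1, u2):
    """Return all pure-strategy Nash equilibria (i, j) of a two-player
    matrix game, where u1[i][j] and u2[i][j] are the players' payoffs
    when player 1 picks row i and player 2 picks column j."""
    rows, cols = range(len(u1)), range(len(u1[0]))
    return [
        (i, j)
        for i, j in product(rows, cols)
        if u1[i][j] == max(u1[k][j] for k in rows)   # row i is a best response to column j
        and u2[i][j] == max(u2[i][l] for l in cols)  # column j is a best response to row i
    ]

def iterated_dominance(u1, u2):
    """Iteratively delete strategies strictly dominated by another PURE
    strategy; for finite games the order of deletion does not matter."""
    rows, cols = set(range(len(u1))), set(range(len(u1[0])))
    changed = True
    while changed:
        changed = False
        for i in list(rows):
            if any(all(u1[k][j] > u1[i][j] for j in cols) for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in list(cols):
            if any(all(u2[i][l] > u2[i][j] for i in rows) for l in cols if l != j):
                cols.remove(j)
                changed = True
    return sorted(rows), sorted(cols)

# Prisoners' dilemma with rows/columns: cooperate = 0, defect = 1.
u1 = [[2, 0], [3, 1]]
u2 = [[2, 3], [0, 1]]
print(pure_nash(u1, u2))           # [(1, 1)]: mutual defection
print(iterated_dominance(u1, u2))  # ([1], [1]): only defection survives
```

Running the same `pure_nash` check on a battle-of-the-sexes matrix returns both pure coordination profiles, which is a quick way to show students that the algorithm finds all equilibria, not just one.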
10 Oligopoly, Tariffs, Crime, and Voting

This chapter presents six standard applied models: Cournot duopoly, Bertrand duopoly, tariff competition, a model of crime and police, candidate location (the median voter theorem), and strategic voting. Each model is motivated by an interesting, real strategic setting. Very simple versions of the models are described, and the equilibria of four of the examples are calculated. Calculations for the other two models are left as exercises.

Lecture Notes

Any or all of the models can be discussed in class, depending on time constraints and the students' background and interest. Other equilibrium models can also be presented, either in addition to or substituting for the ones in the textbook. In each case, it may be helpful to organize the lecture as follows.

• Motivating story and real-world setting.
• Explanation of how some key strategic elements can be distilled in a game theory model.
• Description of the game.
• Overview of rational behavior (computation of best response functions, if applicable).
• Equilibrium calculation.
• Discussion of intuition.

Examples and Experiments

Students would benefit from a discussion of real strategic situations, especially with an eye toward understanding the extent of the first strategic tension (equilibrium inefficiency). Also, any of the applications can be used for classroom experiments. Here is a game that can be played by e-mail, which may be useful in introducing mixed-strategy Nash equilibrium in the next lecture. (The game is easy to describe, but difficult to analyze.) Students are asked to each submit a number from the set {1, 2, 3, 4, 5, 6, 7, 8, 9}. The students make their selections simultaneously and independently. At a prespecified date, you determine how many students picked each of the numbers and you calculate the mode (the number that was most selected). For example, if ten students picked 3, eight students picked 6, eleven students picked 7, and six students picked 8, then the mode is 7. If there are two or more modes, the highest is chosen. Let x denote the mode. If x = 9, then everyone who selected the number 1 gets one bonus point (and the others get zero). If x is not equal to 9, then everyone who selected x + 1 gets x + 1 bonus points (and the others get zero).

11 Mixed-Strategy Nash Equilibrium

This chapter begins with the observation that, intuitively, a randomized strategy seems appropriate for the matching pennies game. The definition of a mixed-strategy Nash equilibrium is given, followed by instructions on how to compute mixed-strategy equilibria in finite games. The Nash equilibrium existence result is presented.

Lecture Notes

Few applications and concepts rely on the analysis of mixed strategies, so the book does not dwell on the concept. However, it is still an important topic, and one can present several interesting examples. Here is a lecture outline.

• Matching pennies: note that there is no Nash equilibrium in pure strategies. Ask for suggestions on how to play. Ask "Is there any meaningful notion of equilibrium in mixed strategies?"
• Note the (1/2, 1/2), (1/2, 1/2) mixed strategy profile. Confirm understanding of "mixing."
• The definition of a mixed-strategy Nash equilibrium, the straightforward extension of the basic definition.
• Two important aspects of the definition: (a) what it means for strategies that are in the support (they must all yield the same expected payoff) and (b) what it means for pure strategies that are not in the support of the mixed strategy.
• An algorithm for calculating mixed-strategy Nash equilibria: Find rationalizable strategies, look for a mixed strategy of one player that will make the other player indifferent, and then repeat for the other player.
• Note the mixed-strategy equilibria of the classic normal form games.
• Mixed-strategy Nash equilibrium existence result.

Examples and Experiments

1. Attack and defend. Discuss how some tactical choices in war can be analyzed using matching pennies-type games. Use a recent example or a historical example, such as the choice between Normandy and the Pas de Calais for the D-Day invasion of June 6, 1944. In the D-Day example, the Allies had to decide at which location to invade, while the Germans had to choose where to bolster their defenses. Discuss how the model can be modified to incorporate more realistic features.

2. A socially repeated strictly competitive game. This classroom experiment demonstrates how mixed strategies may be interpreted as frequencies in a population of players. Use a game that has a unique, mixed-strategy Nash equilibrium. The experiment can be done over the Internet or in class. For the classroom version, draw on the board a symmetric 2 × 2 strictly competitive game, with the strategies Y and N for each of the two players. Tell the students that some of them will be randomly selected to play this game against one another. The game can be played for money or for points in the class competition. Here is an idea for how to play the game quickly. With everyone's eyes closed, each student selects a strategy by either putting his hands on his head (the Y strategy) or folding his arms (the N strategy). At your signal, the students open their eyes. Compute the distribution of strategies for the entire class and report this to all of the students. Randomly select several pairs of students and pay them according to their strategy profile. If the class frequencies match the Nash equilibrium, then discuss this. Otherwise, repeat the gaming procedure several times and discuss whether play converges to the Nash equilibrium. Alternatively, ask all of the students to select strategies (by writing them on slips of paper or using cards as described below); you can quickly calculate the strategy distribution and randomly select students (from a class list) to pay.

3. Another version of the socially repeated game. The classroom version above may be unwieldy if there are many students. Instead of having the entire class play the game in each round, have only two randomly selected students play. Everyone will see the sequence of strategy profiles, and you can discuss how the play in any round is influenced by the outcome in preceding rounds. In addition to demonstrating how random play can be interpreted and form a Nash equilibrium, the social repetition experiments also make the students familiar with strictly competitive games, which provides a good transition to the material in Chapter 12.

4. Randomization in sports. Discuss randomization in sport (soccer penalty shots, tennis service location, baseball pitch selection, American football run/pass mix).

12 Strictly Competitive Games and Security Strategies

This chapter offers a brief treatment of two concepts that played a major role in the early development of game theory: two-player, strictly competitive games and security strategies. The chapter presents a result that is used in Chapter 17 for the analysis of parlor games. Items to cover include the definition of a two-player, strictly competitive game; the determination of security strategies in some examples; and why best response is our behavioral foundation.
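For a 2 × 2 game with an interior mixed equilibrium, the indifference algorithm in the Chapter 11 notes reduces to solving one linear equation per player. The sketch below is my own illustration (the helper name `mixed_nash_2x2` is not from the textbook), using exact fractions from the standard library; it assumes an interior equilibrium exists, so it will divide by zero if one player has a dominant pure strategy.

```python
from fractions import Fraction

def mixed_nash_2x2(u1, u2):
    """Interior mixed-strategy equilibrium of a 2x2 game, if one exists.
    p = probability that player 1 plays row 0, chosen so that player 2 is
    indifferent between columns; q = probability that player 2 plays
    column 0, chosen so that player 1 is indifferent between rows."""
    # Player 1 indifferent: q*u1[0][0] + (1-q)*u1[0][1] = q*u1[1][0] + (1-q)*u1[1][1]
    a = u1[0][0] - u1[1][0]
    b = u1[0][1] - u1[1][1]
    q = Fraction(b, b - a)
    # Player 2 indifferent: p*u2[0][0] + (1-p)*u2[1][0] = p*u2[0][1] + (1-p)*u2[1][1]
    c = u2[0][0] - u2[0][1]
    d = u2[1][0] - u2[1][1]
    p = Fraction(d, d - c)
    return p, q

# Matching pennies: strictly competitive, no pure-strategy equilibrium.
u1 = [[1, -1], [-1, 1]]
u2 = [[-1, 1], [1, -1]]
p, q = mixed_nash_2x2(u1, u2)
print(p, q)  # 1/2 1/2
```

In matching pennies the equilibrium strategy (1/2, 1/2) also guarantees each player an expected payoff of 0 against any opponent behavior, so it is a security strategy as well, which previews the Chapter 12 result connecting Nash equilibrium and security strategies in strictly competitive games.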
Lecture Notes

One can present this material very quickly in class, or leave it for students to read. An outline for a lecture may run as follows.

• Definition of a two-player, strictly competitive game.
• The special case called zero-sum.
• Examples of strictly competitive and zero-sum games.
• Definition of security strategy and security payoff level.
• Determination of security strategies in some examples.
• Discuss the difference between security strategy and best response.
• The Nash equilibrium and security strategy result.

Examples and Experiments

Any abstract examples will do for a lecture, so the students understand that the definition applies generally. It is instructive to demonstrate security strategies in the context of some games that are not strictly competitive. This may also be used immediately before presenting the material in Chapter 13 and/or as a lead-in to Chapter 18.

13 Contract, Law, and Enforcement in Static Settings

This chapter presents the notion of contract. It carefully explains how players can use a contract to induce a game whose outcome differs from that of the game given by the technology of the relationship. Much emphasis is placed on how contracts help to align beliefs and behavior in static settings. The exposition begins with a setting of full verifiability and complete contracting. The discussion then shifts to settings of limited liability and default damage remedies. Further, the relationship between those things considered verifiable and the outcomes that can be implemented is carefully explained.

Lecture Notes

You may find the following outline useful in planning a lecture.

• Discuss why players might want to contract (and why society might want laws). Explain why contracts are fundamental to economic relationships.
• Definition of contract. Self-enforced and externally enforced components.
• Practical discussion of the technology of the relationship.
• Definition of the induced game, and how the court enforces a contract.
• Complete contracting.
• Verifiability and implementation. Note the implications of limited verifiability.
• Liquidated damage clauses and contracts specifying transfers.
• Default damage rules: expectation, reliance, restitution.
• Efficient breach.
• Comments on the design of legal institutions.

Examples and Experiments

1. Contract game. A contract game of the type analyzed in this chapter can be played as a classroom experiment. Two students can be selected to first negotiate a contract and then play the underlying game. You play the role of the external enforcer. It may be useful to do this once with full verifiability and once with limited verifiability.

2. Case study: Chicago Coliseum Club v. Dempsey (Source: 265 Ill. App. 542; 1932 Ill. App.). This or a different case can be used to illustrate the various kinds of default damage remedies and to show how the material of the chapter applies to practical matters. First, give the background of the case and then present a stylized example that is based on the case. (For a more detailed discussion of this case, see Barnett, Contracts: Cases and Doctrine, 2d Ed., Aspen 1999.)

Facts of the Case: Chicago Coliseum Club, a corporation, as "plaintiff," brought its action against "defendant" William Harrison Dempsey, known as Jack Dempsey, to recover damages for breach of a written contract executed March 13, 1926, but bearing date of March 6 of that year. Plaintiff was incorporated as an Illinois corporation for the promotion of general pleasure and athletic purposes and to conduct boxing, sparring, and wrestling matches and exhibitions for prizes or purses. Dempsey was well known in the pugilism world and, at the time of the making and execution of the contract in question, held the title of world's Champion Heavy Weight Boxer. Under the contract, Dempsey was to engage in a boxing match with Harry Wills, another well-known boxer, and the Chicago Coliseum Club was to promote the event. The contract between the Chicago Coliseum Club and Wills was entered into on March 6, 1926. It stated that Wills was to be paid $50,000; at the signing of the contract, he was paid $10, but he was never paid the remainder. The Chicago Coliseum Club also hired a promoter.

Dempsey was to be paid $800,000 plus 50 percent of "the net profits over and above the sum of $2,000,000 in the event the gate receipts should exceed that amount." Further, he was to receive 50 percent of "the net revenue derived from moving picture concessions or royalties received by the plaintiff." Dempsey was not to engage in any boxing match after the date of the agreement and before the date of the contest. He was also "to have his life and health insured in favor of the plaintiff in a manner and at a place to be designated by the plaintiff." When the Chicago Coliseum Club contacted Dempsey concerning the life insurance, Dempsey repudiated the contract with the following telegram message:

BM Colorado Springs Colo July 10th 1926
B. E. Clements
President Chicago Coliseum Club Chgo
Entirely too busy training for my coming Tunney match to waste time on insurance representatives stop as you have no contract suggest you stop kidding yourself and me also
Jack Dempsey

The court identified the following issues as being relevant in establishing damages: First: Loss of profits which would have been derived by the plaintiff in the event of the holding of the contest in question. Second: Expenses incurred by the plaintiff prior to the signing of the agreement between the plaintiff and Dempsey. Third: Expenses incurred in attempting to restrain the defendant from engaging in other contests and to force him into a compliance with the terms of his agreement with the plaintiff. Fourth: Expenses incurred after the signing of the agreement and before the breach of July 10, 1926.

The Chicago Coliseum Club claimed that it would have had gross receipts of $3,000,000 and expenses of $1,400,000, which would have left a net profit of $1,600,000. However, the court was not convinced of this, as there were too many undetermined factors. (Unless shown otherwise, the court will generally assume that the venture would have at least broken even.) The expenses incurred before the contract was signed with Dempsey could not be recovered as damages. Expenses incurred in relation to the third item above could only be recovered as damages if they occurred before the repudiation. The expense of the fourth item above could be recovered.

Stylized Example: The following technology of the relationship shows a possible interpretation when proof of the expected revenues is available. The strategies for Chicago Coliseum are "promote" and "don't promote." The strategies for Dempsey are "take this match" and "take other match." This assumes that promotion by Chicago Coliseum Club benefits Dempsey's reputation and allows him to gain by taking the other boxing match. This example can be used to illustrate a contract that would induce Dempsey to keep his agreement with Chicago Coliseum. This could be compared to the case where substantial evidence did exist as to the expected profits of Chicago Coliseum. Note that, when it is assumed that the expected profit is zero, expectation and reliance damages result in the same transfer.
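The effect of the damage rule on Dempsey's breach decision can be shown in class with a few lines of arithmetic. The payoff numbers below are hypothetical classroom values (in millions of dollars), not figures from the case record, and the function name is my own. The sketch contrasts expectation damages (which restore the victim's expected profit) with reliance damages (which only reimburse post-signing expenses).

```python
# Hypothetical payoffs in millions, (club, dempsey); NOT from the case record.
PERFORM = {"club": 1.6, "dempsey": 0.8}   # the contest is held as contracted
BREACH = {"club": -0.3, "dempsey": 1.2}   # Dempsey takes the other match; club loses its reliance expenses

def breach_deterred(damages):
    """Dempsey performs iff his performance payoff is at least his
    breach payoff net of the court-ordered transfer to the club."""
    return PERFORM["dempsey"] >= BREACH["dempsey"] - damages

# Expectation damages make the club whole relative to performance.
expectation = PERFORM["club"] - BREACH["club"]   # 1.9 under these numbers
# Reliance damages reimburse only the club's post-signing expenses.
reliance = 0.3

print(breach_deterred(expectation))  # True: the induced game makes performance Dempsey's best response
print(breach_deterred(reliance))     # False: Dempsey still prefers to breach
```

Under these assumed numbers the comparison mirrors the text's point: if the court is not convinced of the lost profit and assumes the venture would only have broken even, the expectation transfer shrinks to the reliance figure, and the two damage rules induce the same game.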
• Terms describing the relation between nodes: successor. Forgetful driver. • Perfect versus imperfect recall. . The key. do not distribute. where he must turn left or right. 2. The player reaches an intersection. while a left turn will take him to the police checkpoint. with examples of violations. • Tree rules. and in a more technically complete manner than was needed for Part I of the book. simply. When he has to make a decision. predecessor. Abstract examples can be developed on the fly to illustrate the terms and concepts. The extensive form representation is pictured on the next page. Examples and Experiments 1. • Review of the components of the extensive form: nodes. Lecture Notes This material can be covered very quickly in class. Here is an outline for a lecture. The chapter defines some technical terms and states five rules that must be obeyed when designing game trees. labels. he will find a police checkpoint. If he turns left. • Perfect versus imperfect information. immediate successor. This one-player game demonstrates imperfect recall. the player does not recall how many intersections he passed through or what decisions he made previously. he will eventually reach another intersection requiring another right/left decision. If he turns right. and immediate predecessor. • How to describe an infinite action space. decision. Instructors' Manual for Strategy: An Introduction to Game Theory 41 Copyright 2002. The concepts of perfect recall and perfect information are registered. At this one. and payoffs. is to bring the extensive form back to the front of the students’ minds. and terminal nodes. information sets. initial. branches. where he will be delayed for the entire evening. as a transition from normal form analysis to extensive form analysis.14 Details of the Extensive Form This chapter elaborates on Chapter 2’s presentation of the extensive form representation. The player is driving on country roads to a friend’s house at night. 
a right turn will bring him to his friend’s house, while a left turn will take him to the police checkpoint. When he has to make a decision, the player does not recall how many intersections he passed through or what decisions he made previously. The extensive form representation is pictured on the next page.

15 Backward Induction and Subgame Perfection

This chapter begins with an example to show that not all Nash equilibria of a game may be consistent with rationality in real time. The notion of sequential rationality is presented, followed by backward induction (a version of conditional dominance) and then a demonstration of backward induction in an example. The chapter then defines subgame perfect Nash equilibrium as a concept for applying sequential rationality in general games. An algorithm for computing subgame perfect equilibria in finite games is demonstrated with an example. Next comes the result that finite games of perfect information have pure-strategy Nash equilibria (this result is used in Chapter 17 for the analysis of parlor games).

Lecture Notes
An outline for a lecture follows.
• Example of a game featuring a Nash equilibrium with an incredible threat.
• The definition of sequential rationality.
• Backward induction: informal definition and abstract example. Note that backward induction is difficult to extend to games with imperfect information.
• Subgame definition and illustrative example. Definition of proper subgame. Note that the entire game is itself a subgame.
• Definition of subgame perfect Nash equilibrium.
• Example and algorithm for computing subgame perfect equilibria: (a) draw the normal form of the entire game, (b) draw the normal forms of all other (proper) subgames, (c) find the Nash equilibria of the entire game and the Nash equilibria of the proper subgames, and (d) locate the Nash equilibria of the entire game that specify Nash outcomes in all subgames.
• Note that the strategy profile identified is a Nash equilibrium.
• Result: every finite game with perfect information has a (pure strategy) Nash equilibrium.

Examples and Experiments
1. Incredible threats example. It might be useful to discuss, for example, the credibility of the Chicago Bulls of the 1990s threatening to fire Michael Jordan.
2. Grab game. This is a good game to run as a classroom experiment immediately after lecturing on the topic of subgame perfection. In this game, two students take turns on the move. At the beginning of the game, you place one dollar in your hand and offer it to player 1. When on the move, a student can either grab all of the money in your hand or pass. If player 1 grabs the dollar, then the game ends (player 1 gets the dollar and player 2 gets nothing). If player 1 passes, then you add another dollar to your hand and offer the two dollars to player 2. If she grabs the money, then the game ends (she gets $2 and player 1 gets nothing). If player 2 passes, then you add another dollar and return to player 1. This process continues until either one of the players grabs the money or player 2 passes when the pot is $21 (in which case the game ends with both players obtaining nothing). There is a very good chance that the two students who play the game will not behave according to backward induction theory. You can discuss why they behave differently.
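The backward-induction procedure in the outline above can be demonstrated with a small recursive solver. The entry/fight game and its payoffs below are illustrative choices (not taken from the textbook); the point is that the solver selects the sequentially rational profile and rejects the incredible-threat equilibrium.

```python
# Minimal backward-induction solver for finite perfect-information trees.
# A leaf is ("leaf", payoff_tuple); a decision node is (label, player, {action: subtree}).

def solve(tree):
    if tree[0] == "leaf":
        return tree[1], {}
    label, player, actions = tree
    strategy, best = {}, None
    for action, sub in actions.items():
        payoff, substrategy = solve(sub)
        strategy.update(substrategy)
        if best is None or payoff[player] > best[1][player]:
            best = (action, payoff)
    strategy[label] = best[0]
    return best[1], strategy

# Illustrative entry game: the incumbent's threat to fight is incredible.
entry_game = ("entrant?", 0, {
    "out": ("leaf", (0, 2)),
    "in": ("incumbent?", 1, {
        "accommodate": ("leaf", (1, 1)),
        "fight": ("leaf", (-1, -1)),
    }),
})

payoffs, strategy = solve(entry_game)
print(payoffs, strategy)  # (1, 1) with 'in' and 'accommodate'
```

Note that (out, fight) is also a Nash equilibrium of this game, but the solver never selects "fight," which is exactly the incredible-threat point of the lecture.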
16 Topics in Industrial Organization

This chapter presents several models to explore various strategic elements of market interaction. The chapter begins with a model of advertising and firm competition, followed by a model of limit capacity. In both of these models, firms make a technological choice before competing with each other in a Cournot-style (quantity selection) arena. The chapter then develops a simple two-period model of dynamic monopoly, where a firm discriminates between customers by its choice of price over time. The chapter ends with a variation of the dynamic monopoly model in which the firm can effectively commit to a pricing scheme by offering a price guarantee. The models in this chapter demonstrate a useful method of calculating subgame perfect equilibria in games with infinite strategy spaces. When it is known that each of a class of subgames has a unique Nash equilibrium, one can identify the equilibrium and, treating it as the outcome induced by the subgame, work backward to analyze the game tree.

Lecture Notes
Any or all of the models in this chapter can be discussed in class, depending on time constraints and the students’ background and interest. Other equilibrium models, such as the von Stackelberg model, can also be presented or substituted for any in the chapter. With regard to the advertising and limit capacity models (as well as with others, such as the von Stackelberg game), the lecture can proceed as follows.
• Description of the real setting.
• Explanation of how some key strategic elements can be distilled in a game theory model.
• Description of the game.
• Observe that there are an infinite number of proper subgames.
• Note that the proper subgames at the end of the game tree have unique Nash equilibria.
• Calculate the equilibrium of a subgame and write its payoff as a function of the variables selected by the players earlier in the game (the advertising level, the entry and production facility decisions).
• Analyze information sets toward the beginning of the tree, conditional on the payoff specifications just calculated.
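The two-step method in the outline can be sketched numerically: solve the Cournot subgame in closed form, substitute its payoff back, and then optimize the first-stage choice. The linear demand and quadratic advertising cost below are assumptions chosen for illustration, not the textbook's specification.

```python
# Two-stage sketch: firm 1 picks an advertising level x, which shifts the
# demand intercept; then both firms play a Cournot (quantity) subgame.
A, k = 10.0, 0.5   # assumed demand intercept and advertising cost parameter

def subgame_profit(x):
    # Cournot equilibrium with demand p = (A + x) - q1 - q2 and zero
    # marginal cost: each firm produces q = (A + x)/3 and earns q^2.
    q = (A + x) / 3
    return q * q

def firm1_payoff(x):
    # Subgame payoff written as a function of the first-stage variable.
    return subgame_profit(x) - k * x * x

# Backward induction: grid-search the first-stage advertising level.
xs = [i / 100 for i in range(0, 1001)]
x_star = max(xs, key=firm1_payoff)
print(round(x_star, 2))  # ≈ 2.86 with these assumed parameters
```

The analytic first-order condition for this specification gives x* = A/(9k − 1), which the grid search matches; the same substitute-then-optimize pattern applies to the limit capacity and von Stackelberg variants.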
The dynamic monopoly game can be analyzed similarly, except it pays to stress intuition rather than mathematical expressions.

Examples and Experiments
Students would benefit from a discussion of real strategic situations, especially with an eye toward understanding how the strategic tensions are manifested. Also, any of the applications can be used for classroom experiments.

17 Parlor Games

In this chapter, two results stated earlier in the book (from Chapters 12 and 15) are applied to analyze finite, strictly competitive games of perfect information. Many parlor games, including chess, checkers, and tic-tac-toe, fit in this category. A result is stated for games that end with a winner and a loser or a tie. A few examples are briefly discussed.

Lecture Notes
An outline for a lecture follows.
• Describe the class of two-player, finite games of perfect information that are strictly competitive.
• Note that the results in Chapters 12 and 15 apply. Thus, these games have (pure strategy) Nash equilibria, and the equilibrium strategies are security strategies.
• Result: If the game must end with a “winner,” then one of the players has a strategy that guarantees victory, regardless of what the other player does. If the game ends with either a winner or a tie, then either one of the players has a strategy that guarantees victory or both players can guarantee at least a tie.
• Examples: tic-tac-toe, checkers.
• Discuss simpler examples, and examples that have no known solution, such as chess and checkers.

Examples and Experiments
1. Chomp tournament. Chomp is a fun game to play in a tournament format, with the students separated into teams. For the rules of Chomp, see Exercise 5 in Chapter 17. Give them several matrix configurations to play (symmetric and asymmetric) so that, for fairness, the teams can each play the role of player 1 and player 2 in the various configurations. After some thought (after perhaps several days), the students will ascertain a winning strategy for the symmetric version of Chomp. An optimal strategy for the asymmetric version will elude them, as it has eluded the experts. At some point, you can also explain why we know that the first player has a winning strategy, while we do not know the actual winning strategy.
Have the students meet with their team members outside of class to discuss how to play the game. The teams can then play against each other at the end of a few class sessions. You can award bonus points based on the teams’ performance in the tournament.
2. Another tournament or challenge. The students might enjoy, and learn from, playing other parlor games between themselves or with you. An after-class challenge provides a good context for meeting with students in a relaxed environment.
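The claim that the first player can force a win at Chomp, even though the winning strategy is unknown for general asymmetric boards, can be verified by brute force on small boards. This is a sketch for classroom use; the board encoding (poison square at the lower-left corner, a bite removing the upper-right quadrant) follows the usual convention.

```python
# Brute-force check that the first mover wins small Chomp boards.
from functools import lru_cache

POISON = (0, 0)

def bite(position, cell):
    # Removing cell (i, j) also removes every square above and to its right.
    i, j = cell
    return frozenset((x, y) for (x, y) in position if not (x >= i and y >= j))

@lru_cache(maxsize=None)
def first_mover_wins(position):
    if position == frozenset({POISON}):
        return False                      # forced to eat the poison square
    return any(
        not first_mover_wins(bite(position, cell))
        for cell in position if cell != POISON
    )

def board(rows, cols):
    return frozenset((r, c) for r in range(rows) for c in range(cols))

# The strategy-stealing argument says player 1 wins any board larger than
# 1x1; the search confirms this on small boards (without revealing a
# general strategy for asymmetric boards).
for shape in [(2, 2), (2, 3), (3, 3), (3, 4)]:
    assert first_mover_wins(board(*shape))
```

Larger boards quickly become expensive, which itself is a nice talking point about why the existence argument does not hand us the strategy.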
18 Bargaining Problems

This chapter introduces the important topic of negotiation, a component of many economic relationships and theoretical models. The chapter commences by noting how bargaining can be put in terms of value creation and division. Several elements of negotiation—terms of trade, divisible goods—are noted. Transferable utility is assumed. Then the chapter describes an abstract representation of bargaining problems in terms of the payoffs of feasible agreements and the disagreement point. This representation is common in the cooperative game literature, where solution concepts are often expressed as axioms governing joint behavior. Joint value and surplus relative to the disagreement point are defined and illustrated in an example. The standard bargaining solution is defined as the outcome in which the players maximize their joint value and divide the surplus according to fixed bargaining weights. Descriptive and predictive interpretations are discussed.

Lecture Notes
An outline for a lecture follows.
• Examples of bargaining situations.
• Transferable utility: a medium of exchange, such as money, that can be used to divide value.
• Divisible goods.
• Translating a given bargaining problem into feasible payoff vectors (V) and the default payoff vector d, also called the disagreement point. Recall the efficiency definition. Value creation means a higher joint value than with the disagreement point.
• The standard bargaining solution: summarizing the outcome of negotiation in terms of bargaining weights π1 and π2. Assume the players reach an efficient agreement and divide the surplus according to their bargaining weights. Player i’s negotiated payoff is u∗i = di + πi(v∗ − d1 − d2).
• An example in terms of agreement items x and a transfer t, so payoffs are v1(x) + t and v2(x) − t. The standard bargaining solution implies that x = x∗ (achieving the maximized joint value v∗) and t∗ = d1 + π1(v∗ − d1 − d2) − v1(x∗).
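The payoff formula in the last bullets is easy to demonstrate in class with concrete numbers. The values below are illustrative; the function is exactly the standard bargaining solution u∗i = di + πi(v∗ − d1 − d2).

```python
# The standard bargaining solution: each player gets the disagreement
# payoff plus a fixed share of the surplus.

def standard_bargaining_solution(v_star, d1, d2, pi1):
    surplus = v_star - d1 - d2
    u1 = d1 + pi1 * surplus
    u2 = d2 + (1 - pi1) * surplus
    return u1, u2

# Illustrative numbers: maximized joint value 10, disagreement payoffs
# 2 and 3, equal bargaining weights.
u1, u2 = standard_bargaining_solution(10, 2, 3, 0.5)
print(u1, u2)  # 4.5 5.5
```

A useful in-class check: the two payoffs always sum to v∗, and raising π1 shifts surplus (never disagreement value) from player 2 to player 1.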
Examples and Experiments
1. Negotiation experiments. It can be instructive—especially before lecturing on negotiation problems—to present a real negotiation problem to two or more students. Give them a set of alternatives (such as transferring money, getting money or other objects from you, and so on). It is most useful if the alternatives are multidimensional, with each dimension affecting the two players differently (so that the students face an interesting “enlarge and divide the pie” problem). For example, one alternative might be that you will take student 1 to lunch at the faculty club, whereas another might be that you will give one of them (their choice) a new jazz compact disc. You can have the students negotiate outside of class in a completely unstructured way (although it may be useful to ask the students to keep track of how they reached a decision). The outcome only takes effect (enforced by you) if the students sign a contract. Have the students report in class on their negotiation and final agreement.
2. Anonymous ultimatum bargaining experiment. Let half of the students be the offerers and the other half responders. Each should write a strategy on a slip of paper. For the offerers, this is an amount to offer the other player. For a responder, this may be an amount below which she wishes to reject the offer (or it could be a range of offers to be accepted). Once all of the slips have been collected, you can randomly match an offerer and responder. It may be interesting to do this twice, with the roles reversed for the second run, and to try the non-anonymous version with two students selected in advance (in which case their payoffs will probably differ from those of the standard ultimatum formulation). Discuss why (or why not) the students’ behavior departs from the subgame perfect equilibrium. This provides a good introduction to the theory covered in Chapter 19.
19 Analysis of Simple Bargaining Games

This chapter presents the analysis of alternating-offer bargaining games and shows how the bargaining weights discussed in Chapter 18 are related to the order of moves and discounting. The ultimatum game is reviewed first, followed by a two-period game and then the infinite-period alternating-offer game. The analysis features subgame perfect equilibrium and includes the motivation for, and definition of, discount factors. At the end of the chapter is an example of multilateral bargaining in the legislative context.

Lecture Notes
A lecture can proceed as follows.
• Description of the ultimatum-offer bargaining game, between players i and j (to facilitate analysis of larger games later). Player i offers a share between 0 and 1; player j accepts or rejects.
• Determination of the two sequentially rational strategies for player j (the responder), which give equilibrium play in the proper subgames: (*) accept all offers, and (**) accept if and only if the offer is strictly greater than 0.
• Strategy (**) cannot be part of an equilibrium in the ultimatum game.
• The unique subgame perfect equilibrium specifies strategy (*) for player j and the offer of 0 by player i. Note that this observation will be used in larger games later.
• Discounting over periods of time. Representing time preferences by a discount factor δi. Examples.
• Description of the two-period, alternating-offer game with discounting. Determining the subgame perfect equilibrium using backward induction and the equilibrium of the ultimatum game. Bargaining weight interpretation of the outcome.
• Description of the infinite-period, alternating-offer game with discounting. Sketch of the analysis: the subgame perfect equilibrium is stationary; mi is player i’s equilibrium payoff in subgames where he makes the first offer.
• Bargaining weight interpretation of the equilibrium outcome of the infinite-period game. Convergence as discount factors approach one.
• Brief description of issues in multilateral negotiation.
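The backward-induction chain from the ultimatum game up through many periods can be computed directly, which makes a nice classroom demonstration of how the first proposer's share converges. This is a sketch of the standard alternating-offer logic; the particular discount factors are illustrative.

```python
# T-period alternating-offer game by backward induction.
# In the final period the proposer keeps the whole pie (ultimatum logic);
# in earlier periods the proposer offers the responder exactly the
# discounted value of what the responder would get as next period's proposer.

def proposer_share(T, d1, d2):
    share = 1.0                     # last-period proposer takes everything
    for t in range(T - 1, 0, -1):
        # Player 1 proposes in odd periods, so the responder at an odd
        # period is player 2 (discount factor d2), and vice versa.
        responder_delta = d2 if t % 2 == 1 else d1
        share = 1.0 - responder_delta * share
    return share                    # period-1 proposer's (player 1's) share

d1, d2 = 0.9, 0.8                   # illustrative discount factors
approx = proposer_share(200, d1, d2)
exact = (1 - d2) / (1 - d1 * d2)    # infinite-period stationary split
print(round(approx, 6), round(exact, 6))
```

With many periods the computed share matches the stationary infinite-period split, and pushing both discount factors toward one shows the convergence claim in the last bullets.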
Examples and Experiments
For a transition from the analysis of simple bargaining games to modeling joint decisions, you might run a classroom experiment in which the players negotiate a contract that governs how they will play an underlying game in class. This combines the negotiation experiment described in the material for Chapter 18 with the contract experiment in the material for Chapter 13.
20 Games with Joint Decisions; Negotiation Equilibrium

This chapter introduces, and shows how to analyze, games with joint decisions. A joint decision node is a distilled model of negotiation between the players; it takes the place of a noncooperative model of bargaining, which can be complicated. In many strategic settings, negotiation is just one of the key components. Contractual relationships often have this flavor: there are times when the parties negotiate to form a contract and there are times in which the parties work on their own (either complying with their contract or failing to do so). Games with joint decision nodes can be used to study complicated strategic settings that have a negotiation component, while concentrating on other strategic elements, where a full noncooperative model would be unwieldy. Behavior at joint decision nodes is characterized by the standard bargaining solution. Thus, in technical terms, a game with joint decision nodes is a hybrid representation, with cooperative and noncooperative components. The chapter explains the benefits of composing games with joint decisions and, with Tree Rule 6, demarcates the proper use of this representation. The term “regime” generalizes the concept of strategy to games with joint decisions. The concept of a negotiation equilibrium combines sequential rationality at individual decision nodes with the standard bargaining solution at joint decision nodes. The chapter illustrates the ideas with an example of an incentive contract, utilizing a joint decision node.

Lecture Notes
Here is an outline for a lecture.
• Noncooperative models of negotiation can be complicated. It would be nice to build models in which the negotiation component were characterized by the standard bargaining solution. Then we could examine how bargaining power and disagreement points influence the outcome.
• Definition of a game with joint decisions—distill a negotiation component into a joint decision (a little model of bargaining).
• Example: the extensive form version of a bargaining problem. Recall the pictures and notation from Chapter 18.
• Always include a default decision to describe what happens if the players do not reach an agreement.
• Labeling the tree. Tree Rule 6.
• Definition of regime: a specification of behavior at both individual and joint decision nodes.
• Negotiation equilibrium: sequential rationality at individual decision nodes, the standard bargaining solution at joint decision nodes.
• Calculating the negotiation equilibrium by backward induction.
• Example of a contracting problem, modeling by a game with a joint decision. First determine the effort incentive, given a contract. Then, using the standard bargaining solution, determine the optimal contract and how the surplus will be divided.

Examples and Experiments
1. Ocean liner shipping-contract example. A producer who wishes to ship a moderate shipment of output (say three or four full containers) overseas has a choice of three ways of shipping the product. He can contract directly with the shipper, he can contract with an independent shipping contractor (who has a contract with a shipper), or he can use a trade association that has a contract with a shipper. Shipping the product is worth 100 to the producer. The shipper’s cost of the shipment is 20, and the shipping contractor’s cost is 30. The producer himself must negotiate if he chooses either of the first two alternatives, but in the third the trade association has a non-negotiable fee of 45. Suppose that the producer only has time to negotiate with one of the parties because his product is perishable, but in the event of no agreement he can use the trade association. Calculate and describe the negotiation equilibrium.
2. Agency incentive contracting. You can run a classroom experiment where three students interact as follows. Students 1 and 2 have to play a matrix game. Student 1 is allowed to contract with student 3 so that student 3 can play the matrix game in student 1’s place (as student 1’s agent). The contract between students 1 and 3 (which you enforce) can specify transfers between them as a function of the matrix game outcome. You can arrange the experiment so that the identities of students 1 and 3 are not known to student 2 (by, say, allowing many pairs of students to write contracts and then selecting a pair randomly and anonymously, and by paying them privately). Specify a game that has a single rationalizable (dominance-solvable) strategy s but has another outcome t that is strictly preferred by player 1 [u1(t) > u1(s)] and has the property that t2 is the unique best response to t1. To make the tensions pronounced, make s an efficient outcome. After the experiment, discuss why you might expect t, rather than s, to be played in the matrix game. This is related to externality.
21 Unverifiable Investment, Hold Up, Options, and Ownership

This chapter applies the concept of joint decisions and negotiation equilibrium to illustrate the hold up problem. An example is developed in which one of the players must choose whether to invest prior to production taking place. Three variations are studied, starting with the case in which a party must choose his/her investment level before contracting with the other party (so here hold up creates a serious problem). In the second version, parties can contract up front; here, option contracts are shown to provide optimal incentives. The chapter also comments on how asset ownership can help alleviate the hold up problem.

Lecture Notes
An outline for a lecture follows.
• Description of tensions between individual and joint interests because of the timing of unverifiable investments and contracting.
• Hold up example: unverifiable investment followed by negotiation over the returns, where agreement is required to realize the returns.
• Calculate the negotiation equilibrium by backward induction. Find the outcome and payoffs from the joint decision node, using the standard bargaining solution. Then determine the rational investment choice.
• Note the incentive to underinvest, relative to the efficient amount.
• Consider up-front contracting and option contracts. Describe how option contracts work and are enforced.
• Show that a particular option contract leads to the efficient outcome.
• Extension of the model in which the value of the investment is tied to an asset, which has a value in the relationship and another value outside of the relationship.
• Ownership of the asset affects the disagreement point (through the outside value) and thus affects the outcome of negotiation.
• Find the negotiation equilibrium for the various ownership specifications.
• Investor ownership is preferred if the value of the asset in its outside use rises with the investment. This may not be true in general; if the outside asset value rises too quickly with the investment, then the investor may have the incentive to overinvest.
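The underinvestment result is easy to show numerically. The functional forms below (investment cost x, joint value 2·sqrt(x) realized only upon agreement, equal bargaining weights, disagreement payoffs of zero) are assumptions for illustration, not the textbook's example.

```python
# Hedged hold-up sketch: invest first, then split the returns by the
# standard bargaining solution.
import math

def negotiated_payoff(x):
    # With d1 = d2 = 0 and pi1 = 1/2, the investor receives half of the
    # joint value but bears the full (sunk) investment cost.
    return 0.5 * (2 * math.sqrt(x)) - x

def joint_value(x):
    # What the investment is worth to the pair as a whole.
    return 2 * math.sqrt(x) - x

grid = [i / 1000 for i in range(0, 2001)]
x_equilibrium = max(grid, key=negotiated_payoff)
x_efficient = max(grid, key=joint_value)
print(x_equilibrium, x_efficient)  # 0.25 1.0: underinvestment
```

Because the investor captures only half of the marginal return but pays the full marginal cost, the chosen level falls short of the efficient one, which is the hold-up tension named in the bullets above.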
Examples and Experiments
You can discuss real examples of hold up, such as those having to do with specific human capital investment, physical plant location, and unverifiable investments in long-term procurement contracting. You can also present the analysis of, or run an experiment based on, a game like that of the Guided Exercise in this chapter.
22 Repeated Games and Reputation

This chapter opens with comments about the importance of reputation in ongoing relationships. The concept of a repeated game is defined, and a two-period repeated game is analyzed in detail. The two-period game demonstrates that any sequence of stage Nash profiles can be supported as a subgame perfect equilibrium outcome (a result that is stated for general repeated games). In the following section, a more complicated, asymmetric equilibrium is constructed to demonstrate that different forms of cooperation, favoring one or the other player, can also be supported. The example also shows how a non-stage Nash profile can be played in equilibrium if subsequent play is conditioned so that players would be punished for deviating. The chapter then turns to the analysis of infinitely repeated games, beginning with a review of discounting. The presentation includes derivation of the standard conditions under which cooperation can be sustained in the infinitely repeated prisoners’ dilemma. A Nash-punishment folk theorem is stated at the end of the chapter.

Lecture Notes
A lecture may be organized according to the following outline.
• Intuition: reputation and ongoing relationships. Examples: partnerships, collusion, etc.
• Key idea: behavior is conditioned on the history of the relationship, so that misdeeds are punished.
• Definition of a repeated game. Stage game {A, u} (call Ai actions), played T times with observed actions.
• Example of a two-period (non-discounted) repeated game. Note how many subgames there are. Note what each player’s strategy specifies.
• The proper subgames have the same strategic features, since the payoff matrices for these are equal, up to a constant. Thus, the equilibria of the subgames are the same as those of the stage game.
• Characterization of subgame perfect equilibria featuring only stage Nash profiles (action profiles that are equilibria of the stage game).
• A reputation equilibrium where a non-stage Nash action profile is played in the first period. Note the payoff vector.
• Diagram of the feasible repeated game payoffs and feasible stage game payoffs.
• Review of discounting.
• The infinitely repeated prisoners’ dilemma game.
• Trigger strategies. Grim trigger.
• Conditions under which the grim trigger is a subgame perfect equilibrium.
• Example of another “cooperative” equilibrium.
• The folk theorem, which you may want to discuss near the end of a lecture on reputation and repeated games.

Examples and Experiments
1. The Princess Bride reputation example. At the beginning of your lecture on reputation, you can play the scene from The Princess Bride in which Wesley is reunited with the princess. Just before he reveals his identity to her, he makes interesting comments about how a pirate maintains his reputation.
2. Two-period example. It is probably best to start a lecture with the simplest possible example, such as the one with a 3 × 2 stage game that is presented at the beginning of this chapter. You can also run a classroom experiment based on such a game. Have the students communicate in advance (either in pairs or as a group) to agree on how they will play the game. That is, have the students make a self-enforced contract. This will hopefully get them thinking about history-dependent strategies. Plus, it will reinforce the interpretation of equilibrium as a self-enforced contract.
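The grim-trigger condition in the outline can be checked with a few lines of arithmetic, comparing the value of cooperating forever against a one-shot deviation followed by permanent punishment. The stage payoffs below (mutual cooperation 2, mutual defection 1, temptation 3) are an assumed prisoners' dilemma, not the book's particular matrix.

```python
# Grim trigger in an infinitely repeated prisoners' dilemma (assumed payoffs).
R, P, T = 2, 1, 3   # reward, punishment, temptation

def cooperation_sustainable(delta):
    # Cooperate forever versus deviate once and face the stage Nash
    # punishment in every subsequent period.
    cooperate_value = R / (1 - delta)
    deviate_value = T + delta * P / (1 - delta)
    return cooperate_value >= deviate_value

# Rearranging gives the familiar cutoff delta >= (T - R) / (T - P).
threshold = (T - R) / (T - P)
print(threshold)  # 0.5
assert not cooperation_sustainable(0.4)
assert cooperation_sustainable(0.6)
```

Students can vary the temptation payoff to see the cutoff move, which previews the collusion calculation in the next chapter.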
23 Collusion, Trade Agreements, and Goodwill

This chapter presents three applications of repeated game theory: collusion between firms over time, the enforcement of international trade agreements, and goodwill. The first application involves a straightforward calculation of whether collusion can be sustained using grim trigger strategies in a repeated Cournot model. This example reinforces the basic analytical exercise from Chapter 22. The section on international trade is a short verbal discussion of how reputation functions as the mechanism for self-enforcement of a long-term contract. On goodwill, a two-period game with a sequence of players 2 (one in the first period and another in the second period) is analyzed. The first player 2 can, by cooperating in the first period, establish a valuable reputation that he can then sell to the second player 2.

Lecture Notes
Any or all of the applications can be discussed in class, depending on time constraints and the students’ background and interest. Other applications can also be presented, in addition to these or substituting for these. For each application, it may be helpful to organize the lecture as follows.
• Description of the real-world setting.
• Explanation of how some key strategic elements can be distilled in a game theory model.
• (If applicable) Description of the game to be analyzed.
• Determination of conditions under which an interesting (cooperative) equilibrium exists.
• Discussion of intuition.
• Notes on how the model could be extended.

Examples and Experiments
1. The Princess Bride second reputation example. Before lecturing on goodwill, you can play the scene from The Princess Bride where Wesley and Buttercup are in the fire swamp. While in the swamp, Wesley explains how a reputation can be associated with a name, even if the name changes hands over time.
2. Repeated Cournot oligopoly experiment. Let three students interact in a repeated Cournot oligopoly. This may be set as an oil (or some other commodity) production game. It may be useful to have the game end probabilistically. The interaction can be done in two scenarios. In the first, players are allowed to communicate and each player’s output is announced at the end of each round. In the second scenario, players may not communicate, and only the total output is announced at the end of each round. This may be easy to do if it is done by e-mail, but may require a set time frame if done in class.
3. Goodwill in an infinitely repeated game. If you want to be ambitious, you can present a model of an infinitely repeated game with a sequence of players 2 who buy and sell the “player 2 reputation” between periods. This can follow the Princess Bride scene and be based on Exercise 4 of this chapter (which, depending on your students’ backgrounds, may be too difficult for them to do on their own).
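For the collusion application, the grim-trigger cutoff can be computed in closed form and checked numerically. The specification below, linear demand p = a − Q with zero costs and symmetric firms, is the standard textbook-style setup, stated here as an assumption.

```python
# Minimum discount factor sustaining collusion with grim trigger strategies
# in a repeated Cournot oligopoly (assumed linear demand, zero costs).

def critical_delta(n, a=1.0):
    collusive = a * a / (4 * n)                      # share of monopoly profit
    deviation = a * a * (n + 1) ** 2 / (16 * n * n)  # best response to collusion
    cournot = a * a / ((n + 1) ** 2)                 # stage Nash (punishment) profit
    return (deviation - collusive) / (deviation - cournot)

print(critical_delta(2))  # 9/17, about 0.529, for the duopoly case
# Collusion becomes harder to sustain as the number of firms grows:
assert critical_delta(3) > critical_delta(2)
```

The duopoly value 9/17 makes a concrete target for students to derive by hand, and the monotonicity in n supports the intuition that larger cartels are more fragile.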
24 Random Events and Incomplete Information

This chapter explains how to incorporate exogenous random events in the specification of a game. Moves of Nature (also called the nonstrategic “player 0”) are made at chance nodes according to a fixed probability distribution. If a player privately observes some aspect of Nature’s choices, then the game is said to be of incomplete information. As an illustration, the gift game is depicted in the extensive form and then converted into the Bayesian normal form (where payoffs are the expected values over Nature’s moves). Another abstract example follows.

Lecture Notes
A lecture may be organized according to the following outline.
• Discussion of settings in which players have private information about strategic aspects beyond their physical actions. Private information about preferences: auctions, negotiation, etc. (For example, the buyer knows his own valuation of the good, which the seller does not observe.)
• Other examples.
• Modeling such a setting using moves of Nature that players privately observe.
• The notion of a type, referring to the information that a player privately observes.
• Many real settings might be described in terms of players already knowing their own types. However, because of incomplete information, one type of player will have to consider how he would have behaved were he a different type (because the other players consider this).
• Extensive form representation of the example. Nature moves at chance nodes. Nature’s probability distribution is noted in the tree.
• Bayesian normal form representation of the example. Note that payoff vectors are averaged with respect to Nature’s fixed probability distribution.
Examples and Experiments

1. Ultimatum-offer bargaining with incomplete information. You might present, or run as a classroom experiment, an ultimatum bargaining game in which the responder's value of the good being traded is private information (say, $5 with probability 1/2 and $8 with probability 1/2). For an experiment, describe the good as a soon-expiring check made out to player 2, but seal the check in an envelope before giving it to player 1 (who bargains over the terms of trading it to player 2). You show player 2 the amount of the check. (See also Exercise 5 in Chapter 25 for a simpler, but still challenging, version.)

2. Signaling games. It may be worthwhile to describe a signaling game that you plan to analyze later in class.

3. The Let's Make a Deal game revisited. You can illustrate incomplete information by describing a variation of the Let's Make a Deal game that is described in the material for Chapter 2. In the incomplete-information version, Nature picks with equal probabilities the door behind which the prize is concealed, and Monty randomizes equally between alternatives when he has to open one of the doors.

4. Three-card poker. This game also makes a good example (see Exercise 4 in Chapter 24 of the textbook).

5. The Price is Right. The bidding game from this popular television game show forms the basis for a good bonus question. In the game, four contestants must guess the price of an item by picking a number between 1 and 1,000. Suppose none of them knows the price of the item initially, but they all know that the price is an integer between 1 and 1,000. In fact, when they have to make their guesses, the contestants all believe that the price is equally likely to be any number between 1 and 1,000. That is, the price will be 1 with probability 1/1000, the price will be 2 with probability 1/1000, and so on. The players make their guesses sequentially. First, player 1 declares his/her guess of the price. The other players observe player 1's choice, and then player 2 makes her guess. Player 3 next chooses a number, followed by player 4. When a player selects a number, he/she is not allowed to pick a number that one of the other players has already selected. After the players make their guesses, the actual price is revealed. Then the player whose guess is closest to the actual price without going over wins $100; the other players get 0. For example, if player 1 chose 150, player 2 chose 300, player 3 selected 410, and player 4 chose 490, and if the actual price were 480, then player 3 wins $100 and the others get nothing. This game is not exactly the one played on The Price is Right, but it is close. The bonus question is: Assuming that a subgame perfect equilibrium is played, what is player 1's guess? How would the answer change if, instead of the winner getting $100, the winner gets the value of the item (that is, the actual price)?

25 Risk and Incentives in Contracting

This chapter presents the analysis of the classic principal-agent problem under moral hazard. At the beginning of the chapter, the reader will find a thorough presentation of how payoff numbers represent preferences over risk. The Arrow-Pratt measure of relative risk aversion is defined, and an example helps explain the notions of risk aversion and risk premia. Then a streamlined principal-agent model is developed and fully analyzed. There is a move of Nature (a random productive outcome); because Nature moves last, the game has complete information. Thus, it can be analyzed using subgame perfect equilibrium. This is why the principal-agent model is the first, and most straightforward, application covered in Part IV of the book.

Lecture Notes

Analysis of the principal-agent problem is fairly complicated, so instructors will not likely want to develop in class a more general and complicated model than the one in the textbook. A lecture based on the textbook's model can proceed as follows.
• Observe the difference between an expected monetary award and expected utility (payoff).
• Note that people usually are risk averse in the sense that they prefer the expected value of a lottery over the lottery itself.
• Risk preferences and the shape of the utility function on money. Concavity, linearity, etc.
• Arrow-Pratt measure of relative risk aversion.
• Example of a lottery experiment/questionnaire that is designed to determine the risk preferences of an individual.
• Intuition: contracting for effort incentives under risk.
• The principal-agent model. Representing the example as a simple game with Nature. Risk-neutral principal, where the agent is risk-averse.
• Incentive compatibility and participation constraints. They both will bind at the principal's optimal contract offer.
• Calculation of the equilibrium.
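The difference between an expected monetary award and expected utility, and the resulting risk premium, can be made concrete numerically. The square-root utility function and the lottery values below are illustrative assumptions, not numbers from the textbook.

```python
import math

# A risk-averse agent with concave utility u(x) = sqrt(x) faces a lottery
# paying 0 or 100 with equal probability. Utility function and lottery
# values are illustrative assumptions.

def u(x):
    return math.sqrt(x)

outcomes = [(0.5, 0.0), (0.5, 100.0)]

expected_value = sum(p * x for p, x in outcomes)        # 50.0
expected_utility = sum(p * u(x) for p, x in outcomes)   # 5.0

# Certainty equivalent: the sure amount giving the same utility.
certainty_equivalent = expected_utility ** 2            # 25.0

# Risk premium: what the agent would give up to avoid the risk.
risk_premium = expected_value - certainty_equivalent    # 25.0

print(expected_value, certainty_equivalent, risk_premium)
```

Since the certainty equivalent (25) is below the expected value of the lottery (50), this agent strictly prefers the expected value to the lottery itself, which is exactly the sense of risk aversion used in the lecture outline.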
• Note how the contract and the agent's behavior depend on the agent's risk preferences. The relation between the agent's risk attitude and the optimal bonus contract is determined.
• Discussion of real implications.

Examples and Experiments

You can illustrate risk aversion by offering choices over real lotteries to the students in class. Discuss risk aversion and risk premia.

26 Bayesian Nash Equilibrium and Rationalizability

This chapter shows how to analyze Bayesian normal form games using rationalizability and equilibrium theory. Two methods are presented. The first method is simply to apply the standard definitions of rationalizability and Nash equilibrium to Bayesian normal forms. The second method is to apply the concepts by treating different types of a player as separate players; it is shown to be useful when there are continuous strategy spaces, as illustrated using the Cournot duopoly with incomplete information. The two methods are equivalent whenever all types are realized with positive probability (an innocuous assumption for static settings).

Lecture Notes

A lecture may be organized according to the following outline.
• Examples of performing standard rationalizability and equilibrium analysis on Bayesian normal form games. Computations for some finite games exemplify the first method.
• Another method that is useful for more complicated games (such as those with continuous strategy spaces): treat different types as different players. One can use this method without having to calculate expected payoffs over Nature's moves for all players.
• Example of the second method: Cournot duopoly with incomplete information or a different game.

Examples and Experiments

You can run a common- or private-value auction experiment or a lemons experiment in class as a transition to the material in Chapter 27. You might also consider simple examples to illustrate the method of calculating best responses for individual player-types.
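The second method — treating each type as a separate player — can be sketched numerically for a Cournot duopoly in which firm 2's cost is private information. All parameter values below (demand intercept, costs, type probability) are illustrative assumptions, not the textbook's.

```python
# "Treat different types as different players": with inverse demand
# P = a - q1 - q2, firm 1's cost c1 is common knowledge, while firm 2's
# cost is cL or cH with equal probability. Each type of firm 2
# best-responds to q1; firm 1 best-responds to firm 2's *expected*
# output. Parameter values are illustrative assumptions.

a, c1, cL, cH, p = 12.0, 3.0, 2.0, 4.0, 0.5  # p = Pr(low cost)

def solve(iterations=200):
    q1 = qL = qH = 0.0
    for _ in range(iterations):  # iterate the three best responses
        qL = (a - cL - q1) / 2            # low-cost type of firm 2
        qH = (a - cH - q1) / 2            # high-cost type of firm 2
        q1 = (a - c1 - (p * qL + (1 - p) * qH)) / 2
    return q1, qL, qH

print(solve())   # converges to approximately (3.0, 3.5, 2.5)
```

The fixed point of these three best response functions is the Bayesian Nash equilibrium; note that no expected payoff over Nature's move had to be computed for firm 2's types.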
27 Lemons, Auctions, and Information Aggregation

This chapter focuses on three important settings of incomplete information: price-taking market interaction, auctions, and information aggregation through voting. These settings are studied using static models, and the games are analyzed using the techniques discussed in the preceding chapter. The "markets and lemons" game demonstrates Akerlof's major contribution to information economics. Regarding auctions, the chapter presents the analysis of both first-price and second-price formats; in the process, weak dominance is defined and the revenue equivalence result is mentioned. The example of voting and information aggregation gives a hint of standard mechanism-design/social-choice analysis and illustrates Bayes' rule.

Lecture Notes

Any or all of these applications can be discussed in class, depending on time constraints and the students' background and interest. The lemons model is quite simple; in fact, a lemons model that is more general than the one in the textbook can easily be covered in class. The auction analysis, on the other hand, is more complicated. The major sticking points are (a) explaining the method of assuming a parameterized form of the equilibrium strategies and then calculating best responses to verify the form and determine the parameter, (b) the calculus required to calculate best responses, and (c) double integration to establish revenue equivalence. One can skip (c) with no problem. However, the simplified auction models are not beyond the reach of most advanced undergraduates. The information aggregation example requires students to work through Bayes' rule calculations.

For each application, it may be helpful to organize the lecture as follows.
• Description of the real-world setting.
• Explanation of how some key strategic elements can be distilled in a game theory model.
• Description of the game to be analyzed, in the Bayesian normal form.
• Calculations of best responses and equilibrium. Note whether the equilibrium is unique.
• Discussion of intuition.
• Notes on how the model could be extended.

Examples and Experiments

1. Stock trade and auction experiments. You can run an experiment in which randomly selected students play a trading game like that of Exercise 8 in this chapter. Have the students specify on paper the set of prices at which they are willing to trade, or allow them to declare whether they will trade at a prespecified price. You can also organize the interaction as a common-value auction, or run any other type of auction in class. You can discuss the importance of expected payoffs contingent on winning or trading.

2. Lemons experiment. Let one student be the seller of a car and another be the potential buyer. Prepare some cards with values written on them. Show the cards to both of the students and then, after shuffling the cards, draw one at random and give it to student 1 (so that student 1 sees the value but student 2 does not). Let the students engage in unstructured negotiation over the terms of trading the card from student 1 to student 2. Tell them that whoever has the card in the end will get paid: if student 1 has the card, then he gets the amount plus a constant ($2 perhaps); if student 2 has the card, then she gets the amount written on it.

28 Perfect Bayesian Equilibrium

This chapter develops the concept of perfect Bayesian equilibrium for analyzing behavior in dynamic games of incomplete information. The gift game is utilized throughout the chapter to illustrate the key ideas. First, the example is used to demonstrate that subgame perfection does not adequately represent sequential rationality. Then comes the notion of conditional belief, which is presented as the belief of a player at an information set where he has observed the action, but not the type, of another player. The chapter then covers the notion of consistent beliefs and Bayes' rule. Finally, perfect Bayesian equilibrium is defined and put to work on the gift game.
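The consistency requirement — beliefs derived from strategies via Bayes' rule wherever possible — can be sketched in a few lines. The prior and the gift probabilities of the two types below are assumed for illustration; they are not the textbook's numbers.

```python
# Consistent beliefs in a signaling game: at player 2's information set
# (a gift was observed), the belief that player 1 is the Friend type
# comes from Bayes' rule. Prior and strategies are assumed values.

prior_F = 1/3                 # Pr(player 1 is type F)
give = {"F": 1.0, "E": 0.5}   # probability each type gives a gift

def belief_F_given_gift():
    pF = prior_F * give["F"]
    pE = (1 - prior_F) * give["E"]
    if pF + pE == 0:
        return None  # zero-probability information set: belief unconstrained
    return pF / (pF + pE)

print(belief_F_given_gift())  # -> 0.5
```

When both types give with positive probability, the posterior is pinned down; when neither type gives, the information set has probability zero and the conditional belief is unconstrained, exactly as in the chapter.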
Sequential rationality is defined as action choices that are optimal in response to the conditional beliefs (for each information set).

Lecture Notes

A lecture may be organized according to the following outline.
• Example to show that subgame perfection does not adequately capture sequential rationality. (A simple signaling game will do.)
• Sequential rationality requires evaluating behavior at every information set.
• Conditional belief at an information set (regardless of whether players originally thought the information set would be reached). Initial belief about types; updated (posterior) belief.
• Consistency: updating should be consistent with strategies and the basic definition of conditional probability. Bayes' rule. Note that conditional beliefs are unconstrained at zero-probability information sets.
• Sequential rationality: optimal actions given beliefs (like best response, but with actions at a particular information set rather than full strategies).
• Perfect Bayesian equilibrium: strategies and beliefs at all information sets, such that (1) each player's strategy prescribes optimal actions at all of his information sets, given his beliefs and the strategies of the other players, and (2) the beliefs are consistent with Bayes' rule wherever possible.
• Definition of pooling and separating equilibria.
• Algorithm for finding perfect Bayesian equilibria in a signaling game: (a) posit a strategy for player 1 (either pooling or separating), (b) calculate restrictions on conditional beliefs, (c) calculate optimal actions for player 2 given his beliefs, and (d) check whether player 1's strategy is a best response to player 2's strategy.
• Calculations for the example.

Examples and Experiments

1. The Princess Bride signaling example. A scene near the end of The Princess Bride movie is a good example of a signaling game. The scene begins with Wesley lying in a bed. The prince enters the room; he does not know whether Wesley is strong or weak. Wesley can choose whether or not to stand. Finally, the prince decides whether to fight or surrender. Exercise 6 in this chapter sketches one model of this strategic setting. This game can be diagrammed and discussed in class. After specifying payoffs, you can calculate the perfect Bayesian equilibria and discuss whether the result accurately describes events in the movie.

2. Signaling game experiment. It may be instructive to play in class a signaling game in which one of the player-types has a dominated strategy. The variant of the gift game discussed at the beginning of Chapter 28 is such a game.

3. Conditional probability demonstration. Students can be given cards with different colors written on them, say "red" and "blue." The colors should be given in different proportions to males and females (for example, males could be given proportionately more cards saying red and females could be given proportionately more cards saying blue). Then a male and a female student could be selected, and a student could be asked to guess who has, for example, the red card. Alternatively, a student could be asked to guess the color of another student's card, with the color revealed following the guess. This could be done several times.

29 Job-Market Signaling and Reputation

This chapter presents two applications of perfect Bayesian equilibrium: job-market signaling and reputation with incomplete information. The signaling model demonstrates Michael Spence's major contribution to information economics. The reputation model illustrates how incomplete information causes a player of one type to pretend to be another type, which has interesting implications. This offers a glimpse of the reputation literature initiated by David Kreps, Paul Milgrom, John Roberts, and Robert Wilson.

Lecture Notes

Either or both of these applications can be discussed in class, depending on time constraints and the students' background and interest. The extensive form tree of the job-market signaling model is in the standard signaling-game format, so this model can be easily presented in class. The reputation model may be slightly more difficult to present, however, because its extensive form representation is a bit different and the analysis does not follow the algorithm outlined in Chapter 28. For each application, it may be helpful to organize the lecture as follows.
• Description of the real-world setting.
• Explanation of how some key strategic elements can be distilled in a game theory model.
• Description of the game to be analyzed.
• Calculating the perfect Bayesian equilibria (using the circular algorithm from Chapter 28, if appropriate).
• Discussion of intuition.
• Notes on how the model could be extended.

Examples and Experiments

In addition to, or in place of, the applications presented in this chapter, you might lecture on the problem of contracting with adverse selection, where the principal offers a menu of contracts to screen between two types of the agent. This is a principal-agent game. You can briefly discuss the program of mechanism design theory as well. Exercise 9 of Chapter 29 would be suitable as the basis for such a lecture.

30 Appendices

Appendix A offers an informal review of the following relevant mathematical topics: sets, functions, basic differentiation, and probability theory. Your students can consult this appendix to brush up on the mathematics skills that are required for game-theoretic analysis. As noted at the beginning of this manual, calculus is used sparingly in the textbook and it can be avoided. If you wish to cover the applications/examples to which the textbook applies differentiation, and if calculus is not a prerequisite for your course, you can simply teach (or have the students read on their own) the short section entitled "Functions and Calculus" in Appendix A. Where calculus is utilized, it usually amounts to a simple exercise in differentiating a second-degree polynomial.

Appendix B gives some of the details of the rationalizability construction, including the relation between dominance and best response. If you want the students to see some of the technical details behind the difference between correlated and uncorrelated conjectures, or the rationalizability construction, you can advise them to read Appendix B just after reading Chapters 6 and 7. Three challenging mathematical exercises appear at the end of Appendix B.

Part III: Solutions to the Exercises

This part contains solutions to all of the exercises in the textbook. Although we worked diligently on these solutions, there are bound to be a few typos here and there. Please report any instances where you think you have found a substantial error.

2 The Extensive Form

1. (a) (b) (Extensive-form diagrams omitted from this text version.)
2. Incomplete information: the worker does not know who has hired him/her. (Diagram omitted.)
3. (Diagram omitted.) The order does not matter, as it is a simultaneous-move game.
4. (Diagram omitted.) Note that we have not specified payoffs, as these are left to the students.
5. (Diagram omitted.) The payoffs below are in the order A, B, C.
6. (Diagram omitted.)
7. (Diagram omitted.)
8. (Diagram omitted.)

3 Strategies and the Normal Form

1. Exercise 1: SL = {A, B}, SM = {Rr, Rg, Gr, Gg}, SJ = {Aa, Ab, Ba, Bb}. Exercise 4: Si = {R, P, S}, i = 1, 2.
2. No, "not hire" does not describe a strategy for the manager. A strategy for the manager must specify an action to be taken in every contingency. However, "not hire" does not specify any action contingent upon the worker being hired and exerting a specific level of effort.
3. (Normal-form matrices omitted.)
4. (a)–(f) (Normal-form matrices omitted.)
5. Player 2 has 4 strategies: {(c, f), (c, g), (d, f), (d, g)}.
6. The normal form specifies the set of players, the strategy spaces, and the payoff functions. Here N = {1, 2} and Si = [0, ∞), and the payoff to player i is given by ui(qi, qj) = (2 − qi − qj)qi.
7. Player 2's strategy must specify a choice of quantity for each possible quantity that player 1 can choose. Thus, N = {1, 2}, S1 = [0, ∞), and player 2's strategy space S2 is the set of functions from [0, ∞) to [0, ∞). Some possible extensive forms are shown below and on the next page. (a) (b) (Diagrams omitted.)

4 Beliefs, Mixed Strategies, and Expected Payoffs

1. (a) u1(U, C) = 0. (b) u2(M, R) = 4. (c) u2(D, C) = 6. (d) For σ1 = (1/3, 2/3, 0), u1(σ1, C) = (1/3)(0) + (2/3)(10) + 0 = 6 2/3. (e) u1(σ1, R) = 5 1/4. (f) u1(σ1, L) = 2. (g) u2(σ1, R) = 3 2/3. (h) u2(σ1, σ2) = 4 1/2.
2. (a) (Diagram omitted.) (b) Player 1's expected payoff of playing H is z; his expected payoff of playing L is 5. For z = 5, player 1 is indifferent between playing H and L. (c) Player 1's expected payoff of playing L is 20/3.
3. (a) u1(σ1, I) = (1/4)(2) + (1/4)(2) + (1/4)(4) + (1/4)(3) = 11/4. (b) u2(σ1, O) = 21/8. (c) u1(σ1, σ2) = 2(1/4) + 2(1/4) + 4(1/4)(1/3) + (1/4)(2/3) + (3/4)(1/3) + (1/4)(2/3) = 23/12. (d) u1(σ, σ2) = 7/3.
4. Note that all of these, except "Pigs," are symmetric games. Matching Pennies: u1(σ1, σ2) = u2(σ1, σ2) = 0. Prisoners' Dilemma: u1(σ1, σ2) = u2(σ1, σ2) = 1 1/2. Battle of the Sexes: u1(σ1, σ2) = u2(σ1, σ2) = 3/4. Hawk-Dove/Chicken: u1(σ1, σ2) = u2(σ1, σ2) = 1 1/2. Coordination: u1(σ1, σ2) = u2(σ1, σ2) = 1/2. Pareto Coordination: u1(σ1, σ2) = u2(σ1, σ2) = 3/4. Pigs: u1(σ1, σ2) = 3, u2(σ1, σ2) = 1.
5. The expected profit of player 1 is (100 − 28 − 20)14 − 20(14) = 448.

6 Dominance and Best Response

1. (a) B dominates A, and L dominates R. (b) L dominates R. (c) The mixed strategy 2/3 U, 1/3 D dominates M; X dominates Z. (d) None.
2. (a) To determine the BR set, we must determine which strategy of player 1 yields the highest payoff given her belief about player 2's strategy selection. Thus, we compare the payoff to each of her possible strategies: u1(U, θ2) = (1/3)(10) + 0 + (1/3)(3) = 13/3; u1(M, θ2) = (1/3)(2) + (1/3)(10) + (1/3)(6) = 6; and u1(D, θ2) = (1/3)(3) + (1/3)(4) + (1/3)(6) = 13/3. So BR1(θ2) = {M}. (b) BR2(θ1) = {L, R}. (c) BR1(θ2) = {U, M}. (d) BR2(θ1) = {C}.
3. Player 1 solves max_q1 (100 − 2q1 − 2q2)q1 − 20q1. The first-order condition is 100 − 4q1 − 2q2 − 20 = 0. Solving for q1 yields BR1(q2) = 20 − q2/2. It is easy to see that BR1(0) = 20. Since q2 ≥ 0, it cannot be that 25 is ever a best response. Given the beliefs, player 1's best response is 15.
4. (a) First we find the expected payoff to each strategy: u1(U, θ2) = 2/6 + 0 + 4(1/2) = 7/3; u1(M, θ2) = 3(1/6) + 1/2 = 1; and u1(D, θ2) = 1/6 + 1 + 1 = 13/6. As the strategy U yields the highest expected payoff to player 1, given θ2, BR1(θ2) = {U}. (b) BR2(θ1) = {R}. (c) BR1(θ2) = {U}. (d) BR1(θ2) = {U, D}. (e) BR2(θ1) = {L, R}.
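The best response formula in Exercise 3 can be double-checked numerically: maximize player 1's profit over a fine grid of quantities and compare with BR1(q2) = 20 − q2/2. The grid search is only a verification sketch, not part of the textbook's solution method.

```python
# Numerically verify BR1(q2) = 20 - q2/2 for the profit function
# u1(q1, q2) = (100 - 2*q1 - 2*q2)*q1 - 20*q1 from Exercise 3.

def u1(q1, q2):
    return (100 - 2 * q1 - 2 * q2) * q1 - 20 * q1

def best_response(q2, step=0.01):
    grid = [i * step for i in range(int(40 / step) + 1)]
    return max(grid, key=lambda q1: u1(q1, q2))

for q2 in (0.0, 10.0, 20.0):
    print(q2, best_response(q2), 20 - q2 / 2)
```

With a belief whose expected value of q2 is 10, the grid search returns a quantity near 15, matching the solution's "best response is 15."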
The manager must believe that the probability that the employee plays “settle” is (weakly) greater than 1/2. L)}. (c) R = {(U. (a) R = {U. 6:00)}. 2. So we can iteratively delete dominated strategies. 2008 by Joel Watson For instructors only. Thus. R}. Y}. then this strategy will also be dominated relative to a smaller set of strategies for the other player. The order does not matter because if a strategy is dominated (not a best response) relative to some set of strategies of the other player. (f) R = {A. Y)}.7 Rationalizability and Iterated Dominance 1. M. B} × {X. It may be that s1 is rationalizable because it is a best response to some other rationalizable strategy of player 2. and just also happens to be a best response to s2 . 0. 7. we know a will be at most 9 (if everyone except player 10 selects 9). player 2 can rationalize strategy s2. R = {(0. Note that player u10 = (a − 10 − 1)s10 and that a − 11 < 0 since a is at most 10. by induction. 0. then s2 is a best response to a strategy of player 1 that may rationally be played. 0. Given this. No. 0. . 0. 6. 0. Thus. So player 10 has a single undominated strategy. a − 10 < 0 and so player 9 must select 0. 2008 by Joel Watson For instructors only. 0)}. 0. do not distribute. 0. Yes.7 RATIONALIZABILITY AND ITERATED DOMINANCE 87 5. Thus. If s1 is rationalizable. 0. every player selects 0. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. say sˆ2 . we should focus on R2i = {3. 5. 4. and his/her best response to 6 is 6. . 8}. For x < 80 locating in region 2 dominates locating in region 1. It is easy to see that {1. BRi (1) = {2. . Thus. (c) When x > 75. preferences are as modeled in the basic location game. We first find the best response sets. 4.6}× {5. BRi (6) = {5}. 3. When x < 75. It is easy to see that if the regions are divided in half between 5 and 6 that 250 is distributed to each half. each candidate receives the same number of votes. 2. In any of these outcomes. R = {(5. 
5. Yes. If s1 is rationalizable, then s2 is a best response to a strategy of player 1 that may rationally be played.
6. No. It may be that s1 is rationalizable because it is a best response to some other rationalizable strategy of player 2, say ŝ2, and just also happens to be a best response to s2.
7. Note that u10 = (a − 10 − 1)s10 and that a − 11 < 0, since a is at most 10. So player 10 has a single undominated strategy, s10 = 0. Given this, we know that a will be at most 9 (if everyone except player 10 selects 9). Thus, a − 10 < 0, and so player 9 must select 0. Thus, by induction, every player selects 0, and R = {(0, 0, 0, 0, 0, 0, 0, 0, 0, 0)}.

8 Location and Partnership

1. We label the regions as shown below. (Diagram omitted.) We first find the best response sets; noticing the symmetry makes this easier. Suppose, for example, that player 2 plays 1; then BR1 = {2, 3, 4, 5, 6, 7, 8}. It is easy to see that {1, 9} are never best responses, so R1i = {2, 3, 4, 5, 6, 7, 8}. Since player i knows that player j is rational, he/she knows that j will never play 1 or 9, so we should focus on R2i = {3, 4, 5, 6, 7, 8}. Continuing, BRi(2) = {5}, BRi(4) = {5}, BRi(5) = {5}, BRi(6) = {5}, and BRi(8) = {5}; iterating in this way yields Ri = {5}. Thus, R = {(5, 5)}.
2. (a) Yes. When each player's objective is to maximize his/her probability of winning, preferences are as modeled in the basic location game. (b) Here, for x < 80, locating in region 2 dominates locating in region 1. So, unlike in the basic location model, there is not a single region that is "in the middle," and the best response set is not unique. It is easy to see that if the regions are divided in half between 5 and 6, then 250 is distributed to each half. When x < 75, player i's best response to 6 is 5 and his/her best response to 5 is 5; thus, R = {(5, 5)}. (c) When x > 75, player i's best response to 5 is 6 and his/her best response to 6 is 6; thus, R = {(6, 6)}. In any of these outcomes, each candidate receives the same number of votes.
4. Repeat of the analysis in the text for other values of c. Assume 1/4 < c ≤ 3/4. Recall from the text that BR1(y) = 1 + cy and BR2(x) = 1 + cx; this yields the graph of best response functions shown below. (Diagram omitted.) Because player i will never optimally exert effort that is either less than 1 or greater than 1 + 4c, we have R1i = [1, 1 + 4c]. Realizing that player j's rational behavior implies this, we have R2i = [1 + c, 1 + c(1 + 4c)]. Continuing in this way, both endpoints converge, and repeating yields Ri = {1/(1 − c)}; this is the only rationalizable strategy profile. Now assume −1 < c < 0. As neither player will ever optimally exert effort that is greater than 1, we have R1i = [0, 1]. Because the players know this about each other, R2i = [1 + c, 1]. Repeating yields Ri = {1/(1 − c)}, and this is again the only rationalizable strategy profile. (Diagram omitted.)
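The interval-shrinking argument in the partnership exercise can be checked numerically: apply the best response BR(e) = 1 + c·e repeatedly to both endpoints of the initial effort interval and watch them converge to 1/(1 − c). The value c = 1/2 below is an illustrative choice with 1/4 < c ≤ 3/4.

```python
# Iterating the partnership best response on an interval of efforts.
# Both endpoints of R^k converge to the fixed point 1/(1 - c).
# c = 0.5 is an illustrative parameter value.

c = 0.5
lo, hi = 1.0, 1.0 + 4 * c     # R^1 = [1, 1 + 4c]

for _ in range(100):          # R^{k+1} = 1 + c * R^k
    lo, hi = 1 + c * lo, 1 + c * hi

fixed_point = 1 / (1 - c)     # = 2.0 for c = 1/2
print(lo, hi, fixed_point)
```

Because the map e ↦ 1 + c·e is a contraction for |c| < 1, the same computation also covers the −1 < c < 0 case after adjusting the starting interval.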
Next, suppose that c > 3/4. In this case, remember that the players' strategies are constrained to be less than or equal to 4, so the best response functions are actually BR1(y) = 1 + cy if 1 + cy ≤ 4, and 4 if 1 + cy > 4; and BR2(x) = 1 + cx if 1 + cx ≤ 4, and 4 if 1 + cx > 4. However, the functions x = 1 + cy and y = 1 + cx suggest that players would want to select strategies that exceed 4 in response to some beliefs, so the best response functions cross at (4, 4). Repeating the analysis yields Ri = {4}.
5. (a) u1(p1, p2) = [10 − p1 + p2]p1 and u2(p1, p2) = [10 − p2 + p1]p2. (b) ui(pi, pj) = 10pi − pi² + pj·pi. As above, we want to solve for the pi that maximizes i's payoff, given pj. Solving the first-order condition yields pi(pj) = 5 + (1/2)pj. The best response functions are represented below. (Diagram omitted.) (c) Here there is no bound to the price a player can select: Si = [0, ∞) for i = 1, 2. Similar to the above, we have R1i = [5, ∞) and R2i = [15/2, ∞). Repeating the analysis yields Ri = [10, ∞) for i = 1, 2. Thus, we do not obtain a unique rationalizable strategy profile.
6. (a) No. (b) σi = (0, p, 1 − p, 0), for any p ∈ (1/2, 1), dominates locating in region 1.
7. Player 1 chooses x to maximize u1(x, y) = 2xy − x²; the first-order condition implies x = y, so BR1(y) = y. Similarly, player 2 chooses y to maximize u2(x, y) = 4xy − y²; the first-order condition implies y = 2x, but y ∈ [2, 8], so BR2(x) = 2x if x ≤ 4, and 8 if x > 4. So R = {(8, 8)}.

9 Nash Equilibrium
1. Chapter 4, Exercise 1: The Nash equilibrium is (D, L). Chapter 4, Exercise 2: The Nash equilibria are (Ea, aa′) and (Ea, an′). Chapter 4, Exercise 3: No Nash equilibrium. Chapter 5, Figure 7.1: The Nash equilibrium is (B, Y). Figure 7.3: The Nash equilibrium is (M, L). Figure 7.4: The Nash equilibria are (stag, stag) and (hare, hare).
2. (a) No Nash equilibrium. (b) Only at (1/2, 1/2) would no player wish to unilaterally deviate; thus, the Nash equilibrium is (1/2, 1/2).
3. (a) The Nash equilibria are (w, b) and (y, c). (b) (y, c) is efficient. (c) X is not congruent.
4. (a) The set of Nash equilibria is {(B, X)} = R. (b) The set of Nash equilibria is {(U, L)} = R. (c) The set of Nash equilibria is {(U, C)}. (d) The set of Nash equilibria is {(U, L), (D, R)}; note that R = {U, D} × {L, R}. (e) The Nash equilibria are (A, X) and (B, Y). (f) The Nash equilibria are (A, X) and (B, Z). (g) The Nash equilibrium is (D, R).
12 ). (A. we need u2(θ1 . 9. (a) The Nash equilibria are (2. Note that 3. Rearranging. 6. (b) They will agree to {w.5 on 7. For Y to be a best response to θ1 = ( 21 .5 on 3 and probability . {(z. we have s1 = 3/4− 1/2[1/4+s1 /2]. (b) R = [2. (a) In the first round strategies 1.7) and (7. 7. (c) No. (5/2. m)}. player 2 solves maxs2 s2 +2s1 s2 −2s22 . we obtain player 1’s best response function: s1 (s2) = 3/4−s2 /2. there are four possible strategy profiles. Substituting player 2’s best response function into player 1’s. Z) = x/2 + 1. and 7 are all best responses to beliefs that put probability . y} × {k. (a) The congruous sets are S. . So we need x ≤ 4. (b) Suppose that the first play is (opera. t∗2} and θ1. . 2008 by Joel Watson For instructors only. (c) In the case of strict Nash equilibrium. Consider the following game. (a) Play will converge to (D. because D is dominant for each player. in the long run si will not be played.. play will be (movie. 12. t∗1 ∈ BR1(θ2′ ). t∗1} × {s∗2 . and t∗2 ∈ BR2 (θ1′ ). do not distribute. movie). Then in round three. For {s∗1 . Recall that BRi(movie) = {movie}. we need s∗1 ∈ BR1 (θ2). θ2′ ∈ ∆{s∗2 . in which (H. opera). It must be that one or both players will play a strategy other than his part of such a Nash equilibrium with positive probability. movie).94 9 NASH EQUILIBRIUM 10. s∗2 ∈ BR2 (θ1). in round two. It must be the case that {s∗1 . t∗1} × {s∗2 . play will be (opera. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. The non-strict Nash equilibrium will not be played all of the time. D). it will be played all of the time. θ1′ ∈ ∆{s∗1. This cycle will continue with no equilibrium being reached. t∗2} is weakly congruous. X) is an efficient strategy profile that is also a non-strict Nash equilibrium. and BRi (opera) = {opera}. Thus. because s∗ and t∗ are Nash equilibria. This is true for θ2 putting probability 1 on s∗2 . Thus. 11. t∗2} to be weakly congruous. where θ2. 
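The computation in exercise 5 — substituting one linear best response, s2(s1) = 1/4 + s1/2, into the other, s1(s2) = 3/4 − s2/2 — can be verified with exact arithmetic. A sketch:

```python
from fractions import Fraction as F

# Exercise 5's best responses: s1(s2) = 3/4 - s2/2 and s2(s1) = 1/4 + s1/2.
def solve_linear_br():
    # Substitute s2(s1) into s1(s2): s1 = 3/4 - (1/4 + s1/2)/2,
    # which rearranges to s1*(1 + 1/4) = 3/4 - 1/8.
    s1 = (F(3, 4) - F(1, 8)) / (1 + F(1, 4))
    s2 = F(1, 4) + s1 / 2
    return s1, s2

print(solve_linear_br())  # -> (Fraction(1, 2), Fraction(1, 2))
```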
θ2′ putting probability 1 on t∗2. (d) Strategies that are never best responses will eventually be eliminated by this rule of thumb. t∗1}. etc. 95 9 NASH EQUILIBRIUM 13. To find equilibrium. we need γ = 6. it has a Nash equilibrium. This requires γ = 2α = 3β. (a) (b) This game has no pure-strategy Nash equilibrium. and 2 select Y. . Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. and β = 2. It is an equilibrium if 6 players select Z. 3 select X. The number of equilibria is 4620. 2008 by Joel Watson For instructors only. Thus. do not distribute. look for a case in which the players are getting the same payoffs and none wished to unilaterally deviate. (c) Yes. α = 3. and u∗ = p∗ q ∗ − cq ∗ = ([a + cn]/(n + 1])[n[a − c]/b(n + 1)] − cn[a − c]/b(n + 1) .10 Oligopoly. total equilibrium output is Q∗ = nq ∗. p−i ) = Instructors' Manual for Strategy: An Introduction to Game Theory 96 Copyright 2002. Player i’s best response function is qi (Q−i ) = (a − c)/2b − Q−i /2. ui (qi .  1 (a m − pi )[pi − c] if pi = p where m 0 if pi > p. Q−i ) = [a − bQ−i − bqi ]qi − cqi . (a) Si = [0. ∞). 2. Thus. where q ∗ is the equilibrium output of an individual firm. (a) Si = [0. Thus. q ∗ = [a − c]/b(n + 1) and Q∗ = n[a − c]/b(n + 1). 2. (b) Firm i solves maxqi [a − bQ−i − bqi ]qi − cqi . denotes the number of players k ∈ {1. . n} such that pk = p. Thus. = (a − c)2/b(n + 1)2 (d) In the duopoly case qi (q j ) = (a − c)/2b − q j /2. ui (pi . one can just set n = 2 in the above result). By examining the best response function. 2. . ∞]. Q∗−i = (n − 1)q ∗. we can identify the sequence Rki and inspection reveals that Ri = {(a − c)/3b} for i = 1. Crime. . q ∗ = (a − c)/3b. . 2008 by Joel Watson For instructors only. do not distribute. We also have p∗ = a − bn[a − c]/b(n + 1) = n[a − c]/(n + 1) = [an + a − an + nc]/(n + 1) = [a + cn]/(n + 1]. . This yields the first order condition a−bQ−i −c = 2bqi . 
The Nash equilibrium is found by solving the system of two equations given by the best response functions of the two players (alternatively. This is represented in the graph below. where Q−i ≡  j=i qj . So q ∗ = [a − c − b(n − 1)q ∗ ]/2b. (c) By symmetry.and Voting 1. Tariffs. (c) ui (60.97 OLIGOPOLY. Let p−i denote the minimum pj selected by any player j = i. there are other Nash equilibria in which one or more players selects a price greater than c (but at least two players select c). and so on. do not distribute. Ri = {60} for i = 1. For n > 2. ui (0. 4. (c) The notion of best response is not well defined. but as close to p−i as possible. 3. Thus. player i’s best response is to select pi < p−i . Thus. If c < p−i . 0) = 2000. 2008 by Joel Watson For instructors only. (b) The Nash equilibrium is (60. R1i = [30. This yields the first order condition y2 − c4 = 0. 70]. However there is no such number. This yields the first order condition y1/2 x 1 − (1+xy) 2 = 0. TARIFFS. 2. 60) = 200. (a) G solves maxx −y 2x−1 − xc4 . we find G’s best response function to be x(y) = x2 y/c2 . Rearranging. R2i = [45. Rearranging. we find C’s best response function 2y1/2 (1+xy) to be y(x) = 1/x. C solves maxy y 1/2(1 + xy)−1 . . CRIME. AND VOTING (b) The Nash equilibrium is: pi = c for all i. (a) BRi(xj ) = 30 + xj /2. (d) The best response functions are represented below. These are represented at the top of the next page. 80]. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. 60). It is easy to see that player i will never set xi < 30 or xi > 80. eD (eP ) = 2 2eP − eP .  (c) By symmetry.000 because this leads to a payoff of zero. Further. do not distribute. D}. TARIFFS. and uD (eP . uP (eP .98 OLIGOPOLY. it must be that e∗P = 2 2e∗p − e∗P . This implies 8(eP + eD ) − eD )2. or 8eD = (eP + eD )2 . Rearranging.000 as she will receive a negative payoff. . The Nash equilibrium is x = 1/c and y = c. (b) The prosecutor solves maxeP 8eP /(eP + eD ) − eP . 
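The symmetric Cournot formulas above — q* = (a − c)/b(n + 1), p* = (a + cn)/(n + 1), and u* = (a − c)²/b(n + 1)² — are easy to sanity-check numerically. A sketch; the demand parameters chosen are illustrative:

```python
def cournot_symmetric(a, b, c, n):
    """Symmetric Cournot equilibrium with inverse demand p = a - b*Q and
    constant marginal cost c, using the formulas from the solution."""
    q = (a - c) / (b * (n + 1))
    Q = n * q
    p = a - b * Q            # equals (a + c*n)/(n + 1)
    profit = (p - c) * q     # equals (a - c)**2 / (b * (n + 1)**2)
    return q, p, profit

# Duopoly check with a = 12, b = 1, c = 3:
# q = (12-3)/3 = 3, p = 12 - 6 = 6, profit = 9.
print(cournot_symmetric(a=12, b=1, c=3, n=2))  # -> (3.0, 6.0, 9.0)
```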
4. (b) We find x and y such that x = y/c² and y = 1/x. The Nash equilibrium is x = 1/c and y = c. (c) As the cost of enforcement c increases, enforcement x decreases and criminal activity y increases. 5. (a) The normal form is given by N = {P, D}; ei ∈ [0, ∞) for each player; uP(eP, eD) = 8eP/(eP + eD) − eP; and uD(eP, eD) = −8 + 8eD/(eP + eD) − eD. (b) The prosecutor solves max_eP 8eP/(eP + eD) − eP. The first order condition is 8/(eP + eD) − 8eP/(eP + eD)² = 1. This implies 8(eP + eD) − 8eP = (eP + eD)², or 8eD = (eP + eD)². Taking the square root of both sides yields 2√(2eD) = eP + eD. Rearranging, we find eP(eD) = 2√(2eD) − eD. Similarly, eD(eP) = 2√(2eP) − eP. (c) By symmetry, it must be that e*P = 2√(2e*P) − e*P. Thus, e*P = e*D = 2. The probability that the defendant wins in equilibrium is 1/2. (d) This is not efficient. 6. In equilibrium, b1 = b2 = 15,000. Clearly, neither player wishes to bid higher than 15,000, as she will then receive a negative payoff. Further, neither does better by unilaterally deviating to a bid that is less than 15,000.
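The litigation contest's best-response function eP(eD) = 2√(2eD) − eD, derived from the first-order condition above, can be iterated to confirm the symmetric equilibrium at 2. A sketch; the starting point and iteration count are arbitrary:

```python
import math

def br(e_other):
    """Best response from the first-order condition 8*e_j = (e_i + e_j)**2,
    which rearranges to e_i = 2*sqrt(2*e_j) - e_j."""
    return 2.0 * math.sqrt(2.0 * e_other) - e_other

eP = eD = 1.0
for _ in range(100):
    eP, eD = br(eD), br(eP)

print(round(eP, 6), round(eD, 6))  # -> 2.0 2.0
```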
Thus. and BR2(q1) = 4 − 21 q1. Instructors' Manual for Strategy: An Introduction to Game Theory 1 2 selects F. we get that G is the only rationalizable strategy for everyone. If she selects F. playing F yields 2m − 2x. AND VOTING 7. M will vote for Schwarzenegger. 8. If after some round of iterated dominance it is rational for at most m of the players to choose F. and Copyright 2002. BR1(q2 ) = 5 − 12 q2 . If player x selects G she gets 1. and every player x < x = 21 selects either F or G. Knowing this. CRIME. We can then show that L does strictly better by voting for Bustamante than voting for Schwarzenegger for any strategies of the others (assuming M does not vote for McClintock). (0. 4/5) σ2 = (3/4. However. x/2. Z}. the probability of (L. x/2)). player 1 will never play D. ( 34 . Nash equilibrium is (U. and player 2 will never play L. Instructors' Manual for Strategy: An Introduction to Game Theory 100 Copyright 2002. 1/2). 1/2. yields −5q + (x − 15)(1 − q) = 10 − 10q. as x becomes larger. Rearranging yields q = 20−x Firm X chooses p so that firm Y is indifferent between L and N. x/2). L) and ((0. 1/2).11 Mixed-Strategy Nash Equilibrium 1. 21 ). 1/3. (b) The Nash equilibrium is (( 21 . When x < 1. Y } × {Q. Thus. so we need 6σ2 (X) + 0σ2 (Y ) + 0σs (Z) = 4. 2. and that (2/3. there is an equilibrium of ((1 − x. This requires 5p + 5 − 5p = 3p + 8 − 8p or p = 3/5. We must also find probabilities over M and R such that player 1 is indifferent between U and C. (a) σ1 = (1/5. It must be that u1 (A. So σ2(X) = 32 . . This 25−x . the Nash equilibria are (U. (a) R = {X. L). We need to find probabilities over U and C such that player 2 is indifferent between M and R. N) = p(1 − q) = (1/2)[ 20−x−25+x ] = (1/2)[ x−20 ]. σ2) = 4. (a) (N. 4. σ1 = (3/5. 5 (c) The probability of (L. 0). (L. L) and (L. 2008 by Joel Watson For instructors only. Thus. When x > 1. 1/4). 3. do not distribute. 1/2)). 1/2. Further. 5. There is enough information. 20−x (d) As x increases. x/2. 
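The duopoly system BR1(q2) = 5 − q2/2 and BR2(q1) = 4 − q1/2 can be solved by substitution; a quick check in exact arithmetic confirms q1* = 4 and q2* = 2:

```python
from fractions import Fraction as F

# Best responses from the solution: BR1(q2) = 5 - q2/2 and BR2(q1) = 4 - q1/2.
def solve_duopoly():
    # Substitute: q1 = 5 - (4 - q1/2)/2, which rearranges to q1*(3/4) = 3.
    q1 = (F(5) - F(4, 2)) / (1 - F(1, 4))
    q2 = F(4) - q1 / 2
    return q1, q2

print(solve_duopoly())  # -> (Fraction(4, 1), Fraction(2, 1))
```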
(1 − x. σ2 = (0. N) decreases. Rearranging yields p = 1/2. (b) It is easy to see that M dominates L. (b) Firm Y chooses q so that Firm X is indifferent between L and N. 0. This yields −5p + 15 − 15p = 10 − 10p. N). 2/5. 41 )). for 0 < x < 1. This requires 3q + 6 − 6q = 5q + 4 − 4q or q = 1/2. Thus. 1/2. N) is a “better” outcome. 0) dominates D. Indifference between A and B requires 8q = 6 − 4q or q = 1/2. To see this note that q = 1/2 solves 4 − 4q = 4q = 3q + 1 − q. Here p denotes the probability with which U is played and r denotes the probability with which C is played. Let q be the probability that player 2 selects X. Notice that the q which makes player 1 indifferent between any two strategies makes him indifferent between all three strategies. D) (c) There are no pure strategy Nash equilibria. 4/5). σ2 = (3/5. 1/2). σ2 = (0. Thus. (B. at q = 5/8 player 1 is indifferent between A and C. y). 2008 by Joel Watson For instructors only. (b) (D. It remains to find probabilities such that player 2 is indifferent between playing M and R. thus. σ1 = (x. Clearly. A). So we can consider each of the cases where player 1 is indifferent between two of his strategies. where x. (a) σi = (1/2. This implies r = 1/5. 1/2) = {C} and. 1/3). σ1 = (1/2. . 1/2). However. B). 1/2). A).101 11 MIXED-STRATEGY NASH EQUILIBRIUM 6. σ2 = (1/2. 1/2. Indifference between M and R requires 2p + 4r + 3(1 − p − r) = 3p + 4(1 − p − r). do not distribute. note that BR1(1/2. 1/5. 2/5). Let q denote the probability with which M is played. (f) Note that M dominates L. Player 2 mixes over X and Y so that player 1 is indifferent between those strategies on which player 1 puts positive probability. 1/2). 1/2) and σ2 = (1/2. Thus. The comparison of 8q to 2q + 6 − 6q to 5 shows that we cannot find a mixed strategy in which player 1 places positive probability on all of his strategies. 
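The indifference calculations used throughout this chapter follow one template for 2×2 games: each player mixes so that the opponent is indifferent between her two pure strategies. A sketch of that template; it assumes a fully mixed equilibrium exists, and the matching-pennies payoffs are illustrative:

```python
from fractions import Fraction as F

def mix_2x2(A, B):
    """Fully mixed equilibrium of a 2x2 game (assumes one exists).

    Returns (p, q): the row player puts probability p on her first strategy,
    the column player puts probability q on his first strategy."""
    # q makes the row player indifferent: A00*q + A01*(1-q) = A10*q + A11*(1-q).
    q = F(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # p makes the column player indifferent, symmetrically.
    p = F(B[1][1] - B[1][0], B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

# Matching pennies (illustrative payoffs): the unique equilibrium is (1/2, 1/2).
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs
print(mix_2x2(A, B))  # -> (Fraction(1, 2), Fraction(1, 2))
```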
there is no equilibrium in which player 1 mixes between Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. B). and σ1 = (2/3. (e) (A. and σ1 = (1/5. First game: The normal form is represented below. 7. (d) (A. (B. So player 2 chooses probabilities over M and R such that player 1 is indifferent between at least two strategies. y ≥ 0 and x + y = 4/5. there is no equilibrium in which player 1 selects ID with positive probability. note that BR1(1/4. 2008 by Joel Watson For instructors only. Second game: The normal form of this game is represented below.102 11 MIXED-STRATEGY NASH EQUILIBRIUM A and B. and then player 1 would not be indifferent between A and C. This implies that (1 − pn−1 )v = v − c or 1 p = (c/v) n−1 . if this were the case. and that each player be indifferent between calling and not calling. however. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. Note that player 1 prefers OU or OD if player 2 selects O with a probability of at least 3/5. for. (a) The symmetric mixed strategy Nash equilibrium requires that each player call with the same probability. where p ∈ [0. This is an equilibrium for every q ∈ [1/4. 1 − q). player 2 is indifferent between his two strategies. player 1 should not pick IU. and then player 1 would not be indifferent between B and C. Note that this decreases as the number of bystanders n goes up. do not distribute. Thus. 0. there is clearly no equilibrium in which player 1 mixes between A and C. Finally. 8. C}. in this case. mixed strategy equilibria in which player 1 selects C with probability 1 (that is. p. . 1 − p) and σ2 = (q. 5/8]. Further. There are. there is no equilibrium in which player 1 mixes between B and C. this is because player 2 would strictly prefer X. 3/4) = {B. plays a pure strategy) and player 2 mixes between X and Y. when player 1 mixes between OU and OD. indifference between B and C requires 6 − 4q = 5 or q = 1/4. player 2 would strictly prefer Y. 
Turning to player 2’s incentives. (b) The probability that at least one player calls in equilibrium is 1− pn = n 1 − (c/v) n−1 . There is also no equilibrium in which player 1 selects IU with positive probability. then player 2 strictly prefers O and. 1] and q ≤ 2/5. the set of mixed strategy equilibria is described by σ1 = (0. in response. Likewise. Further. Clearly. we are done. A mixture of 2/3 and 1/3 implies that 001 receives a payoff of 8 from all of his undominated strategies. It is the case that (a − e) and (g − c) have the same sign. (a) If the game has a pure-strategy Nash equilibrium. 1/3. The analogous argument can be made with respect to player 2. Further. (b) It is advised that 001 never take route b. and d. when 001 chooses c. No. we now consider a mixture by 001 over a and d. This implies that each pure strategy of each player is a best response to some other pure strategy of the other.L) is not a Nash equilibrium implies a > e and/or h > f. do not distribute. It must be that either e > a and g > c or a > e and c > g. route c. The mixed equilibrium is ((1/3. 1/3. It is easy to see that if there is no pure strategy Nash equilibrium. 1] such that aq + c(1 − q) = eq + g(1 − q).103 11 MIXED-STRATEGY NASH EQUILIBRIUM 9. 11.R) is not a Nash equilibrium implies c > g and/or f > h. it must be that there is a mixture for each player i such that the other player j is indifferent between his two strategies. then only one of each of these pairs of conditions can hold. It is easy to see that 12(2/3)+10(1/3) = 34/11 > 11. 1/3)). Further. we should expect that the equilibrium with one player mixing and the other playing a pure strategy will involve 001 choosing c.L) is not a Nash equilibrium implies e > a and/or d > b. it does not have any pure strategy equilibria. (c) As 002’s payoff is the same. or route d. . c. Thus. 1/3) makes 001 indifferent between a. Clearly 002 is indifferent between x and y when 001 is playing c. Consider player 1. 
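The bystander formulas above, p = (c/v)^(1/(n−1)) and 1 − (c/v)^(n/(n−1)), can be tabulated to see that the probability someone calls falls as the crowd grows. A sketch; c = 1 and v = 4 are illustrative values:

```python
def volunteer(c, v, n):
    """Symmetric mixed equilibrium of the n-bystander calling game: each
    player fails to call with probability p = (c/v)**(1/(n-1)), so the
    pedestrian is helped with probability 1 - (c/v)**(n/(n-1))."""
    p_no_call = (c / v) ** (1.0 / (n - 1))
    p_helped = 1.0 - p_no_call ** n
    return p_no_call, p_helped

# The probability that at least one bystander calls falls as n grows.
for n in (2, 5, 20):
    print(n, volunteer(c=1, v=4, n=n))
```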
Route b is dominated by a mixture of routes a and c. It is easy to show that there exists a q ∈ [0. That (U. 002 can mix so that c is a best response for 001. and 4(1/3) > 1. 10. 1/3). we need Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. (1/3. Rearranging yields (a − e) = (g − c)(1 − q)/q. That (D. 001 should choose either route a. That (U. This equilibrium is s1 =c and σ2 = (2/3. When θ2 = 2/3.R) is not a Nash equilibrium implies g > c and/or b > d. 2008 by Joel Watson For instructors only. That (D. 001 should choose route a. One such mixture is 2/3 probability on a and 1/3 probability on c. (a) When θ2 > 2/3. 001 should choose route d. we noticed that 002’s mixing with probability (2/3. (b) Assume the game has no pure-strategy Nash equilibrium. 1/3). In finding the equilibrium above. When θ2 < 2/3. and proceed as follows. regardless of his strategy. Since b is dominated. do not distribute. (2/3. In considering whether there are any more equilibria. 001 could also play c with positive probability. This implies p = 1/3. (c) In equilibrium p = Instructors' Manual for Strategy: An Introduction to Game Theory √ 10−2 . Indifference on the part of 002 is reflected by 3 − 3p = 6p. 0. 2/3). 2/9). Let p denote the probability with which 001 plays a. so long as the ratio of a to d is kept the same.X. and (Y.104 11 MIXED-STRATEGY NASH EQUILIBRIUM only to find a mixture over a and d that makes 002 indifferent between x and y.Y. 1/3). (2/3.X). 6/9. (a) (b) The pure strategy Nash equilibria are (X. Thus we should expect that.Y. 2008 by Joel Watson For instructors only. Since he never plays b. which means that 002 receives a payoff of 2 whether he chooses x or y. and let q denote the probability with which he plays c. One such case is (1/9. Making 002 indifferent between playing x and y requires that 2q + 3(1 − p − q) = 6p + 2q. and 1 − p denote the probability with which he plays d. the probability with which d is played is 1−p−q. 1/3)) 12. 
implying an equilibrium of ((1/9. 2/9). This equilibrium is σ = ((1/3. 6/9.Y). This implies that any p and q such that 1 = 3p + q will work. Let p denote the probability with which 001 plays a. it is useful to notice that in both of the above equilibria that 002’s payoff from choosing x is the same as that from y.Y). . 0. 2 Copyright 2002. (Y. Putting these two facts together. do not distribute. because si is a best response to sj and ui (s) = ui (ti . tj ) ≥ ui (s). but u2(D. Examples include chess.12 Strictly Competitive Games and Security Strategies 1. Z) > u2 (C. note that. sj ). 2008 by Joel Watson For instructors only. X) > u2 (D. Thus. Because t is a Nash equilibrium. and Othello. 2. 2: Z (b) 1: C. 2: X (d) 1: D. (a) 1: C. sj ). X) > u1(D. we know that ui (s) = ui (si . 2. 2: Y 3. sj ) ≥ ui (t). 2: Z (c) 1: A. Z). Instructors' Manual for Strategy: An Introduction to Game Theory 105 Copyright 2002. . tic-tac-toe. sj ). (d) No. we have uj (t) ≥ uj (ti . (b) Yes. tj ) = ui (ti. Z) = u1 (C. Let i be one of the players and let j be the other player. Because s is a Nash equilibrium. we obtain ui (s) ≥ ui (ti. 4. (a) No. (c) Yes. we have ui(s) ≥ ui (ti . strict competition further implies that ui (t) ≤ ui (ti. but u2(A. sj ). Note that u1(D. Z). the same argument yields ui (t) ≥ ui(si . checkers. sj ) = ui(t) for i = 1. For the same reason. so the equilibria are equivalent. To see that the equilibria are also interchangeable. Y). Switching the roles of s and t. si is a best response to tj . Y). we know that ti is also a best response to sj . Note that u1 (A. and Enforcement in Static Settings 1. Instructors' Manual for Strategy: An Introduction to Game Theory 106 Copyright 2002.I) can be enforced by setting α between −4 and −2. (a) A contract specifying (I. Law. (b) No. 2. 2008 by Joel Watson For instructors only. I). (a) (I. . do not distribute. I) can be enforced under expectations damages because neither player has the incentive to deviate from (I.13 Contract. 
(g) c > 1/2. . Player 1 sues if −c > −4 or c < 4. This yields the new indifference relationship of 1 (1− pn−1 )v − d = v − c.N). 2008 by Joel Watson For instructors only. LAW. Thus. (c) No. (d) (e) c > 1. then p = [(c − d)/v] n−1 . (f) Consider (I. If c < d then p = 0 in equilibrium. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. suit occurs if c < 4. 3. do not distribute. Player 2 sues if −c > −4 or c < 4. (a) Now the payoff to i when no one calls is negative. Consider (N. Let d denote the fine for not calling. player 2 still has the incentive to deviate. This implies that. AND ENFORCEMENT (b) Yes.107 CONTRACT. (a) 10 (b) 0 4.I). Consider the case where the fine is incurred regardless of whether anyone else calls. if c > d. 7. This is because it gives a player the incentive to breach when it is efficient to do so. 5. There are pure strategy equilibria that achieve this outcome. do not distribute. except for p = 0 which results only if the type (1) fine is imposed. LAW. (2) Here. AND ENFORCEMENT Now consider the case where the fine is incurred only when no one calls. Expectations damages gives the non-breaching player the payoff that he expected to receive under the contract. the self-enforced component is to play (I. if i doesn’t call then he pays the fine with a low probability. The type (2) fine may be easier to enforce. For technology B. Expectations damages is more likely to achieve efficiency. I) occurs. because in this case one only needs to verify whether the pedestrian was treated promptly and who the bystanders were. (a) For technology A. I). and none otherwise. but it never happens in the symmetric mixed strategy equilibrium. I) occurs. I). Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. a transfer of at least 2 from player 1 to player 2 when (N. (b) (1) Given that if i doesn’t call then he pays the fine with certainty. (b) Now for technology A. 
The required type (2) fine may be much higher than the required type (1) would be. 2008 by Joel Watson For instructors only. N) occurs. the fine can be relatively low. N). For B the self-enforced component is to transfer 4 from player 1 to player 2 when someone plays N. The indifference relationship here implies (1 − pn−1 )v − dpn−1 = v − c. Verifiability is more important. and no transfer when both play I. 6.108 CONTRACT. the fine should be relatively large. (c) Either type of fine can be used to induce any particular p value. There is no externally-enforced component. . the self-enforced component is to play (I. 1 This implies p = [c/(d + v)] n−1 . The externally-enforced component is a transfer of at least 1 from player 2 to player 1 when (I. Restitution damages takes from the breacher the amount of his gain from breaching. It must be possible to convey information to the court in order to have a transfer imposed. The externally-enforced component is a transfer of at least 4 from player 1 to player 2 when (N. Thus. the self-enforced component is to play (N. The efficient outcome is for exactly one person to call. and none otherwise. AND ENFORCEMENT (c) Expectations damages gives the non-breaching player the amount that he expected to receive under the contract. Restitution damages take the gain that the breaching party receives due to breaching. The payoffs under this remedy are depicted for each case as shown here: Reliance damages seek to put the non-breaching party back to where he would have been had he not relied on the contract. The payoffs under reliance damages are depicted below. . do not distribute. (a) Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002.109 CONTRACT. The payoffs under restitution damages are depicted below. 2008 by Joel Watson For instructors only. LAW. 8. 110 CONTRACT, LAW, AND ENFORCEMENT (b) (H,H) and (L,L) are self-enforcing outcomes. (c) The court cannot distinguish between (H,L) and (L,H). 
(d) The best outcome the parties can achieve is (L,H). Their contract is such that when (H,H) is played player 1 pays δ to player 2, and when either (H,L) or (L,H) is played, player 1 pays α to player 2. We need α and δ to be such that α > 2 and δ > α + 1. (e) The best outcome the parties can achieve is (H,H). Their contract is such that when (H,H) is played player 1 pays δ to player 2, when (H,L) is played player 1 pays α to player 2, and when (L,H) is played player 1 pays β to player 2. We need α, β, and δ to be such that α + 2 < δ < β + 1. 9. (a) S1 = [0, ∞), S2 = [0, ∞). If y > x, then the payoffs are (0, 0). If x ≥ y, the payoffs are (y − Y, X − y). (b) There are multiple equilibria in which the players report x = y = α, where α ∈ [Y, X]. There is another set of multiple equilibria in which the players report x (player 2) and y (player 1) such that x ≤ Y < X ≤ y. (c) There are multiple equilibria; all satisfy x < y, y ≥ X, and x ≤ Y. (d) It is efficient if an equilibrium in the first set of multiple equilibria of part (b) is selected. This is because the plant is shut down if and only if it is efficient to do so. 10. Examples include the employment contracts of salespeople, attorneys, and professors.

14 Details of the Extensive Form

1. No general rule. Consider, for example, the prisoners' dilemma. Clearly, the extensive form of this game will contain dashed lines. Consider Exercise 3(a) of Chapter 4: the normal form of this does not exhibit imperfect information. 2. Suppose not. Then it must be that some pure strategy profile induces at least two paths through the tree. Since a strategy profile specifies an action to be taken in every contingency (at every node), having two paths induced by the same pure strategy profile would require that Tree Rule 3 not hold. 3., 4., 5. (extensive-form diagrams)
Instructors' Manual for Strategy: An Introduction to Game Theory 111 Copyright 2002, 2008 by Joel Watson For instructors only; do not distribute. 15 Backward Induction and Subgame Perfection 1. (a) (AF, C) (b) (BHJKN, CE) (c) (I, C, X) 2. (a) The subgame perfect equilibria are (WY, AC) and (ZX, BC). The Nash equilibria are (WY, AC), (ZX, BC), (WY, AD), (ZY, BC), and (WX, BD). (b) The subgame perfect equilibria are (UE, BD) and (DE, BC). The Nash equilibria are (UE, BD), (DE, BC), (UF, BD), and (DE, AC). 3. (a) (AHILN,CE) (b) 6 4. For any given x, y1∗ (x) = y2∗(x) = x; and x∗ = 2. 5. (a) (b) Working backward, it is easy to see that in round 5 player 1 will choose S. Thus, in round 4 player 2 will choose S. Continuing in this fashion, we find that, in equilibrium, each player will choose S any time he is on the move. (c) For any finite k, the backward induction outcome is that player 1 chooses S in the first round and each player receives one dollar. Instructors' Manual for Strategy: An Introduction to Game Theory 112 Copyright 2002, 2008 by Joel Watson For instructors only; do not distribute. . In the subgame perfect equilibrium.A).B). CBC.BACKWARD INDUCTION AND SUBGAME PERFECTION 113 6. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. add (IB. (IB. 7. (a) (b) If x > 3.A). and CBC chooses 76′ 6′′ 7′′′. (OB. the equilibria are (IA. and MBC. (OA.B).A). A) to this list. RBC chooses 76′ . Payoffs in the extensive form representation are in the order RBC. If x = 1. the equilibria are (IA. B) to this list. If X = 3. do not distribute.B). 2008 by Joel Watson For instructors only.A).B). add (IA.B). If 1 < x < 3. If x < 1. MBC chooses 7. (OA. The outcome differs from the simultaneous move case because of the sequential play. (OB. (OB. the equilibria are (OA. There is a mixed strategy equilibrium in which p = q = 0. once the proper subgame is reached. A) and (B. . B). (e) The Nash equilibria that are not subgame perfect include (OB. 
Player 2 plays A with a probability that does not exceed x/3. For x < 3/4. Any mixture (with positive probabilities) over OA and OB will make player 2 indifferent. We also need player 2 to mix so that player 1 is indifferent between IA and IB. There is not an equilibrium with p and/or q positive. and let 1 − p − q denote the probability that player 1 plays OA or OB. There is also a mixed equilibrium (3/4. player 2 mixes so that player 1 does not want to play IA or IB. If 1 < x < 3. but (for x > 3/4) this mixture makes player 1 strictly prefer to select OA or OB. (d) The pure strategy equilibria are (A. 1/4. A). B). Next consider the case in which 3/4 ≤ x ≤ 1.BACKWARD INDUCTION AND SUBGAME PERFECTION 114 (c) If x > 3 any mixture with positive probabilities over OA and OB for player 1. OA and OB are dominated. and the above mixed equilibria in which. To see this. In equilibrium. and over A and B for player 2. 2008 by Joel Watson For instructors only. Let p denote the probability that player 1 plays IA. 1/4. 3/4). player 1 does plays A with probability 3/4 and player 2 does plays A with probability 1/4. player 2 chooses A with probability 1/4. let q denotes the probability with which she plays IB. we need p = 3q. once the proper subgame is reached. implying that player 2 can put no more than probability x/3 on A and no more than x on B. Here. (f) The subgame perfect mixed equilibria are those in which. then IB is dominated. (OA. note that for player 2 to be indifferent. do not distribute. and B with probability 3/4. In equilibrium. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. player 1 chooses IA with probability 3/4 and IB with probability 1/4. player 1 does not play A with probability 3/4 and/or player 2 does not play A with probability 1/4. A) is chosen. B will never be selected in equilibrium. each has a higher payoff when both choose A. (b) It is easy to see that 0 < (x1 + x2)/(1 + x1 + x2 ) < 1. ∞) × (0. 
Each player selects A or B. (a) Si = {A. Ax2 ). where x1 and x2 are any positive numbers. B} × (0. (c) There is no subgame perfect equilibrium because the subgames following (A. and that (x1 + x2)/(1 + x1 + x2) approaches 1 as (x1 + x2) → ∞. A) have no Nash equilibria. ∞). 2008 by Joel Watson For instructors only. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. Further. . B) is chosen. picks a positive number when (A. do not distribute. Thus. and picks a positive number when (B. The Nash equilibria of this game are given by (Ax1 .BACKWARD INDUCTION AND SUBGAME PERFECTION 115 8. B) and (B. 16 Topics in Industrial Organization 1. z1(a) = a2 /9− 2a3 /81. which is a∗ = 6.000 in the low production plant. L). q2∗(q1)) = 0. (c) Find q1 such that u2(q1 . in the subgame perfect equilibrium both players invest 50. 2. (a) u2(q1. Maximizing by choosing q1 yields the first order condition 550 − 3q1 − 100 = 0. . The equilibrium is (L. This advertising level solves maxa 2a2 /9 − 2a3 /81. 4. The subgame perfect equilibrium is a = 0 and p1 = p2 = 0. do not distribute. Instructors' Manual for Strategy: An Introduction to Game Theory 116 Copyright 2002. u∗1 = 325(150) − 100(150) = 33. Because this is a simultaneous move game. q2∗ = 150 − (1/2)(150) = 75. From the text. 750 − F . q1∗ = 150. they would choose a to maximize their joint profit (with m set to divide the profit between them). Thus. q2∗(q1) = 150 − (1/2)q1 . q2∗(q1 )) = (1000 − 3q1 − 3q2 )q2 − 100q2 − F . Thus. we are just looking for the Nash equilibrium of the following normal form. Thus. Solving for equilibrium price yields p∗ = 100 − 3(150 + 75) = 325. q2∗(q1)) = (1000−3q1 −3[150−1/2q1 ])q1 −100q1 −F . u∗2 = 325(75) − 100(75) − F = 16875 − F . We have (1000 − 3q1 − 3[150 − (1/2)q1 ])[150 − (1/2)q1 ] − 100[150 − (1/2)q1 ] − F ] = (900 − 3q1 )[150 − (1/2)q1 ] − 3[150 − (1/2)q1 2 − F = 6[150 − (1/2)q1 ]2 − 3[150 − (1/2)q1 ]2 − F = 3[150 − (1/2)q1 ]2 − F. 
2008 by Joel Watson For instructors only. If the firms were to write a contract that specified a. 3. Maximizing by choosing q2 yields the first order condition 1000 − 3q1 − 6q2 − 100 = 0. (b) u1 (q1. Here. because the gain in extracting surplus from Hal is more than offset by the loss of not selling to Laurie. q1 = 300 − 2(F/3)1/2. (iii) F = 1728: Here. If Firm 3’s strategy is to choose q3′ = 14 if Firm 1 enters. Note that u1 = (1000 − 3[300 − 2(F/3)1/2])[300 − (F/3)1/2 ] −100[300 − 2(F/3)1/2] − F = 900[300 − 2(F/3)1/2 ] − 3[300 − 2(F/3)1/2 ]2 − F. The optimal pricing scheme is as follows. 040. then Firm 1’s best response is to enter against Firm 2. Firm 1 enters Firm 3’s industry. Set p1 = 1. 900. 112 = 25. Hal purchased) then set p2 = 200 to sell to Laurie. q 1 = 300 − 2(8112/3)1/2 = 196 and pi1 = 900(196) − 3(196)2 − 8. (a) If enters against Firm 2. 022. (iv) F = 108: In this case. If enters against Firm 3. u∗1 = 33. 260. 560. firm 1 will produce q1 = 252. 112 = 53. Thus. q1∗ = q2∗ = 3. resulting in u1 = 53. 700 (or just below to make Hal strictly want to buy). With Hal buying in the first period and Laurie in the second. Thus firm 1 will produce q1 = 196. 777. 642. u∗1 = 33. u1∗ = 33. q1′ = q3′ = 4. q1 = 300−2(108/3)1/2 = 288 and u1 = 900(288)−3(288)2−108 = 10. (a) If Hal does not purchase the monitor in period 1. (b) Yes. d) (i) F = 18. Tony would not benefit from being able to commit not to sell monitors in period 2. If one unit is sold in the first period (that is. firm 1 will produce q1 = 150. 642. resulting in u1 = 33. So firm 1 will produce q1∗ and u1 = 48. then p2 = 200 is not optimal because p2 = 500 yields a profit of 500. 2008 by Joel Watson For instructors only. 750 − 8.117 16 TOPICS IN INDUSTRIAL ORGANIZATION Setting profit equal to zero implies F = 3[150 − (1/2)q1 ]2 or (F/3)1/2 = 150 − (1/2)q1 . 040. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. Thus. do not distribute. 6. On the other hand. 728 = 32. 630. 
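The Stackelberg computation in exercise 2 — follower best response q2*(q1) = 150 − q1/2, leader output q1* = 150, and price p* = 325 — can be confirmed by a brute-force search over the leader's output. A sketch:

```python
def follower_br(q1):
    """Firm 2's best response with p = 1000 - 3(q1 + q2) and marginal cost 100."""
    return 150 - q1 / 2

def leader_profit(q1):
    """Firm 1's profit, anticipating the follower's best response."""
    q2 = follower_br(q1)
    p = 1000 - 3 * (q1 + q2)
    return (p - 100) * q1

# A grid search over the leader's output recovers the calculus answer q1* = 150,
# with q2* = 75, p* = 325, and leader profit 33,750 (before any fixed cost).
q1_star = max(range(0, 301), key=leader_profit)
print(q1_star, follower_br(q1_star), leader_profit(q1_star))  # -> 150 75.0 33750.0
```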
(b) The optimal prices are p1 = 1,400 and p2 = 200. Hal buys in period 1 and Laurie buys in period 2, so total revenue is 1,600. Again, Tony would not benefit from being able to commit not to sell monitors in period 2. If there are no first-period sales (Hal deviated), then set p2 = 500 to sell to Hal in the second period; p2 = 500 yields a profit of 500, while p2 = 200 yields a profit of 400.

Returning to the entry-deterrence cases: setting the entrant's profit equal to zero implies F = 3[150 − (1/2)q1]², or (F/3)^(1/2) = 150 − (1/2)q1. Thus, q̄1 = 300 − 2(F/3)^(1/2); for example, F = 18,723 implies q̄1 = 142 < q1*.
(ii) F = 8,112: In this case, q̄1 = 300 − 2(8,112/3)^(1/2) = 196 and u1 = 900(196) − 3(196)² − 8,112 = 53,040. Thus, firm 1 will produce q1 = 196, resulting in u1 = 53,040.
(iii) F = 1,728: Here, q̄1 = 300 − 2(1,728/3)^(1/2) = 252 and u1 = 900(252) − 3(252)² − 1,728 = 34,560, which exceeds the accommodation payoff of 33,750 − 1,728 = 32,022. Thus, firm 1 will produce q1 = 252, resulting in u1 = 34,560.

6. (a) Without payoffs, the extensive form is as follows.
(b) Player 1 enters and player 2 chooses DE′. In the subgame perfect equilibrium, player 1 selects E. The quantities are q1′ = q3′ = 4, q2′′ = q3′′ = 4, and q3′′′ = 6.

7. (a) The government solves max_ṗ 30 + ṗ/2 − Ẇ/2 − 30, or max_ṗ ṗ/2 − Ẇ/2. This implies that it wants to set ṗ as high as possible, regardless of the level of Ẇ, so ṗ* = 10. Knowing how the government will behave, the ASE solves max_Ẇ −(Ẇ − 10)². The first-order condition implies Ẇ* = ṗ* = 10. In equilibrium, y = 30, u = 0, and v = −5.
(b) If the government could commit ahead of time, it would commit to ṗ = 0, and the ASE would set Ẇ = 0. Then u = 0 and v = 0, whereas in (a) u = 0 and v = −5.
(c) One way is to have a separate central bank that does not have a politically elected head and that states its goals.

8. The subgame perfect equilibrium is for player 1 to locate in region 5 and for player 2 to use the strategy 234555678 (where, for example, 2 denotes that player 2 locates in region 2 when player 1 has located in region 1).

9.
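The case analysis over fixed costs can be checked numerically. The sketch below (ours, not the manual's; 33,750 is the leader's accommodation profit before its own fixed cost, as derived above) compares deterrence against accommodation for each F:

```python
# Entry deterrence vs. accommodation for the fixed costs considered above.

def limit_quantity(F):
    # Zero-profit condition for the entrant: F = 3*(150 - q1/2)**2
    return 300 - 2 * (F / 3) ** 0.5

def monopoly_profit(q, F):
    # Demand 1000 - 3q, marginal cost 100, fixed cost F (no entry occurs)
    return 900 * q - 3 * q ** 2 - F

for F in (18723, 8112, 1728, 108):
    qbar = limit_quantity(F)
    deter_q = max(qbar, 150)       # producing at least q1* = 150 is always optimal
    deter = monopoly_profit(deter_q, F)
    accommodate = 33750 - F
    print(F, round(qbar), round(deter), accommodate)
```

Running the loop reproduces the four cases: deterrence pays for F = 18,723 (48,777), F = 8,112 (53,040), and F = 1,728 (34,560), while accommodation pays for F = 108 (33,642 versus 10,260 from deterring).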
For scheme A to be optimal, it must be that Laurie's (low type) value in period 2 is at least as large as both Hal's (high type) period 1 value and Laurie's period 1 value. An example of this is below. For scheme B to be optimal, it must be that twice Laurie's (the low type) value in period 1 is at least as great as Hal's (high type) period 1 value plus his period 2 value. An example of this is below.

17 Parlor Games

1. (a) Use backward induction to solve this. To win the game, a player must not be forced to enter the top-left cell Z; thus, a player would lose if he must move with the rock in either cell 1 or cell 2, as shown in the following diagram. A player who is able to move the rock into cell 1 or cell 2 thus wins the game. We next see that a player who must move from cell 8, cell 9, or cell 10 (shown below) will lose. This implies that a player can guarantee victory if he is on the move when the rock is in one of cells 3, 4, 5, 6, or 7, as shown in the diagram below.

2. This can be solved by backward induction.

3. Let (x, y) denote the state where the red basket contains x balls and the blue basket contains y balls.
Continuing the procedure reveals that, starting from a cell marked with an X in the following picture, the next player to move will win.
(b) In general, player 2 has a winning strategy when m, n > 1 and both are odd, or when m or n equals 1 and the other is even; otherwise, player 1 has a winning strategy. Since the dimensions of the matrix are 5 × 7 (both odd), player 2 has a winning strategy.

2. If a player puts in the fifteenth penny—and no more—that player is assured of winning, because her opponent must add at least one penny. Thus, if a player puts in exactly the tenth penny, that player is assured of being able to put in exactly the fifteenth penny. Similarly, the player who puts in exactly the fifth penny is assured of winning. So player 2 has a winning strategy, and that strategy involves always putting in enough pennies to exactly put in the fifth one, the tenth one, and the fifteenth one.

3. In order to win this game, a player must leave her opponent with (0, 1), (1, 0), or (z, z), z > 1; thus, for example, a player should leave her opponent with (2, 2). A player must not leave her opponent with (1, 2), (2, 1), (0, 2), (2, 0), or (0, w) or (w, 0), where w > 2. Continuing with this logic and assuming m, n > 0, we see that player 2 has a winning strategy when m = n and player 1 has a winning strategy when m ≠ n.
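The penny game's backward induction can be sketched in code. The move range (adding one to four pennies per turn) is our assumption, chosen to match the multiples-of-five targets in the solution; with it, the losing positions are exactly the multiples of five:

```python
# Backward induction on the penny game: whoever puts in the fifteenth (final)
# penny wins; each turn a player adds between 1 and 4 pennies (assumed).

from functools import lru_cache

TOTAL, MAX_ADD = 15, 4

@lru_cache(maxsize=None)
def mover_wins(placed):
    """True if the player about to move can force a win with `placed` pennies in."""
    if placed == TOTAL:
        return False  # the previous mover placed the last penny and won
    return any(not mover_wins(placed + k)
               for k in range(1, MAX_ADD + 1) if placed + k <= TOTAL)

losing = [n for n in range(TOTAL + 1) if not mover_wins(n)]
print(losing)  # [0, 5, 10, 15]
```

Since the first mover faces count 0 (a losing position under these rules), player 2 has the winning strategy, as the solution states: always complete the fifth, tenth, and fifteenth penny.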
4. This is easily proved using a contradiction argument. A configuration refers to the set of cells that are filled in; let X be the set of matrix configurations that player 1 can create in his first move. Suppose player 1 does not have a strategy guaranteeing victory. Then player 2 must have such a strategy; that is, starting from each of the configurations in X, the next player to move can guarantee victory for himself. Note, however, that if player 1 selects cell (m, n) in his first move, then, whatever player 2's following choice is, the configuration of the matrix induced by player 2's selection will be in X (it is a configuration that player 1 could have created in his first move). Thus, player 1 can guarantee a victory following player 2's move, regardless of what player 2 does. This means that player 1 has a strategy that guarantees him a win, which contradicts what we assumed at the beginning. Thus, player 1 actually does have a strategy that guarantees him victory. This game is interesting because player 1's winning strategy in arbitrary m × n Chomp games is not known. A winning strategy is known for the special case in which m = n; this strategy selects cell (2, 2) in the first round.

5. (a) In order to win, a player must avoid entering a cell marked with an X in the matrix below. As player 1 begins in cell Y, he must enter a cell marked with an X, regardless of what player 2 does. Thus, player 2 has a strategy that ensures a win.
(b) There are many subgame perfect equilibria in this game, because players are indifferent between moves at numerous cells. There is a subgame perfect equilibrium in which player 1 wins, another in which player 2 wins, and still another in which player 3 wins.

6. (a) Yes. (b) No. (c) Player 1 can guarantee a payoff of 1 by choosing cell (2, 1). Player 2 will then rationally choose cell (1, 2) and force player 3 to move into cell (1, 1).

18 Bargaining Problems

1. John should undertake the activity that has the most impact on t, and hence on his overall payoff, per unit of time/cost. A one-unit increase in x will raise t by πJ; a one-unit increase in w raises t by 1 − πJ. Assuming that x and w can be increased at the same cost, John should increase x if πJ > 1/2; otherwise, he should increase w.

2. (a) v* = 50,000 and u*J = u*R = 25,000.
(b) Solving max_x 60,000 − x² + 800x yields x* = 400. This implies v* = 220,000, with vJ = −100,000 and vR = 320,000. Thus, t = 210,000 and u*J = u*R = 110,000.
(c) From above, x* = 400 and v* = 220,000. Now u*J = 20,000 + (3/4)(220,000 − 20,000 − 40,000) = 140,000 and u*R = 40,000 + (220,000 − 20,000 − 40,000)/4 = 80,000.

3. (a) x = 15, t = 0, and u1 = u2 = 15.
(b) x = 15, t = −1, u1 = 14, and u2 = 16.
(c) x = 15, t = −7, u1 = 8, and u2 = 22.
(d) x = 10 and t = −175.
(e) x = 12, t = 144π1 − 336, u1 = 144π1, and u2 = 144π2.

4. The other party's disagreement point influences how much of v* you get because it influences the size of the surplus. To get more of v*, you should raise your disagreement payoff.

5. You should raise the maximum joint value if your bargaining weight exceeds 1/2; otherwise, you should raise your disagreement payoff. In the latter case, your decision is not efficient.

6. Possible examples would include salary negotiations, merger negotiations, and negotiating the purchase of an automobile.

19 Analysis of Simple Bargaining Games

1. As T approaches infinity, the payoff vector converges to ([1 − δ]/[1 − δ²], [δ − δ²]/[1 − δ²]), which is the subgame perfect equilibrium payoff vector of the infinite-period game. (In the case of T = 4, the payoff vector is (1 − δ + δ²(1 − δ), δ − δ²(1 − δ)).)

2. (d) The president should promise z = |y|.

4. (a) The current owner is very impatient and will be quite willing to accept a low offer in the first period. (b) You are patient and would be willing to wait until the last period rather than accepting a small amount at the beginning of the game.
1. In the case of T = 1, player 1 offers m = 1 and player 2 accepts. If T = 2, player 1 offers 1 − δ in the first period and player 2 accepts, yielding the payoff vector (1 − δ, δ). For T = 3, the payoff vector is (1 − δ(1 − δ), δ(1 − δ)). For T = 5, discounting to the first period, the payoff is (1 − δ + δ²(1 − δ + δ²), δ − δ²(1 − δ + δ²)). Since δ < 1/2, the offerer in the first period will get more than half of the surplus. More precisely, the responder in the first period prefers accepting less than one-half of the surplus to rejecting and getting all of the surplus in the second period.

2. (a) The superintendent offers x = 0, and the president accepts any x.
(b) The president accepts x if x ≥ min{z, |y|}.
(c) The superintendent offers x = min{z, |y|}, and the president accepts.

3. Note that BRi(mj) = 1 − mj. The set of Nash equilibria is given by {(m1, m2) ∈ [0, 1]² | m1 + m2 = 1}. One can interpret the equilibrium demands (the mi's) as the bargaining weights.

4. (a) Here you should make the first offer.
(b) In this case, you should make the second offer: you can wait until the last period, at which point you can get the entire surplus (the owner will accept anything then). At the least, this will give you more than one-half of the surplus available in the first period.
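The finite-horizon payoff vectors above follow a simple recursion, which a short sketch (ours, not the manual's) can confirm, along with the convergence to the infinite-horizon split:

```python
# First-proposer share in the T-period alternating-offer game, via the
# recursion x_T = 1 - delta * x_{T-1} with x_1 = 1 (the last proposer
# takes everything).

def proposer_share(T, delta):
    x = 1.0
    for _ in range(T - 1):
        x = 1 - delta * x
    return x

delta = 0.4  # any delta < 1/2, as in the exercise
for T in (1, 2, 3, 4, 5):
    x = proposer_share(T, delta)
    print(T, round(x, 4), round(1 - x, 4))

# As T grows, the share converges to 1/(1+delta) = (1-delta)/(1-delta**2).
print(round(proposer_share(50, delta), 6), round(1 / (1 + delta), 6))
```

For δ = 0.4 the loop prints the shares 1, 0.6, 0.76, 0.696, 0.7216, matching the closed-form expressions, and the last line shows convergence to 1/(1 + δ).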
6. Suppose that the offer in period 1 is x, the offer in period 2 is y, and the offer in period 3 is z; for simplicity, express each offer as the amount player 2 is to receive. If period 3 is reached, player 1 will offer z = 0 and player 2 will accept. Thus, in period 2 (if it is reached), player 2 will offer y such that player 1 is indifferent between accepting and rejecting to receive 1 in the next period; this implies y = 1 − δ, so player 2 receives 1 − δ in the second period. Knowing this, in period 1, player 1 will offer x so that player 2 is indifferent between accepting and rejecting to receive 1 − δ in the second period; player 2 will accept any offer that gives her at least δ(1 − δ). Thus, in period 1, player 1 offers x = δ(1 − δ), keeping 1 − δ + δ² for himself, and the offer is accepted.

7. Player 1 makes any offer of X and Y. Player 2 substitutes an offer of x = 0, y = 1 for any offer made by player 1. Player 3 accepts any offer such that his share is at least zero. Thus, it may be that player 2 accepts X = 0, Y = 1.

8. (a) [diagram]
(b) Player 2 accepts any m such that m + a(2m − 1) ≥ 0. This implies accepting any m ≥ a/(1 + 2a). Thus, player 1 offers a/(1 + 2a).
(c) As a becomes large, the equilibrium split approaches 50:50. This is because, when a is large, player 2 cares very much about how close his share is to player 1's share and will reject any offer in which m is not close to 1 − m.

20 Games with Joint Decisions; Negotiation Equilibrium

1. (a) Carina expends no effort (e* = 0) and Wendy sets t = 0.
(b) Carina solves max_e 800xe − e². This yields the first-order condition 800x = 2e, so e* = 400x.
(c) Given x and t, Carina solves max_e 800xe + t − e², so again e* = 400x. Wendy solves max_x 800[400x] − 800x[400x], which yields x* = 1/2. To find the maximum joint surplus, hold t fixed and solve max_x 800[400x] − [400x]², which yields x* = 1. The joint surplus is 320,000 − 160,000 = 160,000. Because of the players' equal bargaining weights, the transfer is t* = −80,000.
(d) This assumes verifiability of worker effort.

2. (a) [diagram]
(b) When b ≥ x², the solution has x* = 4 and b* = 16, with uW = 8,000, uM = 8,000, and t* = 8,000.
(c) v* = 16,000 and surplus = 16,000.

3. The players would always avoid court fees by negotiating a settlement.

4. The game is represented as below.

5. (a) The players need enforcement when (H, L) is played. Player 2 prefers not to deviate from (H, H) only if t ≥ 4. For player 1 to have the incentive to choose "enforce," it must be that t ≥ c.
(b) We need t large to deter player 2, and t − c small to deter player 1: we need t ≥ 4, and we also need t − c ≤ 2, or otherwise player 1 would prefer to deviate from (H, H). Combining these inequalities, we have c ∈ [t − 2, t] and t ≥ 4. A value of t that satisfies these inequalities exists if and only if c ≥ 2. Combining this with the legal constraint that t ≤ 10, we find that (H, H) can be enforced (using an appropriately chosen t) if and only if c ∈ [2, 10].
(c) In this case, the legal fee deters frivolous suits from player 1, while not getting in the way of justice in the event that player 2 deviates. It is not possible to do both if c is close to 0; in that case, player 2 would not select "enforce," which prevents the support of (H, H).

6. (a) x* = 10p and y* = 5(1 − p). (b) p* = .8. (c) In this case, x* = 8 and y* = 1.

7. (a) Since the cost is sunk, the surplus is [100 − q1 − q2](q1 + q2), and ui = −10qi + πi[100 − q1 − q2](q1 + q2).
(b) u1 = (1/2)[100 − q1 − q2](q1 + q2) − 10q1 and u2 = (1/2)[100 − q1 − q2](q1 + q2) − 10q2.
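The effort problem above can be verified numerically. This sketch (ours; the revenue function 800e and cost e² are taken from the solution) confirms that Carina's effort is e* = 400x, that Wendy's privately optimal share is x* = 1/2, and that the joint-surplus-maximizing share is x* = 1:

```python
# Grid-search confirmation of the calculus in the Carina/Wendy problem.

def effort(x):
    # argmax_e 800*x*e - e**2  =>  800x = 2e
    return 400 * x

def wendy_profit(x):
    # Wendy keeps the share (1 - x) of the revenue 800 * e*(x)
    return (1 - x) * 800 * effort(x)

def joint_surplus(x):
    return 800 * effort(x) - effort(x) ** 2

grid = [i / 100 for i in range(101)]
x_wendy = max(grid, key=wendy_profit)
x_joint = max(grid, key=joint_surplus)
print(x_wendy, x_joint, joint_surplus(x_joint))  # 0.5 1.0 160000.0
```

The last value reproduces the maximum joint surplus of 160,000 reported above.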
(c) Firm 1 solves max_q1 (1/2)[100 − q1 − q2](q1 + q2) − 10q1: each firm wants to maximize its share of the surplus less its cost. The first-order condition implies q1*(q2) = 40 − q2. By symmetry, q2*(q1) = 40 − q1. In equilibrium, q1 + q2 = 40; since there are many combinations of q1 and q2 that satisfy this equation, there are multiple equilibria. Note that the total quantity (40) is less than both the standard Cournot output and the monopoly output; the gain from having the maximum surplus outweighs the additional cost. Since the total is less than the monopoly output, it is not efficient from the firms' point of view.
(d) Now each firm solves max_qi πi[100 − qi − qj](qi + qj) − 10qi. This implies best-response functions given by qi*(qj) = 50 − 5/πi − qj, which cannot be simultaneously satisfied with positive quantities. This is because the player with the smaller πi would wish to produce a negative amount. In equilibrium, the player with the larger bargaining weight π produces 50 − 5/π units and the other firm produces zero.
(e) The player with the smaller bargaining weight does not receive enough gain in his share of the surplus to justify production.

21 Unverifiable Investment, Hold Up, Options, and Ownership

1. The seller chooses H, and the buyer chooses A if H and R if L. Here p1 = 15 and p0 ∈ [−5, 5]. (d) The surplus is 10; each gets 5.

2. (a) The order of the payoffs is Estelle, Joel, Jerry. The standard bargaining solution requires that each player i receive di + πi[v* − di − dl − dk], where l and k denote the other players. The surplus is 900 − 500 = 400. Joel pays Estelle 400/3, and Joel pays Jerry 1900/3.
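The three output benchmarks in the surplus-sharing problem (negotiated total 40, monopoly total 45, Cournot total 60) can be checked with a grid search. This is our sketch, using the demand and cost parameters (intercept 100, marginal cost 10) from the solution:

```python
# Verify the symmetric best responses at each benchmark by integer grid search.

def surplus_share_payoff(qi, qj):
    # Equal shares of the surplus, own cost borne privately
    return 0.5 * (100 - qi - qj) * (qi + qj) - 10 * qi

def cournot_payoff(qi, qj):
    return (100 - qi - qj) * qi - 10 * qi

def monopoly_profit(Q):
    return (100 - Q) * Q - 10 * Q

br_share = max(range(101), key=lambda q: surplus_share_payoff(q, 20))
br_cournot = max(range(101), key=lambda q: cournot_payoff(q, 30))
Q_monopoly = max(range(101), key=monopoly_profit)
print(br_share + 20, br_cournot + 30, Q_monopoly)  # 40 60 45
```

The best response to 20 under surplus sharing is 20 (total 40), the Cournot best response to 30 is 30 (total 60), and the monopoly total is 45, confirming the ordering stated above.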
1. (a) The efficient outcome is high investment and acceptance. Because the seller invests high if p1 ≥ 10 + p0, and the buyer will accept if p1 ≤ 20 + p0, it must be that 20 + p0 ≥ p1 ≥ 10 + p0. Thus, there are values of p0 and p1 that induce the efficient outcome.
(b) If p0 ≥ p1 − 5, then the buyer always accepts, so the seller will not choose H. In the case that H occurs, the buyer accepts; in the case that L occurs, the buyer will not accept if p1 ≥ 5 + p0.

2. (a) Let x = 1 denote restoration and x = 0 denote no restoration. Here is the extensive form with joint decisions: [diagram] The surplus is 900 − 500 = 400. In equilibrium, Joel buys the desk, and Jerry restores the desk. This is efficient.
(b) Let t denote the transfer from Estelle to Jerry. Estelle can be held up if she has the desk restored and then tries to sell it to Joel. In equilibrium, the desk is not restored, and Joel buys the desk for 50. This is not efficient.
(c) Let tE denote the transfer from Joel to Estelle, and let tJ denote the transfer from Joel to Jerry. Let m denote the transfer from Joel to Estelle when the desk has been restored, and let b denote the transfer from Joel to Estelle when the desk has not been restored. In equilibrium, Joel buys the desk for 125 and pays Jerry 650 to restore it. This is efficient. Jerry's payoff is greater here than in part (a) because Jerry can hold up Joel during their negotiation, which occurs after Joel has acquired the desk from Estelle.
(d) Estelle (and Jerry) do not value the restored desk.
3. (a) The union makes a take-it-or-leave-it offer of w = (R − M)/n, which is accepted.
(b) The surplus is R − M. The entrepreneur gets πE[R − M] and the union gets nw + πU[R − M]. The railroad is built if πE[R − M] > F.
(c) The entrepreneur's investment is sunk when negotiation occurs, so he does not generally get all of the returns from his investment. If πE[R − M] < F, the railroad will not be built, since the entrepreneur can foresee that it will lose F. To avoid the hold-up problem, the entrepreneur may try to negotiate a contract with the union before making his investment.

4. If the worker's bargaining weight is less than 1, then he gets more of an increase in his payoff from increasing his outside option by a unit than from increasing his productivity with the single employer. Thus, he does better to increase his general human capital.

5. If it is not possible to verify whether you have abused the computer or not, then it is better for you to own it. This gives you the incentive to treat it with care, because you will be responsible for necessary repairs.

6. Stock options in a start-up company, stock options for employees, and options to buy in procurement settings are examples.

7. (a) The efficient investment level is the solution to max_x x − x², which is x* = 1/2.
(b) Player 1 selects x = 1/2. In equilibrium, the players demand m2(1/2) = 0 and m1(1/2) = x. In the event that player 1 deviates by choosing some x ≠ 1/2, the players are prescribed to make the demands m2(x) = x and m1(x) = 0. Player 2 has no incentive to deviate in the short run. Thus, player 1 obtains the full value of his investment when he selects 1/2, but he obtains none of the benefit of another investment level.
(c) One way to interpret this equilibrium is that player 1's bargaining weight is 1 if he invests 1/2, but it drops to zero if he makes any other investment. When he has all of the bargaining power, he does extract the full return.
22 Repeated Games and Reputation

1. (a) To support cooperation, δ must be such that 2/(1 − δ) ≥ 4 + δ/(1 − δ). Solving for δ, we see that cooperation requires δ ≥ 2/3.
(b) To support cooperation by player 1, it must be that δ ≥ 3/5. To support cooperation by player 2, it must be that δ ≥ 4/5.
(c) Cooperation by player 1 requires δ ≥ 4/5.

2. In period 2, subgame perfection requires play of the only Nash equilibrium of the stage game. Thus, the selection of the Nash equilibrium to be played in period 2 cannot influence incentives in period 1: as there is only one Nash equilibrium of the stage game, the only subgame perfect equilibrium is play of the Nash equilibrium of the stage game in both periods. For any finite T, the logic from the two-period case applies, and the answer does not change.

3. (U, L) can be supported as follows. If player 1 defects ((C, L) is played) in the first period, then the players play (D, R) in the second period. If player 2 defects ((U, M) is played) in the first period, then the players play (D, M) in the second period. Using this stage Nash punishment, it must be that δ ≥ 1/2.

4. (a) The Nash equilibria are (B, X) and (B, Y).
(b) Yes: player 1 plays A in period 1 and B in period 2; player 2 plays X in period 1, and in period 2 plays X if player 1 played A in period 1 and Y if player 1 played B in period 1. Supporting this requires δ ≥ 3/5.
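The threshold in part (a) can be checked numerically. In this sketch (ours), the stage payoffs 2 (cooperation), 4 (deviation), and 1 (stage Nash) are read off the inequality in the solution:

```python
# Cooperation forever pays 2/(1-delta); deviating pays 4 today plus the
# stage-Nash payoff 1 per period thereafter, i.e. 4 + delta/(1-delta).

def cooperate_value(delta):
    return 2 / (1 - delta)

def deviate_value(delta):
    return 4 + delta / (1 - delta)

for delta in (0.6, 0.67, 0.7):
    print(delta, cooperate_value(delta) >= deviate_value(delta))
# Cooperation is sustainable exactly when delta >= 2/3.
```

Running this prints False for δ = 0.6 and True for δ = 0.67 and δ = 0.7, consistent with the cutoff δ ≥ 2/3.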
5. (a) The (pure strategy) Nash equilibria are (U, L, B) and (D, R, B). These yield the payoffs (8, 2), (8, 9), and (8, 6).
(b) Any combination of the Nash equilibria of the stage game is a subgame perfect equilibrium. There are two other subgame perfect equilibria. In the first, the players select (U, L, B) in the first round and then, if no one deviated, they play (D, R, B) in the second period; this yields the payoff (9, 10). In the other equilibrium, the players select (U, L, B) in the first round and, if player 2 cheats, they play (D, R, B) in the second period, whereas if player 2 does not cheat, they play (U, R, B) in the second period; this yields the payoff (8, 10). In each case, player 2 has no incentive to deviate in odd or even periods.

6. Note first that player 1 can guarantee himself at least 2 per period, yet he would get less than this starting in period 2 if the players alternated as described when (D, D) is supposed to be played; a long horizon ahead, however, is what sustains the alternation. Alternating between (C, C) and (C, D) requires that neither player has the incentive to deviate. Player 1 has no incentive to deviate in odd periods, and player 1 prefers not to deviate in an even period if
7 + 2δ/(1 − δ) ≤ 3 + 2δ + 3δ² + 2δ³ + 3δ⁴ + ⋯ = (3 + 2δ)/(1 − δ²).
Solving for δ yields δ ≥ (4/5)^(1/2).
8. (a) Player 2t plays a best response to player 1's action in the stage game. Thus, in equilibrium, player 1 plays T and player 2t plays D.
(b) Consider the following example.
(c) Consider, for example, the prisoners' dilemma. From the text, we know that cooperation can be supported when both are long-run players, until and unless someone defects, using stage Nash punishment. If only one player is a long-run player, then the only subgame perfect equilibrium of the repeated game involves each player defecting in each period.

9. (a) As x < 10, there is a subgame perfect equilibrium in which the players randomize between S and C in each period.
(b) If a player selects S, then the game stops and this player obtains 0. Since the players randomize in each period, their continuation values from the start of a given period are both 0. If the player chooses C in a period, he thus gets an expected payoff of 10α − (1 − α). Setting this equal to 0 (which must be the case in order for the players to be indifferent between S and C) yields α = 1/11.
(c) In this case, the continuation value from the beginning of each period is αx. When a player selects S, he expects to get αz; when he chooses C, he expects 10α + (1 − α)(−1 + δαx). The equality that defines α is thus αz = 10α + (1 − α)(−1 + δαx).
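The indifference computation in part (b) is easy to verify exactly. A small sketch (ours) using rational arithmetic:

```python
# With continuation values of zero, choosing C yields 10 with probability
# alpha (the opponent stops) and -1 otherwise; indifference with S (payoff 0)
# pins down alpha.

from fractions import Fraction

def expected_C(alpha):
    return 10 * alpha + (1 - alpha) * (-1)

alpha = Fraction(1, 11)
print(expected_C(alpha))  # 0 -> indifferent between S and C
```

Any larger α would make C strictly profitable, so α = 1/11 is the unique mixing probability.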
23 Collusion, Trade Agreements, and Goodwill

1. (a) Consider all players selecting pi = p = 60. The payoff to each player is equal to 2500/n. If player i defects, she does so by setting pi = 60 − ε, where ε is arbitrarily small; thus, the stage-game payoff of defecting can be made arbitrarily close to 2,500. If someone defects, then everyone chooses pi = p = 10 thereafter; the profit under the Nash equilibrium of the stage game is 0.
(b) The quantity of each firm when they collude is qc = (110 − 60)/n = 50/n. The profit of each firm under collusion is (50/n)60 − 10(50/n) = 2500/n. To support collusion, it must be that [2500/n][1/(1 − δ)] ≥ 2500 + 0, which simplifies to δ ≥ 1 − 1/n.
(c) Collusion is "easier" with fewer firms.

2. (a) The best-response function of player i is given by BRi(xj) = 30 + xj/2. Solving for equilibrium, we find that xi = 30 + (1/2)[30 + xi/2], which implies x1* = x2* = 60.
(b) Under zero tariffs, the payoff to each country is 2,000. Player i's gain from deviating is 900, since a deviation yields 2000 + 60(30) − 30(30) = 2,900. Sustaining zero tariffs requires that 2000/(1 − δ) ≥ 2900 + 200δ/(1 − δ). Solving for δ, we get δ ≥ 1/3.
(c) The payoff to each player of cooperating by setting tariffs equal to k is 2000 + 60k + k² − k² − 90k = 2000 − 30k. The gain to player i of unilaterally deviating is [30 + k/2]² − 60k.
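The collusion condition from problem 1 can be verified numerically. This sketch (ours) checks that the critical discount factor is 1 − 1/n, so collusion gets harder as the number of firms grows:

```python
# Collusion requires [2500/n]/(1-delta) >= 2500 (the deviator's one-shot
# grab of the whole collusive profit, followed by zero forever).

def collusion_sustainable(n, delta):
    return (2500 / n) / (1 - delta) >= 2500

for n in (2, 3, 10):
    critical = 1 - 1 / n
    print(n, critical,
          collusion_sustainable(n, critical + 0.01),
          collusion_sustainable(n, critical - 0.01))
```

For each n, the condition holds just above the cutoff 1 − 1/n and fails just below it, matching the derivation above.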
C) unless someone defects. . D) is played thereafter. Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. For this to be rational for player 1. so the price is discounted). Z) and (B. Z) is played in both periods and player 21 sells the right to player 22 for 8α. Cooperation can be supported if δ(pG − pB ) ≥ 1. TRADE AGREEMENTS. Players coordinate on (A. Similar calculations show that pB = α/(1 − δα). whereas he sells the right for 4α if he deviated. This surplus is divided according to the fixed bargaining weights. where pG is the price he gets with a good reputation and pB is the price he gets with a bad reputation. 4. do not distribute. Z) in the second period. (b) Suppose players select (C. The Nash equilibria are (A. AND GOODWILL In order to support tariff setting of k. For player 2t . the discount factor and the owner’s bargaining weight must be sufficiently large in order for cooperation to be sustained over time. 1−δ 1−δ [30 + k2 ]2 − 60k ≤ δ. X) in the first period and (A. (Trade occurs at the beginning of the next period. 141 COLLUSION, TRADE AGREEMENTS, AND GOODWILL 5. (a) The Nash equilibria are (x, x), (x, z), (z, x), and (y, y). (b) They would agree to play (y,y). (c) In the first round, they play (z, z). If no one defected in the first period, then they are supposed to play (y, y) in the second period. If player 1 defected in the first period, then they coordinate on (z, x) in the second period. If player 2 defected in the first period, then they coordinate on (x, z) in the second period. It is easy to verify that this strategy is a subgame perfect equilibrium. (d) The answer depends on whether one believes that the players’ bargaining powers would be affected by the history of play. If deviation by a player causes his bargaining weight to suddenly drop to, say, 0, then the equilibrium described in part (c) seems consistent with the opportunity to renegotiate before the second period stage game. 
Another way of interpreting the equilibrium is that the prescribed play for period 2 is the disagreement point for renegotiation, in which case there is no surplus of renegotiation. However, perhaps a more reasonable theory of renegotiation would posit that each player’s bargaining weight is independent of the history (it is related to institutional features) and that each player could insist on some neutral stage Nash equilibrium, such as (x, x) or (y, y). In this case, as long as bargaining weights are positive, it would not be possible to sustain (x, z) or (z, x) in period 2. As a result, the equilibrium of part (c) would not withstand renegotiation. 6. (a) If a young player does not expect to get anything when he is old, then he optimizes myopically when young and therefore gives nothing to the older generation. (b) If player t − 1 has given xt−1 = 1 to player t − 2, then player t gives xt = 1 to player t − 1. Otherwise, player t gives nothing to player t − 1 (xt = 0). Clearly, each young player thus has the incentive to give 1 to the old generation. (c) Each player obtains 1 in the equilibrium from part (a), 2 in the equilibrium from part (b). Thus, a reputation-based intergenerational-transfer equilibrium is best. 7. (a) Any δ. (b) δ ≥ 37 . (c) m = 4 . 3(1−δ) Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002, 2008 by Joel Watson For instructors only; do not distribute. COLLUSION, TRADE AGREEMENTS, AND GOODWILL 142 8. (a) Cooperation can be sustained for δ ≥ 23 . (b) Cooperation can be sustained for δ ≥ (c) Cooperation can be sustained for δ ≥ Instructors' Manual for Strategy: An Introduction to Game Theory k . k+1 4(k−2)! . 4(k−2)!+k! Copyright 2002, 2008 by Joel Watson For instructors only; do not distribute. 24 Random Events and Incomplete Information 1. 2. (a) Instructors' Manual for Strategy: An Introduction to Game Theory 143 Copyright 2002, 2008 by Joel Watson For instructors only; do not distribute. 
(b) [Diagram omitted.]

3. [Diagram omitted.]

4. [Diagram omitted.]

25 Risk and Incentives in Contracting

1. (a) The wage offer must be at least 100 − y, so the firm's payoff is 180 − (100 − y) = 80 + y.
(b) In this case, the worker accepts the job if and only if w + 100q ≥ 100, which means the wage must be at least 100(1 − q). The firm obtains 200 − 100(1 − q) = 100(1 + q).
(c) When q = 1/2, it is optimal to offer the risky job at a wage of 50 if y ≤ 70, whereas the safe job at a wage of 100 − y is optimal otherwise.

2. Examples include stock brokers, commodities traders, and salespeople.

3. The probability of a successful project is p. This implies an incentive compatibility constraint of p(w + b − 1)^α + (1 − p)(w − 1)^α ≥ w^α and a participation constraint of p(w + b − 1)^α + (1 − p)(w − 1)^α ≥ 1. At the optimum, we need p(w + b − 1)^α + (1 − p)(w − 1)^α = 1 = w^α. This implies that b = p^(−1/α).

4. (a) Below is a representation of the extensive form for T = 1. [Diagram omitted.]
(b) Regardless of T, the probability with which player i gets to make an offer can be viewed as his bargaining weight. Whenever player 1 gets to make the offer, he offers q2δ to player 2 (and demands 1 − q2δ for himself), and player 2 accepts.
(c) The expected equilibrium payoff for player i is qi.
(d) The more risk averse a player is, the lower is the offer that he is willing to accept. Thus, an increase in a player's risk aversion should lower the player's equilibrium payoff.

5. (a) n2 = 6. (c) n1 = 2.
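The bonus formula b = p^(−1/α) in Exercise 3 can be verified numerically: with w chosen so that w^α = 1 (so w = 1), the binding constraint pins down b. The parameter values below are chosen only for illustration.

```python
# Check of Exercise 3's bonus: with w = 1 (so w**alpha = 1), the binding
# constraint p*(w + b - 1)**alpha + (1 - p)*(w - 1)**alpha = 1 requires
# p * b**alpha = 1, i.e., b = p**(-1/alpha).
p, alpha = 0.4, 0.5   # sample values (assumed for illustration)
w = 1.0
b = p ** (-1.0 / alpha)

lhs = p * (w + b - 1) ** alpha + (1 - p) * (w - 1) ** alpha
print(b, lhs, w ** alpha)  # b = 6.25; both constraint values equal 1
```

Both constraints hold with equality, as the solution requires.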
When player 1 offers q2δ or more, player 2 accepts; when player 2 gets to offer, the reasoning is symmetric (she offers q1δ to player 1).

(b) n2 = 1, 6, or 7.

26 Bayesian Nash Equilibrium and Rationalizability

1. (a) The Bayesian normal form is: [matrix omitted]. (Z, W) is the only rationalizable strategy profile.
(b) The Bayesian normal form is: [matrix omitted]. XAYB is a dominant strategy for player 1. Thus, (XAYB, V) is the only rationalizable strategy profile.
(c) False.

2. Player 1's payoff is given by u1 = (x1 + x2L + x1x2L) + (x1 + x2H + x1x2H) − x1², so player 1 solves

max_{x1} (x1 + x2L + x1x2L) + (x1 + x2H + x1x2H) − x1².

The first-order condition is 1 + x2L − x1 + 1 + x2H − x1 = 0, which implies that x1*(x2L, x2H) = 1 + (x2L + x2H)/2. The low type of player 2 gets the payoff u2L = 2(x1 + x2L + x1x2L) − 2x2L², so the first-order condition of the low type of player 2 yields x2L*(x1) = (1 + x1)/2. Similarly, the high type of player 2 obtains u2H = 2(x1 + x2H + x1x2H) − 3x2H², and her first-order condition implies x2H*(x1) = (1 + x1)/3. Solving this system of equations, we find that the equilibrium is given by x1* = 17/7, x2L* = 12/7, and x2H* = 8/7.

3. (a) The extensive-form and normal-form representations are: [omitted]. The set of Bayesian Nash equilibria is equal to the set of rationalizable strategies, which is {(Du, R)}. In equilibrium, the beliefs of player 1A and 1B must be the same.
(b) The extensive-form and normal-form representations in this case are: [omitted].

4. If q1 = 0, then player 2's optimal quantities
are q2L = 1/2 and q2H = 3/8. The low type of player 2 has a best-response function of BR2L(q1) = 1/2 − q1/2, and the high type of player 2 has a best-response function of BR2H(q1) = 3/8 − q1/2. Recall that player 1's best-response function is given by BR1(q2L, q2H) = 1/2 − q2L/4 − q2H/4. To the quantities q2L = 1/2 and q2H = 3/8, player 1's best response is q1 = 5/16. Note that player 2 would never produce more than the quantities above, and player 1 will never produce more than q1 = 5/16. We conclude that each type of player 2 will never produce more than her best response to 5/16: q2L will never exceed 11/32, and q2H will never exceed 7/32. Repeating this logic, we find that the rationalizable set is the single strategy profile that simultaneously satisfies the best-response functions, which is the Bayesian Nash equilibrium.

3. (c) Regarding rationalizability, the difference between the settings of parts (a) and (b) is that in part (b) the beliefs of players 1A and 1B do not have to coincide. The equilibrium in part (b) is (Du, R), and the set of rationalizable strategies is S.

5. (a) u1(p1, p2) = 42p1 + p1p2 − 2p1² − 220 − 10p2 and u2(p1, p2) = (22 + 2c)p2 + p1p2 − 2p2² − 22c − cp1.
(b) BR1(p2) = (42 + p2)/4 and BR2(p1) = (22 + 2c + p1)/4.
(c) p1* = 14 and p2* = 14.
(d) p1* = 14, p2*(c = 6) = 12, and p2*(c = 14) = 16.

6. (a) [Omitted.] (b) (BA′, Y).

7. (LL′, U).

8. It is easy to see that, whatever is the strategy of player j, player i's best response has a "cutoff" form in which player i bids if and only if his draw is above some number αi. This is because the probability of winning when
i bids is increasing in i's type. Let αj be player j's cutoff. Then, by bidding, player i of type xi obtains an expected payoff of

b(xi, αj) = 1·αj + (1 − αj)(−2) if xi ≤ αj, and
b(xi, αj) = 1·αj + (xi − αj)(2) + (1 − xi)(−2) if xi > αj.

Note that, as a function of xi, b(·, αj) is the constant 3αj − 2 up to αj and then rises with a slope of 4. Player i's best response is to fold if b(xi, αj) < −1 and bid if b(xi, αj) > −1. Note that if αj > 1/3 then player i optimally bids regardless of his type (meaning that αi = 0), if αj < 1/3 then player i's optimal cutoff is αi = (1 + αj)/4, and if αj = 1/3 then player i's optimal cutoff is any number in the interval [0, 1/3]. Examining this description of i's best response, we see that there is a single Nash equilibrium and it has α1 = α2 = 1/3.

9. The unique Nash equilibrium is (Bf, B); that is, player 1 bids when he has the Ace and folds when he has the King, and player 2 always bids.

27 Lemons, Auctions, and Information Aggregation

1. There is always an equilibrium in this game, regardless of p.

2. Your optimal bidding strategy is b = v/3. Here, you should bid b(3/5) = 1/5.

3. (a) Colin wins and pays 82.
(b) Colin wins and pays 82 (or 82 plus a very small number).
(c) The seller should set the reserve price at 92; Colin wins and pays 92.
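The cutoff equilibrium derived above (the Chapter 26 bidding exercise) can be double-checked by iterating the best-response mapping αi = (1 + αj)/4; the iteration converges to the fixed point α = 1/3, where the constant part 3α − 2 of b(·, α) equals the folding payoff of −1.

```python
# Iterate the best-response mapping alpha_i = (1 + alpha_j)/4 (valid while the
# opponent's cutoff is below 1/3). It is a contraction with factor 1/4, so it
# converges quickly to the equilibrium cutoff alpha = 1/3.
alpha = 0.0
for _ in range(60):
    alpha = (1 + alpha) / 4

print(alpha)                 # -> 1/3 (approximately)
print(3 * alpha - 2)         # -> -1: the marginal type is indifferent
```

At the fixed point, a type exactly at the cutoff is indifferent between bidding and folding, which is what defines the equilibrium.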
When either 1000 < p ≤ 2000 or p > 1000 + 2000q, there is an equilibrium in which neither the lemon nor the peach is traded (Jerry does not trade and Freddie trades neither car). In these cases, the only equilibrium involves no trade whatsoever.

4. To show that bidding vi is weakly preferred to bidding any x < vi, consider three cases, letting bj denote the other player's bid. In the first case, bj < x < vi: bidding either x or vi ensures that player i wins and receives the payoff vi − bj, so it does not matter whether player i bids x or vi. Next consider the case in which x < bj < vi: bidding x causes player i to lose, and he receives a payoff of 0, but bidding vi allows player i to win and receive a payoff of vi − bj. Finally, consider the case in which x < vi < bj: he loses either way.

5. As discussed in the text, without a reserve price, the expected revenue of the auction is 1000/3. With a reserve price r, player i will bid at least r if vi > r; the probability that vi < r is r/1000, so the probability that both players have a valuation that is less than r is (r/1000)². Consider, for example, setting a reserve price of 500. The probability that at least one of the players' valuations is above 500 is 1 − (1/2)² = 3/4. Thus, the expected revenue of setting r = 500 is at least 500(3/4) = 375, which exceeds 1000/3.
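The revenue comparison in Exercise 5 can also be confirmed by simulation, assuming the setting is a second-price auction with two bidders whose valuations are independently uniform on [0, 1000] (consistent with the 1000/3 and (r/1000)² expressions above) and that bidders bid truthfully, which is weakly dominant here.

```python
# Monte Carlo comparison: expected revenue with no reserve (= E[min(v1, v2)]
# = 1000/3) versus a reserve of r = 500 (a sale occurs only if someone meets
# the reserve; the winner pays max(second valuation, reserve)).
import random

random.seed(0)
n = 200_000
rev_no_reserve, rev_reserve = 0.0, 0.0
for _ in range(n):
    v1, v2 = random.uniform(0, 1000), random.uniform(0, 1000)
    lo, hi = min(v1, v2), max(v1, v2)
    rev_no_reserve += lo
    if hi >= 500:
        rev_reserve += max(lo, 500.0)

print(rev_no_reserve / n, rev_reserve / n)  # about 333 versus about 417
```

The simulated reserve-price revenue comfortably exceeds the 375 lower bound computed in the solution.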
6. (a) Clearly, if p < 200 then John would never trade. Consider two cases for p between 200 and 1,000. First, suppose 600 ≤ p ≤ 1,000. In this case, Jessica will not trade if her signal is x2 = 200, because she then knows that 600 is the most the stock could be worth. John therefore knows that Jessica would only be willing to trade if her signal is 1,000; but then, if John's signal is 1,000 and he offers to trade, the trade could occur only when v = 1,000, in which case he would have been better off not trading. Realizing this, Jessica deduces that John would only be willing to trade if x1 = 200, but then she never has an interest in trading. Thus, neither player will trade in equilibrium. Similar reasoning establishes that trade never occurs in the case of p < 600 either. Thus, the only equilibrium has both players choosing "not," regardless of their types.
(b) It is not possible for trade to occur in equilibrium with positive probability; trade never occurs in equilibrium.
(c) Intuitively, we reached this conclusion by tracing the implications of common knowledge of rationality (rationalizability), so the result does not rely on equilibrium. This may seem strange compared to what we observe about real stock markets, where trade is usually vigorous. In the real world, players may lack common knowledge of the fundamentals or each other's rationality, trade may occur due to liquidity needs, and there may be differences in owners' abilities to run firms.

8. Suppose player i believes that the other players' bids are 10 and 25, and let vi = 20. If player i bids 20 then she loses and obtains a payoff of 0. However, if player i bids 25 then she wins and obtains a payoff of 20 − 10 = 10. Thus, bidding 25 is a best response, but bidding 20 is not. The equilibrium bidding strategy for player i is bi(vi) = vi²/2.
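If the strategy bi(vi) = vi²/2 refers to a two-bidder all-pay auction with valuations uniform on [0, 1] (an assumption consistent with that functional form), a grid search confirms that it is a best response to itself: facing an opponent who bids v′²/2, a bid of b wins with probability √(2b) and is paid win or lose.

```python
# Best-response check for the candidate all-pay equilibrium b(v) = v**2 / 2
# with two bidders and valuations uniform on [0, 1] (assumed setting).
import math

def payoff(v, b):
    win_prob = min(math.sqrt(2 * b), 1.0)   # P(opponent's bid v'**2/2 < b)
    return v * win_prob - b                  # all-pay: the bid is paid regardless

for v in (0.2, 0.5, 0.9):
    grid = [i / 100000 for i in range(50001)]          # candidate bids in [0, 0.5]
    best = max(grid, key=lambda b: payoff(v, b))
    print(v, best, v * v / 2)   # the grid argmax is approximately v**2 / 2
```

For each sample type v, the numerically optimal bid matches v²/2 up to the grid resolution, consistent with the reported strategy.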
7. The equilibrium strategies can be represented by numbers x̄1 and x̄2, where John trades if and only if x1 ≤ x̄1 and Jessica trades if and only if x2 ≥ x̄2. For John, trade yields an expected payoff of

∫_{100}^{x̄2} (1/2)(x1 + x2) F2(x2) dx2 + ∫_{x̄2}^{1000} p F2(x2) dx2 − 1,

whereas not trading yields ∫_{100}^{1000} (1/2)(x1 + x2) F2(x2) dx2. Simplifying, we see that John's trade payoff is greater than his no-trade payoff when

∫_{x̄2}^{1000} [p − (1/2)(x1 + x2)] F2(x2) dx2 ≥ 1.    (∗)

For Jessica, trade implies an expected payoff of ∫_{100}^{x̄1} [(1/2)(x1 + x2) − p] F1(x1) dx1 − 1, whereas no trade gives her a payoff of zero. Thus, she prefers trade when

∫_{100}^{x̄1} [(1/2)(x1 + x2) − p] F1(x1) dx1 ≥ 1.    (∗∗)

By the definitions of x̄1 and x̄2, (∗) holds for all x1 ≤ x̄1 and (∗∗) holds for all x2 ≥ x̄2. Integrating (∗) over x1 < x̄1 yields

∫_{100}^{x̄1} ∫_{x̄2}^{1000} [p − (1/2)(x1 + x2)] F2(x2) F1(x1) dx2 dx1 ≥ ∫_{100}^{x̄1} F1(x1) dx1,

and integrating (∗∗) over x2 > x̄2 yields

∫_{100}^{x̄1} ∫_{x̄2}^{1000} [(1/2)(x1 + x2) − p] F2(x2) F1(x1) dx2 dx1 ≥ ∫_{x̄2}^{1000} F2(x2) dx2.

These inequalities cannot be satisfied simultaneously (the two left sides sum to zero), unless trade never occurs in equilibrium, so that x̄1 is less than 100 and x̄2 exceeds 1,000, implying that all of the integrals in these expressions equal zero.

9. (a) Player 1's best-response bidding strategy is b1(y1) = y1 for y1 ≥ 3 and b1(y1) = 0 otherwise.
(b) Player i will bid up to yi.

28 Perfect Bayesian Equilibrium

1. Player 1's actions may signal something of interest to the other players. This sort of signaling can arise in equilibrium as long as, given the rational response of the other players, player 1 is indifferent or prefers to signal.

2. Let w = Prob(H | p) and let r = Prob(H | p′).
(a) The separating equilibrium is (pp′, NE′) with beliefs w = 1 and r = 0.
(b) For q ≤ 1/2, there is a pooling equilibrium in which the strategy profile is (pp′, NN′) and the beliefs are w = q and any r ≤ 1/2. For q > 1/2, it is (pp′, EE′) with beliefs w = q and any r ≤ 1/2. There are also similar pooling equilibria in which the entrant chooses E and has any belief r ≥ 1/2.

3. (a) No. (b) Yes. (c) Yes.

4. (a) Yes: (AA′, U) with q = 1. (b) Yes: (RL′, Y) with belief q ≤ 3/5. (c) (LL′, D) with q ≤ 1/3.

(a) If the worker is type L,
then the firm must set w1 so that (3/5)100 + w1 ≥ 110, which means w1 ≥ 50. The firm's optimal choice is w1 = 50.

5. (a) If the worker is type L, then the firm offers z = 1 and w = 40; if the worker is type H, then the firm offers z = 0 and w = 35.
(b) Note that the H type would obtain 75 + 35 = 110 by accepting the safe job. Thus, if the firm wants to give the H type the incentive to accept the risky job, it must set the wage at 50, which yields a higher payoff than would be the case if the firm gave the H type the incentive to select the safe job.
(c) The answer depends on the probabilities of the H and L types. If the firm only offers a contract with the safe job and wants to employ both types, then it is best to set the wage at 35, which yields a payoff of 145. If the firm follows the strategy of part (b), then it expects 150p + 145(1 − p) = 145 + 5p; clearly, the part (b) strategy is better. Finally, the firm might consider offering only a contract for the risky job, with the intention of only attracting the H type. In this case, the optimal wage is 40 and the firm gets an expected payoff of 160p. This "H-only" strategy is best if p ≥ 145/155.

6. (a) c ≥ 2. (b) c ≤ 2.

7. In the perfect Bayesian equilibrium, player 1 bids with both the Ace and the King; when player 1 is dealt the Queen, he bids with probability 1/3. Player 2 bids with the Ace and folds with the Queen; when player 2 is dealt the King and player 1 bids, player 2 folds with probability 1/3.

8. (a) The perfect Bayesian equilibrium is given by E0N1.
(b) The innocent type provides evidence, whereas the guilty type does not. The separating perfect Bayesian equilibrium is given by OB′, FS′, y = 1, r = 0, and q = 1. The following is such a pooling equilibrium: OO′, SF′, y = 0, r = 0, and q = 1/2.
(c) In the perfect Bayesian equilibrium, each type x ∈ {0, 1, . . . , K − 1} provides evidence and the judge believes that he faces type K when no evidence is provided.
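The comparison in Exercise 5(c) can be checked directly from the three expected payoffs derived above (145 for the safe-job-only contract, 145 + 5p for the part (b) menu, and 160p for the "H-only" risky contract).

```python
# Which of the firm's three strategies is best, as a function of p
# (the probability that the worker is the H type)?
def best_strategy(p):
    payoffs = {
        "safe-only": 145.0,        # wage 35, employs both types
        "menu":      145 + 5 * p,  # part (b): 150p + 145(1 - p)
        "H-only":    160 * p,      # risky job at wage 40
    }
    return max(payoffs, key=payoffs.get)

print(best_strategy(0.90))  # menu
print(best_strategy(0.95))  # H-only  (the threshold is p = 145/155, about 0.935)
```

The switch point between the menu and the H-only contract occurs exactly where 145 + 5p = 160p, i.e., p = 145/155, matching the solution.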
29 Job-Market Signaling and Reputation

1. If high types and low types have the same cost of education, then they would have the same incentive to become educated. Education would not be a useful signal in this setting.

2. It is easy to see that EE′ cannot be a pooling equilibrium. Consider the worker's strategy of EN′. Consistent beliefs are p = 0 and q = 1, so the firm plays MC′; neither the high nor the low type has the incentive to deviate. There is also a pooling equilibrium in which NN′ is played; here p = 1/2, q is unrestricted, the firm selects M′, and the firm's choice between M and C is whatever is optimal with respect to q. It is easy to see that NE′ cannot be an equilibrium, because the low type is not behaving rationally in this case.

3. (a) There is no separating equilibrium; the low type always wants to mimic the high type. Next consider pooling equilibria. There is an equilibrium given by (NHNL, A) with belief q = p, provided that p is such that the worker accepts: this requires 2p − (1 − p) ≥ 0, which simplifies to p ≥ 1/3. There is also an equilibrium given by (OHOL, R) with belief q ≤ 1/3; this equilibrium exists regardless of p. In fact, there is no other Bayesian Nash equilibrium.

4. (b) Yes: the PBE strategy profile is a Bayesian Nash equilibrium. This relation does not hold in general, because of the prospect of unreached information sets.
(c) Yes, because the presence of the C type in this game (and rationality of this type) implies that player 2's information set is reached with positive probability, by the same logic conveyed in the text.
5. (a) The extensive form is: [diagram omitted]. In the Bayesian Nash equilibrium, player 1 selects action I with probability r = p/(1 − p), and player 2 has belief q = 1/2 and plays I with probability 1/4. Player 1 randomizes so that player 2 is indifferent between I and N; this requires 2q − 2(1 − q) = 0, which simplifies to q = 1/2. In equilibrium, q = p/(p + r − pr); substituting and solving for r, we get r = p/(1 − p). Also, player 2 randomizes so that player 1 is indifferent between I and N, which implies that s = 1/4. If p > 1/2, then player 2 always plays I when her information set is reached, and player 1 always plays S; in this case, equilibrium requires that player 1's strategy is II′SB′, that player 2 has belief q = p, and that player 2 selects I. This is because 2p − 2(1 − p) = 4p − 2 > 0.

6. In period 2, player 2 will accept p2 if and only if v ≥ p2, so player 1's offer will solve max_{p2} p2 · prob(p2 < v) over the remaining types. This yields p2 = c(p1)/2. In period 1, player 2 accepts p1 if and only if v ≥ c(p1). Using that player 2 with v = c(p1) is indifferent between accepting and rejecting p1, we find that c(p1) = p1/(1 − δ/2). In period 1, player 1 maximizes p1 · prob(v ≥ c(p1)) + δ p2 · prob(p2 ≤ v < c(p1)). Substituting for player 2's behavior and the period-2 price, and solving the first-order condition, we find that p1 = 2[1 − δ/2]²/(4 − δ).

7. (a) The extensive form is: [diagram omitted]. In the Bayesian Nash equilibrium, player 1 forms a firm (F) if 10p − 4(1 − p) ≥ 0, which simplifies to p ≥ 2/7; player 1 does not form a firm (O) if p < 2/7.
(b) The extensive form is: [diagram omitted]. If both types of the other player select Y, then the H type prefers Y if 10p − 4(1 − p) ≥ 0, which simplifies to p ≥ 2/7; the L type weakly prefers Y.
(c) Clearly, regardless of p, player 1 wants to choose F with the H type and O with the L type.
(d) If p ≥ 2/7 then there is a pooling equilibrium in which NN′ and F′ are played.
If p ≤ 2/7 then there is a pooling equilibrium in which NN′ and OO′ are played (and player 1 puts a probability on H that is less than 2/7 conditional on receiving a gift). If, in addition to p ≥ 2/7, it is the case that g ∈ [5, 10], then there is also a pooling equilibrium featuring GG′ and FO′.

8. (a) A player is indifferent between O and F when he believes that the other player will choose O for sure. Thus, (O, O) is a Bayesian Nash equilibrium, regardless of p.
(b) If both types of the other player select Y, then the H type expects −g + p(w + 10) + (1 − p)0 from giving a gift, whereas he expects pw from not giving a gift. Thus, he has the incentive to give a gift if 10p ≥ g. The L type expects −g + p(w + 5) + (1 − p)0 if he gives a gift, whereas he expects pw if he does not give a gift; the L type prefers not to give if g ≥ 5p.
(c) If the other player behaves as specified, then there is a separating equilibrium if and only if the types of player 2 have the incentive to separate. This is the case if 10 − g ≥ 0 and 0 ≥ 5 − g, which simplifies to g ∈ [5, 10].
(d) By the incentive compatibility conditions for the low and high types, the equilibrium, therefore, exists if g ∈ [5p, 10p].

9. (a) The manager's optimal contract solves max_{ê,x̂} ê − x̂ subject to x̂ − αê² ≥ 0 (which is necessary for the worker to accept). Clearly, the manager will pick x̂ and ê so that the constraint binds. Solving the first-order condition, we get ê = 1/(2α) and x̂ = 1/(4α).
(b) Using the solution of part (a), we obtain (e, x) = (4, 2) for the low type and (e, x) = (4/3, 2/3) for the high type.
(c) The worker will choose the contract that maximizes x̂ − αê². The high type of worker would get a payoff of −4 if he chooses contract (4, 2), rather than getting 0 under the contract designed for him. The low type, however, prefers to select contract (4/3, 2/3), which gives him a payoff of 4/9, whereas he would obtain 0 by choosing the contract designed for him.
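The separating conditions of Exercise 8 can be checked directly from the payoff comparisons above (H gives iff 10p ≥ g; L refrains iff g ≥ 5p). The prior p below is a sample value chosen only for illustration.

```python
# Gift-giving incentives: each type compares giving (-g + p*(w + bonus))
# with not giving (p*w), where the bonus is 10 for the H type and 5 for L.
def h_gives(g, p, w=1.0):
    return -g + p * (w + 10) >= p * w   # equivalent to 10p >= g

def l_gives(g, p, w=1.0):
    return -g + p * (w + 5) > p * w     # equivalent to 5p > g

p = 0.8   # sample prior (assumed for illustration): 5p = 4, 10p = 8
for g in (3.0, 6.0, 9.0):
    print(g, h_gives(g, p), l_gives(g, p))
# separation (H gives, L does not) holds only for g between 5p and 10p
```

With p = 0.8, only the middle gift cost (g = 6) separates the types, matching the g ∈ [5p, 10p] condition.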
(d) The incentive compatibility conditions for the low and high types, respectively, are

xL − (1/8)eL² ≥ xH − (1/8)eH²  and  xH − (3/8)eH² ≥ xL − (3/8)eL²,

and the participation constraints are

xL − (1/8)eL² ≥ 0  and  xH − (3/8)eH² ≥ 0.

(e) Following the hint, we can substitute for xL and xH using the equations xL = xH − (1/8)eH² + (1/8)eL² and xH = (3/8)eH². Note that combining these gives xL = (1/4)eH² + (1/8)eL². Substituting for xL and xH yields the following unconstrained maximization problem:

max_{eL, eH} eH − (3/8)eH² + eL − (1/4)eH² − (1/8)eL².

Calculating the first-order conditions, we obtain eL* = 4, eH* = 4/5, xL* = 54/25, and xH* = 6/25.

(f) The high type exerts less effort than is efficient, because this helps the manager extract more surplus from the low type.
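The solution to Exercise 9(e) can be verified numerically, assuming (consistent with the substitutions above) that the manager maximizes total effort minus total pay subject to the binding constraints xH = (3/8)eH² and xL = (1/4)eH² + (1/8)eL².

```python
# Grid search over the reduced objective
#   f(eL, eH) = eL + eH - (5/8)*eH**2 - (1/8)*eL**2,
# which is the maximand above after collecting the eH**2 terms.
def f(eL, eH):
    return eL + eH - (5 / 8) * eH ** 2 - (1 / 8) * eL ** 2

grid = [i / 100 for i in range(0, 801)]      # effort levels in [0, 8]
eL, eH = max(((a, b) for a in grid for b in grid), key=lambda t: f(*t))
xH = (3 / 8) * eH ** 2
xL = (1 / 4) * eH ** 2 + (1 / 8) * eL ** 2
print(eL, eH, xL, xH)   # approximately 4, 4/5, 54/25, 6/25
```

The grid optimum reproduces the reported contract: eL* = 4, eH* = 4/5, xL* = 54/25 = 2.16, and xH* = 6/25 = 0.24.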
that any belief for player i that puts positive probability only on strategies in X−i can also be considered in the context of the larger Y−i. Furthermore, if a strategy of player i is dominated with respect to strategies Y−i, then it also must be dominated with respect to the smaller set X−i. The operators B and UD are therefore monotone, meaning that X ⊂ Y implies B(X) ⊂ B(Y) and UD(X) ⊂ UD(Y); this follows from the definitions of Bi and UDi. Using the monotone property, we see that UD(S) = R¹ ⊂ S = R⁰ implies R² = UD(R¹) ⊂ UD(R⁰) = R¹. By induction, Rᵏ ⊂ Rᵏ⁻¹ implies Rᵏ⁺¹ = UD(Rᵏ) ⊂ Rᵏ = UD(Rᵏ⁻¹).

(a) Suppose not. Then there are an infinite number of rounds in which at least one strategy is removed for at least one player, which means an infinite number of strategies are eventually deleted. This contradicts that S is finite.

(c) Suppose not. Then it must be that B(Rᵏ⁻¹) = ∅, which implies that Bi(Rᵏ⁻¹) = ∅ for some i. However, we know that the best-response set is nonempty (assuming the game is finite), which contradicts what we assumed at the start. From (b), we know strategies that are removed are never "put back."

2. (a) For any p such that 0 ≤ p ≤ 1, it cannot be that 6p > 5 and 6(1 − p) > 5. This is discussed in the lecture material for Chapter 7 (see Part II of this manual).
(b) Let p denote the probability that player 1 plays U and let q denote the probability that player 2 plays M. Suppose that C ∈ BR. Then it must be that the following inequalities hold: 5pq ≥ 6pq, 5(1 − p)(1 − q) ≥ 6(1 − p)(1 − q), −100p(1 − q) ≥ 0, and −100(1 − p)q ≥ 0. In particular, this requires that (1 − p)q = p(1 − q) = 0. These conditions cannot be satisfied simultaneously, which contradicts the assumption of uncorrelated beliefs.
(c) Consider the belief θ−1 that (U, M) is played with probability 1/2 and that (D, N) is played with probability 1/2. We have that u1(C, θ−1) = u1(A, θ−1) = 5 and u1(B, θ−1) = 3.

Part IV
Sample Examination Questions

On the following pages are some sample examination questions, in the form of two examinations used by Joel Watson for undergraduates at UCSD in 2007 (wild fires in San Diego shortened the term in Fall 2007, so the examinations covered a relatively narrow set of topics) and a group of questions from Jesse Bull. Instructors are welcome to send their own sample questions to Watson (jwatson@ucsd.edu), who will add them to those shown here. Please also report errors to Watson.
Economics 109 Midterm Exam I, Prof. Watson, Fall 2007

You have 50 minutes to complete this examination. You may not use your notes, calculators, or any books during the examination. You may use the scratch paper that has been distributed but submit only your answer sheet. Write your answers, in the spaces provided on the answer sheet that has been distributed separately, including all necessary derivations. You do not need to show any work in your answers to questions 1-5; these questions will be graded only on the basis of whether your final answers are correct. Write your name in the designated space on the answer sheet. In the space marked "version," write the following number: 4.

1. In the extensive-form game pictured on the right, how many (pure) strategies does player 1 have? Do not name the strategies; simply report how many there are.

2. In the normal-form game pictured on your answer sheet, is player 1's strategy M dominated? If so, describe a strategy that dominates it. If not, describe a belief to which M is a best response.

3. In the normal-form game pictured on your answer sheet, suppose that player 1 believes that player 2 is equally likely to play any of her strategies. What is player 1's best response?

4. Consider the normal-form game pictured on your answer sheet.
(a) List all of the efficient strategy profiles in this game.
(b) Calculate the rationalizable set of strategy profiles in this game.

5. In the normal-form game pictured on your answer sheet, [remainder of question not legible in this copy].
6. Consider a game in which, simultaneously, player 1 selects a number x ∈ [0, 20] and player 2 selects a number y ∈ [0, 20]. The payoffs are given by:

u1(x, y) = 2xy − x²
u2(x, y) = 10y + xy − y²

(a) Calculate and graph each player's best-response function, as a function of the opposing player's pure strategy (equivalently, expected strategy).
(b) Determine the rationalizable strategy profiles for this game. Show your logic.

7. Consider a strategic setting in which ten firms simultaneously and independently decide whether to locate in the city (X) or in the suburbs (Y). In this game there are ten players (n = 10) and two strategies for each player. Each firm's payoff of locating in the suburbs is 20, and this is independent of how many other firms locate there. However, if firm i locates in the city, then its payoff is vi(m), where m is the total number of firms (including firm i) that locate in the city. Suppose that v1(m) = v2(m) = v3(m) = v4(m) = 31 − m, v5(m) = v6(m) = v7(m) = v8(m) = 31 − 3m, and v9(m) = v10(m) = 31 − 2m. You are to determine the set of rationalizable strategies in this game. To show that you have done this accurately, answer the following questions:
(a) How many strategy profiles are contained in the set of rationalizable strategy profiles?
(b) Describe one of the rationalizable strategy profiles.
(c) Describe the various values of m that can arise in a rationalizable outcome.
Economics 109 Midterm Examination I Answer Sheet, Fall 2007, Prof. Watson.
Your name: ______  Your student ID: ______  Version: ______
(Point values: questions 2-7 are worth 4, 6, 6, 8, 8, and 8 points; 40 total.)

[The payoff matrices for the pictured games appear on the answer sheet; they are not legible in this copy and are not reproduced.]

1. Number of strategies that player 1 has (circle one): 1 2 3 4 5 6 7 8 10 16 64 256

2. Is M dominated? Circle one: YES NO
If so, name a strategy that dominates it; if not, name a belief to which M is a best response:

3. Player 1's best response:

4. (a) The efficient strategy profiles are:
(b) The rationalizable set is: R =

6. (a) Best-response functions (graph on axes running from 0 to 20):
BR1(y) =
BR2(x) =
(b) Rationalizable set: R =

7. (a) Number of rationalizable strategy profiles (circle one): 0 1 2 4 8 16 256 1024
(b) One of the rationalizable strategy profiles is:
(c) Values of m that can arise in a rationalizable outcome:
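For checking answers to Question 6 of Midterm I: the first-order conditions give BR1(y) = y and BR2(x) = (10 + x)/2, and iterating the best responses from the boundary of [0, 20] × [0, 20] converges to the unique rationalizable profile.

```python
# Question 6 of Midterm I: u1 = 2xy - x**2 gives BR1(y) = y (FOC: 2y - 2x = 0);
# u2 = 10y + xy - y**2 gives BR2(x) = (10 + x)/2 (FOC: 10 + x - 2y = 0).
def br1(y):
    return y

def br2(x):
    return (10 + x) / 2

x, y = 20.0, 20.0        # start from the upper bounds of the strategy space
for _ in range(100):
    x, y = br1(y), br2(x)

print(x, y)  # converges to the fixed point (10, 10)
```

Starting from the lower bounds (0, 0) converges to the same point, so the rationalizable set is the single profile (10, 10), which is also the Nash equilibrium.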
Economics 109 Midterm Exam II, Prof. Watson, Fall 2007

You have 50 minutes to complete this examination. You may not use your notes, calculators, or any books during the examination. You may use the scratch paper that has been distributed but submit only your answer sheet. Write your answers, in the spaces provided on the answer sheet that has been distributed separately, including all necessary derivations. You do not need to show any work in your answer to question 2; this question will be graded only on the basis of whether your final answers are correct. Write your name in the designated space on the answer sheet. In the space marked "version," write the following number: 2.

1. For the normal-form game pictured on your answer sheet, find the pure-strategy Nash equilibria and describe which, if any, are efficient.

2. For the normal-form game pictured on your answer sheet, calculate the mixed-strategy Nash equilibrium.

3. Consider a two-player game in which the strategy spaces are S1 = [0, ∞) and S2 = [0, ∞); that is, each player selects a number that is greater than or equal to zero. Let s1 denote the strategy of player 1 and let s2 denote the strategy of player 2. Suppose that the payoff functions are given by u1(s1, s2) = 2s1 + 2as1s2 − s1² and u2(s1, s2) = 2s2 + 2as1s2 − s2², where a is a constant parameter.
(a) Is there any value of a such that this game has no Nash equilibrium? If so, provide such a value. In either case, explain your answer.
(b) Is there any value of a such that this game has an efficient Nash equilibrium? If so, provide such a value. In either case, explain your answer.

4. Consider the following two-player game. First, player 1 selects a number x, which must be greater than or equal to zero. Player 2 observes x. Then, simultaneously and independently, player 1 selects a number y1 and player 2 selects a number y2, at which point the game ends. Player 1's payoff is u1 = 2y1y2 + xy2 − y1² and player 2's payoff is u2 = 4y2 − 4xy2 − 2y1y2 − y2². Calculate the subgame perfect equilibrium of this game and report the equilibrium strategies.
Equilibrium strategy profile: 5. a = (b) Is there an a such that the game has an efficient Nash equilibrium? Circle one: YES NO If yes. so your knowledge of the appropriate techniques can be verified. For the other questions. . Instructors' Manual for Strategy: An Introduction to Game Theory 1 Copyright 2002. with the exception of those denoted by x and y. the set of best responses for player 2 is BR2(µ1 ) = {L. Submit only your answer sheet at the end of the examination period.* Consider the normal form game pictured here: (a) What is the set of rationalizable strategy profiles in this game? (b) Determine the game’s pure strategy Nash equilibrium strategy profile(s). M) is a Nash equilibrium. • (U. for these questions. ON THE FIRST PAGE OF YOUR ANSWER SHEET. It is important that you include the essential derivations on the answer sheet.Economics 109 Final Examination Prof. PLEASE SIGN THE WAIVER IF YOU AGREE TO IT. write your final answers in the space provided on the separate answer sheet that you have been given. Questions marked with an asterisk (*) will be graded only on the basis of your final answers (not your derivations). or any books during the examination. (c) Does this game have a mixed strategy equilibrium in which both X and Y are played with positive probability? 2. M) is an inefficient strategy profile. Use scratch paper as you wish. do not distribute. and • For the belief µ1 = ( 31 . • (U. but you may not submit your scratch paper. Fall 2007 You have two hours and fifty minutes to complete this examination. which puts probability 1/3 on U and 2/3 on D. Keep your eyes on your own examination sheets. write your complete answers (including derivations) on the separate answer sheet. 23 ). You may not use your notes. 1. N}. a calculator.* Consider the normal form game pictured here: All of the payoff numbers are specified. 2008 by Joel Watson For instructors only. Watson. Find numbers for x and y such that the following three statements are all true. 
3.* Consider the following game: [extensive form omitted] The following two questions refer to the subgame perfect Nash equilibrium (in which the three players are behaving sequentially rationally).
(a) Solve the game by backward induction and report the resulting strategy profile.
(b) How many proper subgames does this game have?

4. Consider the following stage game: [payoff matrix omitted] Suppose this is the stage game in an infinitely repeated game. Assuming that the players discount future payoffs according to the discount factor δ, under what conditions is there a subgame perfect equilibrium in which (C, X) is played in each period? (Use grim trigger strategies.)

5. Consider a dynamic pricing problem for a monopolist who faces two types of customers (H type and L type) over two periods of time, as in the example discussed in class and in the textbook. Suppose the types of customers have values of consuming the durable good in each period as shown in the following table: [table omitted] Suppose there is one H type customer and one L type customer.
(a) What price would the monopolist set in the second period if neither customer purchased in the first period?
(b) What is the monopolist's optimal first-period price?
6. Consider a four-period bargaining game in which player 1 would make the offer in periods 1 and 4, and player 2 would make the offer in periods 2 and 3. That is, in period 1 player 1 makes an offer m1 to player 2. If player 2 rejects player 1's offer, then the game proceeds to period 2, where player 2 makes an offer m2 to player 1. If player 1 rejects this offer, then the game proceeds to period 3, where player 2 makes another offer m3. If player 1 rejects this offer, then the game proceeds to period 4, where player 1 makes an offer m4. If an agreement is reached in period t, then in this period the player who accepted the offer gets mt dollars and the other player gets 1 − mt dollars. The dollar amounts are discounted relative to earlier periods, where δ is the discount factor for both players per period. If an agreement is not reached by the end of the fourth period, then both players get 0.
(a) In the subgame-perfect equilibrium of this game, what is the offer that player 1 makes in the fourth period (contingent on agreement not occurring earlier)?
(b) In the subgame-perfect equilibrium of this game, what is the offer that player 2 makes in the third period?
(c) In the subgame-perfect equilibrium of this game, what is the offer that player 2 makes in the second period?
(d) In the subgame-perfect equilibrium of this game, what is the offer that player 1 makes in the first period?

7. Consider the following game with nature: [extensive form omitted]
(a) Does this game have any separating perfect Bayesian equilibrium? Show your analysis and, if there is such an equilibrium, report it.
(b) Does this game have any pooling perfect Bayesian equilibrium? Show your analysis and, if there is such an equilibrium, report it.
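The backward-induction logic of the four-period bargaining question (question 6) can be verified with a short recursion. This is an illustrative sketch, not exam material; it assumes the standard reading that the period-4 responder (player 2) accepts any nonnegative offer, so m4 = 0, and that each earlier responder accepts exactly the discounted value of his or her continuation payoff.

```python
# Backward induction for the four-period alternating-offer game.
# The responder in period t accepts m_t iff it matches delta times what
# he or she would receive in period t + 1.

def bargaining_offers(delta):
    m4 = 0.0               # period 4: player 2 accepts any nonnegative offer
    m3 = delta * (1 - m4)  # period 3: player 1's continuation is 1 - m4
    m2 = delta * m3        # period 2: player 1's continuation is m3
    m1 = delta * (1 - m2)  # period 1: player 2's continuation is 1 - m2
    return m1, m2, m3, m4

m1, m2, m3, m4 = bargaining_offers(0.9)
```

With δ = 0.9 this gives m4 = 0, m3 = 0.9, m2 = 0.81, and m1 = 0.9(1 − 0.81) = 0.171.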
8. Consider a contracting game between a firm (player 1) and a consumer (player 2). First the firm chooses what type of contract (Good or Bad) to offer the consumer. Then the consumer decides how thoroughly to read it (Read or Not), and finally the consumer decides whether to accept the contract (Accept or Don't). If the consumer selects R, meaning that he exerts effort to read the contract, then the consumer pays a reading cost of 1 unit and learns the firm's choice (G or B). If the consumer selects N, meaning that he does not exert effort to read the contract, then he must decide whether to accept it without observing whether the contract is G or B. The G contract yields a value of 5 to the firm and 5 to the consumer, whereas the B contract yields a value of 8 to the firm and −4 to the consumer. The extensive form of this game is shown below: [extensive form omitted]
(a) In this game, is there a pure-strategy subgame perfect Nash equilibrium in which G is offered by player 1?
(b) Is there a mixed-strategy subgame perfect Nash equilibrium in which G is offered and accepted with positive probability? If so, in equilibrium what is the probability that player 1 selects G and what is the probability that player 2 selects R?

9. Consider a first-price, sealed-bid auction in which there are two bidders (players 1 and 2) vying for one object. Each player's valuation of the object is either 0 (the Low type) or 10 (the High type). Nature selects these with equal probabilities and chooses the valuations of the two players independently. Each player knows his/her own valuation but does not observe the valuation of the other player (knowing only that it is 10 with probability 1/2 and 0 with probability 1/2). After nature selects the valuations, the players simultaneously make bids b1, b2 ≥ 0. The object is given to the player who bids the higher amount; in the case of equal bids, the winner is determined randomly (with equal probabilities). Let vi be the valuation of player i, for i = 1, 2. If player i wins the auction, then he/she gets a payoff of vi − bi. If player i loses, then he/she gets 0. This auction game has a Bayesian Nash equilibrium in which the High type of each player selects his/her bid randomly according to a continuous probability distribution over [0, b̄], for some number b̄. Let the function p(b) represent this probability distribution in the sense that, for any b, p(b) is the probability that the High type bids less than b. Calculate the Bayesian Nash equilibrium and report the Low type's bid, b̄, and the function p.
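A candidate answer to the auction question above (question 9) can be sanity-checked numerically. The functional form below is an illustrative assumption, not given in the exam: the Low type bids 0 and the High type mixes on [0, 5] with c.d.f. p(b) = b/(10 − b). The check confirms that a High bidder's expected payoff, (10 − b)(1/2 + p(b)/2), is constant on the support, which is the indifference condition a mixing equilibrium requires.

```python
# Candidate for question 9: Low types bid 0; High types (value 10) mix on
# [0, 5] with c.d.f. p(b) = b / (10 - b).  A High bidder beats a Low opponent
# (probability 1/2) for sure and beats a High opponent (probability 1/2)
# with probability p(b), so the expected payoff is (10 - b)*(1/2 + p(b)/2).

def p(b):
    return b / (10 - b)

def expected_payoff(b):
    return (10 - b) * (0.5 + 0.5 * p(b))

payoffs = [expected_payoff(b) for b in (0.0, 1.0, 2.5, 4.0, 4.999)]
```

Each entry works out to 5, the payoff from bidding just above the Low types, so no bid in the support is profitable to move away from.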
[Grading table: questions 1 through 9 are worth 7, 5, 6, 7, 8, 8, 8, 8, and 7 points; total 64]

Economics 109 Final Exam Answer Sheet
Fall 2007, Prof. Watson

Your name: ___

1. (a) The set of rationalizable strategy profiles: ___
(b) Name the Nash equilibrium/equilibria: ___
(c) A mixed strategy equilibrium in which X and Y are played? Circle one: YES NO

2. x = ___ y = ___

3. (a) The strategy profile derived by backward induction: ___
(b) Number of proper subgames: ___

4. Analysis: ___ Condition on δ: ___

5. (a) Analysis: ___ Second-period price if no one purchased in the first period: p = ___
(b) Analysis: ___ Optimal first-period price: p1* = ___

6. (a) Player 1's fourth-period offer: m4 = ___
(b) Player 2's third-period offer: m3 = ___
(c) Player 2's second-period offer: m2 = ___
(d) Player 1's first-period offer: m1 = ___

7. (a) Analysis of separating PBE (clearly show whether there is an equilibrium and report all conditions): ___
(b) Analysis of pooling PBE (clearly show whether there is an equilibrium and report all conditions): ___

8. Analysis: ___
8. (continued)
(a) Is there a pure strategy equilibrium in which G is offered? Circle one: YES NO. Explain: ___
(b) Is there a mixed strategy equilibrium in which G is offered? Circle one: YES NO. Explain: ___
Probability that player 1 chooses G: ___
Probability that player 2 selects R: ___

9. Analysis: ___ Low type's bid: b = ___ Function p(b) = ___

Sample Exam Questions
Jesse Bull, 2008

1. An island has 2 lakes and 20 fishermen. On lake 1 the total number of fish caught is given by F1(L1) = 10L1 − (L1)²/2, where L1 is the number of fishermen on lake 1. For lake 2 the relationship is F2(L2) = 5L2, where L2 is the number of fishermen on lake 2. Each fisherman can fish on only one lake. The current institution is that a fisherman gets to keep the average number of fish caught from the lake on which he chose to fish.
(a) Under this institution, what is the total number of fish caught?
(b) The chief of the island asks his economist whether this arrangement is efficient (that is, whether the equilibrium allocation of fishermen to lakes maximizes the number of fish caught). What is the answer to the chief's question? What is the efficient number of fishermen on each lake?
(c) The chief decides to require a fishing license for lake 1, which would require each fisherman who decides to fish on lake 1 to pay the chief x fish. If it is to bring about the efficient allocation of fishermen to lakes, what should x be?

2. Each firm in a duopoly can produce any positive quantity of output by paying only a fixed cost of f. The inverse demand function for the market is p = 10 − Q. Let qi denote the output of firm i, and let Q denote the total output. Assume that f = 9.
(a) Find firm 2's best response function.
(b) Suppose that the firms interact as in the Stackelberg model, with firm 1 choosing its quantity and then firm 2 choosing its quantity (after observing firm 1's choice). Find the equilibrium quantities and profits.
(c) Suppose that the fixed cost f is the result of an operating fee that must be paid to the government in order to produce output. At what level would each firm argue that f should be set, assuming that they interact as in part (b)?
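The Stackelberg question above (question 2) turns on whether the follower's gross profit covers the fixed cost f = 9. The grid search below is a numerical sketch, not exam material; it assumes firm 2 enters only when its net profit from the interior best response (10 − q1)/2 is nonnegative, and stays out (paying nothing) otherwise.

```python
# Stackelberg with fixed cost f = 9 and inverse demand p = 10 - Q.
f = 9

def br2(q1):
    """Firm 2's best response: the interior quantity (10 - q1)/2 if it earns
    at least the fixed cost, and 0 (stay out, pay nothing) otherwise."""
    q2 = max((10 - q1) / 2, 0)
    return q2 if q2 * (10 - q1 - q2) - f >= 0 else 0.0

def profit1(q1):
    q2 = br2(q1)
    return q1 * (10 - q1 - q2) - f

# Leader's problem: search q1 on a fine grid.
best_q1 = max((q / 100 for q in range(1001)), key=profit1)
```

The search indicates that the monopoly quantity q1 = 5 already keeps firm 2 out, since the follower's gross profit of ((10 − 5)/2)² = 6.25 falls short of f = 9.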
3. Player 1 (the hider) and player 2 (the seeker) play the following game. There are four boxes (all turned upside down so that the contents of each cannot be seen) arranged in a straight line (say left to right), with an equal distance, say x, between consecutive boxes. For convenience, the boxes are labeled 1, 2, 3, and 4, in ascending order from left to right. Player 1 is given a $100 bill (by the administrator of the game) to hide under one of the four boxes. Player 2 does not observe where player 1 hides the $100 bill. Once player 1 has hidden the $100 bill under one of the boxes, player 2 must look under one (and only one) of the boxes. If the money is under the box under which player 2 looks, he gets to keep the $100. If it is not, player 1 gets to keep the $100.
(a) Describe the Nash equilibrium.
(b) Suppose that the game is modified as follows. Player 1 again chooses the box in which to hide the money, and then the administrator of the game places the boxes a large distance apart. Player 2 is now required to begin a distance of x to the left of box 1 and must walk to the box that he wishes to look under. Player 2 really does not like exercise and, given the warm weather, suffers disutility of $10 for each distance x that he walks. (Suppose x is equal to the length of a city block. So, for example, looking under box 1 costs player 2 $10, looking under box 2 costs him $20, and so on.) What is the Nash equilibrium of this new game?
(c) Suppose that the game is as in part (b), but player 2 begins at box 2. Will player 2's equilibrium strategy change? Why?

4. Ashley is negotiating an employment contract with a prospective employer, the La Jolla YMCA. The contract specifies two things: (1) Ashley's job description (surfing instructor or tennis instructor) and (2) Ashley's salary t. If Ashley works as a surfing instructor for the YMCA, then her payoff is t − 2000 and the YMCA's payoff is 102,000 − t. If Ashley works as a tennis instructor, then her payoff is t − 6000 and the YMCA's payoff is 10,000 − t. Ashley is better at teaching surfing and enjoys it more. If Ashley and the YMCA fail to reach an agreement, the default outcome of this negotiation problem leads to the payoff vector (w, 0), where Ashley's payoff is listed first; that is, the YMCA gets zero and Ashley obtains w. The value w is due to Ashley's outside opportunity, which is to work as a writer for the San Diego Union Tribune. Solve this bargaining problem using the Nash bargaining solution, under the assumption that Ashley's bargaining power is πA and the YMCA's bargaining power is πY. Describe the joint decision that is made (the job description and salary).
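The negotiation question above (question 4) is a direct application of the generalized Nash bargaining solution. The sketch below is illustrative, not exam material: it assumes the parties pick the job that maximizes the joint value (surfing, worth (t − 2000) + (102,000 − t) = 100,000, versus 4,000 for tennis) and then set the salary t so that Ashley receives her disagreement payoff w plus her share πA of the surplus.

```python
# Nash bargaining for Ashley's contract.  Joint value: surfing 100000,
# tennis 4000; disagreement point (w, 0).  Assumes w < 100000.

def nash_bargain(pi_A, w):
    surplus = 100000 - w            # surplus over disagreement, surfing chosen
    ashley_payoff = w + pi_A * surplus
    t = ashley_payoff + 2000        # salary delivering t - 2000 = payoff
    return t

t = nash_bargain(0.5, 40000)        # example: equal powers, w = 40000
```

With equal bargaining powers and w = 40,000, for instance, the joint decision is the surfing job with salary t = 72,000.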
Is this efficient? 6. Denote the “high efficiency” state by H.) Clearly. . (a) Model this game by drawing the extensive form. The seller chooses whether to perform (P) or to not perform (N). (With probability . (b) What happens in each state if they go to court? (c) What is the equilibrium of this game? Is this efficient? (d) What does this imply about frivolous law suits by the factory? Does this change if the contractor bears a cost of 5 to go to court? (e) If evidence production is costless and the same transfer schedule is in place (and no cost of going to court for the contractor). The factory owner can present dL in either state at a cost of 16. (Note that any initial payments between the parties have already been made. The factory owner potentially possesses two pieces of evidence dL and dH . Costs are additive. which is possible in either state. so producing both dL and dH . the buyer has the evidence with probability . but that having the more efficient machinery yields the factory a gain of 100. Following either production decision (by the contractor) the players have scope for settlement. Assume their bargaining weights are 1/2. and the “low efficiency” state by L. The court’s action (a transfer between the players) is based on the evidence presented. while production of dH in state H costs her 4.2. do not distribute. when available. costs the factory zero.8 the 3 Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002.can operate only at the slower rate. It is costly for the factory owner to present (or produce) evidence. Producing no evidence (∅). When they go to court the factory owner can present evidence that is in her possession. If the seller performed. In state H she possesses both dL and dH . Suppose that improperly improving the machinery yields the contractor a cost savings of 4. Following the seller’s production decision. 
dL ⇒ 13 from C to F dH ⇒ 2 from C to F ∅ ⇒ –4 from C to F (4 from F to C) dL and dH ⇒ 15 from C to F. 1/2. Consider the following interaction between a buyer and seller who have agreed to a contract. which are just the running of the machinery at the possible speeds. The buyer observes the seller’s production decision (P or N). 2008 by Joel Watson For instructors only. If no settlement is reached. the buyer either has evidence available or does not. It is common knowledge that the court’s mapping of evidence to transfers is as follows. it is efficient for the improvements to be made. and are not modeled here. The state and existing evidence are common knowledge between the players. costs the factory 20. the court imposes expectations damages—it requires the breaching player to pay the other an amount that gives the non-breaching player what she expected to receive under the contract. where c′ (q) > 0 and c′′(q) > 0. she must pay an amount that gives the principal q − s. Naturally. If the buyer loses. when no one goes to court. In each period. Is this reasonable? Explain. Going to court costs each player $4.) Suppose that the players agree to the same value of q and s for each period.) Explain. If the good of the agreed upon quality q is delivered. the principal is to pay s to the agent. 8. if she goes to court and the seller does not. and the buyer also decides whether to present her evidence if it exists. and the agent’s is s − c(q). the buyer’s evidence is costless to disclose. The seller wins if she goes to court and the buyer does not. The value of each player’s outside option is zero. Suppose that if the contract is breached. The principal’s payoff from the transaction is q − s. which is made prior 4 Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. A buyer and seller can potentially trade a single good of quality i ∈ R+ . Each period a principal and an agent contract for the agent to produce a good of quality q. 
the court takes no action. which costs the agent c(q). do not distribute. Both the principal and agent are risk neutral.) If it enforces the contract. does the contract give her the incentive to perform? 7. Under what conditions are these played in a subgame perfect equilibrium? (Use modified trigger strategies. Following the realization of the buyer’s evidence outcome. (If the agent does not produce the good of the agreed upon quality q. say by not paying or partially paying. the players simultaneously and independently decide whether to go to court (C) or not to go to court (N). . then the principal must write a contract.buyer does not have the evidence. the players go to court. the buyer has the evidence with probability . the buyer must present the evidence in order to win.) If the seller did not perform. Consider the following infinitely-repeated game with discount factor δ. (It is incurred each period. The court outcome is decided as follows. If both go to court. With probability v ∈ (0. If the court observes these. which is private information to the buyer. regardless of whether she presents evidence. When available. the court imposes no transfer. The buyer wins. the court requires that the seller pay $10 to the buyer. 1) the court observes both the level of q that is produced and whether s has been paid. If the buyer wins at court. What is the seller’s expected payoff in the litigation game when she has performed and when she has not? If the seller’s immediate payoff in the productive interaction is $4 higher when she does not perform than when she performs. The principal incurs a cost k of writing the contract each period. it will enforce the contract. and then they play the production/trade game. when the seller has performed. the players agree on q and s. she must pay an amount that gives s to the agent. If the principal breaches.5. 2008 by Joel Watson For instructors only. The good’s quality is determined by the seller’s level of investment. 
8. A buyer and seller can potentially trade a single good of quality i ∈ R+. The good's quality is determined by the seller's level of investment, which is made prior to trade. An investment level of i by the seller costs the seller i. Let φ : R+ → R+ describe the buyer's value of the good, and ψ : R+ → R+ the seller's value from consuming a good of quality i, given an investment level i. If they do not trade, the seller can consume the good and receive ψ(i). Both functions φ and ψ are twice differentiable. The function φ is strictly increasing and concave, whereas ψ is weakly increasing and concave. Furthermore, φ(0) > ψ(0) = 0 and φ′(i) > ψ′(i) for all levels of investment, with limi→0 φ′(i) = ∞ and limi→∞ φ′(i) = 0 to ensure interior solutions. The trade surplus φ(i) − ψ(i) is, thus, always strictly positive and strictly increasing in the investment.

(a) Describe the efficient level of investment.
(b) Suppose that i is verifiable and the buyer can offer a contract to the seller, before the seller chooses i. When can the efficient level of investment be implemented? What contract induces the efficient level of investment?

Suppose now that the buyer and seller cannot contract prior to the seller's investment decision. Instead, after the investment has been made, the buyer makes a take-it-or-leave-it offer to the seller, which she accepts or rejects. The offer p is a fraction of the trade surplus, given i, that the seller is to receive. The buyer is motivated by his value of the good φ(i) less what he pays to the seller; if the buyer's offer is accepted, his payoff is (1 − p)[φ(i) − ψ(i)]. The seller can be of two possible types, which the buyer does not observe. With probability q, the seller is "greedy," which means that the seller bears a large cost K (assume K > γ[φ(i) − ψ(i)] for all i) if she sells the good to the buyer at a price that gives her less than a fraction γ ∈ (0, 1) of the trade surplus. If the greedy seller accepts an offer p ≥ γ, her payoff is p[φ(i) − ψ(i)] + ψ(i) − i. If the greedy seller accepts an offer p < γ, her payoff is p[φ(i) − ψ(i)] + ψ(i) − i − K < 0. That is, the greedy seller will not agree to trade if she does not receive payment from the buyer that is at least γ[φ(i) − ψ(i)] + ψ(i). With probability (1 − q), the seller is "accommodating," which means she will agree to a price that gives her any non-negative share of the trade surplus. The accommodating seller's payoff from accepting any offer p ≥ 0 is p[φ(i) − ψ(i)] + ψ(i) − i; thus, the accommodating seller will agree to trade as long as she receives a price at least as large as ψ(i).

(c) Under what conditions can you find a perfect Bayesian equilibrium of this game that induces a level of investment i* such that the seller's payoff from trade is greater than her payoff from consuming the good herself? Even if you cannot describe the conditions, describe what such an equilibrium would look like.
(d) Under what conditions can you find a perfect Bayesian equilibrium that induces the efficient level of investment i^e? Even if you cannot describe the conditions, describe the intuition.
(b) Describe conditions on di (0) and di (3) such that. Then the players decide whether to form a firm and. (f) Suppose that. in equilibrium. Let di (0) denote the value of player i’s disagreement payoff when she has not invested. . if they decide to form a firm. Two risk-neutral players bid for a single object in an auction. These are symmetric. we focused on sequential equilibria? Explain. both players invest. (c) Compare the expected revenue under (a) and under (b). how to divide the profit from the firm. Describe an equilibrium of this game. sealed-bid auction. 2008 by Joel Watson For instructors only. sealed-bid auction. If player i invests. Assume that the player’s will divide the surplus from forming a firm according to the standard bargaining solution with equal bargaining weights. do not distribute. (d) Instead assume that each player i’s valuation is given by x2i +wi .(e) Would it be possible to induce the efficient level of investment if. (The timing of this and the type of auction is described below. Show that this is an equilibrium. instead. it costs her nothing. the firm’s profit will be 12. If one or both has not invested. If both have invested.) The players’ investment decisions are simultaneous and independent. Now 6 Instructors' Manual for Strategy: An Introduction to Game Theory Copyright 2002. where wi ∼ u[1. it costs her 3. briefly provide some intuition for your answers in relation to the “hold-up” problem. Players invest before a second-price. Describe an equilibrium of this game. and their investment levels are then publicly announced before the good is auctioned in an English auction (ascending oral bids). and let di (3) denote the value of her disagreement payoff when she has invested. Does this help attain the efficient level of investment? 9. The object is first auctioned in a second-price. These are “real” or productive actions. 
11. Consider a Bertrand duopoly in which two firms simultaneously and independently select prices. The demand curve is given by Q = 100 − p, where p is the lowest price charged by the firms. Consumer demand is divided equally between the firms that charge the lowest price. Suppose that each firm's cost function is c(q) = 20q. Suppose that this interaction is infinitely repeated and firms observe each other's choice of price. Assume that the firms have equal discount factors given by δ. Describe all prices that can occur in equilibrium (as a function of δ), and show that these are, in fact, equilibrium behavior.
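The grim-trigger condition behind the repeated Bertrand question (question 11) can be checked with a few lines. This is a sketch, not exam material; it assumes the firms split the market at a common price p > 20, a deviator captures (approximately) the whole industry profit for one period, and punishment is reversion to p = 20 (zero profit) forever.

```python
# Repeated Bertrand with demand Q = 100 - p and marginal cost 20.

def profit(p):
    return (p - 20) * (100 - p)      # industry profit at common price p

def collusion_sustainable(p, delta):
    """Grim trigger: half the industry profit every period must beat
    grabbing (roughly) the whole industry profit once."""
    return (profit(p) / 2) / (1 - delta) >= profit(p)
```

For any price above cost the condition reduces to δ ≥ 1/2; for example, collusion_sustainable(60, 0.5) holds while collusion_sustainable(60, 0.49) fails.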
12. Consider the following social choice problem. There are two players (1 and 2) who know the state, which is either a or b. There is an uninformed social planner (or external enforcer) who would like to impose public action x in state a and impose public action y in state b (call this a social choice function). (You could think of this as a parent and two children, or a firm and two workers; the decisions might be which child gets which toy, or which worker is assigned which task.) The public actions x and y are "real" actions and are the only actions the social planner can take. The social planner does not know the state, but can base his decision (about whether to impose x or y) on the players simultaneously and independently announcing the state to him. That is, each player can announce "a" (claiming the state is a) or "b" (claiming the state is b). The task of the social planner is to design a game form (a mechanism) that the players will play by making their announcements; the idea is that the game form will specify a public action (x or y) as a result of the announcements that players make. We assume that he would like for both players to truthfully announce the state, so consider game forms that have x imposed when both players have announced "a" and have y imposed when both players have announced "b." Your task is to try to design a game form that induces, in equilibrium, players to truthfully name the state and implements x in state a and y in state b, and to show that these are, in fact, equilibrium behavior. Players' preferences over x and y are specified below.

(a) Suppose that the players' preferences are represented by the following utility functions (which do not depend on the state):
u1(x) = 10, u1(y) = 5, u2(x) = 5, u2(y) = 10.
Is there a mechanism (as described above) that implements the desired social choice function?

(b) Now suppose that the players' preferences are represented by the following utility functions (which do not depend on the state):
u1(x) = 5, u1(y) = 10, u2(x) = 5, u2(y) = 10.
Is there a mechanism (as described above) that implements the desired social choice function?

(c) Suppose players have the same preferences as in part (a), but now assume that in state b player 1 possesses a document (evidence) that she does not possess in state a (and player 2 possesses no documents in either state). Suppose that, in addition to making an announcement of the state, player 1 can also disclose her evidence (when she possesses it), and assume that evidence disclosure and announcements must occur simultaneously and independently as before. How does this change your answer to part (a)?

(d) Consider the scenario in part (c), but now suppose that in state b player 2 possesses a document (evidence) that she does not possess in state a. (Player 1 possesses no documents in either state.) Suppose that, in addition to making an announcement of the state, player 2 can also disclose her evidence (when she possesses it), with disclosure and announcements simultaneous and independent as before. How does this change your answer to part (c)?

(e) Briefly discuss these in relation to "signaling" models.