10. Mixed Strategies in Baseball, Dating and Paying your Taxes
ECON 159: Game Theory

Lecture 10. Mixed Strategies in Baseball, Dating and Paying your Taxes

https://oyc.yale.edu/economics/econ-159/lecture-10


Last time we looked at mixed strategies, and in particular, we looked at mixed-strategy equilibria. The big idea was that if a player is playing a mixed strategy in equilibrium, then every pure strategy in the mix--that's to say, every pure strategy on which they place some positive weight--must also be a best response to what the other side is doing. Then we used that trick. We used it in this game here to help us find Nash Equilibria, and the way it allowed us to find them is this: we knew that if, in this case, Venus Williams is mixing between left and right, it must be the case that her payoff from left is equal to her payoff from right, and we used that to find Serena's mix.
Conversely, since we knew that Serena is mixing between l and r, we knew she must be indifferent between l and r, and we used that to find Venus' mix. So this was the mix that we found before we changed the payoffs: Venus' equilibrium mix was (.7, .3) and Serena's equilibrium mix was (.6, .4). And a reasonable question at this point would be, how do we know that's really an equilibrium?

So what I want to do now is actually do that missing step. We rushed it a bit last time because we wanted to get through all the material. Let's actually check that P* is in fact a best response to Q*. So what I want to do is check that Venus' mix P* is a best response for Venus against Serena's mix Q*. The way I'm going to do that is to look at the payoffs that Venus gets now that we know she's playing against Q*. So let's look at Venus' payoffs. I'm going to figure out her payoff from L, her payoff from R, and also her payoff from what she's actually doing, P*.
So, Venus' payoffs: if she chooses L against Q*, then she gets 50 times .6 [this is Q*] plus 80 times 1 minus .6, which is .4, so 80 times .4 [this is 1-Q*]. We can work this out, and I worked it out at home, but if somebody has a calculator they can please check me; I think this comes to 62. If Venus chose R--remember, R here means shooting to Serena's right, to Serena's forehand--then her payoff is 90 Q* plus 20(1-Q*), so 90(.6) plus 20(.4), and again I worked that out at home, and fortunately that also comes out at 62.
So what's Venus' payoff from P*? P* is .7, so .7 of the time she will actually be playing L, and when she plays L she'll get an expected payoff of 62, and .3 of the time she'll be playing R, and once again she'll be getting an expected payoff of 62. So when Venus plays L with probability .7, then .7 of the time she gets the expected payoff of 62 and .3 of the time she again gets 62, which means her payoff from P* is also 62.
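As a quick check, here is a minimal sketch of the same arithmetic, assuming the payoff numbers on the board (Venus' chance of winning the point is 50, 80, 90, 20 in the four cells) and the mixes quoted above:

```python
from fractions import Fraction as F

# Venus' payoffs from the board (her chance of winning the point):
#   L vs l: 50   L vs r: 80   R vs l: 90   R vs r: 20
q_star = F(6, 10)          # Serena's equilibrium weight on l

eu_L = 50 * q_star + 80 * (1 - q_star)   # expected payoff to pure L -> 62
eu_R = 90 * q_star + 20 * (1 - q_star)   # expected payoff to pure R -> 62

p_star = F(7, 10)                        # Venus' equilibrium weight on L
eu_mix = p_star * eu_L + (1 - p_star) * eu_R   # payoff to the mix -> 62

print(eu_L, eu_R, eu_mix)                # 62 62 62 -- Venus is indifferent
```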

We in fact chose Serena's mix Q* to make Venus indifferent between L and R, and that's exactly what we found here: going left gets 62, going right gets 62, and hence P* gets 62. But I claim we can now see something else. We can now ask the question: is P* in fact a best response? Well, for it not to be a best response, for this not to be an equilibrium, there would have to be some deviation that Venus could make that would make her strictly better off. By playing P* she's getting a return of 62. So one thing she could deviate to is playing L all the time. If she deviates to playing L all the time, her payoff is still 62, so she's not strictly better off; that's not a strictly profitable deviation. Another thing she could deviate to is playing R. If she deviates to playing R, her payoff will be 62. Once again, she's not strictly better off: she's the same as she was before, so that's not a strictly profitable deviation.
I've shown that P* is as good as playing L, and P* is as good as playing R; in fact, that's how we constructed it. So deviating to L is not a strictly profitable deviation and deviating to R is not a strictly profitable deviation. But at this point somebody might say, okay, you've shown me that there's no way to deviate to a pure strategy in a strictly profitable way, but how about deviating to another mixed strategy? We can see that Venus has no strictly profitable pure-strategy deviation, because each of her pure strategies yields the same payoff as her mixed strategy P*. But how do we know that she doesn't have a mixed strategy that would be strictly better?
Any mix that Venus deviates to will be a mix between L and R, and any mix between L and R will yield a payoff between 62 and 62, and hence will yield 62. So we're going to use again the fact we developed last week: any mixed strategy yields a payoff that is a weighted average of the payoffs to the pure strategies in the mix. So if we've shown that there's no pure-strategy deviation that's strictly profitable, then there can't be any mixed-strategy deviation that's strictly profitable.

That's because the mixed-strategy deviations must yield payoffs that lie among the payoffs to the pure-strategy deviations. The lesson is that we only ever have to check for strictly profitable pure-strategy deviations. That's good, because if we had to check mixed-strategy deviations one by one, we'd be here all night--there's an infinite number of possible mixed-strategy deviations--but there aren't so many pure-strategy deviations to check. Let's just repeat the idea: if there isn't any pure-strategy deviation that's profitable, then there can't be any mixed-strategy deviation that's profitable, because the highest expected return you could ever get from a mixed strategy is the return to one of the pure strategies in the mix, and you've already checked that none of those is profitable.
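In symbols, the argument is just that the payoff to any mix is a weighted average of the two pure-strategy payoffs, so it can never exceed the better of them:

```latex
% Payoff to any mix (p, 1-p) against the opponent's equilibrium mix,
% writing u(L) and u(R) for the expected payoffs to the two pure strategies:
\[
  u(p) \;=\; p\,u(L) + (1-p)\,u(R) \;\le\; \max\{\,u(L),\,u(R)\,\}.
\]
```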
Now, notice that we found Venus' equilibrium mix by choosing a P and a 1-P to make Serena indifferent, and we found Serena's equilibrium mix by finding a Q and a 1-Q to make Venus indifferent. A natural question you hear people ask then is: why is Venus "trying to make Serena indifferent"? Why is Serena "trying to make Venus indifferent"? That's not really the point here. It isn't that Venus is trying to make Serena indifferent; it's that in equilibrium she is going to make Serena indifferent. It isn't her goal in life to make Serena indifferent between l and r, and it isn't Serena's goal in life to make Venus indifferent between L and R, but in equilibrium it ends up that they make each other indifferent. The way we can see that--we said this last time, but it bears repeating--is that if Venus puts too much weight, more than .7, on L, then Serena just cheats to the left all the time, and that can't possibly be an equilibrium. And if Venus puts too much weight on R, then Serena cheats to the right all the time, and that can't be an equilibrium. So it has to be that what Venus is doing makes Serena exactly indifferent, and vice versa.
Recall the finding about base stealing: the return to attempting to steal seems to be roughly a wash. The expected return when a great base runner attempts to steal a base is roughly the same as the return when he doesn't attempt to steal. But I claim we knew that was going to be the case. We didn't have to go and look at the data. Why did we know that was going to be the case? How did we know that analysis was bound to find those returns roughly equal?
Since we're in a mixed-strategy equilibrium, since he's randomizing, it must be the case that the returns are equal. That's the big idea here; that's the thing we learned last time. These are professional baseball players doing this: they've been very well trained, a lot of money has been spent on getting the tactics right, and there are people sitting there who are paid to get the tactics right. If the return to base stealing weren't roughly equal whether you attempted to steal or not, then you shouldn't be randomizing. Since you are randomizing, it must be the case that the returns are roughly equal. So that's the first thing to observe, and the second thing to observe is what we just pointed out.
In fact, the value of having a fast base stealer on the team doesn't show up in the expected return on the occasions on which he attempts to steal, or on which he does not attempt to steal. Where does it show up? It shows up in the fact that the pitching team changes its behavior to make it harder for this guy to steal, by going faster to the plate or throwing more fastballs. Where will that show up in the statistics? If you're just a statistician and you just look at the data, where will that show up?
It's going to show up in the batting average of the guy who's hitting behind the base stealer. The guy hitting behind the base stealer is going to have a higher batting average because he's going to see more fastballs to hit, and more pitches thrown out of the stretch. So if you ignore that effect, you're going to be in trouble. But if we analyze this properly using Game Theory, we know we're in a mixed-strategy equilibrium; we know the pitching team must be reacting to the base stealer; we know there must be a cost to doing that; and the cost turns up in the hitter behind him.

The payoffs are much as they were before, by which we mean that Nina wants to meet David, but given the choice she would rather meet him in the apple fields. And David, who's a dark personality and likes the darker side of Shakespeare, also wants to meet Nina, but he would rather meet at the Yale Rep. If I have that backwards, I apologize for misstating their preferences. But once again, because they're still incompetent Economics majors, they've forgotten to tell each other where they're going.

I'm going to postulate that Nina is going to mix P, 1-P and David is going to mix Q, 1-Q. So to find the Nash Equilibrium Q, to find the mix that David's using, we use Nina's payoffs. So let's do that. If Nina goes apple picking, her payoff is 2 if David also goes apple picking, which happens with probability Q, and 0 otherwise. If she goes to the Rep, her payoff is 0 if David goes apple picking, with probability Q, and her payoff is 1 if she meets David at the Rep, which happens with probability 1-Q. So this is her payoff from apple picking and this is her payoff from seeing Richard II. And what do we know if Nina is indeed mixing, what do we know about these two payoffs? They must be equal. If Nina is in fact mixing, then these two things must be equal. And that means 2Q equals 1(1-Q), or 3Q equals 1, so Q is 1/3.
Okay, so our guess is that if there's a mixed-strategy equilibrium, it must be the case that David is assigning probability 1/3 to going apple picking, which means he's assigning probability 2/3 to his more favored activity, which is going to see Richard II. So to find the Nash Equilibrium P, to find Nina's mix, what do we do? Use David's payoffs. So David's payoffs: if he goes apple picking, he gets a payoff of 1 if he meets Nina there and 0 otherwise, and if he goes to the Rep, he gets a payoff of 0 if Nina's gone apple picking and a payoff of 2 if he meets Nina at the Rep.
Once again, if David is indifferent, it must be that these are equal. So if David is in fact mixing between apple picking and going to the Rep, it must be that these two are equal, and if we set this out carefully we'll get 1(P) equals 2(1-P), which gives P equals 2/3 and 1-P equals 1/3. So here we have Nina assigning 2/3 to going apple picking, which in fact is her more favored thing, and 1/3 to going to the Rep.
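Here is a small sketch of those two indifference conditions, using the payoffs just described (2 and 1 for meeting apple picking, 1 and 2 for meeting at the Rep); the closed forms are specific to this 2x2 game:

```python
from fractions import Fraction as F

# Nina's payoffs: 2 if they meet apple picking, 1 if they meet at the Rep, 0 otherwise.
# David's payoffs: 1 if they meet apple picking, 2 if they meet at the Rep, 0 otherwise.

# Q = probability David goes apple picking. Nina's indifference:
#   2*Q + 0*(1-Q) = 0*Q + 1*(1-Q)   =>   3Q = 1
Q = F(1, 3)

# P = probability Nina goes apple picking. David's indifference:
#   1*P + 0*(1-P) = 0*P + 2*(1-P)   =>   3P = 2
P = F(2, 3)

# Check that Nina is indeed indifferent given Q:
nina_apple = 2 * Q              # 2/3
nina_rep   = 1 * (1 - Q)        # 2/3

print(P, Q, nina_apple, nina_rep)   # 2/3 1/3 2/3 2/3
```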

So let's check that P equals 2/3 is in fact a best response for Nina. Let's go back to Nina's payoffs. For Nina, if she chooses to go apple picking, her payoff is 2 times Q, where Q is now 1/3, plus 0 times (1-Q); and if she chooses to go to the Rep, her payoff is 0 with probability 1/3 and 1 with probability 2/3.
All I've done is take the lines I had before and substitute in what we know must be the correct Q and 1-Q, and this gives her a payoff of 2/3 in either case. If she chooses P, then 2/3 of the time she'll get the payoff from apple picking, which is 2/3, and 1/3 of the time she'll get the payoff from going to the Rep, which is 2/3, for a total of 2/3. So Nina's payoff from either of her pure strategies is 2/3, and her payoff from our claimed equilibrium mixed strategy is 2/3, so neither of her possible pure-strategy deviations is profitable. She doesn't lose anything from them either, but they aren't profitable, and by the lesson we started the class with, that means there cannot be any strictly profitable mixed deviation either. So indeed, for Nina, P is a best response to Q. We could do the same for David, but let's not bother; it's symmetric.

So in this game we found another equilibrium. The new equilibrium is Nina mixing (2/3, 1/3) and David mixing (1/3, 2/3), and we also know the payoff from this equilibrium: for both players it was 2/3. So there are three equilibria in this game. They both go apple picking, in which case the payoffs are 2 and 1; they both go to the Rep, that's the second pure-strategy equilibrium, in which case the payoffs are 1 and 2; or they both mix in this way, and their payoffs are 2/3, 2/3. In the other equilibria the worst payoff you got was 1 and you sometimes got 2, but here you're playing a different equilibrium and you're only getting 2/3. Why have these payoffs been pushed down so far? What's happening to our poor hapless couple?

What's forcing these payoffs down is that they're not meeting very often. How often are they actually meeting? They meet when they end up in this box or this box, right? So what's the probability of ending up in those boxes? Ending up in the apple-picking box has probability 2/3 times 1/3, and ending up in the Rep box has probability 1/3 times 2/3. You end up meeting apple picking the 2/3 of the time that Nina goes there times the 1/3 of the time that David goes there, and you end up meeting at the Rep the 1/3 of the time that Nina goes there times the 2/3 of the time that David goes there. So the total probability of meeting is 4/9. So 4/9 of the time they're meeting, but 5/9 of the time--more than half the time--they're screwing up and failing to meet.
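Written out, the meeting probability is:

```latex
\[
  \Pr(\text{meet})
  \;=\; \underbrace{\tfrac{2}{3}\cdot\tfrac{1}{3}}_{\text{both apple picking}}
  \;+\; \underbrace{\tfrac{1}{3}\cdot\tfrac{2}{3}}_{\text{both at the Rep}}
  \;=\; \tfrac{2}{9} + \tfrac{2}{9}
  \;=\; \tfrac{4}{9}.
\]
```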

So this is a very bad equilibrium, but it captures something that is true about the game. What is surely true is that if they just played this game, they wouldn't meet all the time; in fact, what we're arguing here is that they'd meet less than half of the time. The idea we get from the pure-strategy equilibria, that they would magically always manage to meet, seems very unlikely, so this does add a little bit of realism to the analysis of the game. However, it leads to a bit of an interpretation problem. You might ask: why on Earth are they randomizing in this way? Why are they doing this? It's bad for everybody. This leads us to a second interpretation of what mixed-strategy equilibria are. Rather than thinking of the players as literally randomizing, it's probably better in this case to think about the following idea.
We can think of David's mixture as a statement about what Nina believes David is going to do. David may not be literally randomizing, but his mixture Q, 1-Q we could think of as Nina's belief about what David's going to do. Conversely, Nina may not literally be randomizing, but her P, 1-P we could think of as David's belief about what Nina's going to do. And what we've done is find the beliefs such that these players are exactly indifferent over what they do. We found the beliefs David holds about what Nina's going to do such that David doesn't really quite know what to do, and we found the beliefs Nina holds about what David's going to do such that Nina doesn't quite know what to do. Does that make sense? So it's probably better here to think of this not as people literally randomizing, but as these mixed strategies being a statement about what people believe in equilibrium.
I want to spend the rest of today looking at yet another interpretation of mixed-strategy equilibria. So far we have two: people literally randomizing, and mixed strategies as expressions of what people believe in equilibrium rather than what they're literally doing. Now I'm going to give you a third interpretation. For now we can get rid of the Venus and Serena game. To motivate this third idea, I want to think about tax audits.

The taxpayer--the parent--can choose to pay their taxes honestly or to cheat. And at the same time, the audit office, the auditor, has to make a choice: whether to audit you or not. That's not literally true, because in reality the auditor can wait until your tax return comes in and then decide whether to audit you, but for now let's think of these choices being made simultaneously, and we'll see why that makes it more interesting. So let me put down some payoffs here and then I'll explain them. With the auditor's payoff first in each pair, the rows being audit and not audit, and the columns being pay honestly and cheat, the payoffs are (2, 0), (4, -10), (4, 0) and (0, 4).

So from the taxpayer's point of view, if they're going to be audited, they'd rather pay their taxes than not, and if they're not going to be audited, then according to these payoffs they'd rather cheat. From the auditor's point of view, if they knew everyone was going to pay their taxes, they wouldn't bother auditing, and if they knew everyone was going to cheat, then of course they'd audit. So you can quickly see that there's no box in which the best responses coincide: there is no pure-strategy Nash Equilibrium.
So to find the Nash Equilibrium here, we know it's going to be mixed. To find the probability with which taxpayers pay their taxes--and let me get ahead of myself and just say, to find the proportion of taxpayers who are going to pay their taxes--what do we do? What must be true of that equilibrium proportion Q of taxpayers who pay their taxes?

So from the auditor's point of view, if the auditor audits, their payoff is 2Q plus 4(1-Q), and if they don't audit, their payoff is 4Q plus 0(1-Q). Everyone see how I got that? And if indeed the auditor is mixing, then these must be equal. If they're equal, a little bit of algebra gives us 2Q equals 4(1-Q), so Q equals 2/3. So our claim is that to make the auditor exactly indifferent between auditing and not auditing, it must be the case that 2/3 of the parents of the kids in the room are going to be paying their taxes honestly, which means 1/3 aren't, which is kind of worrying, but never mind. We've found the proportion of taxpayers who are paying their taxes; now I want to find the probability of being audited.
How do I figure out the equilibrium probability of being audited in this model? For that we're going to use P and 1-P, where P is the probability of being audited. How do I find P? I'm going to look at the taxpayer's payoffs. From the taxpayer's point of view, if the taxpayer pays their taxes, their payoff is just 0, and if they cheat, their payoff is -10P plus 4(1-P). And if indeed the taxpayers are mixing--in other words, if not all taxpayers are cheating and not all taxpayers are honestly paying their taxes--then these must be equal. Setting them equal, I get 4 equals 14P, which is the same as saying P equals 2/7.
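Here is a minimal sketch of both indifference conditions; the reading of the payoff matrix (auditor's payoff first, rows audit / not audit, columns pay honestly / cheat) is inferred from the expressions above:

```python
from fractions import Fraction as F

# Payoffs (auditor, taxpayer), rows = auditor {audit, not audit},
# columns = taxpayer {pay honestly, cheat}:
#   audit:      (2,  0)   (4, -10)
#   not audit:  (4,  0)   (0,   4)

# Q = fraction of taxpayers who pay honestly.
# Auditor's indifference:  2Q + 4(1-Q) = 4Q + 0(1-Q)  =>  4 = 6Q
Q = F(4, 6)            # 2/3

# P = probability of being audited.
# Taxpayer's indifference:  0 = -10P + 4(1-P)  =>  4 = 14P
P = F(4, 14)           # 2/7

print(Q, P)            # 2/3 2/7
```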

So my claim is that the equilibrium here is for 2/3 of the taxpayers to pay their taxes and for the auditor to audit 2/7 of the time. Now we could go back in here and check--I could do what I did before, plug the Ps and Qs in and check that this is in fact an equilibrium--but trust me that I've done that and it's okay. So here we have an equilibrium; let's just write down what it is. From the auditor's point of view, they audit 2/7 of the time, or 2/7 of the population, and from the taxpayers' point of view, they pay their taxes honestly 2/3 of the time and not otherwise. Now, without focusing too much on these exact numbers, I want to focus first on how we interpret this mixed-strategy equilibrium.

So from the point of view of the auditor, we're really back where we were before with the base stealer or the person who's searching baggage at the airport. We could think of the auditor as literally randomizing; it actually is the case that, by law, the auditors literally have to randomize. So this 2/7, 5/7 has the same interpretation as before: it really is a randomization. But this 2/3, 1/3 has a different, and potentially exciting, interpretation. It isn't that we think your parents get to tax day, work out what their taxes would be, and then toss a coin. The interpretation here is that some parents are paying their taxes and some parents aren't. There are a lot of parents out there, a lot of potential taxpayers, and in the population, in equilibrium, if these numbers were true, 2/3 of parents would be paying their taxes and 1/3 would be cheating.
So the 2/7 is a randomization by a player, and the 2/3 is a mixture in the population. The new interpretation is that we can think of a mixed strategy not as players randomizing, but as a mix in a large population in which some people are doing one thing and the rest are doing the other--here, the proportion of people paying taxes. Now, I don't know if this 2/3, 1/3 is an accurate number for the U.S.; it's probably not very far off, actually. For Italy, I'm ashamed to say, the number of people who pay taxes is more like 40%, maybe even lower now, and there are countries, I think, where it gets as high as 90%. I think the U.S. rate, when they end up auditing, is a little higher than this, but not much. So again, we're going to think of this not as randomization but as a prediction of the proportion of American taxpayers who are going to pay their taxes.

Now, I want to use this example, in the time we have left, to think about a policy experiment. Let's put this up somewhere we can see it and think about a new tax policy. Suppose that Congress gets fed up with all these newspaper reports about how 1/3 of Americans don't pay their taxes, or whatever the true proportion is--I think it's actually a little higher than that, but never mind. They get fed up with these reports and say: this isn't fair, we should make people pay their taxes, so we're going to change the law, and instead of being in jail for ten years, or the equivalent of a fine of -10 if you're caught cheating, we're going to raise the fine or the time in jail so that it's now -20. So the policy experiment is: raise the fine for cheating to -20, with the aim of deterring cheating. It seems a plausible thing for a government to want to do.
Let's redraw the matrix. So here's the game: the payoffs are (2, 0), (4, -20), (4, 0) and (0, 4), with the rows being audit and not audit and the columns being pay honestly and cheat. So here are our new payoffs. Let's ask the question: with this new, higher fine in place for being caught not paying your taxes, in the long run, once things have worked their way back into equilibrium after a few years, do we expect American taxpaying compliance to go up or to go down, or what do we expect? What do we think is going to happen? The only way we're going to figure this out is to work out the new equilibrium Q.

Let's do this. To find the new equilibrium Q, once again we're going to have to look at the auditor's payoffs. If they audit, they get 2Q plus 4(1-Q), and if they don't audit, they get 4Q plus 0(1-Q), and if the auditor is indifferent, if they're mixing, it must still be the case that these are equal. It's still there, right? I didn't delete it; it's the same equation that sits up there. From the auditor's point of view, nothing about their payoffs has changed, so the tax compliance rate that makes the auditor exactly indifferent between auditing your parents and not auditing your parents is still exactly the same as it was before: 2/3. In equilibrium, tax compliance hasn't changed at all.
Let me say that again: the policy was to double the fine for being caught cheating, and in equilibrium it made absolutely no difference whatsoever to the equilibrium tax compliance rate. Now why did it make no difference? Let's have a techie answer and then a better, more intuitive answer. The techie answer is this: what determines the equilibrium tax compliance rate--the equilibrium mix for the column player--is the row player's payoffs. We didn't change the row player's payoffs, so we're not going to change the equilibrium mix for the column player. To say it again: we changed one of the payoffs for the column player, but the column player's equilibrium mix depends on the row player's payoffs, and we haven't changed the row player's payoffs, so we won't change the equilibrium compliance rate, the equilibrium mix by the column player.
The probability of audit, however, will have changed. What's going to change is not Q but P: the probability with which you're audited is going to change in this model. Let's just check it. To find the new P, I need to look at the taxpayer's payoffs: if they pay their taxes honestly, they get 0, and if they cheat, they get -20 with probability P and 4 with probability 1-P. If they're mixing, if some of them are paying and some of them are not, these must be equal, and--being more careful than I was last time, I hope--this gives me 24P equals 4, or P equals 1/6. So the audit rate has gone down from 2/7 to 1/6. I'm guessing that probably wasn't the goal of the policy, although it isn't necessarily a bad thing. There is some benefit for society here, because audits are costly both for the auditor to carry out and for the people being audited, so the fact that we've managed to lower the audit rate from 2/7 to 1/6 is a good thing. But we didn't manage to raise the compliance rate.
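Redoing the two indifference conditions with the new fine, under the same reading of the payoff matrix as above, gives exactly this result:

```python
from fractions import Fraction as F

fine = -20   # new penalty for being caught cheating (was -10)

# Auditor's indifference is unchanged (their payoffs didn't move):
#   2Q + 4(1-Q) = 4Q  =>  Q = 2/3
Q = F(2, 3)

# Taxpayer's indifference with the new fine:
#   0 = fine*P + 4*(1-P)  =>  P = 4 / (4 - fine)
P = F(4, 4 - fine)     # 4/24 = 1/6

print(Q, P)            # 2/3 1/6 -- compliance unchanged, audit rate falls
```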
I don't want to take this model too literally, because it's just a toy model, but nevertheless let's try to draw out some lessons from it. What we did here was make the payoff to being caught cheating more negative. A different change we could have made is to leave the -10 in place and raise the payoff to cheating and not getting caught: leave the -10 where it is and change this 4 to, say, a 6 or an 8, increasing the benefit of cheating if you're not caught. What would that have done in equilibrium? I claim, once again, that it would have done nothing in equilibrium to the probability of people paying their taxes, but what would it have done to the audit rate? The equilibrium audit rate would have gone up.
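For instance, taking 6 as the hypothetical payoff to cheating without being caught (one of the numbers suggested above) and keeping the fine at -10, the taxpayer's indifference condition becomes:

```latex
\[
  0 \;=\; -10P + 6(1-P)
  \;\;\Longrightarrow\;\; 16P = 6
  \;\;\Longrightarrow\;\; P = \tfrac{3}{8} \;>\; \tfrac{2}{7},
\]
```

while the compliance rate Q is still pinned down at 2/3 by the auditor's unchanged payoffs.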

Let's tell that story for a second. Rich people, people who are well paid, have a little more to gain from cheating on their taxes if they're not caught; there's more money at stake. My colleagues who are finance professors in the business school have more money on their tax returns than I do, so in principle they gain more if they cheat. Does that mean they cheat more than me in equilibrium? No, it doesn't. What does it mean? It means they get audited more often. In equilibrium, richer people aren't necessarily going to cheat more, but they are going to get audited more, and that's true: federal audit rates are designed so that the rich are audited more than the poor. Again, it's not because anyone thinks the rich are inherently less honest, or the poor inherently more honest, or anything like that; it's simply that the gains to cheating and not getting caught are bigger if you're rich, so you need to audit more to push things back into equilibrium.
Now, suppose we did in fact want to use the policy of raising fines to push up the compliance rate, to push down cheating. How would we change the law? Suppose we want to raise the fines for cheating--we don't like people cheating, so we raise the fines--but we're worried about this result that raising fines didn't push up compliance rates. How could we change the law, or change the incentives in the game, so that it actually would change compliance rates? What could we do?

If we want to change the compliance rate, we should change the payoffs to the auditor. The problem with the way the auditor is paid here is that the auditor is paid more if they manage to catch people, but audits are costly, so when you raise the fine on the other side, all that happens is that the auditors audit less often in equilibrium. If you want a higher compliance rate, one thing you could do is change the payoffs to the auditor: make auditing less costly for them, or make catching people more rewarding for them--give them a reward. Or you could take it out of Game Theory altogether: you could have congressional law that sets the audit rates outside of equilibrium, and that's been much discussed in Congress over the last five years--having audit rates set, as it were, exogenously by Congress.
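As a hypothetical illustration of that first option, suppose the auditor's payoff from catching a cheater were raised from 4 to 6 (a number chosen only for this example). The auditor's indifference condition would then be:

```latex
\[
  2Q + 6(1-Q) \;=\; 4Q + 0(1-Q)
  \;\;\Longrightarrow\;\; 8Q = 6
  \;\;\Longrightarrow\;\; Q = \tfrac{3}{4} \;>\; \tfrac{2}{3},
\]
```

so it is rewarding the auditor, not fining the taxpayer more heavily, that raises the compliance rate in this model.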

You might think there are going to be political considerations in Congress beyond just having an efficient tax system. The big lessons from this class are that there are three different ways to think about randomization, in equilibrium or out of equilibrium. One is that it's genuinely randomization; another is that it could be a statement about people's beliefs; and a third, very important way is that it could be telling us something about the proportion of people in society who are doing something--in this case, the proportion of people who are paying their taxes.
Beyond just finding equilibria, we drew out two other lessons today. A second important lesson is that when you're checking mixed-strategy equilibria, you only have to check for pure-strategy deviations. Be careful: you have to check all possible pure-strategy deviations, not just the pure strategies involved in the mix. If a player has seven strategies and is only mixing over two, you have to remember to check the other five. The third lesson I want to draw out today is that, because of the way mixed-strategy equilibria work, if I change the column player's payoffs it changes the row player's equilibrium mix, and if I change the row player's payoffs it changes the column player's equilibrium mix. Next time, we're going to pick up this idea that mixed strategies can be about proportions of people playing things, and take it to a totally different setting, namely evolution.
