Josh Dunigan

Formal epistemology and gaming

Any good competitive gamer will start to create a model of the game in their head. They will start to think about their decisions and the utility gained from them, the probabilities of one thing happening over another, and how to set themselves up for the best chance of success. All of these things are related to game theory and also to formal epistemology (the theory of knowledge). In particular, there are a few things I have been thinking about when playing games like Melee, League of Legends, and CS:GO. One is Bayesian epistemology, another is decision theory, and the third is disagreement. This post will serve as a philosophical justification for the next time you are playing a game and call your teammates “braindead”, slang for epistemically irrational. It will also hopefully make you more epistemically rational so you can rank up in them games, summoner.

Decision Theory

As Titelbaum puts it, decision theory is just searching for “rational principles to evaluate the various acts available to an agent at any given moment. Given what she values (her utilities) and how she sees the world (her credences), decision theory recommends the act that is most efficacious for achieving those values from her point of view”. In other words, there are some possible decisions someone could make about something, and these decisions are evaluated against the agent’s credences, the probabilistic beliefs they have about certain events happening in the world.

Expected utility

An important concept is expected utility. Say there is some baseball batter playing in tonight’s game. You think there is a 30% chance they will get 1 hit, a 20% chance they will get 2 hits, and a 50% chance they will get 3 hits. You assign 0% probability to any other number of hits. We can calculate the expected number of hits you think the batter will get:

.30 x 1 + .20 x 2 + .50 x 3 = 2.2

2.2 does not represent the actual number of hits you think they will get in this game. What you are effectively saying is that, over a large number of games, a batter at this skill level in this scenario would average about 2.2 hits per game. If your credences were correct, the batter would end the season averaging about 2.2 hits per game.
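Since an expected value is just a probability-weighted sum, it is easy to compute. Here is a minimal Python sketch of the batter example (the `expected_value` helper is my own, not from any library):

```python
def expected_value(outcomes):
    """Probability-weighted sum over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Batter example: 30% chance of 1 hit, 20% of 2, 50% of 3.
print(expected_value([(0.30, 1), (0.20, 2), (0.50, 3)]))  # about 2.2
```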

We can apply this to stocks as well. Say some company reports earnings soon. You think there is a 40% chance the share goes up after earnings, to 100 dollars, and a 60% chance it goes down, to 10 dollars. The expected value is then

.40 x 100 + .60 x 10 = 46

This is called the fair price, meaning that if the stock trades below 46 dollars before earnings, you should buy it, because by your own lights you expect to make money. Say it is at 30 dollars: you would then expect to make 16 dollars. If it is above 46, rationally speaking, you should not buy it, given the expected value calculated using your own credences.

We can think of investments as bets in a general sense. If you bet at even odds that some proposition is true, say that the Cubs will win the game tonight, you are saying you think the Cubs are more likely to win than to lose. More precisely, for a bet that pays 1 if P is true and 0 otherwise, the expected value is

1 x cr(P) + 0 x cr(~P) = cr(P)

In other words, the fair price of a bet that pays 1 if P is true (the Cubs winning) is equal to your unconditional credence in P.

This type of thinking is useful in many ways. One example is that an MIT student led a group of people to exploit expected values in the Massachusetts lottery. They found that the expected value of buying a 2 dollar lottery ticket was $5.53. Meaning, over a large number of tickets bought, you would get back almost 3x what you paid. Being epistemically rational, when you see a bet this favorable, you should buy as much of it as you can. So the MIT group bought 700,000 tickets for about 1.4 million dollars and netted about 2.1 million dollars. The more tickets they could buy, the closer their return would approach that near-3x ratio.
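We can sanity-check the arithmetic (the figures are the ones quoted above; realized winnings will of course differ from expectation in any finite run):

```python
ticket_cost = 2.00        # dollars per ticket
expected_payout = 5.53    # expected value of one ticket, as quoted above
tickets = 700_000

cost = tickets * ticket_cost                 # 1.4 million dollars
expected_return = tickets * expected_payout  # about 3.87 million dollars
print(cost, expected_return, expected_return / cost)  # ratio is about 2.77x
```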

Expected utility theory

Utility is a numerical quantity meant to show how much an agent values some particular arrangement of the world. We suppose that each agent has their own set of credences about the world, probabilities about whether or not certain events will occur. We also assume that each agent assigns utilities to events according to how much they would like them to occur. Say a high schooler is applying to two universities; whichever one they would rather attend is assigned the higher utility. We also take the position that two people can assign different utilities to the same event without either being irrational. One Yankees fan might assign x utils to the Yankees winning while another assigns 3x.

Utility is used to provide a uniform measurement of how we want the world to be. Everyone is aware of the diminishing returns on happiness you get from money. If you have none, 1 dollar means a lot, and so will, say, 100 thousand. But once you can afford most of the things you want, the next 100 thousand is not as important as the first. We could assign the first 100 thousand someone makes 1000 utils, while the next is only 500 utils, even though the amount of money is the same.

This matters for decision making, as the following example shows. Say there are two bets: one where you are 50% certain to get 20 dollars, and one where you are 20% certain to get 100 dollars. The expected value of the first is 10 dollars, while the expected value of the second is 20 dollars. However, say the utility you assign to gaining 20 dollars in your current financial situation is 100 utils, while gaining 100 dollars is worth 200 utils. Then, if we use utils and not dollars, your expected utility for the first bet is 50 utils while the second is just 40 utils. So if you are to be rational, you should take the first bet, which provides the higher expected utility even though its expected value is lower.
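To see how utilities can flip the ranking that dollar amounts suggest, here is a quick sketch of those two bets (the util numbers are the ones assumed above):

```python
# Bet 1: 50% chance of $20. Bet 2: 20% chance of $100.
ev1 = 0.50 * 20    # expected value: 10 dollars
ev2 = 0.20 * 100   # expected value: 20 dollars

# Assumed utilities: gaining $20 is worth 100 utils, gaining $100 is worth 200.
eu1 = 0.50 * 100   # expected utility: 50 utils
eu2 = 0.20 * 200   # expected utility: 40 utils

print(ev1 < ev2, eu1 > eu2)  # True True: bet 2 wins on value, bet 1 on utility
```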

We can represent a set of possible acts as a decision problem. The agent assigns a utility to each possible act. If you had complete control over the realization of each act, choosing would be simple: pick the one with the highest utility (the highest-utility arrangement of the world might be being a tech CEO, but you cannot simply become one). In a decision problem, the rational agent assigns utility to each act and thus creates an ordering over the acts. That is, an agent prefers act A over act B when A has the higher utility, written A > B. If an agent is indifferent between two acts, we write A ~ B. This gives us a few properties of a rational agent’s decision problem

Preference Asymmetry - There do not exist acts A and B such that the agent both prefers A to B and prefers B to A.

Preference Transitivity - For any acts A, B, and C, if the agent prefers A to B and B to C, then the agent prefers A to C.

These should be simple properties for anyone with elementary math skills.

Preference Completeness - For any acts A and B, exactly one of the following is true: the agent prefers A to B, the agent prefers B to A, or the agent is indifferent between the two.

If your preferences violate transitivity, you are liable to fall victim to what people call a money pump. Say you are about to do B, but then someone offers you the option to do A instead. You pay them to switch to A, but then they point out that you can pay to do C, since you prefer C over A. And then B again, since you prefer B over C. This can be repeated over and over, pumping money from you. In other words, a money pump is possible if your preferences are

{A > B, B > C, C > A}

A real life example is a person who prefers take-out to their own cooking, prefers eating at their friend’s to take-out, but prefers their own cooking to their friend’s. Say they are about to cook, then decide to order take-out instead. While on the phone ordering, they learn their friend is cooking and head over there. But then they remember they do not like their friend’s cooking, so they head back home to make dinner. This is stupid, and that is the point of spelling out what it means to be rational.
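Cyclic preferences are exactly what a money pump exploits. A toy sketch, assuming you pay one dollar each time you switch to an option you prefer:

```python
# Cyclic preferences A > B, B > C, C > A: from each option, the map gives
# the alternative you would pay to switch to.
switch_to = {"B": "A", "A": "C", "C": "B"}

choice, paid = "B", 0
for _ in range(6):              # six offers in a row
    choice = switch_to[choice]  # you prefer the offer, so you take it...
    paid += 1                   # ...and pay a dollar for the privilege
print(choice, paid)  # B 6: right back where you started, 6 dollars poorer
```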

Savage’s expected utility theory can be illustrated by the following example. There are four possible outcomes: you either take or do not take an umbrella, and it either rains or does not rain. You assign 0 utils to taking an umbrella when it rains and to not taking one when it does not rain. However, if you take an umbrella and it does not rain, you feel like you wasted your time and bag space, so you assign -1 utils. Similarly, if it rains and you did not take the umbrella, you assign -10 utils. Now imagine you have a .30 credence that it will rain (say that is what the weather app says). We can calculate Savage’s expected utility straightforwardly as

EU_Savage(take) = 0 x .30 + -1 x .70 = -0.7

EU_Savage(leave) = -10 x .30 + 0 x .70 = -3

So to be rational according to Savage, we should take the umbrella whenever the chance of rain is high enough, such as 30%.
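Savage’s calculation is just expected value with utilities in place of dollars, using credences over act-independent states. A minimal sketch of the umbrella case, with the utilities and the .30 rain credence from above:

```python
cr_rain = 0.30

# util[act][state], as assigned above
util = {
    "take":  {"rain": 0,   "dry": -1},
    "leave": {"rain": -10, "dry": 0},
}

def eu_savage(act):
    # states are assumed independent of the act, so use unconditional credences
    return util[act]["rain"] * cr_rain + util[act]["dry"] * (1 - cr_rain)

print(eu_savage("take"), eu_savage("leave"))  # -0.7 -3.0
```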

Savage’s expected utility theory entails the Dominance Principle, which is

If act A produces a higher-utility outcome than act B in each possible state of the world, then A is preferred to B.

However, there are problems with this.

              pass   fail
study           18     -5
don’t study     20     -3

In the above table, a student assigns utilities to studying or not and passing or failing. The Dominance Principle says that not studying is better in every state: 20 > 18 if you pass and -3 > -5 if you fail. However, Savage’s theory does not account for the fact that studying affects whether you pass or fail, so dominance reasoning makes not studying look rational when it is not. This is because Savage requires that the states be independent of the agent’s acts, but real life is not so cut and dried.

Jeffrey’s theory takes this into account by using conditionalization. That is, it weights outcomes by your credence in each state given that you perform the act, cr(state | act), connecting world states to the acts of agents.

             chicken   beef
white wine        1     -1
red wine          0      1

Say you are going to a dinner party and you have the above utilities for the wine pairing. Now, you are not entirely sure what the meal will be, because the host changes the meal depending on what wine you bring. Let us say you are 75% confident that the host will pick the meat that properly pairs with your wine. Meaning, if you bring white you are 75% sure the meal will be chicken, and if you bring red you are 75% sure it will be beef. Then we have a new credence table

             chicken   beef
white wine      .75    .25
red wine        .25    .75

We can then calculate the proper decisions via Jeffrey’s theory

EU_Jeffrey(white) = util(white & chicken) x cr(chicken | white) + util(white & beef) x cr(beef | white)

EU_Jeffrey(white) = 1 x .75 + -1 x .25 = .5

A similar calculation for red yields .75 expected utility, so you should bring red. In other words, Jeffrey’s theory says that you should weight your utilities by conditional rather than unconditional credences. If a state does not depend on the agent’s action, Jeffrey’s theory agrees with Savage’s.
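Jeffrey’s version swaps the unconditional credences for credences conditional on the act. A sketch of the dinner party example using the two tables above:

```python
# util[wine][meat] and cr(meat | wine), both from the tables above
util = {"white": {"chicken": 1, "beef": -1},
        "red":   {"chicken": 0, "beef": 1}}
cr_given = {"white": {"chicken": 0.75, "beef": 0.25},
            "red":   {"chicken": 0.25, "beef": 0.75}}

def eu_jeffrey(wine):
    # weight each outcome by the credence in the state *given* the act
    return sum(util[wine][m] * cr_given[wine][m] for m in ("chicken", "beef"))

print(eu_jeffrey("white"), eu_jeffrey("red"))  # 0.5 0.75, so bring red
```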

Conclusion of expected utility

There are many more nuances, further developments, and problems in this sub-field of decision theory, but I think this will suffice for now.

Bayesianism and probabilistic epistemology

Probabilistic epistemology is the view that our beliefs should conform to the axioms of probability. So our degrees of belief should be consistent with the following axioms

  1. Pr(p) ≥ 0, for any proposition p

  2. Pr(t) = 1, for any tautology t

  3. Pr(p ∨ q) = Pr(p) + Pr(q), for any mutually inconsistent propositions p and q

In other words, our credence in any proposition must be greater than or equal to 0, all tautologies have probability 1, and the probability of a disjunction of inconsistent propositions is equal to the sum of their probabilities. For (3), this means that the probability of [“drawing either a purple, red, or green marble from a bowl of five differently colored marbles is the sum of the probabilities of drawing any of these marbles: 1/5 + 1/5 + 1/5 = 3/5”](http://www.stat.yale.edu/Courses/1997-98/101/probint.htm).
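A minimal sketch of what the axioms demand of credences over a set of mutually exclusive, exhaustive propositions (the `is_probabilistic` helper is hypothetical, not from any library):

```python
def is_probabilistic(credences, tol=1e-9):
    """Axioms (1)-(3) over a partition: every credence is non-negative,
    and they sum to 1 (the tautology 'exactly one of these is true')."""
    values = credences.values()
    return all(c >= 0 for c in values) and abs(sum(values) - 1) < tol

# The marble example: five colors, credence 1/5 each.
print(is_probabilistic({"purple": 0.2, "red": 0.2, "green": 0.2,
                        "blue": 0.2, "yellow": 0.2}))  # True
```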

To be Bayesian in our beliefs, we would need to add the following two rules as well.

  4. Pr(p | q) = Pr(p & q) / Pr(q), provided Pr(q) > 0

  5. Pr_new(p) = Pr_old(p | q), when q is learned

Four is just that the probability of p conditional on q is equal to the probability of p and q divided by the probability of q. Five is that when an agent learns some new proposition q, she should update her credence in p by conditionalizing: her new credence in p is her old credence in p given q.

As explained by Isaacs in their Title IX paper,

Consider the following example illustrating conditionalization. Suppose an agent initially had degree of belief .2 that it will rain hard, degree of belief .3 that it will rain not-hard, and degree of belief .5 that it will not rain. If that agent learns that it will rain, he should reduce his degree of belief in the proposition that it will not rain to zero and increase his degrees of belief in the other propositions a corresponding amount. Updating based on this information will yield degrees of belief as follows: .4 that it will rain hard, .6 that it will rain not-hard, and 0 that it will not rain.
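Conditionalization here just zeroes out the ruled-out possibility and renormalizes what is left. A sketch of the quoted rain example:

```python
def conditionalize(credences, evidence_worlds):
    """Set ruled-out worlds to 0 and renormalize the survivors."""
    total = sum(c for w, c in credences.items() if w in evidence_worlds)
    return {w: (c / total if w in evidence_worlds else 0.0)
            for w, c in credences.items()}

prior = {"rain hard": 0.2, "rain not-hard": 0.3, "no rain": 0.5}
# Learn that it will rain: only the two rain worlds survive.
print(conditionalize(prior, {"rain hard", "rain not-hard"}))
# {'rain hard': 0.4, 'rain not-hard': 0.6, 'no rain': 0.0}
```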

The theory of degrees of belief was, I think, first formulated by Ramsey. You can think of having a .5 degree of belief that it will rain as saying you think there is an equal chance of rain or no rain. Some people think that degrees of belief do not make sense, however: what does it mean to hold a “half” belief, or something along those lines?

This is all I want to say about Bayesianism; I mention it mainly because Jeffrey’s expected utility theory discusses and uses it.

Agreement and disagreement

Another sub-field of formal epistemology concerns disagreement. How do rational agents reconcile their beliefs? One line of thought is that if someone has the same access to knowledge and evidence as you on some question, we should call this person an epistemic peer. In academia, it may be two leading scholars who have read all the same papers as each other. In general, though, it is probably best to assume that everyone is an epistemic peer at the most general level. That is, why should you assume that you are more rational as a whole than someone else? Your epistemic reasoning is just as good as any other human’s. There may be differences within a sub-domain, say a plumber versus an electrical engineer, but on the whole they are similar.

One plausible way to handle disagreements about beliefs is to give other people’s views equal weight to our own. Say you have a .7 degree of belief in P while someone else has .3; equal weight requires you to meet in the middle at .5. However, this gives way to spinelessness: if you split the difference with everyone who disagrees, you can no longer hold a firm view on anything. You could have a very strong credence that murder is bad, but giving equal weight would mean lowering your .999 credence if enough people think murder is permissible or good.

Similarly, it causes you not to trust yourself. If you weight your own view as just 1/N, you ignore the fact that you may have spent longer and thought harder about P than anyone else.

There is another epistemic concept, the expert or guru. We treat the weather person as an expert in that if they say there is a .3 chance of rain, we adopt that credence as well. If a Stanford Medical School professor says there is a .9 chance of me surviving my disease, I am going to hold that credence too. However, the problem of identifying an expert is difficult, especially when people do not understand why someone should count as one.

Another concept relevant to this post is the good faith principle. This states that in any interaction, without overwhelming evidence to the contrary, you should assume that each person is arguing out of positive motives, “to make the world a better place”. The good faith principle holds that even in the midst of some bad faith actors in a system, it is generally better to assume everyone acts out of good faith and is not trying to pull one over on you.

League of Legends

The game I want to apply most of this to is League of Legends. League is one of the most complex competitive games because of the size of its decision problems: a massive set of acts, utilities, and credences. The goal of League is to destroy the enemy base. The game is usually divided into early game, mid game, and late game, which all present different decisions, and the decisions you make in each stage affect the later ones. The utilities and available decisions change at each stage depending on your character, your team, the other team, and where you are in terms of objectives and gold.

For a jungler, you should always be doing something on the map: clearing wards, ganking lanes, taking camps, or counterjungling. You should also try to make plays for global objectives such as towers, dragons, and Rift Herald or Baron. At each moment, you have to weigh your options. Say dragon is about to spawn: you could go top side and take your jungle camps, you could take Rift Herald, or you could gank top lane for a kill or assist and maybe a tower or some plates. However, the chances of those succeeding are not 1. If your top laner is bad, you have to lower your credence in the gank working. If the enemy top laner is a smurf, you should lower it further. If you think the enemy jungler is heading bot, you should raise your credence in the value of going bot to countergank and contest dragon.

At any point in the game, the utilities remain static: the towers are worth the same gold and the dragons stay the same. The only things that change are the credences. The mark of a good League player is having the proper credences in their acts. If you are bot lane and confident that once you hit level 2 you can kill the other pair, you should go for it; even if your credence in getting first blood is only .9, the expected utility calculation still favors it. The good League player is one who is epistemically rational in all these ways and also has the tech skill to back it up. This is why someone like Faker is hailed as the best player: his tech skill is near perfect, and his credences about what to do are probably near perfect as well.
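Put together, the jungler’s choice is a Jeffrey-style expected utility problem: fixed utilities, credences that shift with the game state. A toy sketch (every number here is invented for illustration):

```python
# Hypothetical numbers: the utility of each play and your credence in pulling it off.
plays = {
    "gank top":       {"utility": 300, "credence": 0.30},  # top laner is behind
    "take rift":      {"utility": 250, "credence": 0.70},
    "farm top camps": {"utility": 150, "credence": 0.95},
    "contest dragon": {"utility": 400, "credence": 0.50},
}

def expected_utility(play):
    return plays[play]["utility"] * plays[play]["credence"]

best = max(plays, key=expected_utility)
print(best, expected_utility(best))  # contest dragon 200.0
```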

Part of the problem with League is that there are, in some sense, nine epistemic opponents. You can control your tech skill and your credences, but you cannot control the other players. You could have a credence of 1 in some play, but if your teammates have credences of .1 in it, they will not follow you or work with you. This is usually when the good player starts typing in chat, and the arguments come in.

The other problem with League is that, I think, most people cannot figure out whether they are good or bad, or what they are doing wrong. Someone can play well in the early game, where the decision problem and the epistemic difficulty are controlled mostly by themselves, and there are no macro-level objectives to worry about yet. In the mid game, the difficult part, they have to manage getting minions, towers, and objectives, and rotating for team fights. Late game is usually just running around as five people, team fighting, and trying not to get caught out. This is why in silver elo we see good laners who just want to wait until late game, because they understand what to do there better.

Someone could be a great early game player but keep losing games because of their mid game play. They could even have the most kills, and their teammates may not know what to do in the mid game either, so everyone is at a loss about what to improve. It seems as if they win some games and lose some games, so it must all depend on their teammates.

So when disagreement about what to do happens in a game, the good players may know they are right, but bad players think they are right as well. You can also see that many people in League of Legends are epistemically irrational, even bad faith actors. Once someone makes a really bad play, say the jungler goes to their top camps when dragon is about to spawn, it seems like this jungler could not care less about being good at the game. Usually what happens is you type something in chat and they respond “fuck you” or something along those lines.

Conclusion

We can model any game using expected utility theory. We can account for tech skill as simply raising or lowering credences in performing some act. Even for someone who has never played Melee, it is not that they cannot wavedash; the act is possible, it is just that they should have a very, very low credence in pulling it off. At any point in a game, the best player will know the entire set of acts available, the utilities they assign to them (or that the game assigns), and their credences in being able to realize them, since outcomes depend on your actions, your team’s actions, and the enemies’ actions.
