
Linearity of expectation applies to any two random variables, not just independent ones. The issue is that these games are not random variables; they are random processes (sequences of random variables) with state affected by the choices you make and by previous outcomes. Linearity of expectation is irrelevant here.



Linearity of expectation is of course true in general, but in this case summing the expected waiting times implied by the individual transition probabilities is only valid because the only valid state transitions are from state i -> i+1 (owning i unique toys to i+1 unique toys) or staying in state i. If you could magically go from state i to any other state (losing unique toys, or gaining more than one at once), this simple calculation would not be correct.
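As a sketch of why that works here (assuming the standard coupon-collector setup with n toy types drawn uniformly at random): from state i the chance of advancing to i+1 is (n-i)/n, so the expected wait in state i is n/(n-i), and linearity lets you sum those waits.

```python
import random

def collect_all(n, rng):
    """Number of uniform draws until all n toy types have been seen."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

def expected_draws(n):
    """Sum of expected waits per state: state i advances w.p. (n - i)/n."""
    return sum(n / (n - i) for i in range(n))

rng = random.Random(0)
n = 10
sim = sum(collect_all(n, rng) for _ in range(20000)) / 20000
print(sim, expected_draws(n))      # both close to 29.29 for n = 10
```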

I think the 'nonlinearity' they are referring to is the difference between expected value and probability. It's possible that A beats B most of the time, and yet the average amount by which A beats B is negative, because when A does lose it loses big.
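A made-up illustration of that gap: suppose A beats B by $1 with probability 0.9 but loses by $100 with probability 0.1, so A wins most of the time while E[A - B] = 0.9·1 - 0.1·100 = -9.1.

```python
import random

rng = random.Random(1)
# Hypothetical game: A wins $1 w.p. 0.9, loses $100 w.p. 0.1.
diffs = [1 if rng.random() < 0.9 else -100 for _ in range(100000)]

win_rate = sum(d > 0 for d in diffs) / len(diffs)
mean_diff = sum(diffs) / len(diffs)
print(win_rate, mean_diff)         # win rate near 0.9, mean difference negative
```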

> I never meant to claim [expected value] doesn't matter for "anything at all", just not for this envelope game.

When you say "this envelope game", I assume you are also including minor variations of this game?

If you mean specifically this exact envelope game, where the expected value for all actions is zero, then it is strictly true that it doesn't matter how we make decisions - using expected value or not - though it's not a particularly interesting point to make.

I will proceed assuming you mean also including minor variations of this game.

You wrote this earlier:

> That is, the probability is 1/2. That's all that matters to the decision making. Calculating an "expected value" at all is completely useless, whether or not you do it "correctly".

We can make a minor variation to the game such that the probability of winning by switching stays at 1/2, but the expected value of switching changes. We can choose the variation so that switching is profitable, or so that it is unprofitable, all while the probability remains at 1/2. According to you, the probability is all that matters, and it's useless to calculate the expected value for a game like this? That policy would lead to unprofitable decisions in a game with a small variation as described above.
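For concreteness, here is one such hypothetical variation (the +$10/-$5 payoffs are made up): the other envelope holds your amount plus $10 or minus $5, each with probability 1/2, so the probability of gaining by switching stays at 1/2 while the expected value of switching is +$2.50.

```python
import random

rng = random.Random(2)
start = 100                        # amount in your envelope (arbitrary)
gains = []
for _ in range(100000):
    # The other envelope holds start + 10 or start - 5, each w.p. 1/2.
    other = start + (10 if rng.random() < 0.5 else -5)
    gains.append(other - start)

p_win = sum(g > 0 for g in gains) / len(gains)
ev = sum(gains) / len(gains)
print(p_win, ev)                   # p_win near 0.5, ev near +2.5
```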


I expect a negative expected value game, and I get a negative expected value game.

> That's why you can solve a single subgame in Two Envelopes game independently of other subgames, even though it's an imperfect-information game.

Obviously the EV of envelope 1 is the contents of envelope 1. If you know the probability of reaching it, you can calculate that envelope's contribution to the expected value by multiplying its contents by that probability. But why are you multiplying by 1/2? Probability is defined in terms of sets. What is the set that makes it 1/2? Does that set contain only the subgames that are part of the subgame you are in?


Why would you expect [Expected Value of the amount of liability per unit time] to be anything but linear in the rate of attempted operations (within the envelope where each operation is a routine, non-interfering, statistically independent event)?

> Expected value doesn't mean jack shit if the game can only be played once.

Thinking like this was the mistake I made.

While you can play a given game only once, your life will contain plenty of such games. So there definitely is a relevance to "expected value". And this is easy to simulate with a program. The expected wealth of those who take the chance when the "local expected value" is better than the certain outcome does tend to be higher.
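A minimal simulation of that point, with made-up numbers: each of 50 "games" offers either a certain $100 or a 50% chance at $250 (local expected value $125).

```python
import random

rng = random.Random(3)
n_people, n_games = 10000, 50

def lifetime_wealth(take_chance, rng):
    """Total winnings over n_games rounds of: certain $100 vs 50% shot at $250."""
    wealth = 0
    for _ in range(n_games):
        if take_chance:
            wealth += 250 if rng.random() < 0.5 else 0  # local EV = 125
        else:
            wealth += 100                               # certain outcome
    return wealth

risk_takers = [lifetime_wealth(True, rng) for _ in range(n_people)]
safe_players = [lifetime_wealth(False, rng) for _ in range(n_people)]
print(sum(risk_takers) / n_people, sum(safe_players) / n_people)
```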


> negative expected value

This would only be true if the value of money were linear.

The value of money is not linear.

A 10% chance of a $1000 loss can be far more harmful than a 100% chance of a $100 loss, especially at low income.
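A sketch of that asymmetry under an assumed log utility and an assumed wealth of $2000: both options lose $100 in expectation, but the risky one has lower expected utility.

```python
import math

wealth = 2000                      # assumed starting wealth (low income)

# Option A: 10% chance of losing $1000; Option B: certain loss of $100.
ev_a = -0.1 * 1000                 # expected dollar loss: -100
ev_b = -100                        # expected dollar loss: -100

# With U(v) = log(v), the rare big loss hurts more than the certain small one.
eu_a = 0.9 * math.log(wealth) + 0.1 * math.log(wealth - 1000)
eu_b = math.log(wealth - 100)
print(eu_a, eu_b)                  # eu_a < eu_b despite equal expected loss
```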


How does that work? By linearity of expectation, if the expected value of buying 1 ticket is negative, then the expected value of buying multiple tickets should be even more negative.
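That scaling is easy to check numerically with a made-up lottery (all numbers are assumptions): a $1 ticket pays $50 with probability 1%, for an expected value of -$0.50 per ticket.

```python
import random

rng = random.Random(4)

def ticket_value(rng):
    """Net result of one hypothetical $1 ticket paying $50 w.p. 1%."""
    return (50 if rng.random() < 0.01 else 0) - 1   # EV = 0.5 - 1 = -0.5

n = 200000
ev1 = sum(ticket_value(rng) for _ in range(n)) / n
ev5 = sum(sum(ticket_value(rng) for _ in range(5)) for _ in range(n)) / n
print(ev1, ev5)                    # ev5 tracks 5 * ev1 by linearity
```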

Because expectation is linear and the total amount of money in play never changes, the expected changes for all players over a given turn sum to 0. If no players have run out of money, symmetry then means the expected change for each player over a single turn is zero.

If instead we have m players with money and b broke players, each player still has an equal expected number of dollars received, and the m players with money each expect to give 1 dollar. Summing this, we have a total expected change of (m + b)E(received) - m, which must equal zero, meaning E(Received) = m/(m + b), so players with money expect a change of -b/(m+b) and players without money expect a change of m/(m+b).

This tells us that the expectation for a turn is basically always zero and never gets above zero in a way that allows accumulation of wealth for a single player. So over long periods of time we should expect this to look like a drunk walk with a weird distribution.
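A simulation of one such turn (rule as described: every player with money gives $1 to a uniformly random player, possibly themselves), checking the -b/(m+b) and m/(m+b) per-player expectations with m = 15, b = 5:

```python
import random

rng = random.Random(5)
m, b = 15, 5                       # players with money, broke players
trials = 50000

def play_turn(wealth, rng):
    """Every player with money gives $1 to a uniformly random player."""
    delta = [0] * len(wealth)
    for i, w in enumerate(wealth):
        if w > 0:
            delta[i] -= 1
            delta[rng.randrange(len(wealth))] += 1
    return [w + d for w, d in zip(wealth, delta)]

change_rich = change_broke = 0
for _ in range(trials):
    after = play_turn([5] * m + [0] * b, rng)
    change_rich += sum(after[:m]) - 5 * m
    change_broke += sum(after[m:])

print(change_rich / (trials * m), -b / (m + b))   # per rich player, about -0.25
print(change_broke / (trials * b), m / (m + b))   # per broke player, about 0.75
```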


In short, no. You are calculating the expected value differently in the two problems.

I'd add a nit - this is the expected value of the game. I might have a utility preference curve that is non-linear in dollars - for instance, the classic U(v) = log(v).

> When you say "this envelope game", I assume you are also including minor variations of this game?

No; it seems obvious to me that the argument for switching presented in the original Wikipedia article is meant to apply only to the envelope game as it's presented. Of course introducing variations could easily change the meaningfulness of the article's premise. That is, it wouldn't be considered a "problem" or a "paradox" if calculating an "expected value" were actually meaningful.

> then it is strictly true that it doesn't matter how we make decisions - using expected value or not - though it's not a particularly interesting point to make

True, but my point was more of a question: if it's obvious that calculating an "expected value" is irrelevant in this specific case (as the article says, "It may seem obvious that there is no point in switching envelopes as the situation is symmetric"), why is the argument presented in the article considered compelling? That is, either it's not actually compelling, or the "expected value" being meaningless in this case is not necessarily so obvious at first... but if so, why not? (Or, to put it another way, why is the argument in the article compelling enough to warrant such a long Wikipedia page with so many proposed "resolutions"?)


With the example of U given, U(2 apples + 2 oranges) > U(5 apples), which would not be true for a linear variant of U. You could transform this utility function into exp(U) = apples^a * oranges^(1-a), to make it more linear.

But I believe in general only linear transformations preserve all aspects of a utility function, when you start to look at the expected utility of a bet with probability p of outcome a and probability (1-p) of outcome b, which has expected utility E[U] = pU(a)+(1-p)U(b). I can imagine that it is harder to accept that this view of utility functions actually models human behavior.
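As a sketch of the apples/oranges case with a = 0.5 (the weight is an arbitrary choice): with U(apples, oranges) = a·log(apples) + (1-a)·log(oranges), exp(U) is exactly the apples^a · oranges^(1-a) form above, and any bundle with zero of either good has utility -inf, so U(2, 2) > U(5, 0).

```python
import math

a = 0.5  # assumed weight between apples and oranges

def U(apples, oranges):
    """Log (Cobb-Douglas) utility: exp(U) = apples**a * oranges**(1 - a)."""
    if apples <= 0 or oranges <= 0:
        return float("-inf")       # no amount of apples substitutes for zero oranges
    return a * math.log(apples) + (1 - a) * math.log(oranges)

print(U(2, 2), U(5, 0))            # log 2 vs -inf
```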


If something decreases both expected value and variance, it's pretty hard to call it gambling.

You just painstakingly carried out the absolute analysis, which I know works.

Let X be the value in the envelope you have, and Y in the other one (X and Y are both random variables with well-known distributions). Then E[Y/X] = 1.25. That's what I wanted you to explain. You just keep saying that E[X] = E[Y], which I know.

Note that this paradox would not arise in the same form if X and Y were independent, since then E[Y/X] would factor as E[Y] · E[1/X]; the tension here comes from their dependence.
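A simulation of that setup (the uniform prior on the smaller amount is an arbitrary choice): the pair is (a, 2a) and you hold each envelope with probability 1/2, so Y/X is 2 or 1/2 equally often and E[Y/X] = 1.25 even though E[X] = E[Y].

```python
import random

rng = random.Random(6)
ratios, xs, ys = [], [], []
for _ in range(200000):
    a = rng.uniform(1, 100)        # smaller amount; prior is an arbitrary choice
    pair = (a, 2 * a)              # the two envelopes hold (a, 2a)
    if rng.random() < 0.5:         # you hold each envelope with probability 1/2
        x, y = pair
    else:
        y, x = pair
    xs.append(x)
    ys.append(y)
    ratios.append(y / x)

n = len(ratios)
mean_ratio = sum(ratios) / n
mean_x, mean_y = sum(xs) / n, sum(ys) / n
print(mean_ratio, mean_x, mean_y)  # mean_ratio near 1.25 while mean_x ~ mean_y
```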


Take a step back and look at his description of the bet and equation (2).

It says “a simple gamble”. It doesn’t say anything about an infinite series of iterations of the bet. It’s a single-step problem. Do you play once or do you pass?

(If the question is “do you want to play twice (or N times)” it’s also effectively a single-period problem. One just has to consider the distribution of outcomes after two (or N) rounds.)

The usual EUT resolution is what I just described, which you don’t find problematic. He does find it problematic, because for him calculating an expectation is interacting with a copy of yourself in a parallel universe or something.

The reason why he talks about infinite sequences of games is not because the problem is about an infinite sequence of games. To solve the simple problem he has to hypothesize that there is an infinite sequence of them.
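The gap he cares about can be made concrete with a standard multiplicative-bet illustration (the 1.5x/0.6x factors are a textbook-style assumption, not from this thread): the per-round expected factor is 1.05, so the ensemble average grows, but the expected log-growth per round is negative, so a typical single trajectory shrinks.

```python
import math
import random

rng = random.Random(7)

# Multiplicative bet: wealth x1.5 on heads, x0.6 on tails.
# Expected factor per round: 0.5*1.5 + 0.5*0.6 = 1.05 > 1.
# Expected log-growth per round: 0.5*ln(1.5) + 0.5*ln(0.6) < 0.
growth = 0.5 * math.log(1.5) + 0.5 * math.log(0.6)

ensemble = []
for _ in range(20000):
    w = 1.0
    for _ in range(50):            # one 50-round trajectory
        w *= 1.5 if rng.random() < 0.5 else 0.6
    ensemble.append(w)

mean_wealth = sum(ensemble) / len(ensemble)
median_wealth = sorted(ensemble)[len(ensemble) // 2]
print(growth, mean_wealth, median_wealth)  # mean grows, typical trajectory shrinks
```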


There is a subtle error in the way this article is phrased. Since the outcomes are mutually exclusive, you can sum them when computing the expectation. And you're also neglecting all the high-probability, low-gain outcomes.