It's poorly communicated, but it is a real effect. You're effectively moving money from the very rare case where you win a thousand times in a row to the common case where you don't. So instead of vanishingly rare cases where you have all the money in the universe and common cases where you go bankrupt, you get a median case where you make a little money, rare cases where you make something like a 2x return, and a sizable minority of cases where you lose most of your money.
The key is that the game (multiply by 1.5 or by 2/3 ≈ 0.66) has a positive arithmetic expectation and zero geometric expectation, so you should be able to profit from it somehow. Do it with a double-or-nothing game and you always lose, because the geometric expectation is negative infinity while the arithmetic one is zero.
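A quick check of those two claims in Julia (same language as the simulation further down):

(1.5 + 2/3) / 2   # ≈ 1.083: the arithmetic expectation is positive
sqrt(1.5 * 2/3)   # = 1.0: the geometric mean multiplier is exactly 1, zero log growth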
The expected value of that strategy is still negative.
You get a high chance of winning a dollar, with a low chance of losing everything. It's just that the chance of losing everything isn't low enough.
tl;dr: doubling winnings on each throw of heads and paying out on the first tail is a game with expected winnings diverging to positive infinity, yet probably no one would pay more than a few bucks to enter
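A minimal Julia sketch of that game, under the assumption that the pot starts at $1 and doubles on each head, paid out at the first tail. The sample mean keeps creeping upward as the number of plays grows, because ever-rarer head runs contribute ever-larger payouts:

using Statistics

# Flip until the first tail; the payout doubles with each head.
function petersburg()
    payout = 1.0
    while rand(Bool)    # heads with probability 1/2
        payout *= 2
    end
    payout
end

for n in (10^3, 10^5, 10^7)
    println(n, " plays: mean payout ≈ ", mean(petersburg() for _ in 1:n))
end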
The median outcome is indeed negative, for the reason you give. But the mean outcome is positive, because some players become exceedingly rich.
You can try it at home; here's some Julia that runs it for 1M people, each making 1K flips:
using Distributions

n = 1000                       # flips per person
d = Binomial(n, 0.5)           # number of heads out of n fair flips
# Each head multiplies wealth by 1.5, each tail by 0.6 (starting from 1).
to_wealth(heads) = 1.5^heads * 0.6^(n - heads)
# Average final wealth across 1M simulated people.
rand(d, 1_000_000) .|> to_wealth |> mean
You can keep re-running that; the result is above 1 almost all the time.
To look at this another way — would you take the other side of the bet? Someone on average has to be making money, and the other side is clearly losing money.
Am I missing something? It doesn’t seem counterintuitive to me that repeatedly making an all-or-nothing bet with a non-zero chance of losing will eventually cause my expected value of capital to go to zero.
I like the presentation style, though, and the allusion to Shannon's theorem, although I didn't quite grasp the connection.
If you were paying $50 to play this game, you would have to play around 2^50 times (each time paying $50) just to break even, since the average payout over n plays only grows roughly like log2(n). I think the article is a good illustration of how expected value can't really be used as the only parameter.
>Here’s an example. You can play one of two games an infinite number of times in an instantaneous amount of time: one game you flip a coin and win a dollar on heads, tails you pay 99 cents. The other game you flip a coin and either win 1,000,000 dollars or pay 990,000 dollars.
> A not so obvious result that follows from making successive negative expected value bets, is that in the long run you are guaranteed to lose all your money (or ruin). Intuitively this makes sense as with each bet, you are losing money on average.
Expected value doesn't tell you much about the outcome of successive bets. Someone else can probably explain this better since it comes up on HN a lot (something about ergodicity and the difference between ensemble average and time average).
A quick example: play a game of double or nothing on coin flips. This is a "fair game" because you pay x and get back 2x * 0.5 + 0 * 0.5 = x in expectation. But if you play more than one game, reinvesting your winnings, you will very quickly hit a "nothing" and can't continue.
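A minimal sketch of that in Julia: the ensemble average stays near the $1 stake (the game really is fair in expectation), yet after ten flips almost every individual player is at zero.

using Statistics

# Bet the entire bankroll on double or nothing, ten flips in a row.
function play(flips)
    bankroll = 1.0
    for _ in 1:flips
        bankroll *= rand(Bool) ? 2.0 : 0.0
    end
    bankroll
end

paths = [play(10) for _ in 1:1_000_000]
mean(paths)                          # ≈ 1.0: ensemble average of a fair game
count(>(0), paths) / length(paths)   # ≈ 0.001 (2^-10): nearly everyone is broke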
This is a technique often used in gambling games (poker app minigames, etc.).
They offer a series of actions that let you lose or gain x%, n many times in a row, with an equal chance of each. The fact that a +x% gain doesn't cancel out a -x% loss, compounded with laypeople's expectation that it does, produces horrific odds for microtransactions.
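The compounding is easy to check, using ±10% as a stand-in for x:

up, down = 1.10, 0.90
up * down         # 0.99: one up round plus one down round is a net -1%
(up * down)^100   # ≈ 0.366: a hundred such "fair" pairs loses about 63%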
Interesting. I'm not sure if my moves were mathematically right. I basically played as though I had one shot, i.e., I pull a lever with a 5% chance once and I either win or get nothing. Similarly for the losing position. However, if I got to play a lot, my answers would be completely different. While the expected value of a $1000 prize at a 13% chance over many plays is $130 per play, the most likely outcome of a single 13% play is zero dollars.
I think the problem is with the assumption that expected wins are a good guide of behavior at the granularity of individual plays of the game.
The expected win is infinite because very rare scenarios have huge payouts. However, any one play of the game has a 50% chance of paying out nothing. If I get to play the game a large enough number of times for the rare scenarios to actually occur, then I would be willing to pay a higher price than if I only got one shot at it.
Yeah. OP might be conflating arithmetic mean with geometric mean. This is a common mistake that many investors make. I see it all the time.
People mistakenly think that if their investment made 50% and then lost 50%, they've broken even, but they're actually down 25% (1.5 * 0.5 = 0.75). However, if you invest $100 and make 50%, then invest a second $100 and lose 50%, you do indeed break even.
When OP stated that the bet has a positive EV, it's for a flat $1 bet. Indeed, if you always bet $1 (or any fixed amount), it does have a positive return of +$0.05 per bet, and you should take it.
It's only when you change it from a $1 bet to betting your entire bankroll that you're looking at a geometric mean multiplier of 0.949 per bet. That number is simply the geometric mean of the two possible multipliers, 1.5 and 0.6.
So the compound growth rate is sqrt(1.5 * 0.6) - 1 ≈ -0.051, i.e. about -5.1% per bet (normalized to $1.00).
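Both numbers are easy to confirm in Julia (same game as the simulation upthread: heads multiplies by 1.5, tails by 0.6, i.e. ±$0.50/$0.40 on a flat $1):

using Statistics

flat(n)     = sum(rand(Bool) ? 0.5 : -0.4 for _ in 1:n)    # flat $1 bets
compound(n) = prod(rand(Bool) ? 1.5 : 0.6 for _ in 1:n)    # whole bankroll

mean(flat(100) for _ in 1:100_000)        # ≈ +5.0, i.e. +$0.05 per bet
median(compound(100) for _ in 1:100_000)  # ≈ 0.949^100 ≈ 0.005: near ruin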
I don't understand the point this article is trying to make about anything else. The entire effect here is explained either by misstating the problem or by using the wrong type of mean for the EV calculation.
The idea that some lucky people will make money while the rest lose is explained by simple luck. Run the simulation longer and they will all lose out to the law of large numbers.
That also assumes single winners; more interestingly, it ignores the diminishing marginal utility of money.
Consider your personal utility function: at age 70, what odds would you need to play double or nothing with your entire life's savings? If it takes more than 50.1:49.9 odds, then you, like most people, don't have a linear view of money.
> As Motley Fools [1] like to point out, the most you can lose is 100%, whereas the upside is unbounded.
Reminds me of the gambler's ruin paradox. In some games, even if the expected payout from each round is positive, it can be shown that the gambler's wealth goes to 0 with probability 1 over sufficiently long time scales.
Why is the expected value infinite? It seems fairly obvious that you will not make much money in such a game, as we all know you can't flip a coin many times without tails showing up. To make infinite money in such a game you would need effectively "infinite luck", no?
This is a dishonest piece. It ignores that it's based on a zero-sum game and the world isn't zero sum. The quoted economists know that very well.
I like that the coin flip game illustrates the concept of compounding interest, but it doesn't model wealth creation at all.
Most new ventures aren't I-win-you-lose, they're we-win-or-I-lose. Wealthy people really can take bigger bets more frequently like the article suggests, but it's not necessarily at the expense of everyone else.
A more accurate illustration would be a game where each round you have a choice: bet 25% of your money or receive $0.30. After each round, you must pay $0.25 to play again. Some people start the game with no money, some people start with $1.00.
If you think this game through, you'll still end up with super wealthy outliers and bankruptcies, but the players in the game actually have some agency.
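Here's a sketch of one round in Julia, with a caveat: the bet's odds aren't specified above, so as a stand-in this assumes the staked 25% rides on the article's coin flip (×1.5 on heads, ×0.6 on tails):

# Hypothetical rules: stake 25% of your money on the article's coin flip
# (an assumed payoff; the comment leaves the odds open) OR take a safe
# $0.30, then pay the $0.25 fee to stay in the game.
function play_round(money; work::Bool)
    if work
        money += 0.30
    else
        stake = 0.25 * money
        money += stake * ((rand(Bool) ? 1.5 : 0.6) - 1.0)   # net win/loss
    end
    max(money - 0.25, 0.0)   # the per-round fee, floored at broke
end

Someone starting at $0.00 is forced to work (play_round(0.0; work = true) nets $0.05), while someone starting at $1.00 can afford to gamble; that's the agency the parent comment is pointing at.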
> Every time the coin turns out heads, I take half of the money on the table (i.e the total you have left is divided by 2). Every time it turns out to be tails, your money is multiplied by 1.5 (plus a 50 cents bonus). It’s obvious that this game is totally rigged to my advantage as the amount of heads will converge to be similar to the amount of tails, but the loss you incur with each head result (money divided by 2) is larger than the win gained with each tail result (money multiplied by 1.5 + change).
This game has positive expected value; the only problem is risk of ruin if you size your bets too aggressively as a percentage of your chip stack.
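One way to make "too aggressively" precise is the Kelly criterion. A quick Julia scan for this game (staking a fraction f of the stack on the ×1.5/×0.6 flip):

# Expected log growth per flip when staking a fraction f of the bankroll.
g(f) = 0.5 * log(1 + 0.5 * f) + 0.5 * log(1 - 0.4 * f)

fs = 0:0.001:1.0
fs[argmax(g.(fs))]   # 0.25: the Kelly-optimal bet is a quarter of the stack
g(0.25)              # ≈ +0.0062 log growth per flip: slow, steady profit
g(1.0)               # ≈ -0.0527: going all-in shrinks wealth ~5% per flip

Incidentally, that optimal 25% matches the fraction another commenter proposed above.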
The +50% / -40% is cleverly chosen, because it seems like the bet is weighted toward the gambler if you’re just using a naïve expected value.
However, if you were to make it “double your money” (+100%), it would become clear that the only fair downside would be “halve your money” (-50%). For these values, the “trick” becomes much more obvious: that increases in repeated games need to be far greater in percentage terms than decreases (i.e. not just a 10% difference) in order to balance out.