Yes, but value is not created out of thin air: you need capital to create value, i.e. you need to be wealthy.
So you could extend the yard-sale model and include a rule that after each round, each $ will turn into $1.5 with some probability.
Guess what? Now the rich players don't just get to risk higher stakes in the coinflip games, they also have more opportunities to introduce new money into the game through value creation.
So that would make the outcome even more extreme than the standard yard-sale model, not less.
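For what it's worth, this is easy to check numerically. Below is a minimal sketch (the 100-player population, the 20% stake, and the 1.5x growth lottery are my own parameter choices, not from the article): each round players pair off and flip a fair coin for a fraction of the poorer partner's wealth, optionally followed by the per-player "value creation" lottery.

```python
import random

def yard_sale(players=100, rounds=2000, stake=0.2, growth_p=0.0, seed=1):
    """Toy yard-sale model. Each round, players pair off at random and
    flip a fair coin for `stake` times the poorer partner's wealth.
    With probability `growth_p`, a player's wealth is then multiplied
    by 1.5 (the "value creation" rule suggested above)."""
    rng = random.Random(seed)
    w = [1.0] * players
    for _ in range(rounds):
        order = list(range(players))
        rng.shuffle(order)
        for a, b in zip(order[::2], order[1::2]):
            t = stake * min(w[a], w[b])
            if rng.random() < 0.5:
                w[a] += t; w[b] -= t
            else:
                w[a] -= t; w[b] += t
        for i in range(players):
            if growth_p and rng.random() < growth_p:
                w[i] *= 1.5
    return sorted(w, reverse=True)

base = yard_sale()
grown = yard_sale(growth_p=0.05)
print("top-10%% share, no growth:   %.2f" % (sum(base[:10]) / sum(base)))
print("top-10%% share, with growth: %.2f" % (sum(grown[:10]) / sum(grown)))
```

In runs like this, wealth condenses onto a few players either way; the growth rule mostly inflates the absolute gaps between rich and poor.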
This is a dishonest piece. It ignores that it's based on a zero-sum game and the world isn't zero sum. The quoted economists know that very well.
I like that the coin flip game illustrates the concept of compounding interest, but it doesn't model wealth creation at all.
Most new ventures aren't I-win-you-lose, they're we-win-or-I-lose. Wealthy people really can take bigger bets more frequently like the article suggests, but it's not necessarily at the expense of everyone else.
A more accurate illustration would be a game where each round you have a choice: bet 25% of your money or receive $0.30. After each round, you must pay $0.25 to play again. Some people start the game with no money, some people start with $1.00.
If you think this game through, you'll still end up with super wealthy outliers and bankruptcies, but the players in the game actually have some agency.
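Curious, I sketched this out. Heavily hedged: the rules above don't say what the 25% bet pays, so I'm assuming a fair coin flip that wins or loses the staked 25%, that the $0.25 fee applies every round regardless of choice, and two fixed strategies: a "grinder" who always takes the stipend, and a "gambler" who bets whenever a loss would still leave the fee covered.

```python
import random

ROUNDS = 1000
FEE, STIPEND, BET_FRAC = 0.25, 0.30, 0.25

def grinder(start):
    # Always take the stipend: a guaranteed +$0.05 per round.
    return start + ROUNDS * (STIPEND - FEE)

def gambler(start, rng):
    # Bet 25% of current wealth on a fair coin flip whenever affordable;
    # fall back to the stipend when too broke to cover the fee after a loss.
    w = start
    for _ in range(ROUNDS):
        if w * (1 - BET_FRAC) > FEE:
            w += BET_FRAC * w if rng.random() < 0.5 else -BET_FRAC * w
        else:
            w += STIPEND
        w -= FEE
    return w

rng = random.Random(0)
finals = sorted(gambler(1.00, rng) for _ in range(2000))
print("grinder ends with:", round(grinder(1.00), 2))
print("gambler median:", round(finals[1000], 2), "| gambler max:", round(finals[-1], 2))
```

Under this particular pair of strategies nobody literally goes bankrupt (the stipend acts as a floor), but the spread between the median gambler and the lucky outliers is already visible.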
Sure. The question is more like, should you be willing to pay $1 million to play? And honestly, if you can't afford to burn $1 million, that would be a pretty bad idea.
However! Suppose you were allowed to modify things ever so slightly. Suppose you get to say, "Sure, I'll pay $1 million to play. But instead of playing once and taking my winnings, I want to play a million times, and only take one millionth of whatever I win each time." You haven't done anything to change the average payout of the game. But you've made it a much better idea to play, by reducing the variance.
That's what it means to say the game has infinite expected value. You really should be willing to pay anything to play it, as long as you're allowed to repeat the game over and over.
Of course, if you're stuck with playing the game just once, then lots of games look like a bad idea despite having a positive expected value. The St. Petersburg game is hardly alone here. For example, suppose you have a 1-in-a-million chance of winning $10 billion, but it costs $1000 to play. Should you play? The expected value is good, but I don't think most people would touch it. Not much of a paradox there.
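The numbers in that example make the point directly:

```python
# 1-in-a-million shot at $10B for a $1000 ticket: strongly positive EV,
# yet a single play is a near-certain loss.
p_win, prize, cost = 1e-6, 10_000_000_000, 1000
ev = p_win * prize - cost
print("expected profit per play:", ev)           # about $9,000
print("chance a single play loses:", 1 - p_win)  # 0.999999
```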
Why is the expected value infinite? It seems fairly obvious that you will not make much money in such a game, as we all know you can't flip a coin many times without tails showing up. To make infinite money in such a game you would effectively need "infinite luck", no?
The expected value of that strategy is still negative.
You get a high chance of winning 1 dollar, with a low chance of losing everything. Only the chance of losing everything isn't low enough.
tl;dr: doubling winnings on each throw of heads and paying out on the first tail is a game with expected winnings diverging to positive infinity, yet probably no one would pay more than a few bucks to enter
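To see where the divergence comes from, sum the truncated series (assuming the pot starts at $2 and doubles on each head; the exact starting pot only scales the numbers):

```python
from fractions import Fraction

def truncated_ev(n):
    """EV of St. Petersburg cut off after n flips: a first tail on flip k
    pays $2^k, which happens with probability 1/2^k."""
    return sum(Fraction(1, 2**k) * 2**k for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, truncated_ev(n))   # each extra allowed flip adds exactly $1
```

So no single outcome needs "infinite luck": every doubling level contributes the same $1 to the expectation, and there are infinitely many levels.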
>Here’s an example. You can play one of two games an infinite number of times in an instantaneous amount of time: one game you flip a coin and win a dollar on heads, tails you pay 99 cents. The other game you flip a coin and either win 1,000,000 dollars or pay 990,000 dollars.
It's poorly communicated, but it is a real effect. You're effectively moving money from the very rare case where you win a thousand times in a row to the common case where you don't. So instead of having vanishingly rare cases where you have all the money in the universe and common cases where you go bankrupt, you have a median case where you make a little money, rare cases where you make a 2x return or similar, and common but still a minority of cases where you lose most of your money.
The key is that the game (x1.5 on a win vs x2/3 on a loss, so the two factors multiply to 1) has a positive arithmetic expectation and zero geometric expectation, so you should be able to profit from it somehow. Do it as a double-or-nothing game and you always lose eventually, because the geometric expectation is negative infinity while the arithmetic expectation is merely zero.
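A quick check of both claims (taking the loss factor to be exactly 2/3 so the geometric expectation comes out to zero):

```python
import math

def growth_rates(up, down):
    """Arithmetic expected gain per round, and expected log (geometric) growth."""
    arith = 0.5 * up + 0.5 * down - 1
    geo = float("-inf") if down == 0 else 0.5 * (math.log(up) + math.log(down))
    return arith, geo

print(growth_rates(1.5, 2/3))   # (+0.083..., ~0.0): positive EV, flat compounding
print(growth_rates(2.0, 0.0))   # (0.0, -inf): fair EV, certain eventual ruin
```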
So can this be generalized to other collectable items? If, as a hobby, I collect, sell and exchange X, does that mean I will lose money in the long run? Recently there were a few articles about investing in Lego sets; from this article's POV it may not be that good an investment, since the future price is hard to estimate and you will probably guess the direction of a price change right about half of the time. So using this model you will lose money in the long run. Or did I miss something?
Returning to the simulation, the coin experiment can be explained using a different model:

Imagine a position X on a line: |A A A A X B B B B B B B B B B B|. X can move either left or right by the amount specified by the rules of the game, but since one player is poorer, the boundary | is closer to X on that side. X is doing a random walk, so it will move e.g. 5 positions left or 5 positions right with exactly the same probability. But for the poor player, 5 positions to the left means he is left with no money to play again, while for the rich player it just means he lost some of his advantage. If the difference is huge, like 100x, the poor player has basically no chance of winning at all. So this game is only fair if A and B have similar amounts of money.
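This is the classic gambler's-ruin picture, and for a fair walk the chance the poorer player ends up with everything is exactly their share of the total, a/(a+b). A quick simulation (using unit steps, which doesn't change the boundary probabilities as long as the step divides both bankrolls):

```python
import random

def poor_player_wins(a, b, trials=20_000, seed=0):
    """Fair random walk starting at a, absorbing at 0 (poor player ruined)
    and at a+b (poor player has taken everything)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = a
        while 0 < x < a + b:
            x += 1 if rng.random() < 0.5 else -1
        wins += (x == a + b)
    return wins / trials

print(poor_player_wins(4, 11))   # theory: 4 / (4 + 11) ~ 0.267
```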
Yes. Given 1000 players and 1000 turns, if each player starts with $100 in capital under your chosen parameters:
import random

l = 0.33   # fraction of capital lost on a losing flip
w = 0.5    # fraction of capital gained on a winning flip
c = 100    # starting capital per player
m = 1000   # number of players
n = 1000   # number of turns

p = {k: c for k in range(m)}
for k in range(n):
    for j in range(m):
        if random.choice([0, 1]):   # fair coin: win...
            p[j] += w * p[j]
        else:                       # ...or lose
            p[j] -= l * p[j]

print(sum(p.values()) / len(p))                 # average final wealth
print(sum(1 for k in p if p[k] > c) / len(p))   # fraction who gained
print(sum(1 for k in p if p[k] > c) / len(p))
I wrote this up quickly, so there might be an error, but under your stated parameters the average wealth increases over time and most people end up wealthier than they started. Specifically, the fraction of people who are wealthier at the end seems to converge to somewhere between 57% and 60%.
NB: This assumes you bet your entire capital each round instead of a constant bet size. In the presence of non-ergodicity you wouldn't want to do this, but that just means it's an even stronger result that most people come out ahead.
In fact, 33% happens to be the maximum loss percentage this system (50/50 odds, +50% on a win, betting your entire capital each round) can tolerate while still leaving most players wealthier over time :)
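The cutoff falls out of the median outcome: after many rounds a typical player has won about half their flips, so their wealth has been multiplied by roughly (1.5 * (1 - l))^(n/2), which exceeds 1 exactly when l < 1/3. A sketch:

```python
def typical_growth(l, win=0.5):
    """Median wealth multiple per two rounds: one win (x(1+win)) and one
    loss (x(1-l)), betting the whole bankroll each time."""
    return (1 + win) * (1 - l)

print(typical_growth(0.30))   # > 1: the typical player gains
print(typical_growth(1/3))    # = 1: the break-even loss fraction
print(typical_growth(0.36))   # < 1: the typical player shrinks
```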
Thinking about the one single scenario where it goes on forever, as if it were an event unto itself, is misleading imo -- by that reasoning, in that scenario, you would win an infinite amount even if you were only getting paid $1 on each flip. But the EV is clearly not infinite for that game. The increasingly huge rare payouts are essential to the EV calc.
That’s an illusion if you can’t use it directly to make more money. In the abstract sure it’s positive value but for this simulation or any simulation it doesn’t matter.
Just an aside: the first scenario you suggest, in which you have a 1% chance to 1000x your money and a 99% chance to lose it all, has an expected value of 10x your stake. It is absolutely rational to make that bet if you have enough capital to withstand a drawdown of, let's say, 5x your initial bet.
For example, suppose you model this as a game with the following rules:
- you start with $1,000,000
- each turn you may bet $10,000
- if you bet, you roll a d100
- if you roll a 1, you earn 1000x your bet, if you roll anything else you lose your entire bet
- the game ends after 1000 turns or you lose all your money, whichever happens first
If you bet every turn, on average you'll end the game with approximately $60,000,000.
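A Monte Carlo pass over those rules (my reading: a win pays 1000x the bet on top of your bankroll, a loss forfeits the bet, and the game stops once you can't cover a bet):

```python
import random

def play(rng, start=1_000_000, bet=10_000, turns=1000):
    cash = start
    for _ in range(turns):
        if cash < bet:
            break                      # busted out early
        if rng.randrange(100) == 0:    # the d100 shows a 1
            cash += 1000 * bet
        else:
            cash -= bet
    return cash

rng = random.Random(7)
trials = 2000
results = [play(rng) for _ in range(trials)]
print(f"busted: {sum(r < 10_000 for r in results) / trials:.0%}")
print(f"average final bankroll: ${sum(results) / trials:,.0f}")
```

Roughly a third of players never hit a win in their first 100 bets and bust out, but the survivors' jackpots dominate the average.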
EDIT: Did you edit your comment to be 100x or did I misread? Oh well, leaving this here for posterity. If the win outcome is 100x, the EV is still 1.
I suppose. But there's a difference between "mathematical potential payoff" and how much I will risk on the flip of a coin. And as far as I can tell, the amount I pay has no effect on the odds or rules. So, three bucks.
But no, the payoffs don't go to zero: it's heads you double your wealth, tails you lose 40%. That's insurable risk.
(if the payoffs went to zero there would be no benefit to pooling... the only winning move is not to play)
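The benefit of pooling here is easy to compute exactly. With the factors above (x2.0 on heads, x0.6 on tails), a solo player compounds at the geometric mean, while two players who flip independently and then split their combined wealth each round compound at the geometric mean of the averaged factor (this is the standard pooling argument from ergodicity economics, not something taken from the article):

```python
import math

UP, DOWN = 2.0, 0.6

# Solo: expected log-growth per round.
solo = 0.5 * math.log(UP) + 0.5 * math.log(DOWN)

# Pooled pair: per-capita factor is the average of two independent flips.
pooled = (0.25 * math.log(UP)                  # both hit heads
          + 0.50 * math.log((UP + DOWN) / 2)   # one head, one tail
          + 0.25 * math.log(DOWN))             # both hit tails

print(f"solo:   x{math.exp(solo):.3f} per round")
print(f"pooled: x{math.exp(pooled):.3f} per round")
```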
The article pretty pointedly is about social insurance, but it doesn't make a particularly good case for it, since it's a completely abstract model that bears no relationship to the actual reasons wealth and income disparities exist, or to why feeding unemployed people might be a good idea. Rich people don't need to gamble 40% of their wealth on each economic interaction (they're perfectly capable of diversifying their own portfolios), and they very rarely get bailed out with a share of lots of less rich people's earnings when their investments suck.
The non-straw-man version of "mainstream economics" absolutely understands how risk works and literally invented the type of game-theoretic model the author is using to show what he thinks "mainstream economics" is missing.
This is... false, for a number of reasons. Something can be economically rational at scale and still be a gamble, because you do not have infinite funds. More importantly, past results are not indicative of future results in valuations. Thinking that a thing must increase in value because it increased in value previously is perhaps the dumbest fallacy possible in finance.
Option 1: I offer you a 99,999/100,000 chance to lose everything you have, and a 1/100,000 chance to get 100,001 times more. Expected value is net positive. And yet, you'd be pretty dumb to do it. Gambling.
Option 2: I offer you 9/10 chance to get 1.01x more. 1/10 chance to get 0x. Expected loss. Better than 50% odds of gain. Gambling.
Option 3:
Coin toss. Triple your lifespan on heads. Die on tails. Definitely gambling.
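For the record, the expected multipliers on the two monetary options:

```python
# Option 1: almost-certain ruin, tiny shot at 100,001x.
ev1 = (1 / 100_000) * 100_001 + (99_999 / 100_000) * 0
# Option 2: 90% chance of x1.01, 10% chance of x0.
ev2 = 0.9 * 1.01 + 0.1 * 0
print(ev1, ev2)   # ev1 barely above 1, ev2 well below 1
```

Both also have a geometric expectation of negative infinity (a nonzero chance of total ruin), which is part of why they're gambles regardless of the EV sign.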
While I agree, that conclusion is wrong because you forgot to look at the win-win and loss-loss cases. Even with two coin tosses, as in your example, you'll win on average:
1.5 * 1.5 = 2.25
0.6 * 1.5 = 0.9
1.5 * 0.6 = 0.9
0.6 * 0.6 = 0.36
=> (2.25 + 0.9 + 0.9 + 0.36) / 4 = 1.1025x on average (each outcome has probability 1/4)
It's counter-intuitive because even though you almost always lose, you still win (linear) wealth on average (but not median). The difference is that if you have unlimited tosses available, you don't care about maximizing EV after X tosses. Instead you care about minimizing your risk of losing it all.
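To see the mean/median split over more tosses (exact, no simulation needed; same x1.5 / x0.6 factors as above):

```python
from math import comb

def mean_and_median(n, up=1.5, down=0.6):
    """Wealth multiple after n fair tosses: exact mean, and the median
    outcome (the n/2-wins case, for even n)."""
    mean = sum(comb(n, k) / 2**n * up**k * down**(n - k) for k in range(n + 1))
    median = (up * down) ** (n // 2)
    return mean, median

for n in (2, 10, 50):
    mean, median = mean_and_median(n)
    print(n, round(mean, 4), round(median, 4))
```

The mean compounds at 1.05x per toss while the median outcome shrinks by 0.9x per pair of tosses, which is exactly the mean-vs-median split described above.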