
To my mind, the simulation argument arises from thinking about the universe in terms of computation. The concepts of computing and tractability come from limitations and phenomena we see in the universe: energy and the arrow of time. Yet the arrow of time doesn't appear to be fundamental to the constitution of the universe (the laws of physics work the same in reverse), but only an emergent property of entropy. Like a sentient creator modelled on ourselves, the simulation is simply the most within-reach explanation of creation from a given perspective.

We could look beyond the limitations of thermodynamics and consider that the contents of spacetime may just exist in the same way the contents of pi or the Mandelbrot set exist.

Why must the universe be "created" by computation at all, when computation doesn't even seem to be fundamental, but only an emergent property of the universe itself?




There is a limit to the amount of computation that can be done by a given amount of matter that only new physics could change. Given that limit, at best you would need every particle in our universe just to simulate an identical one. It's more likely that the best possible simulation would have significantly reduced fidelity and/or size with respect to its host universe, thus the inevitability component of the simulation argument falls apart.
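
(Not in the parent comment, but for a rough sense of scale: one commonly cited bound of this kind is Bremermann's limit, c^2/h bit operations per second per kilogram of matter. A back-of-envelope sketch in Python, with round figures that are only illustrative:)

    # Bremermann's limit: an upper bound on bit operations per second per kg of matter.
    c = 2.998e8                   # speed of light, m/s
    h = 6.626e-34                 # Planck constant, J*s
    age_of_universe_s = 4.35e17   # ~13.8 billion years, in seconds

    bremermann = c**2 / h                           # ~1.36e50 bit-ops per second per kg
    ops_for_1kg = bremermann * age_of_universe_s    # what 1 kg could do over cosmic history

    print(f"Bremermann's limit: {bremermann:.2e} bit-ops/s/kg")
    print(f"1 kg over the age of the universe: {ops_for_1kg:.2e} bit-ops")

Even these astronomically large numbers are finite, which is the point: simulating every particle of a universe like ours would consume essentially all of the host's matter.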

What would convince me we are in a simulation is evidence: hard empirical measurements, e.g. proof that quantum randomness is caused by floating-point rounding errors, or that entanglement is caused by lazy expression evaluation. Not arguments from pure logic and extrapolation.

That said, I agree that we have done some amazing things with computers; I just doubt (in the extreme) the simulation argument is valid.


Interesting point! All I have to add is that even if computational complexity is of no interest to the inhabitants of a simulation, very strong assumptions about the nature of the simulating universe still need to be made. This, I think, means you're saying that time is, for all intents and purposes, unbounded in the simulating universe, since the law of physics that turns exponential problems into polynomial ones would itself still need to be simulated and, if I understand correctly, is not necessarily polynomial in the simulating universe.

If I'm interpreting it correctly, it does seem to put some constraints on the nature of the simulating universe, but it does not necessarily disprove its existence.


My main problem with the simulated world argument is complexity. Take the Billiard Ball example[0]: to accurately simulate the universe you can't really get away with approximations. Under close enough scrutiny, discrepancies in the simulation are discernible, and we can scrutinize it at the subatomic level. But to simulate the observable universe, how big would your computer need to be? How slowly would the simulation run relative to the simulator's real time? It just doesn't stack up.

The only way to do it would be to fake it, generating the appearance of a thorough simulation rather than the reality of one. In which case the arguments put forward for wanting to perform a real simulation - to simulate history and so forth - break down, because you'd only be emulating the appearance of it, not simulating it.

The only way out of this I can see is if the universe containing the simulator were vastly more complex than ours, such that in comparison our universe would be trivial to simulate. But then why would they do it? Our universe would be nothing like theirs. In principle this is possible, but it massively reduces the chances that our world is a simulation, because only a subset, and quite possibly a vanishingly small subset, of possible universes would be capable of hosting the simulation. Possibly fewer universes than there are universes like ours. At which point the odds of ours being a simulation collapse.

[0] http://www.anecdote.com/2007/10/the-billiard-ball-example/


Your argument only knocks down the idea that the universe is simulations all the way down, assuming a finite prior probability distribution on the size of the universe (which I would be inclined to agree with). Even if each simulation were three orders of magnitude less efficient than its parent universe, that would still leave a lot of room for simulation.
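
(A hedged aside, not in the parent comment: with made-up round numbers, that geometric decay still leaves enormous capacity for several levels of nesting.)

    # Purely illustrative: host capacity and per-level efficiency loss are assumptions.
    host_capacity = 1e50     # hypothetical ops/s available to the top-level universe
    per_level_loss = 1e-3    # three orders of magnitude per level, as in the comment

    for level in range(6):
        print(f"level {level}: ~{host_capacity * per_level_loss**level:.1e} ops/s")

Even after five levels of nesting, a host with that (assumed) capacity would still have ~1e35 ops/s available; the decay is steep but not immediately fatal.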

Why do you (and others) tend towards arguments suggesting simulations?

To me, the language here is important. Both simulation and computation imply (to me) a tool and a user of the tool. Even a broader interpretation of computation as, umm, the efficient transfer of precise information through spacetime in certain shapes (which is easily demonstrable from our understanding of physics today) requires an observer to extract the computation from the otherwise non-semantic system.

It definitely makes sense to think of the universe as a projection of a higher dimension, or a holograph, or however you want to look at it, but that's a far cry from implying a simulation.

EDIT: hologram -> holograph


Then I misunderstood originally.

If you're not assuming the universe is simulated and is instead "fundamental" (whatever that means), then who are you to put limits on its processing power?

I think you have a level-confusion here - the universe isn't packing computational machinery into itself. Remember that space-time is a part of this universe. If something is computing our universe, it has to hold the representation of space-time too. The computational machinery isn't in here with us.


"There is a limit to the amount of computation that can be done by a given amount of matter that only new physics could change. Given that limit, at best you would need every particle in our universe just to simulate an identical one. It's more likely that the best possible simulation would have significantly reduced fidelity and/or size with respect to its host universe, thus the inevitability component of the simulation argument falls apart."

Or, potentially, one would only need to simulate the minds of any observers in the simulation, not every particle in the simulated universe.


The questions aren't exactly the same. For example, we can imagine characterising our whole universe as a set of physical laws plus some initial conditions, and maybe a pseudo-random number generator. If we're given such a characterisation, we can write a simulation trivially. In fact, a simulation is probably the simplest form of such a characterisation. Hence, all we need is a computer (for example, SK calculus) and this relatively-simple simulation program (so simple that it could be a Boltzmann fluctuation). We don't necessarily need time to run the computation, since that would imply some kind of "outside universe", when it might be the case that computation can exist all by itself. From a pure reductionist point of view, we've removed unnecessary complexity and are left with just computation, which is nice and Platonic.
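
(To make the "laws + initial conditions + pseudo-random number generator" structure concrete, here's a toy sketch; the "laws" below are invented purely for illustration and have nothing to do with real physics.)

    import random

    def laws(state, rng):
        # hypothetical update rule: each cell drifts by a pseudo-random kick
        return [x + rng.choice((-1, 0, 1)) for x in state]

    initial_conditions = [0] * 8    # an arbitrary starting state
    rng = random.Random(42)         # the pseudo-random number generator

    state = initial_conditions
    for step in range(5):           # "time" is just repeated application of the laws
        state = laws(state, rng)
        print(step, state)

The whole characterisation fits in a dozen lines, which is the sense in which a simulation is probably the simplest form of such a characterisation.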

The brain in a vat idea, on the other hand, requires brains, vats and all manner of other support structures (e.g. energy, entropy, etc.) to exist as well as the brain in the vat. In which case, we basically require an "outside universe" which is very similar to the "inside universe", so reductionism says that we're adding complexity with such an argument, so it's unnecessary.


The simulation hypothesis is not really about whether or not there is a master programmer but rather a consequence of a computational universe. The more interesting question is whether we live in a digital (computational) universe or not. Wolfram thinks that if we do then there should be a corresponding digital (computational) physics. He also hypothesizes that a universe's complexity should then arise from simple computational processes.

There's no reason to believe our universe is uncomputable. It may require vast resources, in excess of what our universe possesses (by definition, in some sense), but we have no ability to say that there can exist no other possible universe that may not only possess these resources, but consider our entire universe the moral equivalent of a homework problem running on a toy computer.

Even if our universe is in some sense based on real numbers (in the math sense), there's no apparent reason to believe that arbitrarily accurate simulations couldn't be run of it. (Or, alternatively, our host universe may also have real numbers and be able to use them for simulation purposes.)


I have to disagree, and I cannot really understand what is so appealing to many about the simulation idea. It is an idea that will lead nowhere, because you are unable to distinguish a real universe from a simulated one. In order to do this you would have to know what at least one of those scenarios looks like, and then compare the universe you observe to what you know to be true about a real or simulated universe.

It is common to base such thoughts on analogies between our universe and our computers, i.e. our computers have only finite precision while the universe seems continuous, at least to a very good approximation. Therefore, if we were to detect rounding errors in the universe, we would know that the universe is actually a simulation.

But that is at least naive and most likely not true. How would you know that a real universe does not have finite precision? Why would you assume a simulation must have finite precision? And you have to answer the same questions for any other feature you want to use to make the distinction.


By definition the universe can "simulate" itself. It's true we have no proof that this doesn't go beyond a Turing machine in power (but it's a reasonable assumption that it doesn't).

Of course the goal of physics is to compress the universe into simpler laws. But there’s no inherent reason that it should be possible.


Why does it imply a simulation? Perhaps it can be explained by an anthropic argument: perhaps the kinds of mechanics behind spacetime in which time evolution is computed stepwise, depending on density, are extraordinarily more likely because they somehow allow for much simpler mechanics. (Sets of laws with lower entropy are more likely to be the ones we find ourselves in, especially if we assume that all non-physical sets of laws are realized as well, so the impact of such a difference in entropy on the probabilities could be vast.)

I wasn't proposing that a computer running our universe is based on electronic circuits; rather, I was proposing that computation itself is all that's needed to simulate our universe. Since an 'outside' universe doesn't need to bear any relation to our 'inside' universe, it can be vastly simpler. Maybe it's a rule 110 cellular automaton; maybe it's a 2,3 Turing machine; maybe it's an iota evaluator; it could even be just some abstract computational essence, distinct from any implementation. That's enough to run a simulation.
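
(As a concrete illustration of how little machinery that takes, here is a minimal rule 110 cellular automaton; rule 110 is known to be Turing-complete, so in principle something this simple could host arbitrary computation. The width, step count and starting cell below are arbitrary choices.)

    # Minimal rule 110 cellular automaton on a wrapped row of cells.
    RULE = 110
    WIDTH, STEPS = 64, 32

    def step(cells):
        out = []
        for i in range(len(cells)):
            left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
            pattern = (left << 2) | (centre << 1) | right
            out.append((RULE >> pattern) & 1)   # look up the new cell in rule 110's table
        return out

    cells = [0] * WIDTH
    cells[-1] = 1   # a single live cell as the initial condition
    for _ in range(STEPS):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)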

Now, the question is 'why simulate the universe when you could just simulate a brain'? This is basically the Boltzmann Brain argument. In terms of bits (see Kolmogorov complexity) the universe is simpler to describe (physics + initial conditions) than a whole brain (neurons, synapses, hormones, etc.). Basically, the universe can unfold from a tiny core (the big bang), whereas a single brain can't (it could unfold from a fertilised egg, but that's already huge, and requires all kinds of matter and physics to work which the big bang would have used anyway). In that case, a random program running on an 'outside' computer is much more likely to be a universe than a brain. By the anthropic principle, we could ask if our universe is the simplest which can support life, in terms of the number of bits required to encode the laws of physics as a program.


There's a lot of subjectivity to the "we can simulate the universe" variety of claim stated by the article, but I'd tend to forgive considerations of the scale of the entire universe in that claim. It should be obvious that simulating the trajectory of every last particle is straight up impossible. (Even in principle: you could probably even show outright impossibility on logical grounds with a diagonal argument a la the time hierarchy theorem.)

Instead I'd take it as meaning something like, "any slice of physical phenomena there is to observe in the universe, we can simulate given reasonable resources to do so." So, put an imaginary box around some reasonably isolated plot of reality, pick your precision and your time scale, and you could replicate what happens in that space with a "reasonable" computational resource overhead. That's what elevates the quantum simulation overhead objection to number 1 in my mind.


Aaronson is one of my favorite philosophers.

But, what if there's a law of physics inside of the simulated universe that solves exponential problems in polynomial time? You could pack a more limited universe inside of that one, if you didn't want your inhabitants to have access to it. I think that computational complexity absolutely does determine how useful a simulation is to us, but it's not meta-universal enough to do this kind of philosophy with.


The laws of physics are computable; why would a sufficiently large computer not be capable of simulating a universe?

An interesting point is that limitations on math (i.e., things that would be true regardless of the details of the physical world) would put limitations on any physics simulations - including hypothetical physics simulations done by someone outside of our universe with potentially different physical limitations.

So the point of the article is something like - if phenomenon-X can't be simulated by anyone, no matter how good their computers become; and if our universe is a simulation (which is a possibility), then our universe won't contain phenomenon-X.


> At least with the technology today.

Why would anyone ever consider a theoretical concept purely under the terms of an environment bound by obviously inferior technology?

It's not so impossible to conceive of a perfectly accurate atomic-scale simulation, capable of mirroring significant parts of our own reality. The problem arises only when you attempt to run the model in real time. Only at that point do the physical limits of the parent reality come into play.

It's possible to short-circuit this fact by pre-computing certain results in advance. But then what you'd be doing is presenting the simulation with assumptions.

Since physical constraints prevent us from operating in real time, and force us to inject pre-fabricated assumptions, we can't make perfectly accurate predictions. We couldn't start a simulated universe with a "big bang" event and then fast-forward to observe the exact future, or run the same concept in reverse and witness mysteries of the past, simply because it doesn't seem possible to force a simulation to accurately operate at speeds faster than the parent universe. At least not without distortions, substitutions and abstractions.

Nor does it seem possible to simulate an accurate universe that isn't smaller than the containing universe. The simulation must occupy an amount of space within the parent universe, somehow, and yet still leave at least some space for the simulation's operating framework and infrastructure. Thus, the simulation must be arbitrarily smaller than its outer reality. Since we are hypothesizing, it's not necessary to specify how much smaller, so long as we acknowledge that obvious truth.

But could we simulate a small part of the parent universe with perfect accuracy, inside itself? Yes, I think so. Although assumptions would have to be pre-defined for inputs to bootstrap the simulation.

Could we simulate one living sentient entity, trapped in a tightly-bound, controlled, claustrophobic prison, an environment devoid of the richness of a raw, unpredictable natural environment? Sure, why not?

