I predict the following quantum bug: due to optimizations that avoid simulating every single particle in this universe... the people who evolved in this universe are now perplexed that physics works so intuitively at the large scale, but at the small scale it seems to become bizarre and "not calculated until you look".
I actually really like this idea. I was in Blender and thought about how mipmapping has some surface similarities with the way the world works: detail is only apparent once you can observe it. Maybe fields and waves are cheap to calculate, whereas finite particles are expensive.
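For the analogy's sake, here's a minimal Python sketch of the mipmapping idea (my own toy function, not anything from Blender): the renderer only ever samples the level of detail the viewer can actually resolve.

    import math

    def mip_level(texture_size_px: int, screen_footprint_px: float) -> int:
        """Pick the mip level whose resolution roughly matches what is visible."""
        max_level = int(math.log2(texture_size_px))
        if screen_footprint_px <= 0:
            return max_level                      # not visible: cheapest level
        level = int(math.log2(texture_size_px / screen_footprint_px))
        return max(0, min(level, max_level))

    # A 1024px texture seen at only 64px on screen: a 64x64 mip (level 4) suffices.
    print(mip_level(1024, 64))  # -> 4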
I'm just spewing nonsense, but this is the sort of stuff that makes for a great talk over a beer.
Yes, it is unsound. Quantum mechanics has features which aren't observable (the imaginary components of the wavefunction, the value of position and velocity at the same time, etc). However, this does not mean less computation, because in order to simulate the system you need to represent the whole wavefunction and everything that happens to it whether it's observable or not. Quantum mechanics is elegant in many ways, but low computational load is not one of them.
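A rough back-of-the-envelope illustration of that computational load (my own numbers, not from the comment above): the state vector for n two-level particles has 2^n complex amplitudes, and you have to store and update all of them whether or not anyone ever observes them.

    # 16 bytes per complex128 amplitude; a state of n two-level particles
    # needs 2**n amplitudes whether anything is "observed" or not.
    for n in (10, 30, 50):
        amplitudes = 2 ** n
        gib = amplitudes * 16 / 2 ** 30
        print(f"{n} particles -> {amplitudes:.3e} amplitudes (~{gib:.3g} GiB)")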
Is it weird that the complexity gets me excited? The sheer computational demands of this kind of nature are a thrill to dream about.
Thanks for the notes and the links. I've recently started to work my way through Feynman's lectures, and I'm considering a return to school for physics, particularly because of the excitement it brings me to be able to even just peek beneath the covers. I want to be able to rip them off the whole damned bed and leave the sleepers exposed! But I digress...
That's not what quantum mechanics really does. The wavefunction, which is the core of a quantum calculation, "runs" all the time whether any part of it is observed or not. A given observable property of the wavefunction may not be predictable without looking, but that doesn't make the wavefunction any easier to simulate. And without the wavefunction running, we wouldn't observe the probabilities which we observe in experiments.
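As a concrete, if toy, picture of what "running" means, here's a sketch I put together (my own construction, not the commenter's) of a free Gaussian wavepacket evolved under the Schrödinger equation with a split-step FFT; every amplitude on the grid gets updated at every step, observed or not.

    import numpy as np

    N, L, dt, hbar, m = 1024, 100.0, 0.01, 1.0, 1.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

    psi = np.exp(-x**2 / 4 + 5j * x)                  # Gaussian packet moving right
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))  # normalise

    phase = np.exp(-1j * hbar * k**2 * dt / (2 * m))  # free-particle kinetic step
    for _ in range(500):                              # evolves whether we look or not
        psi = np.fft.ifft(phase * np.fft.fft(psi))

    print("norm is still ~1:", np.sum(np.abs(psi)**2) * (L / N))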
See the last sentence of my comment: it's not consistent with experiments. The success of quantum mechanics as a predictive tool comes from acting as if the wavefunction is always present, no matter what aspect of it is measured.
There may be a whole different theory of physics which can replace quantum mechanics and doesn't have wavefunctions, and has completely different simulation requirements, but at that point you could postulate anything.
> The wavefunction, which is the core of a quantum calculation, "runs" all the time whether any part of it is observed or not.
The wave function doesn't really have to "run" to exist though... a wave doesn't actually have any influence on anything until it is collapsed. If nothing observes the wave, it won't have any effect at all. It will just be there, in some cosmic register, waiting for some dust cloud to inquire about it.
Consider a polygon in a game engine, which started at 0,0 and has a known velocity. You are at tick 4762, and that polygon is represented by a position function, but it doesn't actually "run" until you declare a tick, and do the math.
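In code, that lazy-evaluation picture looks something like this (a hypothetical sketch, not any particular engine's API): with a closed-form position function, tick 4762 costs the same as tick 1.

    def position(x0, y0, vx, vy, tick, dt=1 / 60):
        t = tick * dt
        return (x0 + vx * t, y0 + vy * t)

    print(position(0.0, 0.0, 2.0, 1.0, tick=4762))  # evaluated on demand, O(1)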
I dispute whether a wavefunction can influence things without collapsing, but putting that aside...
Your example of a polygon with a constant velocity is carefully chosen: that equation has an analytical solution, so you can calculate x(t) and "skip" forward in time. This is not possible in general, even in classical mechanics. If it were a system of more than two interacting particles, you wouldn't be able to fast-forward; you would have to calculate all the intermediate timesteps even if you only wanted the last one.
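A minimal sketch of why (toy units, crude Euler integration, a small softening term to keep the toy stable): with three mutually attracting bodies there is no closed form, so reaching the final state means actually computing every intermediate step.

    import numpy as np

    pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # three bodies
    vel = np.array([[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]])
    G, m, dt, eps = 1.0, 1.0, 1e-3, 1e-3                   # eps softens close encounters

    def accelerations(pos):
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            for j in range(len(pos)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * m * r / (r @ r + eps) ** 1.5
        return acc

    # No closed form: every step depends on the previous one, so simulating
    # up to t = 10 means performing all 10,000 steps even if only the last matters.
    for _ in range(10_000):
        vel += accelerations(pos) * dt
        pos += vel * dt

    print(pos)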
If the universe simulator can solve iterative problems in O(1), then I question whether our concept of optimization is meaningful enough to it for this discussion to make sense.
There are some physical systems (mathematically, "Hamiltonians") which have this "stateless" property. However, if the time/energy uncertainty principle is true, simulating time T in O(1) cannot be possible in general unless BQP=PSPACE (the unlikely idea that quantum computers can efficiently solve any problem solvable with a polynomial amount of memory). See this paper: https://arxiv.org/abs/1610.09619
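For a small, time-independent Hamiltonian the "stateless" idea looks like this (a sketch of the general principle, not of the paper's construction): one matrix exponential jumps straight to time T with no intermediate ticks. The catch is that this says nothing about doing it efficiently once the matrix is exponentially large.

    import numpy as np
    from scipy.linalg import expm

    H = np.array([[0.0, 1.0], [1.0, 0.0]])    # a single two-level system
    psi0 = np.array([1.0, 0.0], dtype=complex)

    T = 123.456
    psi_T = expm(-1j * H * T) @ psi0          # state at time T in "one step"
    print(np.abs(psi_T) ** 2)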
There is a running joke about general relativity and quantum mechanics being a demonstration of the simulation hypothesis, i.e., the floating point computations used to simulate our universe break down at very small and very large scales.
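In the spirit of the joke, IEEE-754 doubles really do run out of precision at the extremes:

    print(1e16 + 1 == 1e16)   # True: the +1 falls below the spacing between doubles
    print(0.1 + 0.2 == 0.3)   # False: rounding error even at ordinary scales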
Related to this, Second Life has a concept of time dilation, defined as the ratio of the current simulator frame rate to the ideal frame rate (30fps IIRC). As more avatars, scripts, and physically simulated objects were put in a simulator, the dilation got worse (the ratio dropped below 1).
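That definition is simple enough to write down (using the 30fps figure as recalled above):

    def time_dilation(current_fps, ideal_fps=30.0):
        """1.0 means the sim keeps up; lower values mean in-world time runs slow."""
        return min(current_fps / ideal_fps, 1.0)

    print(time_dilation(30.0))  # 1.0 -> no dilation
    print(time_dilation(15.0))  # 0.5 -> in-world clock at half speed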
I would be interested to hear more about this, if it's something that can be easily communicated to the lay-audience. It sounds like I share a similar feeling to others in this thread in thinking that this "not calculated until you look" idea is attractive.
I think the distinction is that observation can affect a system. In order to look at a particle, you need to bounce a photon off of it, which affects it. The particle/wave still exists and is still "doing calculation"; it's just that measuring the system literally and obviously affects it. Some people attach far too much significance to this.
No, I've heard this is a common misunderstanding. You can bounce light off particles going through ONE SLIT and still destroy the interference pattern. Particles going through the other slit shouldn't have been deflected - yet the interference pattern is totally broken!
What you are suggesting is a hidden-variable theory.
Hmm, I'm maybe missing something in this example. In the two-slit experiment, a single photon can appear to go through two slits simultaneously. You appear to be suggesting that there are multiple particles going through the slits which creates the interference, but my understanding is that one particle interferes with itself when you do not detect which slit it goes through.
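A toy numpy calculation of the distinction (standard two-slit amplitudes, parameters of my own choosing): when the path is unknown the amplitudes add and fringes appear; when which-slit information exists the probabilities add and the fringes vanish.

    import numpy as np

    x = np.linspace(-0.01, 0.01, 1000)       # screen position (m)
    wavelength, d, L = 500e-9, 1e-4, 1.0     # wavelength, slit separation, screen distance

    phi1 = np.exp(1j * 2 * np.pi / wavelength * np.hypot(L, x - d / 2))
    phi2 = np.exp(1j * 2 * np.pi / wavelength * np.hypot(L, x + d / 2))

    unknown_path = np.abs(phi1 + phi2) ** 2              # amplitudes add: fringes
    known_path = np.abs(phi1) ** 2 + np.abs(phi2) ** 2   # probabilities add: flat

    print(unknown_path.min(), unknown_path.max())  # ~0 to ~4: interference fringes
    print(known_path.min(), known_path.max())      # 2.0 everywhere: pattern gone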
You lock into (become entangled with) a random selection from a superposition of states, according to the probability described by the wavefunction at the time. What the deal is with the other states you don't see is a matter of interpretation, e.g., Many Worlds hypothesis.
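The "random selection according to the wavefunction" part is the Born rule, which in a toy sketch is just weighted sampling by |amplitude|^2:

    import numpy as np

    amplitudes = np.array([0.6, 0.8j])            # a two-state superposition
    probs = np.abs(amplitudes) ** 2               # Born rule: [0.36, 0.64]
    outcomes = np.random.choice(len(amplitudes), size=10_000, p=probs)
    print(np.bincount(outcomes) / len(outcomes))  # ~[0.36, 0.64]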
I think part of the issue is people fail to carefully distinguish between physics and interpretations. Interpretations are actually more philosophy than physics. The physics can be experimentally demonstrated to be correct. The interpretations are essentially untestable.
A lot of people make statements which assume the truth of some particular interpretation of quantum physics, without realising that it is just one of many. Many advocates of "quantum mysticism" are adopting the von Neumann-Wigner interpretation ("consciousness causes collapse"), and they often misidentify that as "Copenhagen" even though it isn't. But on the other hand, many of their detractors are committing a similar error, and presuming Copenhagen or many worlds as if it was the actual physics as opposed to just one of many competing philosophical interpretations of it.
Can you explain how "consciousness causes collapse" differs from the Copenhagen interpretation? I'm one of those people who think of it as meaning that, except from the other side: I feel like "consciousness causes collapse" is wrong and count that against the Copenhagen interpretation. But if Copenhagen doesn't necessarily presuppose that then maybe I'm going wrong.
The Copenhagen interpretation, also known as "shut up and calculate", asserts only that the mathematics of quantum mechanics are accurate, that is, predictive of observation.
Assigning metaphysical implications is declared out of bounds. It's a useful compromise.
In college, when my future mother-in-law called to ask if I knew where her daughter was, I absently replied, "Just a sec, let me finger[1] her." It wasn't until the extended silence on the line that it occurred to me that my use of jargon may have been misinterpreted.
[1] For those who don't know, the finger(1) command and protocol would return the status of someone logged in and what they were working on (if they had set their status). Sort of like Google Chat used to do before they ruined it.
I have a similar story about, on request, describing the sorts of day-to-day things I then did to a family member who doesn't have a conception of what "internet companies" do.
It involved databases and the phrase "take a dump".
Could his simulation explanation, while cute, actually have something to it? It combines the whole "we are living in a simulation" idea with quantum mechanics!
The title of the article is misleading: this "virtual universe" simulates only gravity, which is all that was needed by the astrophysicists who wrote it. If it were simulating complete physics as we observe it, this amount of computing power wouldn't suffice to simulate a single cell for a single nanosecond.
It doesn't have any quantum features, and if it did, they would make it require vastly more computing power, not less.
Side note: I recently came across this YouTube video which did a fantastic job showing just the basics of wavefunction collapse, without straying into silly metaphors and cartoons, or getting bogged down by historical narratives:
Highly recommended for anyone trying to grok the relationship between a wave, a particle, and an observation. I find I don't want to hear what quantum physics is "like"; I just want to have things explained to me slowly and completely.
I was surprised to find that the 'stringy' texture of intergalactic structure, which these intense simulations seem to capture so concisely, was also generated, roughly, by a very naive simulation I applied to a few thousand points:
The process that changed homogeneously random points in a cube into those stringy messes didn't even include gravitation. It pulsated the points and nudged random neighbours ever so slightly closer to each other over a few million iterations.
I realise that academic universe simulations like this examine far more subtle features with great insight, but I found it interesting that the basic stringy texture does not require precise forces to self-arrange.
I suspect the texture will look basically similar under any scheme where stuff is attracted more strongly the closer it is. It's a basic feature arising from the "the dense get denser" characteristic of gravity.
are_you_kidding: it's a wonderful image, but the texture of the strings is a bit different - it's more 'furry'. I wouldn't be sure that is due to gravitational attraction - possibly electromagnetics are involved, maybe not - but such nebulas are a bit exceptional. In the large-scale structure of the universe there doesn't seem to be any exception - the whole lot is stringy.
Would you mind elaborating a bit? You say the process moved random neighbors closer to each other, but what do you mean by neighbors? I'm assuming you don't mean particles that are directly adjacent. If the method for selecting which particles to move together is distance-dependent, then it's possible an inverse-square law got baked into it just by virtue of the shape of Euclidean space.
For sure, I can see that an inverse-square effect can get involved unintentionally, but the function didn't apply one intentionally, and it would also have introduced effects not derived from an inverse square. After this early experiment I've tried multiple types of quasi-gravity and quasi-electromagnetism - it is very tricky to arrange a viable 'quasi-force' just by relying on properties of the coordinate system or such.
For this I was developing 'spatial splitting' routines: I was taking the end cells of the splitting routine and just moving their contents a little bit towards their centers. While this was going on, the 'universe' was also expanding, contracting and turning inside out in a never-ending series of big bangs and crunches. It looked pretty. The stringification occurred with a few different 'end cell' testing functions I happened to try out. I just noticed it wasn't difficult to bring about that structure, unlike other structures which are harder to produce, like accretion disks.
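Here is my rough reconstruction of that recipe in Python (a guess at the procedure, not the original code; the pulsation and expansion/contraction are omitted, so the clustering here may come out blobbier than the strings described):

    import numpy as np

    rng = np.random.default_rng(0)
    points = rng.uniform(0.0, 1.0, size=(5000, 3))   # homogeneously random cube

    def nudge(points, cells_per_axis=8, strength=0.05):
        # crude "spatial splitting": bin points into a regular grid of end cells
        idx = np.clip((points * cells_per_axis).astype(int), 0, cells_per_axis - 1)
        keys = (idx[:, 0] * cells_per_axis + idx[:, 1]) * cells_per_axis + idx[:, 2]
        sums = np.zeros((cells_per_axis ** 3, 3))
        counts = np.zeros(cells_per_axis ** 3)
        np.add.at(sums, keys, points)
        np.add.at(counts, keys, 1)
        centres = sums[keys] / counts[keys, None]     # centre of each point's cell
        # move every point slightly toward its cell centre -- no inverse-square law
        return points + strength * (centres - points)

    for step in range(5000):
        points = nudge(points, cells_per_axis=int(rng.choice([4, 8, 16])))

    print(points.min(axis=0), points.max(axis=0))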
Heh, it begins forming those rectangles because each rectangle is a group of particles which have been initialised with the same velocity vectors. (Their positions are initialised randomly within one cube, but their velocities randomly within multiple cuboids.) This configuration is not essential for 'stringification'; it's just a snapshot of experimental/playful development.
The distortion and eventual 'stringification' of the undulating point groups occurs over minutes or hours of runtime.
My recent thinking about simulations is that we might soon inadvertently create life/consciousness while simulating something else. Which leads to interesting ethical considerations - before switching off a simulation, should we check for life? And how do we do that, given that it's going to be completely alien?
If you want to worry about overlooking artificial life, a more interesting scenario is some kind of evolutionary pressure among malware, leading to a computational equivalent of roaches or even crafty raccoons, stealing a living from our internet-of-things trash cans.