> After extensive experimentation, they realize this: 50% of the time, the white player wins and 50% of the time, the black player wins (we'll ignore draws and any first-move advantage for the example).
But this is empirically false. It fails to explain that, for example, some players consistently beat other players, or at least have a much higher than 50% winrate.
If chess really did consistently just have a 50% outcome, then it really would be equivalent to an overcomplicated coinflip. So I don't find this parable at all convincing, and remain a logical positivist.
The aliens can’t see the players; all they know is that one side is playing as “white” and the other as “black”. They don’t have access to information about a particular player from game to game.
That's not how I read the article's introduction. But under that interpretation, it would be meaningless for the aliens to try to understand the game in any more detail until they understood that kind of basic information.
That's the question the article is addressing. I think you're putting too much weight on the rough and sketchily defined example and missing the actual topic.
But the claim lives or dies by its example. The author's whole point is that he's discovered or imagined a situation in which doing science "badly" would be better than doing it "well". If his imaginary situation doesn't actually hold up then his whole argument is nonsense.
Analogies are simply tools to facilitate discussion, they can't themselves say anything definitive about the underlying claim. That's why it's useful to interpret them generously. If I came up with a flawed analogy for time dilation would that show that time dilation is nonsense?
I think a better approach on encountering a flawed analogy is to attempt to improve it, come up with a better one, or address the underlying claim directly; saying "this analogy is unclear to me, the end" isn't going to get you very far in life.
> Analogies are simply tools to facilitate discussion, they can't themselves say anything definitive about the underlying claim. That's why it's useful to interpret them generously. If I came up with a flawed analogy for time dilation would that show that time dilation is nonsense?
That's a bizarre attitude. Thought experiments were a major and important tool in developing the theory of relativity, precisely because they were taken seriously and done rigorously; contradictions weren't waved away as "just a tool to facilitate discussion".
> I think a better approach on encountering a flawed analogy is to attempt to improve it, come up with a better one, or address the underlying claim directly; saying "this analogy is unclear to me, the end" isn't going to get you very far in life.
On the contrary, being willing to call out nonsense has served me very well. There is no "underlying claim" here; the story is what's meant to carry the claim, and if the story doesn't work (and it doesn't) the whole thing falls apart (and it does). It's nonsense, top-to-bottom.
This isn't a formal physical theory, just a heuristic for evaluating lines of investigation. Richard Feynman gave a good explanation of the principle, which he said he found helpful in his work.
I mean, which player gets which color actually is a coin flip (it’s random, or as good as random, and games are repeated so the first-mover advantage doesn’t play a role), so the model is correct in that regard. The aliens modeled that part of reality correctly; they just missed that that’s not what the game is about.
There are so many unreasonable assumptions baked into this parable that make it difficult to take seriously. Any amount of meaningful investigation by the aliens would yield more insights into the fact that the game is in fact not a random coin flip. Even recognizing that white has a slightly higher win rate [0] would demonstrate some asymmetry that suggests the existence of a deeper structure or symmetry-breaking.
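That asymmetry is detectable from nothing but win/loss records. A minimal sketch (the counts below are invented for illustration; published game databases put white's score somewhere in the low-to-mid 50s):

```python
import math

def binomial_two_sided_p(wins: int, games: int) -> float:
    """Two-sided test of the 'fair coin' hypothesis p = 0.5
    (normal approximation to the binomial)."""
    expected = games / 2
    stddev = math.sqrt(games * 0.25)
    z = abs(wins - expected) / stddev
    return math.erfc(z / math.sqrt(2))  # two-sided p-value

# Invented numbers for illustration: white scoring ~52% over 100k games.
p = binomial_two_sided_p(52_000, 100_000)
print(f"p-value for the coin-flip hypothesis: {p:.3g}")
```

Even a 2% edge over that many games rejects the coin-flip model decisively, which is exactly the kind of symmetry-breaking hint the aliens could follow up on.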
The article is not a treatise on how aliens might investigate chess though. That's just a symbolic example to illustrate the idea that you could have a simple model with good predictive power that is somehow less correct than a complicated model with worse predictive power.
But the whole point of the article is to claim that its example is possible, and yet it fails to demonstrate that. We can posit a world in which God declares that logical positivists may not enter heaven, and in such a world logical positivism would be a poor choice, but there's no evidence that that's actually the world we live in.
The piece-based model does predict some things better:
Is one player happier than the other right now?
How long will it take to make the next move?
How long will the game last?
There's lots and lots of observable things other than who wins that the coin flip model does a worse job at. Thus, the example makes just as good an argument that making better predictions is still what makes a model better.
The missing question is "what are you using the model for?"
Are you trying to learn chess? Understand human psychology? Predict the outcomes of chess games? The same model isn't going to be the best for all of them. A super-GM-level Stockfish, for example, is going to suck at predicting beginner chess games.
Whether one model is better than another is all about the purpose it's trying to serve, and how well it does that.
It's not _players_ here that matter, just the color. If I play a grandmaster (or indeed just any moderately competent player) 10 times, I will lose all 10. But assuming we alternate who starts, white will win 5 times and black will win 5 times.
Further, for some random chess tournament, I'm guessing the win ratio for black/white is close to 50/50 (perhaps white is a bit ahead for having first mover advantage - which is stipulated as ignored in the article)
It turns out the result of the game has lots to do with the player, and little to do with the color, which makes the aliens' first model simple, accurate, and completely wrong.
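This split is easy to reproduce in a toy simulation (player names and ratings invented for illustration): skill fully determines the winner and color is assigned by coin flip, yet the color tally comes out near 50/50 while the player tally is wildly skewed.

```python
import random

random.seed(0)

# Hypothetical ratings; in this toy model the higher rating always wins.
players = {"GM": 2700, "club": 1600, "beginner": 900}

color_wins = {"white": 0, "black": 0}
player_wins = {name: 0 for name in players}

for _ in range(10_000):
    a, b = random.sample(list(players), 2)
    white, black = (a, b) if random.random() < 0.5 else (b, a)
    winner = white if players[white] > players[black] else black
    player_wins[winner] += 1
    color_wins["white" if winner == white else "black"] += 1

print(color_wins)   # close to 50/50: color looks like a coin flip
print(player_wins)  # wildly skewed: skill is what actually decides
```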
> It turns out the result of the game has lots to do with the player, and little to do with the color, which makes the aliens' first model simple, accurate, and completely wrong.
And discovering that relationship between players and winning is important and valuable! But trying to understand how the pieces move before you've even understood that this is a game of skill is putting the cart before the horse, and gives you a model that really is less useful than the coin-flip model.
You'd observe that the players treat it as a contest, and care about the results. It's like trying to understand, say, a water pump whose internals you couldn't see: speculating blindly about what kind of gears and motors might be inside would be completely futile. The path to understanding it would be to first understand what it's being used for, and only then might you be able to start reasoning about how it works.
No, you would observe blobs of water and carbon jiggling and perhaps making some noises. You'd have to develop quite a sophisticated model of human behavior to infer that they are treating it as a contest and care about the results. Even then, if they've previously seen humans at a casino, they might reasonably deduce that chess is no more a game of skill than craps is.
> You'd have to develop quite a sophisticated model of human behavior to infer that they are treating it as a contest and care about the results.
Sure. But again, trying to understand chess without having that understanding would be putting the cart before the horse.
> Even then, if they've previously seen humans at a casino, they might reasonably deduce that chess is no more a game of skill than craps is.
That would be a reasonable starting assumption, but they'd eventually notice contradictions: the fact that some players consistently had advantages over others, more experienced players generally beat less experienced players, commentary and analysis of board positions is considered worthwhile...
You are deliberately misinterpreting the analogy to the point of absurdity. The aliens are only observing a black box system that spits out "white wins" or "black wins," they are trying to determine if there's any reason to develop the means of probing the contents of the black box. Regardless of whether you start by modelling the players or the board, you are still assuming the winner is determined by some complicated model with numerous hidden variables which you would have no reason to believe unless you start with the assumption that this is not a random number generator - an assumption for which you have no evidence. The author's thesis is that assuming a complex explanation for a black box's behavior is reasonable because that's the only way that leads to experiments to probe the contents of the box.
> The aliens are only observing a black box system that spits out "white wins" or "black wins," they are trying to determine if there's any reason to develop the means of probing the contents of the black box. Regardless of whether you start by modelling the players or the board, you are still assuming the winner is determined by some complicated model with numerous hidden variables which you would have no reason to believe unless you start with the assumption that this is not a random number generator - an assumption for which you have no evidence.
If you actually did that, you'd start creating complicated hidden variable theories to explain every random coin flip you could see, despite the overwhelming majority of them actually being random coin flips.
> The author's thesis is that assuming a complex explanation for a black box's behavior is reasonable because that's the only way that leads to experiments to probe the contents of the box.
The thesis only holds if it's actually common to have a complex hidden detail inside a black box that is nevertheless somehow completely impossible to infer from the outside. And what I'm saying is that that's actually absurd; in cases where there are meaningful details to be found out, it will be apparent from the outside that there is detail in there, at least the overwhelming majority of the time.
Black box systems are incredibly common. By definition, there isn't anything readily apparent from the outside. You haven't shown that to be absurd, you only showed that if you can see enough of the interior of a black box, it is no longer a black box.
You complain that the author's logic will lead to creating complicated hidden variable theories but that's exactly what the author is advocating. While there will always be some ever more convoluted model to explain results, any given model is testable, whereas assuming there is nothing to model is not testable.
> Black box systems are incredibly common. By definition, there isn't anything readily apparent from the outside.
Citation needed. The author is trying to claim that this kind of extremely opaque black box system is common (at least, common enough that we should take the possibility seriously), but their only argument is a made-up example that falls apart under the slightest scrutiny.
> While there will always be some ever more convoluted model to explain results, any given model is testable, whereas assuming there is nothing to model is not testable.
The claim that Russell's Teapot exists in any given orbit is testable, whereas assuming Russell's Teapot doesn't exist anywhere is not testable.
The author did himself a disservice by attracting chess players to criticize his example, which clearly could have been about literally any game more complex than coin flipping that has roughly 50/50 parity.
He could have referred to the NFL or something, maybe that would have helped. The point was to set up the aliens to have a gross approximation of outcome determination that is both accurate and wrong.
The aliens could have decided to use home vs. away teams as their "coin flip" and stopped investigating once they found out that home teams win a bit more, for example, and the "crazy" alien in the parable could suggest that examining only the first play of the game is a better answer. The "crazy" alien would be told to stop because the first play isn't more predictive, but the fact that the alien is looking at gameplay is more "right".
>The author did himself a disservice by attracting chess players to criticize his example
Heh, I think the author's example is fine if the reader gives it a charitable interpretation. The disservice is from all the overly pedantic people who want to argue about chess instead of the philosophy of science.
Anyone who can muster their willpower for thirty seconds, can make a desperate effort to lift more weight than they usually could. But what if the weight that needs lifting is a truck? Then desperate efforts won't suffice; you'll have to do something out of the ordinary to succeed. You may have to do something that you weren't taught to do in school. Something that others aren't expecting you to do, and might not understand. You may have to go outside your comfortable routine, take on difficulties you don't have an existing mental program for handling, and bypass the System.
[...]
Are they really under the impression that humanity can survive if every single person does everything the ordinary, normal, default way?
This text mainly shows that the author misunderstands chess, probability, or science (likely all at the same time), or just wants to critique for critique's sake.
Of course it's good to search for alternative models/theories/explanations, but unless you can provide something with more predictive power than the existing/widely accepted ones, it's a good idea to hold the critique.
EDIT: To clarify: by less predictive power I mean that it neither explains new effects nor predicts unknown ones, nor explains known phenomena or recovers existing theories as special cases. I didn't mean theories such as, for example, string theory, which has little predictive power at the moment but has current theories as special cases and holds the promise of explaining things that current theories cannot. /EDIT
Physicists are “stuck” with existing theories not because they like them, but because those theories work so well that it's hard to invent something that even works equally well (not to mention something that works better). There are a lot of smart people who are brave in their thinking and propose wild explanations. Yet, in most cases those explanations don't stand up to the test of time.
Einstein couldn’t deal with the randomness of Quantum Mechanics and put forward a hidden-variable theory, and it was (and still is) seriously considered, but he (and many others) weren’t able to put forward a better-working theory. We stick to QM despite its weirdness/randomness because it works extremely well, not because we like it or think things must be this way and require no further study/“it is the most efficient and parsimonious possible model“.
I think the criticism of Occam's razor is valid. Occam's razor comes at a cost, and denying that would be wrong. It brings many advantages, though, which outweigh the disadvantages.
Sure, it is valid, but in reality most scientists neither religiously stick to Ockham's razor nor oppose alternative theories that give correct predictions.
If scientists really stuck hard to Ockham's razor, Loop Quantum Gravity, String Theory and many other theories wouldn't have been intensively studied for the past 50 years. Nor would the interpretations of quantum mechanics have been developed and studied (which, by the way, has yielded results in Quantum Information Theory).
It’s just that constructing something correct and new IS really hard.
"Physicists are “stuck” with existing theories not because they like them, but because they work so well it’s hard to invent something that even works equally well"
Yes, but a hundred years ago we were "stuck" with another worldview that was explaining everything fine, and I assume there was big resistance from the establishment to adopting new ideas. But then the old scientists died, and the resistance got weaker. So we might look back at today in 100 years and see a similar situation. No one says that the ideas of 100 years ago were all wrong, just not as true as current ones.
One hundred years ago there were many things that did not have any real explanation -- classical physics was not "stuck" and was not "explaining everything fine". Our understanding of something as common as metals and insulators depends heavily on quantum mechanics. That's not even counting things like spectral lines, the stability of atoms, etc, etc that were unexplained in 19th century physics, and explicitly known at the time to be at odds with classical physics.
While worldviews can be overturned and paradigms can be shifted, it is (significantly) harder than it was before. We simply know more now, and have a better understanding of our limits. So whatever new framework has to supersede our current framework has more ground to cover than it did even in the recent past. This is exacerbated by the fact that in physics almost everything outside the early universe and black holes (which aren't easily accessible experimentally) seems to conform to our current framework: there is both more to fit in and less data to work with.
No, a touch over 100 years ago Einstein came up with a new set of equations that more precisely describes the movement of objects. Newtonian physics is still valid and still taught at schools, since it adequately explains the movement of objects at an approximate level that is good enough for most people most of the time. But when dealing with astronomical scales, Newtonian physics starts to break down.
Likewise quantum mechanics doesn’t replace General Relativity. It compliments it.
I think you misunderstand. The work that Einstein did on quantum mechanics was indeed "complimentary", i.e. provided for free as a courtesy. (Although it could be argued he actually got paid for it later with Nobel prize.)
> But then old scientist died, and resistance got weaker.
Hum... The old scientists that (very vocally on that case) resisted change were the same that uncovered the problems with the old models and laid out the first theories on how to fix those problems. Those things are way more complex than simple quotes and labels can communicate.
New models got adopted when after a lot of work people created some that worked better. Not a moment before. Those better models didn't get resistance from the established physicists.
> unless you can provide something with better with more predictive power than the existing/widely accepted ones, it’s a good idea to hold the critique.
Part of the author's premise is that a more correct theory could have less predictive power out of the gate and might not be pursued as a result.
Why wouldn’t it be pursued? Is string theory not pursued by “mainstream science” because (at least for now) it has less predictive power than standard model?
I don't think it's entirely right to discard comparatively less developed theories based on their relatively weaker predictive power. That does sound like how one reaches a local maximum, to use the term from the article.
Think of it in a different way: most of the truly revolutionary theories, those that changed how we see the world, were relatively simple. They were generally simple enough that one person on the fringe could develop them to a point where they shone so brightly as to be next to irrefutable. Things like how the earth might be round, and circle the sun.
We can't expect the same to be true for more advanced fields, we can't say "yeah, this "new/underdeveloped" idea does seem reasonable, but it does not solve everything as well as our existing theory that we've been iterating on for decades, so let's not waste time on that".
> Think of it in a different way: most of the truly revolutionary theories, those that changed how we see the world, were relatively simple. They were generally simple enough that one person on the fringe could develop them to a point where they shone so brightly as to be next to irrefutable. Things like how the earth might be round, and circle the sun.
Interesting trivia: heliocentrism didn't shine "brightly as to be next to irrefutable" - it was a fringe idea that could not be confirmed through observation at the time, required some pretty wild (for the time) assumptions - such as stars being very, very far away, to explain why there's no visible parallax from Earth's movements - and went against the existing understanding of physics in general (for example, the Earth is very big and heavy and bulky, so it's not obvious how it could be moving in circles very fast). Also, IIRC, the predictions made by the heliocentric model were less accurate than the geocentric ones.
It took astronomical observations with early telescopes to provide data points favoring a mixed geo/heliocentric model, and then further observations, work of Kepler and Newton's theory of gravity for heliocentric model to finally start making sense.
This does serve as an example backing TFA's thesis: some accepted theories, like (then) geocentric model, may be just local maxima - theoretical dead ends. A potential better theory will initially look bad in comparison, it needs work to develop past the accepted one.
Right on, thanks for making this point - "Also, IIRC, the predictions made by heliocentric model were less accurate than geocentric ones" - that was my recollection also: the epicycles had greater predictive power and accuracy.
> Also, IIRC, the predictions made by heliocentric model were less accurate than geocentric ones
> work of Kepler
Yes. One problem of the early Copernican heliocentric model was that it stated that the orbits of planets around the sun were perfect circles. It wasn't until Kepler showed that a) the orbits were actually elliptical and b) the planets sped up when they approached the sun and slowed down as they moved away that the actual movements of the planets could be more accurately predicted. Until then, the older earth-centric models with all of the epicycles were 'better', even though totally unrelated to reality.
One important point is that ground observations include Mars moving backward fairly often. It took a while to formulate a model of movement that satisfied that constraint better than epicycles did.
Yes. This is one example of the fact that the Earth 'laps' the outer planets because it has a faster orbit. So the outer planets sometimes appear to go backward with respect to a fixed point such as a star as the Earth overtakes them. They can also move up and down with respect to that fixed point because their orbits are tilted somewhat compared with the Earth's orbital plane around the sun. The overall effect is that the outer planets sometimes trace out a little spiral, and the further they are from the sun, the more these spirals dominate their overall motion (because we lap them more often).
Obviously it's different for Mercury and Venus, whose orbits are inside ours. They instead switch between being visible in the morning or the evening.
All very complicated for those ancient astronomers!
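The lapping geometry described above is easy to check with a toy model: two circular, coplanar orbits with rough values for the orbital radii and periods, tracking the direction Mars appears to move against the fixed stars.

```python
import math

# Toy model: circular, coplanar orbits (radii in AU, periods in years).
R_EARTH, T_EARTH = 1.000, 1.000
R_MARS, T_MARS = 1.524, 1.881

def apparent_longitude(t: float) -> float:
    """Geocentric longitude of Mars against the fixed stars at time t (years)."""
    ex = R_EARTH * math.cos(2 * math.pi * t / T_EARTH)
    ey = R_EARTH * math.sin(2 * math.pi * t / T_EARTH)
    mx = R_MARS * math.cos(2 * math.pi * t / T_MARS)
    my = R_MARS * math.sin(2 * math.pi * t / T_MARS)
    return math.atan2(my - ey, mx - ex)

# Sample daily over three years and count the days Mars appears to move backward.
dt = 1 / 365
lons = [apparent_longitude(n * dt) for n in range(int(3 / dt))]
retrograde_days = sum(
    1 for prev, cur in zip(lons, lons[1:])
    if math.sin(cur - prev) < 0  # sign of the daily motion, safe across the +/-pi seam
)
print(retrograde_days)  # a sizeable minority of days: retrograde loops near each opposition
```

Even this crude sketch produces the retrograde loops that epicycles were invented to explain.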
I still haven't understood why people keep saying that there is randomness in QM.
The so-called "wave function collapse" isn't really part of QM; it's duct tape that we have applied to stick QM together with "classical physics" and our pre-existing assumptions about human consciousness. I don't think we should consider it "real" or "true".
Without the wave function collapse, there is no randomness in QM.
You have it wrong: QM is the only physical theory that has randomness as an inherent part, as compared to, e.g., thermodynamics, where randomness is due to lack of information. It is proven (see the Bell inequalities) that the randomness in QM isn't due to lack of information.
You can't just wave away the collapse mechanism; what do you make of the double slit experiment? Isn't the target "real" enough?
What you are saying is that whenever we, the observer, leave the QM model we use the collapse as a computational trapdoor function? Sounds like an interesting point of view.
But would that not also imply that we should be able to measure the quantum world with quantum devices? Say we have a quantum property that is extremely close to p=0.5. If we could invent a device to replicate that property perfectly and measure it repeatedly, we could then estimate ever more accurate boundaries for the "true" value of p, no?
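That's essentially repeated Bernoulli sampling. A sketch of how the uncertainty around p shrinks with the number of measurements, using the plain normal-approximation (Wald) interval; the specific p value is made up:

```python
import math

def wald_interval(successes: int, trials: int, z: float = 1.96):
    """~95% normal-approximation interval for an unknown probability p."""
    p_hat = successes / trials
    half = z * math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat - half, p_hat + half

# Hypothetical 'true' value barely above 0.5; assume, idealizing, that the
# observed frequency lands exactly on it. How many trials until the
# interval excludes 0.5?
TRUE_P = 0.5000003
for n in (10**4, 10**8, 10**14):
    lo, hi = wald_interval(round(n * TRUE_P), n)
    print(f"n={n}: p in [{lo:.8f}, {hi:.8f}]")
```

The interval width falls off as 1/sqrt(n), so resolving a deviation of 3e-7 from 0.5 takes on the order of 10^13-10^14 trials: possible in principle, brutal in practice.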
> QM is the only physical theory that has randomness as an inherent part
It's either inherent randomness or just a deep hole in the whole thing (similar to the alien chess thought experiment problem). Personally I choose to believe that the theory is just incomplete because nobody can even define what a "measurement" really is, meaning in which cases what we do is a "measurement" and in which cases it is not a "measurement". I also think that this is what people like Feynman refer to when they say things like "nobody understands QM", it's actually "nobody understands the wave function collapse", the rest is just maths.
Turing machine as an abstract concept is known to not hold a solution to the halting problem. There's no question about that, the only question is how the abstract concept assumptions pertain to the real world (infinite tape? maybe a problem, maybe not?; is the execution speed bounded? or maybe we can somehow count to infinity by exponentially increasing the speed? things like that).
On the other hand nobody has any clue what the quantum measurement / wave function collapse actually is. There are theories/interpretations but no truly satisfying answers in the same way as for example Newton's equations were a satisfying answer to the elliptical movement of planets, even though we later found out in the 20th century that F ~ 1/r^2 was actually an approximation.
We simply don't know, and we have no idea when shall we know.
It is, but that's in interpretations that also don't have the concept of "wave function collapse" in them. WFC is a feature of interpretations that consider measurement as something ontologically special.
If you don't consider measurement ontologically special, then you need to somehow derive a physically meaningful Born rule without reference to measurement, which so far is something that AFAIK has only been accomplished in theories with large amounts of nonlocality and extra assumptions. The idea that people cling to the obviously false projection postulate out of obstinance is really strange to me, there just aren't very good alternatives available (at least not with the math fully worked out).
I would love to consider measurement something ontologically special, but it's not possible because there is no well-defined definition for what a measurement is.
The definitions I have found always invoke the presence of a "classical system"/"observer".
But that just kicks the can down the road, because there is no well-defined definition of a "classical system" either.
Sure. Everyone agrees Copenhagen is just kicking the can down the road. I'm just saying, let's not act like we have a ton of viable theories to fill out the rest of the road; in the meanwhile, we still have to perform measurements and make predictions, and the projection postulate is handy for that.
(It would help tremendously if we ever measured quantum states that weren't "collapsed", but as we've never done this so far it makes most of the stochastic collapse stuff hard to justify, even if it seems intuitively like the right approach).
My layman feeling wrt. QM, and Copenhagen school in particular, is that we're searching for too computationally simple mental models. Most other areas of physics - like GR, SR, thermodynamics - can get away with aggregating matter into points, perfect spheres, etc. because they're working in macro scale, but QM is trying to deal with the smallest bits of our reality. Now the boundary between QM and "classical physics" is one where your quantum system will interact with 10^{double digit} amount of other quantum-relevant bits. I have a feeling that searching for what constitutes "a measurement" in such scenario is missing the point, and even talking about the macro system being entangled with the test system is pretty much skipping over all the interesting bits.
In the Everett/Many Worlds interpretation the appearance of randomness can be explained as an emergent phenomenon resulting from not being able to predict which part of the wave function we will end up in before running an experiment.
The Everett/Many Worlds interpretation cannot reproduce the predictions of quantum mechanics without extra assumptions (e.g. the Born rule) that don't have any physical basis within the context of MWI.
Yes, you are right to point this out. There are some important details that are still being debated. Personally my impression is that the debate has advanced enough to the point where MWI can’t be outright dismissed based on this argument. There are multiple plausible explanations and the remaining difficulties have more to do with philosophy than physics.
Edit: To give one example of an approach that I think is promising: We start by describing the observer and environment through a density matrix (a probability distribution over possible wave functions) and introduce an interaction with a quantum system (e.g. a spin). Given a reasonable interaction, you can show that the entanglement in the combined state (observer, environment and spin) leads to the system approaching a state that is a probability distribution of entangled states where each probability corresponds to the Born rule. Interestingly in this case the probabilities emerge from our lack of knowledge about the microstate of the observer/environment, so it’s actually thermodynamic uncertainty.
I'm not dismissing MWI, I'm just saying that current formulations either don't reproduce quantum mechanics or don't really address the existence of randomness within quantum mechanics.
While I intuitively like the statistical approach you mention, under it the Born rule holds only approximately, so it should be theoretically possible to observe entangled states, which we have never done. That is, it produces different predictions from the Copenhagen interpretation of quantum mechanics, which means it's not strictly a different interpretation but its own falsifiable theory. Like I said in another comment, if we ever do observe entangled states directly, people will jump on board one of these alternate explanations like lightning. But until we do, the question of why we never ever observe anything that doesn't look like collapse still needs mathematical justification.
I am not aware of any interpretation that has the Born rule without assuming something equivalent to it. For example de Broglie–Bohm theory requires you to assume the original distribution of the particles follows the Born rule. QBists just postulate it, consistent histories just postulates it etc.
I am not particularly a many-world proponent, but I do not think it is fair to level this accusation as an issue for many worlds without bringing up that every other interpretation has the same "flaw".
AFAIK it can in fact be derived through Gleason's Theorem under the assumption of noncontextuality, so I don't think it's fair to say that nobody can derive the Born rule without assuming it (many people have issues with noncontextuality but that's very much a philosophical thing). The thing you have to demonstrate is that a probability measure actually connects to physical observables in some way, and this is the part that is difficult (and as far as I can tell MWI does nothing to resolve this conundrum).
Gleason's theorem also makes a big assumption when you require that the measurement outcomes are associated with POVM elements (or projection operators if you don't like POVMs). I lumped that in with "assuming something equivalent to it" since Gleason's theorem (at least by my understanding) is exactly the statement that assuming non-contextuality+POVMs/POMs is equivalent to assuming Born's rule.
Although it's really cool, I don't think Gleason helps you tie any particular interpretation to the Born rule, since you still have to make a jump to tie your measurement outcome to a POM/POVM element.
As far as your last sentence goes, this is sort of what I was trying to argue in my comment above. The "part that is difficult" that you identify as being unresolved by MWI is also completely unresolved by pilot wave theory, or qbism or consistent histories or any other interpretation (as far as I am aware).
I think we're in agreement there (though IMO it's highly nonobvious that noncontextuality+POVMs automatically get you the Born rule, so I don't think it's "cheating" to assume that--obviously any set of axioms that let you derive Born will have such a property!). I was mostly saying, I don't think MWI helps us understand where the probabilities come from any more than any other interpretation--you need something more. And if you can't identify where the probabilities come from, then saying your theory is "deterministic" rings fairly hollow to me.
What is usually misunderstood about QM is that QM isn't a model of the world.
QM is a model of "what we can observe from the world" based on "what we can observe from the world".
QM is a model of "accessible" information.
What laymen usually want is to understand how the world evolves.
What QM physicists tell them is that some information is inaccessible, but that, using the accessible information we do have, we know how to predict all the accessible information (albeit stochastically).
The typical example to help computer scientists to understand is the seed of a pseudo-random generator in an online casino. The players will never be able to access the seed, therefore the best they can do is make decision based on the value of the generated random numbers they observe and their probabilities.
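A minimal sketch of that seed analogy, in Python (the seed value and the roulette-style payout range are invented for illustration):

```python
import random

# Hypothetical setup: the casino seeds a PRNG once; players never see the seed.
HIDDEN_SEED = 123456789  # the inaccessible information

house_rng = random.Random(HIDDEN_SEED)

def spin() -> int:
    """One observable outcome (a roulette-style number 0-36)."""
    return house_rng.randint(0, 36)

# All a player can do is model the *distribution* of what they observe.
observed = [spin() for _ in range(10_000)]
freq = {n: observed.count(n) / len(observed) for n in range(37)}

# Every number shows up with frequency close to 1/37, even though the whole
# sequence is fully determined by HIDDEN_SEED.
assert all(abs(f - 1 / 37) < 0.01 for f in freq.values())
```

From the players' side, `freq` is the complete accessible theory; the deterministic rule generating the sequence is simply out of reach.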
Bell inequalities are a consequence of this modeling. They are a refurbishing of Boole's inequalities, a theorem about probability which only binds those who use probability.
The usual fallacy that follows is to say that QM is a non-local theory: classical local theories can't violate Bell inequalities, Bell inequality violations are observed in the real world, therefore the world is not local...
This is all hand-waving. The reality is no one knows how any of this works, and there are more than twenty different interpretations.
If you can prove conclusively there's no randomness and no collapse and no need for either, a Nobel Prize awaits. If you can't - most likely - one opinion is as (in)valid as any other for now.
No, this is historically incorrect. Quantum Foundations was strongly disfavoured as a research pastime for decades - not because Shut Up and Calculate gets the right answer (it doesn't for many problems, including those that involve gravity) but because the risks of failure and obscurity were too high, there were few academic champions, it was seen as academically fringe, and the potential rewards for bolting something new onto the Standard Model without fundamentally changing its assumptions were much higher.
There's also been the - likely incorrect - belief that different models are too hard to distinguish experimentally.
So there's been a process of continuous refinement of existing theories which are known to be incomplete, and no concerted and sustained attempt to solve foundational philosophical problems - which is the level that Einstein, Newton, and other pioneers operated at.
I am wondering why you think the belief that different models are too hard to distinguish experimentally is incorrect - after all, achieving such a distinction would seem to be highly motivating. To take a historical example, the publication of Bell's inequality motivated a successful program leading to its experimental verification; do you have in mind some potential experiment to distinguish between models that is being wrongly ignored on the grounds that it is too hard to pursue?
An alternative explanation for quantum foundations being in limbo is that it is extremely difficult to come up with alternatives that offer a possibility of verification.
Update: writing this reminded me of [1], in which a simple experiment by Shahriar Afshar, that arguably challenged one tenet of the Copenhagen interpretation, provoked a disturbingly over-the-top response, which supports your position on how work on quantum fundamentals is opposed (though, personally, I doubt it succeeds in challenging the Copenhagen interpretation. Interestingly, the opponents of Afshar's interpretation do not all agree on why they think it is wrong.)
Going through the piece, he builds upon things that make sense but then just completely goes off the rails at the end. I don’t even know what the point of the article is.
It comes across as science/intellectual shaming.
> Possibly illiterate dilettantes on the internet might see and bring to attention legitimate theoretical flaws.
And if the flaws are legitimate, no matter how much work a scientist has done, then he has to take these into account now. Science is incremental and self-correcting. Is it a flaw if someone points out something wrong?
> All the years you spend in graduate school counting angels on pinheads in your respective theoretical framework is mostly a waste of time.
What is the point of saying this? There are dead ends to science and some of these dead ends may seem like wastes of time but it’s all about incremental knowledge and discoveries. If something revolutionary comes along that disproves years of work, then that for science is a success.
> Most of the scientific work is not meaningful outside of the theoretical framework that gave rise to it.
What does this mean? Isn’t this “theoretical framework” based on our observations? Sure there is this chance that it might be wrong but if most of our data and observations show that this is true, then it is until proven otherwise.
If someone tells you that the earth might be flat and what we can see with our eyes and what we currently know might not be true, is that sufficient evidence to make you think, “oh the earth might be flat after all”?
I see where your criticism is coming from, but I think you're mostly nitpicking at particular points where the piece's analogy breaks down, rather than addressing the general nature of the argument.
As for "isn't this theoretical framework based on our observations"? Well, yes and no; as Einstein famously once noted, "it's the theory that decides what can be observed." Obviously the reverse is also partly true and the whole thing is an iterative process, but in a sense Luke's article seems to be about the same thing as Einstein's quote: that it is fruitful to challenge the framework every now and then, rather than accept it as religion.
Specifics aside, I don't think that's an unreasonable mindset to have in science. (or other fields for that matter - a similar mantra exists in medicine: "always dare to challenge the existing diagnosis" -- exactly because people rarely think to do it)
My gut-level reaction (and this guy's clearly advocating for the merit of gut-level reactions! :D ) is this:
This is about priming people to buy into some propaganda system, for instance one constructed for political purposes, and reject their existing assumptions. The weird swerve at the end is the payload.
The reason to do this is, if you're knowingly maintaining a propaganda system that works towards some known purpose, and you want people to fall into it more readily. This doesn't discount the validity of the initial concepts: local maxima are real, and our ability to thoroughly understand things is limited.
But we remain functional through pretending we can indeed understand things, and there's usefulness in that. Generally, axioms are brought into question when specific details don't line up with our theory.
In a vacuum, 'abandon all theories!' is a fine position to have. In reality, 'abandon all theories!' is a set-up for getting fed a pile of information that benefits somebody else, because you become a willingly useful idiot ready to be programmed by anyone.
Running into somebody like this, I find the initial 'abandon all theories!' attitude to be refreshing. I even agree with some of it. If the NEXT THING he tells me is all about how Pepe's face is on the Moon outlined in craters, I've learned to be extremely suspicious of what the guy is really selling. Because even if he is himself sincere… there is somebody up the chain pursuing 'Nigerian spam' techniques and finding out by this who's credulous enough to be used for their own purposes.
And that's what I get from the 'going off the rails at the end'. The point of the article is to hook the truly credulous, and the writer may or may not be in on that.
> In Against Method, Paul Feyerabend, in what an unreflective mind might misinterpret as a "troll," says that it is important for science that people have biases, financial interests, interfering religious and political doctrines and the like in science.
Dying, and new generations coming along, is also in the interest of science. Quite ironically, the development of eternal life would kill scientific progress, which might be the great filter after all.
That could prove really difficult without the death of those not in need of any advancement. Power and privilege by their very nature tend to find ways of avoiding being watered down.
Any examples of what is supposed to work this way? It sounds like a rant against quantum measurement by someone who thinks everyone just forgot to consider hidden variable theories.
I saw it more as a critique of any area of science where the development of a statistical model of the behaviour of a system is treated as though it yielded any meaningful understanding of the system.
Quantum Mechanics, sure, but you could also apply this to Climate Science if you wanted to start a fight...
The author argues that the 50-50 model of chess is less true and a dead end.
If this is a matter of which model to use then fair enough, you may wish to use a more exact model than the 50-50 model; but the author is not arguing about which model to use, they are arguing about which model to pursue building upon.
Building upon any model of interest is not a dead end (because it is being built upon!). Even if the underlying principles of the model of interest need to be changed to accomplish something else in the future, it is still useful to develop the model. Approximate truths can also have deep meaning and are sometimes even more generalizable to multiple areas of reality than exact answers. Approximations are no less true than trying to be exact, they are just saying a different thing. Neither is inferior to the other, or at least if exactness really is better than approximation, this is not a good argument for it.
Another commenter pointed out that some models need to be thrown out in order to make room for the new (e.g. the earth-centric view of the solar system had to go at some point), and I think that's valid and hard to argue against; and it seems to align with what the author is saying. But the work done upon the old models was certainly not worth nothing. For one thing, the work done upon the old models is what made the new work possible. I think perhaps the author's issue is that they do not acknowledge that the 50-50 model of chess has value.
p.s. to the author if they read the comments: I actually enjoyed reading your thoughts even if I disagree with them.
Tbh, I didn't think he gave that message. If anything, he even identifies the 50-50 model as a local "maximum" (i.e. it has value). To me the main point was to have a mindset that is willing to challenge the possibility of whether such an optimum could actually simply be a local one.
I suspect the metaphor is inspired by Feynman's one about gods playing chess [1]. As it's almost always the case, Feynman manages to immediately inspire and illuminate.
On the other hand, the present article seems to me to get some important things wrong. For one, it's not the case that overall white and black win 50% of the time each. More importantly though, if somebody is interested in investigating chess, they will supposedly understand that it's some activity that other (somewhat) intelligent beings are undertaking.
Even if everything seems like a blur, an alien scientist should not be content with dismissing it as uninteresting randomness. More so, if somebody actually starts investigating seriously, they will immediately obtain useful results about the game, results that should almost certainly provide useful predictions about the game outcome (at least better than "it's all random").
I might find, however, some agreement with the spirit of the article. It seems to me that, even though the author doesn't articulate it properly, the idea is that scientists should pursue more fringe theories even if immediate confirmation is lacking. I think there's quite a bit of value in this, as long as your predictions are always checked against reality.
>For one, it's not the case that overall white and black win 50% of the time each.
He addresses that (first-move advantage from white moving first) in the parenthesis after that same sentence:
"After extensive experimentation, they realize this: 50% of the time, the white player wins and 50% of the time, the black player wins (we'll ignore draws and any first-move advantage for the example). "
Acknowledging is not addressing, and in this case the statement amounts to "they realize this: ... (they would not realize this)." It is either too flawed to make the point intended, or illustrates a flaw in the underlying argument.
>50% of the time, the white player wins... we'll ignore... any first-move advantage for the example
Really hard to parse the meaning. White wins 50% of the time if we ignore the occasions where white wins because it has the first move? But white always has the first move.
When I look at a master database, White wins about 33% of the time, black wins about 25% and the rest of the games are draws. The aliens would have to be pretty terrible at statistics to ignore draws and first mover advantage when they are clearly extremely important.
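Using the rough figures above, and the standard scoring convention (win = 1, draw = 1/2), white's expected score comes out noticeably above 50%:

```python
# Approximate master-database frequencies quoted above.
white_wins, black_wins, draws = 0.33, 0.25, 0.42

# Standard chess scoring: a win is worth 1 point, a draw 1/2 point each.
white_score = white_wins + draws / 2  # 0.54
black_score = black_wins + draws / 2  # 0.46

print(f"white: {white_score:.2f}, black: {black_score:.2f}")
```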
It is difficult to relate to an analogy which is so divergent from the basic facts.
That was one example of how the author proposes an extremely inadequate analogy. The fact that you can propose an improvement so easily demonstrates this.
It's also very clearly a problem that it ignores player skill, time control and all sorts of other factors that would be basic to any kind of model claiming to have power in predicting chess outcomes.
> It's also very clearly a problem that it ignores player skill, time control and all sorts of other factors that would be basic to any kind of model claiming to have power in predicting chess outcomes.
This was exactly the point the author was trying to convey. The simplest model of chess with the fewest assumptions is that it is just a random number generator with no dependence on any factors, but this is a bad model, and the "fringe" alien who assumes there is some deeper, hidden structure to the game is correct to do so.
If it helps, consider the alien's sport of glorfball. We've never seen a game played, but we know that approximately 50% of the time the Aberdorfs win, and approximately 50% of the time the Gloophbahorps win. Based on this data, is it reasonable to conclude that glorfball is nothing but a game of chance?
It's reasonable to model glorfball as a game of chance for now, and aim to develop a more detailed understanding either by gathering more data (Can we correlate the occasions when the Aberdorfs win and the occasions when the Gloophbahorps win with anything else? If we can't observe glorfball games directly, we look for ways to find out about them indirectly, or outside conditions that might possibly affect glorfball). It's not reasonable to posit that glorfball results must be driven by cross-referencing the Da Vinci Code against a message written on the back of the Declaration of Independence, which seems to be what the article is advocating for.
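A toy simulation of that point, with all the numbers invented: aggregate outcomes can look exactly like a fair coin while a single extra observable makes the game strongly predictable:

```python
import random

random.seed(0)

def play(home_is_aberdorfs: bool) -> str:
    # Invented rule: the home team wins 80% of the time.
    p = 0.8 if home_is_aberdorfs else 0.2
    return "Aberdorfs" if random.random() < p else "Gloophbahorps"

# Each team hosts roughly half the games.
hosts = [random.random() < 0.5 for _ in range(20_000)]
games = [(h, play(h)) for h in hosts]

overall = sum(w == "Aberdorfs" for _, w in games) / len(games)
home = [w for h, w in games if h]
home_rate = sum(w == "Aberdorfs" for w in home) / len(home)

print(f"overall Aberdorfs rate: {overall:.2f}")  # ~0.50: looks like a coin flip
print(f"rate when hosting:      {home_rate:.2f}")  # ~0.80: clearly structured
```

Until the aliens can observe something like the "hosting" covariate, the coin-flip model is the best they can honestly do; the point is only that they shouldn't stop looking for such covariates.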
That's not the simplest model of chess with the fewest assumptions. The simplest model of chess with the fewest assumptions is that white always wins. It's also a terrible model but is significantly better than what the author proposes. It seems very strange that you seem compelled to defend this point.
The author proposes a deliberately terrible model presumably in the hope that he is illuminating a wider point. Sadly I don't think he's doing that.
> I might find, however, some agreement with the spirit of the article. It seems to me that, even though the author doesn't articulate it properly, the idea is that scientists should pursue more fringe theories even if immediate confirmation is lacking. I think there's quite a bit of value in this, as long as your predictions are always checked against reality.
I have seen notable academics say the same thing. The problem is that the incentives set up in modern academia are strongly against doing anything outside of the mainstream.
Effectively what you're saying is that you cannot possibly entertain the larger point of the article, because you find faults with the specific analogy he chose to illustrate it. Right?
Fair enough. His fault for trying to argue using an analogy I guess. It never works.
That's precisely the opposite of what I was saying. I tend to agree with what (I understood as) the larger point of the article, I just see the chess analogy as muddled and inaccurate.
My impression is that the author is taking aim at a rather simple, outdated and largely superseded philosophical notion of what science is. While there are probably cases of social and institutional pressure being applied to discourage fringe theories, there seems to be a lively opposition to each of string theory, dark matter and dark energy.
> a rather simple, outdated and largely superseded philosophical notion of what science is
To me the idea of treating the chess game as a random process somehow felt like an anti-realist position [1]. I'm not sure how this kind of thinking is currently perceived by philosophers of science, it's possible it's an outdated view.
That's an interesting way of looking at it, but if that is the intent of the parable, it is being rather equivocal about it. As I pointed out in another reply, unless the amount of information the aliens can possibly get about chess games is severely limited (excluding, for example, information that would allow them to see that games between specific pairs of players are rarely even-odds) then chess does not actually look like a random process - but if they are limited to that extent, then they are not going to determine that it is a game of rules, no matter what their philosophy and no matter how much they suspect that it is.
There is nothing unrealistic about random processes, and if the aliens take chess to be one, they are simply mistaken on account of their inability to get sufficient information to falsify this view.
There is a pretty well-known historical example in the opposition by Ernst Mach and the logical positivists to Boltzmann's work. I hope it is safe to say that the simplistic philosophical notions behind that opposition are now outdated and superseded.
I fully agree with your points. I think I may have been sloppy in expressing my criticism of the original article. At a certain level I think my comment was 'sociological' - the aliens, as intelligent creatures, should have a strong intuition that humans as other (somewhat) intelligent creatures can't be spending so much time and attention on something that is nothing more than a coin toss.
This sociological intuition should drive further inquiry into the mechanics of the game.
> if they are limited to that extent, then they are not going to determine that it is a game of rules, no matter what their philosophy and no matter how much they suspect that it is
Indeed, in that case the aliens are forever stuck. However, IRL you probably can't be absolutely sure that you're forever stuck, so in this case the 'philosophical' attitude might matter. An anti-realist might say - to hell with it, no worth trying, we'll never get better predictions out of more complex theories. A realist however, might pursue a theory not because it makes more accurate predictions, but because he/she has an idea that the theory is truly closer to the truth than the idea of a random coin toss. This intuition might take you through a dark period towards a higher reward (see moving from a local maximum to a valley, towards a yet unforeseeable global maximum).
Thanks for bringing that to my attention - I have posted a reply there.
I think you are probably on to something when you suggest this article was influenced by Feynman's analogy, but I am pretty sure that Feynman was not suggesting the straw man that this article attacks.
I figured this was a riff on Jaynes. From Chapter 10, section 8 of "Probability Theory: The Logic of Science" (starting on page 329).
10.8 Mechanics under the clouds
"We are fortunate that the principles of Newtonian mechanics could be developed and verified to great accuracy by studying astronomical phenomena, where friction and turbulence do not complicate what we see. But suppose the Earth were, like Venus, enclosed perpetually in thick clouds. The very existence of an external universe would be unknown for a long time, and to develop the laws of mechanics we would be dependent on the observations we could make locally.
Since tossing of small objects is nearly the first activity of every child, it would be observed very early that they do not always fall with the same side up, and that all one’s efforts to control the outcome are in vain. The natural hypothesis would be that it is the volition of the object tossed, not the volition of the tosser, that determines the outcome; indeed, that is the hypothesis that small children make when questioned about this.
Then it would be a major discovery, once coins had been fabricated, that they tend to show both sides about equally often; and the equality appears to get better as the number of tosses increases. The equality of heads and tails would be seen as a fundamental law of physics; symmetric objects have a symmetric volition in falling (as, indeed, Cramer and Feller seem to have thought). Of course, physicists continued discovering new particles and calculation techniques – just as an astronomer can discover a new planet and a new algorithm to calculate its orbit, without any advance in his basic understanding of celestial mechanics.
With this beginning, we could develop the mathematical theory of object tossing, discovering the binomial distribution, the absence of time correlations, the limit theorems, the combinatorial frequency laws for tossing of several coins at once, the extension to more complicated symmetric objects like dice, etc. All the experimental confirmations of the theory would consist of more and more tossing experiments, measuring the frequencies in more and more elaborate scenarios. From such experiments, nothing would ever be found that called into question the existence of that volition of the object tossed; they only enable one to confirm that volition and measure it more and more accurately.
Then, suppose that someone was so foolish as to suggest that the motion of a tossed object is determined, not by its own volition, but by laws like those of Newtonian mechanics, governed by its initial position and velocity. He would be met with scorn and derision; for in all the existing experiments there is not the slightest evidence for any such influence. The Establishment would proclaim that, since all the observable facts are accounted for by the volition theory, it is philosophically naive and a sign of professional incompetence to assume or search for anything deeper. In this respect, the elementary physics textbooks would read just like our present quantum theory textbooks.
Indeed, anyone trying to test the mechanical theory would have no success; however carefully he tossed the coin (not knowing what we know) it would persist in showing heads and tails about equally often. To find any evidence for a causal instead of a statistical theory would require control over the initial conditions of launching, orders of magnitude more precise than anyone can achieve by hand tossing. We would continue almost indefinitely, satisfied with laws of physical probability and denying the existence of causes for individual tosses external to the object tossed – just as quantum theory does today – because those probability laws account correctly for everything that we can observe reproducibly with the technology we are using.
After thousands of years of triumph of the statistical theory, someone finally makes a machine which tosses coins in absolutely still air, with very precise control of the exact initial conditions. Magically, the coin starts giving unequal numbers of heads and tails; the frequency of heads is being controlled partially by the machine. With development of more and more precise machines, one finally reaches a degree of control where the outcome of the toss can be predicted with 100% accuracy. Belief in ‘physical probabilities’ expressing a volition of the coin is recognized finally as an unfounded superstition. The existence of an underlying mechanical theory is proved beyond question; and the long success of the previous statistical theory is seen as due only to the lack of control over the initial conditions of the tossing.
Because of recent spectacular advances in the technology of experimentation, with increasingly detailed control over the initial states of individual atoms (see, for example, Rempe, Walter and Klein, 1987), we think that the stage is going to be set, before very many more years have passed, for the same thing to happen in quantum theory; a century from now the true causes of microphenomena will be known to every schoolboy and, to paraphrase Seneca, they will be incredulous that such clear truths could have escaped us throughout the 20th (and into the 21st) century"
It is an amusing story, but the Venusian-Newtons could show that their principles would allow one to calculate the behavior of non-symmetric objects, and that the statistics of both symmetric and non-symmetric projectiles can be predicted by their theory - and it also explains gyroscopic precession!
The statement, "A machine that allows his alien eyes to see the first move." seems too information-rich for what the author then describes as the alternative model. Perhaps not the point of this thought experiment, but let's assume the aliens can only crudely see the first move: they know how many pieces there are on each side of the board, their color, but otherwise all the pieces look the same. They cannot see the surface of the board (they don't know it is checkered), but they can determine the exact center of each piece (so during the first move they can compute a precise coordinate of the piece that moves). Promptly after the first move, the image goes away. Also assume they trust the humans who've told them: "there are actually pieces on the board with functions". They have collected data on 1000s of matches.
What can the aliens know or deduce?
- The number of pieces
- The pieces are divided into 2 different colors (white and black)
- The pieces are split by color and placed at opposite (vertical) ends of the board.
- There is the same number of pieces in each color
- The pieces are arranged into 2 horizontal rows and 8 vertical columns
- Opening moves cluster at 2 different lengths, towards the opposite side of the board.
- The longer opening move length is typically ~2x the short opening move length.
- The opening move vector is always towards the opposite end of the board (not horizontal)
- The opening move is always white
- White wins slightly more often
- They are given the duration and outcome of each game (win, loss, or tie)
- ? When games end quickly, first-move advantage shrinks
- ? Sometimes a series of games end more quickly than usual (they observe a blitz tourney)
- ? Win P() is slightly different for each of the 8 (x2) possible opening moves
Also, a note regarding the statement in the article that the alternative model "is less predictive over iterated games than the coin flip model". I don't think this is technically possible. If the model was selecting the winner at a statistically worse rate than 50% chance, you could just flip the sign of the prediction, and do that much better than chance.
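The sign-flipping argument can be checked directly. A toy sketch with an invented "worse than chance" model:

```python
import random

random.seed(1)

# Hypothetical outcomes of 1,000 games.
outcomes = [random.choice(["white", "black"]) for _ in range(1000)]

def bad_model(truth: str) -> str:
    # Deliberately bad: correct only 40% of the time.
    wrong = "black" if truth == "white" else "white"
    return truth if random.random() < 0.4 else wrong

preds = [bad_model(t) for t in outcomes]
acc = sum(p == t for p, t in zip(preds, outcomes)) / len(outcomes)

# Flipping every prediction turns accuracy a into 1 - a, i.e. above chance.
flipped = ["black" if p == "white" else "white" for p in preds]
flipped_acc = sum(p == t for p, t in zip(flipped, outcomes)) / len(outcomes)

assert acc < 0.5 < flipped_acc
```

So for a binary outcome, any model reliably below 50% accuracy is still informative; only a model at exactly 50% carries no predictive information.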
Observing the movement of pieces quickly reveals that the board is divided into 64 squares. The squares being black and white, or any other pair of colors, is actually irrelevant. They can basically fully understand the game from this; no need to make things complicated.
I'm not sure they would be able to infer the board has 64 squares. They can't see the board, and can only see the locations of the starting pieces and location of the first move. The center of the board could be anything - 300 hexagons, a spin-wheel, series of mouse traps, etc.
Here's my interpretation of what they see pre- and post- opening move...
If you analyse 5000 board configurations you can deduce with a high certainty that there are 64 fields the pieces can be on.
The pressing question is then why can some pieces move only diagonally or only in one direction or any.
In the case of chess, going after the why question is pointless, but obviously that wouldn't satisfy any scientist: It works because someone decided that it should work like this.
Could it work with different rules? The alien scientist would then probably opt for a positive answer, calling for a multiverse of chess rules :)
In keeping with the original author's description, the aliens can only see the opening move. I'm confused - how could they determine there are 64 fields (nb: I only know the very basics of chess)?
If they could view entire matches, it seems a trivial task to acquire a complete understanding of the underlying game, no?
Oh sorry, I missed that part. But according to the rules of chess, random valid openings should eventually cover all fields on the board, so at least you'd know the layout. Pawns can move one or two fields.
Knowing only the first move would limit understanding heavily, that's true.
I find the article great. His only mistake is to think that chess is a balanced game. White clearly has an advantage, so over a long run of random games, like he proposed, you would not see a 50% winrate for black.
From the article: 50% of the time, the white player wins and 50% of the time, the black player wins (we'll ignore draws and any first-move advantage for the example).
This is one of the main sins of postmodernism: the thought that because all viewpoints are tenable, all are correct. The scientific method is based on accuracy of prediction; otherwise we are not talking about science anymore. That's not bad, and many a researcher could do with the realization that maths and models are not the best way to describe all things. It's a different ballgame though.
I can't propose one, I think whoever could would have some sort of Renaissance discovery in their head. I say this because I feel the way we approach certain disciplines like psychology (in an increasingly scientific and statistical way) is the wrong take, although it has worked well for most things in the past. Although we can make good experiments, we are missing the intuitive part of it. There are some truths about the way minds work that you can understand, but you can't really write down or quantify, and science can do no help there.
For physics though, we have it down right. Do the experiment; did it work? It's already the correct method. No need to improve there.
I believe the point being made here is that, indeed, scientific method is based on accuracy of prediction.
However, in order to reach a good accuracy, a great deal of fine-tuning and tweaking is required. E.g. high energy physics. The standard model has something like 18 free parameters.
A new theory, which could be "more correct" and provide, in the long term, better predictions, might require time and manpower for this fine-tuning and tweaking to occur.
However, as it did not provide better accuracy of prediction in its initial stages, it is said to be not worth pursuing, or even dismissed as pseudoscience or quackery.
This was not the case 100 years ago, when relatively simple theories workable by 1-3 solo scientists provided a large enough breakthrough in predictive power to be seriously considered.
It may be the case however that nowadays, with the amazing level of precision measurement we are able to achieve, we've optimized ourselves into a corner.
We've fitted our quite-a-lot-of-degrees-of-freedom theories into a local maximum so hard that finding a theory that predicts better is, at the least, impractical.
And through this process we've blinded ourselves from any new and disruptive ideas.
Again, look at the standard model. It's got so many damn dials to tweak that no wonder it fits reality so well. And if it ever doesn't, we can just shove supersymmetry in there.
We need to be able to dedicate resources to theories that do not provide better predictions, but that provide new perspectives. A moderate amount of resources. But calling those scientists quacks does no good.
I don't think that the usefulness of alternative viewpoints is contested by anyone, the problematic part of the article is the vague, ill-informed critique of 'scientific rigor'.
What other way is there to judge models than to compare them with experimental results? Grounding theories to experiments is literally the only thing that separates science from crackpottery.
It's also pretty weird for the article to criticize "logical positivism", when the most common contemporary views on the topic of philosophy of science (e.g. Popper, Kuhn, Putnam) don't actually agree with logical positivism.
I agree, this is the case already to some extent. And yes, the article is quite vague in general. I believe it's intended as thought-provoking and not really a proper argument. I think in that sense it's an interesting piece. We must remember not to be too myopic when knee-deep in our over-fitted math frameworks.
I'd only add that, in my opinion, the vehement rejection of non-mainstream ideas is actually more common in non-professional circles. While a physicist may find it interesting to consider a new idea, a physics enthusiast, in my personal experience, is much more likely to accuse it of pseudoscience and crackpottery. It's sort of a validating and gregarious warm-and-fuzzy feeling. Look at those people with their clearly energy-conservation-violating nonsense. Hey, I've done it before.
We already have the best disruptive model that doesn't make good predictions: string theory.
There's not really a need to come up with these alternatives when we know string theory is powerful enough to do it, and only has one free parameter. That is, unless you can come up with a theory that has no free parameters.
These theories are fundamentally math models, and I don't find it likely that we'll determine that p = np by finding just the right np problem and exploring that on its own.
In the theory department, we're fine with what we've got. What we're missing is new experiments that can actually challenge our models. We've got a couple of things to push on, sure { black hole singularities, dark matter, dark energy, wave function collapse, quantum gravity }, but we don't have the tools to manipulate and observe them like we do for electromagnetism or chromodynamics.
Does string theory only have one parameter? I was under the impression that you also had to insert some very complicated underlying geometry into spacetime (the extra dimensions), which I would count as more parameters.
You would be somewhat right, this is known as the landscape problem. Technically string theory has no free dimensionless parameters (by the formal definition, values which need to be determined experimentally).
edit: to expand, the landscape problem is more about finding the right parameters for compactification (folding up the geometry) that produce the universe we all know and love, or at least a realistic one. This makes them similar to free parameters.
>They are very interested in chess, despite the fact that they have eyes with properties that make it impossible to make out what actually happens on a chess board. (The whites and blacks and squares all blur together.)
I suspect that the real breakthrough would come when the alien physicist designs a very sensitive instrument that can observe exotic wavelengths of light at high resolution, and thereby show that the board is not uniform, but is divided into regions that have 1 of 2 states: i.e. black and white squares.
The author unironically uses a parable about aliens predicting chess outcomes with a coin flip and how wrong that is, to promote the idea that random fringe ideas without predictive power have value.
I severely doubt that anyone who isn't already primed to agree with its conclusions would read this and find it convincing.
(I don't find the idea that science needs ways to escape local maxima particularly controversial, but this article is terrible, and made worse by the unnecessary antagonism)
I think the issue is that a player who is not good at chess would lose literally 100% of the time against anyone even close to a top regional level. In other words, chess is not a 50% chance game; it is a variable-chance game depending on who you play, so the analogy is hard to follow because the aliens would not come up with this 50% outcome idea.
I think you misunderstand - you don't always play black in chess. So on average white wins 50% of the time (modulo first move advantage and draws as the article states).
But they can observe that Alice always wins when playing against Bob. (If they somehow can determine that one player is black and the other is white, they probably can also distinguish the players as well?)
They can't see a chess piece but they can see humans well enough to tell them apart?
Analogies don't work if you deliberately make assumptions that are inconsistent with what they are trying to convey.
We could rephrase the analogy such that the aliens are listening to a specific broadcast from earth from 1000 light years away. Some percentage of the time the broadcast is "White won", the rest of the time the broadcast is "White lost." Is it reasonable to assume that it's a game of pure chance just because you don't have a good method of predicting the outcome?
Yeah, I agree. But, I think, the thing is that once you start making so many carefully chosen assumptions about what is and isn't happening, the nice parable format falls apart quickly.
Seriously, a model that can predict things better than a coin flip will be more scientific, even if it utilizes something that is somewhat hypothetical or not immediately observable. Say, Newtonian mechanics talks about forces, but force is a totally made-up concept; its nature is (for the most part) not explained within the Newtonian framework at all, yet it's a good model.
Also, many models of reality that are "inferior" to the "real models" are still very helpful and useful in science. Many numerical methods scientists are happy to use, say finite differences, discretize continuous equations. This transformation makes the model, strictly speaking, worse. But this is fine, since it allows us to produce calculations that match and predict experiments. Even in physics, people are starting to use ML / neural networks to approximate complex calculations; not because a neural network is a better, more descriptive model of reality (of course it's not), but because it calculates an answer close enough to the real one.
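A tiny sketch of that finite-difference point (the function, point, and step sizes are invented for illustration): the discretized model is strictly "worse" than the continuous one, but its error is controlled and predictable.

```python
# Forward-difference approximation of a derivative: a deliberately
# "worse" model of d/dx sin(x), whose truncation error shrinks
# roughly linearly with the step size h.
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # true derivative of sin at x = 1
for h in (0.1, 0.05, 0.025):
    err = abs(forward_diff(math.sin, 1.0, h) - exact)
    print(f"h={h:<6} error={err:.6f}")  # halving h roughly halves the error
```

That predictable error is exactly why the "worse" model is still scientific: it tells you how wrong it is.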
Crank ideas are crank not because they use some made-up concepts that cannot be experimentally seen. They are crank because they ignore mainstream development, staying blissfully unaware of the subtleties and details the mainstream theories have already considered and resolved.
So the main fault of crank theories is the ignorance of their creators, who are either not willing or not able to correctly contextualize their work within the previously existing knowledge. If you want to do science, you have to do your homework: 1) explore and learn what is known already, 2) develop it further or propose an alternative, 3) contextualize your work within the existing knowledge about the subject. Skipping steps 1 and 3 is dishonest.
I’m a bit surprised that nobody has called attention to the Pepe the Frog meme. Given its association with the alt-right at worst, and trolling at best, this seems out of character for hacker news.
Use of Pepe and other alt-right tropes and 4chan lingo is consistent with who Luke Smith is and the image he wishes to cultivate for himself to attract a similar crowd.
Regarding the main point of the article. I do agree that it's very important to not get stuck in local maxima. However, I think the author heavily underestimates/overestimates the chances of the following:
1. The author underestimates the rate of competing ideas coming out of 'learned' scientists.
2. The author overestimates the possibility of an 'internet dilettante' without good knowledge of prior art coming up with something useful.
To me it's a bit like playing chess against a Grandmaster. Sure, I know how the pieces move; I know basic tactics and strategy. But I have never studied a chess theory book and never learned any openings. What's the chance of me coming up with a theoretical novelty that provides a better model of the game and lets me beat the Grandmaster? It's not zero. But due to my lack of knowledge of prior art, the chance is pretty thin.
I was disappointed to search through the comments and not find the phrase "two slit experiment". [1]
The first half of the article is clearly about quantum mechanics, although it doesn't name it explicitly. The author seems to be arguing that there is something specific really going on below the quantum level, regardless of whether we can measure it, and that scientists refuse to acknowledge it only because of philosophical dogma (logical positivism). They seem to think we only use probabilities to reflect our uncertainty, and ought to come up with a more fundamental theory that avoids that uncertainty.
But the two-slit experiment shows that there is no more fundamental theory (at least not in that sense). If two paths each have probability 0.5, it's not the case that only one of them took place and we just don't know which. Instead, they both took place in some sense, to the extent that they interact with each other; that's not possible if there was one true path that we simply couldn't deduce. It's maybe better to think of 0.5 as a weighting rather than a probability.
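That "weighting rather than probability" point can be shown in a few lines of arithmetic: quantum amplitudes are complex numbers that add before being squared, so two "50%" paths don't combine the way classical probabilities do. A toy sketch (the phases are arbitrary, chosen only to exhibit interference):

```python
import cmath

# Each path has an amplitude whose squared magnitude is 0.5.
a1 = cmath.rect(1 / 2**0.5, 0.0)
for phase in (0.0, cmath.pi / 2, cmath.pi):
    a2 = cmath.rect(1 / 2**0.5, phase)
    p_quantum = abs(a1 + a2) ** 2              # amplitudes add, then square
    p_classical = abs(a1) ** 2 + abs(a2) ** 2  # probabilities just add
    print(f"phase={phase:.2f}  quantum={p_quantum:.2f}  classical={p_classical:.2f}")
```

The classical column is always 1.0; the quantum column swings between 2.0 (constructive) and 0.0 (destructive) as the relative phase changes, which is what a theory of one hidden "true path" cannot reproduce.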
I thought this was a quite insightful article - and was surprised to find so much criticism in the comments. e.g. those who are attacking his parable on the grounds that "actually the odds aren't actual 50-50 in chess" or "an alien would have to wonder why some people win more than others"... these objections seem like nit-picking to me and miss the opportunity to broaden our perspectives. The example is to help illustrate and illuminate an intriguing idea which does merit consideration: that the scientific method and various aspects of the scientific process are very much a "gradient descent" algorithm which will tend to converge to a local minimum... the large jumps needed to discover the "global minimum" are not sufficiently rewarded... or may not even be within our capabilities!
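The gradient-descent metaphor is easy to demonstrate directly. A minimal sketch (the double-well function, learning rate, and starting points are all invented for illustration): small locally-improving steps always reduce the objective, yet from the wrong starting point they settle into the shallow minimum and never cross the barrier to the deeper one.

```python
# Gradient descent on the double-well f(x) = x^4 - 3x^2 + x,
# which has a shallow local minimum near x = 1.13 and a deeper
# global minimum near x = -1.30.

def grad(x):
    return 4 * x**3 - 6 * x + 1  # f'(x)

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(1.5))   # stuck near the shallow local minimum (~1.13)
print(descend(-1.5))  # finds the deeper minimum (~-1.30)
```

Escaping the basin requires a large jump that temporarily makes things worse, which is the analogy's point about under-rewarded exploratory theories.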
Here is a more concrete example I have been considering recently: the principle of least action can be derived from Newton's force laws. Alternatively, Newton's force laws can be derived from the principle of least action. Therefore, you could choose to make either "law" the more fundamental one and consider the other an emergent phenomenon. Might this also apply to, e.g., the principles of thermodynamics? Might it be that all of the laws of physics are actually an emergent phenomenon that can be derived directly from the information-theoretic properties of the laws of thermodynamics and entropy?
But then, why even stop there? We choose/discover these examples because our tiny, mediocre brains can only understand the universe from that point of view, whereas the actual universe we live in is unimaginably vast and complex. But might there be a way, for example, of deriving the laws of physics from some narrative principle (e.g. "we live in the best of all possible worlds... therefore, by some chain of reasoning: F = ma!")? Maybe that chain of reasoning is only comprehensible to an artificial intelligence which we have yet to invent, involving us inhabiting some element of a vast fractal structure subject to some sort of anthropic principle?
And in general, it is worth considering the concept that not all things which are "true" are objectively measurable - those are just the things that are easiest to prove are true. To give an example of something which you know is true but which is, almost by definition, subjective: the existence of your own consciousness! It is impossible to prove, objectively, that you're not just some simulation that is running in a computer that's about to run out of memory and segfault... that you yourself are "real".
That's not to say that any of the above is actually true... but it might be true. Our recent focus on "provability" and "objectivity" has been used to argue that such things are in fact false, or just not worth considering or thinking about. There is sometimes an arrogance when there should be humility. Just because something might be exponentially difficult to prove, the sheer difficulty in proving it does not mean that it is not true.... There may be things which are true which we will never have the resources to discover. I think it's all worth considering.
People are attacking this article because logical positivism gives them peace of mind. It's nice to think that the universe is well ordered and that the answers are out there. It can give you a sense of superiority, too.
In this sense, a belief in logical positivism has the same psychological basis as a belief in a conspiracy theory.
ah and I get it now - the author has associations with the alt-right. For those who had heard of him (I had not), they were already inclined to view his statements uncharitably, even though I think this particular essay does not have any real "alt-right" themes.
>> Maybe that chain of reasoning is only comprehensible to an artificial intelligence which we have yet to invent, involving us inhabiting some element of a vast fractal structure subject to some sort of anthropic principle?
And we will call this AI Deep Thought and it will finally provide answer to Life, the Universe and Everything ...
Sorry I could not resist myself.
But your whole observation is spot on: so many people here are missing the forest for the trees, arguing about pointless details. I would even dare to say that they are sitting in a local minimum of all possible discussions that could be derived from this article.
If there is indeed no way for aliens to "see" the chess playing process (precisely: to make measurements which confirm/reject hypotheses about details of the game), then the best model they can ever come up with is a black box where the outcome has given statistical properties.
Where the parable falls short, and a mistake often made, is to confuse one's own inability to come up with discriminating experiments, for an absolute truth about the universe's resistance to observation.
What would it mean for the aliens to never be able to measure chess's inner properties? It would mean that the aliens could never read human texts or talk with humans, as either would instantly lead them to the truth. Which in turn means the aliens can't see light, or hear sounds, or interact with anything that would allow them to do so. Going further, we end up at the conclusion that either these aliens exist in a completely separate universe from chess (and everything around it), or they CAN observe it (they just haven't tried hard enough).
This post is confused, in that it first states aliens can't see chess, but later they can. If their sensors ARE able to measure chess, the scientific process will eventually weed out the worse models for the better ones and they will understand it.
If they aren't, then they will be left with the statistical random model, and that is indeed the best explanation, as all guesses toward the complicated intricacies of the game are equally likely and cannot be differentiated.
In the end, the post's theory boils down to one idea: "there is sometimes an underlying truth that is not observable", and it is a really weak one. Like the flying teapot, it can neither be proven nor disproven. "Underlying truths" that do not affect the universe in any way and cannot be measured might as well not exist, and we should not attempt to model them. Everything else can and will be explained, in time.
I'm not sure I see a paradox, or lesson here, other than it is important to understand what the scientific process of model, hypothesis and measurement can and can't do.
--
the same argument is often made for magic, or spirituality, or any other thing people say is 'unexplainable' (which can often be translated to "something I don't want to think too hard about")
We used to think many things were magic (light, magnets, the world itself), until someone smarter than us came up with a really clever experiment that let us shine light into boxes we thought were black (Bell's theorem is a fine example).
The lesson from history is, if we look hard enough, we eventually get closer to the truth, every time.
yeah - I think his point was something along the lines of "you can have a useful theory but it might be worth getting rid of it for something less useful that might be more productive in the long run because it has better understanding of the fundamental mechanism."
Or, more succinctly: using predictive ability as the sole criterion can lead to less correct results than the alternatives.
If my description is correct, I would say that the advantage of using prediction as the sole criterion is that it avoids the burden of making arbitrary selections. I understand the appeal of choosing a course because it's more elegant or beautiful, but if you were getting on a plane and were told that was the guiding light of the designer (or worse, the engineers), how happy would you be?
The central thesis of the article is supported mostly by the argument "Yeah, it is dude." Ignoring the weakness of the analogy, there is a reason science is "about models" and not about truth: We can measure and test models, and we cannot measure and test truth.
A theory that is more complex and fits the data worse is a less good theory because it is easier to get stuck in. If I have two new theories with this property, how do I choose between them? Imagining a situation where the less predictive theory is closer to the underlying reality does not change this - the fact that it is closer to the underlying reality is irrelevant until and unless we have measurements that allow us to distinguish those cases, at which point, we will have evidence for the (now) more predictive theory.
This article is actually arguing for a system that would result in more dogmatic acceptance and less critical testing.
This was the only part of an otherwise great post I objected to. Science may have a side effect of uncovering truth but it shouldn't be the driving force. We should be striving for better predictive ability that doesn't go against our observations. Arguing which is more real is not science.
Drop some heavy objects and some light objects you have lying around. You will see that the heavier ones tend to fall faster than the light objects. If all you care about is predicting which object will fall faster, you could simply say gravity pulls the heavier one faster and call it a day.
But if you were to forego predictive power and seek understanding, you'd eventually see that objects fundamentally fall at the same rate, and that the confounding factor of air resistance was the reason for the observed discrepancy. With this deeper understanding, you can build models of gravity and fluid dynamics which are not just more accurate for the specific cases you measured, but also extensible to other cases, like the orbits of planets.
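A toy simulation makes the point checkable; all the numbers here (masses, drag coefficient, time step) are invented for illustration. With drag switched off, fall time is independent of mass; with a quadratic drag term, the lighter object lags:

```python
# Euler integration of an object dropped from rest: returns the
# time to fall `height` metres, with optional quadratic air drag.
G = 9.81  # m/s^2

def fall_time(mass, height, drag_coeff=0.0, dt=1e-4):
    v = y = t = 0.0
    while y < height:
        a = G - drag_coeff * v * v / mass  # drag deceleration grows with v^2
        v += a * dt
        y += v * dt
        t += dt
    return t

print(fall_time(1.0, 10.0), fall_time(10.0, 10.0))            # identical without drag
print(fall_time(1.0, 10.0, 0.1), fall_time(10.0, 10.0, 0.1))  # lighter one is slower
```

The "heavier falls faster" rule predicts the second line fine, but only the drag model explains the first, and only the drag model generalizes to a vacuum or to orbits.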
You are imagining this situation from the perspective of already having a more powerful / general model and looking backwards, so it seems obvious. But that is not how things work. Without data to support it, you can imagine all kinds of possible theories - that is not science, and it certainly gets you no closer to understanding.
Yes, the entire point is that what was obviously correct in hindsight was not clearly advantageous to begin with. The author's thesis is that exploring models which may seem worse than what we currently have can ultimately lead to better overall results. The author is specifically criticizing the narrow definition of science which says every incremental step, no matter how small, must improve your models' predictive power rather than explore the possibility space.
Neither you nor the OP has provided a reasonable example of a model which is less predictive and also more "true." The OP's analogy has been discussed to death here, and yours is explicitly a model which is in fact more predictive.
The second step would be noticing that objects of the same weight fall at different rates. When your predictive power fails you seek more refined models. For everyday phenomena we can guess at truth, maybe 'the wind'. The deeper you go the less clear it becomes as you don't have any direct interaction with the things you're modelling.
The author applies logical positivism incorrectly. There's nothing wrong with coming up with new models and testing them against the data, and eventually getting enough variables to be significantly more effective. But such models should be a low portion of your overall probability until actually found to be more effective.
Of course, if you start out with a contrived example where there's a large amount of unobservable simple structure, then it looks like postulating unobserved structures is good. I could give a contrary example where it's impossible to do better than random and draw the opposite lesson.
Beyond that, under logical positivism there isn't a notion of truth as the author applies it. To critique LP by saying it doesn't get at the truth is kind of missing the point.
The whole article starts with a straw man about aliens making incorrect deductions based on a version of chess that doesn't correspond to actual chess. This is used to illustrate a very obvious point about local maxima that is in turn extrapolated in an entirely unsupported way to try to draw some conclusions about science and allow the author to make some strange and entirely unnecessarily edgy statements like "Logical positivism is kind of theoretical lobotomy"... "tweaking equations like an oblivious autist is Science" etc.
I think many comments here miss the point. I’ll try to summarise them and defend the article:
* “Chess results are not 50-50” - I don’t think the exact numbers are that important. What’s important is that in the 50-50 model or x-y model the Aliens assume these values to be just a consistent result of an RNG.
* “Chess is not random / the players are intelligent / the aliens could investigate who plays against whom” - In the parable the Aliens don’t have direct access to these variables, just like we might not have access to some variables in QM experiments, and we (and They) could mistake something hard to observe for random (i.e. if the rule is “if there’s an even number of atoms in a 1m radius, do A; if it’s odd, do B”, the outcome would be 50-50 and would seem random even though it isn't)
* “Theories that are more complicated, predict less or make worse approximations are worth less than simple theories that predict more / better” (in other words Science is about models and approximations, not about truth or “Shut up and calculate”) - I think the author simply disagrees but he also shows why non-mainstream models are worth pursuing and developing - so that we are not stuck in a local maximum with a worse model than the one we could have.
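The "even number of atoms" rule in the second bullet is easy to simulate: a fully deterministic rule applied to a hidden variable is statistically indistinguishable from a coin flip for an observer who can't see that variable. (The hidden count below is just a stand-in for whatever the aliens can't measure.)

```python
import random

def outcome(hidden_atom_count):
    # The rule itself is fully deterministic -- no randomness here.
    return "A" if hidden_atom_count % 2 == 0 else "B"

random.seed(0)
# The observer never sees the counts, only the outcomes.
results = [outcome(random.randint(10**6, 2 * 10**6)) for _ in range(100_000)]
print(results.count("A") / len(results))  # close to 0.5: looks like a fair coin
```
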
Something else concerns me: I don’t know much about QM but, if I understand correctly, Bell’s inequality tells us that there are no local “hidden variables” and that QM is indeed random.
I’d be grateful for any references to articles on this topic that can be understood by non-experts.
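On the Bell's inequality question: the usual statement is that no *local* hidden-variable theory can reproduce QM's correlations (non-local hidden variables, as in Bohmian mechanics, survive). The local bound is even checkable by brute force: in the CHSH setup, a deterministic local strategy assigns a fixed ±1 outcome to each measurement setting, and enumerating all such strategies shows the CHSH quantity never exceeds 2, while QM reaches 2√2. A sketch:

```python
# CHSH quantity S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# A deterministic local model fixes each side's +/-1 outcome as a
# function of its own setting only; mixed strategies are convex
# combinations, so enumerating the deterministic ones bounds them all.
from itertools import product

local_bound = max(
    a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    for a0, a1, b0, b1 in product((-1, 1), repeat=4)
)
print(local_bound)   # 2: the best any local hidden-variable model can do
print(2 * 2**0.5)    # ~2.83: the quantum (Tsirelson) maximum
```

Experiments observing values above 2 are what rule out the local hidden-variable picture.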
One important thing the author seems to miss at some level is the context of the data being modeled.
If all you have is data on whether black or white won, then, assuming the 50-50 outcome probabilities, this is a fine model. There is nothing to be gained by anything else.
Where I disagree with the author, I guess, is in suggesting that some other statistically inferior model would be better because, if you had a richer set of observations, it would be shown to be true. To me that scenario is irrelevant at some level, because it's a different scenario (I don't agree with the author's assertion that first-move information would somehow decrease predictive information -- it might not improve it, but I doubt it would decrease it).
If you had information on player identities even, you might be able to model ability or something like that, and use that to improve your predictions. But then that relies on the bit of information about which players are which, beyond black and white.
The philosopher Quine argued that models cannot be decontextualized from the observations they are explaining, and I think that is particularly relevant in this case. At some level it doesn't matter whether the Aliens' models are true or not, because they have no information in the scenario that they could use to do anything with.
The 50-50 model (per the author) isn't incorrect, it's just a (assumedly) correct model for the game of chess when all you know is which color won the game. If you were playing chess, and your pairings were random and you couldn't see the board or what pieces were available or where they could go, etc the game would seem very similar to what the aliens were observing. A more "truthful" model is only relevant or useful in the context where there is more data to make it useful.
Is there an area of study that looks at how people discuss things? Including, but not limited to, misunderstanding/miscommunication around analogies, thought experiments, and the like? So that you could make predictions. For example, since "story A" used analogies "B and C", we'd expect ~50% of the commentators on said article to fixate on B, since that is their favorite topic, and they never tire of discussing B, even if it is purely incidental to the point of the article. So you could potentially tailor your article to help minimize misunderstandings. Micro-sociology, or maybe some sort of specific behavioral psychology?
>I’d be grateful for any references to articles on this topic that can be understood by non-experts.
"...From his reply to EPR, we find that Bohr's position was like this: 'You may decide, of your own free will, which experiment to do. If you do experiment E1 you will get Result R1. If you do E2 you will get R2. Since it is fundamentally impossible to do both on the same system, and the present theory correctly predicts the results of either, how can you say that the theory is incomplete? What more can one ask of a theory?'
While it is easy to understand and agree with this on the epistemological level, the answer that I and many others would give is that we expect a physical theory to do more than merely predict experimental results in the manner of an empirical equation; we want to come down to Einstein's ontological level and understand what is happening when an atom emits light, when a spin enters a Stern-Gerlach magnet, etc. The Copenhagen theory, having no answer to any question of the form 'What is really happening when - - -?', forbids us to ask such questions and tries to persuade us that it is philosophically naive to want to know what is happening. But I do want to know, and I do not think this is naive; and so for me QM is not a physical theory at all, only an empty mathematical shell in which a future theory may, perhaps, be built."
...and...
"The Chaotic Ball: An Intuitive Analogy for EPR Experiments"
I would agree that all models are wrong but some are useful, even though that goes unstated. I feel like the author’s point is that asking for “the right model” is the wrong question. I don’t mind argument-by-analogy too much, but I think it would help if we knew what the aliens were trying to do with their model. People criticizing "inaccuracy" seem to just have high standards for their models (perhaps you want to simulate 100B chess games in 10 seconds.. how's that coin flip idea sound now?). Just imagine you're dreaming about backgammon: do you know all the rules? No, but you can roughly model what's going on. Your model could be called "worse" than the standard rules, but it's good enough for a dream sequence.
More than once have I seen a nice chess set in a home, set up completely wrong. They’ll be close! Their model isn’t too far off from standard, but just so (K & Q reversed, B & N reversed, &c). I presume the owners were quite content. Honestly, I don’t even know if most people would notice, so why point it out (is my model even the “preferred” one)?