The old calculation relies on older experimental results that have been verified by multiple experiments, so if the older value is wrong, it means either the calculation was done wrong (possible) or all of those experiments share a significant correlated systematic error that was never caught (also possible). I'd say both of those are relatively unlikely compared to the probability of a small error in a brand-new paper using a new method that involves lattice calculations. This is a balance-of-probabilities argument, but from my experience in the field, any errors in calculation or missed systematics are more likely to be in the new paper.
However, I'm an experimentalist who has worked close to a lot of this stuff, not an actual theorist, so I'd love to get a theorist's interpretation as well.
The headline seems to differ from the rest of the article. It seems to suggest that the preprint is entirely accurate and that the previous calculation method was incorrect.
Am I interpreting this correctly? Or is this just a bad, misleading headline?
> A nitpick, perhaps, but isn't that three orders of magnitude?
Perhaps the example was a best-case, and the usual improvement is about 10x. (That or 'order of magnitude' has gone the way of 'exponential' in popular use. I don't think I've noticed that elsewhere, though.)
True, but the paper reduced opposition. I was surprised by the paper at the time, but went along with it given that the authors considered it a useful analysis. I was quite relieved when the error was discovered.
This paper[1], which corrected Bose et al.'s original derivation, has more discussion of the impact of the second result; in particular, Figure 2 (on page 13) compares the relative error of the old and new bounds against the empirically calculated rate.
The article points out that the data and calculations have been re-evaluated with new adjustments and that the conclusions are different from before. It's a bit nonsensical to try to refute the new conclusions by just pointing to the old conclusions again.
They said it "should" go down, but that another comment saying the worst case is the same is "also correct".
I do not see any "complete nonsense" here. I suppose they should have used a different word from "tolerance" for the expected value, but that's pretty nitpicky!
Quote from the paper: "Fanelli found that the number of papers providing support for the main hypothesis had increased from 70% in 1990 to 86% in 2007 (it is unclear why Fanelli reported an over 22% increase in the abstract)."
Of course, my calculator yields 86/70 ≈ 1.229 (which would indeed be a >22% increase).
Such basic flaws really don't bode well for the rest of the paper.
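For what it's worth, the "over 22%" figure is exactly what you get if you read the change as a relative increase (86/70) rather than as percentage points. A quick sanity check of the two readings, using the numbers from the quote above (variable names are mine):

```python
# Two readings of Fanelli's numbers: 70% in 1990, 86% in 2007.
p_1990, p_2007 = 0.70, 0.86

relative_increase = (p_2007 - p_1990) / p_1990   # ~0.229, i.e. a >22% relative increase
absolute_increase = p_2007 - p_1990              # 0.16, i.e. 16 percentage points

print(f"relative increase: {relative_increase:.1%}")                  # relative increase: 22.9%
print(f"absolute increase: {absolute_increase * 100:.0f} percentage points")  # absolute increase: 16 percentage points
```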
Please note the postscript at the bottom of the article:
> Postscript: In November 2013 I received an email from Dan Corson informing me that, using digits of e computed by a program called “y-cruncher”, he had checked the number of CPSs up to 100 million digits and found that it does finally approach the expected value, although even by that point it still has not quite reached it.
I haven't read it closely, but it looks to me as if the calculation estimates the expected number of counterexamples rather than the "measure" of them (however you've chosen to define that).
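To spell out the distinction I mean, here is a minimal sketch, assuming the potential counterexamples are indexed by n, each occurring with some probability p_n (my notation, not the paper's):

```latex
% Expected number of counterexamples vs. one natural "measure" of them:
\mathbb{E}[N] = \sum_n p_n,
\qquad
\Pr[N \ge 1] \le \mathbb{E}[N] \quad \text{(Markov's inequality)}.
% A small expected count only bounds the chance that any counterexample
% exists; the two quantities are not the same thing.
```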