
A footnote in the article notes that the original "faulty" formula and the new "correct" one are asymptotically equivalent.



I am talking about the accumulated error part near the end of the article, so your (uncharitable) assumption is not correct.

This paper exactly accounts for the increased uncertainty due to multiplying these factors together. It's unclear to me how that makes it invalid...
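For reference, the textbook propagation rule for a product of independently measured factors (a standard result, not specific to this paper's analysis) is

    y = \prod_i x_i,
    \qquad
    \left(\frac{\sigma_y}{y}\right)^2 \approx \sum_i \left(\frac{\sigma_{x_i}}{x_i}\right)^2

i.e. the relative uncertainties add in quadrature, so chaining many factors inflates the combined relative uncertainty even when each factor is individually well measured.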

The old calculation relies on older experimental results that have been verified by multiple experiments - so if the older value is wrong, it means either the calculation was done wrong (possible), or the experiments all have had a significant correlated systematic error that has never been caught (also possible). However, I’d say both of those things are relatively unlikely, when compared to the probability of some small error in a new paper that was just released that uses a new method that involves lattice calculations. This is all a balance of probabilities argument, but from my experience in the field, I’d say it’s more likely that any errors in calculation or missed systematics would be in the new paper.

However, I’m an experimentalist who has worked close to a lot of this stuff, not an actual theorist, so I’d love to get a theorist’s interpretation as well.


It's an improvement (IIRC, it makes the error growth linear), but it does not 'fix' the problem.

I'm genuinely perplexed.

The headline seems to differ from the rest of the article. It seems to suggest that the preprint is entirely accurate and that our previous calculation method was incorrect.

Am I interpreting this correctly? Is this just a bad and misleading headline?


> A nitpick, perhaps, but isn't that three orders of magnitude?

Perhaps the example was a best-case, and the usual improvement is about 10x. (That or 'order of magnitude' has gone the way of 'exponential' in popular use. I don't think I've noticed that elsewhere, though.)


Yes, Cramer’s rule (referred to in the article) does not give numerically stable results AFAIK - which is yet another can of worms. Good catch.
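If anyone wants to poke at that claim, here's a small toy comparison of my own (not from the article): solve an ill-conditioned Hilbert system with Cramer's rule and with NumPy's LU-based solver and compare the errors. How badly Cramer's rule fares depends on the conditioning and on how the determinants are computed, so treat it as an experiment rather than a proof.

    import numpy as np

    def solve_cramer(A, b):
        # Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
        # column i replaced by the right-hand side b.
        det_A = np.linalg.det(A)
        x = np.empty(len(b))
        for i in range(A.shape[0]):
            Ai = A.copy()
            Ai[:, i] = b
            x[i] = np.linalg.det(Ai) / det_A
        return x

    n = 10
    A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
    x_true = np.ones(n)
    b = A @ x_true

    x_cramer = solve_cramer(A, b)
    x_lu = np.linalg.solve(A, b)  # LU with partial pivoting

    print("condition number :", np.linalg.cond(A))
    print("Cramer rel. error:", np.linalg.norm(x_cramer - x_true) / np.linalg.norm(x_true))
    print("LU rel. error    :", np.linalg.norm(x_lu - x_true) / np.linalg.norm(x_true))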

True, but the paper reduced opposition. I was surprised by the paper at the time, but assumed the authors considered it a useful analysis. I was quite relieved when the error was discovered.

This paper[1], which corrected Bose et al.'s original derivation, has more discussion about the impact of the second result - in particular, Figure 2 (on page 13) compares the relative error of the old and new bounds against the empirically calculated rate.

[1] https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=90377...


The article points out the data/calculations have been re-evaluated with new adjustments and the conclusions are different than before. It's a bit nonsensical to try to refute new conclusions by just pointing to the old conclusions again.

They said it "should" go down, but that another comment saying the worst case is the same is "also correct".

I do not see any "complete nonsense" here. I suppose they should have used a different word from "tolerance" for the expected value, but that's pretty nitpicky!


Quote from the paper: "Fanelli found that the number of papers providing support for the main hypothesis had increased from 70% in 1990 to 86% in 2007 (it is unclear why Fanelli reported an over 22% increase in the abstract)."

Of course, my calculator yields 86/70 = 1.228 (which would be a >22% increase).

Such basic flaws really don't bode well for the rest of the paper.
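Spelling that out (my own arithmetic, not from either paper): the "over 22%" only works as a relative increase,

    \frac{86 - 70}{70} \approx 0.229

i.e. roughly a 22.9% relative increase, whereas the absolute change from 70% to 86% is 16 percentage points.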


> arbitrarily correcting old measures.

Why do you say the correction is arbitrary? Are there papers arguing for corrections in the other direction?


> that's a pretty big correction.

No, it isn't. It doesn't affect the core finding at all.


Please note the postscript at the bottom of the article:

> Postscript: In November 2013 I received an email from Dan Corson informing me that, using digits of e computed by a program called “y-cruncher”, he had checked the number of CPSs up to 100 million digits, and found that it does finally approach the expected value, although even by this point it has not quite ever reached the expected value.


Wait, so did they do a statistical test somewhere comparing it to the formula they wrote, or is that just by eyeballing it?

The point being made by this new paper is actually substantially different to the original 2003 one.

I haven't read it closely, but it looks to me as if the calculation estimates the expected number of counterexamples rather than the "measure" of them (however you've chosen to define that).

How does that change my point? The article says the original variant had an R rate of 2.7, and Delta had one between 4 and 9.

That’s a big claim with no data.

And thanks, I understand how exponential growth works.
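For anyone who does want the compounding spelled out, here is a toy sketch of my own (ignoring immunity, interventions, and generation timing; not a model from the article):

    # Naive generation model: infections in generation n scale as R**n.
    def cases_after(r, generations, seed=1):
        return seed * r ** generations

    for r in (2.7, 4.0, 9.0):
        print(f"R = {r}: ~{cases_after(r, 5):,.0f} cases per seed case after 5 generations")

With the R values quoted above and an arbitrary five generations, R = 2.7 gives on the order of 10^2 cases per seed case, while R = 9 gives closer to 6 x 10^4.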

