
> Probabilities are meaningless unless it’s a repeatable experiment otherwise its a ludic fallacy

Um... this goes against the entirety of the Bayesian approach to statistics. I think you'd find a lot of very intelligent people who disagree strongly with this statement.

The Bayesian approach takes probabilities as subjective confidences. You can describe someone's confidences as "well calibrated" if, when you look at their historical guesses, their 70% assessments are correct 70% of the time.
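That calibration notion is easy to check mechanically. Here's a minimal sketch (with a made-up track record) that bins historical forecasts by stated confidence and compares them against observed hit rates:

```python
from collections import defaultdict

def calibration(forecasts):
    """forecasts: list of (stated_probability, came_true) pairs.
    Returns observed hit rate per confidence bucket."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[round(p, 1)].append(outcome)  # bin to the nearest 10%
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# Hypothetical track record: 70% calls right 7 times out of 10,
# 90% calls right 9 times out of 10 -- i.e. well calibrated.
history = ([(0.7, True)] * 7 + [(0.7, False)] * 3 +
           [(0.9, True)] * 9 + [(0.9, False)] * 1)
print(calibration(history))  # {0.7: 0.7, 0.9: 0.9}
```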




> I think you'd find a lot of very intelligent people who disagree strongly with this statement.

I never claimed to be intelligent.

> You can describe someone's confidences as "well calibrated" if, when you look at their historical guesses, their 70% assessments are correct 70% of the time.

- Again, if you're trying to figure out whether a coin is split 50/50, then yes. But without being able to repeat the same experiment you're fooling yourself, and the whole apparatus of Bayesian thinking, I think, goes out the window. E.g., my being right about unrelated topics doesn't mean I'm right or wrong about a specific topic.


>Idiots use Bayesian probabilities wrong, therefore using Bayesian probability is wrong.

>The claim is that the prior is subjective and they're bad.

The only such claims I've seen are ones where they disagree with the prior, which is no different from the disputes frequentists have among themselves over whether p=0.05 or p=0.001 is appropriate. Frequentists don't pretend that their p-value threshold is set in stone.

Let me repeat what I said in another comment:

I'm finding this whole discussion orthogonal to Bayesian vs frequentist statistics. The main issue frequentists have with a lot of Bayesian approaches is that Bayesians often want to assign probabilities to one off events, whereas frequentists insist only on repeatable events. That is where the accusation of "subjective" comes from. Frequentists like to believe that any question of probability can be decided by taking N samples and seeing the outcome (even if only conceptually).

For problems where there is a population, and one can do sampling (i.e. repeatability), there's never a problem with using Bayesian methods.

My complaint is that while there are differences amongst Bayesians and frequentists, this article does not present any, and wrongly implies things about frequentists.


> Bayesian probability is emphatically not about an experiment being repeated in the future. That's the frequentist interpretation of probability.

I am saying something different, but this seems like a futile discussion. What does it mean to have P(Paul wrote it) equal to some number in (0,1)? Let's call that number q. Explain what q means in words.

If the truth were ever going to be revealed, or if I could conceive of the experiment being repeated in the future, I could explain it as "I would be willing to pay up to $q to buy an asset that pays $1 if Paul wrote it and $0 otherwise."

> And what are you trying to say by the phrase "counting some stuff and dividing those counts by some numbers and stuff"?

I mean it is very hard for me to take a person seriously who is doing a whole bunch of interviews without even a working paper somewhere.

It seems that if you use some arithmetic, people are inclined to accept what you did without questioning whether it makes sense to apply such models in this case.

> The Dutch book argument shows that rational people ...

You need to do better than just quoting passages from Wikipedia if you want to understand what that means.


> I can't fathom how one can reasonably argue that frequentist approaches are always inferior, even in applied statistics.

My original claim is broader than I wanted it to be. The fact is, a Frequentist approach will always be less accurate than the correct application of probability theory. But of course,

> Bayesians know that using probability theory correctly is sometimes intractable (combinatorial explosion and all that). In those cases, they will use approximations. But at least, they will know it's an approximation.

https://news.ycombinator.com/item?id=6793905

The key to the Bayesian outlook is to remember that no matter what, there is a correct answer, even if you can't afford to compute it. As Eliezer Yudkowsky put it, there are laws of thought. Want to use Frequentist tools? Sure, why not. Just remember that they often violate the laws of ideal thought. Some inaccuracy inevitably ensues.


> The distribution of these probabilities is substantially lower than 273.

There's no evidence for this; you can't just point to "there are a lot of numbers and we only saw 273 of them" to justify it. The distribution is exactly what we're trying to determine; to me this sounds like a good example of where Bayesianism breaks down.


>"The problem I have with the Bayesian definition of probability is that it isn't a definition, is it?"

You'll have to clarify what exactly your problem with the concept is here.

>"So what does the 1 in 1/6 actually mean? And the 6?"

It means there are six possible outcomes, one of which is of special interest, and you have no reason to think any one outcome is more likely than another.

>"What does it mean that someone is 1/6 certain of something?"

You would take any bet on it with better than 6 to 1 odds. If you have to decide between a 1/6 opportunity and a 1/60th opportunity for the same reward you should choose the first. You can use such probabilities to perform a cost-benefit analysis amongst various possible actions with various costs and rewards.
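The betting interpretation above can be made concrete with a toy expected-value calculation (the dollar figures here are made up for illustration):

```python
def expected_value(p_win, payout, cost):
    """Expected profit of a bet: receive payout with probability p_win,
    pay cost regardless of outcome."""
    return p_win * payout - cost

# Same $1 reward, same $0.10 stake: the 1/6 opportunity beats the 1/60 one.
ev_a = expected_value(1/6, 1.00, 0.10)   # positive
ev_b = expected_value(1/60, 1.00, 0.10)  # negative
print(ev_a > ev_b)  # True

# "Better than 6-to-1 odds": stake $1 for a $7 total return on a 1/6 event.
# (Fair odds for p = 1/6 would be 5-to-1, so this bet has positive EV.)
print(expected_value(1/6, 7.00, 1.00) > 0)  # True
```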

>"How does one measure certainty?"

I assume you are using "certainty" as a synonym for "probability", in which case one way is to ask people to bet.

>"The more I think about it the more I'm convinced Bayesian probabilities are a flawed concept."

Ok, but your objections here don't seem to have much thought/experience behind them.


>The math behind Bayesian methods is pretty solid and indisputable. The main controversy is how to specify priors.

But as the Bayesians always point out, specifying which sampling procedures, test statistics, and inferences to run under frequentist statistics also amounts to inserting subjective judgements into your statistical analyses, soooo... you might as well just pick the one that reviewers in your field like, or that works for your problem.


>Priors in Bayesian statistics are not randomly guessed.

And frequentism (if there is such a thing) does not prohibit using base rates in their analysis. The article is setting up false dichotomies.

Also, this in the article is a "random guess":

>Maybe we’re not so dogmatic as to rule out “The Thinker” hypothesis altogether, but a prior probability of 1 in 1,000, somewhere between the chance of being dealt a full house and four-of-a-kind in a poker hand, could be around the right order of magnitude.


> When sample size grows, frequentist and bayesian [...] estimates seem to converge to each other anyway

Yes. And so? Bayesians would argue (and I quote) that "the interesting limit in statistics is when the number of samples tends to one. The limit when the number of samples tends to infinity is completely useless."

> I tried getting into Bayesian stats but honestly it just seems overkill for most cases.

There are 3 black balls and 7 white balls in an opaque bag. How likely is it to pick a black ball? Bayesian statistics gives a straightforward answer (you just assume an uninformative prior and perform a computation). But frequentist statistics starts to argue about an infinite number of replicas of your own universe and other nonsensical constructions. Not sure that the Bayesian approach is overkill in that case...
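For what it's worth, the direct answer and the long-run frequency agree here anyway. A quick sketch (the number of simulated draws is arbitrary):

```python
import random

BAG = ["black"] * 3 + ["white"] * 7

# Direct answer: 3 favourable outcomes out of 10 equally likely ones.
p_direct = BAG.count("black") / len(BAG)  # 0.3

# Frequentist-style check: repeat the draw many times and count.
random.seed(0)
draws = [random.choice(BAG) for _ in range(100_000)]
p_empirical = draws.count("black") / len(draws)

print(p_direct, round(p_empirical, 2))  # 0.3, and roughly 0.3
```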


> no kind of statistical reasoning makes any sense

Probabilities are useful when you don't have perfect knowledge. But yes, they are an illusion. In truth, every event has a 100% chance of happening. And things that didn't happen aren't real events, they just seemed to be possible, but we were wrong.


> Oh please. You can do plenty of pseudoscience and superstition with good old frequentist statistics.

That's not really the point. The article is simply saying that Bayesian methods are not a silver bullet; it's not saying that other methods of statistics are free from problems.


> If clinical trialists use p-values wrong, how is moving to Bayesian methods going to be less misused and misunderstood?

Bayesian methods have the advantage that, while they can still be misused and misunderstood, at least when used correctly they tell us what we want to know and are easy to interpret (a posterior probability is exactly what you think it is) whereas p-values are hard to interpret correctly (a low p-value can imply a large effect size, a large sample size, or in the face of publication bias, lord knows what).
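To illustrate the interpretation problem: a small p-value alone can't distinguish a large effect from a large sample. A rough sketch using a one-sample z-test on simulated data (the effect sizes and sample sizes are made up):

```python
import math
import random
import statistics

def z_test_p(sample, mu0=0.0):
    """Two-sided one-sample z-test p-value (normal approximation)."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

random.seed(1)
tiny_effect = [random.gauss(0.02, 1.0) for _ in range(200_000)]  # negligible effect, huge n
big_effect = [random.gauss(1.0, 1.0) for _ in range(20)]         # large effect, small n

print(z_test_p(tiny_effect))  # small p despite a trivial effect size
print(z_test_p(big_effect))   # small p from a genuinely large effect
```

Both tests reject, for completely different reasons; the p-value alone doesn't tell you which situation you're in.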

That said, I do think that the advantage to Bayesian thinking in research would mostly not be about the methods but about the attitude: aside from a couple of weirdos pushing Bayes factors, Bayesian statisticians almost universally communicate results using credible intervals (HPD) instead of dichotomizing the evidence into significant or not significant. Frequentist confidence intervals will get you most of the way there and could completely replace p-values, but if you're going to advocate for better statistics and uproot established practices, might as well go all the way and encourage better methods and better ways of communicating results at the same time.


> I don't think there's a single thing that I'm justified in saying is true with 100% probability.

Not even simple deductive proofs? 2+2=4 by definition. There's no probability for it being wrong. We couldn't do math if that wasn't the case. It's not even correct to assign those kinds of statements a probability. Now there is a possibility for humans to make mistakes with complicated proofs, as you pointed out. But the correct proof doesn't have a probability, only the belief in it being correct.

For empirical matters, I'm not sure what sort of probability you could assign to some skeptical scenario like it's all a dream, or nobody exists outside your mind, or we live inside a simulation. We take it for granted that there's a world with other people.

Sure, you can be fooled into thinking a black ball was pulled out of an urn of white balls, but whether the urn has all white balls is just a fact, not a probability. And under proper conditions, we should be able to verify that fact. To doubt that is to entertain wild skeptical scenarios where we can't really do science.

At any rate, I don't see that sort of reasoning as Bayesian. It's just radical skepticism.


> Interestingly, when I see Bayesians simulate random data (to introduce the concepts on this data) they usually assume a true parameter value. E.g. when sampling from Y = a + b * X + e, they'll assume fixed, true values of a and b and not random variables - which is a frequentist assumption! So far I've never seen e.g. b being sampled from Normal(mu=2, sigma=1) instead of just setting b=2.

The Bayesian philosophy of "random parameters" does not mean that Bayesian methods cannot be assessed for frequentist properties or compared against frequentist procedures.
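The distinction the parent describes is easy to see in code. A sketch of both simulation styles for Y = a + b*X + e (the parameter values and prior are assumed for illustration):

```python
import random

random.seed(42)

def simulate_dataset(n, a=1.0, b=2.0, sigma=0.5):
    """One dataset from Y = a + b*X + e, with e ~ Normal(0, sigma)."""
    xs = [random.uniform(0, 1) for _ in range(n)]
    return [(x, a + b * x + random.gauss(0, sigma)) for x in xs]

# Frequentist-style simulation: the "true" b is fixed at 2 in every replication.
fixed = [simulate_dataset(50, b=2.0) for _ in range(100)]

# Fully Bayesian-style simulation: draw b from its prior Normal(mu=2, sigma=1)
# for each replication, so every dataset comes from a different parameter value.
from_prior = [simulate_dataset(50, b=random.gauss(2.0, 1.0)) for _ in range(100)]
```

The first style is exactly what's needed to study frequentist properties (bias, coverage) of a Bayesian procedure, which is why it shows up even in Bayesian tutorials.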


> By your logic, every proposition is either true or false, and therefore the concept of probability is useless.

That would be a nonsensical interpretation of my logic. In cases where it is possible to conceive of an experiment being repeated in the future, a "degree of belief" interpretation is appealing. If the answer were going to be revealed objectively and someone were taking bets, sure, fine, I'll go along with that. But, don't make an appeal to the Dutch book argument in a unique one-off case where no one has anything at stake other than free publicity which they have chosen to pursue by counting some stuff and dividing those counts by some numbers and stuff.


> Frequentist methods are not dependent on the researcher's state of mind, but on what actions he would have taken if things turned out differently.

That's exactly what I had in mind. It amounts to the same, really.

Granted, the researcher's decision process is bloody important: it determines what the researcher will be doing. But once you know what has been done, that decision process has no further influence over the experimental results; only the actual experiment does.

Of course, this all assumes we know what the priors were in the first place. Since one's priors tend to influence one's decision process, one may choose to ignore one's own priors and deduce them back from one's decision process. In that narrow sense, different decision processes do yield different conclusions. Going this route amounts to ignoring relevant information, however, and that would be stupid.

> Frequentism isn't stupid, but you do need to be really smart to use them correctly.

I believe the "you have to be smart" argument has also been used to attack Bayesian methods on practical grounds. It's a valid attack either way, but a bit weak for my taste.

When you apply probability theory, you have to make a logical error to draw the wrong conclusion. The only problem left is prior beliefs, and I don't believe we can escape the need for them.

I'm not sure mucking Frequentist methods up requires a logical error. If it indeed doesn't, then we can safely declare Frequentism "unsound" and move on, don't you think?

> [VWO slides]

Hmm, so Bayesian methods are easier to explain… interesting, thanks for the link.


>I guess a hard-line frequentist (if such a person exists) would counter that you can't assign probabilities to hypotheses or fixed parameters. Then Bayes's theorem (and every other statement about probability) is true only when applied to statements about how often a certain event will occur.

If you model "thinking" and "believing" as sampling in probabilistic programs (which they do in some schools of cognitive science), then Bayes' Theorem becomes a theorem about how often certain execution traces occur when the sampling program is run with fresh randomness. You then need none of the weird metaphysics associated with "subjective Bayesianism".
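That reading can be made concrete with rejection sampling: condition on an observation by keeping only the execution traces that match it, and the posterior falls out as a trace frequency. A sketch with a made-up two-coin model:

```python
import random

random.seed(0)

def run_model():
    """One execution trace: pick a coin at random, then flip it."""
    coin = random.choice(["fair", "biased"])             # prior: 50/50
    heads = random.random() < (0.5 if coin == "fair" else 0.9)
    return coin, heads

# Condition on the observation "heads" by keeping only matching traces.
kept = [coin for coin, heads in (run_model() for _ in range(200_000)) if heads]
posterior_biased = kept.count("biased") / len(kept)

# Bayes' theorem gives P(biased | heads) = 0.45 / (0.25 + 0.45) ~ 0.643;
# the trace frequency should land close to that.
print(round(posterior_biased, 2))
```

No subjective metaphysics needed: the posterior is literally how often "biased" traces occur among runs that produced heads.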


> But this is just frequentist inference in Bayesian clothing.

Or maybe this is "just" frequentist inference done right - on a Bayesian foundation.

