I think that is really powerful: the ability to think in a Bayesian way, to not label something as one thing or the other but to accept fully that, as things stand now, something can be 70% A and 30% B. This doesn't even have to collide with objectivist views: the thing is in fact either A or B, but based on all the information you have, you can only be 70% sure it is A.
New information may well lead to a new 20% A, 70% B, 10% something else. I tend to judge people by their ability to do this. Never enter a discussion unwilling to change your mind is one of my mantras. I'm also fully aware that I break it often, by the way.
I have often seen two experimental situations with highly overlapping histograms for one parameter, and then had a colleague ask: "So where do we draw the line for positive/negative?" I always say: "We don't. We call the sample that is exactly in the middle 50% positive, 50% negative, and we have said everything we can about said sample given the one-dimensional information we have."
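A minimal sketch of what I mean (made-up numbers): instead of drawing a positive/negative cutoff, report P(positive | x) for two overlapping Gaussian populations.

    # Instead of a cutoff, report the posterior class probability for a sample.
    # The means, spread and 50/50 prior here are invented for illustration.
    from scipy.stats import norm

    def p_positive(x, mu_pos=1.0, mu_neg=0.0, sd=1.0, prior_pos=0.5):
        w_pos = prior_pos * norm.pdf(x, loc=mu_pos, scale=sd)
        w_neg = (1 - prior_pos) * norm.pdf(x, loc=mu_neg, scale=sd)
        return w_pos / (w_pos + w_neg)

    print(p_positive(0.5))  # the sample exactly in the middle -> 0.5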
What's cool for the Dune fans: this seems to me to be exactly what Bene Gesserit witches and mentats do. Ingest proofs, take tiny, tiny hints, combine them in a Bayesian manner, produce a likelihood of truth for a certain hypothesis... Perhaps I'm over-interpreting (and projecting) ;)
> but what I don't understand is why aren't more brilliant people even a little bit conflicted about this?
How do you know they're not? Unless you've had personal, one-on-one conversations with the people you're referring to, are you really confident that you understand the most nuanced version of their position? I mean, most people, in published statements, interviews, etc., are probably not going to talk very much about the scenario where they have a < 1% subjective Bayesian prior. But that doesn't mean they don't still have that inner reservation.
> It seems if you even hint you have some doubts and say well, maybe 1% chance Eliezer is right you are told NO, it's not even 1%. It's 0%!
Maybe it's just that we run in different circles, or maybe it's a matter of interpretation, but I don't feel like I've seen that. Or at least not on any wide scale.
Once you figure out simple examples you slowly start thinking this way about the world. It's beautiful. Take people for example:
Someone with an open mind has a prior that assigns at least slight probability to hypotheses that are unlikely (for them!). Very religious people, on the other hand, have a zero in their prior for the possibility that their religion is made up, so they are forced to ignore evidence to the contrary (because Bayesian updating breaks for them due to division by zero, and the mind's way of signalling this exception is denial).
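A toy sketch of that point (made-up likelihoods): a hypothesis that gets prior probability 0 can never recover, no matter how strong the evidence.

    # Single Bayes update; numbers are invented for illustration.
    def update(prior, p_evidence_if_true, p_evidence_if_false):
        evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
        return prior * p_evidence_if_true / evidence

    print(update(0.01, 0.99, 0.01))  # small but nonzero prior -> 0.5
    print(update(0.00, 0.99, 0.01))  # zero prior stays exactly 0.0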
In general, someone with a lot of weight on a given hypothesis is "stubborn" or just very convinced, while someone with a uniform (or nearly uniform) distribution just doesn't know anything about the given problem.
Someone unable to build heavily weighted distributions is a conspiracy theorist, someone reluctant to is a sceptic, and someone too eager to is a fanatic. Someone with very bad priors is uneducated or badly educated (in the given domain), or biased, or maybe just stupid; someone with good priors is an expert.
It's possible to combine the expert with the sceptic attitude, or the expert with the fanatic, or, all too often, the stupid with the fanatic (very bad and very heavily weighted priors, with possible zeros on some options).
Once you start thinking this way you start expressing yourself differently; you start adding probability qualifiers to your sentences: "I am very sure it's the way to go", "My intuition tells me this but I am not really sure", "I am very convinced and it's not worth discussing" (yes, that can be a rational and good attitude) or "I would do X but I need more evidence to be reasonably sure".
It's all there in people's minds, language and interactions. Once you start thinking this way, it's a whole new world of perspective and understanding.
"given this measurement and our prior beliefs, the probability of page A being better than page B is X%"
FTFY ;). I think Bayesian methods add a lot of interpretive power, but I'm not sure they would help people make a correct interpretation. I suspect that if practitioners are neglecting the difference between a one-sided and a two-sided test, they will likely forget (or gloss over) what priors are (and their non-trivial implementation).
I definitely agree that there is a disconnect between the math and its interpretation, though.
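For what it's worth, here's a rough sketch of how such a "probability that page A is better than page B" statement could be computed, assuming Beta(1,1) priors and made-up conversion counts; this is one possible implementation, not necessarily what the article has in mind.

    # Sample both Beta posteriors and compare; priors and data are assumptions.
    import numpy as np

    a_conv, a_n = 120, 1000   # hypothetical data for page A
    b_conv, b_n = 100, 1000   # hypothetical data for page B

    rng = np.random.default_rng(0)
    a = rng.beta(1 + a_conv, 1 + a_n - a_conv, 100_000)
    b = rng.beta(1 + b_conv, 1 + b_n - b_conv, 100_000)
    print((a > b).mean())  # estimate of P(page A's rate > page B's rate)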
> Probabilities are meaningless unless it’s a repeatable experiment otherwise its a ludic fallacy
Um... this goes against the entirety of the Bayesian approach to statistics. I think you'd find a lot of very intelligent people who disagree strongly with this statement.
The Bayesian approach takes probabilities as subjective confidences. You can describe confidences as "well calibrated" if, when you look at their historical guesses, their 70% assessments are correct 70% of the time.
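A rough sketch of how such a calibration check could be scripted, with invented forecasts:

    # Group (stated probability, was it right?) pairs and compare the stated
    # confidence in each bucket to the actual hit rate. Data is made up.
    from collections import defaultdict

    forecasts = [(0.7, True), (0.7, True), (0.7, False),
                 (0.9, True), (0.9, True)]

    buckets = defaultdict(list)
    for p, correct in forecasts:
        buckets[p].append(correct)

    for p, outcomes in sorted(buckets.items()):
        print(f"said {p:.0%}, right {sum(outcomes) / len(outcomes):.0%} of the time")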
> The mind doesn’t always work like Bayes Theorem prescribes. There’s lots of things Bayes can’t explain. Don’t try to fit everything you see into Bayes rule. But wherever beliefs are concerned, it’s a good model to use. Find the best beliefs and calibrate them!
The one concern I'm really interested by, and don't understand, is this:
> There are thus lots of reasons to believe that we do not think and should not think in a Bayesian way.
Can you give an example that can serve as motivation to read the entire paper linked?
I am not so sure about Bayes. Bayesianism is certainly a good tool for making rational decisions, but human intelligence usually does not look like rational decision making to me.
Sometimes I wonder if our conscious mind is just a puppet and rationalization mechanism for our unconscious decision making process. This makes conscious decision making a farce.
". I think you'd find a lot of very intelligent people who disagree strongly with this statement. "
I never claimed to be intelligent.
" You can describe confidences as "well calibrated" if, when you look at their historical guesses, if their 70% assessments are correct 70% of the time."
- Again, if you're trying to figure out whether a coin is split 50/50 then yes, but without being able to repeat the same experiment you're fooling yourself, and the whole aspect of Bayesian thinking, I think, goes out the window. E.g. me being right about unrelated topics doesn't mean I'm right or wrong about a specific topic.
There is much ink spilled about how people aren't educated in Bayesian thinking, but I don't really believe it, and therefore don't agree with you.
We're actually pretty good at identifying uncertainty and certainty within the physical world. It's only when we get into very large or very small numbers that we start to lose the plot.
I think that Bayesian reasoning applies to a different category of knowledge; there are things for which we have no predictive theories that we have to reason about probabilistically, and Bayes is good for that. But thinking about how things actually are and probability seem a poor duo: after all, we can end up 99.999% sure that something is true, observe one counterexample and now know that it is 100% false... Bayes doesn't do that.
Cool article. My knowledge of statistics is really rusty, but isn't this another way of approaching the topic of "Bayesian thinking"? If you think about the scenarios in the article from the standpoint of predicting any given outcome in advance, male vs. female and hard department vs. easy department should be treated as "priors". Or to put it another way, Bayesian thinking means asking the question "What is the chance of X happening given Y?"
Which explains why a positive test on a mammogram means you only have about an 8% chance of having breast cancer:
>The chance of getting a real, positive result is .008. The chance of getting any type of positive result is the chance of a true positive plus the chance of a false positive (.008 + 0.09504 = .10304).
>So, our chance of cancer is .008/.10304 = 0.0776, or about 7.8%.
>Interesting — a positive mammogram only means you have a 7.8% chance of cancer, rather than 80% (the supposed accuracy of the test). It might seem strange at first but it makes sense: the test gives a false positive 9.6% of the time (quite high), so there will be many false positives in a given population. For a rare disease, most of the positive test results will be wrong.
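The quoted arithmetic, spelled out (the 1% base rate is implied by .008 = 1% × 80%; the 9.6% false positive rate is quoted directly):

    # Bayes' theorem with the numbers from the quote above.
    p_cancer = 0.01
    p_pos_given_cancer = 0.80
    p_pos_given_healthy = 0.096

    true_pos = p_cancer * p_pos_given_cancer            # 0.008
    false_pos = (1 - p_cancer) * p_pos_given_healthy    # 0.09504
    print(true_pos / (true_pos + false_pos))            # ~0.0776, about 7.8%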
The problem I leap to is that this article seems to suggest that our thinking process is Bayesian. That is unlikely.
We update our beliefs based on new information, but the whole point of Bayesianism via Bayes' Theorem is updating them by a very specific amount based on evidence strength. Nobody is even approximately Bayesian in their thought process, and I doubt most people can be trained to be. Statistics is a pen-and-paper exercise for the most part.
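To be concrete about what "a very specific amount" means, here is the odds form of the rule with invented numbers; I'm not claiming anyone computes this in their head.

    # Posterior odds = prior odds x likelihood ratio. Numbers are made up;
    # the point is that the size of the update is dictated exactly by the
    # strength of the evidence.
    def posterior_prob(prior_prob, likelihood_ratio):
        prior_odds = prior_prob / (1 - prior_prob)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    print(posterior_prob(0.10, 3.0))   # weak evidence:   10% -> 25%
    print(posterior_prob(0.10, 30.0))  # strong evidence: 10% -> ~77%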
In my experience, which is not negligible, the hardest part of statistics is talking people down from beliefs they have settled on because of something that looks like statistical evidence but in fact is not.
A good Bayesian should be able to make confident decisions based on information available at the moment...
No.
A good Bayesian should be able to come to decisions like, "I am 70% confident that Osama bin Laden is in that compound," while the Bayesian next to them says, "I am only 50% confident that Osama bin Laden is in that compound," with both knowing that there is a difference of opinion but no disagreement on basic facts or reasoning method.
It is very rare for a good Bayesian to be absolutely confident of any prediction. And if you are often so confident, you're probably not thinking very well. I mean that quite literally - the process of analyzing probabilities well requires being able to make the case both for what you think will happen, and what you think won't. Because only then can you start putting probabilities on the key assumptions.
For example, a leader can be absolutely confident that shelter-in-place is the best decision based on the available information, while acknowledging that there is missing information that would drastically change this assessment.
Really?
https://news.ycombinator.com/item?id=22750790 is a discussion that I was in recently about whether on a cost benefit analysis it is better to crash the economy by shutting things down, or to keep things open and let lots of people die.
The decision wasn't nearly as clear in the end as I would have expected it to be. (That all options are horrible was clear. But we knew that.)
But Bayes' theorem is not obvious to humans. The intuition is that P(A|B) = P(B|A); base rate fallacy, etc., etc.
That said, Bayesianism is not the same thing as Bayes' theorem. You can be a frequentist and still apply Bayes' theorem or the like. Bayes' theorem is straightforward once you train your mind on it.
Bayesianism is much more than that and is not a trivial issue. But you are also correct in that human philosophical intuition on probability is Bayesian, albeit with a very broken application. Few people who claim to be Bayesians are actually doing pure Bayesian probability.
Hand-wringing over priors and model structure is what Bayesianism is all about. Philosophically, the viewpoints of Bayesianism and how best to pick priors are interesting. Also interesting is how quantum mechanics fits neatly into the Bayesian perspective.
That's fair. A lot of what I read about "Bayesian reasoning" boils down to: Check your assumptions, and update your beliefs when faced with new evidence. But those habits have been with us for millennia, and I'm not arguing against them. My parents were educated in an earlier rationalist movement, and I'm deeply influenced by them.
Being able to think about probabilities is useful, if they're quantifiable. And conditional probability plays a useful role in that process.