I wouldn't call that a danger. I would call that being human. We all have models of the world that deviate in some ways from one another, otherwise we would all have identical minds, identical environments and origins, identical heritages, identical ways of interpreting information, and identical ways of composing information into new creations.
Making an erroneous assumption or a mistake can mean that a particular sentiment is expressed "incorrectly", but that doesn't mean its existence is useless. It may prove to be very useful eventually.
The idea that mental models can be incorrect by being compared to other mental models is a strange concept to me. It requires assumptions that cannot be proven in their entirety.
Generally, every model is wrong about something. That's the core difference between models and the things we model. Every description is an approximation of some sort.
However, some things are tautological. Some things may be necessary in reality, even if they are not in any model. (I wake up each morning because I have no alternative, not because it is the 'right thing to do'.) Some things are very safe to assume because our human cultural context or our biological environment is stable over multiple generations of activity. And some things are true simply because assuming otherwise contradicts the assumptions needed for language to work effectively. And sometimes we do prove things, even if we lack any meaningful ability to prove what we'd like to the degree that we'd like.
The point is, we're not always wrong in a linguistic sense. Any time you have a statement that is perfectly symmetrical and valid in every context, you do not have language which can be used to make decisions. It's only meaningful to be wrong if there are things you can be right about.
>As the saying goes: all models are wrong; some are useful.
models being wrong/right/useful/etc. is just a model of a relation between a model and [an aspect] of reality [or whatever else] the model is supposedly modeling.
I suppose it's an example of the age-old adage: all models are wrong, but some models are useful. Our models of ourselves, derived from introspection and from observations others make, are wrong, but if we're honest with ourselves, they'll be a useful reflection of reality.
> As the saying goes: all models are wrong; some are useful.
It's important to also recognize that everything we believe about the world except raw sensory perception is a model and, ipso facto, wrong, though perhaps useful.
Yes, we do that. But we also have learned, the hard way, to have some humility with that knowledge. Something may disagree with our mental model, but we're careful not to say something is impossible because our mental model may be wrong. (In fact, it is wrong, since we're dealing with a bug.)
The idea is that that mental model isn't a very useful or error-proof one. Saying "well, that's the mental model" doesn't evade criticism of the mental model.
I disagree with the need for a warning. I think it would be better if this were _more_ commonly used.
So often debates arrive at a stasis like:
> "You're wrong"
> "No YOU'RE wrong"
And there they sit, each side certain that the other is an idiot.
The alternative is to admit that both parties are right according to their model, and that both models are wrong (because being right is not what models are for). I think this is better because the "which model is more useful" question sets up a lot more potentially fruitful interaction between opposite sides.
The danger you're referring to only occurs in a setting where science is implicitly authoritative in the first place. If we drop that assumption, science still produces the most useful models, but finding the most useful one for your project becomes less adversarial.
Could it be that the models haven't changed, but that, because of their probabilistic nature, they trigger all sorts of human biases that make us draw incorrect conclusions about their behavior?
> Does it matter that two bricks are not "the same piece of matter" if our predictions work the same for both of them?
No, because of a dense, interconnected web of other social truths (the rest of the arithmetic model), the relative error of this one truth/model is negligible.
However, confusing your model for reality is a fallacy perhaps older than time.