
I wouldn't call that a danger. I would call that being human. We all have models of the world that deviate in some ways from one another; otherwise we would all have identical minds, identical environments and origins, identical heritages, identical ways of interpreting information, and identical ways of composing information into new creations.

Making an erroneous assumption or a mistake can mean that a particular sentiment is expressed "incorrectly", but that doesn't mean its existence is useless. It may prove to be very useful eventually.

The idea that mental models can be incorrect by being compared to other mental models is a strange concept to me. It requires assumptions that cannot be proven in their entirety.




Generally, every model is wrong about something. That's the core difference between models and the things we model. Every description is an approximation of some sort.

However, some things are tautological. Some things may be necessary in reality, even if they are not in any model. (I wake up each morning because I have no alternative, not because it is the 'right thing to do'.) Some things are very safe to assume because our human cultural context or our biological environment is stable over multiple generations of activity. And some things are true simply because assuming otherwise contradicts the assumptions needed for language to work effectively. And sometimes we do prove things, even if we lack any meaningful ability to prove what we'd like to the degree that we'd like.

The point is, we're not always wrong in a linguistic sense. Any time you have a statement that is perfectly symmetrical and valid in every context, you do not have language which can be used to make decisions. It's only meaningful to be wrong if there are things you can be right about.


To quote you above:

> if our models are correct

What is this whole discussion about?

You're arguing a stance that assumes the model is correct in a discussion about the risks of assuming the model is correct.

But trust me - our model is not correct.


> All models are wrong, but some are useful.

A slogan usually trotted out in defence of bad modelling.

Some models are useful. Some are dangerous. Do you have the means to tell the difference?


>As the saying goes: all models are wrong; some are useful.

models being wrong/right/useful/etc. is just a model of a relation between a model and [an aspect] of reality [or whatever else] the model is supposedly modeling.


I suppose it's an example of the age-old adage: all models are wrong, but some models are useful. Our models of ourselves, derived from introspection and observations others make, are wrong, but if we're honest with ourselves, they'll be a useful reflection of reality.

> As the saying goes: all models are wrong; some are useful.

It's important to also recognize that everything we believe about the world except raw sensory perception is a model and, ipso facto, wrong, though perhaps useful.


"All models are wrong, but some are more useful than others"

I've come to believe this goes for any verbal / written statements about reality.


I realize all that. I'm questioning the wisdom of the standard manoeuvre of preemptively calling all models imperfect or wrong, though.

Yes, we do that. But we also have learned, the hard way, to have some humility with that knowledge. Something may disagree with our mental model, but we're careful not to say something is impossible because our mental model may be wrong. (In fact, it is wrong, since we're dealing with a bug.)

Given how new models of thought threaten and destroy old models of thought, this might not work out as cordially as you imagine.

There is the chance the computational model is wrong.

Yeah, but then «All models are wrong, some are useful» already presupposes that you are aware that models are unavoidable?

The idea is that that mental model isn't a very useful or error-proof one. Saying "well, that's the mental model" doesn't evade criticism of the mental model.

I disagree with the need for a warning. I think it would be better if this were _more_ commonly used.

So often debates arrive at a stasis like:

> "You're wrong"

> "No YOU'RE wrong"

And there they sit, each side certain that the other is an idiot.

The alternative is to admit that both parties are right according to their model, and that both models are wrong (because being right is not what models are for). I think this is better because the "which model is more useful" question sets up a lot more potentially fruitful interaction between opposite sides.

The danger you're referring to only occurs in a setting where science is implicitly authoritative in the first place. If we drop that assumption, science still produces the most useful models, but finding the most useful one for your project becomes less adversarial.


> there's likely a mental model in your head

Yes, and unless there was a simple typo, the problem is the model in your head. Most bugs come from unchallenged assumptions.
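A hypothetical illustration of that point (the function and data here are my own, not from the comment): a binary search that silently assumes its input is sorted behaves correctly only while that unexamined assumption happens to hold.

    import bisect

    def contains(sorted_values, target):
        """Binary search; silently assumes sorted_values is sorted."""
        i = bisect.bisect_left(sorted_values, target)
        return i < len(sorted_values) and sorted_values[i] == target

    print(contains([1, 3, 5, 7], 3))   # True: the mental model holds
    print(contains([7, 1, 5, 3], 3))   # False, though 3 is present: the model is wrong

There is no typo anywhere in that code; the defect lives entirely in the unchallenged assumption about the input.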


Could it be the models haven't changed, but, because of their probabilistic nature, they trigger all sorts of human biases that make us draw incorrect conclusions about their behavior?

So you’re saying we can’t be sure about our models but we can be sure that they’re wrong in a certain direction.

> Does it matter that two bricks are not "the same piece of matter" if our predictions work the same for both of them

No, because of a dense, interconnected web of other social truths (the rest of the arithmetic model), the relative error of this one truth/model is negligible.

However, mistaking your model for reality is a fallacy perhaps older than time.


Perhaps all that says is that your mental models about how the world works are inaccurate.
