I think you have the blame reversed. These problems were caused by statisticians who failed to understand there is always a reason to "adjust" the data. They taught people to apply methods that were inapplicable to the real-life research conditions they would face.
Interesting, that's the chapter that caused me to finally put the book down. The gist of it was: people not thoroughly trained in statistics aren't good at it.
This is hardly news: even highly intelligent people struggle with the Monty Hall problem, for instance.
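The Monty Hall intuition gap is easy to settle empirically. Here's a minimal simulation sketch (function name and trial count are my own choices, not from the thread) showing why switching wins about twice as often as staying:

```python
import random

def monty_hall(trials=100_000, switch=True, seed=0):
    """Simulate the Monty Hall game; return the empirical win rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Running it with `switch=True` gives a win rate near 2/3, versus near 1/3 for staying, which is exactly the result most people's intuition rejects.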
Statistics is what would answer that question; it's not a who, it's math. I wasn't claiming to have those statistics. I was trying to discuss what you thought the math was and why.
Absolutely. The problem with statistics is not the maths but what it means. I think this is one reason people so easily come unstuck with statistics: carelessly applying techniques to data without really understanding the assumptions, or the subtlety of the question the technique is actually answering.
Some of the problems come from statistics itself. “All models are wrong, but some are damn hard to interpret.” Let’s make that a corollary to Box’s statement about the utility of models. Why does so much statistical analysis take place without even an expectation of human comprehension? It’s just magical sigils for most papers.
Many mid-level statistical practitioners suffer from a holier-than-thou complex, where a “correct” approach to statistical analysis might buy a little more precision at the expense of a lot of comprehension.
Box plots or bar charts with error bars, using randomized data collection. That’s like 90% of the interpretive value right there. Statistics is a UI for math, and it could use improvement if we expect so much from it.
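To make the "error bars" point concrete, here is a small sketch (the helper name and the simulated groups are my own illustration, not from the thread) of the two numbers behind each bar: a group mean and its standard error.

```python
import random
import statistics

def mean_and_se(samples):
    """Mean and standard error: the two numbers behind an error bar."""
    m = statistics.fmean(samples)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return m, se

# Hypothetical randomized data: two groups drawn from nearby normals.
rng = random.Random(42)
control = [rng.gauss(10.0, 2.0) for _ in range(100)]
treated = [rng.gauss(11.0, 2.0) for _ in range(100)]

# Each group becomes one bar: height = mean, whisker = +/- 1 SE.
for name, data in [("control", control), ("treated", treated)]:
    m, se = mean_and_se(data)
    print(f"{name}: {m:.2f} +/- {se:.2f}")
```

If the two intervals are well separated, a reader gets most of the interpretive payoff of a formal test without any opaque machinery.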
See Bret Victor’s “Kill Math” for more context on why we should expect more from our mathematical interfaces.
http://worrydream.com/KillMath/