Election polls are almost always right about the one ultimate answer that matters: who is going to win[0]. They also accurately predict the margin of victory (same link). Polls in general measure tendencies or preferences, which they also tend to capture accurately. So saying they are always wrong doesn't really make sense; they aren't trying to be right in an absolute sense. The error is with the reader who misconstrues what the poll is measuring and how it is doing so.
This is exactly why you should never blindly trust a single poll.
We don't know who is right or wrong until after the election, so it is better to assume that a range of different polls, taken at different times with different methodologies, is likely to be more accurate than any single one.
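A minimal sketch of why that helps (hypothetical numbers, just to show the intuition): averaging polls with independent errors tightens the estimate, but an error shared by every poll doesn't average away.

    import numpy as np

    rng = np.random.default_rng(0)
    true_support = 0.50          # hypothetical "true" vote share
    n_polls, n_trials = 10, 10_000

    # Each poll: true value plus its own sampling noise (~3 pts at 1 s.d.)
    independent = true_support + rng.normal(0, 0.03, (n_trials, n_polls))
    # Same, but every poll in a trial also shares one systemic bias
    shared_bias = rng.normal(0, 0.02, (n_trials, 1))
    biased = independent + shared_bias

    print(np.std(independent.mean(axis=1)))  # ~0.03/sqrt(10), about 0.009
    print(np.std(biased.mean(axis=1)))       # ~0.022 -- the shared bias remains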
> When and where was the last poll that predicted an election accurately?
Polls don't purport to predict elections, though people do build poll-based forecasts on top of them.
538, for instance, publishes a lot of them every cycle, and seems to be fairly close on the odds, though the most likely predicted outcome generally occurs at a slightly higher rate than their stated probabilities.
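Calibration here just means: of all the outcomes a forecaster gave, say, a 70% chance to, roughly 70% should actually happen. A toy check of that (made-up forecasts, not 538's actual record) might look like:

    import numpy as np

    # Hypothetical forecasts: stated win probability, and whether the favorite won
    probs = np.array([0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 0.97])
    favorite_won = np.array([1, 0, 1, 1, 1, 1, 1, 1, 1, 1])

    # Bin by stated probability and compare to the observed win rate
    for lo, hi in [(0.5, 0.7), (0.7, 0.9), (0.9, 1.0)]:
        mask = (probs >= lo) & (probs < hi)
        print(f"{lo:.1f}-{hi:.1f}: forecast {probs[mask].mean():.2f}, "
              f"observed {favorite_won[mask].mean():.2f}")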
You're arguing against a straw man. I never said that polls are always wrong. My political desires don't hinge on the current polls being wrong.
I'm simply asking whether the pollsters have corrected their methodologies. Something was clearly wrong in 2016, when they repeatedly failed to predict the GOP primaries and then missed swing states in the general election, and even some states that weren't considered swing states.
You're right that I haven't read the analysis that you say exists. That's why I'm asking for it. Which pollsters have done the reweighting on their 2016 data and shown that they then align with reality?
I mean, if you're looking for sub-1% accuracy, you're not going to get it, but polls are usually accurate to within a few percent. In the US, the last couple of elections were quite close, so fairly small misses had a big impact, at least in 2016; in 2020 the only state that was clearly a wrong call based on the polls was Florida (and arguably Georgia).
Notably, polls were generally fairly accurate for the US 2018 midterms.
In general, polls work better in electoral systems where national polls are useful; the US presidential election in particular is challenging.
I'm sure there are mistakes and errors during any election; in the end they probably amount to a non-factor. Wondering about the validity of the election results is a fair question, but if you are suggesting that maybe the polls were correct and the voting results were wrong, I find that nearly impossible.
There are so many ways a pollster could go wrong: sample size, push polling, shy voters, poorly worded questions, poorly trained survey collectors, and on and on.
Why isn't anyone considering that, within the margin of error, the polls were correct? Isn't the confidence interval +/- a few percentage points anyway?
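For reference, the textbook 95% margin of error for a simple random sample is roughly z*sqrt(p(1-p)/n), which at a typical n of ~1,000 works out to about +/-3 points (this ignores design effects and any systemic error):

    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        """95% sampling margin of error for a simple random sample."""
        return z * sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(1000):.3f}")  # ~0.031, i.e. about +/-3 points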
> The +/- in a given poll is measurement error derived from the sample size. They don’t account for systemic error.
Sure, but even the worst poll-based prediction outfit isn't just aggregating poll results and using the uncertainty based on sample size to determine the probabilities of different outcomes.
Pretty much all of them use some model derived from the historical relationship between polls and election results, which will, to a greater or lesser extent, capture systemic polling error that isn't unique to the current cycle.
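A rough sketch of what that looks like (the numbers below are placeholders, not any specific outfit's actual inputs): simulate outcomes by adding a systemic error term, with a spread taken from historical poll-vs-result misses, on top of the remaining sampling error.

    import numpy as np

    rng = np.random.default_rng(1)

    polling_average = 0.52   # hypothetical two-party share for candidate A
    sampling_sd     = 0.01   # uncertainty left after averaging many polls
    systemic_sd     = 0.025  # spread of historical poll-vs-result misses

    sims = polling_average + rng.normal(0, sampling_sd, 100_000) \
                           + rng.normal(0, systemic_sd, 100_000)
    print(f"P(candidate A wins) ~= {(sims > 0.5).mean():.2f}")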
The correct way to present polls is as an aggregate of many different polls, the way Real Clear Politics or 538 do. Yes, all the polls were off across the board, but an individual poll has a higher likelihood of being wrong than a bunch of polls combined.
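One common way to combine them (a sketch, not RCP's or 538's actual weighting scheme) is to weight each poll by the inverse of its sampling variance, so larger polls count for more:

    # (candidate share, sample size) for a few hypothetical polls
    polls = [(0.48, 800), (0.51, 1200), (0.50, 600), (0.49, 1500)]

    # Inverse-variance weights: var ~ p(1-p)/n, so weight ~ n (roughly)
    weights = [n for _, n in polls]
    average = sum(share * w for (share, _), w in zip(polls, weights)) / sum(weights)
    print(f"weighted average: {average:.3f}")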
> That's why valid, trusted polls calculate a confidence interval instead of a single discrete result.
That is what each of these statistical models did, yes. And the actual outcomes fell within those confidence intervals.
> Other types of "errors" -- election results that repeatedly fall outside the confidence interval, or are consistently on only one side of the mean -- only arise when the poll is flawed for some reason.
Or the model was inaccurate. Perhaps the priors were too specific; perhaps data was missing, misrecorded, or not tabulated properly; who knows. Again, the results fell within the CI of most models; the problem was simply that the result fell too far from the mean for most statisticians' comfort.
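One way to see "inside the interval, but uncomfortably far out" concretely (toy numbers again, not any real model's output): simulate a model's outcome distribution and look at where the actual result landed as a percentile.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy model: predicted two-party share with sampling + systemic uncertainty
    simulated = rng.normal(0.52, 0.027, 100_000)
    actual = 0.485                            # hypothetical actual result

    lo, hi = np.percentile(simulated, [2.5, 97.5])
    pct = (simulated < actual).mean() * 100
    print(f"95% interval: {lo:.3f}-{hi:.3f}; actual at ~{pct:.0f}th percentile")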
[0] https://en.m.wikipedia.org/wiki/Historical_polling_for_Unite...
Edit: slight elaboration.