Polling Probabilities and Political Punditry
After elections, people often note unexpected outcomes and then complain that “the polls got it wrong.”
After Donald Trump’s stunning 2016 presidential victory, the press gave us articles on “Why the Polls were such a Disaster,” on “4 Possible Reasons the Polls Got It So Wrong,” and on “Why the Polls Missed Their Mark.” Stupid pollsters. “Even a big poll only surveys 1500 people or so out of almost 130 million voters,” we may think, “so no wonder they can’t get it right.”
Moreover, consider the many pundits who, believing the polls, confidently predicted a Clinton victory. They were utterly wrong, leaving many folks shocked on election night (some elated, others depressed, with later flashbulb memories of when they realized Trump was winning).
So how could the polls, the pundits, and the prediction models have all been so wrong?
Or were they? First, we know that in a closely contested race, a representative sample of a mere 1,500 people from a population of 130 million will, surprisingly to many people, allow us to estimate the population’s preference to within about 3 percent.
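That claim is easy to check with the standard margin-of-error formula for a proportion from a simple random sample. A minimal sketch in Python (the formula is textbook statistics; the function name is mine):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion estimated from a simple
    random sample of size n (worst case p = 0.5 by default)."""
    return z * math.sqrt(p * (1 - p) / n)

# 1,500 respondents; note the 130-million population never enters the
# formula, because the finite-population correction is negligible here.
print(f"±{margin_of_error(1500):.1%}")  # ±2.5%, commonly rounded to ~3%
```

That the size of the electorate never appears in the formula is exactly why 1,500 respondents can stand in for 130 million voters.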
Sounds easy. But there’s a challenge: Most randomly contacted voters don’t respond when called. The New York Times “Upshot” recently let us view its polling in real time. This enabled us to see, for example, that it took 14,636 calls to Iowa’s fourth congressional district to produce 423 responses, among which Steve King led J. D. Scholten by 5 percent—slightly more than the 3.4 percent by which King won.
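Plugging the Upshot’s Iowa numbers into the same formula suggests how much noise a 423-person sample carries (the ±4.8-point figure below is my arithmetic, not the Times’):

```python
import math

calls, responses = 14_636, 423

moe = 1.96 * math.sqrt(0.25 / responses)  # worst-case p = 0.5
print(f"response rate:   {responses / calls:.1%}")  # ~2.9%
print(f"margin of error: ±{moe:.1%}")               # ~±4.8 points

# King's 5-point polling lead versus his 3.4-point win is a 1.6-point
# polling error, comfortably inside that margin.
```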
Pollsters know the likely demographic make-up of the electorate, and so can weight results from respondents of differing age, race, and gender to approximate the population. And that, despite the low response rate, allows them to do remarkably well—especially when we bear in mind that their final polls are taken ahead of the election (and cannot account for last-minute events, which may sway undecided voters). In 2016, the final polling average favored Hillary Clinton by 3.9 percent, with a 3 percent margin of error. On Election Day, she won the popular vote by 2.1 percent (and 2.9 million votes)—well within that margin of error.
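A toy version of that weighting step, with entirely made-up numbers (real pollsters weight on several variables at once, but the logic is the same):

```python
# Hypothetical: suppose younger voters are half the electorate but
# only a quarter of our respondents.
respondents = {          # group: (share of sample, support for candidate A)
    "18-44": (0.25, 0.60),
    "45+":   (0.75, 0.45),
}
electorate = {"18-44": 0.50, "45+": 0.50}  # known demographic make-up

raw = sum(share * support for share, support in respondents.values())
weighted = sum(electorate[g] * support
               for g, (_, support) in respondents.items())

print(f"raw (unweighted) support for A: {raw:.1%}")       # ~48.8%, skews old
print(f"demographically weighted:       {weighted:.1%}")  # ~52.5%
```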
To forecast a race, fivethirtyeight.com’s prediction model does more. It “takes lots of polls, performs various types of adjustments to them [based on sample size, recency, and pollster credibility], and then blends them with other kinds of empirically useful indicators” such as past results, expert assessments, and fundraising. Here is their final 2016 estimate:
[Figure: FiveThirtyEight’s final 2016 forecast, which gave Clinton roughly a 7 in 10 chance of winning.]
Ha! This forecast, like those of other 2016 prediction models, failed.
Or did it? Consider a parallel. Imagine that as a basketball free-throw shooter steps to the line, I tell you that the shooter has a 71 percent free-throw average. If the shooter misses, would you disbelieve the projection? No, because if the projection is accurate you should expect to see a miss 29 percent of the time. If the player virtually never missed, then you’d rightly doubt my data.
Likewise, if Nate Silver’s fivethirtyeight.com gives a candidate a 7 in 10 chance of winning and that candidate always wins, then the model is, indeed, badly flawed. Yes?
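A short simulation makes the free-throw logic concrete (a sketch, assuming a perfectly calibrated 7-in-10 forecast; the numbers are simulated, not FiveThirtyEight’s):

```python
import random

random.seed(42)
trials = 100_000

# Under a well-calibrated "7 in 10" forecast, the favorite should
# still lose in roughly 30% of races.
losses = sum(random.random() >= 0.7 for _ in range(trials))
print(f"favorite lost {losses / trials:.1%} of simulated races")  # ~30%
```

A model whose 70 percent favorites never lost would be the miscalibrated one.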
In the 2018 U.S. Congressional races, fivethirtyeight.com correctly predicted 96 percent of the outcomes. On the surface, that may look like a better result, but it’s mainly because most races were in solid Blue or Red districts and not seriously contested.
Ergo, don’t be too quick to demean the quality polls and the prediction models they inform. Survey science still works.
(For David Myers’ other weekly essays on psychological science and everyday life, visit TalkPsych.com.)