A real-world example of how wrong "likely voter" screens can be

As if in reply to my missive about hazards in political polling, Politico has a great story today about just how far astray likely voter screens can lead even seasoned political professionals:

For Republicans, one of the worst parts of the GOP’s 2012 trouncing was that they didn’t see it coming.
Top party strategists and officials always knew there was a chance that President Barack Obama would get reelected, or that Republicans wouldn’t gain control of the Senate. But down to the final days of the national campaign, few anticipated the severe setbacks that Republicans experienced on Nov. 6.
The reason: Across the party’s campaigns, committees and super PACs, internal polling gave an overly optimistic read on the electorate. The Romney campaign entered the last week of the election convinced that Colorado, Florida and Virginia were all but won, that the race in Ohio was neck and neck and that the Republican nominee had a legitimate shot in Pennsylvania.

In other words, the likely voter screens the Republican pollsters applied to figure out who would actually vote were grossly inaccurate. And this conclusion is confirmed by Democratic pollsters and the Obama campaign:

Democrats had argued for months before the election that Republican polling was screening out voters who would ultimately turn up to support Obama. In fact, Obama advisers said, if you applied a tighter likely voter screen to Democratic polling — counting only the very likeliest voters as part of the electorate — you could come up with results similar to what the GOP was looking at.
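
To make the mechanism concrete, here’s a minimal sketch in Python of how the screen alone can move a poll’s topline. Everything in it is invented for illustration (the respondent pool, the turnout scores, and the assumption that low-propensity respondents lean toward “Candidate A”); it’s not real 2012 data, just the shape of the dynamic Politico describes.

    import random

    # Hypothetical respondent pool: (candidate preference, turnout score 0-1).
    # All numbers are invented for illustration, not real 2012 polling data.
    random.seed(42)
    respondents = []
    for _ in range(10_000):
        turnout = random.random()
        # Assume (purely for illustration) that support for Candidate A is a
        # bit higher among low-propensity respondents.
        p_a = 0.55 - 0.08 * turnout
        respondents.append(("A" if random.random() < p_a else "B", turnout))

    def topline(cutoff):
        """Candidate A's share among respondents who pass the screen."""
        voters = [pref for pref, turnout in respondents if turnout >= cutoff]
        return sum(1 for pref in voters if pref == "A") / len(voters)

    for cutoff in (0.0, 0.5, 0.8):  # looser to tighter likely voter screens
        print(f"screen cutoff {cutoff:.1f}: Candidate A at {topline(cutoff):.1%}")

Each tightening of the screen shifts the reported race away from Candidate A, even though no one’s opinion changed; only the pollster’s model of who will vote did.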

Keep in mind the twofold purpose of political polling next time you see polling results. The first goal is to take an accurate snapshot of the electorate’s opinions on a certain date; the second is to predict the results on election day. The first goal isn’t easy to achieve, but the second is even harder, because predicting human behavior is difficult in all circumstances and impossible in many. Be much more skeptical of likely voter polls, and rely on polling averages rather than any single poll, because you’re much more likely to get an accurate picture that way (a quick simulation below shows why). You should also read “polling postmortems” (like the one just published by Nate Silver at 538) to understand how each pollster stacked up against actual results.
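
Why averages? Because independent sampling errors partially cancel. Here’s a quick simulation sketch, again with made-up parameters, comparing the typical error of a single poll to that of a ten-poll average. One caveat that matters for the 2012 story: averaging only washes out random sampling noise, not a systematic bias (like a flawed likely voter screen) shared across polls.

    import random
    import statistics

    random.seed(0)
    TRUE_SUPPORT = 0.52  # assumed "true" support level for this simulation
    N = 800              # respondents per poll
    TRIALS = 2_000

    def one_poll():
        """Simulate one poll: share of N random respondents saying yes."""
        return sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N

    single_errors, average_errors = [], []
    for _ in range(TRIALS):
        polls = [one_poll() for _ in range(10)]             # ten independent polls
        single_errors.append(abs(polls[0] - TRUE_SUPPORT))  # one poll's error
        average_errors.append(abs(statistics.mean(polls) - TRUE_SUPPORT))

    print(f"typical single-poll error:     {statistics.mean(single_errors):.2%}")
    print(f"typical 10-poll-average error: {statistics.mean(average_errors):.2%}")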

This last conclusion applies to all kinds of forecasts, which is why I’m a strong advocate of retrospective comparisons of forecasting results to actual events (see, for example, Koomey, Jonathan G., Paul Craig, Ashok Gadgil, and David Lorenzetti. 2003. “Improving long-range energy modeling: A plea for historical retrospectives.” The Energy Journal, vol. 24, no. 4, October, pp. 75-92; also LBNL-52448; email me for a copy. Also check out this short post on a retrospective for a 1981 climate forecast).
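
In practice, a retrospective can be as simple as lining up each forecast against the realized value and looking at the pattern of errors. A hypothetical sketch (the numbers below are placeholders, not figures from the paper cited above):

    # Compare old forecasts to what actually happened (placeholder values).
    forecast = {"2000": 98.0, "2005": 105.0, "2010": 113.0}
    actual   = {"2000": 99.0, "2005": 100.5, "2010": 101.0}

    for year in forecast:
        error = forecast[year] - actual[year]
        print(f"{year}: forecast {forecast[year]:.1f}, actual {actual[year]:.1f}, "
              f"error {error:+.1f} ({error / actual[year]:+.1%})")

    # Errors with a consistent sign point to a systematic bias rather than
    # random noise -- exactly what retrospectives are designed to surface.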

