Now that the results are official pretty much everywhere (New York is a fairly important holdout, though with an obvious rationale for the tardy count), we can finally do a more thorough examination of how America's pollsters fared in the 2012 electoral sweepstakes.
Yes...yes, I realize that this has already been done in a variety of ways elsewhere, but I decided to add my own spin. Given my background (I am a polls guy, but from the political angle, not necessarily the math angle), I took a very Algebra I approach to grading the pollsters.
Here's how it worked:
1. I made two lists of pollsters. The first was every pollster that released polling in at least five separate races (not counting national polls), a grand total of 34 different pollsters. The second was a "major pollsters" list, from which I excluded two groups: pollsters who primarily worked for campaigns, and pollsters who only worked in one or two states. That left a list of 17 "major" pollsters.
2. I then excluded duplicate polls: each pollster was assessed only on its most recent poll in each race, and only polls released after October 1st were considered.
3. I graded each of the pollsters on three criteria:
- The first criterion was a simple one: in how many contests did the pollster pick the correct winner? A forecast tie counted as half a correct pick. I then rounded to the nearest whole percent, for a score from 0 to 100.
- The second criterion was a simple assessment of error. I rounded each result to the nearest whole number, did the same with the polling results, and then calculated the difference. For example, if the November 5th PPP poll out of North Carolina was 49-49, and Romney eventually won 50-48, the "simple error" would be two points.
I then gave each pollster an overall "error score" based on how little average error there was in their polling. The math here is painfully simple: for every tenth of a point of average error, I deducted one point from a perfect score of 100. No error at all would yield 100 points, while an average error of ten points would get you zip, zero, nada. (If you think 10 points was too generous, bear this in mind: two GOP pollsters had an average error in 2012 of over ten points.) The clubhouse leaders on this measurement (a tie between Democratic pollster Lake Research and the DCCC's own in-house IVR polling outfit) had an average error of just 2.0 points, which yields a score of 80.
- The third measurement sought to reward pollsters who did not show a strong partisan lean; I called this the "partisan error" score. It starts from the same error numbers as criterion two, but adds an element: did the pollster overestimate the Democratic performance, or the Republican one? The total points of error on the margin favoring each party were added up separately, the difference between the two totals was taken, and that difference was divided by the number of polls. The result is (usually) lower than the raw "error" number, because a good pollster won't miss in favor of the same party every single time.
Interestingly, virtually every pollster had an average error that overestimated the performance of the GOP. This echoes the national polls we saw, which tended to lowball the lead that President Obama held over Mitt Romney.
For this criterion, the 0-100 score was calculated the same way. For example, Rasmussen erred in favor of the GOP by an average of 3.5 points (you'd have thought it would be higher, but a couple of big misses in blowouts like the North Dakota gubernatorial race muted their GOP lean). That works out to a "partisan error" score of 65.
So, how did the pollsters fare in 2012? The best, and worst, performances among the major pollsters might surprise you.
(UPDATE: The link to the GoogleDoc with the data and the "grades" for the pollsters should be fixed now. Apologies to those who tried to view it in the first hour.)