Professor Larry Jacobs – by far the most-quoted non-elected person in Minnesota – defends the Humphrey Institute Poll:
Differences between polls may not be substantively significant as illustrated by the case of MinnPost’s poll with St. Cloud State, which showed Dayton with a 10 point lead, and the MPR/HHH poll, which reported a 12 point lead.
The “margin of sampling error,” which is calculated based on uniform formulas used by all polling firms, creates a zone around the estimate of each candidate’s support, reflecting the statistical probability of variation owing to random sampling. The practical effect is that the results of the MinnPost poll with St. Cloud State and MPR/HHH are, in statistical terms, within range of each other. Put simply, the 2 points separating them may reflect random variation and may well not be a statistically meaningful difference.
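For readers who want to see the arithmetic Jacobs is leaning on: here’s a minimal Python sketch of the standard 95% margin-of-error formula and the “overlapping ranges” claim. The sample size and support levels below are hypothetical stand-ins, not the polls’ actual figures.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical inputs -- the real n and support levels live in each poll's methodology notes.
moe = margin_of_error(750)                   # roughly +/- 3.6 points

hhh_dayton = (0.43 - moe, 0.43 + moe)        # illustrative 12-point-lead poll
minnpost_dayton = (0.41 - moe, 0.41 + moe)   # illustrative 10-point-lead poll

# The two intervals overlap, so the 2-point gap between the polls
# could be nothing more than random sampling variation.
overlap = hhh_dayton[0] <= minnpost_dayton[1] and minnpost_dayton[0] <= hhh_dayton[1]
```

Note that this math says nothing about systematic bias; it only bounds random sampling noise, which is exactly the limitation the rest of this post is about.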
What might be a “statistically meaningful difference” is that Survey USA and Rasmussen both came much, much closer – with errors one-quarter to one-third the size of the Strib, HHH and St. Cloud polls’ – to the actual election result, and tracked much closer to the GOP’s internal polling, which turned out to be dead-nut accurate (as we’ll see tomorrow).
Figure 2 creates a zone of sampling error around estimates of support for Dayton and Emmer by the five media polls completed during the last two weeks of the campaign. In terms of the estimates of Dayton’s support, the MPR/HHH poll is within the range of all four other polls. Take-home point: its estimate of Dayton’s support was consistent with all other polls.
Well, no. It was consistent with the polls that have developed a reputation for inaccuracy that inevitably favors the DFL. The others – Survey USA, Rasmussen, Laurence – were not consistent with the Humphrey poll at all.
Frank Newport of Gallup responds to this:
It is unclear from the report how much the write‐up of results from the October 21‐25 MPR/HHH poll emphasized the margin of error range around the point estimates. Although this is not part of their recommendation, if the authors feel strongly that the margin of error around point estimates should be given more attention, future reports could include more emphasis on a band or range of estimated support, rather than the point estimates.
In other words, if the Humphrey Poll is really a range, with no special confidence in any particular number within that range, then publicize the range.
But that’s not what the Humphrey Institute, or the media, led with just before the election. It was “DAYTON LEADS BY 12”. Not “Dayton leads by 8 to 16, maybe, sorta”.
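If anything, “8 to 16, maybe, sorta” is generous: the margin of error on a lead (the gap between two shares from the same sample) is wider than the plus-or-minus figure polls report for a single candidate. A quick sketch, with hypothetical shares and sample size standing in for the actual poll numbers:

```python
import math

def lead_moe(p1, p2, n, z=1.96):
    """95% margin of error on the lead p1 - p2 within a single multinomial sample."""
    variance = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(variance)

# Hypothetical shares and sample size producing a 12-point lead.
dayton, emmer, n = 0.46, 0.34, 750
moe_lead = lead_moe(dayton, emmer, n)   # roughly +/- 6.3 points on the lead itself
# So "leads by 12" is really something like "leads by 6 to 18" at 95% confidence --
# before any question of systematic house bias even comes up.
```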
The distinction might make a difference.
This is generally not done in pre‐election polling, under the assumption that the point estimate is still the most probable population parameter. Any education of the public on the meaning of margins of error and ranges, and comparisons of the margins of error surrounding other polls, is an admirable goal. It does, however, again raise the question of the purpose of and value of pre‐election polls if they are used only to estimate broad ranges of where the population stands. This topic is beyond the scope of this review.
In other words – if you take Jacobs at his word, then there’s nothing really newsworthy about the HHH poll.
Do you suppose they’ll stick with that line in the run-up to the 2012 election?