The Great Poll Scam, Part XII: The Dog Ate Their Homework

Writing in defense of the Humphrey Institute Poll – which indicated our tied governor’s race was headed for a 12-point blowout – Professor Larry Jacobs says:

Careful review of polls in the field conducting interviews during the same period indicates that the MPR/HHH estimate for Emmer (see Figure 2) was within the margin of sampling error of 3 of the 4 other polls but that it was also on the “lower bound” after accounting for random sampling error. (Its estimate was nearly identical to that of MinnPost’s poll with St. Cloud.)

Which showed the race as a ten-point blowout for Dayton.

Jacobs is, in effect, saying “yeah, our poll was a hash – but so was everyone else’s”.

This pattern is not “wrong” given the need to account for the margin of sampling error, but it is also not desirable. As part of our usual practice, the post-election review investigated whether there were systematic issues that could be addressed.

Research suggests three potential explanations for the MPR/HHH estimate for Emmer; none revealed problems after investigation.

Indeed.
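For reference, the “margin of sampling error” Jacobs leans on is simple arithmetic. Here is a minimal Python sketch; the sample size and candidate proportion below are invented for illustration, not the HHH poll’s actual figures:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: a poll of 750 likely voters showing a candidate at 34%
moe = margin_of_error(0.34, 750)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.4 points
```

Note that this interval only covers random sampling error; it says nothing about systematic skew, which is the whole argument here.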

Here are the three areas the Humphrey Institute investigated:

Weighting: First, it made sense to examine closely the weighting of the survey data in general and the weighting used to identify voters most likely to vote. Weighting is a standard practice to align the poll data with known demographic and other features such as gender and age that are documented by the U.S. Census. (Political party affiliation is a product of election campaigns and other political events and is not widely accepted by survey professionals as a reliable basis for weighting responses.)

Our own review of the data did not reveal errors that, for instance, might inflate the proportion of Democrats or depress that of Republicans who are identified as likely to vote. To make sure our review did not miss something, we solicited the independent advice of well-regarded statistician, Professor Andrew Gelman at Columbia University in New York City, who we had not previously worked with or personally met. Professor Gelman concluded that the weighting was “in line with standard practice” and confirmed our own evaluation.

“And an expert said everything’s hunky dory!”

Our second investigation was of what are known as “interviewer effects” based on research indicating that the race of the interviewer may impact respondents. (Forty-four percent of the interviewers for the MPR/HHH poll were minorities, mostly African American.) In particular, we searched for differences in expressed support for particular candidates based on whether the interviewer was Caucasian or minority. This investigation failed to detect statistically significant differences.

And the third was the much higher participation in the poll from respondents in the “612” area code – Minneapolis and its very near ‘burbs.  Jacobs (with emphasis added by me):

When analyzing a poll to meet a media schedule, it is not always feasible to look in-depth at internals.

It’s apparently more important to make the 5PM news than to have useful, valid numbers.

With the time and ability that this review made possible, we discovered in retrospect that individuals called in the 612 area code were more prone to participate than statewide — 81% in the 612 area as compared to 67% statewide in the October poll. Given that Democratic candidates traditionally fare well among voters in the 612 area code, the higher cooperation rate among likely voters in the 612 area code may explain why the estimate of Emmer’s support by MPR/HHH was slightly lower than those by other polls conducted at around the same time. This is the kind of lesson that can be closely monitored in the future and addressed to improve polling quality.

Except we bloggers have been “closely monitoring” this for years. It’s been pointed out in the past; on this very blog, I have been writing about this phenomenon since 2004 at the very latest. Liberals looooove to answer polls. Conservatives seem not to.

That Jacobs claims to be just discovering this now, after all these years, is…surprising?
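For what it’s worth, the arithmetic behind the 612 excuse is easy to check. A sketch, with invented dial counts and candidate splits; only the 81% and 67% cooperation rates come from Jacobs’ report:

```python
# Hypothetical assumptions: 612-area numbers are 20% of dialed numbers
# and lean 65/35 DFL; the rest of the state splits 50/50.

def completed_share(dialed_612, dialed_rest, coop_612=0.81, coop_rest=0.67):
    """Fraction of completed interviews coming from the 612 area code."""
    done_612 = dialed_612 * coop_612
    done_rest = dialed_rest * coop_rest
    return done_612 / (done_612 + done_rest)

share = completed_share(dialed_612=200, dialed_rest=800)
# 612 numbers were 20% of dials but a larger slice of completions.
topline_dfl = share * 0.65 + (1 - share) * 0.50

print(f"612 share of completions: {share:.3f}")
print(f"Unweighted DFL topline:   {topline_dfl:.3f}")
```

With these made-up inputs, the 612 share rises from 20% of dials to about 23% of completions, nudging the unweighted topline roughly half a point toward the DFL candidate. The point is that the size of the skew is computable, and correctable, before publication.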

Frank Newport at Gallup critiques Jacobs’ report:

The authors give the cooperation rate for 612 residents compared to the cooperation rate statewide. The assumption appears to be that this led to a disproportionately high concentration of voters in the sample from the 612 area code. A more relevant comparison would be the cooperation rate for 612 residents compared to all those contacted statewide in all area codes other than 612. Still more relevant would be a discussion of the actual proportion of all completed interviews in the final weighted sample that were conducted in the 612 area code (and other area codes) compared to the Census estimate of the proportion of the population of Minnesota living in the 612 area code, or the proportion of votes cast in a typical statewide election from the 612 area code, or the proportion of the initial sample in the 612 area code. These are typical calculations. The authors note that residents in the 612 area code can be expected, on average, to skew disproportionately for the Democratic candidate in a statewide race. An overrepresentation in the sample of voters in the 612 area code could thus be expected to move the overall sample estimates in a Democratic direction.

That Jacobs finds an excuse for failing to weight for higher participation in a city that is right up there with Portland and Berkeley as a liberal hotbed would be astounding, if it weren’t the Humphrey Institute we’re talking about.

The authors do not discuss the ways in which region was controlled in the survey process, if any. The authors make it clear that they did not weight the sample by region. This is commonly done in state polls, particularly in states where voting outcomes can vary significantly by region, as apparently is the case in Minnesota.
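The regional weighting Newport says was skipped is not exotic. Here is a post-stratification sketch; the interview data and region targets are invented (a real poll would use Census or vote-history figures):

```python
from collections import Counter

# (region, candidate) for each completed interview -- hypothetical data
interviews = ([("612", "Dayton")] * 150 + [("612", "Emmer")] * 80 +
              [("rest", "Dayton")] * 350 + [("rest", "Emmer")] * 380)

targets = {"612": 0.18, "rest": 0.82}  # assumed population shares

n = len(interviews)
region_counts = Counter(region for region, _ in interviews)
# Weight = target share / sample share, applied to each respondent.
weights = {r: targets[r] / (region_counts[r] / n) for r in targets}

tally = Counter()
for region, candidate in interviews:
    tally[candidate] += weights[region]

total = sum(tally.values())
for candidate, w in sorted(tally.items()):
    print(f"{candidate}: {w / total:.3f}")
```

With these made-up numbers, the 612 area is overrepresented in completions (about 24% against an 18% target), and weighting it back down moves the Dayton share about a point. One dictionary lookup per respondent; hardly beyond a media schedule.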

Summary:  The HHH poll is sloppy work.

17 thoughts on “The Great Poll Scam, Part XII: The Dog Ate Their Homework”

  1. So the HHH poll isn’t ‘pro-DFL’–it’s just sloppy, is that it?

    If you were a slimy-though-brilliant Marxist pollster and you desperately wanted Mark Dayton to win–and your unpublished numbers had Dayton/Emmer within 0.5%–do you think it would help your man to report him as being up by 10 points?

  2. Gavin,

    I’ll presume you haven’t read the entire series. It’s a big read. But your questions are both answered earlier in the series.

    So the HHH poll isn’t ‘pro-DFL’–it’s just sloppy, is that it?

    You demonstrate the dangers of judging theses by taking a “snapshot”.

    My thesis – and the numbers support it – is that both the HHH and Strib polls have, since 1988, consistently underrepresented GOP turnout in the vast majority of elections (the HHH poll in all but one). That underrepresentation is more pronounced in close elections than in blowouts.

    The pattern is not random. It is utterly consistent. You can throw spitballs all you want – it’s kinda your MO, I know – but those are the numbers.

    If you were a slimy-though-brilliant Marxist pollster and you desperately wanted Mark Dayton to win–and your unpublished numbers had Dayton/Emmer within 0.5%–do you think it would help your man to report him as being up by 10 points?

    As I noted Monday, Albert Mehrabian’s research tends to indicate that it would, in fact, flake off some undecided voters.

  3. Not correcting for response by area code is unforgivable. It would be fast and easy to do — make up a spreadsheet ahead of time, enter the numbers, correct for historical over or under representation by area code.
    If HHH was a GOP outfit or even non-partisan, it would have been done. As it is, they were willing to destroy their reputation for the sake of a poll that could influence an election.
    The admission of the fault is a gift. They failed to perform due diligence. In the future HHH polls can be dismissed if they are unwilling to reveal their internals.
    The HHH Institute of Public Policy has put its name on some pretty shabby work outside of political polls. They published-by-press-release one study linking “predatory lending” to foreclosures in minority neighborhoods. The study was paid for by a law firm that specialized in suing mortgage lenders for discriminating against minority homeowners.
    Who pays for the HHH anyhow?

  4. Thanks Mitch–no, I’ve only read part XII, so all I know so far is that the ‘summary’ shown in p12 doesn’t in any way capture what you’re actually claiming. Point taken.

    If any non-adoring claim is being made concerning my “MO”, by all means let’s discuss this suggestion politely, with examples and quotations. ‘Tis the season.

  5. Gee, I have to wonder if SITD is a “superstition-based institution”. I don’t know, but then, I’m not that gullible. Empirically speaking, of course.

  6. all I know so far is that the ‘summary’ shown in p12 doesn’t in any way capture what you’re actually claiming.

    Summary of the immediately-presenting point, not of the entire series.

    That will come at noon tomorrow.

    If any non-adoring claim is being made concerning my “MO”, by all means let’s discuss this suggestion politely, with examples and quotations.

    Your little outburst last week about my piece about Scarlett J’s divorce is submitted in its entirety; an extended rhubarb over a largely inconsequential not-even-side-issue, done apparently to assuage your disdain for religious faith. Fair enough (if irritating), except for you having to scamper off to tattle to PZ Meyers with an out-of-context reference to something that I really had no intention of discussing at all. Call it spitballs, call it strawmen, call it Ethel, I don’t care.

    ‘Tis the season.

    God bless.

  7. You know, there was a time (a little over two years ago), when I thought that Hope and Change ™ might just be real. Now that history has proven otherwise (empirically speaking, of course), I have to conclude putting trust in human institutions is a pretty stupid thing to do.
    Funny how that works.

  8. Being “Pro-DFL” is a matter of intent and very difficult to prove.
    Being sloppy is easy to prove.
    Why is it that these “empiricists” have problems using “reason”?

  9. Your point is a good one, Mitch. A properly executed opinion poll ought to have as many errors in one direction as the other over time. If the error is always on one side of the equation, there is a bias. I once had an associate who was so consistently late to the office you could set your watch by him. There was nothing random about his behavior.

  10. With nary an effort, the thoughtful reader must conclude that Gavin Sullivan has all the qualifications to play the “Peevee” character during his pre-relapse, “I don’t need no stinkin’ med’s” period in the upcoming, soon-to-be classic, stage production of “Smarmy Moonbat, Interrupted”.

