Shot In The Dark: Today’s News, Two Years Ago

Nate Silver at the NYTimes has been widely respected for his ability as a statistician.

His reputation, though, seems to stem largely from his facility at what amount to rhetorical parlor tricks (he once earned a bit of a living playing poker, and he made a name for himself with baseball stats), and from his calling the vast majority of the 2008 election slate correctly (with the help of an epochal wave election and lots of access to the Obama campaign’s internal polling) – which led to his hiring at the NYTimes in time for the 2010 race.

Silver’s method at the NYTimes involves…:

  • Taking regional polls – from polling services as well as media polls – and…
  • “weighting” them according to some special sauce known only to Nate Silver, Registered Statistical Genius.

Now, I wrote about Silver’s method two years ago, when he spent much of the race predicting Mark Dayton would win by six points (with an eight-point margin of error).  As I pointed out, Silver’s “methodology” involved giving a fairly absurd amount of weight to polls like the long-discredited Star Tribune “Minnesota” Poll and the since-discontinued Hubert H. Humphrey Institute poll (for whose demise I sincerely hope I deserve some credit, having spent a good part of the fall of 2010 showing what a piece of pro-DFL propaganda it had always been).  During the middle of the 2010 race, Silver gave the absurdly inaccurate-in-the-DFL’s-favor (especially in close elections) HHH and Minnesota Polls immense weight, while undervaluing the generally-accurate Rasmussen polls and, to a lesser extent, Survey USA.

I said Silver’s methodology was “garbage in, garbage out” – he uses bad data, and gets bad results.  I was being charitable, of course; his methodology, untransparent and proprietary as it is, processes bad data into worse conclusions.

That was in 2010.

Today?  NRO’s Josh Jordan reaches the same conclusion:

While many in the media (and Silver himself) openly mock the idea of Republicans’ “unskewing polls” (and I am not a fan of it by any means), Silver’s weighting method is just a more subtle way of doing just that. I outlined yesterday why Ohio is closer than the polls seem to indicate by looking at the full results of the polls as opposed to only the topline head-to-head numbers. Romney is up by well over eight points among independents in an average of current Ohio polls, the overall sample of those same polls is more Democratic than the 2008 electorate was, and Obama’s two best recent polls are among the oldest.

But look at some of the weights applied to the individual polls in Silver’s model. The most current Public Policy Polling survey, released Saturday, has Obama up only one point, 49–48. That poll is given a weighting under Silver’s model of .95201. The PPP poll taken last weekend had Obama up five, 51–46. This poll is a week older but has a weighting of 1.15569.

So it wasn’t just Minnesota!

And remember – PPP polls, while leaning a little left, are not generally flagrantly inaccurate in the sense that the Strib is and the HHH was.

And it’s not a fluke…:

The NBC/Marist Ohio poll conducted twelve days ago has a higher weighting attached to it (1.31395) than eight of the nine polls taken since. The poll from twelve days ago also, coincidentally enough, is Obama’s best recent poll in Ohio, because of a Democratic party-identification advantage of eleven points. By contrast, the Rasmussen poll from eight days later, which has a larger sample size, more recent field dates, but has an even party-identification split between Democrats and Republicans, has a weighting of .88826, lower than any other poll taken in the last nine days.

Jordan reaches a conclusion that even I didn’t:

This is the type of analysis that walks a very thin line between forecasting and cheerleading. When you weight a poll based on what you think of the pollster and the results and not based on what is actually inside the poll (party sampling, changes in favorability, job approval, etc), it can make for forecasts that mirror what you hope will happen rather than what’s most likely to happen.

Well, you can – if your goal isn’t so much to measure the nation’s zeitgeist (and report on it) as to affect the election.

Which has, of course, been my contention all along.
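As an aside for the quantitatively curious: the pull of those weights is easy to see with a toy weighted average.  This is a generic weighted mean using the two PPP numbers Jordan quotes above – not Silver’s actual model, which is proprietary:

```python
# A generic weighted mean of poll margins -- NOT Silver's actual
# (proprietary) model, just an illustration of how weights work.
# Margins and weights are the two PPP Ohio polls quoted above.
polls = [
    {"margin": 1.0, "weight": 0.95201},  # newest PPP poll: Obama +1
    {"margin": 5.0, "weight": 1.15569},  # week-older PPP poll: Obama +5
]

unweighted = sum(p["margin"] for p in polls) / len(polls)
weighted = (sum(p["margin"] * p["weight"] for p in polls)
            / sum(p["weight"] for p in polls))

print(f"unweighted: Obama +{unweighted:.2f}")  # Obama +3.00
print(f"weighted:   Obama +{weighted:.2f}")    # Obama +3.19
```

The shift is small here, but the mechanics are the point: because the older, Obama-friendlier poll carries the larger weight, it drags the average its way – and when the weights are opaque, a reader has no way to audit the drag.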

Chanting Points Memo: “Minnesota Poll” Has Your Delivery Of Sandbags Right Here

Yesterday, the Star Tribune “Minnesota Poll” also delivered its mid-cycle tally of support for the Voter ID Amendment.

And coming barely a week after the generally-accurate Survey USA poll showing Voter ID passing by a 2:1 margin, the Strib would have you believe…:

Slightly more than half of likely voters polled — 52 percent — want the changes built around a photo ID requirement, while 44 percent oppose them and 4 percent are undecided.

That is a far cry from the 80 percent support for photo ID in a May 2011 Minnesota Poll, when the issue was debated as a change in state law. Support among Democrats has cratered during a year marked by court battles, all-night legislative debates and charges that the GOP is attempting to suppress Democratic votes.

Republicans and independents continue to strongly back the proposal, which passed the Legislature this year without a single DFL vote.

Wow.  Sounds close!

Sort of; if you accept the validity of the numbers (and unless the DFL is headed for a blowout win, you must never accept the validity of the “Minnesota Poll’s” numbers), and every single undecided voter today voted “no”, the measure would pass in a squeaker.

But are the numbers valid?    And by “valid”, I don’t mean “did they do the math right”, I mean “did they poll a representative sample of Minnesotans?”

To find that out, you have to do something that almost nobody in the Strib’s reading audience does; look at the partisan breakdown of the survey’s respondents.  Which is in a link buried in the middle of a sidebar, between the main article and the cloud of ads and clutter to the right of the page, far-removed from the headline and the lede graf.  Which takes you to a page that notes (with emphasis added):

• The self-identified party affiliation of the random sample is: 41 percent Democrat, 28 percent Republican and 31 percent independent or other.

That’s right – as with the Marriage Amendment numbers we looked at this morning (it’s the same survey), the Strib wants you to believe…

…well, no.  I’m not sure they “want” anyone to believe anything.  I’m sure they want people to read the headline and the “almost tied!” lede, and not dig too far into the numbers.

It’s part of the Democrats’ “Low-Information Voters” campaign; focus on voters who don’t dig for facts, who accept what the media tells them, who vote based on the last chanting point they heard.

Fearless prediction:  On November 4, the Strib will release a “Minnesota Poll” that shows the Voter ID Amendment slightly behind, using a partisan breakdown with an absurdly high number of DFLers.   It’ll be done as a sort of positive bandwagon effect – to make DFLers feel there’s a point to come out and vote against the Voter ID Amendment (and for Obama, Klobuchar, and the rest of the DFL slate, natch).

And it will be a complete lie.  Voter ID will pass by 20 points, and this cycle of polling will disappear down the media memory hole like all the rest of them.

Question:  Given that its entire purpose seems to be to build DFL bandwagons and discourage conservative voters, when do we start calling the “Minnesota Poll” what it seems to be – a form of vote suppression?

Chanting Points Memo: “Minnesota Poll” Orders Material For A Narrative-Building Spree

If you take the history of the Minnesota Poll as any indication, yesterday’s numbers on the Marriage Amendment might be encouraging for amendment supporters:

The increasingly costly and bitter fight over a constitutional amendment to ban same-sex marriage is a statistical dead heat, according to a new Star Tribune Minnesota Poll.

Six weeks before Election Day, slightly more Minnesotans favor the amendment than oppose it, but that support also falls just short of the 50 percent needed to pass the measure.

Wow.  That sounds close!

But as always with these polls, you have to check the fine print.  And the “Minnesota Poll” buries its fine print in a link well down the page; you don’t ever actually find it in the story itself.  And it contains the partisan breakdown (with emphasis added):

The self-identified party affiliation of the random sample is: 41 percent Democrat, 28 percent Republican and 31 percent independent or other.

That’s right – to get this virtual tie, the Strib, in a state that just went through photo-finish elections for Governor and Senator, and has been on the razor’s edge of absolute equality between parties for most of a decade, sampled three Democrats for every two Republicans to get to a tie.

If you believe – as I do – that the “Minnesota Poll” is first and foremost a DFL propaganda tool, intended largely to create a “bandwagon effect” to suppress conservative turnout (and we’ll come back to that), then this is good news; the Marriage Amendment is likely doing better than the poll is showing.

What it does mean, though, is that they are working to build a narrative: that the battle over gay marriage is much more closely-fought than it actually is.

And the narrative’s players are already on board with this poll.  The Strib duly interviews Richard Carlbom, the former Dayton staffer who is leading the anti-Amendment campaign.

Actually, here’s my bet; the November 4 paper will show a “surge of support” that turns out to be much larger than any that actually materializes at the polls.

More At Noon.

UPDATE:  I wrote this piece on Sunday.  Monday morning, all of the local newscasts duly led with “both ballot initiatives are tied!”.

If you’re trying to find a construction job in Minnesota, you can get a job putting siding on the DFL’s narrative.

UPDATE 2:  Professor David Schultz at Hamline University – no friend of conservatism, he – did something I more or less planned to do on Wednesday; re-ran the numbers with a more realistic partisan breakdown:

Why is the partisan adjustment important? The poll suggests significant partisan polarization for both amendments, with 73% of DFLers opposing the marriage amendment and 71% of GOPers supporting. Similar partisan cleavages also exist with the Elections Amendment. If this is true, take the marriage Amendment support at 49% and opposition at 47%. If DFLers are overpolled by 3% and GOP underpolled by 6%, and if about 3/4 of each party votes in a partisan way, I would subtract about 2.25% from opposition (3% x .75) and add 4.5% to support (6% x .75) and the new numbers are 53.5% in support and 44.75% against. This is beyond margin of error.

If one applies the correction to the Elections Amendment there is about an 80% DFL opposition to it and a similar 80% GOP support for it. Then the polls suggest approximately 56.8% support it and 41.6% oppose.
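Schultz’s back-of-the-envelope correction is easy to check.  Here’s a short sketch of his arithmetic – the sampling and partisan-cohesion inputs are his figures, quoted above, and the 52–44 raw Voter ID numbers are the Strib’s; the function name is mine:

```python
# Prof. Schultz's partisan correction, as quoted above: add
# (GOP undersampling x GOP cohesion) to support, and subtract
# (DFL oversampling x DFL cohesion) from opposition.
def adjust(support, oppose, dfl_over, gop_under, dfl_cohesion, gop_cohesion):
    return (round(support + gop_under * gop_cohesion, 2),
            round(oppose - dfl_over * dfl_cohesion, 2))

# Marriage Amendment: 49-47 raw, ~3/4 of each party voting its line
print(adjust(49, 47, dfl_over=3, gop_under=6,
             dfl_cohesion=0.75, gop_cohesion=0.75))  # (53.5, 44.75)

# Elections (Voter ID) Amendment: 52-44 raw, ~80 percent cohesion
print(adjust(52, 44, dfl_over=3, gop_under=6,
             dfl_cohesion=0.80, gop_cohesion=0.80))  # (56.8, 41.6)
```

Which reproduces his 53.5–44.75 and 56.8–41.6 figures.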

Which brings us very nearly back to the 3:2 margin for the Voter ID amendment, and the tight but solid lead for the Marriage Amendment, that every other poll – the reputable ones, anyway – has found.

All That’s Silver Does Not Glitter

While the national polls show the presidential race a statistical toss-up, Nate Silver points out that polls conducted in swing states show Obama with an actual lead of sorts – around three points:

While that isn’t an enormous difference in an absolute sense, it is a consequential one. A one- or two-point lead for Mr. Obama, as in the national polls, would make him barely better than a tossup to win re-election. A three- or four-point lead, as in the state polls, is obviously no guarantee given the troubles in the economy, but it is a more substantive advantage.

Here’s the part that caught my attention; I’ve added emphasis:

The difference isn’t an artifact of RealClearPolitics’s methodology. The FiveThirtyEight method, which applies a more complicated technique for estimating polling averages, sees broadly the same split between state and national polls.

On the one hand – well, doy.  Obama’s an incumbent elected in a wave, protected by a media that serves as his Praetorian Guard.  Of course he’s going to be polling well.

On the other hand?  My real point in this article is the abovementioned “FiveThirtyEight Method”.

I addressed this two years ago – when Silver, who is generally acknowledged to be a moderate Democrat, spent most of the 2010 campaign predicting a 6+ point Mark Dayton victory.

How did he arrive at that number?

  1. By taking an assortment of polls from around Minnesota, conducted by a variety of polling operations, and…
  2. Applying a weighting to each poll, the “538 Poll Weight”, which came from an unexplained formula known, near as I can tell, only to Silver.  Which is not to say that it’s wrong, or statistically, intellectually or journalistically dishonest, per se – merely that it’s completely opaque.

But let’s take Silver’s methodology at face value – because he’s a respected statistician who works for the NYTimes, right?

The fact remains that, at least here in Minnesota, two of the polls that were given great weight in Silver’s methodology – the Star Tribune “Minnesota” poll and the Hubert H. Humphrey Institute poll – are palpably garbage, and should be viewed as DFL propaganda at best, calculated election fraud at worst.

We went through this in some detail after the 2010 election: there’s an entire category on this blog devoted to going over the various crimes and misdemeanors of Twin Cities media pollsters.  Long story short – since 1988, the Strib “Minnesota” poll has consistently shorted Republican support, especially in the polls closest to the elections, and especially in close elections.  The “Minnesota” poll’s only redeeming point?  The Humphrey Institute poll is worse.  In both cases, they tended – moreso in closer races – to exaggerate the lead the Democrat candidate for Governor, Senator or President had.  For example, in 2010 both polls showed Mark Dayton with crushing, overwhelming, humiliating leads over Tom Emmer on election eve.  It ended up the closest gubernatorial race in Minnesota history.  The “Minnesota” poll was so bad, Frank Newport of Gallup actually wrote to comment on its dubious methodology.

I suspect the results are less mathematical background noise or methodological quirks – which, if truly random, would show distortions that even out between the parties over time – than something systematic.  While it’s not provable without a whistle-blower inside either or both organizations, I suspect the results shake out the way they do either because of selection bias in setting up survey samples (if you’re inclined to believe people have integrity), or because of systematic bias working to achieve a “Bandwagon Effect” among the electorate (if you don’t have much faith).  Count me among the cynics; an organization with integrity would have noticed these errors – long before a guy like me, who maxed out at Algebra I in college, did – and fixed the problem.  I’m willing to be persuaded, but you’ll need a much better argument than most of the polls’ defenders have offered.

The point being, this is the quality of the raw material that leads Nate Silver to his conclusions.

And that should give Silver, and people who pay attention to him, pause.

I don’t know if the other state polls are as dodgy as Minnesota’s local media polling operations.  That’d be a great subject for a blogswarm.

Marginal Notes On A Marginal Poll

I’m going to go back to Dave Mindeman’s piece at mnpACT, about the most recent Public Policy Polling (PPP) survey of Minnesota politics, for the numbers on some issues that don’t pertain to Governor Dayton and the Legislature.

Minnesota’s constitutional amendment to ban gay marriage is headed for a close vote. 48% of voters say they support it while 44% are opposed.

I neither support nor oppose the Amendment, but I have a fearless prediction; if the PPP poll, which trends a little left and features a left-heavy sample, calls it a four point race today, it’ll be 49-41 in November.

Let’s go back to the whole “people like their own bastards” bit:  Mindeman, mindful of the poll results, asks:

So, WHERE is the DFL candidates for MN-02 and MN-06 ? MN-03 and MN-08 seem to have multiple candidates in the mix …. if there are going to be any coattails from the top to help the State Legislature candidates, doesn’t there need to be someone in every district ?

There are two answers:  First, it’s further evidence that people like their own bastards; while national polling shows that Congress is less popular than Slobodan Milosevic, it doesn’t take a rocket surgeon to know that John Kline and Michele Bachmann will win their districts by 30 and 15 points respectively, even if the Dems endorsed Zombie JFK to run for the office.

“Even though Congress is unpopular?”

Yep.  As noted earlier today, polls of legislative bodies as a whole are almost always misleading.  Congress may be unpopular; Kline and Bachmann are not.

BTW … do you think the mature approach that Governor Dayton has taken on the Vikings stadium has helped … even if the taxpayers don’t want to pay for it, they sure don’t want to lose the business … and obviously the Governor is trying.

If by “mature approach” Mindeman means coming out of his closet long enough to croak “Uh want ivverbaddy to git to WOARK and sulve the prollum”, then retreating to the closet and letting the Legislature, the cities, the counties, the NFL and Wilf do all the work?  It may or may not be “mature”, but it’s certainly easier on the poll numbers.

Chanting Points Memo: “The People Love Dayton And Hate The Legislature!”

This particular chanting point has been making the rounds this week – a “Public Policy Polling” (PPP) survey appears to show that Mark Dayton is dreamily popular, and the people just can’t stand the GOP-run legislature.

It’s made the rounds of most of the mainstream media, the leftyblogs, and the lowest of the bunch, the City Pages.  I figured I’d pick on Dave Mindeman at mnpACT and his take on it because, unlike way too many Twin Cities leftybloggers, he’s articulate, recites the chanting point pretty much verbatim, and is otherwise not an idiot.

Mark Dayton’s numbers have improved since PPP last polled Minnesota in May and he’s one of the most popular Governors in the country.

Now, the numbers would seem to bear that statement out.  Let’s unpack them before we move on.

In observing PPP polls over the past couple of cycles, their results seem to consistently fall a little to the left of how Minnesota reality eventually shakes out.  Not in an egregious-to-the-point-of-fraud kind of way, like the Humphrey Institute or Strib Minnesota polls, but it’s noticeable.

I also think – and this is a theory, not something I’m stating as fact, but a decade of observation has led a lot of us on the right to wonder if there’s something to it – that liberals are much more prone to answer polls, especially in between election cycles.

Let’s ignore both of those for the moment.  Let’s talk about the surface indicators for this polling:

A little belated birthday present for Mark. Dayton has an approval rating of 53%, while disapproval is at 34% — a 19% spread.

The numbers have led Mindeman – and most other lefties – to a misleading conclusion.  Not wrong – I’m not telling people not to trust their lying eyes – but there’s more in those numbers than meets the eye.  Mindeman and the rest of the lefties are ignoring a key bit of American political behavior.

The poll covers the time between the shutdown and the present – when Dayton really didn’t do anything.  For that matter, he really didn’t do anything during the last session, or the shutdown.  He’s been for the most part a non-entity.  And if you don’t do anything – either positive or negative – then your numbers are going to be juuuuust fine.  Or at least fairly steady.

(Opposite case in point – Tim Pawlenty, who fought a two-chamber DFL advantage in 2009 and 2010 with aggression and passion.  He did not sit in his office drinking Kombucha or, given his hockey-playing pedigree, PBR, and his poll numbers showed it.  They were “lived-in”.  Who was the better governor?  Depends, now, doesn’t it?)

During the session, and the shutdown, it was the Legislature that did all the heavy lifting.  Dayton sat in his office, released the occasional demand, and until his final, fatal tour around the state, where he realized that getting behind his own plan would be political suicide, really did nothing.  And after that tour, when he folded his cards, he did so quietly, minimizing if not the GOP’s victory at least his own defeat.

In other words, he’s played defense.  He’s sat back and let the other guys take the hit.  The media, naturally, abet this behavior.

And in a state as polarized as Minnesota is, when you actually do things, you will take the hit – especially given our DFL-owned-and-operated media, whose interest in fluffing Dayton is obvious and constant.

And the Legislature has done things – affirmative things during the session and the shutdown, many of which pissed off Democrats and a few of which irritated the more conservative, and also not-so-affirmative things that have been all over the news lately.  Of course, sitting back and being passive-aggressive, like Dayton, was not an option for the legislative branch; they were sent to Saint Paul on a mission, and the mission wasn’t going to get done without some serious action.  And given the number of GOP freshmen who said they didn’t care if they only served a term, some fallout was to be expected.  It was inevitable.

But there’s more.

Dayton may get himself an easier legislature to work with next year. Democrats lead the generic legislative ballot in the state by a 48-39 margin. If that holds through November they should win back a whole lot of the seats they lost in 2010. It’s not that legislative Democrats are popular- only 31% of voters have a favorable opinion of them to 49% with a negative one. But legislative Republicans have horrible numbers. Their favorability rating is 23% with 62% of voters viewing them negatively. That honeymoon wore off real fast.

And here Mindeman and the rest of the metro chattering class fall for the seductive charms of using high-level data to draw high-level conclusions about low-level questions.  Mindeman – and the entire regional left – have scoped the data wrong, I suggest.  The fact is that “generic” never manages to get endorsed to run for the Legislature.

The Legislature will take popularity hits – they, as a body, did all the work.

The Legislature, as a body, will always lag a do-nothing governor under those circumstances.  Just like Congress does.

But aggregate polls of the entire Legislature – those mythical “generic” legislators – are meaningless, just like aggregate polls of Congress.  People may want to vote the bastards in general out, but people tend, generally, to support their own bastard.  There are exceptions – they voted a lot of incumbent “bastards” out in 2006 and 2010 – but as a very general rule, unless you have a wave election, incumbency has its virtues.  This election may be many things – it may return both chambers of Congress to the GOP – but I don’t think anyone’s predicting a wave yet.

Tack on the facts that PPP polls trend left, that poll respondents this early in the cycle trend left, that the PPP poll was of registered voters (who always trend left), that the poll itself is meaningless, and that redistricting – provided that it reflects actual demographic shifts rather than the DFL’s rhetoric – should favor the GOP, and I’m a lot less worried about this poll than the DFL, the media (ptr) and the chattering classes want you to be.

And despite those numbers the GOP legislature continues to play ultra partisan games.

Well, yeah, Dave.  They know the numbers are meaningless.  So does the DFL.

The Great Poll Scam, Part XIV: Fool Me Ten Times…

You’ve heard the old saying – “the definition of insanity is repeating the same thing over and over again and expecting a different result.”

The joke writes itself.  Nearly every election season, Minnesota’s media runs the results of the Star-Tribune Minnesota Poll and the Humphrey Institute/MPR Poll on its front pages; front and center on its 6 and 10PM newscasts; up-front in its hourly news bites; in the New York Times; prominently on that big news crawl above Seventh Street in downtown Saint Paul.  To those who don’t dig into the numbers – and that’s probably 99 percent of Minnesota voters – that’s all there is to it.  “Hm.  Looks like Dayton’s winning big!”

In most elections – especially the close ones – both polls (along with their downmarket stepsibling, the SCSU Poll) show numbers for the GOP candidate that beggar the imagination.  The media – the Strib, the TV stations, MPR – run the polls pretty much without any analysis.  The job of actually fact-checking the polling falls to conservative bloggers – myself, MDE, Ed Morrissey, Scott Johnson and John Hinderaker, Gary Gross, the Dogs, Sheila Kihne and others.  Poll after poll, election after election, we shout into the storm: “The numbers are a joke!  Democrats are oversampled to an extent that is not warranted by any electoral result we’ve seen in this state in nearly a generation!  Would someone look into this?”

The elections take place.  There is hand-wringing about the inaccuracy of the polls.  Two years pass.  Larry Jacobs and the  Strib release still more polls, repeating precisely the same pathologies, over and over and over.  Forever and ever, amen.  Lather, rinse, repeat.

Now, “journalism” is supposed to be about accuracy and clarity.  About telling the story, and telling it with sources that reinforce your credibility and clarity.  If you are a reporter, and you report a story based on a source’s information, and that information turns out to be wrong, it’s a bit of a vocational black eye.

This morning I asked, rhetorically, “do you think that if a source burned Tom Scheck or Pat Doyle or Rochelle Olson or Rachel Stassen-Berger over and over, year in and year out, by feeding them laughably inaccurate information, not just once or twice but on nearly every story on which they are a key source, would they keep using them as sources?”  Without really serious corroboration, if indeed it could be found?  Ever?  

And yet the regional media continues running the Strib and HHH polls, election after election, without any serious question – until after the election, anyway.  Notwithstanding the fact that the Strib’s Minnesota Poll has been very regularly wrong for a generation now.  Notwithstanding the fact that the Humphrey Poll has been even more consistent in its systematic shorting of GOP candidates.  The polls are still treated not only as useful news, but as front-page material.

This would prompt a curious person to ask a whooooole lot of questions:

Why do the pollsters continue to generate such a defective product?  While I focused heavily over the past few days specifically on Gallup’s Frank Newport’s critique of the Humphrey Institute poll, that gives the impression that this is a one-time issue.  And yet both the major media polls have had nearly the same problems, election in, election out, for a generation (or, in the case of the Humphrey Institute Poll, in every major election since 2004).  It’s gotten to the point where I want to stand outside 425 Portland, or outside the Humphrey Institute’s building at the U, and wave a sign: “It’s the same thing, every time!”

Why do the media continue to present such a routinely defective product as newsworthy?  Scott Johnson has been lighting up the “Minnesota Poll’s” shortcomings for a solid decade now; the Strib’s poll is rarely even close, and performs worse in close elections than in blowouts.  And at the risk of repeating myself, let me repeat myself: the Humphrey Institute poll has underpolled Republicans by an average of nine points.  This past election was distinguished from previous years’ ineptitude only in degree, not in concept.

Does it never occur to our “watchdogs” and “gatekeepers” to look into this?  Wasn’t “insatiable curiosity” once a pre-requisite for being a reporter?

Do the editors at the Strib, the PiPress, KARE, MPR, WCCO and the rest of the regional mainstream media genuinely consider “polls are a snapshot in time” an excuse for decades worth of a pattern of inaccuracy, not only in polling technique but in their own coverage of elections?

If a city council member were caught cashing checks to herself, would saying “it’s just a snapshot in time!” get the Strib to call their dogs off?

Appearance Of…Something?: I’ve said it before; I’m not a fundamentally conspiracy-minded person.  I don’t necessarily believe that the media is involved in a conscious, considered conspiracy to short conservative candidates in close elections.

Still – given that…:

…I’ll ask again: if the Humphrey Institute (whose institutional sympathies lean definitively left-of-center) and the Strib (ditto) wanted to create a system that would help tip close-call contests toward the DFL, how would it be any different than the system they’ve developed?

Not accusing.  Just asking.

The Great Poll Scam, Part XIII: Reality Swings And Misses

Contrary to the impression some have given on various blogs, I never worked for the Emmer campaign.  Oh, I did a fair amount of writing about Emmer’s bid for governor – I thought he had what it took to be the best governor we’ve had in a long time, and I was a supporter from long before he actually declared his intent to run.  I volunteered a lot of time, and a lot of this blog’s space, to fight against the sleaziest, most toxic smear campaign in recent Minnesota electoral history, and I do believe the better man lost this election.

But I never got any money for it.

What I did get – although not to an extent that would make a Tom Scheck or a Rachel Stassen-Berger in any way jealous – was a certain amount of access.  I heard things.

One of the things I heard from sources inside the Emmer campaign, especially during the long, dry, advertising-dollar-free summer before the primaries, when all three DFL contenders curiously spent their entire ad budgets sniping at Emmer, and the media played dutiful stenographers for Alliance for a Better Minnesota’s smear campaign, was that the Emmer campaign had its work cut out for it.  In late July and early August, a source inside the Emmer campaign, speaking on MI-5-level deep background, told me the internal polls showed Emmer trailing by 12 points.  It wasn’t good news, certainly – but it was early in the race, it was a byproduct of being outspent roughly 16:1 to that point, and it was just part of doing business.   “We gotta pick up six points, and Dayton’s gotta lose six”, the source told me, as the campaign dug its way out of “Waitergate”.

I observed to the source that that should have been nothing new for Emmer; he’d come back from a bigger margin in the previous nine months or so, from being way back in the pack at the Central Committee straw poll about this time last year, where Marty Seifert won by a margin many considered insurmountable.

The source expressed confidence it could be done.

He was, statistically, exactly right. Emmer brought the race back from a 12 point blowout to a near-tie, with numbers that pretty steadily improved – according to the party’s own internal polling.


On October 11, I held a “Bloggers For Emmer” event at an undisclosed location in the western suburbs.  It had been ten busy weeks since my off-the-record conversation with my source in the campaign.  An Emmer functionary told me – off the record – that it was now a four-point race.

A week later, within ten days of the election, the same internal poll said the race was a statistical dead heat.

Then came the last-minute hit polls from the Humphrey Institute, the Strib and Saint Cloud State – after which Emmer released his internal polling, whose results a Survey USA poll more or less confirmed.

And then came the election.

Last week, David Brauer at the MinnPost interviewed Emmer campaign manager Cullen Sheehan.  As part of the piece, he graphed the respective polls: Emmer’s internal polling (orange), the Strib poll (wide dashes) and the HHH poll (dots), showing the indicated size of the Dayton lead.

Graph used by permission of the MinnPost


And while Brauer points out that internal numbers “aren’t holy” – and many leftybloggers openly guffawed when Sheehan released them – the GOP’s internal numbers have a long record of accuracy, in my experience.  In 2002, when the Strib poll had Roger Moe measuring the drapes in the mansion, a GOP source leaked me internal polling showing that Pawlenty was tied and rising.  And internal polling released to a group of bloggers a month before the election showed Chip Cravaack pulling close to Jim Oberstar; numbers that the campaign asked be kept off the record showed that with “leaners”, Cravaack was actually leading.

So for all the leftyblogs’ caterwauling about “push polling”, the GOP’s internal polls – as seen both publicly and behind the scenes – called things as they were.  There’s a reason for that; parties need accurate polling to help them allocate scarce resources effectively.  The DFL has not released its internal polling – but the Dayton campaign’s behavior indicates to me that they also saw Emmer’s late surge, leading them to re-roll-out the “Drunk Driving Ad” (the closest the Dayton campaign ever came to a coherent policy statement, with full irony intended).

But neither side’s internal polling is affiliated with a major media outlet.  The Strib, Minnesota Public Radio and MinnPost all have symbiotic relationships with Princeton, the Humphrey Institute and Saint Cloud State, respectively (though to be accurate the MinnPost only paid for three questions in the SCSU poll, and those were, according to Brauer, on ranked-choice voting).  Those relationships, presumably, exist so that the news outlets can get “their” results out to the public first.

No matter how they’re arrived at, or so it seems.

Brauer confirms after the fact what my sources in the campaign told me, off the record, at the time; it was a real numerical rollercoaster ride:

Although “internal numbers” often become propagandistic leaks, Sheehan insists the data was not for public pre-election consumption. Though he wound up releasing the most favorable result during the campaign, it proved prescient, and two independent pollsters subsequently showed similar results.

“It really is, internally, a compass,” Sheehan says of the campaign’s polling.

Emmer’s own numbers show a candidate trailing — sometimes badly — for nearly the entire race.

On July 28 — three weeks after Emmer’s interminable “tip credit” debacle — the Republican trailed Dayton by 11 points. Ironically, the Star Tribune poll — which Republicans say overstates DFL support — had it closer: Dayton plus-10.

It was a demonstrable fact that the Strib poll oversampled DFL voters by a big margin – but that’s a poll-technique discussion to be held some other time.

In the wake of the double-digit gap, Sheehan took over as campaign manager. But by early October, the internal numbers had barely budged: Emmer was still down 7. A Strib survey taken a week or so earlier showed the Republican down 9 — again, pretty close to what the campaign was seeing.

Finally, on Oct. 13, Emmer got his first great inside news: he was only down 1. But the next media poll (SurveyUSA/KSTP) had him down 5, and an Oct. 18 internal poll repeated that number. It was two weeks before Election Day.

And then came the Big Three media polls, one after the other – the Strib, SCSU and the Humphrey polls – showing Emmer 9, 10 and 12 points down, respectively.  At which point Sheehan opted to release the internal numbers – which were shortly reinforced by SUSA.


“At that point [right before the election – the polls on which I’ve focused throughout this series], undecided voters are making up their minds and supporters are getting anxious, having seen 7 down, 10 down and 12 down,” Sheehan says. “It impacts fundraising and volunteers. It’s definitely not the only factor, but it is a factor.”

Sheehan, now the Minnesota GOP Senate caucus chief of staff, is a Republican, but Democratic Senate Majority Leader Harry Reid’s pollster feels similarly. Reid’s internal numbers proved better than media polls predicting his opponent would win.

Says Sheehan, “The point I am making is that outside public polls have an impact on campaigns — ultimately, some impact on eventual outcome of campaigns, especially in close races.”

At least one media outlet agreed even before the results were known. This year, the Star Tribune declined to do its traditional final-weekend poll. A key reason, editor Nancy Barnes told me, is that “a poll can sometimes influence the outcome of an election.”

Sheehan’s plea? Withhold questionable numbers. “I’m under no illusion that public polls will cease, but I do think news organizations have a responsibility to ask themselves, when they get their results, if they really believe they’re accurate,” he says.

I’ve met Sheehan not a few times.  Great guy.  Big future in politics.  Now, I’m not sure if he’s ever read this series; if he has, I’m sure he needs to be diplomatic.  He’s gotta get along with the regional media.

But the fact remains that the closer the race got, the farther off-the-beam the Strib and HHH polls swerved.

Just the same as they do in practically every election, especially the close ones.

So Sheehan has a point; the news media should treat suspicious polls as they would a source that’s burned them. 

Seriously – can you imagine Erik Black or Bill Salisbury or David Brauer putting a story on the front page (or “page”) based on the uncorroborated word of a source that had burned them, over and over again?  As in, not even close, but really, really embarrassingly burned?

And the Strib and Humphrey Polls have burned the regional media – over and over and over again.

Presuming, of course, that accuracy is what they’re shooting for.

More later today.

The Great Poll Scam, Part XII: The Dog Ate Their Homework

Writing in defense of the Humphrey Institute Poll – which indicated our tied governor’s race was headed for a 12-point blowout – Professor Larry Jacobs says:

Careful review of polls in the field conducting interviews during the same period indicates that the MPR/HHH estimate for Emmer (see Figure 2) was within the margin of sampling error of 3 of the 4 other polls but that it was also on the “lower bound” after accounting for random sampling error. (Its estimate was nearly identical to that of MinnPost’s poll with St. Cloud.)

Which showed the race as a ten-point blowout for Dayton.

Jacobs is, in effect, saying “yeah, our poll was a hash – but so was everyone else’s”.

This pattern is not “wrong” given the need to account for the margin of sampling error, but it is also not desirable. As part of our usual practice, the post-election review investigated whether there were systematic issues that could be addressed.

Research suggests three potential explanations for the MPR/HHH estimate for Emmer; none revealed problems after investigation.


Here are the three areas the Humphrey Institute investigated:

Weighting: First, it made sense to examine closely the weighting of the survey data in general and the weighting used to identify voters most likely to vote. Weighting is a standard practice to align the poll data with known demographic and other features such as gender and age that are documented by the U.S. Census. (Political party affiliation is a product of election campaigns and other political events and is not widely accepted by survey professionals as a reliable basis for weighting responses.)

Our own review of the data did not reveal errors that, for instance, might inflate the proportion of Democrats or depress that of Republicans who are identified as likely to vote. To make sure our review did not miss something, we solicited the independent advice of well-regarded statistician, Professor Andrew Gelman at Columbia University in New York City, who we had not previously worked with or personally met. Professor Gelman concluded that the weighting was “in line with standard practice” and confirmed our own evaluation.

“And an expert said everything’s hunky dory!”

Our second investigation was of what are known as “interviewer effects” based on research indicating that the race of the interviewer may impact respondents.11 (Forty-four percent of the interviewers for the MPR/HHH poll were minorities, mostly African American.) In particular, we searched for differences in expressed support for particular candidates based on whether the interviewer was Caucasian or minority. This investigation failed to detect statistically significant differences.

And the third was the much higher participation in the poll from respondents in the “612” area code – Minneapolis and its very near ‘burbs.  Jacobs (with emphasis added by me):

When analyzing a poll to meet a media schedule, it is not always feasible to look in-depth at internals.

It’s apparently more important to make the 5PM news than to have useful, valid numbers.

With the time and ability that this review made possible, we discovered in retrospect that individuals called in the 612 area code were more prone to participate than statewide — 81% in the 612 area as compared to 67% statewide in the October poll.13 Given that Democratic candidates traditionally fare well among voters in the 612 area code, the higher cooperation rate among likely voters in the 612 area code may explain why the estimate of Emmer’s support by MPR/HHH was slightly lower than those by other polls conducted at around the same time. This is the kind of lesson that can be closely monitored in the future and addressed to improve polling quality. 

Except we bloggers have been “closely monitoring” this for years.  It’s been pointed out in the past; on this very blog, I have been writing about this phenomenon since 2004 at the very latest.  Liberals looooove to answer polls.  Conservatives seem  not to.

That Jacobs claims to be just discovering this now, after all these years, is…surprising?

Frank Newport at Gallup critiques Jacobs’ report:

The authors give the cooperation rate for 612 residents compared to the cooperation rate statewide. The assumption appears to be that this led to a disproportionately high concentration of voters in the sample from the 612 area code. A more relevant comparison would be the cooperation rate for 612 residents compared to all those contacted statewide in all area codes other than 612. Still more relevant would be a discussion of the actual proportion of all completed interviews in the final weighted sample that were conducted in the 612 area code (and other area codes) compared to the Census estimate of the proportion of the population of Minnesota living in the 612 area code, or the proportion of votes cast in a typical statewide election from the 612 area code, or the proportion of the initial sample in the 612 area code. These are typical calculations. The authors note that residents in the 612 area code can be expected, on average, to skew disproportionately for the Democratic candidate in a statewide race. An overrepresentation in the sample of voters in the 612 area code could thus be expected to move the overall sample estimates in a Democratic direction.

That Jacobs finds an excuse for failing to weight for higher participation in a city that is right up there with Portland and Berkeley as a liberal hotbed would be astounding, if it weren’t the Humphrey Institute we’re talking about.
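Newport’s point about completed-interview shares can be sketched in a few lines of arithmetic. This is purely illustrative – the 81%/67% cooperation rates come from Jacobs’ report, but the 612 area’s 15% share of the sampling frame is my assumption, not a figure from either document:

```python
def completed_share(frame_share, coop_in, coop_out):
    """Share of completed interviews coming from a region, given that
    region's share of the sampling frame and the cooperation rates
    inside and outside the region."""
    inside = frame_share * coop_in
    outside = (1 - frame_share) * coop_out
    return inside / (inside + outside)

# Hypothetical: the 612 area is 15% of the frame, cooperating at 81%
# vs. 67% everywhere else (the rates Jacobs reported).
print(round(completed_share(0.15, 0.81, 0.67), 3))  # 0.176
```

In this toy version, a region that should be 15% of the sample ends up closer to 17.6% of completed interviews – and if that region leans heavily DFL, the unweighted topline moves with it. That is exactly the comparison Newport says the report should have made.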

The authors do not discuss the ways in which region was controlled in the survey process, if any. The authors make it clear that they did not weight the sample by region. This is commonly done in state polls, particularly in states where voting outcomes can vary significantly by region, as apparently is the case in Minnesota.

Summary:  The HHH poll is sloppy work.

The Great Poll Scam Part XI: Weasels Rip My Results

Professor Larry Jacobs – by far the most-quoted non-elected person in Minnesota – defends the Humphrey Institute Poll:

Differences between polls may not be substantively significant as illustrated by the case of MinnPost’s poll with St. Cloud State, which showed Dayton with a 10 point lead, and the MPR/HHH poll, which reported a 12 point lead.

The “margin of sampling error,” which is calculated based on uniform formulas used by all polling firms, creates a cone around the estimate of each candidate’s support, reflecting the statistical probability of variation owing to random sampling.2 The practical effect is that the results of the MinnPost poll with St. Cloud State and MPR/HHH are, in statistical terms, within range of each other. Put simply, the 2 points separating them may reflect random variation and may well not be a statistically meaningful difference.

What might be a “statistically meaningful difference” is that Survey USA and Rasmussen both came much, much closer to the actual election result – with roughly one-third to one-quarter the error of the Strib, HHH and St. Cloud polls – and tracked much closer to the GOP’s internal polling, which turned out to be dead-nut accurate (as we’ll see tomorrow).

Figure 2 creates a zone of sampling error around estimates of support for Dayton and Emmer by the five media polls completed during the last two weeks of the campaign.3 In terms of the estimates of Dayton’s support, the MPR/HHH poll is within the range of all four other polls. Take home point: its estimate of Dayton’s support was consistent with all other polls.

Well, no.  It was consistent with the other polls that have developed a reputation for inaccuracy that inevitably favors the DFL.  The rest – Survey USA, Rasmussen, Laurence – were not consistent with the Humphrey poll at all.

Frank Newport of Gallup responds to this:

It is unclear from the report how much the write‐up of results from the October 21‐25 MPR/HHH poll emphasized the margin of error range around the point estimates. Although this is not part of their recommendation, if the authors feel strongly that the margin of effort around point estimates should be given more attention, future reports could include more emphasis on a band or range of estimated support, rather than the point estimates.

In other words, if the Humphrey Poll is really a range with no particular confidence in any particular number within the range, publicize the range.

But that’s not what the Humphrey Institute, or the media, led with just before the election.  It was “DAYTON LEADS BY 12”.  Not “Dayton leads by 8 to 16, maybe, sorta”.

The distinction might make a difference.
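The range-versus-point-estimate arithmetic, for what it’s worth, is standard and easy to check. A minimal sketch, treating the poll as a simple random sample (a simplification) and using the 524 likely voters implied by the poll’s own interviewing windows (379 + 145):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of sampling error for a single proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for the MPR/HHH October sample of 524 likely voters.
print(round(100 * margin_of_error(0.5, 524), 1))  # 4.3 points per candidate
```

Roughly ±4 points on each candidate’s share – and the uncertainty on the lead itself is larger still, since both ends of the spread move at once. Which is how a reported “12” honestly becomes “somewhere between 8 and 16, maybe, sorta”.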

This is generally not done in pre‐election polling, under the assumption that the point estimate is still the most probable population parameter. Any education of the public on the meaning of margin of errors and ranges and comparisons of the margins of errors surrounding other polls is an admirable goal. It does, however, again raise the question of the purpose of and value of pre‐election polls if they are used only to estimate broad ranges of where the population stands. This topic is beyond the scope of this review.

In other words – if you take Jacobs at his word, then there’s nothing really newsworthy about the HHH poll.

Do you suppose they’ll stick with that line in the runup to the 2012 election?

The Great Poll Scam, Part X: Weasel Words

I’ve been raising kids for a long time.  Before that, I grew up around a bunch of them.  Indeed, I was one myself, once.

And I know now, as I knew then, the same thing that every single person who watches Cops knows instinctively: if you think someone did something, and their response is “you can’t prove it”, it’s the same as an admission of guilt.

Oh, it doesn’t stand up in court – and it’s probably a good thing.

And in the rarified world of academics – and its poor, profoundly handicapped accidental offspring, political public opinion polling – I’m going to suggest it works the same way.

If there is a poll that is, year in and year out, just as ludicrous as the Humphrey and Strib polls, it’s the Saint Cloud State University poll.  I haven’t heretofore included it in my “Great Poll Scam” series, because it’s sort of out of sight and out of mind.

But in David Brauer’s interview with Emmer campaign manager Cullen Sheehan, the director of the SCSU poll – which is done in conjunction with the MinnPost – a fellow named Stephen Frank tips us off; he concludes:

Frank says. “Campaign managers like to find excuses rather than looking at their candidate or performance. Do you think if we stopped [publishing results] others would — or the candidates would and the latter won’t go public or only partially public?”

True, to a point.

But he began the statement by saying:

“Please show me one credible study that shows people change their mind on the basis of a poll,”

On the one hand:  “You can’t proooooooooove we did it!”

On the other hand – allow me to introduce you to Dr. Albert Mehrabian, who published a study entitled “Effects of Poll Reports on Voter Preferences”.

From the abstract, with emphasis added:

Results of two experimental studies described in this article constituted clear experimental demonstration of how polls influence votes. Findings showed that voters tended to vote for those who they were told were leading in the polls; furthermore, that these poll-driven effects on votes were substantial.

How substantial?  I don’t know.  As I write this, it’s 5AM, and I have no way of getting to the University of Minnesota library to find a copy of Journal of Applied Social Psychology (Volume 28).  But I will.

But Mehrabian noted a decided “bandwagon effect” in voter responses to poll results.

Effects of polls on votes tended to be operative throughout a wide spectrum of initial (i.e., pre-poll) voter preferences ranging from undecided to moderately strong. There was a limit on poll effects, however, as noted in Study Two: Polls failed to influence votes when voter preferences were very strong to begin with.


I’d have voted for Tom Emmer even if he did finish 12 points back, as the Humphrey Institute suggested.  Or ten points out of the game, as Frank’s survey (which I ridiculed in this space) had it, or thirty points back.  But then, nobody really doubted that.

But people who don’t live and breathe politics?  That’s another story – says Dr. Mehrabian.

Additional findings of considerable interest showed that effects of polls were stronger for women than for men and also were stronger for more arousable (i.e., more emotional) and more submissive (or less dominant) persons.

Which would be important, in a year when the DFL was worried about women flaking away from Dayton, and moderates being drawn (successfully!) to the Tea Party.

Wouldn’t it?

Especially noteworthy is my discussion of similarities and differences between the study methods and real- life political campaigns beginning with the middle paragraph on page 2128 (“Overall, results …).

I’ll dredge up a copy of Mehrabian’s study (unless any of you academics out there can shoot me a pointer…).

Mehrabian was cited in this study of the subject – “Social information and bandwagon behaviour in voting: an economic experiment“, by Ivo Bischoff and Henrik Egbert, a pair of German economists; the paper isn’t primarily about poll-driven bandwagon effects – but it touches on them pretty heavily (all emphases are added by me):

The political science literature contains a number of empirical studies that test for bandwagon behaviour in voting. A first group of studies analyses data from large-scale opinion polls conducted in times of upcoming elections or on election days. The evidence from these studies is mixed (see the literature reviews in Marsh, 1984; McAllister and Studlar, 1991; Nadeau et al., 1997). One essential shortcoming of these studies is that it is very difficult to disentangle the complex interrelations between voting intentions, poll results and other pieces of information that drive both of the former simultaneously (Marsh, 1984; Morwitz and Pluzinski, 1996; Joslyn, 1997). Avoiding these difficulties, a second group of studies are based on experiments. Mehrabian (1998) presents two studies on bandwagon behaviour in voting. In his first study, he elicits the intended voting behaviour among Republicans in their primaries for the presidential election in 1996. He finds that the tendency to prefer Bob Dole over Steve Forbes depends on the polls presented to the voters. Voters are more likely to vote for Dole when he leads in the opinion poll compared to the situation with Forbes leading. The second study involves students from the University of California, Los Angeles. These are asked to express their approval to proposals for different modes of testing their performance: a midterm exam or an extra-credit paper. Mehrabian (1998) uses bogus polls in his studies. Results show that bogus polls do not influence the answers when subjects have clear and strong preferences. However, bogus polls have an impact when preference relations are weak. In this case, bandwagon behaviour in voting is observed. Next to Mehrabian (1998), there are a number of others experimental studies that find evidence for bandwagon behaviour in voting (Laponce 1966; Fleitas 1971; Ansolabehere and Iyengar 1994; Goidel and Shields, 1994; Mehrabian 1998).

It’s not an open-and-shut case, according to Bischoff and Egbert – but there is evidence to suggest that the “Bandwagon Effect” exists, and that polling drives it.

Is it possible that the learned Professors Larry Jacobs or Stephen Frank are unaware of this?  Certainly.

Given both polls’ lock-step consistency at under-polling GOP support, especially in close elections – the very races decided by people with weak initial preferences, people whose “preference relations are weak”, as Bischoff and Egbert put it, which might well be as good a description of “independents” and “swing voters” as I’ve seen – it’s worth a look, though.

More from Dr. Mehrabian in the near future.

The Great Poll Scam Part IX: The Rockstar Who Couldn’t See His Face In The Mirror

In his defense of the Hubert H. Humphrey Institute poll – which always underpolls Republicans in its immediate pre-election survey, by an average of six points, with the tendency even more exaggerated in close races – Professor Larry Jacobs writes (with emphasis added):

Appropriately interpreting Minnesota polls as a snapshot is especially important because President Barack Obama’s visit on October 23rd very likely created what turned out to be a temporary surge for Dayton. Obama’s visit occurred in the middle of the interviewing for the MPR/HHH poll; it was the only survey in the field when the President spoke on October 23rd at a rally widely covered by the press. Our write-up of the MPR/HHH poll emphasized that the President appeared to substantially increase support for Dayton and suggested that this bump might last or might fade to produce a closer race:

Well.  That kinda covers all the possibilities, doesn’t it?

Effect of Obama Visit: Obama’s visit to Minnesota on October 23rd and the resulting press coverage did increase support for Dayton. Among the 379 likely Minnesota voters who were surveyed on October 21st and 22nd (the 2 days before Obama’s visit), 40% support Dayton. By contrast, among the 145 likely Minnesota voters who were surveyed on October 24th and 25th (the 2 days after Obama’s visit) 53% support Dayton. This increase in support for Dayton could be a trend that will hold until Election Day, or it could be a temporary blip that will dissipate in the final days of the campaign and perhaps diminish his support.

Did you catch that?

Obama’s presence in the city caused Dayton’s numbers to boom by five points (if you take the HHH’s numbers at face value, something no well-informed person ever does), and then lurch downward by a dozen by election day?  The presence or absence of Barack Obama is responsible for one out of eight Minnesota voters changing their mind, and changing it back, inside of a week?

Obama’s impact in temporarily inflating Dayton’s lead is a vivid illustration of the importance of using polls as a snapshot.

No.  The HHH poll’s impact in temporarily inflating Dayton’s lead is a vivid illustration of why these polls need to be disregarded or abandoned!

Indeed, according to the MPR/HHH poll, Dayton’s lead before Obama’s visit was 8 points – nearly identical to the Star Tribune’s lead at nearly the same point in time (7 points). Treating polls as snapshots, then, is especially important when a major event may artificially impact a poll’s results or, as in the case of the MPR/HHH poll, there were a large number of voters who were undecided (about 1 out of 6 voters) or were considering the possibility of shifting from a third party candidate to the Democratic or Republican candidate.

Read another way:  “They’re snapshots, so we can’t be held accountable.  But keep the funding and recognition coming anyway”.

The take-home point: polls are only a snapshot of what can be a fast moving campaign as events intervene and voters reach final decisions. Polls conducted closest to Election Day are most likely to approximate the actual vote tally precisely because they are capturing the changing decisions of actual voters.

Newport diplomatically notes the real “take-home point”:

The authors raise the issue of the impact of President Obama’s visit to Minnesota on October 23rd. The authors note, and apparently reported when the poll was released, that interviews conducted October 24th and 25th as part of the MPR/HHH poll were more Democratic in voting intention than those conducted before the Obama visit. It is certainly true that “real world” events can affect the voting intentions of the electorate. In this instance, if the voting intentions of Minnesota voters were affected by the President’s visit, the effect would apparently have been short‐lived, given the final outcome of voting. The authors do not mention that the SurveyUSA poll also overlapped the Obama visit by at least one day. It is unclear from the report if there is other internal evidence in the survey that could be used to shed light on the Obama visit, including Obama job approval and 2008 presidential voting.

Up next – at noon – what effect do bogus polls really have on voters?

The Great Poll Scam Part VIII: Snapshots That Never Come Into Focus

I was reading Larry Jacobs’ defense of the Humphrey Institute’s shoddy work this past election.

His first point in defense is that polls are “a snapshot in time”:

Polls do not offer a “prediction” about which candidate “will” win. Polls are only a snapshot of one point in time. The science of survey research rests on interviewing a random sample to estimate opinion at a particular time. Survey interview methods provide no basis for projecting winners in the future.

So far so good.

How well a poll’s snapshot captures the thinking of voters at a point in time can be gleamed [sic] from the findings of other polls taken during the same period. Figure 1 shows that four polls were completed before the final week of the campaign when voters finalized their decisions.

I read this bit, and thought immediately of Eric Cartman playing Glenn Beck in South Park last season; disclaiming loathsome inflammatory statements with a simple “I’m just asking questions…”

Frank Newport at Gallup responded to this particular claim:

[Jacobs and his co-author, Joanne Miller begin] by discussing what they term a misconception about survey research, namely that polls are predictions of election outcomes rather than snapshots of the voting intentions of the electorate at one particular point in time. The authors present the results of five polls conducted in the last month of the election. The spread in the Democratic lead across the five polls ranged from 0 to 12. The authors note that the SurveyUSA poll was the closest to the election and closest to the actual election outcome. At the same time, the MPR/HHH poll was the second closest to Election Day and reported the highest Democratic margin. Another poll conducted prior to the MPR/HHH poll showed a 3‐point margin for the Democratic candidate.

Emmer’s internal poll showed a dead heat.  More on that later on this week.

Newport, with emphasis from me:

The authors in essence argue that the accuracy of any poll conducted more than a few days before Election Day is unknowable, since there is no external validation of the actual voting intentions of the population at any time other than Election Day. This is true, but raises the broader question of the value of polls conducted prior to the final week of the Election – a discussion beyond the scope of the report or this review of the report.

By inference, Newport is indicating that enough voters make up their minds right before Election Day to render pre-election polling essentially pointless.

Or is it?

Polling does affect peoples’ choices in elections; people don’t go to the polls when they know their candidate is going to become a punch line the next day; donors don’t turn out for races they are pretty sure are doomed.

And while Jacobs acknowledges that his poll is just a “snapshot” of numbers that may or may not have any bearing on the election itself, we noted a few weeks back that the Humphrey Poll’s results are less “snapshot” than “slide show”; they have a coherent theme.  Election in, election out, they short the GOP, especially in tight elections.  Every single significant election, no exceptions.  Tight GOP wins (2006 Gubernatorial), comfy Democrat wins (2008 Presidential), squeakers (2008 Senate, 2010 Gubernatorial) – every single one, without exception, without the faintest hint of the random “noise” that would suggest mere chance.  The HHH poll systematically shorts the GOP.

Given the completely non-random nature of this pattern – every election, no exceptions – there are three logical explanations:

  • The Humphrey Institute genuinely believes in the soundness of its polling methodology, which systematically (in the purest definition of the word) shorts GOP representation.
  • The Humphrey Institute is unable to change its methodology, or is structurally incapable of learning from its mistakes.
  • The Humphrey Institute is just fine with the poll’s inaccuracies, because it serves an unstated purpose.

To read Jacobs’ defense, you’d think…:

  • …that there’s nothing – nothing! – the HHH can do about fixing the inaccuracies of its “snapshot”, and…
  • …it’s all a matter of timing.

As we see elsewhere in the coverage of the Humphrey (and Strib) polls, both are false.

More later this week.

The Great Poll Scam, Part VII: Post Mortem

The Twin Cities’ media and academic establishment is starting to try to unpack the disaster of their polling efforts this past election cycle.

Minnesota Public Radio has done us the service of printing both the Humphrey Institute’s Larry Jacobs’ defense of the Humphrey Institute poll and a counter from Frank Newport of Gallup Polling. And David Brauer of the MinnPost does some excellent coverage, including a revealing interview with Cullen Sheehan, who was Tom Emmer’s campaign manager, with some rare insights into what a complete crock of used food Jacobs’ explanation is.

I’ll be trying to unpack this over the course of the coming week.

The Great Poll Scam, Part VI: The Hay They Make

We’ve been discussing the MPR/Humphrey Institute and Minnesota polls for the past two weeks.  Indeed, it’s been one of the ongoing “go to” subjects of this blog for almost eight years now.


Why?  Because while the polls themselves are risible, they have an effect on elections in Minnesota.

Part of it is in terms of people – "undecided", "independent" voters – going to the polls at all.  I've related on this blog several stories of people who pondered not going to the polls this past year.  Part of it was because of the overwhelming negativity about Tom Emmer portrayed by the media – negativity driven partly by the Alliance For A Better Minnesota's long, Dayton-family-funded, largely dubiously-factual smear campaign, but pushed hard in the media via the "polling" that they, themselves, commissioned.

Larry Jacobs at the Hubert H. Humphrey (HHH) Institute is the most over-quoted person in the Twin Cities media.  And during the campaign, Jacobs was as ubiquitous as ever, flogging the Humphrey Institute's polling first during the primaries (where the HHH's polls showed Dayton with a crushing lead, even though Dayton won the primary by a margin not a whole lot bigger than the one we currently have in the governor's race) and, finally, during the run-up to the election, when the HHH poll showed Dayton winning in a 12-point blowout.

We’re still working on the recount for the 0.4% race.

Jacobs defended the poll (quoted in LFR):

JACOBS: Well, you know, a poll is nothing more than a snapshot in time. We’ve begun the interviewing nearly 2 weeks before election day. Barack Obama visited and we talked openly about the fact that this would likely change. There are, of course, all kinds of other factors that happened at the end, including the fact the almost 1 out of 5 undecided voters in our poll started to make up their mind.

The other thing to remember is that there were a lot of other polls being conducted that showed the race closing at the time, something we were watching at the time, also.

That’s right, Dr. Jacobs.  There were a lot of other polls.

And except for the HHH and Minnesota polls, all of them showed a “snapshot in time” that was something close to the reality that eventually emerged on election day.

All of them.

So what?

Because opinion polling has an inordinate effect on media coverage and, less directly, the money and effort that people put into campaigns.

As to the media?  The New York Times has absorbed Nate Silver’s “Five Thirty Eight” stats-blog for its election polling coverage.  And throughout the race, the Times ran with the idea that Dayton was overwhelmingly likely to win.

And that supposition was based entirely on a statistical tabulation of opinion poll results.  And the stats were heavily based on the Minnesota and Humphrey polls, especially through the middle of the race, when the tone of the campaign was being set.  All together, the crunching of the opinion poll numbers led Silver to claim Dayton would win Minnesota by a convincing 6.6 points; since political statistics are an essentially weaselly "science", Silver also ran with an eight-point margin of error.

Naturally, the media ran with the 6.6 points; a little less with the margin of error.

Now, there’s some media attention – the Minnpost, the City Pages – to the ludicrous nature of the polls.  Jacobs:

“If a shortcoming is identified, we will fix it. If not, we will have third-party verification that our methods are sound.”

Dr. Jacobs:  take it from this third party; it’s flawed.  Flawed to the point of illegitimacy.

More on the Minnesota Poll later…


The series so far:

Monday, 11/8: Introduction.

Wednesday, 11/10: Polling Minnesota – The sixty-six year history of the Strib’s Minnesota Poll. It offers some surprises.

Friday, 11/12: Daves, Goliath:  Rob Daves ran the Minnesota Poll from 1987 ’til 2007.  And the statistics during that era have a certain…consistency?

Monday, 11/15: Hubert, You Magnificent Bastard, I Read Your Numbers!:  The Humphrey Institute has been polling Minnesota for six years, now.  And the results are…interesting.  In the classic Hindi sense of the term.

Wednesday, 11/17: Close Shaves: Close races are the most interesting.  For everyone.  Including you, if you’re reading this series.

Monday, 11/22: The Hay They Make: So what does the media and the Twin Cities political establishment do with these numbers?

Wednesday, 11/24: A Million’s A Crowd:  Attention, statisticians:  Raw data!  Suitable for cloudsourcing!

The Great Poll Scam, Part V: Close Shaves

It’s almost become a cliche, among conservative observers of Minnesota elections.  You’re supporting a Republican.  You know the race is close.  You can feel the race is close.

And the final Humphrey and Minnesota polls come out, and the DFLer leads by an utterly absurd margin – like this year's Humphrey Institute Poll, which showed a 12-point race…

…which, two days later, came in as a statistical dead heat, with much less than half a point separating the two candidates.

And yet the Minnesota and Humphrey Institute polls have their defenders.


Remember the 2006 Senate race?  Mark Kennedy vs. Amy Klobuchar?

The Minnesota poll did pretty well, all in all.  The final Minnesota poll showed Mark Kennedy getting 34 points to Amy Klobuchar's 55.  The race ended up 58.06 to just shy of 38.  The Minnesota poll showed both candidates doing a little worse than they eventually wound up doing, and by similar margins.

Defenders of the Minnesota Poll – media people and lefty pundits – chimed in.  “See?  The Minnesota poll is OK” or at the very least “The Minnesota Poll is an equal-opportunity incompetent”.

But if you’re a cynic – and when it comes to the Minnesota and Humphrey Polls, I most certainly am – the answer there is obvious; if you accept that the polls exist to help one party or another out of close jams (and let’s just say I think there’s a case to be made), then the real question is “how do the polls stack up when it really counts – during the close elections?

I took a look at the Minnesota poll's history with close races – gubernatorial, presidential and Senate races that ended up less than five points apart – over the past 66 years.  In these races since 1944 – twenty of them – the DFL ended up getting 47.69% to the GOP's 47.57% in the final elections.  The Minnesota Poll showed the DFL getting 44.3% to the GOP's 43.28% in the final pre-election polls.  Both sets of numbers are very close, of course.  The Minnesota Poll underrepresented Republicans by an average of 4.3 points, the DFL by 3.39.  So while the poll underrepresented Republicans in 14 of 20 races, the net skew was less than a point, on average.
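The close-race arithmetic here is easy to verify.  A minimal sketch in Python, using the aggregate figures just quoted (the helper name is mine):

```python
def shortfall(actual_pct, poll_pct):
    """Points by which the final poll ran below the actual election result."""
    return actual_pct - poll_pct

# 66-year aggregates for the twenty close (under-five-point) races, as quoted above
gop_short = shortfall(47.57, 43.28)  # GOP shorted by ~4.3 points
dfl_short = shortfall(47.69, 44.30)  # DFL shorted by ~3.4 points
net_skew = gop_short - dfl_short     # net anti-GOP skew: under a point

print(f"GOP -{gop_short:.2f}, DFL -{dfl_short:.2f}, net skew {net_skew:.2f}")
```

Run as-is, that prints a net skew of 0.90 – the "less than a point" figure above.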

But that’s over 66 years.  And if you recall from episode 1 of this series, the Minnesota Poll used to systematically undercount the DFL.  But long story short – looking at the poll’s entire history, things are fairly close.

When you look at the Rob Daves era at the Minnesota poll, though, things change.

In close races (a final difference of less than five points) during the Rob Daves era, the GOP has actually gotten a slightly higher average vote total – 46.77% to 46.48% – in the actual elections.  But the final Minnesota Poll has shown the DFL outpolling the GOP 43.33% to 40.78%.  Republicans come up an average of six points light in the final Minnesota Poll before the election, with DFLers finishing a little over three points short – nearly a 2-1 margin in underrepresentation.

In other words, in close races the Minnesota Poll has shown the GOP doing six points worse than they actually did, compared to three points for the DFL.  And the average Minnesota Poll has shown the DFL leading the GOP, when in fact the races have been mixed, with more Republican winners than in the previous 20-odd years of Minnesota history.

If you are an idealist, you could think that  it’s just a statistical anomaly.  To which the cynic notes that of eight close races, the GOP has been undercounted by less than the DFL exactly once.

The cynic might continue that it's entirely possible that the Minnesota Poll doesn't systematically short Republicans in close elections.  But given that the poll shorts Republicans in races that end up less than five points apart by an average of considerably more than five points, the cynic would ask: "if the Minnesota Poll were designed to keep Republicans home from the polls out of pure discouragement, how would it be any different than what we have now?"

Well, it could look like the Humphrey Poll.

Because the Humphrey Poll is worse.  Granted, it's a smaller sample size – there've been four "close" races (the 2004 presidential, and the 2006 gubernatorial, 2008 Senate and 2010 gubernatorial races, which were/are very close indeed).

But in those races, the DFL won by an average of 45.43% to 44.7% (most of the gap coming from the four-point 2004 presidential race; the other three had/have tallies within a point of each other).  But the final HHH poll showed the DFL/Democratic candidate winning by an average of seven points – 42.5% to 35.75%.  The DFL is underrepresented in the HHH's final pre-election poll by just a shade under three points; the GOP underpolls its real-life results by an average of almost nine points.

It’s possible that this is an honest error.  It is possible that the Humphrey Institute really, really believes that they have a likely voter model that accurately reflects Minnesota.  Perhaps it even does; maybe Minnesota really is a land of people who answer “DFL” on polls but come racing over to the GOP on election day.  But again – if the Humphrey Institute intended to help the DFL and keep Republicans home, it’s hard to see what they’d do differently.

Especially given the media’s reaction to these polls.

More on Friday.



The Great Poll Scam, Part IV: Hubert, You Magnificent Bastard, I Read Your Numbers!

The Hubert H. Humphrey Institute is a combination public-policy study program and think tank at the University of Minnesota in Minneapolis.  Named for the patriarch of the Democratic Farmer-Labor party – a forties-era amalgamation of traditional Democrats and neo-wobbly Farmer-Labor Union members whose Stalinist elements Humphrey famously purged in the mid-forties – the institution serves as a clearinghouse of soft-left chanting points and a retirement program for mostly left-of-center politicians and heelers.

The Institute has been doing general public opinion polling for years; in 2004, in conjunction with Minnesota Public Radio, they dove into the horserace game.

Let’s just sum up their performance in each of the five Presidential, Gubernatorial and Senate races they’ve polled in that time:

2004 Presidential Race

  • HHH Poll:  Kerry 43, Bush 37
  • Actual Election Results: Kerry 51, Bush 47
  • Bush underrepresented by 10.61, Kerry by 8.09.

2006 Gubernatorial Race

  • HHH Poll: Hatch 45, Pawlenty 40
  • Actual Election Results: Pawlenty 46.45.
  • Pawlenty underrepresented by six, Hatch polled accurately.

2006 Senate Race

  • HHH Poll: Klobuchar 54, Kennedy 34
  • Actual Election Results: Klobuchar 58.06, Kennedy 37.94
  • Kennedy underpolled by 3.94, Klobuchar by 4.06 – but it was a blowout.  We’ll come back to this.

2008 Presidential Election

  • HHH Poll: Obama 56, McCain 37
  • Actual Election Results: Obama 54.2, McCain 44.
  • Obama overrepresented almost two points; McCain, almost seven points under. A ten point race was portrayed as a 20 point landslide.

2008 US Senate Race

  • HHH Poll: Franken 41, Coleman 37
  • Actual Election Results: Franken by 41.99 to 41.98.
  • Franken underrepresented by less than a point; Coleman, by almost five.  A tie race was portrayed as a convincing five-point beat-down.

2010 Governor Race

  • HHH Poll: Dayton 41, Emmer 29.
  • Actual Election: Dayton 43.63, Emmer 43.21, recount in progress.
  • A tie race was depicted as a 12 point blowout.

A polling guru will say that these gross inaccuracies are a function of the Humphrey’s likely voter model – which for whatever reason assumed in each case that Democrats were much more likely to vote than Republicans, and likely to make up a greater portion of the electorate.

And yet the Humphrey Institute’s heuristics – the procedural, institutional and methodological rules by which institutions develop intelligence about things like voter behavior – seem to be stuck, for whatever reason, in the eighties.  The average HHH poll shows Republican candidates to be polling over five and a half points lower than Democrats in their real-life election performances.


In five of the six races covered above, the errors in measurement underrepresented the GOP.  That count is lower than the "Minnesota Poll"'s only because the Humphrey has been in business sixty fewer years than the Strib's poll.
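For readers who want to check those shortfalls, here's a rough Python sketch over the toplines listed in the race summaries above.  Two caveats: Hatch's actual 2006 number isn't given above, so the DFL side has only five entries, and the toplines are rounded, so the averages come out slightly different from the decimals quoted.

```python
# (final HHH poll, actual result) per candidate, from the race summaries above
gop = [(37, 47.00),   # Bush 2004
       (40, 46.45),   # Pawlenty 2006
       (34, 37.94),   # Kennedy 2006
       (37, 44.00),   # McCain 2008
       (37, 41.98),   # Coleman 2008
       (29, 43.21)]   # Emmer 2010
dfl = [(43, 51.00),   # Kerry 2004
       (54, 58.06),   # Klobuchar 2006
       (56, 54.20),   # Obama 2008 (the one overestimate)
       (41, 41.99),   # Franken 2008
       (41, 43.63)]   # Dayton 2010

def avg_shortfall(pairs):
    """Average points by which the final poll ran below the election result."""
    return sum(actual - poll for poll, actual in pairs) / len(pairs)

gop_short = avg_shortfall(gop)  # roughly 7.8 points under
dfl_short = avg_shortfall(dfl)  # roughly 2.8 points under
```

With these rounded, incomplete inputs the GOP-versus-DFL gap comes out near five points rather than the five and a half cited; the direction, and the order of magnitude, are the same.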

Why would this be?

More next week.

In our next installment: I've shown you the behavior of both polls in horseraces across the board.  But a particularly interesting bit of behavior comes out if you throw out the blowouts – the 30-point massacre in the 1994 governor's race, the 20-point slaughter in the 2006 Senate contest – and focus on the tight races.

More on Wednesday.



The Great Poll Scam, Part III: Daves, Goliath

Rob Daves took over the Minnesota Poll in 1987.


I have never met Rob Daves.  Neither, to the best of my knowledge, has anyone else.  I don't know that his alt-media bête noire, Scott Johnson, has even met him, despite not a few requests for interviews.

I have no idea what Rob Daves thinks, believes, wants, says or does.  I know nothing about his personal life, and I really don’t want or need to.  For all I know, he’s a perfectly wonderful human being.

But for a 20 year period under his direction, the Minnesota Poll turned into an epic joke.

How epic?

The numbers don’t lie.


During the Rob Daves years, party politics in Minnesota skittered all over the map.  The governor's office started DFL, changed hands, and may have changed back last week – we'll see.  The Reagan/Bush 41 era seesawed to Clinton, then Dubya, and now Obama; both Senate seats started Republican; both switched to the DFL, eventually.

There has, in short, been a lot of variety, at least in terms of the Party ID winning the various elections.

But the Minnesota Poll has been oddly homogenous.

Throughout the Rob Daves era, the Democratic or DFL candidate in Presidential, Gubernatorial and Senate races has gotten an average of 45.68% of the vote, to 45.21% for the GOP.  That’s very, very close.

Some of the races have been blowouts – Amy Klobuchar’s 20 point drubbing of Mark Kennedy, Arne Carlson’s 30 point hammering of John Marty – and some, like our 2008 Senate and 2010 Governor races, have been (or still are) painfully close.

But you’d never know it from the Minnesota poll. The average vote totals – between the blowouts and upsets and squeakers – during Daves’ 1987-2007 tenure favored the DFL, barely, by 45.98 to 45.34%.  But the Minnesota Polls released just before all those elections showed the population favoring the DFL by 43.33 to 39.89%.

And of 18 total contests, the polling inaccuracies skewed in the direction of the DFL in 15.   The average skew toward the DFL came to almost three percentage points.

When you break things out, the differences get wider; in the five presidential elections, the Minnesota Poll discerned a 49.67 to 36% DFL lead; the actual results were 50.13 to 41.64%.  The Minnesota Poll underrepresented the GOP by an average of 5.64% in presidential elections during the Daves years.  The Strib poll showed every single GOP candidate coming up short of his actual election performance: George HW Bush polled 3.80% light; Dole, 7.00%; Dubya, 8.50 and 6.61; McCain also polled seven points under his real performance.  The Democrats, on the other hand, were polled fairly accurately; the average error between poll and election for Democratic presidential candidates was less than half a point.

The Senate races are a little closer – Republicans underperform their election results by 4.29%, the DFL by 3.14%, a difference of 1.15%, which isn't very significant – if you just look at raw numbers.  We'll come back to that next Wednesday.

In the gubernatorial races during the Daves years, though, the polling results were pretty lockstep.  In gubernatorial races since 1987, the GOP has outpolled the DFL by an average of 46.77 to 38.91% – including one huge blowout (1994) and several squeakers.  But the Minnesota Poll has shown Minnesotans' preferences at 40.17 to 36.67 in favor of the GOP.  Republicans' performance was underpolled by 6.6% in the Minnesota Poll – that of the DFL by only 2.24%.  The poll shorted Republicans by almost triple the margin it shorted the DFL.

A classic – and large – example was the 2002 governor's race.  The election-eve Minnesota Poll showed Pawlenty tipping Moe by 35-32.  The real margin was 44-36.  While the poll overestimated Independence Party candidate Tim Penny's support by a fairly impressive margin, the fact is that while the final MN Poll undershot Moe's support by 4%, it underrepresented Pawlenty's by nine solid points.

All in all, of the 20 Presidential, Senate and Gubernatorial races during the Daves era, 16 of them showed the Minnesota Poll underpolling the GOP by a greater degree than the DFL.

And that’s just counting all the races.


Daves was let go at the Strib in 2007.  The Minnesota Poll was taken over by “Princeton Research Study Group”, which also does polling for Newsweek (whose polling is generally considered atrocious).

The 2008 races were very different, of course; the Senate race was a virtual tie, while Obama beat McCain handily.

But the day before the election, the Minnesota poll said McCain was polling just 37%; he ended up with 44%.  It overestimated Obama’s support by under a point, calling him at 55% when he got 54.2%.  The Minnesota Poll sandbagged Mac by seven points.

And Franken v. Coleman?  The day before the election, the poll showed Coleman almost four points below his actual performance (38% versus 41.98%); it nailed Franken almost dead-on (42% in the poll, 41.99% by the time the recount was over).

PRSA showed both GOP candidates performing drastically off their real pace on election eve.

And three weeks ago, a week before the gubernatorial election, the Minnesota Poll showed Emmer at 34%; he got 43.21%.  Nine points better than the Minnesota poll indicated.

The upshot?  Of the 20 total election contests in the Rob Daves and PRSA eras, the Minnesota Poll has underpolled GOP support in 17 – 85% – of those races.

And PRSA polling has, on average, underpolled the GOP by 6.12% in those three elections.   In other words, PRSA’s errors have favored the DFL to the tune of six points – which is more than the three-plus points of the Rob Daves era.

One might think that random statistics would scatter on both sides of the middle more or less equally.  And in the first 42 years of the Minnesota poll, in aggregate, they did, as we showed Wednesday.

But during the Daves years, and continuing with PRSA, the errors developed a consistency – shorting Republicans – and grew in magnitude.


Of course, those averages hide some big swings; some races in those averages were real blowouts.

It’s been my theory that the Minnesota Poll’s “peculiarities” are most pronounced during close elections.

We’ll test that out next Wednesday, when we’ll examine races that were decided by the proverbial cat’s whisker.

First – Monday – we’ll meet the Hubert H. Humphrey Institute Poll.




The Great Poll Scam, Part II: Polling Minnesota

My interest in the Minnesota Poll as an individual institution started right about the time I started this blog, six or eight years ago.

Now bear in mind that I, Mitch Berg, have made skepticism of the media at least a hobby, if not a fringey living, since 1986.  I have believed that the media needed to be distrusted and then verified for pretty much my entire adult life.

And yet until very recently, I maintained, if not a naive faith in the public opinion polling about elections, at least a detached sense that, somehow or other, they all evened out.   It was the same naivete that we all have about where babies and Christmas presents come from when we’re nine, or how entitlements get paid for when we’re 18 (50 for Minnesota government employees), or how sausage and bacon are made.

Ignorance is, indeed, bliss.

The scales started falling from my eyes when I started reading PowerLine.  Scott Johnson has been keeping his eye on the MNPoll for most of a decade now; he's led the pack of Minnesota bloggers in documenting the poll's abuses.

And in reading the history of conservative criticism of the Minnesota Poll, I started wondering – what is the historical context?

There’s more of it than I’d figured.


The Star Tribune started running public opinion polling of the Minnesota electorate in 1944.  It’s polled Minnesotans over a variety of topics, but the marquee subjects are always the big three elections – State Governor, US Senate and Presidential elections.

Now, if you’ve lived in Minnesota in the past fifty years or so (I go back half of that time – I moved here in ’85), it’s hard to believe that Minnesota used to be a largely Republican state.  Of course, the Republicans we had up until very recently were the type that make the likes of Lori Sturdevant grunt with approval – “progressive” Republicans like Elmer Anderson and Wheelock Whitney and the like.

I bring this up to note that while the various parties have changed – Republicans used to be "progressive", Democrats used to be "America First" – Minnesota party politics for the past 66 years has been a little more evenly matched than current political consciousness – shaped as it's been by Humphrey and Mondale and the "Minnesota Miracle" and Wellstone and Carlson – might make you believe.

Now, if you look at the Minnesota Poll’s statistics for the past 66 years – going back to the 1944 elections, for Governor, Senator and President – the Minnesota Poll is actually fairly even.  In that time, Republicans have gotten an average of 46.85 percent of the vote for all those offices, to 49.37% for DFLers.  During that time, the Minnesota Poll’s “election eve” predictions have averaged 44.1% for Republicans, and 46.77% for Democrats.  That means that over history, the big final Minnesota Poll has shown Republicans doing 2.75 points worse than they turned out, with DFLers coming in 2.59 points worse than they finally turned out.  The results have tended to be, over the course of 66 years, infinitesimally more accurate – .16% – for Democrats.  It’s insignificant, truly.

Indeed, when you go through the numbers from the forties and fifties, you can imagine some blogger back in 1958 decrying two things – the lack of an internet to blog on, and a serious pro-Republican bias in the Minnesota Poll.  In polls run before 1960, the Minnesota Poll predicted Republicans would get 51.58%, while GOP candidates for the big three offices actually got 50.32% of the vote – the poll overestimated Republicans by an average of 1.26%.  The DFL got an average of 49.73% of the vote during those years, while the Minnesota Poll had them at an average of 43.51% – 6.22% lower than they actually did (although this number is inflated by a truly horrible performance in the 1948 gubernatorial election, where the MNPoll had John Halstead at 25% in its pre-election poll; he ended up losing, but with 45%.  That had to be frustrating).  In all, before 1960, the Strib "Minnesota Poll"'s pre-election polls overestimated the GOP's performance relative to the DFL's in 76% of elections; the poll's errors favored the GOP by an average of almost 7.5 points.

By the mid-sixties, of course, Minnesota politics had changed drastically; the golden age of "progressive" politics, led by the likes of Hubert H. Humphrey and Walter Mondale for the DFL and Elmer Anderson for the GOP, left Minnesota a very different state.  During those years – from about 1966, after Barry Goldwater re-introduced a partisan divide to national politics for the first time, really, since the war, through 1990 – the DFL won the average vote 50.97 to 46.61.  The Minnesota Poll predicted DFL victories, on average, of 49.62 to 42.79; it underreported the final support for Republicans by an average of 3.83%, and for DFLers by 1.35% – an average skew of almost 2.5 points in favor of the DFL.

But if you look at the actual elections covered in those years – from 1966 to 1990, the "Golden Age of the DFL" – of the 21 contests for president, governor and senator, the Minnesota Poll's errors favored the Democrat in 13 of the elections, and inflated the GOP candidate's results in eight.  The 1980 presidential election skewed things a bit – the MNPoll underestimated Jimmy Carter's performance by 12.5% (Carter got 46.5%, while the MNPoll predicted 34%); it also overestimated Reagan's performance by a little over a point, making for one of the biggest pro-Republican skews in the recent history of the Minnesota Poll.

Overall, for the entire history of the Minnesota Poll from 1944 to 1986, the election-eve Minnesota Poll showed the public favoring the DFL by a 48.25% to 46.34% average margin; the actual elections favored the DFL 51.10 to 47.81.  The poll underpolled Republicans by an average of 1.47%, and Democrats by an average of 2.85%.  Of the 41 total contests in that time, the DFL was overestimated by a greater margin than the GOP in 44% of the polls – again, not a really significant number.
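The era-by-era figures above can be tabulated the same way.  A sketch in Python (the helper and the sign convention are mine; positive means the poll's errors ran against the GOP, negative against the DFL):

```python
def net_skew(gop_poll, gop_actual, dfl_poll, dfl_actual):
    """(GOP shortfall) minus (DFL shortfall): positive = errors ran against the GOP."""
    return (gop_actual - gop_poll) - (dfl_actual - dfl_poll)

# election-eve poll averages vs. actual results, per the figures quoted above
eras = {
    "pre-1960":  net_skew(51.58, 50.32, 43.51, 49.73),  # ~ -7.5: pro-GOP skew
    "1966-1990": net_skew(42.79, 46.61, 49.62, 50.97),  # ~ +2.5: pro-DFL skew
    "1944-1986": net_skew(46.34, 47.81, 48.25, 51.10),  # ~ -1.4: roughly a wash
}
```

The two sub-eras skew in opposite directions and mostly cancel across the whole 1944-1986 span – the "fairly balanced" first 42 years.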

In other words, the poll’s statistical vicissitudes were fairly balanced through its first 42 years.

But in 1987, the Strib hired Rob Daves to run the Minnesota Poll.

And things would change.




The Great Poll Scam: Introduction

The weekend before the election, I was talking with a friend – a woman who has become a newly-minted conservative in the past two years.  She’d sat out the 2008 election, and had voted for Kerry in ’04, but finally became alarmed about the state of this nation’s future – she’s got kids – and got involved with the Tea Party and started paying attention to politics.  And she was going to vote conservative.  Not Republican, mind you, but conservative.

And the Saturday before the election, she sounded discouraged.  “Have you seen the polls?” she asked.  “Emmer’s gonna get clobbered”.

I set her straight, of course – referred her to my blog posts debunking the election-eve Humphrey and Minnesota polls, and showed her the Emmer campaign's internal poll that showed the race a statistical dead heat (which, obviously, was the most accurate poll before election day).

She left the room feeling better.  She voted for Emmer.  And she voted for her Republican candidates in her State House and Senate districts, duly helping flip her formerly blue district to the good guys and helping gut Dayton’s agenda, should he (heaven forefend) win the recount.

But I walked away from that meeting asking myself – what about all the thousands of newly-minted conservatives who don’t have the savvy or inclination to check the cross-tabs?  The thousands who saw those polls, and didn’t have access to a fire-breathing conservative talk show host with a keen BS detector who’s learned to read the fine print?

How many votes did Tom Emmer lose because of the Hubert H. Humphrey and Minnesota polls that showed him trailing by insurmountable margins?

How many votes do conservatives and Republicans lose in every election due to these polls' misreporting?

Why do these two polls seem so terribly error-prone?  And why do those errors always seem to favor the Democrats, with the end result of discouraging Republican voters?



Public opinion polling is the alchemy of the post-renaissance age.  Especially “likely voter” polling; every organization that runs a poll has a different way of taking the hundreds or thousands of responses they get, and classifying the respondents as “likely” or not to vote, and tabulating those results into a snapshot of how people are thinking about an election at a given moment.

But the Star Tribune’s Minnesota Poll has, to the casual observer, a long history of coming out with polls that seem to short Republicans – especially conservative ones – every single election.  And the relative newcomer to the regional polling game, the Hubert H. Humphrey Institute’s poll done in conjunction with Minnesota Public Radio, seems – again, anecdotally (so far) – to take that same approach and supercharge it.

I’ve had this discussion in the past – David Brauer of the MinnPost and I had a bit of a back and forth on the subject, on-line and on the Northern Alliance one Saturday about a month ago.

And so it occurred to me – it’s easy to come up with anecdotes, one way or another.  But how do the numbers really stack up?   If you dig into the actual numbers for the Humphrey Institute and the Minnesota Poll, what do they say?

I’ll be working on that for the next couple of weeks.  Here’s the plan:

Next On The Agenda

The die has been cast.  The votes have been counted.  They’ll be counted again, shortly, as regards the governor’s race.

So what’s next?

It’s time someone investigated the Star-Tribune’s “Minnesota Poll” and the Hubert H. Humphrey Institute’s poll.

The Minnesota Poll – especially the one released one to seven days before every gubernatorial, presidential and Senate election – may not be an effort to drive down GOP voting, per se.

But if they were, it’s hard to say how the polls would be any different.

Investigation next week on Shot In The Dark.

A Not Remotely Modest Proposal

We don’t know how the Minnesota gubernatorial election is going to turn out yet.  I have my predictions in; you are welcome to make your own.

But one thing is for certain; it’s not going to be a 12 point race.

Which would provoke a curious person to ask: what is with the “Star Tribune Minnesota Poll” and the “MPR/Hubert H. Humphrey Institute” polls?

This week, they showed results for the gubernatorial election (MNPoll had Dayton +7, HHH had Dayton +12) that, I assert, may not actually be intended as DFL morale-builders – but if they were, it’d be hard to show how they’d be different.  Their oversample of Democrat “likely voters” may or may not be built on experience in Minnesota elections – but it doesn’t take a keen-eyed journalist to see that their methodology is drastically wrong.  Indeed, there are those who are taking that look; Jake Grovum at PIM does a good job of BS-detecting, covering ground Ed and I have covered on the show and on our various blogs over the past few months; it’s well worth a read.

And it doesn’t take a conspiracy theorist to look at the record of both of these polls and at least smell a rat.  The Minnesota Poll has a 20-plus-year record of showing DFL gubernatorial and Senate candidates faring an average of 7.5% stronger on the eve of the election than they actually perform.  I need to go over the figures for the Humphrey poll, but off the top of my head I do know that the HHH showed Mike Hatch leading by six points at this point in the ’06 campaign; somehow, Tim Pawlenty did seven points better than that.

It’s not that I’m qualified to bag on the inner workings of the statistician’s game; I dropped the class after one week in college.

But when you have…:

  • a twenty-year history with the Strib/MNPoll, and a growing history with the HHH poll, of…
  • …methodological errors that consistently produce 6-7 point polling misses, which…
  • …just happen – consistently, as in without exception – to favor the DFL candidate in close, important elections (forget about the 2006 Senate race), and which are…
  • …lavishly publicized at the beginning of the elections’ “get out the vote” phases…
  • …by the respective sponsoring news and academic organizations, both of whom can be accused – perhaps unfairly but definitely rationally – of having group cultures that favor, implicitly or explicitly, the party that is the consistent (invariable!) beneficiary of the statistical error, cycle after cycle after cycle…

…well, that strikes me as an interesting story.

Now, it’s been made clear to me in this election cycle that the elite of the Twin Cities political media establishment – the Rachel Stassen-Bergers and Tom Schecks and Bill Salisburys and Pat Kesslers and David Brauers and Erik Blacks and Tim Pugmires who do the heavy lifting at political coverage for the major regional media – don’t like mere peasants with blogs kibitzing about how they do their jobs, to say nothing about their story timing and selection.

But if I were a journalist (pardon the blasphemy – ’tis a silly thought), this would strike me as a subject worthy of some scrutiny.

Perhaps even…investigation!

But I suspect that job will be left to us mere unlettered peasants, in our spare time, over the next two years.

Just saying.

HHH Institute?  Princeton Research? Strib?  MPR?  Expect a phone call in early December.

The DFL Morale Builder, Part II

The Star Tribune‘s “Minnesota Poll” continues to serve its primary function – manipulating voter turnout.

As always with the MNPoll, the marquee numbers are nearly meaningless:

Dayton has strengthened his lead to 41 percent, according to the poll, followed by Emmer at 34 percent. Horner, who has struggled to get out of the teens in all public polls, is at 13 percent. That’s down from a peak of 18 percent last month.

The poll was conducted between Oct. 18 and 21 among 999 likely Minnesota voters on both land-line and cell phones. It has a margin of sampling error of plus or minus 3.9 percentage points.
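As a side note, it’s worth knowing where a number like that “3.9 percentage points” comes from.  The textbook 95-percent margin of sampling error for a simple random sample is 1.96·√(p(1−p)/n), which for 999 respondents works out to about 3.1 points; the larger reported figure presumably reflects an adjustment for the weighting (a “design effect”), though the Strib doesn’t say.  A quick sketch – just the standard formula, nothing specific to this poll:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a
    simple random sample of size n at proportion p (worst case p=0.5)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(moe(999), 1))  # about 3.1 points, versus the reported 3.9
```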

No, it’s the crosstab numbers that matter.  They’re buried on the second page of the online report, naturally:

In this poll, the sample of likely voters consisted of 34 percent Democrats, 31 percent independents and 30 percent Republicans.

A four-point overpoll of Democrats?  This year?
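To get a feel for how much a skew like that can move a topline, here’s a back-of-the-envelope reweighting sketch.  The party-level support shares below are hypothetical placeholders – not numbers from the Strib’s crosstabs – so the point is the mechanics, not the magnitudes:

```python
# Hypothetical share of each party-ID group backing the DFL candidate.
# These are made-up illustration numbers, NOT the Strib's crosstabs.
support = {"dem": 0.80, "ind": 0.35, "gop": 0.03}

def topline(mix):
    """Candidate's overall share given a party-ID mix (shares of the sample)."""
    return sum(mix[party] * support[party] for party in support)

polled   = {"dem": 0.34, "ind": 0.31, "gop": 0.30}  # the poll's reported mix
adjusted = {"dem": 0.30, "ind": 0.31, "gop": 0.34}  # same poll, D/R swapped

# Swapping four points of Democrats for Republicans moves the topline
# by about three points, with no change in anyone's actual opinion.
print(round(topline(polled) * 100, 1), round(topline(adjusted) * 100, 1))
```

In other words, the party mix the pollster “deems” likely to vote can move the headline number by several points all on its own.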

The poll is of 999 “likely voters” – and it’s there that the methodology goes from “reporting” to, as David Brauer puts it, “secret sauce”.

[the poll is] based on 804 land-line and 402 cell phone interviews conducted Oct. 18-21 with a representative sample of Minnesota adults. Of that sample, 999 were deemed to be likely voters, and the poll results are based on those respondents.

And there’s the detail in which the Devil is.  How does Princeton Research (the company that actually does the Strib’s polling) take those 1,200-odd adults and “deem” 999 of them to be “likely” voters?

We don’t know.  None of the major pollsters will say.

The article, by Rachel Stassen-Berger, goes on to squeeze in a puff piece for Dayton.

We really know two things:

The Minnesota Poll has, for a generation, always shown Republicans behind the week before the election – sometimes by ludicrous amounts – when they went on to win.

And the Minnesota Poll’s errors immediately before elections inevitably appear designed to drive down Republican turnout in elections that every other pollster in the business shows to be incredibly tightly contested.

It is time for someone to investigate the Strib’s polling operations, both under Princeton Research and, before 2007, under Rob Daves.  If Emmer wins – and I predict he will, by a three-point margin – it’ll be further proof that the Minnesota Poll is nothing but a get-out-the-DFL-vote/suppress-the-GOP-vote effort.

The deniability is plausible – but only just.

Chanting Points Memo: Garbage In, Garbage Out

Mark Dayton has run one of the single dumbest campaigns in Minnesota history.

Dayton himself has been a virtual non-entity, relying on the Twin Cities media’s inability and/or unwillingness to question him on his background, the immense gaps in his budget “plan”, his history of erratic behavior…anything.

His surrogates have been another matter entirely; “Alliance for a Better Minnesota” – whose financing comes almost exclusively from big union donors and from members and ex-members of Mark Dayton’s family of trust-fund babies – has run the slimiest, most defamatory campaign in Minnesota political history.   From mischaracterizing Emmer’s “DUI” record and slandering his efforts to reform Minnesota DUI laws, to its outright lies about his budget, ABM has profaned this state’s politics in a way that I only hope can be repaired in the future – although I doubt that will happen until the DFL decays to third-party status.

If it were a Republican group doing it, the Dems would be whining about “voter intimidation”.

The Dayton campaign, in short, has been not so much a campaign as an attempt to turn negative PR, social inertia and the ignorance of most voters to its advantage.  It hasn’t been a dumb campaign, per se; when your job is to sell Mark Dayton, “The Bumbler”, desperate situations call for desperate measures.  And as we saw in 1998, there are enough stupid people to make anything possible.

A big part of Dayton’s under-the-table campaign has been to convey the impression that Dayton’s coronation is inevitable.  If your nature is to be suspicious of institutions with long, arguably circumstantial records of bias, one might see the Minnesota Poll as an instrument toward that aim – given its three-decade record of showing DFLers doing an average of 7.5% better than they ended up doing.   (If you favor the Democrats, you might say the same about Rasmussen – if you ignored the fact that they’ve been consistently the most accurate major pollster for the last couple of cycles.  Other than that, just the same thing).

The latest chapter in this campaign has been the regional DFL bloggers’ chanting of the latest results from Nate Silver’s “Five Thirty Eight”, a political stats blog that was bought out by the NYTimes a while back.

Silver’s latest look at the Minnesota gubernatorial race gives Dayton an 83% chance of winning, in a six point race.

And that’s where the Sorosbloggers leave it.

Of course, Silver’s analysis on its face has a margin of error of a little over eight points – considerably larger than the forecast margin.

Of course, with any statistical, numerical output, you have to ask yourself – “are the inputs correct?”

Here are Silver’s inputs:

Courtesy 538/New York Times

The important column is the “538 Poll Weight” column, third from the right.  It shows how much weight Silver gives each poll in his final calculation.  The number is at least partly tied to time – but not completely; for some reason, the five-week-old Survey USA poll gets 20% more weight than the four-week-old Rasmussen poll; the October 6 Rasmussen poll that showed Emmer with a one-point lead gets about 3/4 the oomph of the latest Survey USA poll, which showed Dayton with a five-point lead…

…and whose “likely voter model” seemed to think that Democrats are four points more likely to show up at the polls than Republicans.  This year.
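For what it’s worth, a purely recency-based weighting could never hand an older poll more weight than a newer one.  Here’s a minimal exponential-decay sketch to show why; the 14-day half-life is a made-up constant, and Silver’s actual (proprietary) formula also folds in sample size and pollster ratings, none of which is modeled here:

```python
HALF_LIFE_DAYS = 14.0  # an assumed constant, not 538's actual parameter

def recency_weight(age_days):
    """Exponential-decay weight: a poll loses half its weight every half-life."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

# Approximate ages from the table above: SUSA about five weeks old,
# Rasmussen about four.
print(round(recency_weight(35), 3), round(recency_weight(28), 3))
```

Under any decay of this kind, the five-week-old SUSA poll gets strictly less weight than the four-week-old Rasmussen poll – so whatever boosted SUSA in Silver’s table, it wasn’t recency alone.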

Pollsters – and Silver – are fairly cagey about their methodology.  I’m not a statistics wiz.  I dropped the class after one week, in fact.  But I can tell when something isn’t passing the stink test.  Any poll that gives Democrats a four-point edge in turnout this year may or may not be wishful thinking (we’ll find out in less than two weeks, won’t we?), but it does seem to be based more on history than on current behavior – which, I should point out, involves a lot of hocus-pocus to predict even during a normal election.

And this is not a normal election.

I’m not going to impugn Nate Silver, per se – if only because I haven’t the statistical evidence.  Yet.

I will, however, impugn the NYTimes – but then, that’s what I do.  They very much do want to drive down Republican turnout.

And that is the main reason the DFL machine – including the ranks of more-or-less kept leftybloggers in this state – is parroting this “story” so dutifully.  They want to convince Republicans that all is lost.

Pass the word, folks.  We’re gonna win this thing.

Meet The New Poll, Same As The Old Poll?

Yesterday, I dubbed the Strib/”Minnesota” Poll “The DFL Morale Booster”.  Not for the first time, of course.

David Brauer writing at the MinnPost responded, more or less:

So with the new Star Tribune poll out showing DFLer Mark Dayton with a 9-point lead over Republican Tom Emmer, it’s the right’s turn to howl over alleged bias.

I dunno that I was “howling”, per se, but if one can’t use hyperbole in the last month of a campaign, when can one?  I’ll let it slide, while pointing out that I, and conservatives in general, have legitimate questions about the Minnesota Poll.

Brauer quotes a bit of yesterday’s post:

In the spirit of Dems accusing Rasmussen Reports of being a Republican house organ, Mitch Berg at the True North blog dubs the Strib results “The DFL morale-booster”:

I’ll remind you that if the Minnesota poll were accurate, we’d be referring to Governor Humphrey (the poll showed Moe with a strong lead over Coleman, with Ventura well out of the running), Senator Mondale (who had a five point lead in the MN Poll on the eve of the ’02 election), Governor Moe (to whom the MNPoll gave a slim lead, while significantly overpolling IP candidate Tim Penny in ’02), Governor Hatch (yep, slated to win in ’06)…

And he digs into some history, pointing out correctly that the Strib Poll changed pollsters in 2007, ditching Rob Daves, who presided over years of polling in which the Strib’s house poll was a laughingstock among those who paid attention.

And Brauer brings up a couple of valid points – points I never really disputed in my original piece.  Polls aren’t generally intended to be “predictions”.  And…

…missing the final margin doesn’t necessarily mean a pollster is wrong. Sentiment can swing in the voting booth, after polling ends. (This is why pollsters refer to their results as a “snapshot in time.”) Also, any poll has margin of sampling error. The trick is to see patterns — the so-called “house effect” toward a particular party, and whether results are consistent outliers.


And as I noted in my post, the Strib during the Daves years was an extremely consistent outlier.

Let’s begin with Daves’ last cycle, the 2006 election.

Mitch rakishly references “Gov. Hatch.” Here are the three major pollsters’ final November results, via Real Clear Politics’ roundups:

Brauer correctly notes that the Minnesota Poll put Hatch three points above Pawlenty; Rasmussen had him by two, and Survey USA called it a tie; none of the major polls showed Pawlenty winning.  Pawlenty, of course, won by one.  Brauer also notes that Daves correctly predicted A-Klo’s blowout against Mark Kennedy.

He then goes through the 2008 results – the first cycle without Daves, and the first with Princeton Research doing the math.

…the Strib picked two winners, SUSA two (we’ll give ’em the TPaw tie) and Rasmussen only the AKlo blowout.

Even allowing for GOP mewling that Franken stole the 2008 election, it seems clear that the three polls have circled the final result roughly equally. I’d also note that, at least from 2006 on, if you’re comparing the final polls to the eventual outcome, SUSA’s house effect is as Republican as the Strib’s is Democratic.

2008 – and to some extent 2006 – are not the best years to analyze, really; except for the Pawlenty/Hatch and Franken/Coleman races, neither was an especially suspenseful year, although the Minnesota Poll came out with a four- or five-point error in the DFL’s favor in both races.   In short – and to be admittedly cynical – the DFL didn’t need a morale boost in either of those cycles.  They won just about everything that mattered!

Brauer is correct that SUSA erred by the same margin in Coleman’s favor; I’d argue that at least some conventional wisdom would have backed that at the time, if not by five points.  But I doubt you can say with a straight face that Survey USA has a generation-long history of GOP bias averaging seven points per Presidential, Gubernatorial and Senate race.

Of course, Daves is out, and the Strib has Princeton, an ostensibly unbiased third party, doing the poll.  And that’s where we get into the real meat of this MNPoll: how has the methodology changed, and will it affect the MNPoll’s accuracy?

Whenever the Rasmussen and Humphrey polls show the gubernatorial race well within the margin of error, the regional leftyblog chorus chants in unison: “they only poll landlines”.  The MNPoll ostensibly addresses that:

As I’ve noted in several columns this month, the Strib’s 2010 polling now includes cellphone-only voters, a potentially significant methodological difference with Rasmussen, SUSA, and the Humphrey Institute/MPR poll.

Perhaps – if you presume that people who don’t have land lines are primarily younger and DFL-leaning, that the Humphrey and Rasmussen efforts to correct for this phenomenon aren’t valid (both note in their breakouts that they attempted to weight for it), and that younger/DFL voters are especially likely to vote in this cycle.

Brauer concludes:

A potentially bigger difference: how each pollster screens for likely general-election voters. I’m surveying the major pollsters on their “likely voter screens” and will let you know after I hear back from everyone.

That is, of course, a key question.  I’ll watch for Brauer’s followup.

Equally important, at least as regards the MNPoll, is how they broke out the numbers they did include in the poll: their sample of “likely voters” included 35% DFL, 28% Republican, 28% “independent” (but not necessarily “Independence Party”), and 9% “other parties” or undecided.

Is the party ID gap, in this year of the Tea Party, with the most motivated conservative base in a generation, really still 25 percent – 35 Democrats to 28 Republicans – in the DFL’s favor in Minnesota?

Are “independents” really going to break predominantly for Dayton, in this anti-big-government year?  In the Metro, perhaps – but statewide?

I’m no mathematician.  But this just doesn’t pass the stink test.

UPDATE 2: Welcome Politics in Minnesota reader!

UPDATE 3: Power Line notes that the Princeton Research Study Group is behind Newsweek’s polls – which came in dead last for accuracy in 2008.