Chanting Points Memo: Camouflaging The Battleground

The Strib “Minnesota Poll” is doing what it’s paid to do: create a pro-DFL bandwagon effect and suppress GOP voter turnout.  It’s calling Minnesota for Obama, 48% to Romney’s 40%.

But the poll uses the same absurd D41/R28 breakdown as the Marriage and Voter ID polls.  This polling would have you believe that while in 2008 – with a messianic media darling running against the legacy of an unpopular two-term president (McCain himself was irrelevant) and an unpopular war – the DFL had a six-point advantage in partisan turnout (D39/R33), this year, mirabile dictu, Democrats enjoy a thirteen-point advantage in this state?

If you use turnout numbers from somewhere in between 2008 and 2010 – say, D36/R34 – and multiply the changes by the percentage of each party that the poll itself says plans on voting for its candidate (93% of Democrats plan to vote for Obama, versus 96% of Republicans for Romney), then you wind up lopping off roughly .3% of Obama’s numbers and adding a whopping 5.8% to Romney’s.

That makes the real split 47.7% Obama, 45.8% Romney.  

Question – especially for you libs in the audience:  how is a widely (one might say “lavishly”) publicized poll, using a partisan split this state hasn’t seen since Watergate, to be interpreted as anything other than an elaborate voter-suppression scam?

Chanting Points Memo: “Minnesota Poll” Orders Material For A Narrative-Building Spree

If the history of the Minnesota Poll is any indication, yesterday’s numbers on the Marriage Amendment might be encouraging for amendment supporters:

The increasingly costly and bitter fight over a constitutional amendment to ban same-sex marriage is a statistical dead heat, according to a new Star Tribune Minnesota Poll.

Six weeks before Election Day, slightly more Minnesotans favor the amendment than oppose it, but that support also falls just short of the 50 percent needed to pass the measure.

Wow.  That sounds close!

But as always with these polls, you have to check the fine print.  And the “Minnesota Poll” buries its fine print in a link well down the page; you don’t ever actually find it in the story itself.  And it contains the partisan breakdown (with emphasis added):

The self-identified party affiliation of the random sample is: 41 percent Democrat, 28 percent Republican and 31 percent independent or other.

That’s right – to get this virtual tie, the Strib, in a state that just went through photo-finish elections for Governor and Senator, and has been on the razor’s edge of absolute equality between parties for most of a decade, sampled three Democrats for every two Republicans to get to a tie.

If you believe – as I do – that the “Minnesota Poll” is first and foremost a DFL propaganda tool, intended largely to create a “bandwagon effect” to suppress conservative turnout (and we’ll come back to that), then this is good news; the Marriage Amendment is likely doing better than the poll is showing.

What it does mean, though, is that they are working to build a narrative: that the battle over gay marriage is much more closely fought than it actually is.

And the narrative’s players are already on board with this poll.  The Strib duly interviews Richard Carlbom, the former Dayton staffer who is leading the anti-amendment campaign.

Actually, here’s my bet: the November 4 paper will show a “surge of support” that turns out to be much larger than any that actually materializes at the polls.

More At Noon.

UPDATE:  I wrote this piece on Sunday.  Monday morning, all of the local newscasts duly led with “both ballot initiatives are tied!”.

If you’re trying to find a construction job in Minnesota, you can get a job putting siding on the DFL’s narrative.

UPDATE 2:  Professor David Schultz at Hamline University – no friend of conservatism, he – did something I had more or less planned to do on Wednesday: he re-ran the numbers with a more realistic partisan breakdown:

Why is the partisan adjustment important? The poll suggests significant partisan polarization for both amendments, with 73% of DFLers opposing the marriage amendment and 71% of GOPers supporting. Similar partisan cleavages also exist with the Elections Amendment. If this is true, take the marriage Amendment support at 49% and opposition at 47%. If DFLers are overpolled by 3% and GOP underpolled by 6%, and if about 3/4 of each party votes in a partisan way, I would subtract about 2.25% from opposition (3% x .75) and add 4.5% to support (6% x .75) and the new numbers are 53.5% in support and 44.75% against. This is beyond margin of error.

If one applies the correction to the Elections Amendment there is about an 80% DFL opposition to it and a similar 80% GOP support for it. Then the polls suggest approximately 56.8% support it and 41.6% oppose.

Which brings us very nearly back to the 3:2 margin for the Voter ID amendment, and the tight but solid lead for the Marriage Amendment, that every other poll – the reputable ones, anyway – has found.
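For anyone who wants to check the math, Schultz’s adjustment is easy to reproduce.  Here’s a minimal sketch – the function and its name are mine, not Schultz’s; only the input figures come from the passage quoted above:

```python
# A minimal sketch of the adjustment Schultz describes: shift each side's
# topline by (points of over/under-sampling) x (share of that party voting
# as a bloc). The helper is hypothetical; the inputs are the quoted figures.

def adjust(topline, sampling_error_pts, partisan_loyalty):
    return topline + sampling_error_pts * partisan_loyalty

# Marriage Amendment: support 49%, opposition 47%; GOP underpolled ~6 points,
# DFL overpolled ~3 points; roughly 3/4 of each party votes its line.
support = adjust(49, +6, 0.75)   # 49 + 4.50 = 53.5
oppose  = adjust(47, -3, 0.75)   # 47 - 2.25 = 44.75
print(support, oppose)           # 53.5 44.75 - matching Schultz's numbers
```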

The Bandwagoneers

Have you noticed something?

No “Minnesota Poll” yet this cycle.  Ditto the Humphrey Institute.

Usually by this point in an election cycle, they’ve run a poll showing the Republican candidate down by some absurd amount that turns out to be many times greater than the eventual margin of victory (or defeat) for the DFLer.

Now, I’ve been writing about the HHH and Strib “Minnesota” polls for quite some time.  I noted that since 1988, the Strib Minnesota Poll has consistently shorted Republicans by a greater margin than Democrats in its pre-election polls – and that the discrepancy is even greater in the elections that end up closest.  I noted that the HHH poll is even worse – but that in polls where the DFLer appears to be in no danger, their polls end up being more accurate.

It is my contention that the Strib and the Humphrey Institute are allied – at least at the executive level – with the DFL, and use their polls to further the DFL’s ends; everyone involved is certainly aware of the “Bandwagon Effect” – the phenomenon by which voters who believe their candidates have no chance of victory will stay home.

So we’ve seen no “Minnesota” poll so far this cycle; Amy Klobuchar – perhaps the greatest beneficiary of media bias in the history of Minnesota politics, as the daughter of a former Strib columnist – seems to be in no great danger, so the polls say, from Kurt Bills (not to say I won’t do everything I can, personally, to fix that).  I’ll bet dimes to dollars the Strib polls wind up pretty darn close to the election totals, in fact!

———-

But the “Bandwagon” effect is going nationwide; Minnesota in 2008 and 2010 showed that it can keep juuuuuuuuust enough people home, if it’s relentless enough, to tip a close election.

And so you see the mainstream media already declaring the election over, based entirely on polling that assumes Democratic turnout the Democrats didn’t even get in 2008.

It is, in fact, the flip side of the “Low Information Voter” strategy they’ve run on their own side – convincing the ill-informed, the querulous and the not-bright that there’s a “war on women,” that Obama “stands with the 99%,” and that “the economy was Bush’s fault but it’s almost back, any day now.”  Now they’re trying to convince people, especially independents, who might be sick to death of Obama and possibly thinking of voting GOP, that it’s all hopeless and they should stay home.

Think about it.  Why else would they run polls that are transparently false?  That rely on assumptions that didn’t even hold in the post-Watergate election of 1976, much less in 2008, much less today?

Because only high-information voters dig into the partisan breakdowns (or read the bloggers who do), and the record in Minnesota shows there are just enough incurious, too-busy, ill-informed, and just plain un-bright people to sway the matter if it’s close enough.

The media at all levels – bald-faced cheerleaders like the LATimes and the Strib and the supposedly-ethical ones like NPR alike – are going to be beating the “it’s over” drum constantly ’til the election.

The well-informed people know it’s baked wind.

But it’s not aimed at them.

All That’s Silver Does Not Glitter

While the national polls show the presidential race a statistical toss-up, Nate Silver points out that polls conducted in the swing states show Obama with an actual lead of sorts – around three points:

While that isn’t an enormous difference in an absolute sense, it is a consequential one. A one- or two-point lead for Mr. Obama, as in the national polls, would make him barely better than a tossup to win re-election. A three- or four-point lead, as in the state polls, is obviously no guarantee given the troubles in the economy, but it is a more substantive advantage.

Here’s the part that caught my attention; I’ve added emphasis:

The difference isn’t an artifact of RealClearPolitics’s methodology. The FiveThirtyEight method, which applies a more complicated technique for estimating polling averages, sees broadly the same split between state and national polls.

On the one hand – well, doy.  Obama’s an incumbent elected in a wave, protected by a media that serves as his Praetorian Guard.  Of course he’s going to be polling well.

On the other hand?  My real point in this article is the abovementioned “FiveThirtyEight Method”.

I addressed this two years ago – when Silver, who is generally acknowledged to be a moderate Democrat, spent most of the 2010 campaign predicting a 6+ point Mark Dayton victory.

How did he arrive at that number?

  1. By taking an assortment of polls from around Minnesota, conducted by a variety of polling operations, and…
  2. Applying a weighting to each poll, the “538 Poll Weight”, which came from an unexplained formula known, near as I can tell, only to Silver (a generic version of such a weighted average is sketched below).  Which is not to say that it’s wrong, or statistically, intellectually or journalistically dishonest, per se – merely that it’s completely opaque.
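For what it’s worth, a weighted polling average is, in the abstract, trivial – the entire controversy is in where the weights come from.  A generic sketch (the polls and weights below are invented for illustration; Silver’s actual “538 Poll Weight” formula is not public):

```python
# Generic weighted average of poll toplines. The polls and weights below are
# placeholders for illustration only; Silver's actual weighting formula is opaque.

polls = [
    # (Dayton %, Emmer %, weight) - all hypothetical
    (41.0, 29.0, 1.4),
    (40.0, 38.0, 0.9),
    (39.0, 38.0, 0.7),
]

total_weight = sum(w for _, _, w in polls)
dayton = sum(d * w for d, _, w in polls) / total_weight
emmer  = sum(e * w for _, e, w in polls) / total_weight
print(round(dayton, 1), round(emmer, 1))  # heavily weighted polls dominate the average
```

Change the weights and the “average” moves – which is the whole point.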

But let’s take Silver’s methodology at face value – because he’s a respected statistician who works for the NYTimes, right?

The fact remains that, at least here in Minnesota, two of the polls that were given great weight in Silver’s methodology – the Star Tribune “Minnesota” poll and the Hubert H. Humphrey Institute poll – are palpably garbage, and should be viewed as DFL propaganda at best, calculated election fraud at worst.

We went through this in some detail after the 2010 election: there’s an entire category on this blog devoted to going over the various crimes and misdemeanors of Twin Cities media pollsters.  Long story short – since 1988, the Strib “Minnesota” poll has consistently shorted Republican support, especially in the polls closest to the elections, and especially in close elections.  The “Minnesota” poll’s only redeeming point?  The Humphrey Institute poll is worse.  In both cases, they tended – more so in closer races – to exaggerate the lead of the Democrat candidate for Governor, Senator or President.  For example, in 2010 both polls showed Mark Dayton with crushing, overwhelming, humiliating leads over Tom Emmer on election eve.  It ended up the closest gubernatorial race in Minnesota history.  The “Minnesota” poll was so bad, Frank Newport of Gallup actually wrote to comment on its dubious methodology.

I suspect the results are less a matter of mathematical background noise or methodological quirks – which, if truly random, would show distortions that even out between the parties over time – than of something systematic.  While it’s not provable without a whistle-blower from inside either or both organizations, I suspect the results shake out the way they do because of selection bias in setting up survey samples (if you’re inclined to believe people have integrity) or because of systematic bias working to achieve a “Bandwagon Effect” among the electorate (if you don’t have much faith).  Count me among the cynics; an organization with integrity would have noticed these errors long before a guy like me, who maxed out at Algebra I in college, did – and would have fixed the problem.  I’m willing to be persuaded, but you’ll have to have a much better argument than most of the polls’ defenders do.
The point being, this is the quality of the raw material that leads Nate Silver to his conclusions.  
And that should give Silver, and people who pay attention to him, pause.
I don’t know if the other state polls are as dodgy as Minnesota’s local media polling operations.  That’d be a great subject for a blogswarm.  

Where Used Car Salespeople Fear To Tread

Say what you will about the Minnesota Poll and the Hubert H. Humphrey poll.  As bad, inaccurate, DFL-biased and seemingly-rigged as both are, they both actually release their cross tabs – such as they are.

So far.

With the WaPo’s new practice of sitting on the data for their polls – which, naturally, show that Barack Obama has bounced back – I don’t expect that to last for long.

Ed Morrissey wrote about the new practice:

More importantly, though, the poll series has dropped its reporting of partisan identification within their samples.  It’s the second time that the poll has not included the D/R/I split in its sample report, and now it looks as though this will be policy from this point forward.  Since this is a poll series that has handed double-digit partisan advantages to Democrats in the past (for instance, this poll from April 2011 where the sample only had 22% Republicans), it’s not enough to just hear “trust us” on sample integrity from the Washington Post or ABC.

One cannot determine whether Obama’s improvement in this series is a result of the State of the Union speech, as Dan Balz and Jon Cohen suggest, or whether it’s due to shifting the sample to favor Democrats more so than in previous samples.  The same is true for the Post’s report that Obama “for the first time has a clear edge” over Romney head-to-head.  One would need a poll of registered or likely voters to actually make that claim (one has to register to cast a vote, after all), and one would need to see the difference in partisan splits between this and other surveys in the series to determine whether the movement actually exists or got manufactured by the pollster.

Expect the effort to get Obama re-coronated to result in the extinction of whatever passes for “Journalistic Standards” in the polling industry.

He’s Baaaaaack

The lefties were all atwitter yesterday over a poll in the MinnPost that purported to show that Minnesotans blame the Minnesota GOP for the shutdown:

By a whopping 2-1 margin, Minnesotans blame the Republicans who control both houses of the Legislature for the recent government shutdown more than they blame Gov. Mark Dayton, according to a poll taken this week for MinnPost.

 

Predictably, most Republicans blamed Dayton more (by 56 to 10 percent, with the rest saying both sides were to blame or holding no opinion). DFLers blamed the Republicans by an even more overwhelming majority (68 percent to just 2 percent of DFLers who blamed DFLer Dayton).

 

But the key swing group of self-identified independents was also much more likely to blame Republicans than to blame Dayton. Among independents, 46 percent “blamed” the Republicans, 18 percent blamed Dayton and 25 percent both.

Hm. That sounds bad!

It also sounded familiar – indeed, it sounded right in line with a prediction I made in this space mere weeks ago.  Go ahead and read it; Prediction 1 was a month late, and it appeared in the MinnPost rather than the Strib, but the piece was written by Erik Black and Doug Grow, former Strib staffers.  The feeling of deja vu was so overwhelming…

…that when I first read this post, I practically predicted the bit that is emphasized in the quote below:

Based on other questions in the poll, it was difficult to say whether the fallout from the shutdown will give DFLers a significant advantage heading into the 2012 elections, as Republicans seek to retain their majorities. Projecting current attitudes onto an election 16 months in the future would be folly.

 

Also, this poll, conducted for MinnPost by Daves & Associates Research, was designed to take the pulse of the state in the aftermath of the shutdown, not to predict the next election. No likely voter screen was used and sample surely includes non-voters.

And there you have it.  The MinnPost gets its polling from “Daves and Associates”.  That’d be Rob Daves – the guy who ran the Minnesota Poll for 21 years – the poll whose election-eve polls on Gubernatorial, Senate and Presidential races *always* showed the GOP doing worse – usually much worse – than it ended up doing.

And if it’s a post on politics in Minnesota by Strib alums Black and Grow, who else just has to show up?

Humphrey Center Political Scientist Larry Jacobs said the results of the new poll were “basically bad news for the Republicans.”

 

“They have to think about this fact,” said Jacobs. “The principles that they ran on in 2010 — that they would advocate for cuts only and would refuse to go along with any tax increase — may still be the principles that appeal to the most enthusiastic base of support they have. But that position seems to be pretty unpopular not only with two-thirds of Minnesotans, but with half of their own party, all of whom prefer a mix of significant spending cuts and at least some tax increases.”

Yep, Dr. Jacobs, whose Hubert H. Humphrey Institute Poll is even worse, and whose methodology was openly and publicly savaged by Frank Newport of Gallup last year after the Humphrey Institute polls were not only grossly wrong (predicting a 12-point Dayton blowout in a gubernatorial race that ended up decided by about .4%) but were shown to have systematically oversampled strongly DFL areas of the state.

Both Daves’ and Jacobs’ polls, as I showed last year, shared an interesting trait: if the final result of an election ended up being really close, like the ’08 Senate and ’10 Governor’s races (as opposed to blowouts, like the ’06 Senate race), the Minnesota and HHH Polls *both* shorted Republicans *even more*.

The reason? Well, it’s a known fact that voters are prone to the “Bandwagon Effect”; they do tend to go along with what polls tell them, positively or negatively.  My theory?  While it’s conceivable that the Strib, Rob Daves, the MinnPost, the HHH Institute and Larry Jacobs are all unaware of the “bandwagon effect,” I’d be a lot more convinced of that if Daves didn’t have a 24-year record of shorting the GOP on controversial, loaded polls when the chips were down (and if Jacobs’ polls weren’t even worse, over seven years).

The poll canvassed fewer than 600 random adults – not registered, much less likely, voters – and, as usual, it heavily oversampled self-identified DFLers and unspecified “independents”.

Fearless Predictions

I have a couple of predictions for you.

Prediction 1:  Polled To Death – Take this to the bank:  sometime before July 1, the Strib will run another “Minnesota Poll” in re the shutdown.

The poll’s headlines will be within one rhetorical standard deviation of  “65% of Minnesotans Favor Compromise On Budget Impasse”.

The crosstabs, carefully buried, will show that DFLers are oversampled by 50%; those trying to investigate the faint whiff of metrocentrism in the polling will be frustrated by the absolute lack of crosstabs showing geography.

Prediction 2: Dead Silence – Despite the avalanche of evidence coming out of the state bureaucracy that Dayton is not only pushing for the shutdown, but actively trying to make it “hurt” as much as possible, there will be not one word on the subject from the Strib, WCCO, the PiPress, the KARE Bears (whose John Cronan is rapidly shaping up to be an Esme Murphy-grade stealth-DFL  propagandist), or MPR.

Place your bets.

Or make your own predictions, in the comment section.

Strib Poll: Empowering The Powerful, Gulling The Gullible

The poll was as drearily predictable as the annual stadium extortion-fest; notwithstanding last November’s GOP sweep of the Legislature, yet another Star Tribune “Minnesota Poll” shows that the public is, mirabile dictu, entirely on board with the DFL agenda:

Sixty-three percent of respondents said they favor a blend of higher taxes and service reductions to tackle the state’s $5 billion projected deficit. Just 27 percent said they want state leaders to balance the budget solely through cuts.

The poll comes [with utter predictability – Ed.] as the Republican-led Legislature and the DFL governor head into the final week of a legislative session still dug in on their vastly different approaches to balancing the budget.

Dayton said the results show the public backs his position. Republicans said the results run counter to last fall’s election and what they are hearing from Minnesotans.

Predictable?  Absolutely.  Whether through editorial perfidy or lazy methodology, the Strib’s “Minnesota Poll” has a long history of releasing “news” the DFL needs, exactly when it needs it – especially when the issue is close-fought.  The harder-fought the issue, the more absurdly lopsided the Strib poll – like the “Humphrey Institute” poll run for many years by the U of M and MPR – seems to be.  Right when the DFL needs it.

My theory: the DFL knows full well how the “bandwagon effect” in polling works for manipulating public perception; the Strib serves the DFL, wittingly or not.

And, sure enough, the poll’s methodology was as predictable as the Strib’s smug headline; emphasis is added by me:

Today’s Star Tribune Minnesota Poll findings are based on 565 landline and 241 cellphone interviews conducted May 2-5 with a representative sample of Minnesota adults. Interviews were conducted under the direction of Princeton Survey Research Associates International.

Results of a poll based on 806 interviews will vary by no more than 4.7 percentage points, plus or minus, from the overall population 95 times out of 100.

The self-identified party affiliation of the random sample is 33 percent Democrat, 23 percent Republican and 37 percent independent. The remaining 7 percent said they were members of another party, no party or declined to answer.

Results for the question about the best approach to solving the budget deficit — primarily through service reductions or through a combination of tax hikes and spending cuts — are based on interviews with 548 of the 806 respondents. The question was reasked in follow-up calls to all respondents because of a problem in the original wording of the question, and 548 of the respondents were reached. Results of a poll based on 548 interviews will vary by no more than 5.7 percentage points, plus or minus, from the overall population 95 times out of 100.
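(As an aside, those quoted error margins are easy to sanity-check.  The textbook margin of error for a simple random sample comes out a bit smaller than the figures the Strib reports – I assume the difference reflects some design-effect adjustment on their end, but that’s my assumption, not anything in their methodology note.)

```python
# Textbook 95% margin of error for a simple random sample, at p = 0.5.
# The Strib quotes 4.7 and 5.7 points; the simple formula gives less,
# which I assume reflects a design-effect adjustment they don't spell out.

def moe_95(n, p=0.5):
    return 1.96 * (p * (1 - p) / n) ** 0.5 * 100  # in percentage points

for n in (806, 548):
    print(n, round(moe_95(n), 1))  # 806 -> ~3.5 points, 548 -> ~4.2 points
```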

In other words, a group which self-reports its political leaning, whose geographical weighting and mix are unknown (remember the Humphrey Institute’s overweighting of Minneapolis respondents? Which they didn’t bother to report until after the election, even though their actual poll, which indicated a 12 point blowout for Mark Dayton, went out on schedule, right before the election?), and where the “independents” are given no known context, and which gives the DFL a completely unearned 50% head start, shows the public solidly behind Mark Dayton.

Just like it needed to.

I doubt the Twin Cities media will ever admit that the “Minnesota Poll” and the “Humphrey Institute” polls are, intentionally or not, pro-DFL propaganda. But it’s gotten to the point where the evidence doesn’t support any other conclusion.

Trump, The Media, and Bandwagons

For background, I’ll refer you to…:

The Huckabee Corollary to the McCain Corollary to Berg’s Eleventh Law: The Republican that the media covers most intensively before the nomination for any office will be the one that the liberals know they have the best chance of beating after the nomination, and/or will most cripple the GOP if nominated.

If you’re like me, you looked at the polls “showing” Donald Trump “leading” the GOP field and thought “Huckabee Corollary!”.

Nate Silver – fresh from playing a role in engineering the DFL’s “Bandwagon Effect” in the Minnesota gubernatorial election last year – notices the media blitz on Trump without, I suspect, getting the “Why“:

One of the few pieces of statistical evidence that we can look toward at this early stage of the presidential campaign is the number of media hits that each candidate is receiving. Apart from being interesting unto itself, it’s plausible that this metric has some predictive power. At this point in 2007, Barack Obama and John McCain were receiving the most coverage among the Democratic and Republican candidates respectively, and both won their races despite initially lagging in the polls.

In contrast to four years ago, however, when the relative amount of media coverage was fairly steady throughout the campaign, there have already been some dramatic shifts this year. Sarah Palin’s potential candidacy, for instance, is only receiving about one-fifth as much attention as it did several months ago.

In the past, I’ve usually used Google News to study these questions, but I’ve identified another resource — NewsLibrary.com — that provides more flexibility in search options and more robustness in its coverage. (One problem with counting things on Google is that the number of hits can vary fairly dramatically from day to day, for reasons I don’t entirely understand.)

(Another downside to Google News: it seriously overweights the left).

I’ve counted the number of times on NewsLibrary.com in which the candidate’s name appeared in the lead paragraph of the article, and a select combination of words appeared down in the article body. In particular, I’ve looked for instances in which any combination of the words “president”, “presidential” or “presidency” appeared, as well as any of the words “candidate”, “candidacy”, “campaign”, “nomination” or “primary.”

The idea is to identify cases in which a candidate was the main focus of the article (as opposed to being mentioned in passing) and when the article was about the presidential campaign itself (as opposed to, say, Mr. Trump’s reality show). The technique isn’t perfect — there are always going to be a few “false positives” from out-of-context hits — but it ought to be a reasonably good benchmark for the amount of press attention that each candidate is getting.
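The filter Silver describes is simple enough to picture in code – something along these lines, though the articles and data structure below are made up for illustration, and NewsLibrary’s actual query syntax is its own:

```python
# A toy version of the filter described above: candidate's name in the lead
# paragraph, plus president-related and campaign-related words in the body.
# The sample articles are invented for illustration.

PRESIDENT_WORDS = {"president", "presidential", "presidency"}
CAMPAIGN_WORDS = {"candidate", "candidacy", "campaign", "nomination", "primary"}

def is_campaign_hit(article, candidate):
    lead = article["lead"].lower()
    body = article["body"].lower()
    return (candidate.lower() in lead
            and any(w in body for w in PRESIDENT_WORDS)
            and any(w in body for w in CAMPAIGN_WORDS))

articles = [
    {"lead": "Donald Trump said Monday...", "body": "...his presidential campaign..."},
    {"lead": "Donald Trump's reality show returns...", "body": "...ratings were up..."},
]
print(sum(is_campaign_hit(a, "Trump") for a in articles))  # counts only the campaign story
```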

And the results?

So far this month, however, Ms. Palin has accounted for just 124 hits out of 1,090 total, or roughly 11 percent. Instead, her place has been taken by Mr. Trump, who has accounted for about 40 percent of the coverage.

The decline in media coverage for Ms. Palin tracks with a decline in her polling numbers. Whereas she was pulling between 15 and 20 percent of the Republican primary vote in polls conducted several months ago, she’s down to about 10 percent in most surveys now.  Mr. Trump, meanwhile, whose media coverage has increased exponentially, has surged in the polls, and is essentially in a three-way tie for the lead with Mitt Romney and Mike Huckabee over an average of recent surveys.

Hm.  What do you suppose the odds were that the mainstream media would pump the hell out of a buffoonish cartoon like Trump at the expense of the serious GOP candidates?

After the MN gubernatorial election, we noted that the “Bandwagon Effect” is known to have an effect on election turnout, as shown in academic studies on the subject.  As studied, it’s a negative effect – people are less likely to turn out for candidates that the media says are getting drubbed in the polls (like Emmer, whose near-tie race the Humphrey Institute’s polling last fall showed as a 12-point loss, with all-too-convenient timing).

So why would the media not be building up Trump as a “force to be reckoned with”?  It’s a win/win for the Media and the Democrats (pardon the redundancy); as long as Trump is pictured as a contender, GOP candidates have to waste time and money fighting the strawman with the bad combover.  And if by some freak of fate he gets the nomination (he won’t, because he’s no conservative, but let’s run with it) the media will tear him down promptly, because – let’s be honest – that’s what he’s there for.

This blog will be watching the libs/media and their bandwagoning over the next year and a half.  It’ll be a growth industry.

The Great Poll Scam, Part XIV: Fool Me Ten Times…

You’ve heard the old saying – “the definition of insanity is repeating the same thing over and over again and expecting a different result.”

The joke writes itself.  Nearly every election season, Minnesota’s media runs the results of the Star Tribune Minnesota Poll and the Humphrey Institute/MPR Poll on its front pages; front and center on its 6 and 10 PM newscasts; up front in its hourly news bites; in the New York Times; prominently on that big news crawl above Seventh Street in downtown Saint Paul.  To those who don’t dig into the numbers – and that’s probably 99 percent of Minnesota voters – that’s all there is to it.  “Hm.  Looks like Dayton’s winning big!”

In most elections – especially the close ones – both polls (along with their downmarket stepsibling, the SCSU Poll) show numbers for the GOP candidate that beggar the imagination.  The media – the Strib, the TV stations, MPR – run the polls pretty much without any analysis.  The job of actually fact-checking the polling falls to conservative bloggers – myself, MDE, Ed Morrissey, Scott Johnson and John Hinderaker, Gary Gross, the Dogs, Sheila Kihne and others; poll after poll, election after election, we shout into the storm: “The numbers are a joke! Democrats are oversampled to an extent that is not warranted by any electoral result we’ve seen in this state in nearly a generation!  Would someone look into this?”

The elections take place.  There is hand-wringing about the inaccuracy of the polls.  Two years pass.  Larry Jacobs and the  Strib release still more polls, repeating precisely the same pathologies, over and over and over.  Forever and ever, amen.  Lather, rinse, repeat.

Now, “journalism” is supposed to be about accuracy and clarity.  About telling the story, and telling it in a way where your sources reinforce your credibility and clarity.  If you are a reporter, and you report a story based on a source’s information, and that information turns out to be wrong, it’s a bit of a vocational black eye.

This morning I asked, rhetorically: do you think that if a source burned Tom Scheck or Pat Doyle or Rochelle Olson or Rachel Stassen-Berger over and over, year in and year out, by feeding them laughably inaccurate information – not just once or twice, but on nearly every story on which they were a key source – they would keep using that source?  Without really serious corroboration, if indeed it could be found?  Ever?

And yet the regional media continues running the Strib and HHH polls, election after election, without any serious question – until after the election, anyway.  Notwithstanding the fact that the Strib’s Minnesota Poll has been regularly wrong for a generation now.  Notwithstanding the fact that the Humphrey Poll has been even more consistent in its systematic shorting of GOP candidates.  The polls are still treated not only as useful news, but as front-page material.

This would prompt a curious person to ask a whooooole lot of questions:

Why do the pollsters continue to generate such a defective product?  While I focused heavily over the past few days on the critique of the Humphrey Institute poll by Gallup’s Frank Newport, that gives the impression that this is a one-time issue.  And yet both of the major media polls have had nearly the same problems, election in, election out, for a generation (or, in the case of the Humphrey Institute Poll, in every major election since 2004).  It’s gotten to the point where I want to stand outside 425 Portland, or outside the Humphrey Institute’s building at the U, and wave a sign reading “It’s the same thing, every time!”

Why do the media continue to present such a routinely defective product as newsworthy?  Scott Johnson has been lighting up the “Minnesota Poll’s” shortcomings for a solid decade now; the Strib’s poll is rarely even close, and it performs worse in close elections than in blowouts.  And at the risk of repeating myself, let me repeat myself: the Humphrey Institute poll has underpolled Republicans by an average of nine points.  This past election was distinguished from the previous years’ ineptitude only in degree, not in concept.

Does it never occur to our “watchdogs” and “gatekeepers” to look into this?  Wasn’t “insatiable curiosity” once a prerequisite for being a reporter?

Do the editors at the Strib, the PiPress, KARE, MPR, WCCO and the rest of the regional mainstream media genuinely consider “polls are a snapshot in time” an excuse for decades worth of a pattern of inaccuracy, not only in polling technique but in their own coverage of elections?

If a city council member is caught cashing checks to herself, would saying “it’s just a snapshot in time!” get the Strib to call off its dogs?

Appearance Of…Something?: I’ve said it before; I’m not a fundamentally conspiracy-minded person.  I don’t necessarily believe that the media is involved in a conscious, considered conspiracy to short conservative candidates in close elections.

Still – given that…:

…I’ll ask again: if the Humphrey Institute (whose institutional sympathies lean definitively left-of-center) and the Strib (ditto) wanted to create a system that would help tip close-call contests toward the DFL, how would it be any different than the system they’ve developed?

Not accusing.  Just asking.

The Great Poll Scam, Part XIII: Reality Swings And Misses

Contrary to the impression some have left on various blogs, I never worked for the Emmer campaign.  Oh, I did a fair amount of writing about Emmer’s bid for governor – I thought he had what it took to be the best governor we’ve had in a long time, and I was a supporter from long before he actually declared his intent to run.  I volunteered a lot of time, and a lot of this blog’s space, to fight against the sleaziest, most toxic smear campaign in recent Minnesota electoral history, and I do believe the better man lost this election.

But I never got any money for it.

What I did get – although not to an extent that would make a Tom Scheck or a Rachel Stassen-Berger in any way jealous – was a certain amount of access.  I heard things.

One of the things I heard from sources inside the Emmer campaign, especially during the long, dry, advertising-dollar-free summer before the primaries, when all three DFL contenders curiously spent their entire ad budgets sniping at Emmer, and the media played dutiful stenographers for Alliance for a Better Minnesota’s smear campaign, was that the Emmer campaign had its work cut out for it.  In late July and early August, a source inside the Emmer campaign, speaking on MI-5-level deep background, told me the internal polls showed Emmer trailing by 12 points.  It wasn’t good news, certainly – but it was early in the race, it was a byproduct of being outspent roughly 16:1 to that point, and it was just part of doing business.   “We gotta pick up six points, and Dayton’s gotta lose six”, the source told me, as the campaign dug its way out of “Waitergate”.

I observed to the source that that should have been nothing new for Emmer; he’d come back from a bigger margin in the previous nine months or so, from being way back in the pack at the Central Committee straw poll about this time last year, where Marty Seifert won by a margin many considered insurmountable.

The source expressed confidence it could be done.

He was, statistically, exactly right. Emmer brought the race back from a 12 point blowout to a near-tie, with numbers that pretty steadily improved – according to the party’s own internal polling.

Steadily?

On October 11, I held a “Bloggers For Emmer” event at an undisclosed location in the western subs.  It had been ten busy weeks since my off-the-record conversation with my source in the campaign.  An Emmer functionary told me – off the record – that it was now a four point race.  

A week later, within ten days of the election, the same internal poll said the race was a statistical dead heat.

Then came the last-minute hit polls from the Humphrey Institute, the Strib and Saint Cloud State – after which Emmer released his internal polling, which a Survey USA poll more or less confirmed.

And then came the election.

Last week, David Brauer at the MinnPost interviewed Emmer campaign manager Cullen Sheehan.  As part of the piece, he graphed the respective polls: Emmer’s internal polling (orange), the Strib poll (wide dashes) and the HHH poll (dots), showing the indicated size of the Dayton lead.

Graph used by permission of the MinnPost


Brauer:

Although “internal numbers” often become propagandistic leaks, Sheehan insists the data was not for public pre-election consumption. Though he wound up releasing the most favorable result during the campaign, it proved prescient, and two independent pollsters subsequently showed similar results.

And while Brauer points out that internal numbers “aren’t holy” – and many leftybloggers openly guffawed when Sheehan released them – the GOP’s internal numbers have a long record of accuracy, in my experience.  In 2002, when the Strib poll had Roger Moe measuring the drapes in the mansion, a GOP source leaked me internal polling showing that Pawlenty was tied and rising.  And internal polling released to a group of bloggers a month before the election showed Chip Cravaack pulling close to Jim Oberstar; numbers that the campaign asked be kept off the record showed that with “leaners”, Cravaack was actually leading.

So for all the leftyblogs’ caterwauling about “push polling,” the GOP’s internal polls – as seen both publicly and behind the scenes – called things as they were.  There’s a reason for that; parties need accurate polling to help them allocate scarce resources effectively.  The DFL has not released its internal polling – but the Dayton campaign’s behavior indicates to me that it also saw Emmer’s late surge, leading it to re-roll-out the “Drunk Driving Ad” (the closest the Dayton campaign ever came to a coherent policy statement, with full irony intended).

But neither side’s internal polling is affiliated with a major media outlet.  The Strib, Minnesota Public Radio and the MinnPost all have symbiotic relationships with Princeton Survey Research, the Humphrey Institute and Saint Cloud State, respectively (though to be accurate, the MinnPost only paid for three questions in the SCSU poll, and those were, according to Brauer, on ranked-choice voting).  Those relationships, presumably, exist so that the news outlets can get “their” results out to the public first.

No matter how they’re arrived at, or so it seems.

Brauer confirms after the fact what my sources in the campaign told me, off the record, at the time; it was a real numerical rollercoaster ride:


“It really is, internally, a compass,” Sheehan says of the campaign’s polling.

Emmer’s own numbers show a candidate trailing — sometimes badly — for nearly the entire race.

On July 28 — three weeks after Emmer’s interminable “tip credit” debacle — the Republican trailed Dayton by 11 points. Ironically, the Star Tribune poll — which Republicans say overstates DFL support — had it closer: Dayton plus-10.

It was a demonstrable fact that the Strib poll oversampled DFL voters by a big margin – but that’s a poll-technique discussion to be held some other time.

In the wake of the double-digit gap, Sheehan took over as campaign manager. But by early October, the internal numbers had barely budged: Emmer was still down 7. A Strib survey taken a week or so earlier showed the Republican down 9 — again, pretty close to what the campaign was seeing.

Finally, on Oct. 13, Emmer got his first great inside news: he was only down 1. But the next media poll (SurveyUSA/KSTP) had him down 5, and an Oct. 18 internal poll repeated that number. It was two weeks before Election Day.

And then came the Big Three media polls, one after the other – the Strib, SCSU and the Humphrey polls – showing Emmer 9, 10 and 12 points down, respectively.  At which point Sheehan opted to release the internal numbers – which were shortly reinforced by SUSA.

Sheehan:

“At that point [right before the election – the polls on which I’ve focused throughout this series], undecided voters are making up their minds and supporters are getting anxious, having seen 7 down, 10 down and 12 down,” Sheehan says. “It impacts fundraising and volunteers. It’s definitely not the only factor, but it is a factor.”

Sheehan, now the Minnesota GOP Senate caucus chief of staff, is a Republican, but Democratic Senate Majority Leader Harry Reid’s pollster feels similarly. Reid’s internal numbers proved better than media polls predicting his opponent would win.

Says Sheehan, “The point I am making is that outside public polls have an impact on campaigns — ultimately, some impact on eventual outcome of campaigns, especially in close races.”

At least one media outlet agreed even before the results were known. This year, the Star Tribune declined to do its traditional final-weekend poll. A key reason, editor Nancy Barnes told me, is that “a poll can sometimes influence the outcome of an election.”

Sheehan’s plea? Withhold questionable numbers. “I’m under no illusion that public polls will cease, but I do think news organizations have a responsibility to ask themselves, when they get their results, if they really believe they’re accurate,” he says.

I’ve met Sheehan not a few times.  Great guy.  Big future in politics.  Now, I’m not sure if he’s ever read this series; if he has, I’m sure he needs to be diplomatic.  He’s gotta get along with the regional media.

But the fact remains that the closer the race got, the farther off-the-beam the Strib and HHH polls swerved.

Just the same as they do in practically every election, especially the close ones.

So Sheehan has a point; the news media should treat suspicious polls as they would a source that’s burned them. 

Seriously – can you imagine Erik Black or Bill Salisbury or David Brauer putting a story on the front page (or “page”) based on the uncorroborated word of a source that had burned them, over and over again?  As in, not even close, but really, really embarrassingly burned?

And the Strib and Humphrey Polls have burned the regional media – over and over and over again.

Presuming, of course, that accuracy is what they’re shooting for.

More later today.

The Great Poll Scam, Part XII: The Dog Ate Their Homework

Writing in defense of the Humphrey Institute Poll – which indicated our near-tie governor’s race was headed for a 12-point blowout – Professor Larry Jacobs says:

Careful review of polls in the field conducting interviews during the same period indicates that the MPR/HHH estimate for Emmer (see Figure 2) was within the margin of sampling error of 3 of the 4 other polls but that it was also on the “lower bound” after accounting for random sampling error. (Its estimate was nearly identical to that of MinnPost’s poll with St. Cloud.)

Which showed the race a ten point blowout for Dayton.

Jacobs is, in effect, saying “yeah, our poll was a hash – but so was everyone else’s”.

This pattern is not “wrong” given the need to account for the margin of sampling error, but it is also not desirable. As part of our usual practice, the post-election review investigated whether there were systematic issues that could be addressed.

Research suggests three potential explanations for the MPR/HHH estimate for Emmer; none revealed problems after investigation.

Indeed.

Here are the three areas the Humphrey Institute investigated:

Weighting: First, it made sense to examine closely the weighting of the survey data in general and the weighting used to identify voters most likely to vote. Weighting is a standard practice to align the poll data with known demographic and other features such as gender and age that are documented by the U.S. Census. (Political party affiliation is a product of election campaigns and other political events and is not widely accepted by survey professionals as a reliable basis for weighting responses.)

Our own review of the data did not reveal errors that, for instance, might inflate the proportion of Democrats or depress that of Republicans who are identified as likely to vote. To make sure our review did not miss something, we solicited the independent advice of well-regarded statistician, Professor Andrew Gelman at Columbia University in New York City, who we had not previously worked with or personally met. Professor Gelman concluded that the weighting was “in line with standard practice” and confirmed our own evaluation.

“And an expert said everything’s hunky dory!”

Our second investigation was of what are known as “interviewer effects” based on research indicating that the race of the interviewer may impact respondents. (Forty-four percent of the interviewers for the MPR/HHH poll were minorities, mostly African American.) In particular, we searched for differences in expressed support for particular candidates based on whether the interviewer was Caucasian or minority. This investigation failed to detect statistically significant differences.

And the third was the much higher participation in the poll from respondents in the “612” area code – Minneapolis and its very near ‘burbs.  Jacobs (with emphasis added by me):

When analyzing a poll to meet a media schedule, it is not always feasible to look in-depth at internals.

It’s apparently more important to make the 5PM news than to have useful, valid numbers.

With the time and ability that this review made possible, we discovered in retrospect that individuals called in the 612 area code were more prone to participate than statewide — 81% in the 612 area as compared to 67% statewide in the October poll. Given that Democratic candidates traditionally fare well among voters in the 612 area code, the higher cooperation rate among likely voters in the 612 area code may explain why the estimate of Emmer’s support by MPR/HHH was slightly lower than those by other polls conducted at around the same time. This is the kind of lesson that can be closely monitored in the future and addressed to improve polling quality.

Except we bloggers have been “closely monitoring” this for years.  It’s been pointed out in the past; on this very blog, I have been writing about this phenomenon since 2004 at the very latest.  Liberals looooove to answer polls.  Conservatives seem  not to.

That Jacobs claims to be just discovering this now, after all these years, is…surprising?
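To see why this matters, here’s a toy illustration of how a higher cooperation rate in a Democratic-leaning area code drags the statewide number.  The 81%/67% cooperation rates are the ones Jacobs reports; the population share and the partisan splits are invented purely for the example:

```python
# Toy illustration of the "612 skew." The cooperation rates (81% vs 67%) are
# the ones Jacobs reports; every other number here is invented for illustration.

pop_share_612 = 0.15                   # hypothetical share of voters in the 612 area code
dayton_612, dayton_rest = 0.65, 0.42   # hypothetical Dayton support, 612 vs rest of state
coop_612, coop_rest = 0.81, 0.67       # cooperation rates from Jacobs' review

# True statewide support, weighted by where voters actually live
true_support = pop_share_612 * dayton_612 + (1 - pop_share_612) * dayton_rest

# Sample composition if the differing cooperation rates aren't corrected for:
raw_612 = pop_share_612 * coop_612
raw_rest = (1 - pop_share_612) * coop_rest
sample_share_612 = raw_612 / (raw_612 + raw_rest)
sampled_support = sample_share_612 * dayton_612 + (1 - sample_share_612) * dayton_rest

print(round(true_support * 100, 1), round(sampled_support * 100, 1))
# With these made-up numbers, the uncorrected sample reads about half a point
# more Democratic than the state - and that's before any other problems.
```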

Frank Newport at Gallup critiques Jacobs’ report:

The authors give the cooperation rate for 612 residents compared to the cooperation rate statewide. The assumption appears to be that this led to a disproportionately high concentration of voters in the sample from the 612 area code. A more relevant comparison would be the cooperation rate for 612 residents compared to all those contacted statewide in all area codes other than 612. Still more relevant would be a discussion of the actual proportion of all completed interviews in the final weighted sample that were conducted in the 612 area code (and other area codes) compared to the Census estimate of the proportion of the population of Minnesota living in the 612 area code, or the proportion of votes cast in a typical statewide election from the 612 area code, or the proportion of the initial sample in the 612 area code. These are typical calculations. The authors note that residents in the 612 area code can be expected, on average, to skew disproportionately for the Democratic candidate in a statewide race. An overrepresentation in the sample of voters in the 612 area code could thus be expected to move the overall sample estimates in a Democratic direction.

That Jacobs finds an excuse for failing to weight for higher participation in a city that is right up there with Portland and Berkeley as a liberal hotbed would be astounding, if it weren’t the Humphrey Institute we’re talking about.

The authors do not discuss the ways in which region was controlled in the survey process, if any. The authors make it clear that they did not weight the sample by region. This is commonly done in state polls, particularly in states where voting outcomes can vary significantly by region, as apparently is the case in Minnesota.

Summary:  The HHH poll is sloppy work.

The Great Poll Scam Part XI: Weasels Rip My Results

Professor Larry Jacobs – by far the most-quoted non-elected person in Minnesota – defends the Humphrey Institute Poll:

Differences between polls may not be substantively significant as illustrated by the case of MinnPost’s poll with St. Cloud State, which showed Dayton with a 10 point lead, and the MPR/HHH poll, which reported a 12 point lead.

The “margin of sampling error,” which is calculated based on uniform formulas used by all polling firms, creates a cone around the estimate of each candidate’s support, reflecting the statistical probability of variation owing to random sampling. The practical effect is that the results of the MinnPost poll with St. Cloud State and MPR/HHH are, in statistical terms, within range of each other. Put simply, the 2 points separating them may reflect random variation and may well not be a statistically meaningful difference.

What might be a “statistically meaningful difference” is that Survey USA and Rasmussen both came much, much closer to getting the actual election right – with roughly one-third to one-quarter of the error of the Strib, HHH and St. Cloud polls – and tracked much closer to the GOP’s internal polling, which turned out to be dead-nut accurate (as we’ll see tomorrow).

Figure 2 creates a zone of sampling error around estimates of support for Dayton and Emmer by the five media polls completed during the last two weeks of the campaign. In terms of the estimates of Dayton’s support, the MPR/HHH poll is within the range of all four other polls. Take home point: its estimate of Dayton’s support was consistent with all other polls.

Well, no.  It was consistent with the other polls that have developed a reputation for inaccuracy that inevitably favors the DFL.  The other polls – Survey USA, Rasmussen, Laurence – were not consistent with the Humphrey poll at all.

Frank Newport of Gallup responds to this:

It is unclear from the report how much the write-up of results from the October 21-25 MPR/HHH poll emphasized the margin of error range around the point estimates. Although this is not part of their recommendation, if the authors feel strongly that the margin of error around point estimates should be given more attention, future reports could include more emphasis on a band or range of estimated support, rather than the point estimates.

In other words, if the Humphrey Poll is really a range, with no special confidence in any particular number within that range, publicize the range.

But that’s not what the Humphrey Institute, or the media, led with just before the election.  It was “DAYTON LEADS BY 12”.  Not “Dayton leads by 8 to 16, maybe, sorta”.

The distinction might make a difference.

This is generally not done in pre‐election polling, under the assumption that the point estimate is still the most probable population parameter. Any education of the public on the meaning of margin of errors and ranges and comparisons of the margins of errors surrounding other polls is an admirable goal. It does, however, again raise the question of the purpose of and value of pre‐election polls if they are used only to estimate broad ranges of where the population stands. This topic is beyond the scope of this review.

In other words – if you take Jacobs at his word, then there’s nothing really newsworthy about the HHH poll.

Do you suppose they’ll stick with that line in the runup to the 2012 election?

The Great Poll Scam, Part X: Weasel Words

I’ve been raising kids for a long time.  Before that, I grew up around a bunch of them.  Indeed, I was one myself, once.

And I know now as I knew then the same thing that every single person who watches Cops knows, instinctively; if you think someone did something, and their response is “you can’t prove it”, it’s the same as an admission of guilt.

Oh, it doesn’t stand up in court – and it’s probably a good thing.

And in the rarified world of academics – and its poor, profoundly handicapped accidental offspring, political public opinion polling – I’m going to suggest it works the same way.

If there is a poll that is, year in and year out, just as ludicrous as the Humphrey and Strib polls, it’s the Saint Cloud State University poll.  I haven’t heretofore included it in my “Great Poll Scam” series, because it’s sort of out of sight and out of mind.

But in David Brauer’s interview with Emmer campaign manager Cullen Sheehan, Stephen Frank – the director of the SCSU poll, which is done in conjunction with the MinnPost – tips us off.  He concludes…:

Frank says. “Campaign managers like to find excuses rather than looking at their candidate or performance. Do you think if we stopped [publishing results] others would — or the candidates would and the latter won’t go public or only partially public?”

True, to a point.

But he began the statement by saying:

“Please show me one credible study that shows people change their mind on the basis of a poll,”

On the one hand:  “You can’t proooooooooove we did it!”

On the other hand – allow me to introduce you to Dr. Albert Mehrabian, who published a study entitled “Effects of Poll Reports on Voter Preferences”

From the abstract summary, with emphasis added:

Results of two experimental studies described in this article constituted clear experimental demonstration of how polls influence votes. Findings showed that voters tended to vote for those who they were told were leading in the polls; furthermore, that these poll-driven effects on votes were substantial.

How substantial?  I don’t know.  As I write this, it’s 5AM, and I have no way of getting to the University of Minnesota library to find a copy of Journal of Applied Social Psychology (Volume 28).  But I will.

But Mehrabian noted a decided “bandwagon effect” in voter responses to poll results.

Effects of polls on votes tended to be operative throughout a wide spectrum of initial (i.e., pre-poll) voter preferences ranging from undecided to moderately strong. There was a limit on poll effects, however, as noted in Study Two: Polls failed to influence votes when voter preferences were very strong to begin with.

Bingo.

I’d have voted for Tom Emmer even if he had been 12 points back, as the Humphrey Institute suggested.  Or ten points out of the game, as Frank’s survey (which I ridiculed in this space) had it, or thirty points back.  But then, nobody really doubted that.

But people who don’t live and breathe politics?  That’s another story – says Dr. Mehrabian.

Additional findings of considerable interest showed that effects of polls were stronger for women than for men and also were stronger for more arousable (i.e., more emotional) and more submissive (or less dominant) persons.

Which would be important, in a year when the DFL was worried about women flaking away from Dayton, and moderates being drawn (successfully!) to the Tea Party.

Wouldn’t it?

Especially noteworthy is my discussion of similarities and differences between the study methods and real-life political campaigns beginning with the middle paragraph on page 2128 (“Overall, results …”).

I’ll dredge up a copy of Mehrabian’s study (unless any of you academics out there can shoot me a pointer…).

Mehrabian was cited in this study of the subject – “Social information and bandwagon behaviour in voting: an economic experiment“, by Ivo Bischoff and Henrik Egbert, a pair of German economists; the paper isn’t about the bandwagon effect – but it touches on it pretty heavily (all emphases are added by me):

The political science literature contains a number of empirical studies that test for bandwagon behaviour in voting. A first group of studies analyses data from large-scale opinion polls conducted in times of upcoming elections or on election days. The evidence from these studies is mixed (see the literature reviews in Marsh, 1984; McAllister and Studlar, 1991; Nadeau et al., 1997). One essential shortcoming of these studies is that it is very difficult to disentangle the complex interrelations between voting intentions, poll results and other pieces of information that drive both of the former simultaneously (Marsh, 1984; Morwitz and Pluzinski, 1996; Joslyn, 1997). Avoiding these difficulties, a second group of studies are based on experiments. Mehrabian (1998) presents two studies on bandwagon behaviour in voting. In his first study, he elicits the intended voting behaviour among Republicans in their primaries for the presidential election in 1996. He finds that the tendency to prefer Bob Dole over Steve Forbes depends on the polls presented to the voters. Voters are more likely to vote for Dole when he leads in the opinion poll compared to the situation with Forbes leading. The second study involves students from the University of California, Los Angeles. These are asked to express their approval to proposals for different modes of testing their performance: a midterm exam or an extra-credit paper. Mehrabian (1998) uses bogus polls in his studies. Results show that bogus polls do not influence the answers when subjects have clear and strong preferences. However, bogus polls have an impact when preference relations are weak. In this case, bandwagon behaviour in voting is observed. Next to Mehrabian (1998), there are a number of others experimental studies that find evidence for bandwagon behaviour in voting (Laponce 1966; Fleitas 1971; Ansolabehere and Iyengar 1994; Goidel and Shields, 1994; Mehrabian 1998).

It’s not an open-and-shut case, according to Bischoff and Egbert – but there is evidence to suggest that the “Bandwagon Effect” exists, and that polling drives it.

Is it possible that the learned Professors Larry Jacobs or Stephen Frank are unaware of this?  Certainly.

Given both polls’ lock-step consistency at under-polling GOP support, especially in close elections – the elections decided by people with weak initial preferences, people whose “preference relations are weak”, as Bischoff and Egbert put it, which might be as good a description of “independents” and “swing voters” as I’ve seen – it’s worth a look, though.

More from Dr. Mehrabian in the near future.

The Great Poll Scam Part IX: The Rockstar Who Couldn’t See His Face In The Mirror

In his defense of the Hubert H. Humphrey Institute poll – which always underpolls Republicans in its final pre-election survey, by an average of six points, with the tendency even more exaggerated in close races – Professor Larry Jacobs writes (with emphasis added):

Appropriately interpreting Minnesota polls as a snapshot is especially important because President Barack Obama’s visit on October 23rd very likely created what turned out to be a temporary surge for Dayton. Obama’s visit occurred in the middle of the interviewing for the MPR/HHH poll; it was the only survey in the field when the President spoke on October 23rd at a rally widely covered by the press. Our write-up of the MPR/HHH poll emphasized that the President appeared to substantially increase support for Dayton and suggested that this bump might last or might fade to produce a closer race:

Well.  That kinda covers all the possibilities, doesn’t it?

Effect of Obama Visit: Obama’s visit to Minnesota on October 23rd and the resulting press coverage did increase support for Dayton. Among the 379 likely Minnesota voters who were surveyed on October 21st and 22nd (the 2 days before Obama’s visit), 40% support Dayton. By contrast, among the 145 likely Minnesota voters who were surveyed on October 24th and 25th (the 2 days after Obama’s visit) 53% support Dayton. This increase in support for Dayton could be a trend that will hold until Election Day, or it could be a temporary blip that will dissipate in the final days of the campaign and perhaps diminish his support.

Did you catch that?

Obama’s presence in the city caused Dayton’s numbers to boom by five points (if you take the HHH’s numbers at face value, something no well-informed person ever does), and then lurch downward by a dozen by Election Day?  The presence or absence of Barack Obama is responsible for one out of eight Minnesota voters changing their minds and changing them back inside of a week?

Obama’s impact in temporarily inflating Dayton’s lead is a vivid illustration of the importance of using polls as a snapshot.

No.  The HHH poll’s impact in temporarily inflating Dayton’s lead is a vivid illustration of why these polls need to be disregarded or abandoned!

Indeed, according to the MPR/HHH poll, Dayton’s lead before Obama’s visit was 8 points – nearly identical to the Star Tribune’s lead at nearly the same point in time (7 points). Treating polls as snapshots, then, is especially important when a major event may artificially impact a poll’s results or, as in the case of the MPR/HHH poll, there were a large number of voters who were undecided (about 1 out of 6 voters) or were considering the possibility of shifting from a third party candidate to the Democratic or Republican candidate.

Read another way:  “They’re snapshots, so we can’t be held accountable.  But keep the funding and recognition coming anyway”.

The take-home point: polls are only a snapshot of what can be a fast moving campaign as events intervene and voters reach final decisions. Polls conducted closest to Election Day are most likely to approximate the actual vote tally precisely because they are capturing the changing decisions of actual voters.

Newport diplomatically notes the real “take-home point”:

The authors raise the issue of the impact of President Obama’s visit to Minnesota on October 23rd. The authors note, and apparently reported when the poll was released, that interviews conducted October 24th and 25th as part of the MPR/HHH poll were more Democratic in voting intention than those conducted before the Obama visit. It is certainly true that “real world” events can affect the voting intentions of the electorate. In this instance, if the voting intentions of Minnesota voters were affected by the President’s visit, the effect would apparently have been short‐lived, given the final outcome of voting. The authors do not mention that the SurveyUSA poll also overlapped the Obama visit by at least one day. It is unclear from the report if there is other internal evidence in the survey that could be used to shed light on the Obama visit, including Obama job approval and 2008 presidential voting.

Up next – at noon – what effect do bogus polls really have on voters?

The Great Poll Scam Part VIII: Snapshots That Never Come Into Focus

I was reading Larry Jacobs’ defense of the Humphrey Institute’s shoddy work this past election.

His first point in defense is that polls are “a snapshot in time”:

Polls do not offer a “prediction” about which candidate “will” win. Polls are only a snapshot of one point in time. The science of survey research rests on interviewing a random sample to estimate opinion at a particular time. Survey interview methods provide no basis for projecting winners in the future.

So far so good.

How well a poll’s snapshot captures the thinking of voters at a point in time can be gleamed [sic] from the findings of other polls taken during the same period. Figure 1 shows that four polls were completed before the final week of the campaign when voters finalized their decisions.

I read this bit, and thought immediately of Eric Cartman playing Glenn Beck in South Park last season; disclaiming loathsome inflammatory statements with a simple “I’m just asking questions…”

Frank Newport at Gallup responded to this particular claim:

[Jacobs and his co-author, Joanne Miller, begin] by discussing what they term a misconception about survey research, namely that polls are predictions of election outcomes rather than snapshots of the voting intentions of the electorate at one particular point in time. The authors present the results of five polls conducted in the last month of the election. The spread in the Democratic lead across the five polls ranged from 0 to 12. The authors note that the SurveyUSA poll was the closest to the election and closest to the actual election outcome. At the same time, the MPR/HHH poll was the second closest to Election Day and reported the highest Democratic margin. Another poll conducted prior to the MPR/HHH poll showed a 3‐point margin for the Democratic candidate.

Emmer’s internal poll showed a dead heat.  More on that later on this week.

Newport, with emphasis from me:

The authors in essence argue that the accuracy of any poll conducted more than a few days before Election Day is unknowable, since there is no external validation of the actual voting intentions of the population at any time other than Election Day. This is true, but raises the broader question of the value of polls conducted prior to the final week of the Election – a discussion beyond the scope of the report or this review of the report.

By inference, Newport is suggesting that enough voters make up their minds right before Election Day to render earlier pre-election polling essentially pointless.

Or is it?

Polling does affect people’s choices in elections; people don’t go to the polls when they know their candidate is going to become a punch line the next day, and donors don’t turn out for races they’re pretty sure are doomed.

And while Jacobs acknowledges that his poll is just a “snapshot” of numbers that may or may not have any bearing on the election itself, as I noted a few weeks back, the Humphrey Poll’s results are less “snapshot” than “slide show”; they have a coherent theme.  Election in, election out, they short the GOP, especially in tight elections.  Every single significant election, no exceptions.  Tight GOP wins (the 2006 gubernatorial), comfortable Democrat wins (the 2008 presidential), squeakers (the 2008 Senate and 2010 gubernatorial races) – every single one, without the faintest hint of the random “noise” that might indicate some random nature to the pattern.  The HHH poll systematically shorts the GOP.

Given the completely non-random nature of this pattern – every election, no exceptions – there are three logical explanations:

  • The Humphrey Institute genuinely believes in the soundness of its polling methodology, which systematically (in the purest definition of the word) shorts GOP representation.
  • The Humphrey Institute is unable to change its methodology, or is structurally incapable of learning from its mistakes.
  • The Humphrey Institute is just fine with the poll’s inaccuracies, because it serves an unstated purpose.

To read Jacobs’ defense, you’d think…:

  • …that there’s nothing – nothing! – the HHH can do about fixing the inaccuracies of its “snapshot”, and…
  • …it’s all a matter of timing.

As we see elsewhere in the coverage of the Humphrey (and Strib) polls, both are false.

More later this week.

The Great Poll Scam, Part VII: Post Mortem

The Twin Cities’ media and academic establishment is starting to try to unpack the disaster of their polling efforts this past election cycle.

Minnesota Public Radio has done us the service of printing both the Humphrey Institute’s Larry Jacobs’ defense of the Humphrey Institute poll and a counter from Frank Newport of Gallup Polling. And David Brauer of the MinnPost does some excellent coverage, including a revealing interview with Cullen Sheehan, who was Tom Emmer’s campaign manager, with some rare insights into what a complete crock of used food Jacobs’ explanation is.

I’ll be trying to unpack this over the course of the coming week.

The Great Poll Scam, Part VI: The Hay They Make

We’ve been discussing the MPR/Humphrey Institute and Minnesota polls for the past two weeks.  Indeed, it’s been one of the ongoing “go to” subjects of this blog for almost eight years now.

Why?

Because while  the polls themselves are risible, they have an effect on elections in Minnesota.

Part of it is in terms of people – “undecided”, “independent” voters – going to the polls at all.  I’ve related on this blog several stories of people who’ve pondered not going to the polls this past year.  Part of that was because of the overwhelming negativity about Tom Emmer portrayed in the media – negativity driven in part by the “Alliance For A Better Minnesota”’s long, Dayton-family-funded, largely dubiously-factual smear campaign, but pushed hard in the media via the “polling” the media themselves commissioned.

Larry Jacobs at the Hubert H. Humphrey (HHH) Institute is the most over-quoted person in the Twin Cities media.  And during the campaign, Jacobs was as relentless as ever in the Twin Cities media, flogging the Humphrey Institute’s polling – first during the primaries (where the HHH’s polls showed Dayton with a crushing lead, even though Dayton won the primary by a margin not a whole lot bigger than the one we currently have in the governor’s race) and, finally, during the run-up to the election, when the HHH poll showed Dayton winning in a 12-point blowout.

We’re still working on the recount for the 0.4% race.

Jacobs defended the poll (quoted in LFR):

JACOBS: Well, you know, a poll is nothing more than a snapshot in time. We’ve begun the interviewing nearly 2 weeks before election day. Barack Obama visited and we talked openly about the fact that this would likely change. There are, of course, all kinds of other factors that happened at the end, including the fact the almost 1 out of 5 undecided voters in our poll started to make up their mind.

The other thing to remember is that there were a lot of other polls being conducted that showed the race closing at the time, something we were watching at the time, also.

That’s right, Dr. Jacobs.  There were a lot of other polls.

And except for the HHH and Minnesota polls, all of them showed a “snapshot in time” that was something close to the reality that eventually emerged on election day.

All of them.

So what?

Because opinion polling has an inordinate effect on media coverage and, less directly, the money and effort that people put into campaigns.

As to the media?  The New York Times has absorbed Nate Silver’s “Five Thirty Eight” stats-blog for its election polling coverage.  And throughout the race, the Times ran with the idea that Dayton was overwhelmingly likely to win.

And that supposition was based entirely on a statistical tabulation of opinion poll results – stats heavily based on the Minnesota and Humphrey polls, especially through the middle of the race, when the tone of the campaign was being set.  Altogether, the crunching of the opinion poll numbers led Silver to claim Minnesota would be a convincing 6.6-point victory for Dayton; since political statistics are an essentially weaselly “science”, Silver also ran with an eight-point margin of error.

Naturally, the media ran with the 6.6 points; a little less with the margin of error.

Now, there’s some media attention – the Minnpost, the City Pages – to the ludicrous nature of the polls.  Jacobs:

“If a shortcoming is identified, we will fix it. If not, we will have third-party verification that our methods are sound.”

Dr. Jacobs:  take it from this third party; it’s flawed.  Flawed to the point of illegitimacy.

More on the Minnesota Poll later…

———-

The series so far:

Monday, 11/8: Introduction.

Wednesday, 11/10: Polling Minnesota – The sixty-six year history of the Strib’s Minnesota Poll. It offers some surprises.

Friday, 11/12: Daves, Goliath:  Rob Daves ran the Minnesota Poll from 1987 ’til 2007.  And the statistics during that era have a certain…consistency?

Monday, 11/15: Hubert, You Magnificent Bastard, I Read Your Numbers!:  The Humphrey Institute has been polling Minnesota for six years, now.  And the results are…interesting.  In the classic Hindi sense of the term.

Wednesday, 11/17: Close Shaves: Close races are the most interesting.  For everyone.  Including you, if you’re reading this series.

Monday, 11/22: The Hay They Make: So what does the media and the Twin Cities political establishment do with these numbers?

Wednesday, 11/24: A Million’s A Crowd:  Attention, statisticians:  Raw data!  Suitable for cloudsourcing!

The Great Poll Scam, Part V: Close Shaves

It’s almost become a cliche, among conservative observers of Minnesota elections.  You’re supporting a Republican.  You know the race is close.  You can feel the race is close.

And the final Humphrey and Minnesota polls come out, and the DFLer leads by an utterly absurd margin – like this year’s Humphrey Institute Poll, which showed a 12 point race…

…a race which, two days later, came in as a statistical dead heat, with much less than half a point separating the two candidates.

And yet the Minnesota and Humphrey Institute polls have their defenders.

———-

Remember the 2006 Senate race?  Mark Kennedy vs. Amy Klobuchar?

The Minnesota Poll did pretty well, all in all.  The final Minnesota Poll showed Mark Kennedy getting 34 points to Amy Klobuchar’s 55.  The race ended up 58.06 to just shy of 38.  The poll showed both candidates doing a little worse than they eventually did – Klobuchar by about three points, Kennedy by about four.

Defenders of the Minnesota Poll – media people and lefty pundits – chimed in.  “See?  The Minnesota poll is OK” or at the very least “The Minnesota Poll is an equal-opportunity incompetent”.

But if you’re a cynic – and when it comes to the Minnesota and Humphrey Polls, I most certainly am – the response is obvious: if you accept that the polls exist to help one party or another out of close jams (and let’s just say I think there’s a case to be made), then the real question is “how do the polls stack up when it really counts – during the close elections?”

I took a look at the Minnesota Poll’s history with close races – Gubernatorial, Presidential and Senate races that ended up less than five points apart – over the past 66 years.  In the twenty such races since 1944, the DFL ended up getting 47.69% of the vote to the GOP’s 47.57% in the final elections.  The final pre-election Minnesota Poll showed the DFL at 44.3% and the GOP at 43.28%.  The poll underrepresented Republicans by an average of 4.3 points and the DFL by 3.39 – so while it underrepresented Republicans in 14 of 20 races, the net skew was less than a point, on average.

But that’s over 66 years.  And if you recall from episode 1 of this series, the Minnesota Poll used to systematically undercount the DFL.  But long story short – looking at the poll’s entire history, things are fairly close.

When you look at the Rob Daves era at the Minnesota poll, though, things change.

In close races (less than a five-point final difference) during the Rob Daves era, the GOP actually got a slightly higher average vote total – 46.77% to 46.48% – in the actual elections.  But the final Minnesota Poll showed the DFL outpolling the GOP 43.33% to 40.78%.  Republicans came up an average of six points light in the final Minnesota Poll before the election, with DFLers finishing a little over three points short – nearly a 2-1 margin in underrepresentation.

In other words, in close races the Minnesota Poll has shown the GOP doing six points worse than it actually did, compared to three points for the DFL.  And the average Minnesota Poll has shown the DFL leading the GOP, when in fact the races have been mixed, with more Republican winners than in the previous 20-odd years of Minnesota history.

If you are an idealist, you could think it’s just a statistical anomaly.  To which the cynic notes that of eight close races, the GOP has been undercounted by less than the DFL exactly once.

The cynic might continue that it’s entirely possible the Minnesota Poll doesn’t systematically short Republicans in close elections.  But given that the poll shorts Republicans in races that end up less than five points apart by an average of considerably more than five points, the cynic would ask: “if the Minnesota Poll were designed to keep Republicans home from the polls out of pure discouragement, how would it be any different than what we have now?”

Well, it could look like the Humphrey Poll.

Because the Humphrey Poll is worse.  Granted, it’s a smaller sample size – there’ve been four “close” races (the 2004 Presidential, and the 2006 Governor, 2008 Senate and 2010 Governor races, which were/are very close indeed).

But in those races, the DFL won by an average of 45.43% to 44.7% (most of the gap coming from the four-point 2004 Presidential race; the other three had/have tallies within a point of each other).  But the final HHH poll showed the DFL/Democratic candidate winning by an average of seven points – 42.5% to 35.75%.  The DFL is underrepresented in the HHH’s final pre-election poll by just a shade under three points; the GOP underpolls its real-life results by an average of almost nine points.

It’s possible that this is an honest error.  It is possible that the Humphrey Institute really, really believes that they have a likely voter model that accurately reflects Minnesota.  Perhaps it even does; maybe Minnesota really is a land of people who answer “DFL” on polls but come racing over to the GOP on election day.  But again – if the Humphrey Institute intended to help the DFL and keep Republicans home, it’s hard to see what they’d do differently.

Especially given the media’s reaction to these polls.

More on Friday.

———-

The series so far:

Monday, 11/8: Introduction.

Wednesday, 11/10: Polling Minnesota – The sixty-six year history of the Strib’s Minnesota Poll. It offers some surprises.

Friday, 11/12: Daves, Goliath:  Rob Daves ran the Minnesota Poll from 1987 ’til 2007.  And the statistics during that era have a certain…consistency?

Monday, 11/15: Hubert, You Magnificent Bastard, I Read Your Numbers!:  The Humphrey Institute has been polling Minnesota for six years, now.  And the results are…interesting.  In the classic Hindi sense of the term.

Wednesday, 11/17: Close Shaves: Close races are the most interesting.  For everyone.  Including you, if you’re reading this series.

Friday, 11/19: The Hay They Make: So what does the media and the Twin Cities political establishment do with these numbers?

Monday, 11/22: A Million’s A Crowd:  Attention, statisticians:  Raw data!  Suitable for cloudsourcing!

The Great Poll Scam, Part IV: Hubert, You Magnificent Bastard, I Read Your Numbers!

The Hubert H. Humphrey Institute is a combination public-policy study program and think tank at the University of Minnesota in Minneapolis.  Named for the patriarch of the Democratic Farmer-Labor party – a forties-era amalgamation of traditional Democrats and neo-wobbly Farmer-Labor Union members whose Stalinist elements Humphrey famously purged in the mid-forties – the institution serves as a clearinghouse of soft-left chanting points and a retirement program for mostly left-of-center politicians and heelers.

The Institute has been doing general public opinion polling for years; in 2004, in conjunction with Minnesota Public Radio, they dove into the horserace game.

Let’s just sum up their performance in each of the six Presidential, Gubernatorial and Senate races they’ve polled in that time:

2004 Presidential Race

  • HHH Poll:  Kerry 43, Bush 37
  • Actual Election Results: Kerry 51, Bush 47
  • Bush underrepresented by 10.61, Kerry by 8.09.

2006 Gubernatorial Race

  • HHH Poll: Hatch 45, Pawlenty 40
  • Actual Election Results: Pawlenty 46.45.
  • Pawlenty underrepresented by six, Hatch polled accurately.

2006 Senate Race

  • HHH Poll: Klobuchar 54, Kennedy 34
  • Actual Election Results: Klobuchar 58.06, Kennedy 37.94
  • Kennedy underpolled by 3.94, Klobuchar by 4.06 – but it was a blowout.  We’ll come back to this.

2008 Presidential Election

  • HHH Poll: Obama 56, McCain 37
  • Actual Election Results: Obama 54.2, McCain 44.
  • Obama overrepresented by almost two points; McCain, almost seven points under. A ten-point race was portrayed as a 20-point landslide.

2008 US Senate Race

  • HHH Poll: Franken 41, Coleman 37
  • Actual Election Results: Franken by 41.99 to 41.98.
  • Franken underrepresented by less than a point; Coleman, by almost five.  A tie race was portrayed as a convincing four-point beat-down.

2010 Governor Race

  • HHH Poll: Dayton 41, Emmer 29.
  • Actual Election: Dayton 43.63, Emmer 43.21, recount in progress.
  • A tie race was depicted as a 12 point blowout.

A polling guru will say that these gross inaccuracies are a function of the Humphrey’s likely voter model – which for whatever reason assumed in each case that Democrats were much more likely to vote than Republicans, and likely to make up a greater portion of the electorate.

And yet the Humphrey Institute’s heuristics – the procedural, institutional and methodological rules by which institutions develop intelligence about things like voter behavior – seem to be stuck, for whatever reason, in the eighties.  On average, the HHH poll has come in more than five and a half points further below Republican candidates’ real-life election performances than below the Democrats’.

Coincidence?

In five of the six races covered above, the errors in measurement underrepresented the GOP.  It’s a figure lower than that of the “Minnesota Poll” only because the HHH has been in the business sixty years fewer than the Strib.
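
If you want to check my arithmetic, here’s a quick sketch that recomputes the averages from the poll-versus-result figures in the bullets above.  It’s nothing more than the subtraction spelled out in code – the 2006 governor’s race is skipped on the DFL side, since Hatch’s final number isn’t listed above – and it uses the rounded figures as listed.

```python
# Recomputing the HHH poll's average miss from the figures listed above.
# Each tuple: (race, dem_poll, dem_actual, gop_poll, gop_actual); None = not listed.
races = [
    ("2004 President", 43, 51.0, 37, 47.0),
    ("2006 Governor",  45, None, 40, 46.45),
    ("2006 Senate",    54, 58.06, 34, 37.94),
    ("2008 President", 56, 54.2, 37, 44.0),
    ("2008 Senate",    41, 41.99, 37, 41.98),
    ("2010 Governor",  41, 43.63, 29, 43.21),
]

def average_gap(pairs):
    """Average of (actual result minus final poll), skipping races with no listed result."""
    gaps = [actual - poll for poll, actual in pairs if actual is not None]
    return sum(gaps) / len(gaps)

gop_gap = average_gap([(gop_poll, gop_actual) for _, _, _, gop_poll, gop_actual in races])
dem_gap = average_gap([(dem_poll, dem_actual) for _, dem_poll, dem_actual, _, _ in races])

print(f"GOP underpolled by an average of {gop_gap:.2f} points")
print(f"DFL underpolled by an average of {dem_gap:.2f} points")
print(f"Net skew against the GOP: {gop_gap - dem_gap:.2f} points")
```

With the rounded numbers as listed, the GOP comes out roughly 7.8 points under its real-life performance and the DFL just under three – a net skew of about five points, in the same ballpark as the five-and-a-half-point figure above.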

Why would this be?

More next week.

In our next installment: I’ve shown you the behavior of both polls in horseraces across the board.  But a particularly interesting bit of behavior comes out if you throw out the blowouts – the 30-point massacre in the 1994 Governor race, the 20-point slaughter in the 2006 Senate contest – and focus on the tight races.

More on Wednesday.

———-

The series so far:

Monday, 11/8: Introduction.

Wednesday, 11/10: Polling Minnesota – The sixty-six year history of the Strib’s Minnesota Poll. It offers some surprises.

Friday, 11/12: Daves, Goliath:  Rob Daves ran the Minnesota Poll from 1987 ’til 2007.  And the statistics during that era have a certain…consistency?

Monday, 11/15: Hubert, You Magnificent Bastard, I Read Your Numbers!:  The Humphrey Institute has been polling Minnesota for six years, now.  And the results are…interesting.  In the classic Hindi sense of the term.

Wednesday, 11/17: Close Shaves: Close races are the most interesting.  For everyone.  Including you, if you’re reading this series.

Friday, 11/19: The Hay They Make: So what does the media and the Twin Cities political establishment do with these numbers?

Monday, 11/22: A Million’s A Crowd:  Attention, statisticians:  Raw data!  Suitable for cloudsourcing!

The Great Poll Scam, Part III: Daves, Goliath

Rob Daves took over the Minnesota Poll in 1987.

I have never met Rob Daves.  Neither, to the best of my knowledge, has anyone else.  I don’t know that his alt-media bête noire, Scott Johnson, has even met him, despite not a few requests for interviews.

I have no idea what Rob Daves thinks, believes, wants, says or does.  I know nothing about his personal life, and I really don’t want or need to.  For all I know, he’s a perfectly wonderful human being.

But for a 20 year period under his direction, the Minnesota Poll turned into an epic joke.

How epic?

The numbers don’t lie.

———-

During the Rob Daves years, party politics in Minnesota skittered all over the map.  The governor’s office started DFL, changed hands, and may have changed back last week – we’ll see.  The Reagan/Bush 41 era seesawed to Clinton, then Dubya, and now Obama; both Senate seats started Republican; both switched to the DFL, eventually.

There has, in short, been a lot of variety, at least in terms of the Party ID winning the various elections.

But the Minnesota Poll has been oddly homogenous.

Throughout the Rob Daves era, the Democratic or DFL candidate in Presidential, Gubernatorial and Senate races has gotten an average of 45.68% of the vote, to 45.21% for the GOP.  That’s very, very close.

Some of the races have been blowouts – Amy Klobuchar’s 20 point drubbing of Mark Kennedy, Arne Carlson’s 30 point hammering of John Marty – and some, like our 2008 Senate and 2010 Governor races, have been (or still are) painfully close.

But you’d never know it from the Minnesota poll. The average vote totals – between the blowouts and upsets and squeakers – during Daves’ 1987-2007 tenure favored the DFL, barely, by 45.98 to 45.34%.  But the Minnesota Polls released just before all those elections showed the population favoring the DFL by 43.33 to 39.89%.

And of 18 total contests, the polling inaccuracies skewed in the direction of the DFL in 15.   The average skew toward the DFL came to almost three percentage points.

When you break things out, the differences get wider; in the five Presidential elections, the Minnesota Poll discerned a 49.67% to 36% DFL lead; the actual results were 50.13% to 41.64%.  The Minnesota Poll underrepresented the GOP by an average of 5.64% in Presidential elections during the Daves years.  The Strib poll showed every single GOP candidate coming up short of his actual election performance:  George HW Bush polled 3.80% light; Dole, 7.00%; Dubya, 8.50 and 6.61; McCain also polled seven points under his real performance.  The Democrats, on the other hand, seemed to be polled fairly accurately; the average error between poll and election for Democratic presidential candidates was less than half a point.

The Senate races are a little closer – Republicans came in an average of 4.29 points under their election results in the final poll, versus 3.14 for the DFL, a gap of 1.15 points – which isn’t very significant, if you just look at raw numbers.  We’ll come back to that next Wednesday.

In the Gubernatorial races during the Daves years, though, the polling results were pretty lockstep. In gubernatorial races since 1987, the GOP has outdrawn the DFL at the ballot box by an average of 46.77% to 38.91% – including one huge blowout (1994) and several squeakers.  But the Minnesota Poll has shown Minnesotans’ preferences at 40.17% to 36.67% in favor of the GOP.  Republicans’ performance was underpolled by 6.6% in the Minnesota Poll – that of the DFL by only 2.24%.  The poll understated Republican support by nearly triple the margin by which it understated the DFL’s.

A classic – and large – example was the 2002 Governor race.  The election-eve Minnesota Poll showed Pawlenty tipping Moe by 35-32.  The real margin was 44-36.  While the poll overestimated Independence Party candidate Tim Penny by a fairly impressive margin, the fact is that while the final MN Poll undershot Moe’s support by 4%, it underrepresented Pawlenty’s by nine solid points.

All in all, of the 20 Presidential, Senate and Gubernatorial races during the Daves era, 16 of them showed the Minnesota Poll underpolling the GOP by a greater degree than the DFL.

And that’s just counting all the races.

———-

Daves was let go at the Strib in 2007.  The Minnesota Poll was taken over by the “Princeton Research Study Group” (“PRSA”), which also does polling for Newsweek (whose polling is generally considered atrocious).

The 2008 races were very different, of course; the Senate race was a virtual tie, while Obama beat McCain handily.

But the day before the election, the Minnesota poll said McCain was polling just 37%; he ended up with 44%.  It overestimated Obama’s support by under a point, calling him at 55% when he got 54.2%.  The Minnesota Poll sandbagged Mac by seven points.

And Franken v. Coleman?   The day before the election, the poll showed Coleman almost four points below his actual performance (38% versus 41.98%); it nailed Franken almost dead-on (42% in the poll, 41.99% by the time the recount was over).

PRSA showed both GOP candidates performing drastically off their real pace on election eve.

And three weeks ago, a week before the gubernatorial election, the Minnesota Poll showed Emmer at 34%; he got 43.21%.  Nine points better than the Minnesota poll indicated.

The upshot?  Of the 20 total election contests in the Rob Daves and PRSA eras, the Minnesota Poll has underpolled GOP support in 17 – 85% – of those races.

And PRSA polling has, on average, underpolled the GOP by 6.12% in those three elections.   In other words, PRSA’s errors have favored the DFL to the tune of six points – which is more than the three-plus points of the Rob Daves era.

One might think that random statistics would scatter on both sides of the middle more or less equally.  And in the first 42 years of the Minnesota poll, in aggregate, they did, as we showed Wednesday.

But during the Daves years, and continuing with PRSA, the errors developed a consistency – shorting Republicans – and grew in magnitude.

———-

Of course, those averages hide some big swings; some races in those averages were real blowouts.

It’s been my theory that the Minnesota Poll’s “peculiarities” are most pronounced during close elections.

We’ll test that out next Wednesday, when we’ll examine races that were decided by the proverbial cat’s whisker.

First – Monday – we’ll meet the Hubert H. Humphrey Institute Poll.

———-

The series so far:

Monday, 11/8: Introduction.

Wednesday, 11/10: Polling Minnesota – The sixty-six year history of the Strib’s Minnesota Poll. It offers some surprises.

Friday, 11/12: Daves, Goliath:  Rob Daves ran the Minnesota Poll from 1987 ’til 2007.  And the statistics during that era have a certain…consistency?

Monday, 11/15: Hubert, You Magnificent Bastard, I Read Your Numbers!:  The Humphrey Institute has been polling Minnesota for six years, now.  And the results are…interesting.

Wednesday, 11/17: Close Shaves: Close races are the most interesting.  For everyone.  Including you, if you’re reading this series.

Friday, 11/19: The Hay They Make: So what does the media and the Twin Cities political establishment do with these numbers?

Monday, 11/22: A Million’s A Crowd:  Attention, statisticians:  Raw data!  Suitable for cloudsourcing!

The Great Poll Scam, Part II: Polling Minnesota

My interest in the Minnesota Poll as an individual institution started right about the time I started this blog, six or eight years ago.

Now bear in mind that I, Mitch Berg, have made skepticism of the media at least a hobby, if not a fringey living, since 1986.  I have believed that the media needed to be distrusted and then verified for pretty much my entire adult life.

And yet until very recently, I maintained, if not a naive faith in the public opinion polling about elections, at least a detached sense that, somehow or other, they all evened out.   It was the same naivete that we all have about where babies and Christmas presents come from when we’re nine, or how entitlements get paid for when we’re 18 (50 for Minnesota government employees), or how sausage and bacon are made.

Ignorance is, indeed, bliss.

The scales started falling from my eyes when I started reading PowerLine.  Scott Johnson has been keeping his eye on the MNPoll for most of a decade now; he’s led the pack of Minnesota bloggers in documenting the poll’s abuses.

And in reading the history of conservative criticism of the Minnesota Poll, I started wondering – what is the historical context?

There’s more of it than I’d figured.

———-

The Star Tribune started running public opinion polling of the Minnesota electorate in 1944.  It’s polled Minnesotans over a variety of topics, but the marquee subjects are always the big three elections – State Governor, US Senate and Presidential elections.

Now, if you’ve lived in Minnesota in the past fifty years or so (I go back half of that time – I moved here in ’85), it’s hard to believe that Minnesota used to be a largely Republican state.  Of course, the Republicans we had up until very recently were the type that make the likes of Lori Sturdevant grunt with approval – “progressive” Republicans like Elmer Anderson and Wheelock Whitney and the like.

I bring this up to note that while the various parties have changed – Republicans used to be “progressive”, Democrats used to be “America First” – Minnesota party politics for the past 66 years has been a little more evenly matched than current political consciousness – shaped as it’s been by Humphrey and Mondale and the “Minnesota Miracle” and Wellstone and Carlson – might make you believe.

Now, if you look at the Minnesota Poll’s statistics for the past 66 years – going back to the 1944 elections, for Governor, Senator and President – the Minnesota Poll is actually fairly even.  In that time, Republicans have gotten an average of 46.85 percent of the vote for all those offices, to 49.37% for DFLers.  During that time, the Minnesota Poll’s “election eve” predictions have averaged 44.1% for Republicans, and 46.77% for Democrats.  That means that over history, the big final Minnesota Poll has shown Republicans doing 2.75 points worse than they turned out, with DFLers coming in 2.59 points worse than they finally turned out.  The results have tended to be, over the course of 66 years, infinitesimally more accurate – .16% – for Democrats.  It’s insignificant, truly.

Indeed, when you go through the numbers from the forties and the fifties, you can see some blogger back in 1958 decrying two things – the lack of an internet to blog on, and a serious pro-Republican bias in the Minnesota poll; in polls run before 1960, the Minnesota poll predicted Republicans would get 51.58, while GOP candidates for the big three offices actually got 50.32% of the vote – the poll overestimated Republicans by an average of 1.26%.  The DFL got an average of 49.73% of the vote during those years, while the Minnesota Poll had them at an average of 43.51% –  which is 6.22% lower than they actually turned out doing (although this number gets inflated by a truly horrible performance in the 1948 Gubernatorial election, where the MNPoll had John Halstead at 25% in their pre-election poll; he ended up losing, but with 45%. That had to be frustrating).  In all, before 1960, the Strib “Minnesota Poll”‘s pre-election poll overestimated the GOP’s performance compared to the DFL’s in 76% of elections; the poll’s overestimates favored the GOP by an average of almost 7.5%.

By the mid-sixties, of course, Minnesota politics had changed drastically; the golden age of “progressive” politics and the DFL, led by the likes of Hubert H. Humphrey and Walter Mondale for the DFL, and Elmer Anderson for the GOP, left Minnesota a very different state.  During those years – from about 1966, after Barry Goldwater re-introduced a partisan divide to national politics for the first time, really, since the war – the DFL won the average vote 50.97% to 46.61%.  The Minnesota Poll predicted DFL victories, on average, of 49.62% to 42.79%; it underreported the final support for Republicans by an average of 3.83%, and for DFLers by 1.35% – an average skew of almost 2.5% in favor of the DFL.

But if you look at the actual elections covered in those years – from 1966 to 1990, the “Golden Age of the DFL” – of the 21 contests for President, Governor and Senator, the Minnesota Poll overstated the Democrat’s standing relative to the final result by a greater margin than the Republican’s in 13 of the elections, and skewed toward the GOP candidate in eight.  The 1980 Presidential election skewed things a bit – the MNPoll underestimated Jimmy Carter’s performance by 12.5% (Carter got 46.5%, while the MNPoll predicted 34%; it also overestimated Reagan’s performance by a little over a point, leading to one of the biggest pro-Republican skews in the recent history of the Minnesota Poll).

Overall, for the entire history of the Minnesota Poll from 1944 to 1986, the poll showed the public voting, on election eve, for the DFL by a 48.25% to 46.34% average margin; the actual elections favored the DFL 51.10% to 47.81%.  The poll underpolled Republicans by an average of 1.47%, and Democrats by an average of 2.85%.  Of the 41 total contests in that time, the polling inaccuracies skewed in the DFL’s direction in 44% of the polls – again, not a really significant number.

In other words, the poll’s statistical vicissitudes were fairly balanced through its first 42 years.

But in 1987, the Strib hired Rob Daves to run the Minnesota Poll.

And things would change.

———-

The series so far:

Monday, 11/8: Introduction.

Wednesday, 11/10: Polling Minnesota – The sixty-six year history of the Strib’s Minnesota Poll. It offers some surprises.

Friday, 11/12: Daves, Goliath:  Rob Daves ran the Minnesota Poll from 1987 ’til 2007.  And the statistics during that era have a certain…consistency?

Monday, 11/15: Hubert, You Magnificent Bastard, I Read Your Numbers!:  The Humphrey Institute has been polling Minnesota for six years, now.  And the results are…interesting.

Wednesday, 11/17: Close Shaves: Close races are the most interesting.  For everyone.  Including you, if you’re reading this series.

Friday, 11/19: The Hay They Make: So what does the media and the Twin Cities political establishment do with these numbers?

Monday, 11/22: A Million’s A Crowd:  Attention, statisticians:  Raw data!  Suitable for cloudsourcing!

The 1966-1990 numbers, in brief:
GOP:  MN Poll 42.79, election result 46.61, difference -3.83
DFL:  MN Poll 49.62, election result 50.97, difference -1.35
Contests: 21; skewed toward the DFL: 13 (62%); average skew: 2.48 points

The Great Poll Scam: Introduction

The weekend before the election, I was talking with a friend – a woman who has become a newly-minted conservative in the past two years.  She’d sat out the 2008 election, and had voted for Kerry in ’04, but finally became alarmed about the state of this nation’s future – she’s got kids – and got involved with the Tea Party and started paying attention to politics.  And she was going to vote conservative.  Not Republican, mind you, but conservative.

And the Saturday before the election, she sounded discouraged.  “Have you seen the polls?” she asked.  “Emmer’s gonna get clobbered”.

I set her straight, of course – referred her to my blog posts debunking the election-eve Humphrey and Minnesota polls, and showed her the Emmer campaign’s internal poll that showed the race a statistical dead heat (which, obviously, turned out to be the most accurate poll before election day).

She left the room feeling better.  She voted for Emmer.  And she voted for her Republican candidates in her State House and Senate districts, duly helping flip her formerly blue district to the good guys and helping gut Dayton’s agenda, should he (heaven forefend) win the recount.

But I walked away from that meeting asking myself – what about all the thousands of newly-minted conservatives who don’t have the savvy or inclination to check the cross-tabs?  The thousands who saw those polls, and didn’t have access to a fire-breathing conservative talk show host with a keen BS detector who’s learned to read the fine print?

How many votes did Tom Emmer lose because of the Hubert H. Humphrey and Minnesota polls that showed him trailing by insurmountable margins?

How many votes do conservatives and Republicans lose in every election due to these polls’ misreporting?

Why do these two polls seem so terribly error-prone?  And why do those errors always seem to favor the Democrats, with the end result of discouraging Republican voters?

Coincidence?

———-

Public opinion polling is the alchemy of the post-renaissance age.  Especially “likely voter” polling; every organization that runs a poll has a different way of taking the hundreds or thousands of responses they get, and classifying the respondents as “likely” or not to vote, and tabulating those results into a snapshot of how people are thinking about an election at a given moment.
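
Here’s a minimal sketch of what that “classifying and tabulating” looks like in practice.  Everything in it is hypothetical – the screening questions, the cutoff, the nine pretend respondents – because every real pollster guards its own recipe; the point is just the mechanics: score each respondent’s likelihood of voting, keep the “likely” ones, and tabulate the horse race from whoever’s left.

```python
# Hypothetical sketch of a likely-voter screen and tabulation.
# Real pollsters use their own questions, scores and weights; this only shows
# the mechanics: score likelihood of voting -> filter -> tabulate the horse race.
from collections import Counter

respondents = [
    # (candidate preference, voted in the last election?, self-reported interest 1-10)
    ("DFL", True, 9), ("GOP", True, 8), ("DFL", False, 4),
    ("GOP", True, 10), ("DFL", True, 7), ("GOP", False, 3),
    ("DFL", True, 6), ("GOP", True, 9), ("DFL", False, 8),
]

def is_likely_voter(voted_last_time, interest, cutoff=6):
    """Toy screen: past voters, or anyone reporting high interest, count as 'likely'."""
    return voted_last_time or interest >= cutoff

likely = [r for r in respondents if is_likely_voter(r[1], r[2])]
tally = Counter(preference for preference, _, _ in likely)
total = sum(tally.values())

for party, count in tally.items():
    print(f"{party}: {100 * count / total:.1f}% of likely voters (n={total})")
```

Move the cutoff, or swap the screening questions, and the very same interviews produce a different “snapshot”.  Which is exactly the point.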

But the Star Tribune’s Minnesota Poll has, to the casual observer, a long history of coming out with numbers that seem to short Republicans – especially conservative ones – every single election.  And the relative newcomer to the regional polling game, the Hubert H. Humphrey Institute’s poll, done in conjunction with Minnesota Public Radio, seems – again, anecdotally (so far) – to take that same approach and supercharge it.

I’ve had this discussion in the past – David Brauer of the MinnPost and I had a bit of a back and forth on the subject, on-line and on the Northern Alliance one Saturday about a month ago.

And so it occurred to me – it’s easy to come up with anecdotes, one way or another.  But how do the numbers really stack up?   If you dig into the actual numbers for the Humphrey Institute and the Minnesota Poll, what do they say?

I’ll be working on that for the next couple of weeks.  Here’s the plan:

http://www.shotinthedark.info/wp/?p=15172

Chanting Points Memo: Garbage In, Garbage Out

Mark Dayton has run one of the single dumbest campaigns in Minnesota history.

Dayton himself has been a virtual non-entity, relying on the Twin Cities’ media’s inability and/or unwillingness to question him on  his background, the immense gaps in his budget “plan”, his history of erratic behavior…anything.

His surrogates have been another matter entirely; “Alliance for a Better Minnesota” – whose financing comes almost exclusively from big union donors and from members and ex-members of Mark Dayton’s family of trust fund babies – has run the slimiest, most defamatory campaign in Minnesota political history.   From mischaracterizing Emmer’s “DUI” record and slandering his efforts to reform Minnesota DUI laws, to their outright lies about his budget, ABM has profaned this state’s politics in a way I can only hope gets repaired – though I doubt that will happen until the DFL decays to third-party status.

If it were a Republican group doing it, the Dems would be whining about “voter intimidation”.

The Dayton campaign, in short, has been not so much a campaign as an attempt to orchestrate negative PR, social inertia and the ignorance of most voters to its advantage.  It hasn’t been a dumb campaign, per se;  when your job is to sell Mark Dayton, “The Bumbler”, desperate situations call for desperate measures.  And as we saw in 1998, there are enough stupid people to make anything possible.

A big part of Dayton’s under-the-table campaign has been to portray the impression that Dayton’s coronation is inevitable.  If your nature is to be suspicious of institutions with long, arguably circumstantial records of bias, one might see the Minnesota Poll as an instrument toward that aim – given its three-decade record of showing DFLers doing an average of 7.5% better than they ended up doing.   (If you favor the Democrats, you might say the same about Rasmussen – if you ignored the fact that they’ve been consistently the most accurate major pollster for the last couple of cycles.  Other than that, just the same thing).

The latest chapter in this campaign has been the regional DFLbloggers’ chanting the latest results from Nate Silver’s “Five Thirty Eight”, a political stats-blog that was bought out by the NYTimes a while back.

Silver’s latest look at the Minnesota gubernatorial race gives Dayton an 83% chance of winning, in a six point race.

And that’s where the Sorosbloggers leave it.

Of course, Silver’s analysis on its face has a margin of error of a little over eight points – which is considerably larger than the forecast margin.
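
For what it’s worth, here’s roughly how a projected margin and a margin of error get turned into a “chance of winning”.  This is a bare-bones normal approximation, not Silver’s actual model – a real forecast typically builds in fatter tails and other sources of error, which is one reason the same six-point margin can come out with a lower probability than this toy version produces.

```python
# Toy conversion of a projected margin + margin of error into a "chance of winning",
# using a plain normal approximation (NOT Silver's actual model).
import math

def win_probability(projected_margin, margin_of_error, z_95=1.96):
    """P(true margin > 0), treating the margin as normal with the stated 95% MOE."""
    sigma = margin_of_error / z_95                 # back out the standard deviation
    z = projected_margin / sigma                   # how many sigmas above a dead heat
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# The figures discussed above: roughly a 6-point projected margin, 8-point margin of error.
print(f"{win_probability(6, 8):.0%}")  # prints 93% - higher than the published 83%
```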

Of course, with any statistical, numerical output, you have to ask yourself – “are the inputs correct?”

Here are Silver’s inputs:

[Table of Silver’s poll inputs and weights – courtesy 538/New York Times]

The important column is the “538 Poll Weight” column, the third from the right.  It shows how much weight Silver gives each poll in his final calculation.  The number is at least partly tied to time – but not completely; for some reason, the five-week old Survey USA poll gets 20% more weight than the four week old Rasmussen poll; the October 6 Rasmussen poll that showed Emmer with a one point lead gets about 3/4 the oomph of the latest Survey USA poll, which showed Dayton with a five point lead…

…and whose “likely voter model” seemed to think that Democrats are four points more likely to show up at the polls than Republicans.  This year.
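
Just to illustrate, here’s what a purely time-based weighting would look like – a made-up half-life and placeholder polls, not 538’s actual inputs or formula.  If the weight were strictly a function of a poll’s age, as in this sketch, an older poll could never outrank a newer one – which is exactly what the table above shows happening.

```python
# Hedged sketch of purely recency-based poll weighting (NOT 538's actual formula).
# Placeholder polls: (name, age_in_days, dem_margin_in_points).
polls = [
    ("Poll A", 7, 5.0),
    ("Poll B", 21, 1.0),
    ("Poll C", 35, -1.0),
]

HALF_LIFE_DAYS = 14  # made-up decay rate: a poll loses half its weight every two weeks

def recency_weight(age_days, half_life=HALF_LIFE_DAYS):
    """Exponential decay: newer polls count for more, strictly as a function of age."""
    return 0.5 ** (age_days / half_life)

weights = [recency_weight(age) for _, age, _ in polls]
weighted_margin = sum(w * m for w, (_, _, m) in zip(weights, polls)) / sum(weights)

for (name, age, margin), w in zip(polls, weights):
    print(f"{name}: {age} days old, weight {w:.2f}, margin {margin:+.1f}")
print(f"Weighted average margin: {weighted_margin:+.1f}")
```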

Pollsters – and Silver – are fairly cagey about their methodology.  I’m not a statistics whiz; I dropped the class after one week, in fact.  But I can tell when something isn’t passing the stink test.  Any poll that gives Democrats a four-point edge in turnout this year may or may not be wishful thinking (we’ll find out in less than two weeks, won’t we?), but it does seem to be based more on history than on current behavior – which, I should point out, involves a lot of hocus-pocus to predict even during a normal election.

And this is not a normal election.

I’m not going to impugn Nate Silver, per se – if only because I haven’t the statistical evidence.  Yet.

I will impugn the NYTimes – but then, that’s what I do.  They very much do want to drive down Republican turnout.

And that is the main reason the DFL machine – including the ranks of more-or-less kept leftybloggers in this state – is parroting this “story” so dutifully.  They want to convince Republicans that all is lost.

Pass the word, folks.  We’re gonna win this thing.