Shot In The Dark: Today’s News, Two Years Ago

Nate Silver at the NYTimes has been widely respected for his ability as a statistician.

His reputation, though, seems to stem largely from his facility at what amount to rhetorical parlor tricks (he once earned a bit of a living playing poker, and made a name for himself with baseball stats), and from calling the vast majority of the 2008 election slate correctly (with the help of an epochal wave election and plenty of access to Obama campaign internal polling), which led to his hiring at the NYTimes in time for the 2010 race.

Silver’s method at the NYTimes involves…:

  • Taking regional polls – from polling services as well as media polls – and…
  • “weighting” them according to some special sauce known only to Nate Silver, Registered Statistical Genius.
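
The un-secret part of that is just a weighted average.  Here’s a minimal sketch in Python – the polls and weights below are hypothetical placeholders, since the weighting formula itself is the part Silver keeps to himself:

    # A minimal sketch of a weighted polling average.
    # The polls and weights are hypothetical placeholders; Silver's actual
    # weighting formula is not public.
    polls = [
        {"dem": 49, "gop": 48, "weight": 1.0},   # hypothetical poll A
        {"dem": 51, "gop": 46, "weight": 1.2},   # hypothetical poll B
        {"dem": 47, "gop": 47, "weight": 0.8},   # hypothetical poll C
    ]

    total = sum(p["weight"] for p in polls)
    dem = sum(p["dem"] * p["weight"] for p in polls) / total
    gop = sum(p["gop"] * p["weight"] for p in polls) / total

    print(f"Weighted average: Dem {dem:.1f}, GOP {gop:.1f}")
    # Whoever sets the weights sets the answer.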

Now, I wrote about Silver’s method two years ago, when he spent much of the race predicting Mark Dayton would win by six points (with an eight-point margin of error).  As I pointed out, Silver’s “methodology” involved giving a fairly absurd amount of weight to polls like the long-discredited Star Tribune “Minnesota” Poll and the since-discontinued Hubert H. Humphrey Institute poll (for whose demise I sincerely hope I deserve some credit, having spent a good part of the fall of 2010 showing what a piece of pro-DFL propaganda it has always been).  During the middle of the 2010 race, Silver gave the absurdly inaccurate-in-the-DFL’s-favor (especially in close elections) HHH and Minnesota Polls immense weight, while undervaluing the generally-accurate Rasmussen polls and, to a lesser extent, Survey USA.

I said Silver’s methodology was “garbage in, garbage out” – he uses bad data, and gets bad results.  I was being charitable, of course; his methodology, opaque and proprietary as it is, processes bad data into worse conclusions.

That was in 2010.

Today?  NRO’s Josh Jordan reaches the same conclusion:

While many in the media (and Silver himself) openly mock the idea of Republicans’ “unskewing polls” (and I am not a fan of unskewedpolls.com by any means), Silver’s weighting method is just a more subtle way of doing just that. I outlined yesterday why Ohio is closer than the polls seem to indicate by looking at the full results of the polls as opposed to only the topline head-to-head numbers. Romney is up by well over eight points among independents in an average of current Ohio polls, the overall sample of those same polls is more Democratic than the 2008 electorate was, and Obama’s two best recent polls are among the oldest.

But look at some of the weights applied to the individual polls in Silver’s model. The most current Public Policy Polling survey, released Saturday, has Obama up only one point, 49–48. That poll is given a weighting under Silver’s model of .95201. The PPP poll taken last weekend had Obama up five, 51–46. This poll is a week older but has a weighting of 1.15569.
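
To see what those two weights actually do to the average, run the arithmetic yourself.  This is only a toy two-poll version – Silver’s model folds in many more polls and adjustments that aren’t visible from the outside – but it shows which way the thumb leans on the scale:

    # Toy two-poll weighted average using only the two PPP surveys and
    # weights Jordan cites; the real model includes many more inputs.
    newest = {"obama": 49, "romney": 48, "weight": 0.95201}   # released Saturday
    older  = {"obama": 51, "romney": 46, "weight": 1.15569}   # a week older

    def weighted_margin(polls):
        total = sum(p["weight"] for p in polls)
        return sum((p["obama"] - p["romney"]) * p["weight"] for p in polls) / total

    print(weighted_margin([newest, older]))                                  # ~3.2, weights as given
    print(weighted_margin([dict(newest, weight=1), dict(older, weight=1)]))  # 3.0, equal weights
    print(weighted_margin([dict(newest, weight=1.15569),
                           dict(older, weight=0.95201)]))                    # ~2.8, weights swapped

A few tenths of a point either way on two polls; multiply that across every poll, every weight and every state, and the “forecast” starts to drift in whatever direction the weights lean.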

So it wasn’t just Minnesota!

And remember – PPP polls, while leaning a little left, are not generally flagrantly inaccurate in the sense that the Strib is and the HHH was.

And it’s not a fluke…:

The NBC/Marist Ohio poll conducted twelve days ago has a higher weighting attached to it (1.31395) than eight of the nine polls taken since. The poll from twelve days ago also, coincidentally enough, is Obama’s best recent poll in Ohio, because of a Democratic party-identification advantage of eleven points. By contrast, the Rasmussen poll from eight days later, which has a larger sample size, more recent field dates, but has an even party-identification split between Democrats and Republicans, has a weighting of .88826, lower than any other poll taken in the last nine days.

Jordan reaches a conclusion that even I didn’t:

This is the type of analysis that walks a very thin line between forecasting and cheerleading. When you weight a poll based on what you think of the pollster and the results and not based on what is actually inside the poll (party sampling, changes in favorability, job approval, etc), it can make for forecasts that mirror what you hope will happen rather than what’s most likely to happen.

Well, you can – if your goal isn’t so much to measure the nation’s zeitgeist (and report on it) as to affect the election.

Which has, of course, been my contention all along.

All That’s Silver Does Not Glitter

While the national polls show the presidential race a statistical toss-up, Nate Silver points out that polls conducted in swing states show Obama with an actual lead of sorts – around three points:

While that isn’t an enormous difference in an absolute sense, it is a consequential one. A one- or two-point lead for Mr. Obama, as in the national polls, would make him barely better than a tossup to win re-election. A three- or four-point lead, as in the state polls, is obviously no guarantee given the troubles in the economy, but it is a more substantive advantage.

Here’s the part that caught my attention; I’ve added emphasis:

The difference isn’t an artifact of RealClearPolitics’s methodology. The FiveThirtyEight method, which applies a more complicated technique for estimating polling averages, sees broadly the same split between state and national polls.

On the one hand – well, doy.  Obama’s an incumbent elected in a wave, protected by a media that serves as his Praetorian Guard.  Of course he’s going to be polling well.

On the other hand?  My real point in this article is the above-mentioned “FiveThirtyEight Method”.

I addressed this two years ago – when Silver, who is generally acknowledged to be a moderate Democrat, spent most of the 2010 campaign predicting a 6+ point Mark Dayton victory.

How did he arrive at that number?

  1. By taking an assortment of polls from around Minnesota, conducted by a variety of polling operations, and…
  2. Applying a weighting to each poll, the “538 Poll Weight”, which came from an unexplained formula known, near as I can tell, only to Silver.  Which is not to say that it’s wrong, or statistically, intellectually or journalistically dishonest, per se – merely that it’s completely opaque.

But let’s take Silver’s methodology at face value – because he’s a respected statistician who works for the NYTimes, right?

The fact remains that, at least here in Minnesota, two of the polls that were given great weight in Silver’s methodology – the Star Tribune “Minnesota” poll and the Hubert H. Humphrey Institute poll – are palpably garbage, and should be viewed as DFL propaganda at best, calculated election fraud at worst.

We went through this in some detail after the 2010 election: there’s an entire category on this blog devoted to going over the various crimes and misdemeanors of Twin Cities media pollsters.  Long story short – since 1988, the Strib “Minnesota” poll has consistently shorted Republican support, especially in the polls closest to the elections, and especially in close elections.  The “Minnesota” poll’s only redeeming point?  The Humphrey Institute poll is worse.  In both cases, they tended – moreso in closer races – to exaggerate the lead of the Democrat candidate for Governor, Senator or President.  For example, in 2010 both polls showed Mark Dayton with crushing, overwhelming, humiliating leads over Tom Emmer on election eve.  It ended up the closest gubernatorial race in Minnesota history.  The “Minnesota” poll was so bad, Frank Newport of Gallup actually wrote to comment on its dubious methodology.

I suspect the results are something other than mathematical background noise or methodological quirks – which would, if truly random, produce distortions that even out between the parties over time.  While it’s not provable without a whistle-blower from inside either or both organizations, I suspect the results shake out the way they do because of selection bias in setting up survey samples (if you’re inclined to believe people have integrity) or because of systematic bias working to achieve a “Bandwagon Effect” among the electorate (if you’re not).  Count me among the cynics; an organization with integrity would have noticed these errors long before a guy like me who maxed out at Algebra I in college did, and would have fixed the problem.  I’m willing to be persuaded, but you’ll have to make a much better argument than most of the polls’ defenders have.

The point being, this is the quality of the raw material that leads Nate Silver to his conclusions.

And that should give Silver, and people who pay attention to him, pause.

I don’t know if the other state polls are as dodgy as Minnesota’s local media polling operations.  That’d be a great subject for a blogswarm.

You Get One Guess

Minnesota Public Radio’s Mark Zdechlik notes that Minnesota could very well see a lot more nail-biter races, because…

…well, we all know how this works, don’t we?  Minnesota is more polarized, and the parties are more extreme.  Right?

Analysts say elections have become so close because Republicans and Democrats share almost the same number of supporters and that both sides are becoming more extreme and more polarized.

And who’s the source?

You only get one guess!  Hurry!  (Emphasis added)

University of Minnesota Political Science Professor Larry Jacobs…

Oh, who the hell else?

I wonder – does the Humphrey Institute give some sort of spiff to reporters for quoting Jacobs in every single story about politics at any level anywhere in Minnesota?

If every single news outlet – MPR, WCCO, the Strib, the PiPress, the MinnPost – quoted Mitch Pearlstein of the conservative Center of the American Experiment, do you think someone would squawk that they were adopting a partisan point of view?

So given the largely monochromatic, left-of-center pedigrees of the Humphrey Center’s faculty, why does this monopoly on sourcing in the Twin Cities media pass unmentioned?

…said politics in Minnesota has been reduced to something akin to tribal warfare; most Democrats and Republicans are dug-in so deep they wouldn’t even consider supporting a candidate from the other side.

“You’ve got kind of the Hatfields on one side and the McCoys in another,” Jacobs said.

Far better, to some in the Twin Cities “intelligentsia”, to return to the seventies, when all politicians came to us in generic yellow boxes with black lettering, all spouting more or less the same center-left institutional twaddle?  When you had your choice between John Marty and Arne Carlson – ergo no choice at all?

Jacobs said this year’s governor’s race is a good example of the polarization. He said that Republican Party candidate Tom Emmer was probably the most conservative statewide candidate we’ve seen nominated on the Republican side in the state’s history, or at least since World War II.

They always put this like it’s a bad thing.

He was nominated – they know that, right?  It’s not as if Karl Rove flew in and gave the guy the nomination personally.

With the exception of DFL Sen. Amy Klobuchar’s lop-sided 2006 victory, the past three statewide elections have shown core Republicans and Democrats in Minnesota are evenly split.

Because winning with a majority has become so difficult, Jacobs said election strategy in Minnesota has become all about ripping the opposition and appealing to the base.

And, um, trying to scare off independents by showing that your guy is really ahead, appealing to the Bandwagon Effect.  Right, Dr. Jacobs?

Question:  If I had access to Lexis/Nexis, and could divide the number of stories on politics in the Strib, PiPress, WCCO, MPR and the MinnPost featuring quotes by Dr. Jacobs by the total number of stories on politics, would the result be over or under 25%?

The Great Poll Scam Part XI: Weasels Rip My Results

Professor Larry Jacobs – by far the most-quoted non-elected person in Minnesota – defends the Humphrey Institute Poll:

Differences between polls may not be substantively significant as illustrated by the case of MinnPost’s poll with St. Cloud State, which showed Dayton with a 10 point lead, and the MPR/HHH poll, which reported a 12 point lead.

The “margin of sampling error,” which is calculated based on uniform formulas used by all polling firms, creates a cone around the estimate of each candidate’s support, reflecting the statistical probability of variation owing to random sampling. The practical effect is that the results of the MinnPost poll with St. Cloud State and MPR/HHH are, in statistical terms, within range of each other. Put simply, the 2 points separating them may reflect random variation and may well not be a statistically meaningful difference.

What might be a “statistically meaningful difference” is that Survey USA and Rasmussen both came much, much closer – off by a quarter to a third as much as the Strib, HHH and St. Cloud polls – to getting the actual election right, and tracked much closer to the GOP’s internal polling, which turned out to be dead-nut accurate (as we’ll see tomorrow).

Figure 2 creates a zone of sampling error around estimates of support for Dayton and Emmer by the five media polls completed during the last two weeks of the campaign. In terms of the estimates of Dayton’s support, the MPR/HHH poll is within the range of all four other polls. Take home point: its estimate of Dayton’s support was consistent with all other polls.

Well, no.  It was consistent with the other polls that have developed a reputation for inaccuracy that inevitably favors the DFL.  The other polls – Survey USA, Rasmussen, Laurence – were not consistent with the Humphrey poll at all.

Frank Newport of Gallup responds to this:

It is unclear from the report how much the write‐up of results from the October 21‐25 MPR/HHH poll emphasized the margin of error range around the point estimates. Although this is not part of their recommendation, if the authors feel strongly that the margin of effort around point estimates should be given more attention, future reports could include more emphasis on a band or range of estimated support, rather than the point estimates.

In other words, if the Humphrey Poll is really a range with no particular confidence in any particular number within the range, publicize the range.

But that’s not what the Humphrey Institute, or the media, led with just before the election.  It was “DAYTON LEADS BY 12”.  Not “Dayton leads by 8 to 16, maybe, sorta”.

The distinction might make a difference.
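
For the record, the range isn’t hard to produce.  The standard margin-of-sampling-error formula, applied to a likely-voter sample of about 750 – an assumed figure for illustration, since I don’t have the HHH poll’s exact sample size in front of me – is what turns “Dayton by 12” into roughly “Dayton by 8 to 16”:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of sampling error, in percentage points,
        for a simple random sample of size n."""
        return z * math.sqrt(p * (1 - p) / n) * 100

    n = 750   # assumed sample size, for illustration only
    moe = margin_of_error(n)
    print(f"MoE on one candidate's share: +/- {moe:.1f} points")   # about +/- 3.6

    lead = 12
    print(f"Dayton +{lead - moe:.0f} to +{lead + moe:.0f}")        # "Dayton +8 to +16"
    # Strictly, the sampling error on a lead is larger than on a single
    # candidate's share; the naive band above is the one the headlines skip.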

This is generally not done in pre‐election polling, under the assumption that the point estimate is still the most probable population parameter. Any education of the public on the meaning of margin of errors and ranges and comparisons of the margins of errors surrounding other polls is an admirable goal. It does, however, again raise the question of the purpose of and value of pre‐election polls if they are used only to estimate broad ranges of where the population stands. This topic is beyond the scope of this review.

In other words – if you take Jacobs at his word, then there’s nothing really newsworthy about the HHH poll.

Do you suppose they’ll stick with that line in the runup to the 2012 election?

The Great Poll Scam, Part X: Weasel Words

I’ve been raising kids for a long time.  Before that, I grew up around a bunch of them.  Indeed, I was one myself, once.

And I know now as I knew then the same thing that every single person who watches Cops knows, instinctively; if you think someone did something, and their response is “you can’t prove it”, it’s the same as an admission of guilt.

Oh, it doesn’t stand up in court – and it’s probably a good thing.

And in the rarified world of academics – and its poor, profoundly handicapped accidental offspring, political public opinion polling – I’m going to suggest it works the same way.

If there is a poll that is, year in and year out, just as ludicrous as the Humphrey and Strib polls, it’s the Saint Cloud State University poll.  I haven’t heretofore included it in my “Great Poll Scam” series, because it’s sort of out of sight and out of mind.

But in David Brauer’s interview with Emmer campaign manager Cullen Sheehan, the director of the SCSU poll – which is done in conjunction with the MinnPost – a fellow named Stephen Frank tips us off; he concludes…:

Frank says. “Campaign managers like to find excuses rather than looking at their candidate or performance. Do you think if we stopped [publishing results] others would — or the candidates would and the latter won’t go public or only partially public?”

True, to a point.

But he began the statement by saying:

“Please show me one credible study that shows people change their mind on the basis of a poll,”

On the one hand:  “You can’t proooooooooove we did it!”

On the other hand – allow me to introduce you to Dr. Albert Mehrabian, who published a study entitled “Effects of Poll Reports on Voter Preferences”.

From the abstract summary, with emphasis added:

Results of two experimental studies described in this article constituted clear experimental demonstration of how polls influence votes. Findings showed that voters tended to vote for those who they were told were leading in the polls; furthermore, that these poll-driven effects on votes were substantial.

How substantial?  I don’t know.  As I write this, it’s 5AM, and I have no way of getting to the University of Minnesota library to find a copy of Journal of Applied Social Psychology (Volume 28).  But I will.

But Mehrabian noted a decided “bandwagon effect” in voter responses to poll results.

Effects of polls on votes tended to be operative throughout a wide spectrum of initial (i.e., pre-poll) voter preferences ranging from undecided to moderately strong. There was a limit on poll effects, however, as noted in Study Two: Polls failed to influence votes when voter preferences were very strong to begin with.

Bingo.

I’d have voted for Tom Emmer even if he did finish 12 points back, as the Humphrey Institute suggested.  Or ten points out of the game, as Frank’s survey (which I ridiculed in this space) had it, or thirty points back.  But then, nobody really doubted that.

But people who don’t live and breathe politics?  That’s another story – says Dr. Mehrabian.

Additional findings of considerable interest showed that effects of polls were stronger for women than for men and also were stronger for more arousable (i.e., more emotional) and more submissive (or less dominant) persons.

Which would be important, in a year when the DFL was worried about women flaking away from Dayton, and moderates being drawn (successfully!) to the Tea Party.

Wouldn’t it?

Especially noteworthy is my discussion of similarities and differences between the study methods and real- life political campaigns beginning with the middle paragraph on page 2128 (“Overall, results …).

I’ll dredge up a copy of Mehrabian’s study (unless any of you academics out there can shoot me a pointer…).

Mehrabian was cited in this study of the subject – “Social information and bandwagon behaviour in voting: an economic experiment”, by Ivo Bischoff and Henrik Egbert, a pair of German economists; the paper isn’t about polling per se – but it touches on it pretty heavily (all emphases are added by me):

The political science literature contains a number of empirical studies that test for bandwagon behaviour in voting. A first group of studies analyses data from large-scale opinion polls conducted in times of upcoming elections or on election days. The evidence from these studies is mixed (see the literature reviews in Marsh, 1984; McAllister and Studlar, 1991; Nadeau et al., 1997). One essential shortcoming of these studies is that it is very difficult to disentangle the complex interrelations between voting intentions, poll results and other pieces of information that drive both of the former simultaneously (Marsh, 1984; Morwitz and Pluzinski, 1996; Joslyn, 1997). Avoiding these difficulties, a second group of studies are based on experiments. Mehrabian (1998) presents two studies on bandwagon behaviour in voting. In his first study, he elicits the intended voting behaviour among Republicans in their primaries for the presidential election in 1996. He finds that the tendency to prefer Bob Dole over Steve Forbes depends on the polls presented to the voters. Voters are more likely to vote for Dole when he leads in the opinion poll compared to the situation with Forbes leading. The second study involves students from the University of California, Los Angeles. These are asked to express their approval to proposals for different modes of testing their performance: a midterm exam or an extra-credit paper. Mehrabian (1998) uses bogus polls in his studies. Results show that bogus polls do not influence the answers when subjects have clear and strong preferences. However, bogus polls have an impact when preference relations are weak. In this case, bandwagon behaviour in voting is observed. Next to Mehrabian (1998), there are a number of others experimental studies that find evidence for bandwagon behaviour in voting (Laponce 1966; Fleitas 1971; Ansolabehere and Iyengar 1994; Goidel and Shields, 1994; Mehrabian 1998).

It’s not an open-and-shut case, according to Bischoff and Egbert – but there is evidence to suggest that the “Bandwagon Effect” exists, and that polling drives it.
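
The mechanism Mehrabian describes is simple enough to sketch.  This is a toy simulation with made-up numbers – not a model of any real electorate – but it shows the finding in miniature: publishing a “leader” only moves the voters whose preferences are weak:

    import random

    random.seed(1)

    def vote(own_lean, strength, poll_leader, bandwagon_pull=0.3):
        """Toy model: a strongly-held preference ignores the poll; a weakly-held
        one is sometimes tipped toward whoever the poll says is leading.
        All parameters are made up for illustration."""
        if strength >= 0.5:
            return own_lean
        if random.random() < bandwagon_pull:
            return poll_leader
        return own_lean

    # Each voter: an initial lean and a preference strength between 0 and 1.
    voters = [("A" if random.random() < 0.5 else "B", random.random())
              for _ in range(10_000)]

    for published_leader in ("A", "B"):
        tally = {"A": 0, "B": 0}
        for lean, strength in voters:
            tally[vote(lean, strength, published_leader)] += 1
        print(published_leader, tally)   # the tally shifts toward whoever the "poll" anoints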

Is it possible that the learned Professors Larry Jacobs or Stephen Frank are unaware of this?  Certainly.

Given both polls’ lock-step consistency, especially at under-polling GOP support in close elections – elections decided by people with weak initial preferences, people whose “preference relations are weak”, as Bischoff and Egbert put it, which might well be as good a description of “independents” and “swing voters” as I’ve seen – it’s worth a look, though.

More from Dr. Mehrabian in the near future.

The Great Poll Scam Part IX: The Rockstar Who Couldn’t See His Face In The Mirror

Professor Larry Jacobs, in his defense of the Hubert H. Humphrey Institute poll – which always underpolls Republicans in its immediate pre-election survey, by an average of six points, with the tendency even more exaggerated in close races – writes (with emphasis added):

Appropriately interpreting Minnesota polls as a snapshot is especially important because President Barack Obama’s visit on October 23rd very likely created what turned out to be a temporary surge for Dayton. Obama’s visit occurred in the middle of the interviewing for the MPR/HHH poll; it was the only survey in the field when the President spoke on October 23rd at a rally widely covered by the press. Our write-up of the MPR/HHH poll emphasized that the President appeared to substantially increase support for Dayton and suggested that this bump might last or might fade to produce a closer race:

Well.  That kinda covers all the possibilities, doesn’t it?

Effect of Obama Visit: Obama’s visit to Minnesota on October 23rd and the resulting press coverage did increase support for Dayton. Among the 379 likely Minnesota voters who were surveyed on October 21st and 22nd (the 2 days before Obama’s visit), 40% support Dayton. By contrast, among the 145 likely Minnesota voters who were surveyed on October 24th and 25th (the 2 days after Obama’s visit) 53% support Dayton. This increase in support for Dayton could be a trend that will hold until Election Day, or it could be a temporary blip that will dissipate in the final days of the campaign and perhaps diminish his support.

Did you catch that?

Obama’s presence in the city caused Dayton’s numbers to boom by five points (if you take the HHH’s numbers at face value, something no well-informed person ever does), and then lurch downward by a dozen by election day?  The presence or absence of Barack Obama is responsible for one out of eight Minnesota voters changing their mind and changing it back inside of a week?
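
A back-of-the-envelope check, using nothing but the numbers in Jacobs’ own writeup (the margin-of-error formula is the standard one for a simple random sample):

    import math

    # Figures from Jacobs' writeup of the MPR/HHH poll.
    pre_n,  pre_dayton  = 379, 0.40   # interviewed Oct. 21-22
    post_n, post_dayton = 145, 0.53   # interviewed Oct. 24-25

    swing = (post_dayton - pre_dayton) * 100
    print(f"Implied swing toward Dayton: {swing:.0f} points in two days")

    def moe(n, p):
        """Standard 95% margin of sampling error, in percentage points."""
        return 1.96 * math.sqrt(p * (1 - p) / n) * 100

    print(f"Pre-visit subsample (n=379):  +/- {moe(pre_n, pre_dayton):.1f} points")
    print(f"Post-visit subsample (n=145): +/- {moe(post_n, post_dayton):.1f} points")

A 145-person subsample carries roughly eight points of sampling noise all by itself – worth keeping in mind before anyone treats a two-day swing of that size as real.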

Obama’s impact in temporarily inflating Dayton’s lead is a vivid illustration of the importance of using polls as a snapshot.

No.  The HHH poll’s impact in temporarily inflating Dayton’s lead is a vivid illustration of how these polls need to be disregarded or abandoned!

Indeed, according to the MPR/HHH poll, Dayton’s lead before Obama’s visit was 8 points – nearly identical to the Star Tribune’s lead at nearly the same point in time (7 points). Treating polls as snapshots, then, is especially important when a major event may artificially impact a poll’s results or, as in the case of the MPR/HHH poll, there were a large number of voters who were undecided (about 1 out of 6 voters) or were considering the possibility of shifting from a third party candidate to the Democratic or Republican candidate.

Read another way:  “They’re snapshots, so we can’t be held accountable.  But keep the funding and recognition coming anyway”.

The take-home point: polls are only a snapshot of what can be a fast moving campaign as events intervene and voters reach final decisions. Polls conducted closest to Election Day are most likely to approximate the actual vote tally precisely because they are capturing the changing decisions of actual voters.

Newport diplomatically notes the real “take-home point”:

The authors raise the issue of the impact of President Obama’s visit to Minnesota on October 23rd. The authors note, and apparently reported when the poll was released, that interviews conducted October 24th and 25th as part of the MPR/HHH poll were more Democratic in voting intention than those conducted before the Obama visit. It is certainly true that “real world” events can affect the voting intentions of the electorate. In this instance, if the voting intentions of Minnesota voters were affected by the President’s visit, the effect would apparently have been short‐lived, given the final outcome of voting. The authors do not mention that the SurveyUSA poll also overlapped the Obama visit by at least one day. It is unclear from the report if there is other internal evidence in the survey that could be used to shed light on the Obama visit, including Obama job approval and 2008 presidential voting.

Up next – at noon – what effect do bogus polls really have on voters?

The Great Poll Scam Part VIII: Snapshots That Never Come Into Focus

I was reading Larry Jacobs’ defense of the Humphrey Institute’s shoddy work this past election.

His first point in defense is that polls are “a snapshot in time”:

Polls do not offer a “prediction” about which candidate “will” win. Polls are only a snapshot of one point in time. The science of survey research rests on interviewing a random sample to estimate opinion at a particular time. Survey interview methods provide no basis for projecting winners in the future.

So far so good.

How well a poll’s snapshot captures the thinking of voters at a point in time can be gleamed [sic] from the findings of other polls taken during the same period. Figure 1 shows that four polls were completed before the final week of the campaign when voters finalized their decisions.

I read this bit, and thought immediately of Eric Cartman playing Glenn Beck in South Park last season; disclaiming loathsome inflammatory statements with a simple “I’m just asking questions…”

Frank Newport at Gallup responded to this particular claim:

[Jacobs and his co-author, Joanne Miller, begin] by discussing what they term a misconception about survey research, namely that polls are predictions of election outcomes rather than snapshots of the voting intentions of the electorate at one particular point in time. The authors present the results of five polls conducted in the last month of the election. The spread in the Democratic lead across the five polls ranged from 0 to 12. The authors note that the SurveyUSA poll was the closest to the election and closest to the actual election outcome. At the same time, the MPR/HHH poll was the second closest to Election Day and reported the highest Democratic margin. Another poll conducted prior to the MPR/HHH poll showed a 3‐point margin for the Democratic candidate.

Emmer’s internal poll showed a dead heat.  More on that later on this week.

Newport, with emphasis from me:

The authors in essence argue that the accuracy of any poll conducted more than a few days before Election Day is unknowable, since there is no external validation of the actual voting intentions of the population at any time other than Election Day. This is true, but raises the broader question of the value of polls conducted prior to the final week of the Election – a discussion beyond the scope of the report or this review of the report.

By inference, Newport is indicating that enough voters make up their minds right before Election Day to make pre-election polling essentially pointless.

Or is it?

Polling does affect people’s choices in elections; people don’t go to the polls when they know their candidate is going to become a punch line the next day; donors don’t turn out for races they are pretty sure are doomed.

And as I showed a few weeks ago, while Jacobs acknowledges that his poll is just a “snapshot” of numbers that may or may not have any bearing on the election itself, the Humphrey Poll’s results themselves are less “snapshot” than “slide show”; they have a coherent theme.  Election in, election out, they short the GOP, especially in tight elections.  Every single significant election, no exceptions.  Tight GOP wins (2006 Gubernatorial), comfy Democrat wins (2008 Presidential), squeakers (2008 Senate, 2010 Gubernatorial) – in every single one, without exception, without the faintest hint of the random “noise” that would indicate a random pattern, the HHH poll systematically shorts the GOP.
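
If those misses really were random noise, the odds of the coin landing the same way every time are easy to figure.  The election counts below are assumptions for illustration – I’m roughly counting the statewide races mentioned above – but the point survives any reasonable count:

    # If each election's polling error were a fair coin flip - equally likely
    # to short either party - what are the odds of shorting the GOP every time?
    # The election counts are assumptions for illustration.
    for elections in (4, 6, 8):
        print(f"{elections} elections: {0.5 ** elections:.3f}")
    # 4 elections: 0.062, 6 elections: 0.016, 8 elections: 0.004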

Given the completely non-random nature of this pattern – every election, no exceptions – there are three logical explanations:

  • The Humphrey Institute genuinely believes in the soundness of its polling methodology, which systematically (in the purest definition of the word) shorts GOP representation.
  • The Humphrey Institute is unable to change its methodology, or is structurally incapable of learning from its mistakes.
  • The Humphrey Institute is just fine with the poll’s inaccuracies, because it serves an unstated purpose.

To read Jacobs’ defense, you’d think…:

  • …that there’s nothing – nothing! – the HHH can do about fixing the inaccuracies of its “snapshot”, and…
  • …it’s all a matter of timing.

As we see elsewhere in the coverage of the Humphrey (and Strib) polls, both are false.

More later this week.