To be fair, it’s actually a very good article with important points about the difficulties in achieving decently reliable polling. I must admit that overall I have great respect for Michael Barone and his expertise in the polling domain. And in the end, he concludes that the polls are skewed decidedly leftward.
But he, along with many others, still has a terrible case of the “can’t see the forest for the trees” disease. Because accurate sampling models are much more difficult to achieve these days, he does not believe that the polling agencies are deliberately skewing the results in order to influence public attitudes. Curiously, he draws a distinction between “media pollsters” (no ulterior motives) and “partisan pollsters” (ulterior motives). Seriously, Mike? In what way are WaPo/ABC/NBC/USAToday/NYT/Pew/PPP not desperately partisan, and all in one direction?
He points out how ridiculous it is for those presumably honest media pollsters to use turnout and party-identification weightings that are even more left-skewed than 2008’s. He notes that the pollsters themselves are most certainly aware that 2008 was a high-water mark for Democrats, and that the cold industrial-strength slapdown of 2010 has moved reasonable expectations for party ID and turnout models rather decidedly rightward. Yet he won’t take the next step: if you know A is true and B is false, but you use B (about 50 times), that is not merely inaccurate but deliberately deceitful.
Hanlon’s Razor notwithstanding
I’m well aware of Hanlon’s Razor: “never attribute to malice what can be adequately explained by incompetence.” It’s a clever aphorism, all the more delightful because it speaks true wisdom. But just like Occam’s Razor, the corollary must also hold (because, contra the rule, Kennedy was certainly killed by assassins not named Lee Oswald). Sometimes the patina of incompetence is a convenient cover for evil intent. I will argue strenuously here that such is certainly the case with these pollsters. This takes a bit of explaining, so please hang with me for a bit.
Barone, to his discredit in this article, allows a common misperception to stand by blurring the important distinction between sampling and weighting. The first is polling methodology; the second is processing the raw polling results in order to account for weaknesses or variations in that methodology. Both are complex, both are potentially subject to errors, and both are rife with potential for abuse, should the pollster be less than honest.
The likelihood that polling difficulties tend to leave conservatives naturally undersampled is widely reported but largely irrelevant. The real question is this: for each of the subdivisions of the group you are sampling (by party ID, age, race, what city/state they live in, income level, and various other demographics), is that SAMPLE representative of the same demographic group in the voting population at large? Let’s say you were conducting a poll on the presidential election between Obama and Romney, with no third-party candidates and no undecideds, just to keep this simple. You made 10,000 polling contacts, which generated a sample size of 1,000 likely voters after screening. The breakdown in party ID (for example) might be a ratio of 24-36-40 (for R-I-D). The naked results of that poll are NOT reported as is, but we’ll get back to this.
In this case, the sample size of self-identified Republicans is 240 (24% of 1,000). Among those 240 people, the breakdown was 84% Romney, 16% Obama. Now, did your polling methodology give you a realistic picture of the views of self-ID’d Republicans across the land? (The same question applies to the Democrat and Independent samples as well.) That question is the most complex and most important issue with this kind of polling. So many people have cell phones only, so many people don’t want to talk to pollsters, and a few people are for all intents and purposes unreachable. Do those hard-to-reach groups have the same political breakdown as the general population? Almost certainly not.
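To put a number on why that 240-person subsample is so touchy: a subgroup carries much more sampling noise than the full 1,000-person poll. Here is a quick back-of-envelope sketch using the standard normal-approximation margin of error; the 84-16 split and sample sizes are the hypothetical figures from above.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p seen in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# The hypothetical Republican subsample: 240 people, 84% Romney.
moe_sub = margin_of_error(0.84, 240)
# The full 1,000-person likely-voter sample, at a 50-50 split (worst case).
moe_full = margin_of_error(0.50, 1000)

print(f"240-person subsample MOE: +/- {moe_sub * 100:.1f} points")   # ~4.6
print(f"1,000-person full sample MOE: +/- {moe_full * 100:.1f} points")  # ~3.1
```

So even before any weighting mischief, the subgroup numbers a pollster feeds into the weighting stage are the least precise part of the poll, which is why honest sampling matters so much.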
In addition to that simplest grouping, did that 240-person sample have realistic ratios of male to female, income grouping, and geographic distribution? If not, then your weighting of results (the next stage) may have to account for those things. I’ll stop there. You can see how almost immediately the pollster’s job gets more difficult. But a large sample size is a cure for many ills.
But really, for the sake of this discussion, let’s say the pollsters got the sampling right, or in the ballpark (“close enough for government work,” LOL…). The important next step is weighting. In my opinion, this is where those left-wing-media polling outfits are committing crimes against decency. If your polling sample for party ID broke down to 24-36-40 (R-I-D), but you judge the party ID of the overall likely-voter population to be 32-35-33, then a bit of fairly easy math is used to shoehorn that 84-16 ratio among your Republicans into that 32% of the final poll result. If the sample size is decent, this is a statistically valid thing to do, and it is what I believe they all do.
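That “fairly easy math” is just a weighted sum, and it shows exactly how much the party-ID model moves the topline. The sketch below uses the 84-16 Republican split from the example above; the candidate splits among Independents and Democrats are my own invented placeholders, there only to make the arithmetic concrete.

```python
# How the assumed party-ID distribution drives the reported topline.
# Romney's share among Republicans (84%) is the article's hypothetical;
# the Independent and Democrat splits are invented for illustration.
romney_share = {"R": 0.84, "I": 0.50, "D": 0.10}

raw_party_id      = {"R": 0.24, "I": 0.36, "D": 0.40}  # as actually sampled
modeled_party_id  = {"R": 0.32, "I": 0.35, "D": 0.33}  # pollster's turnout model

def topline(party_id: dict) -> float:
    """Overall Romney share as a weighted sum across party-ID groups."""
    return sum(party_id[group] * romney_share[group] for group in party_id)

print(f"Romney, raw sample weights:   {topline(raw_party_id) * 100:.1f}%")   # 42.2%
print(f"Romney, modeled ID weights:   {topline(modeled_party_id) * 100:.1f}%")  # 47.7%
```

Under these toy numbers, the same raw interviews produce toplines more than five points apart depending solely on the party-ID model chosen. That is the lever this whole argument is about.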
My beef is with the models these pollsters use to determine that demographic breakdown: weighting models that favor Democrats in a way that is wholly implausible and frankly indefensible.
My beef with Barone is a relatively minor one: he dwells on the question of why Republicans are being undersampled. The unschooled reader might think that this undersampling is the same thing as underweighting, and it is not.
About that Hanlon’s Razor
But are the pollsters REALLY weighting these numbers to effect a political aim, or are they just dumb? Or in the alternative, perhaps I am the dumb one, or perhaps I have let my political viewpoint unjustifiably taint my view of the voting population.
Regarding the fact that seemingly all the polls oversample Democrats: certainly groupthink is there, in the sense that the media fields are a natural magnet for a whole class of people who are less interested in telling the truth to the world, no matter where it leads, than in “changing the world” in a sense that is not democracy-minded. There is enough (overwhelming) evidence to conclude that the media outlets who sponsor the pollsters are wholly consumed with propping up the Democrat and tearing down the Republican. Please, dear God, let us have no reason to debate that astoundingly obvious truth. I point it out to establish that it is not outlandish to posit that the media-sponsored pollsters are possessed of similar malice.
Now let us speak to their competence.
Every single major polling firm, including the in-house ones at the media outlets, is backed by some doctoral-degreed professional pollster with years and years of polling experience. Each one got to his lofty position in national polling because he excelled at lower levels of polling. The minor leagues, as it were. These guys know all about weighting, samples, and all manner of statistical models. These are not dummies. They know all about trends in demographics too, and about shifting electoral politics. They know about voter enthusiasm, media and ad influence, current news, and economic conditions, and how such factors might influence turnout.
There can be no possible justification for using 2008 weighting models, for starters, as any clod with a supra-80 IQ can see that this was a high-water mark for Democrats. There were conditions at the time that made 2008 unique (economic collapse, war-weariness, that shiny new “hope and change” Democrat guy, and a grumpy sourpuss crusty old Republican guy), and there have been conditions post-2008 (the 5 years of bad economy, the “new and shiny” wearing off, the 2010 elections) that show beyond a shadow of a doubt that 2008 cannot possibly be repeated. In particular, the 2010 elections showed an epic voter repudiation of the administration. And yes, that vote was specifically about the Democrat Party and the administration.
Separate polls of voter enthusiasm show Republicans soaring over the Democrats. Pollsters cannot possibly fail to know this. I mean that. It is not possible to be a high-ranked professional political pollster and not recognize that 2008 was a high-water mark for Democrats. You are asking a math professor to get the 2 + 2 = x question wrong — twice a week for 6 months.
And yet they model not the 2008 numbers, but 2008 shifted a further 5-8 points toward the Democrats. That’s not merely unrealistic. It’s not merely dumb. It’s not even merely dishonest. Such weighting models are built for a driving reason.
Come on, Barone. Connect the very easy dots here. It’s malice. Period.