Are some polls wrong?
Polling drives the Democratic debacle narrative. Generic horse-races
fan the flames, aided by a host of polls in states and districts across
the country. We followed the odyssey of the bouncing generic vote in
previous columns. But what if the race-specific polls around which we
huddle each day, like insects swarming to a bright light on a summer
night, were systematically flawed? Would our narrative sound at least a
little different?
Explore with me three sources of systematic error that may upend what we think we know about several contests.
Defining likely voters: A recent series of CNN/Time polls, which report results among both registered and “likely voters,” brings into bold relief the centrality of identifying likely voters. In Nevada, for instance, the poll finds Sen. Harry Reid (D) with an 11-point lead among registered voters — a fact you’d be hard-pressed to unearth in the panoply of press generated by this survey. The reason: Analysts focus on “likely voters,” and among those designated likely to vote by CNN/Time, Sharron Angle ekes out a two-point advantage.
Unfortunately, CNN/Time do not reveal how they determine who is more or less likely to vote, but many firms use questions about enthusiasm even though research conducted by my firm and others demonstrates no link between an individual’s enthusiasm and his or her likelihood of turning out. So the poll may simply designate the wrong people as “likely voters.”
Even if one assumes pollsters have accurately distinguished between likely and less likely voters, a steeper hurdle remains. As I noted here some years ago, calling someone a “likely” voter is to make a probability statement. A likely voter may have, say, an 80 percent chance of turning out, while a “less likely voter” may have only a 20 percent chance of casting a ballot. In that scenario, 20 of every 100 likely voters will not show up, while 20 of every 100 less likely voters will. No real electorate is composed exclusively of “likely voters” — a fact evident from voter file data. In state after state, 30 to 40 percent of those who participated in the 2006 midterm were not consistent voters.
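The probability logic above can be made concrete with a short calculation. The 60/40 split between the two groups below is an illustrative assumption of mine, not a figure from any poll; only the 80 percent and 20 percent turnout probabilities come from the example in the column.

```python
# Illustrative sketch: what share of the actual electorate comes from
# voters a pollster screened out as "less likely"?
# The 60/40 registered-voter split is a hypothetical assumption.
likely_share, less_likely_share = 0.60, 0.40     # shares of registered voters
p_turnout_likely, p_turnout_less = 0.80, 0.20    # turnout probabilities

# Expected voters contributed by each group
voters_likely = likely_share * p_turnout_likely        # 0.48
voters_less = less_likely_share * p_turnout_less       # 0.08

less_likely_fraction = voters_less / (voters_likely + voters_less)
print(f"{less_likely_fraction:.0%} of actual voters were 'less likely'")  # 14%
```

Even under these assumptions, a meaningful slice of the real electorate comes from screened-out respondents; the voter-file figures the column cites — 30 to 40 percent of 2006 midterm participants were not consistent voters — suggest the screens leak far more than this sketch does.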
Consider the arithmetic impact in the Nevada example. CNN/Time’s results mean that Reid leads among those deemed less likely voters by a vast 30 points. Thus, by that poll’s own reckoning, if just 30 percent of the electorate is composed of “less likely voters,” Reid would hold a nearly eight-point lead.
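The “nearly eight-point” figure is a simple weighted average of the two margins. A minimal sketch (variable names are mine; the margins and the 30 percent share are the ones stated in the column):

```python
# Blend the two subgroup margins from the CNN/Time Nevada example.
margin_likely = -2        # Angle +2 among "likely voters" (Reid's margin)
margin_less_likely = 30   # Reid +30 among "less likely" voters
less_likely_share = 0.30  # assumed share of the actual electorate

blended = ((1 - less_likely_share) * margin_likely
           + less_likely_share * margin_less_likely)
print(f"Reid's blended margin: {blended:+.1f} points")  # +7.6
```

At a 30 percent less-likely share, 0.7 × (−2) + 0.3 × 30 = +7.6 — the column’s “nearly eight-point lead.”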
Polling only the easy-to-reach: Some people are harder to reach on the phone than others, a factor unrelated to their propensity to turn out. Good pollsters go to great lengths to secure a completed interview with the respondent originally identified at random. Willy-nilly substitution produces not a random sample, but rather a sample of easy-to-reach voters, who may differ from others. Lo and behold, they are different. One of our clients led by just two points among those interviewed on the first or second attempt, but by nine among harder-to-reach respondents who required three or more calls. In another case, an 11-point margin among the easy-to-reach expanded to 16 among difficult-to-find respondents. IVR polls and others completed in just one or two days necessarily survey only the easiest to reach, producing results generally biased against Democrats.
Cell phones: IVR polls, like Rasmussen’s, are precluded by law from calling cell phones. Our client in one race leads by 19 points among those reached on cell phones but is nearly tied among those reached on a landline. A recent Pew study found a smaller, but still significant, bias against Democrats on the generic vote from failing to reach cell-phone households.
As a result of these methodological shortcomings, many polls may lead to a net loss of knowledge about the races they purport to measure.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.