How to Push Phony Poll Numbers in the Public Square

Deep-pocketed companies can get the poll results they want, which is a good reason not to believe industry-supported polls

A Vanderbilt University poll has shown that regardless of political affiliation, Tennesseans strongly support returning Sudafed and other cold medications that contain pseudoephedrine (PSE) to prescription-only status. This policy has been proven in other states to virtually eliminate meth labs, so it’s unsurprising that Tennesseans would endorse it after years of suffering from the fires, explosions, burns, poisonings, destroyed property and lost tax dollars that the labs cause. Yet State Senate Speaker Ron Ramsey was shocked by the poll’s result. To understand why, one has to appreciate how a deep-pocketed special interest group can finance polls that create a false impression of what voters really want.

The Consumer Healthcare Products Association (CHPA) represents the manufacturers of pseudoephedrine-containing cold medications that are easily converted to meth. Unlike the pharmaceutical companies that produce the same medications in extraction-resistant formulations, the manufacturers represented by CHPA take in hundreds of millions of dollars a year from meth cooks. As documented by journalist Jonah Engle, CHPA’s desire to protect this enormous revenue stream has led it to spend unprecedented amounts of money lobbying state legislatures to not return PSE-containing medications to prescription-only status.

But CHPA doesn’t just pursue its interests through lobbying. It also releases polls that overstate the public’s agreement with the positions of its corporate clients. That’s why Speaker Ramsey was “amazed” to find that Democrats, Independents and Republicans (including 64% of Tea Party Republicans) in his state overwhelmingly support returning PSE-containing cold medications to prescription-only status. Ramsey, like many other people, had been misled by a press release about a CHPA-sponsored poll claiming that 56% of Tennesseans were opposed to the policy.

CHPA released similar polls in Oregon and Mississippi prior to those states returning PSE-containing cold medications to prescription-only status. In both cases, the policy has been popular with voters and no legislator has been voted out of office for supporting it. The Tennessee poll and others like it are thus a continuation of a well-established corporate strategy of spreading misinformation about public preferences.

Many people assume that wealthy interest groups generate phony poll results by hiring completely dishonest pollsters. That does sometimes happen, but it isn’t necessary for the production of a deceptive polling result.

Only McAuliffe’s Polling was Accurate in Virginia

Why was Terry McAuliffe the only candidate in the Virginia Governor’s race whose support was well-predicted by polls?

The closeness of the Virginia Governor’s race surprised many political observers. To understand why, take a look at this table, which was created from Real Clear Politics’ helpful summary of the eight polls conducted in the week leading up to election day (October 30-November 5). The top line shows the average (not weighted for poll sample size) performance of the three candidates in the eight polls, including the “don’t know” response option. The second line drops the “don’t knows” and reports how much support each candidate had among those poll respondents who expressed a preference. The third line is the actual election day result. McAuliffe is the only one who performed pretty much as expected.

[Table: Virginia poll averages, with and without “don’t knows”, versus the actual election result for McAuliffe, Cuccinelli and Sarvis]
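The second line’s renormalization is easy to reproduce. Here is a minimal Python sketch of the two steps, using made-up numbers in place of the actual Real Clear Politics averages:

    # Illustrative stand-ins, NOT the actual eight-poll averages.
    raw = {"McAuliffe": 45.0, "Cuccinelli": 40.0, "Sarvis": 9.0, "Don't know": 6.0}

    # Drop the "don't knows" and renormalize among decided respondents.
    decided = {k: v for k, v in raw.items() if k != "Don't know"}
    total = sum(decided.values())
    shares = {k: round(100 * v / total, 1) for k, v in decided.items()}
    print(shares)  # {'McAuliffe': 47.9, 'Cuccinelli': 42.6, 'Sarvis': 9.6}

Dropping the undecideds always nudges every candidate’s share upward, which is worth remembering when comparing poll numbers to election-day totals.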

Sarvis’ support was grossly overstated in closing-week polls, by a factor of about 1.5 to 1. I am comfortable calling this eminently predictable because I predicted it. It’s simply mathematically harder to predict rare events (e.g., third-party votes) than common events; even a few days before the election, one poll had Sarvis at double the level of support he actually received.

Less predictable and therefore more intriguing is how Cuccinelli outperformed his polls, whether you analyze them in the aggregate, as the table does, or individually (none of the eight had him as high as his actual total). One could speculate endlessly about why this happened, but I am partial to Chris Matthews’ view of polling: if pollsters sounded like Archie Bunker instead of staid professionals, there’s a slice of the electorate who would be more forthcoming about their less-than-enchanting political views.

Will a Third-Party Candidate Get 10% Support for Governor of Virginia?

Polls tend to overstate the support of third-party candidates like Robert Sarvis of Virginia

Doug Mataconis is impressed that third-party candidate Robert Sarvis is polling as high as 10% in the Virginia Governor’s race.

Color me skeptical, not because of anything specific to Sarvis but because of statistics. As the true level of a candidate’s support gets farther from 50% in either direction, polls become less and less accurate. To quote myself from a post that lays out the math in more detail:

It is simply harder to predict events that are unlikely than events which are likely. If a fair coin is being flipped over and over and you have to guess on which particular flip it will come up heads, you’ve got a 50-50 shot of winning the game. But if the same game is played with an unbalanced coin that comes up heads only 1% of the time, you will almost certainly not guess the right flip, even if you are allowed to play many times. Indeed, any system you might use to predict when the elusive heads result will occur will be less accurate over time than simply predicting that the coin will never come up heads no matter how many times it is flipped.

As I show in the linked post, a 90% accurate poll including a candidate who actually has 1% support will estimate his/her support at 11%. And when your support is really low, most errors in estimation can only go in one direction: Upward.
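For concreteness, here is that arithmetic as a minimal Python sketch. The record-it-or-flip-it error model is my simplification of the setup in the linked post:

    # Expected measured support when each respondent's answer is
    # recorded correctly with probability `accuracy` and flipped
    # otherwise (a simplified stand-in for the linked post's model).
    def poll_estimate(true_support, accuracy):
        return accuracy * true_support + (1 - accuracy) * (1 - true_support)

    print(poll_estimate(0.01, 0.9))  # ~0.108: a 1% candidate "polls" at about 11%
    print(poll_estimate(0.50, 0.9))  # ~0.500: no distortion at the midpoint

Note how the distortion vanishes at the 50% midpoint and balloons at the extremes.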

Sarvis could matter in the tight Virginia Governor’s race even if he only nets a few percent of the vote. But the likelihood that he truly has the support of 10% of Virginia voters is low.

There is No Such Thing as “the” Senior Vote

Political prognosticators tend to think of “the” senior vote as less diverse than it really is

Charlie Cook and Erica Seifert’s analyses of senior citizens’ voting intentions have drawn a significant amount of attention (see Kevin Drum’s take here, Ed Kilgore’s here). While not uninformative, such analyses may rest on the mistaken assumption that there is such a thing as “the” senior vote. When senior citizens were a small part of the population, it might not have mattered in predicting electoral outcomes that their voting intentions were discussed as a lump. But with the senior population currently at 41.4 million and growing rapidly, underestimating its diversity could lead to serious political forecasting errors.

The current senior population includes veterans of World War II and veterans of the Vietnam War; African-Americans whose adult lives were lived entirely after the passage of the 1960s Civil Rights Acts, as well as many who lived for decades under Jim Crow; women who made decisions about marriage and career prior to the founding of NOW and women who made them afterwards; people who liked Ike and people who were too young to vote for him. People just turning 65 thus see many political issues differently than do oldsters who are 75 or 85.

In analyzing the youth vote, political prognosticators focus on a narrow 12-year birth cohort (i.e., people aged 18-29). Yet when they analyze seniors, they treat anyone from age 65 to 105 as an undifferentiated mass. As the country’s grey ranks continue to swell in the coming years, this oversight will become ever more problematic for electoral forecasting efforts.

The Most Misleading Feature of Public Opinion Polls

The biggest inaccuracy of polls is the inbuilt assumption that respondents care about the questions they are asked

There are many ways, either through error or chicanery, that a poll can misrepresent public opinion on some issue. For example, the chosen sample can be unrepresentative, the questions can be poorly worded, or, as in this classic demonstration from Yes, Minister, respondents can be led by the nose to give a certain answer.

Yet none of those problems is as serious as the one that afflicts almost every poll: The presumption that those polled care a whit about the issue in question. Whoever commissioned the poll of course considers it important, but that is no guarantee that respondents have ever thought about it before they were polled, or will act on their opinions in any way afterwards.

Advocacy organizations exploit this aspect of polls relentlessly. If the Antarctic Alliance polls 1000 people and asks “Would you like it if there were a law that protected penguins?”, probably 80% of people will say yes because it’s hard to hate on penguins: They are always well-dressed, they waddle in a cute way and many people are still feeling bad for them because of that egg they lost in that movie where they marched all that way in the cold — what was it called? — anyway, man that was sad, so yeah, happy to tell a pollster that we should protect those furry little guys.

Antarctic Alliance will then argue that Congress should pass the Protect the Penguins Act immediately because their new poll shows that 80% of Americans “want penguins to be protected”. But if you asked those same poll respondents if they’d be willing to donate even $10 to help the law pass, most of them would say no. And if you asked them whether they would decide their vote for their Congressional Representative on the basis of how s/he responded to the Protect the Penguins Act, most of them would say no. And if you asked them the open-ended question “What are the 10 biggest challenges Congress should be addressing now?”, probably none of them would put penguin protection on their list.

To give a darker variant of this problem, gun control laws generally poll well yet don’t pass. How can we not pass something that we “support”? Easily, if the people who say they support it are not willing to do much to see it pass and the people who are against it are willing to do a lot. Polls usually miss this sort of nuance because they don’t assess how much people care about what they are being polled about.

The few polls that somewhat surmount this problem are those that assess voting intentions only among people who intend to vote, and those that try to assess how intensely people feel about the opinions they express (e.g., with a follow-up question such as “Would you be willing to have your taxes rise to make this happen?”).

The only way I can see to consistently avoid the problem of assuming that respondents care about the issue at hand as much as poll commissioners do is to expand the usual response format of “yes, no, or don’t know” to include the option “don’t care”. But I doubt pollsters would ever do this, because telling their clients that most people simply don’t give a fig would put them out of business.

A Primer on Outlier Polls

Sometimes a single poll diverges from the pack of results generated by everyone else. How can you tell when the pollster is doing a better job of picking up a new trend versus simply being wrong?

Peter Kellner offers an educative account of how these events occur, using as an example a poll that shows two political parties in a neck-and-neck race while every other poll has one party ahead. It’s a UK example, but that doesn’t matter; the value of the essay is its discussion of how to weight poll respondents who say they have no preference, how small samples affect conclusions, and the like.

The article is worth reading both for its clarity and its modesty. Kellner’s own polling firm disagrees with the outlier poll, but he remains balanced and gentlemanly throughout his critique.

Learnt climate helplessness: an Xmas puzzle

A survey on American attitudes to climate change consistently gives self-contradictory results.

A holiday post-prandial puzzle for you.

Another chart of the attitudes of Americans to climate change, from a long-term Yale/GMU project – it’s nice to know that GMU has some reality-based faculty, unconnected to the Koch payroll at the Mercatus Center. As usual, apologies for the poor-resolution screengrab. Better version in the source pdf, page 10.

This is more than odd. In the latest survey, only 32% of respondents who accept climate change thought that the carbon-saving actions they’d taken or were considering would “reduce my contribution to global warming a lot/some” (call this question A). Then (question B) 60% agreed that “if most people in the USA did these same actions, it would reduce global warming a lot/some”. Question C was the same, extended to “most people in modern industrialized countries”: 70% agreed.

On the face of it, this combination of positions is contradictory; a logical mistake of the same order as the one found in Kahneman and Tversky’s famous Linda experiment.

How Obama just won Ohio: moderate isolationism

In repeatedly talking about “nation building here at home,” Obama tapped into the one feeling ardently held by American voters that is unmentionable in polite company: moderate isolationism.

In 2004, John Kerry said in his acceptance speech at the Democratic convention,

we shouldn’t be opening firehouses in Baghdad and shutting them in the United States of America.

The line got huge applause. Christopher Hitchens noted and feared this, calling it “one of the sourest and nastiest and cheapest notes to have been struck for some time.” But Kerry knew a good line when he heard it, and re-used it endlessly in his stump speech and the debates—which he won.

Kerry played Mitt Romney in Obama’s debate prep. He taught some lessons that Obama used last night. Obama’s version of the same riff was:

the other thing that we have to do is recognize that we can’t continue to do nation building in these regions. Part of American leadership is making sure that we’re doing nation building here at home.

…and it wasn’t an accident or a minor point: Obama used versions of the line four times, unprompted.

I haven’t seen a single commentator noting the line. But I’ll wager it played a big role in convincing undecided voters to give Obama a huge lead, 30 points, in the CBS instapoll. Though I can’t find the article, I remember a Kerry aide from 2004 commenting, a bit uncomfortably, that swing voters, who then as now tended to be low-information voters, were particular fans of the firehouse spiel.

Washington is a city of self-styled internationalists. (It would be bad manners to say “militarists,” much less to note that the Pentagon is a huge driver of the local economy, along with lobbying.) There’s a strong institutional bias in favor of candidates who call for higher military spending, lots of military interventions, and a hair-trigger attitude towards crises. But the American people have always been much more leery of military spending and foreign wars than the political class is. Scott Rasmussen—yes, that one—noted the disjuncture last month, in explaining why Republican efforts to make higher military spending a campaign winner were destined to fail. Polls on military spending are so unfavorable to the Republican position that Obama is running ads attacking Romney for wanting to spend more on defense. Military spending hikes are favored by 58 percent of Republicans—but only 40 percent of all voters.

American isolationism has very large costs. It drives our shocking lack of policy learning—our unwillingness to learn from other countries that do anything better than we do—as well as relative indifference to global problems from hunger to climate change and beyond. But it also has its benefits: the war machine that Romney and the neocons would like to sell, the public isn’t buying.

This is a line of attack likely to fly under the radar of elites, or even offend them. But this is a democracy. And I think Obama just won Ohio.

Why Polls of Third-Party Candidate Support are Usually Wrong

Libertarian candidate Gary Johnson’s website boasts that the third-party candidate is “polling nationally from 2.4% to 9% and various states have him polling up to 15%”. Like polls of the support of countless minor political candidates in the past, these numbers are almost certainly wrong, for an intriguing statistical reason.

Imagine a poll about a candidate named Smith who represents a major party and has, in truth, 40% support in the population. Imagine further that the poll is accurate 90% of the time. The other 10% of the time (due to leading questions, pollster error, voter confusion, etc.) the poll predicts that someone who in fact will vote for Smith will not do so, or that someone who will in fact vote for someone else will vote for Smith.

To keep the example simple, assume that the poll is only concerned with whether people will vote for Smith or not, where the non-Smith category includes voting for candidates Jones, Green, or Wilson, or not voting at all. Again for simplicity, assume the poll surveys 100 voters, so that counts of voters translate directly into percentages. The table below shows what the poll will conclude.
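Poll of 100 voters, 90% accurate, Smith’s true support 40%

                               Counted for Smith   Counted for another   Total
    Actual Smith voters               36                    4              40
    Actual non-Smith voters            6                   54              60
    Poll’s conclusion                 42                   58             100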

The poll predicts that Smith will garner 42% of the vote (i.e., the poll will count 42 of the 100 voters it surveys as Smith supporters). These 42 votes are counted in the poll correctly in 36 cases and incorrectly in 6 cases (10% of the voters who really aren’t going to vote for Smith got counted as supporting Smith). The 42% result is wrong but it’s not bad at all as an estimate, whether you compare raw numbers (40% vs. 42% support) or compare the size of the error to the base rate of support (2% is only 5% of Smith’s true support of 40%).

The estimate is in the right ballpark because Smith’s level of support is near 50%. Indeed, if his support were in fact 45%, the poll would be even more accurate, despite its 10% error rate. In contrast, imagine that Smith’s true level of support is far from the midpoint, for example 10%. The same poll, with the same number of respondents and the same error rate, would put Smith at 18%: 9 of his 10 actual supporters counted correctly, plus 9 of the 90 non-supporters miscounted as Smith votes. The estimate is nearly double Smith’s true level of support.
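For readers who want to try other values, here is the same bookkeeping as a minimal Python sketch (the function and variable names are mine; the 90% accuracy figure comes from the example above):

    # Tally a 100-voter poll: supporters counted correctly at rate
    # `accuracy`, non-supporters miscounted as Smith votes at rate
    # 1 - accuracy.
    def poll_counts(true_supporters, accuracy=0.9, n=100):
        hits = accuracy * true_supporters                    # correctly counted supporters
        false_hits = (1 - accuracy) * (n - true_supporters)  # miscounted non-supporters
        return round(hits), round(false_hits), round(hits + false_hits)

    print(poll_counts(40))  # (36, 6, 42): within 2 points of the truth
    print(poll_counts(10))  # (9, 9, 18): nearly double the truth

The farther the true support falls below 50%, the more the miscounted non-supporters dominate the estimate.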

Approval Ratings of Congress, Parties and The President: Apples and Oranges

How often in your life have you heard a political commentator say something like “Well 50% of Americans may disapprove of the job the President is doing, but he is still better off than Members of Congress, of whom 70% of Americans disapprove”?

Countless op-eds, essays and news stories run along the same lines. Typically, they try to forecast elections by analyzing presidential approval ratings and Congressional approval ratings (or approval ratings of one of the parties).

But approval ratings of groups of politicians can’t be interpreted in the same fashion as approval ratings of individual politicians, particularly if we are trying to guess what will happen in an election. At least three flies are in the ointment:

(1) Everyone who responds to a poll about Presidential approval is expressing an opinion about the same person. But poll respondents who express approval or disapproval of a large group of people (e.g., Congress or the Democratic Party) could be giving an opinion about different individuals or subgroups within that greater whole. Their opinions therefore can’t be reasonably aggregated as if they had the same meaning. For example, Nancy Pelosi- and Harry Reid-loving respondents’ disapproval of “Congress” may refer to how they loathe the Tea Party Caucus, whereas Tea Party respondents’ disapproval may reflect how they detest Nancy Pelosi and Harry Reid.

(2) Ever wonder why pre-election polls often show that voters are overwhelmingly hostile to incumbents yet the results of the ensuing elections indicate that those same voters went out and supported an incumbent? Human beings have a self-serving cognitive bias when they make personally relevant judgments. If you ask a smoker “What proportion of people who smoke just like you will get lung cancer?”, and then ask “What is your own chance of getting lung cancer?”, the smoker will usually explain why, for some reason or other, their personal risk is lower than what they quoted for the group of people who smoke just like them. The same phenomenon can be at play when someone tells you that 90% of the Congress are bums who should be thrown out of office, but also maintains that “Good Old Representative Smith” in their own district happens to be in the 10% of paragons on the Hill. You can’t play this self-serving cognitive game with yourself when a pollster asks you whether you approve of the President, because we all have the same President. If you think my President is a bum, by definition you think yours is too.

(3) Everyone can vote for the President, but no one gets to vote for more than a small slice of “the Congress” or of one of the major parties. If you disapprove of the President, you have the power to act on the object of your disapproval when you vote. But even if you loathe most of the Congress and/or one party, you don’t have much power to translate those attitudes into action in your voting. That’s another reason why Presidential approval ratings can’t be interpreted in the same frame as generic party or Congressional approval ratings.

How can you compare apples to apples when forecasting elections? Analyze data from those polls that follow questions about approval of the President with queries about approval of the Congressional Representative for the respondent’s own district and each of the individual Senators from the respondent’s home state.