Sunday, June 21, 2015

Cliff Zukin Explains How Voter Polling Has Become Less Reliable

This is interesting.

Social media companies, marketers, the phone companies, Google, etc. are tracking our every move and finding out more and more about us. Our lives are an open book. So it seems counterintuitive that pollsters should be having a harder and harder time predicting our voting behavior. Yet, as we know, many pollsters have fared badly in predicting the results of recent elections, both here and abroad. Why would that be?

Cliff Zukin, professor of public policy and political science at Rutgers, explains why in a New York Times article. It's not just that pollsters are blinded by partisan interests: the internet and the ubiquitous use of cellphones are making scientific polling harder and more expensive to do.

Until about ten years ago, statistical sampling for voter polls was done by calling randomly selected landline telephone numbers. On the whole, people contacted this way were willing to cooperate with pollsters, and the random selection allowed statisticians to apply solid statistical models.
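To see what those "solid statistical models" buy you: with a genuinely random sample, the familiar margin of error falls straight out of a textbook formula. Here's a minimal Python sketch; the 52 percent figure and the 1,000-person sample are illustrative numbers of my own, not from Zukin's article:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a simple random sample: z * sqrt(p*(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 randomly selected voters showing 52% support
print(f"+/- {margin_of_error(0.52, 1000):.1%}")  # about +/- 3.1%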

But over the last ten years most of us have abandoned landlines in favor of cellphones; sixty percent of us now use cellphones as our primary means of communication. This poses a problem for pollsters because cellphone numbers are not listed in phone books the way landline numbers used to be, and the FCC has interpreted consumer protection law to prohibit calling cellphones with automatic dialers, which hand calls off to live interviewers only after someone answers. That makes polling much more expensive: "Dialing manually for cellphones takes a great deal of paid interviewer time, and pollsters also compensate cellphone respondents with as much as $10 for their lost minutes," says Zukin.

We're also getting tired of opinion researchers:

Zukin:
When I first started doing telephone surveys in New Jersey in the late 1970s, we considered an 80 percent response rate acceptable, and even then we worried if the 20 percent we missed were different in attitudes and behaviors than the 80 percent we got. Enter answering machines and other technologies. By 1997, Pew’s response rate was 36 percent, and the decline has accelerated. By 2014 the response rate had fallen to 8 percent. As Nate Silver of fivethirtyeight.com recently observed, “The problem is simple but daunting. The foundation of opinion research has historically been the ability to draw a random sample of the population. That’s become much harder to do.”
As a result of these challenges, many pollsters have turned to the internet. But online polling is not nearly as reliable as the old methods. For one thing, there is an inverse relationship between internet use and likelihood of voting: 97 percent of 18- to 29-year-olds use the internet, but they tend not to vote; older people are much more likely to vote, but only 40 percent of those over 65 use the internet.

And it's all less scientific:
Almost all online election polling is done with nonprobability samples. These are largely unproven methodologically, and as a task force of the American Association for Public Opinion Research has pointed out, it is impossible to calculate a margin of error on such surveys. What they have going for them is that they are very inexpensive to do, and this has attracted a number of new survey firms to the game. We saw a lot more of them in the midterm congressional election in 2014, in Israel and in Britain, where they were heavily relied on. We will see them more still in 2016.
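To make the "no margin of error" point concrete, here's a small simulation sketch, assuming a made-up population in which older voters are less likely to be online (roughly echoing the internet-use gap mentioned above) and lean differently than younger ones. All of the specific percentages are invented for illustration; none come from Zukin's article:

```python
import math
import random

random.seed(2015)

# Made-up population of 200,000 voters. Older voters (~35% here) are assumed
# to be less likely to be online (40% vs. 97%) and more likely to support the
# candidate (65% vs. 42%). These support figures are hypothetical.
population = []
for _ in range(200_000):
    older = random.random() < 0.35
    supports = random.random() < (0.65 if older else 0.42)
    online = random.random() < (0.40 if older else 0.97)
    population.append((supports, online))

true_support = sum(s for s, _ in population) / len(population)

def poll(sample):
    """Estimated support and the textbook 95% margin of error."""
    n = len(sample)
    p = sum(s for s, _ in sample) / n
    return p, 1.96 * math.sqrt(p * (1 - p) / n)

# Probability sample: every voter has an equal chance of being picked.
random_sample = random.sample(population, 2000)

# "Opt-in" online panel: only voters who are online can ever be reached.
optin_sample = random.sample([v for v in population if v[1]], 2000)

for name, sample in [("random sample", random_sample),
                     ("online-only panel", optin_sample)]:
    p, moe = poll(sample)
    print(f"{name}: {p:.1%} +/- {moe:.1%} (true support: {true_support:.1%})")
```

In this toy setup the random sample typically lands within its stated margin of error of the true figure, while the online-only panel underrepresents older voters and so tends to miss by more than the formula suggests, no matter how large the sample gets. The formula only describes random sampling noise; it says nothing about the people who could never be reached at all, which is the point the AAPOR task force is making.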
And there is no immediate solution:
So what’s the solution for election polling? There isn’t one. Our old paradigm has broken down, and we haven’t figured out how to replace it. Political polling has gotten less accurate as a result, and it’s not going to be fixed in time for 2016. We’ll have to go through a period of experimentation to see what works, and how to better hit a moving target. .... We are less sure how to conduct good survey research now than we were four years ago, and much less than eight years ago. And don’t look for too much help in what the polling aggregation sites may be offering. They, too, have been falling further off the track of late. It’s not their fault. They are only as good as the raw material they have to work with.
For the time being, campaigns and pollsters will rely more and more on internet polling and big-data mining, which, because it is not scientific, will invariably be colored by partisan politics and will be less reliable.

Read the whole thing here.
