Updated on the 27th of June 2016 — this is a re-edited version of a post we originally put on the site in December 2015

 

I lied to the company that runs my gym recently. I like the gym and, although I have limited contact with them, the managers seem competent and nice enough. They sent me a market research survey and my lies on it are a symptom of the protest culture that is now dominating politics, consumer behaviour and many other aspects of our public lives. That protest culture is also why quantitative market research is becoming increasingly unreliable.

This unreliability is having disastrous effects. For example, look at the United Kingdom’s recent vote to leave the European Union. As of today, four million people have signed an online petition asking for a second referendum. Anecdotal media reports suggest that many of the four million voted to leave on Thursday the 23rd. As one told a TV channel on Friday the 24th, “I voted to leave but I never thought we actually would. It was a protest vote.”

 

We all want to manipulate market research

The gym sent me a questionnaire about the music it plays while I work out. Did I think it was too loud or not loud enough? How strongly did I feel about it? Would it affect my decision to renew or whether I recommended the gym to my friends? The truth is that the music is fine: quiet enough that I can block it out by using earphones and unremarkable enough that I haven’t ever really noticed it. My answer was that it was far too loud and that I’d be much less likely to renew or recommend unless it got a lot quieter.

Why did I lie? Because I knew how the data would be used. They would average my answer with everyone else’s. There will be members who want the music much louder; I needed to offset their vote and the way to do that was not by giving a wishy-washy score of 2 or 3 on a scale of 0 to 5 but by saying that I was a zero and likely to career dangerously into minus territory if One Direction were even one decibel louder.
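
To see why the exaggeration works, here is a trivial sketch of the arithmetic; the other members’ scores are invented for illustration, and the point is simply that one extreme answer drags an average much further than an honest, middling one.

```python
# Invented 0-5 scores from other gym members (0 = far too loud, 5 = not loud enough)
other_members = [3, 4, 2, 5, 4, 3]

def average_with(my_score):
    """Average of everyone's answers once mine is added in."""
    scores = other_members + [my_score]
    return sum(scores) / len(scores)

print(f"Honest, middling answer (3):  {average_with(3):.2f}")
print(f"Strategic extreme answer (0): {average_with(0):.2f}")
```

Averaged in with everyone else’s, a 3 barely moves the needle; a 0 does.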

Whether my gym plays boy bands at medium or medium-high volume doesn’t matter much, even to me, but the same behaviour is distorting quantitative market research on vital political and commercial issues.

In the days leading up to the Brexit vote, virtually all polls showed a narrow majority for those wanting to remain in the European Union (the “Remain” side). A week earlier, the Leave side had been in the lead. The shift was attributed to the horrendous assassination of a popular, pro-Remain politician and to a natural British preference for the status quo. What’s more, polling experts said, as the day of the vote drew closer, more and more voters would drift back to the safety of a Remain vote.

It seems as if some of the protest voters thought that their protest would not be loud enough. But the calibration was wrong: so many protested that the UK actually voted to leave the EU. The voters — egged on by 24 hour news channels and reports on social media — meant to send a shout of anger; they ended up doing terrible damage to the economy.

Buyers’ remorse does not seem limited to voters. Far from looking jubilant, the mainstream politicians who had led the Leave campaign appeared at a press conference looking devastated. “He thought what all those reluctant Brexiters thought: it would be a vote for remain, he would be seen as having stood up for a principle,” said a Conservative colleague of Boris Johnson, the most prominent of the Leave leaders. “After which Leave’s newest martyr could simply have bided his time for a year or so before being triumphantly installed in Downing Street,” added The Guardian.

 

Political polling’s public humiliation

It is in politics that the pollsters have embarrassed themselves most with high-profile missteps. Look at the polls in the US Presidential election in 2012 (most pollsters forecast a close contest between Obama and Romney, but Romney was beaten decisively). One forecaster, Nate Silver, did get it mostly right, and he went on to become a very well-paid national legend (his fivethirtyeight.com site was bought by ESPN, which now applies Silver’s statistical methods to sports).

 

However, even Silver was wrong about national elections in Israel and the UK. Silver was so embarrassingly wrong in the UK that the BBC has taken down the Panorama programme it made based on his predictions. In Israel, the exit polls were completely off track too, although the British market research companies got those numbers right. In November 2015, the forecasters said that Bihar, the second-largest state in India, was on track to get a government formed by the BJP, the party which rules India nationally; in fact, the BJP was thrashed. Even the exit polls were horrendously inaccurate: one forecast a 16-point lead for the BJP when, in fact, the BJP came in about 10 points behind the winning alliance. Since then, in June 2016, Indian polling companies were again spectacularly wrong in predicting the Tamil Nadu election and in exit polling on what voters had actually done: a poll of exit polls suggested that the incumbent Chief Minister Jayalalithaa would finish 40 seats behind her biggest rivals. In fact, she stayed in office and finished almost 40 seats ahead of the rival DMK.

 

Commercial implications

The commercial case studies are harder to find because management doesn’t like admitting how wrong its decisions were. I’ve seen lots of examples, though. There was, for example, the internationally-famous research organisation that solemnly assured a client that the average Spanish doctor spent two hours a day looking at medical news on the Internet. Of course, the doctors who actually responded to their survey may well have spent two hours a day surfing: they were the ones happy to take €100 for an hour’s interview. Most Spanish doctors weren’t interested in doing the interviews as they had real patients to see and fees to collect.

 

Why it’s happening

This issue of sampling is the most common explanation for why quantitative research is becoming more unreliable. A lengthy and wide-ranging New Yorker article by Jill Lepore in November 2015 centred on the problem and its impact on democracy. In an accompanying podcast, Lepore said that fewer than one in ten Americans now agrees to be interviewed by phone or in person. She blames, in part, a US law which bars companies from calling numbers at random. However, the same problems exist in the UK, which does allow random dialling. In fact, they’re worse: ICM has to ring 20,000 phone numbers in order to get 1,000 responses, a response rate of 5 percent. Internet respondents are, of course, self-selecting.

 

A minority view is that polls are affecting political behaviour. FTI Consulting, for example, says that the prospect of an indecisive result in the 2015 British elections so spooked voters that they decided through some mysterious telepathic process to coalesce around the Conservatives. This is a convenient view because it means that the commercial findings, from which market research companies make real money, would still be reliable: typically, only the client sees those findings so they would not affect respondent behaviour. The British Polling Council, however, does not like this explanation and says that there is no evidence of a late swing (they also fear that this thinking might lead to new legislative curbs on political polling activity). This, though, is exactly what seems to have happened in last week’s referendum vote.

 

The three big problems

We think that there are three more fundamental problems. We have written about the first before. It’s the political aspect of my behaviour in the gym poll: respondents know how polling results are used and they want to use their answers to change the actions of others. Researchers at the US National Bureau of Economic Research call this “cheerleading” in an excellent 2013 paper. The cheerleading, applied to real voting, is what led to the unexpected Leave vote.

 

The second problem is that quantitative market research is not nearly as scientific as clients think it is. Take, for example, the demographic weighting used by virtually every market researcher, in which raw responses are scaled up or down so that the sample looks like the target population. In reality, the process is often a lot messier. In the British election débâcle, one polling company admitted that it had chosen to ignore discordant data. All said that they would think again about the sequence of questions (which, inadvertently or deliberately, may lead many respondents to an answer) and about how they made value judgements about how much weight to put on each response.
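
As a rough illustration of the judgement calls involved, here is a minimal sketch of demographic re-weighting of the kind described above; the age bands, sample counts, population shares and scores are all invented for the example rather than taken from any real survey.

```python
# Minimal sketch of demographic (post-stratification) weighting.
# The groups, counts and shares below are invented purely to show the mechanics.

# Share of each age group in the target population vs. in the actual sample
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_counts    = {"18-34": 100,  "35-54": 300,  "55+": 600}   # 1,000 respondents

sample_size = sum(sample_counts.values())

# Weight = how much each respondent in a group should count so that
# the weighted sample matches the population profile.
weights = {
    group: population_share[group] / (count / sample_size)
    for group, count in sample_counts.items()
}

# Invented mean scores per group on a 0-5 "will you renew?" question.
mean_score = {"18-34": 4.2, "35-54": 3.1, "55+": 2.0}

raw_average = sum(mean_score[g] * sample_counts[g] for g in sample_counts) / sample_size
weighted_average = sum(
    mean_score[g] * sample_counts[g] * weights[g] for g in sample_counts
) / sample_size

print(f"Raw average:      {raw_average:.2f}")
print(f"Weighted average: {weighted_average:.2f}")
```

Every choice in there (which categories to weight on, which population figures to trust, what to do when a group is barely represented) nudges the headline number, and that is before anyone has decided to ignore discordant data.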

 

The third problem is more evident in commercial research: pollsters ask dull, repetitive questions. Most respondents give up. Those who are particularly obliging, or a bit mad, plough on but start to give answers at random. The worst example of this is discrete choice modelling, in which respondents are forced to choose between pairs of statements in a mind-numbing fashion until only two or four statements are left in the mix. British Airways once ended up asking me whether I thought in-flight service or flight safety was more important. (I picked service, on the grounds that flying with any airline is so safe that you can afford to ignore safety in choosing between them, but BA probably never guessed at my rationale, and the survey crashed at that point, much to the consternation of the nice young woman administering it.)
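
For readers who have not sat through one, here is a crude sketch of the forced pairwise-choice exercise described above, with a pretend respondent who starts answering at random once fatigue sets in; the attribute list, the fatigue threshold and the elimination rule are all invented for illustration, and real discrete choice designs are far more elaborate.

```python
import random

# Invented attributes an airline survey might force respondents to trade off.
attributes = [
    "in-flight service", "flight safety", "legroom", "ticket price",
    "punctuality", "baggage allowance", "loyalty points", "lounge access",
]

def fatigued_respondent(a, b, question_number):
    """Pretend respondent: answers thoughtfully at first, then starts
    picking at random once the questions become mind-numbing."""
    if question_number > 4:
        return random.choice([a, b])
    # 'Thoughtful' answer: attributes earlier in the list matter more to them.
    return min(a, b, key=attributes.index)

# Crude elimination: keep pairing attributes and dropping the loser
# until only two statements are left in the mix.
remaining = list(attributes)
question_number = 0
while len(remaining) > 2:
    a, b = random.sample(remaining, 2)
    question_number += 1
    winner = fatigued_respondent(a, b, question_number)
    remaining.remove(b if winner == a else a)

print(f"Asked {question_number} forced choices; 'most important': {remaining}")
```

The uncomfortable part is that the final two “most important” attributes can easily be decided by the answers given after the respondent stopped caring.
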
The real answer lies with clients. They need to remember that quantitative research is not a substitute for exploring the questions thoroughly in sophisticated and well-structured qualitative work. And they need to remember that respondents like me lie.
