There is a widely held belief that views all over the world are becoming more polarised, because market research says they are. This is most obvious in politics, where people seem to be taking more and more extreme positions. But we have seen the phenomenon in qualitative and quantitative reputation research too: respondents seem more likely than they used to be to say that they have a very high opinion, or a very low opinion, of an organisation.

The picture is obscured because the world is full of market research companies doing bad market research. My favourite example: asking whether the respondent is aware of an organisation, and then asking those who have said they are unaware of it what they think of it. This falsely keeps the sample size high but produces junk results. In qualitative research, more and more consultancy companies are doing market research with no professional training or oversight (we named and shamed one on this website a few months ago; look at “1. Bad Research”), usually feeding back to clients what they think the client wants to hear.

Even without the charlatans, we seem to be more divided than ever in our opinions. New research (for the US National Bureau of Economic Research) suggests that it may just be that people now understand better how market and policy research is used and try to manipulate the process. The researchers asked subjects to answer a set of questions that had factual but politically contentious answers (e.g. since President Obama took office, how has the unemployment rate changed? How many American casualties were there in Iraq in 2007 and 2008?). Some of the subjects were given a cash reward for answering correctly; some were not. Those who knew they would be paid for the right answer (or for admitting that they did not know) were much more likely to be accurate: the gap between Republican and Democratic voters shrank by up to 80 percent in the paid group.
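To make that headline figure concrete, here is a minimal sketch of how such a partisan gap, and its reduction under payment, might be computed. The numbers are entirely hypothetical and the metric is a simplification; the real data and methodology are in the Bullock et al. paper.

```python
# Hypothetical illustration of the "partisan gap" shrinking when
# respondents are paid for accuracy. All numbers are invented.

def partisan_gap(republican_answers, democrat_answers):
    """Absolute difference between the two groups' average answers."""
    mean_r = sum(republican_answers) / len(republican_answers)
    mean_d = sum(democrat_answers) / len(democrat_answers)
    return abs(mean_r - mean_d)

# Invented answers to a factual question (e.g. change in unemployment,
# in percentage points) from each group, with and without payment.
unpaid_gap = partisan_gap([2.0, 1.5, 2.5], [-0.5, 0.0, -1.0])  # no incentive
paid_gap = partisan_gap([0.6, 0.4, 0.5], [0.0, 0.1, -0.1])     # paid for accuracy

reduction = 100 * (unpaid_gap - paid_gap) / unpaid_gap
print(f"gap shrank by {reduction:.0f}%")
```

With these made-up figures the gap shrinks by 80 percent, matching the upper end of the effect the researchers report.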

In other words, respondents often knew the right answer but chose to give a wrong one. They were, the researchers hypothesise, “cheerleading”. Respondents know how polling results are used and want to use their answers to change the actions of others. There are probably two components to this. First, respondents may think that inflating their estimate of the number of casualties in Iraq will dissuade policymakers from risking further casualties in future, because the policymakers will over-estimate the political cost of allowing them. Second, respondents may want to signal to participants in other polling that answers which serve a particular point of view are widely held by society as a whole, dragging opinion in the respondent’s direction. The researchers call all of this “expressive utility that [respondents] gain from offering partisan-friendly survey responses.”

“Extending beyond political science, our results also inform understandings of contemporary public opinion. Scholarly and popular analysts alike frequently take survey responses at face value … our results suggest that the concern should be far more widespread,” the researchers write. It should indeed. Let’s say that we (or another group of qualified specialist researchers and consultants) are asking a government official what she thinks of a multilateral organisation funded in part by her government. She may report a less favourable opinion if she thinks it will reduce the chance of more requests for extra contributions, or bias her response to be more favourable if she thinks it will signal to her own government that more support is required.

Here are four things we can all do immediately:

  • follow good practice and never reveal the identity of a client before an interview. It makes recruitment much harder but the results far more accurate. On the other hand, agreeing to reveal the client during the interview reassures the respondent that there will be an opportunity for targeted signalling at the end, thus (we think, but cannot show) reducing the risk of biased answers in the early part of the interview
  • ask a few simple but controversial factual or semi-factual questions during the warm-up; these should show you whether the respondent is using the research to signal or to cheerlead
  • insist on long-form, semi-structured interviews by subject experts in a language in which the respondent is fully comfortable. Good questions, good follow-up (off the script) and good analysis increase the chance that you will spot signalling or cheerleading, and spot real trends and ideas
  • forget forever the idea that quantitative research is inherently more reliable than good qualitative studies

If you want the painless summary of this research, listen to the five-minute interview on National Public Radio. To read the whole paper (Bullock, John G. et al., a team from Yale and UC San Diego), you will have to pay $5; the summary is free here, with a link to pay for the 100-page PDF. It is an excellent read and well worth the $5.