Election 2015: A wake-up call for research

In the UK general election, the polls had a shocker. In the last month of the campaign, well over a hundred polls predicted a hung parliament. Every so often, individual polls would show a clear leader, but come the next survey they would collapse back into the average. The possibility of a Conservative majority was so slim as to be virtually eliminated from most of the predictive models. Yet that’s what we got.

As a result, it's not just polling that's in the dock but the whole market research industry. That's because polling gave birth to the industry in 1920s America and has always been regarded as the "gold standard" of market research. If, as we saw last Friday, the polls can be so wrong, then so can the vast $40bn-worth of commercial market research bought every year.

The UK election result made polling look like a national joke, but the question the world's largest buyers of market research should be asking is this: did people really "defy the polls", or did polling fail to predict the people?

The industry has had a week to think about it and the excuses have begun piling up. And a theme has emerged. While Labour supporters and pollsters probably aren’t on speaking terms right now, on one thing a lot of them can agree: it was all the fault of those awful voters. Don’t blame the polls, blame the people. The explanations are full of shy Tories and late swingers, which makes the election sound like a particularly dreadful sex party.

But those pollsters who’ve dug into their data and re-contacted voters have found no evidence of a mass last-minute change of behaviour or of lying respondents. Not only were the polls wrong, they were probably wrong all the way through the campaign.

It would hardly be the first time polls have failed to predict election results. The world of political polling is one which tries to measure our deliberative, considered side – what Nobel-winning psychologist Daniel Kahneman calls our ‘System 2’ brain. Pollsters ask people to think about whether they will vote, how they will vote, and then ask them why they will (or did) vote that way. But as behavioural science has shown, our decisions are driven first and foremost by our fast, instinctive ‘System 1’ brain, reliant on experience and emotional impressions. Increasing the accuracy of election predictions will rely on switching from measuring the deliberative and considered to measuring the instinctive and emotional.

The best inspiration for this new, instinctive approach is the Iowa Electronic Market. The IEM is an academic experiment that has been running for over 25 years, covering more than 700 elections, and it has proved more accurate than the most accurate poll three-quarters of the time. Given its success, you'd be forgiven for asking why it hasn't been universally adopted as the best way of predicting elections. The probable answer is that it breaks every golden rule of current polling and defies what industry practitioners believe.
The way it works is this: instead of a nationally representative sample of voters being asked carefully worded questions about their own voting intentions, the IEM uses a crowd of 500 non-representative participants who use their gut instinct to buy and sell shares in whoever they believe most people will vote for. It turns out we're better at predicting what other people will do than we are at accurately reporting our own future behaviour. Admittedly, that's a challenging finding, but the IEM's poll-busting accuracy speaks for itself.
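To make the mechanics concrete, here is a minimal sketch of such a market in Python. Two loud caveats: the IEM itself is a real-money double auction between human traders, whereas this sketch uses Hanson's logarithmic market scoring rule (a standard automated-market-maker design, not the IEM's actual mechanism), and the outcome names, liquidity parameter and trade sizes are purely illustrative.

```python
import math

class LMSRMarket:
    """Toy automated prediction market using Hanson's logarithmic
    market scoring rule (LMSR). Illustrative only: the real IEM is a
    double auction, but LMSR is the simplest mechanism that shows how
    individual trades move a shared, crowd-implied probability."""

    def __init__(self, outcomes, b=100.0):
        self.b = b                                # liquidity: higher = prices move less per trade
        self.shares = {o: 0.0 for o in outcomes}  # shares sold so far per outcome

    def _cost(self):
        return self.b * math.log(sum(math.exp(q / self.b)
                                     for q in self.shares.values()))

    def price(self, outcome):
        """Current share price, readable as the crowd's implied
        probability that `outcome` wins."""
        total = sum(math.exp(q / self.b) for q in self.shares.values())
        return math.exp(self.shares[outcome] / self.b) / total

    def buy(self, outcome, n):
        """Buy n shares of `outcome`; returns what the trade costs."""
        before = self._cost()
        self.shares[outcome] += n
        return self._cost() - before

# A trader whose gut says "Conservative majority" buys CON shares,
# which raises CON's implied probability for everyone who trades next.
market = LMSRMarket(["CON", "LAB", "OTHER"])
print({o: round(market.price(o), 3) for o in market.shares})  # starts at 1/3 each
market.buy("CON", 50)
print({o: round(market.price(o), 3) for o in market.shares})  # CON now priced higher
```

The point the sketch makes is the one in the paragraph above: participants express gut instinct by putting money on shares, and the resulting price is a continuously updated crowd probability rather than anyone's self-reported intention.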

If you’re a brand, not a political party, why do you need to worry about this stuff?

Because in commercial research, the equivalent of the deliberate, considered 'voting intention' is 'purchase intention'. It seems plausible, even sensible, to assume that someone saying they're likely or highly likely to purchase a new product in market research is more likely to buy the product in-market. Unfortunately, just as polling results can fail to materialise on election day, so purchase intention turns out to be a poor predictor of in-market behaviour. As research giant TNS concedes in its piece "The Trouble with Trackers", "we've known for decades that at face value, answers to the question correlate poorly with what individual people actually do". And a study by marketing expert Joel Rubinson concluded that of the people who say they "definitely will buy" a new product, the proportion that actually do is a feeble three out of 10. Beyond chance, there is almost no positive correlation between purchase intent and actual behaviour.
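To see what that validation looks like in practice, here is a toy version of the exercise researchers run when they re-contact respondents: compare each person's stated intent with a flag for whether they actually bought. All the data below is synthetic; only the roughly 30% top-box conversion echoes the Rubinson figure quoted above, and the other conversion rates are invented for illustration.

```python
import random
random.seed(0)

# Hypothetical conversion rates for a 5-point purchase-intent scale.
# Only the top-box figure (~3 in 10 for "definitely will buy" = 5)
# comes from the Rubinson finding cited above; the rest are made up
# to show the shape of the problem.
CONVERSION = {1: 0.05, 2: 0.08, 3: 0.12, 4: 0.20, 5: 0.30}

panel = []
for _ in range(10_000):
    intent = random.randint(1, 5)                   # stated intent in the survey
    bought = random.random() < CONVERSION[intent]   # what happens in-market
    panel.append((intent, bought))

# Even the most enthusiastic respondents convert far below 100%,
# which is why stated intent is such a weak predictor of behaviour.
for level in range(1, 6):
    buyers = [bought for intent, bought in panel if intent == level]
    print(f"intent {level}: {sum(buyers) / len(buyers):.0%} actually bought")
```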

Worse, our own large-scale work in advertising testing shows a negative correlation between purchase intent and advertising effectiveness. Take the famous Cadbury Gorilla advert to see how this works. A guy in a gorilla suit, drumming wildly to Phil Collins' 'In The Air Tonight', says absolutely nothing directly about chocolate or Cadbury but leaves us feeling utterly joyous. When people were asked in conventional pre-testing what they thought of the advert and whether they'd buy more chocolate, the almost universal answer was, "I love the ad and it made me smile but NO, absolutely not, I won't be buying more chocolate". But people did buy more chocolate; in fact, a lot more chocolate. The campaign was extremely commercially effective, with an ROI of 175, winning Silver in the IPA Effectiveness Awards and being credited with wiping out memories of the Cadbury food scare the year before. In the case of advertising, measuring how people feel about an advert turns out to be the single best predictor of commercial success.

Seduction, it seems, trumps persuasion, and the more we feel, the more we buy.

The reason the plausible but flawed conventional questions about voting and purchase intention fail to accurately predict reality is that we're all rather unreliable witnesses to our own motivations. We're full of good intentions, ego, hopes and aspirations, so our better selves get the better of our real selves when projecting the future. It's the reason most people fall at the first hurdle of their New Year's resolutions, and why gyms make most of their money from members who never turn up to use the facilities.

Try as we might, we're often 'self-deceit machines'. In one famous study, 50% of Swedish men believed they were in the top 10% of best drivers in the country, and it's almost certain that particular male foible isn't just a Swedish phenomenon. In another experiment, a room of 100 people is asked to imagine everyone's IQ scores distributed along an index of 0-100, and each person is asked to estimate where their own IQ falls on that scale. Our tendency towards self-deceit means almost nobody rates themselves lower than 50 – after all, nobody likes to feel they're below average, despite the fact that, by definition, half the group must fall below 50.

The likely equivalent in the UK election was the inflated poll numbers for Labour, driven by an optimistic sense of our own altruism. In reality, we're not as altruistic as we'd like to think: we don't buy as many green and healthy products as we say we will, and we're more influenced by emotions and drumming gorillas than by rational argument. These tendencies are enough to make conventional approaches to polling and commercial pre-testing unreliable.

We started experimenting with prediction markets in 2005, following publication of James Surowiecki's seminal book, The Wisdom of Crowds. The book wasn't primarily focused on market research, but his Iowa Electronic Market example piqued our interest and we started exploring the predictive ability of the technique. The next year we published an award-winning paper showing that a large, diverse crowd buying and selling shares in new product ideas was more discriminating and more predictive of in-market success than asking conventional, targeted product-testing questions like purchase intent.

Just as the IEM proved more accurate than the most accurate poll, so over the next few years we showed Predictive Markets to be more accurate than existing 'gold standard' research approaches. In 2006 we used the technique to pick Leona Lewis as the X Factor winner after only a week of competition. And before the 2008 Democratic primaries in the US, where all the polls had Hillary Clinton well ahead, our Predictive Market showed her neck and neck with a lesser-known candidate, one Barack Obama.

Time and again we've shown that our ability as social animals means we're better at predicting other people than we are at predicting ourselves.

In a joint paper with Mark Earls in 2008, we labelled this transformative approach to improved prediction 'Me-to-We Research'. Since then, Predictive Markets have been used with over 2 million respondents, who have made more than 15 million trades predicting the success of over 35,000 product concepts globally.

Through all our 'Me-to-We Research' work, we've learnt some important lessons about the accuracy of Predictive Markets. In polling, we found they work best at the very start of a campaign – when the candidates are known but before the media circus gets into full swing and before all the polls start influencing and exaggerating things. In commercial new product evaluation, it's vital that participants can't see one another's predictions; otherwise herding takes over and results become exaggerated and inaccurate, as the sketch below illustrates.
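The herding point is easy to demonstrate with a stylized simulation (every number below is invented for illustration, not drawn from our studies): when each participant can see the running average of earlier answers and anchors on it, a few early outliers steer the whole crowd, whereas fully independent answers let individual errors cancel out.

```python
import random
random.seed(1)

TRUE_SHARE = 0.42   # the quantity the crowd is trying to predict (illustrative)
CROWD = 500         # crowd size, echoing the IEM's ~500 participants
NOISE = 0.15        # size of each individual's judgement error

def clamp(x):
    return min(max(x, 0.0), 1.0)

def independent_crowd():
    """Everyone predicts privately, so individual errors cancel out."""
    return [clamp(random.gauss(TRUE_SHARE, NOISE)) for _ in range(CROWD)]

def herding_crowd(anchor_weight=0.7):
    """Each participant sees the running average of earlier predictions
    and anchors on it, so a few early outliers can drag everyone."""
    predictions = []
    for _ in range(CROWD):
        own = clamp(random.gauss(TRUE_SHARE, NOISE))
        if predictions:
            visible = sum(predictions) / len(predictions)
            own = anchor_weight * visible + (1 - anchor_weight) * own
        predictions.append(own)
    return predictions

ind, herd = independent_crowd(), herding_crowd()
print(f"true value:        {TRUE_SHARE:.3f}")
print(f"independent crowd: {sum(ind) / CROWD:.3f}")   # hugs the truth
print(f"herded crowd:      {sum(herd) / CROWD:.3f}")  # hostage to early answers
```

This is the design rationale for keeping traders blind to one another's positions: independence is what makes the crowd's average worth listening to.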

We know that how people feel about a brand or an advert is the key factor in how efficient and effective it is. The same should apply to political parties and politicians.

So we’d recommend junking traditional polls – in the same way you should junk the bulk of your traditional trackers – and only measuring feelings. Monitor the emotional response to parties and politicians, and – just as important – the implicit links people feel between them and the key issues. Labour might lead in the polls on the NHS, for instance. But implicit testing – which can cut through our ‘better selves’ and get at real motivation – might reveal that the issue just isn’t as big a deal as people want to admit.

Closer to the election, we can run a Predictive Market as soon as the campaign kicks off – the point at which it’s most effective and predictive. And if you want to ask about people voting, for goodness’ sake ask about other people!

The paradox of successful innovation is that you need failure to achieve it, so we think there’s hope. The 2015 UK election failure is embarrassing, but it may also be the shock that transforms and actually expands our $40bn industry. This is the moment to acknowledge the inaccuracy of current research methods and gain the greater predictive accuracy offered by the techniques we have pioneered for measuring instinct, intuition and emotion.
