I started my career many years ago in financial journalism. Journalists have long been considered one of the least trusted professions, on par with used car salesmen and lawyers – who always seem to get a bad rap, despite the fact that most of them, in my experience, are genuinely trying to get the best outcome for their clients.
When I decided to change career path a decade ago, I felt nostalgic about the industry I was leaving, but I was pleased to be joining a profession generally regarded not only as professional and trustworthy, but as playing a key role in improving the customer experience and helping companies make data-led decisions.
The job of a researcher involves finding out what people want and what they don’t want, how they think, and often, how they’re likely to behave in a given situation.
Which brings me to the recent Federal Election.
We all read the political polls that told us Labor would win the Federal Election, and we all know those polls got it wrong.
So is this the death knell for the market research industry? Does it prove that research can’t be trusted?
First, let’s talk about Clive
I read a post on LinkedIn recently questioning the value of advertising, given that the $60m Clive Palmer spent on advertising failed to win him a seat in parliament. Is this a reflection on advertising, the post asked, or were Clive and what he was offering so unpalatable to consumers that the campaign was doomed from the start?
I’d throw a third option into the mix: he wasn’t considered trustworthy. Consumers were sceptical about good old Clive’s promise to “Make Australia great again”.
I’m not sure any amount of money thrown at the media would have convinced Australians to buy what he was selling. However, there are much more scientific ways to analyse the reasons behind the campaign’s failure (if indeed success is defined as winning a seat, and not taking votes away from Labor).
But that’s not what I’m here to do. I’m here to talk to you about the polls.
It’s never a good idea to take a narrow view on a problem and make a sweeping generalisation, although this seems to be all the rage on social media these days. Print advertising is dead! Content marketing is dead! Another day, another marketing method that’s supposedly biting the dust because… well, because someone woke up and decided to call it.
Yet there’s no doubt the failure of many respected research houses to accurately predict the winner is an indictment of the research industry at large.
This is not the first time the polls have been wrong, and it won’t be the last. In fact, forecasting has always been as much art as science.
Think back to the global financial crisis, and all the economists who failed to see it coming. Or the 2016 US election, where the polls predicted a Clinton victory. Or the Brexit vote, in which Remainers were considered the majority, until they weren’t.
If we knew what was going to happen tomorrow, we’d all be a little bit wealthier and a lot less curious.
The problem with sampling bias
But if there’s one lesson to take from the polling embarrassment, it’s that sampling bias is a problem that is getting harder to solve.
Historically, the market research industry relied heavily on telephone interviews – what’s known as CATI, or Computer-Assisted Telephone Interviewing – to conduct research.
There have always been inherent problems with CATI, due to interviewer bias: the potential for the person conducting the interview to influence the respondent, whether intentionally or not, and so distort the outcome of the interview.
Even the best-trained researcher, who knows how to ensure objectivity is retained in the way questions are designed and phrased, cannot control for the tendency of some respondents to want to ‘please’ the interviewer – or to give a socially desirable answer.
While online surveys are not perfect either, removing a person from the equation does offer respondents a safe, anonymous(ish) forum in which to be open and honest about how they really feel, without fear of reprisal. These days, CATI is mostly reserved for those who are hard to reach online, such as vulnerable or older members of society.
I’m not privy to the exact methodology used by the various research groups that undertook political polls, but I assume they were heavily reliant on telephone and face-to-face interview methods within selected electorates.
Further exacerbating polling companies’ difficulty in getting a reliable, representative sample is the widespread use of mobile phones, which makes it harder to ensure you’re reaching the person you intended to reach, in the electorate you’re canvassing.
As one well-known research company admitted when they decided to abandon a telephone omnibus survey they’d been conducting for more than 25 years, the falling proportion of households with a landline connection is a problem for CATI.
“The very high and growing proportion of young people in particular eschewing landline connections has meant that it is both very expensive and progressively less reliable to effectively sample younger adults,” the company said.
Were the pollsters introducing their own bias by following the herd, or finding themselves subject to groupthink, as one theory suggests? If the polls were skewed, the bias was most likely introduced unintentionally, through subtle, subconscious decisions made during analysis.
Because deliberately fudging the numbers is not a great way for a research company to stay in business.
Where to from here?
I admit to having a vested interest in the continued success of the market research industry. But even if I didn’t, it’s hard to argue that good research, done well, doesn’t add value.
The key is in the representativeness of your sample.
The best way to tackle sampling depends on your objective; if you want to test a new type of meat product, for example, you’re not going to gain much from finding out what a bunch of vegans think.
If you are interested in the views of all Australians, however, then a large and representative sample – ideally weighted to the Australian population according to the ABS (Australian Bureau of Statistics) – should get you a statistically reliable result within a narrow margin of error.
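To make the weighting idea concrete, here’s a minimal sketch of post-stratification, the simplest form of the technique. Every figure below is invented for illustration; none of it is real ABS data.

```python
# A minimal sketch of post-stratification weighting.
# All shares below are invented for illustration, not real ABS figures.

# Target population shares, e.g. from census data
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Shares actually achieved in the sample (skewed older,
# as phone samples often are)
sample_share = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Each respondent's weight scales their group back to its population share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical "yes" rate within each group
yes_rate = {"18-34": 0.60, "35-54": 0.50, "55+": 0.40}

raw = sum(sample_share[g] * yes_rate[g] for g in yes_rate)
weighted = sum(sample_share[g] * weights[g] * yes_rate[g] for g in yes_rate)

print(f"Unweighted estimate: {raw:.1%}")       # 46.5% - skewed by the older sample
print(f"Weighted estimate:   {weighted:.1%}")  # 49.5% - matches the population mix
```

In practice, research houses weight across several dimensions at once (age, gender, location and so on), but the principle is the same: up-weight the groups your sample under-represents and down-weight the ones it over-represents.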
How can you achieve that? Random sampling is important, because it reduces the potential for bias by giving everyone the same chance of being surveyed; so is casting a wide net.
You can’t control who chooses to answer the email they’ve received – incentives can play a part here – and in a democratic society, people are free to answer or not. Random sampling is therefore the closest you’ll get to removing sampling error.
It may be tempting to sample fewer people to keep your costs low – and this might be sufficient if your objective is just to get a directional read on the market. But if you are making large dollar investments or important strategic decisions on the back of the results, skimping on sample size is not recommended.
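To put rough numbers on that trade-off, here’s a back-of-envelope calculation using the textbook margin-of-error formula for a simple random sample, assuming the worst case (a 50/50 split) and a 95% confidence level:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 2500):
    print(f"n = {n:>4}: ±{margin_of_error(n):.1%}")

# n =  100: ±9.8%
# n =  400: ±4.9%
# n = 1000: ±3.1%
# n = 2500: ±2.0%
```

Note the diminishing returns: halving the margin of error means quadrupling the sample size, which is why a directional read comes cheap and real precision doesn’t.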
Data-led decisions will always prevail over decisions made on gut instinct. So while the election polling is a timely reminder for all of us in the industry to reflect on how we can strive to do better, it’s not a reason to stop asking your customers what they want, or market testing your concepts.
Trust me.