
Polling 2017: life beyond landlines?

by Gavin White

There’s no doubt that the decline in landline usage is creating big challenges for pollsters ahead of New Zealand's general election, and they’re dealing with it in different ways.

Unfortunately for the New Zealand public, we seem to be down to two main public polls this time (plus the occasional Roy Morgan), and it’s clear that the Newshub-Reid Research and One News Colmar Brunton polls are producing wildly different results.

Let’s remember that Colmar Brunton and Reid Research are two of the longest-running political polls in NZ (UMR being the longest) and both have good records at elections.

People have been putting the unusually big differences between those two polls down to a volatile electorate, but I think there’s more to it than that.

For a start, Colmar Brunton’s poll is still conducted entirely by landline, while Reid’s is using a hybrid telephone-online approach.  Although Colmar Brunton has stuck with landlines this time around, I’m certain that they’ll be using a mix of quotas and weights to ensure that the sample is as representative of the wider population as possible – and there’s a lot more to designing a good poll than just who you talk to.
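
To make the mechanics concrete, here’s a minimal sketch of the kind of cell weighting involved, in Python, with invented age-group figures purely for illustration. It shows the general technique, not Colmar Brunton’s actual scheme.

    import pandas as pd

    # Hypothetical sample, skewed old the way a landline frame tends to be.
    sample = pd.DataFrame({
        "age_group": ["18-34"] * 150 + ["35-54"] * 400 + ["55+"] * 450,
    })

    # Hypothetical population shares (the real thing would come from the census).
    targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

    # Weight = population share / sample share, so under-represented groups
    # (here the landline-light 18-34s) count for more in the estimates.
    sample_share = sample["age_group"].value_counts(normalize=True)
    sample["weight"] = sample["age_group"].map(lambda g: targets[g] / sample_share[g])

    print(sample.groupby("age_group")["weight"].first())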

Newshub haven’t reported this on every occasion (and they definitely should), but here’s what the report on Reid’s July poll said on the methodology:

The Newshub-Reid Research poll was conducted July 20-28.  1000 people were surveyed, 750 by telephone and 250 by internet panel.  It has a margin of error of 3.1 percent.

When I developed my poll of polls for the 2014 election, I used the differences between the election result and each company’s final poll.  I’m hesitant to do that for Reid this time because, to me, they’re using a fundamentally different methodology.  It’s not necessarily a bad methodology, it’s just a different one, and I don’t feel like I can use their average "error" from previous elections to take a view on how accurate they’re likely to be this time.
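
For what it’s worth, roughly the kind of adjustment I’m describing is simple enough to sketch. All the figures below are invented; the point is just the mechanics of averaging a firm’s past poll-minus-result errors and subtracting that bias from a current reading.

    # Invented figures: a firm's final polls for one party at two past
    # elections, and what the party actually got on the night.
    polled = [47.0, 45.5]
    actual = [47.3, 47.0]

    # Average (poll - result): positive means the firm has historically
    # over-stated this party, negative that it has under-stated it.
    house_effect = sum(p - a for p, a in zip(polled, actual)) / len(polled)

    # Adjust a current reading by subtracting that historical bias.
    current_reading = 44.0
    print(round(current_reading - house_effect, 1))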

As I say, there’s nothing necessarily wrong with their hybrid methodology, but I do feel that we need to know more about it before we can judge their results.

A little diversion for a moment.  In Australia, CATI (telephone) polls are very nearly dead, with online polls and robopolls taking over.  All the main political polls are now conducted online, and they performed very well at the 2016 federal election.

I think it’s inevitable that New Zealand political polls will eventually go the same way, but there are significant challenges to that.  The standard objection to landline-based polls is that not everyone has a landline any more – but the same applies to online polls.

Online surveys depend on online panels, and even where everyone has access to the internet, not everyone will be on an online survey panel. That seems to work well enough in Australia, where the major panel providers hold huge databases of people, but in New Zealand the panels don’t seem to be as big or as representative.

The other thing to remember is that in a telephone survey the participant hears the question, whereas in an online survey they see it. That might not seem to matter, but in political polling it seems to be a big deal. In a telephone poll, a respondent will typically be asked "what party would you vote for if an election were held today?" and have to answer off the top of their head, but in an online survey they have to be presented with a list of parties.

You’d think that because we actually vote on paper the online approach would be closer to the real experience, but for whatever reason phone polls have tended to produce more credible results. That could be down to the MMP system: the major parties, which are usually at the top of people’s minds, also tend to field electorate candidates, and therefore appear nearer the top of the voting paper.

Because of the differences between the two formats, I’m generally reluctant to combine the two.  There are situations, however, where a top-up sample using a different methodology can be useful.  In a recent project in Australia, for example, I conducted the main survey online and then had a top-up CATI survey to reach those who were uncomfortable with doing things online (and were therefore unlikely to be on an online survey panel). 

Crucially, however, I didn’t feel that I could combine the results of the two surveys together, because I didn’t know enough about the "not comfortable with doing things online" population (i.e. their census statistics) to work out what weights to use for each survey.  I presented the results separately, and let people draw their own comparisons between the two.

Similarly, if I were designing a hybrid approach for a New Zealand political poll and was going to keep the phone poll as the main one, I’d use the online survey to target those who don’t have landlines. It’d be easier to weight the two results together than in the Australian example I mentioned above, because there is census data on landline use, but even then I think there’d be significant challenges in getting a representative picture from the combined sample.
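
As a rough illustration of that blending, here’s a sketch with invented numbers; the 75/25 landline split below is an assumption for the example, not a real census figure.

    # Assumed census split between landline and no-landline adults -- the
    # 75/25 figure is illustrative only, not a real statistic.
    LANDLINE_SHARE = 0.75
    NO_LANDLINE_SHARE = 0.25

    # Party support observed in each sub-sample (again, invented numbers).
    phone_estimate = 46.0    # landline respondents, phone survey
    online_estimate = 38.0   # no-landline respondents, online top-up

    # Weight each sub-sample to its population share before combining.
    blended = LANDLINE_SHARE * phone_estimate + NO_LANDLINE_SHARE * online_estimate
    print(f"Blended estimate: {blended:.1f}%")   # 44.0%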

I’d like to know more about whether Reid have used an approach like that, and whether they are seeing any differences between their phone and online samples.  Certainly, I think the reporting needs to acknowledge that it is a new methodology, and that phone and online surveys are different.

Oh, and I’d like Newshub to stop reporting their poll results to one decimal place.  It’s a ridiculous thing to do – in a survey of n=1000 people, 0.1% is one person.  Polls have margins of error far greater than that, so they’re claiming accuracy they simply don’t have.
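
For reference, the 3.1 percent Newshub quotes is the standard maximum margin of error at 95 percent confidence for a sample of 1,000, and a couple of lines of Python confirm it.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Maximum margin of error at 95% confidence for a simple random sample.
        return z * math.sqrt(p * (1 - p) / n)

    print(f"{margin_of_error(1000):.1%}")   # 3.1%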

Note: Gavin White has previously worked for UMR New Zealand, but now lives in Australia, where he does some work for UMR Australia, a separate company. He no longer sees UMR NZ's polling data and the views he expresses here are his and not those of UMR NZ.
