Dodgy online polls seem to be becoming a staple of political reporting here in New Zealand. The story earlier this week on poll “hacking” sent me in search of an insightful opinion I’d heard previously.
So today, we have a guest author on The Standard – BPGP, on “Pretend polling”.
So for the second time this year a newspaper publisher has made a story out of the fictions that are their online poll results. The first was a piece earlier this year in the Dominion Post alleging Parliamentary staff had skewed their online preferred PM poll to show Clark was ahead of Key. And now we have the Herald shrieking about how a teenaged hacker skewed their online poll to show that readers didn’t think New Zealand was “becoming a less free and democratic country”, despite the Herald’s concerted efforts to convince its readers we’re heading for a Stalinist dystopia.
Both these stories demonstrate just how bogus online polls really are, and it makes me doubt their creators have ever heard of the terms “validity” or “sampling”. It is interesting though, that despite most online newspaper polls being regularly quoted as if they were the gospel truth (at least when their outcomes show opinions advantageous to their publishers’ editorial interests), these two polls are miraculously exposed as skewed.
Sometimes publishers are generous enough to mention that the polls “aren’t scientific” – a discreet way of saying they have no validity. But invariably the next sentence will go on to quote the results as if they do have meaning that can be generalized to the greater population. After all, why would you bother to run a poll if it was actually completely meaningless?
Well, there are several reasons, but few of them to do with illuminating public opinion.
One is that online polls are useful for giving the impression that publishers interact with their audiences and are genuinely interested in listening. You might say that polls help media outlets to gauge the opinion of their audiences, but in reality they aren’t so foolish as to alter their editorial stances on the basis of such flawed data (papers like the Herald use a more sound, random sampling procedure to invite clickers to participate in a Readers’ Survey for that). Online polls are also a useful gimmick for keeping up online readers’ interest and so contributing to the maintenance of hit rates.
But the main reason for using online polling is to create an impression of popular support for the outlet’s editorial stance. If you were really cynical you might think that poll commissioners just pump the data in whichever way they want it, but in reality it’s quite unnecessary to engage in such risky manipulation. There are other ways. Like priming readers with skewed reporting on the issue, such that so long as they rely on that outlet for their information on a particular issue, they are bound to come to similar conclusions as those promoted by the outlet’s reporting.
Another way is to use priming questions. The Section 59 “debate” was a gold-plated example, asking questions like “Should parents have the right to discipline their children?” when in fact the bill had nothing to do with “rights to discipline”. And then there’s misrepresenting what the poll question actually meant, as in saying that because a majority understandably agree that “parents have the right to discipline their children”, they are therefore opposed to Section 59. Yet another way is to massage the response options. Typically, very complex questions are forced into Yes/No dichotomies, when respondents may in fact have more moderate or complicated responses.
Similarly, you can alter a result just by the number of response options you allow. For example, you could ask “Do you support the EFB – Yes or No?” and get 60% for and 40% against. But if you ask “Do you support the EFB – Yes, No, or Yes but with some changes?” you might get 30% Yes, 40% No, and 30% Yes but with some changes. These results, of course, would then be reported as “70 per cent opposed to EFB in current form”, when the same question with just a Yes/No format would have provided an unhelpful result (for the Herald at least) showing majority support for the EFB.
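To make the arithmetic of that trick concrete, here is a toy sketch using the hypothetical figures above (they are illustrative numbers from this post, not real poll data):

```python
# Toy illustration of the response-option effect described above.
# All percentages are the hypothetical figures from the text, not real poll data.

# A plain Yes/No question shows majority support:
binary = {"Yes": 60, "No": 40}

# Splitting "Yes" into two options divides the supportive vote...
three_way = {"Yes": 30, "No": 40, "Yes, with changes": 30}

# ...which lets a headline writer lump the qualified supporters
# in with the outright opponents:
opposed_in_current_form = three_way["No"] + three_way["Yes, with changes"]
print(f"'Opposed to EFB in current form': {opposed_in_current_form}%")  # 70%

# Even though outright plus qualified support is still a majority:
support_some_form = three_way["Yes"] + three_way["Yes, with changes"]
print(f"Support the EFB in some form: {support_some_form}%")  # 60%
```

The same middle group gets counted on whichever side of the dichotomy suits the headline.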
The Herald and Dominion Post articles point to another, user-driven form of manipulating the results – multiple voting. No matter what newspapers claim about security measures to stop such behaviour, they will always be vulnerable to it. Cookies and IP tracking to monitor voting behaviour can easily be circumvented by flushing cookies between votes and using freeware IP-masking software. A little bit of script to automate the vote-flush-re-vote process makes thousands of votes just a click or two away. The only reason the Herald poll hacker got caught was his naivete and eagerness. Had he slowed his program’s voting rate to a more naturalistic level and used IP masking, his votes would have been indistinguishable from genuine ones. And like I said, pollsters can’t stop this from happening – it’s just not technically possible. That’s why, unlike the Herald and Dominion Post, pollsters interested in real data from online polls use password-controlled logins.
Even if it were possible to preclude multiple voting in online polls, they would still have enormous problems with their generalizability (the extent to which results can be assumed to be representative of a population) because of the nature of their self-selecting samples. At best an online poll might represent the views of a website’s readership, but most websites’ poll results aren’t even representative of the site’s readership, unless all readers are equally likely to participate in the poll. In truth this isn’t so, because those who do respond tend to be those who have a particular interest in the issue – ambivalent, busy or poll-savvy readers tend to ignore them. In the electronic age it’s also easy for pressure groups to group-email their supporters a link to the poll, allowing mobilized interest groups to attack an online poll with just a few clicks’ effort.
So despite the fact that online polls aren’t often even representative of the site readership’s views, if the results are desirable to the website’s interests they will be purported to represent “everyone”. That becomes really problematic because it assumes that “everyone” has an equal probability of participating in the online poll, even if they’ve never heard of the site, even if they’ve never heard of the issue being polled, even if they have no online access! Real polls of the entire population go to great lengths to make sure opinions are sampled from representative proportions of the demographics that comprise the entire country, in sufficient numbers to keep margins of error small. Unlike the Herald and Dominion Post, they don’t use self-selecting samples – that is, respondents aren’t those who actively seek to participate, rather they are selected by the researcher either randomly or as part of a stratified sample ensuring representative proportions of youth, elderly, North and South Islanders, Pakeha and Maori, male and female, rural and urban, etc.
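For readers who like to see the mechanics, here is a minimal sketch of what a stratified sample looks like, in contrast to the self-selecting kind. The strata and their shares are invented purely for illustration; a real pollster would take them from census data:

```python
import random

random.seed(0)  # for a repeatable illustration

# Invented population shares for four illustrative strata (not real figures).
shares = {
    "north_island_urban": 0.55,
    "north_island_rural": 0.20,
    "south_island_urban": 0.17,
    "south_island_rural": 0.08,
}

# A toy sampling frame: 1,000 potential respondents per stratum.
frame = {stratum: [f"{stratum}_{i}" for i in range(1000)] for stratum in shares}

def stratified_sample(frame, shares, n):
    """Draw n respondents so each stratum appears in its population share.
    Crucially, individuals are chosen at random by the researcher,
    not self-selected by clicking on a poll."""
    sample = []
    for stratum, share in shares.items():
        sample.extend(random.sample(frame[stratum], round(n * share)))
    return sample

sample = stratified_sample(frame, shares, 1000)
```

The point of the design is that each group’s weight in the result is fixed by its share of the population, not by how motivated its members happen to be to vote.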
Incidentally, the main problem with the accuracy of most commercial polling today, apart from priming questions, is the obsolete sampling technique of 9am to 9pm landline telephone calling, which ‘invisibilizes’ the opinions of all those without landlines (up to 67% of households in the poorest electorates), those who rely on cell phones as their principal communication channel, those who work night shifts, and those who move frequently such that their numbers are not listed – specifically, the poor and the young. Conversely, the elderly, self-employed and less transient home-owners, those who are readily contactable by such sampling methods with long-established landlines, are over-sampled. Still wondering why commercial polling tends to favour conservatives?
But back to the current fiasco of newspapers’ online polls. Doesn’t it make you wonder how many other polls might be so skewed but were never reported as such because they ‘demonstrated’ what the publisher wanted to promote? Now that the Herald and Dominion Post have acknowledged these serious weaknesses in the validity of their polls, will we see such gallant eschewal of future online poll results by their own commissioners, no matter how attractive their outcomes might be to those who published them? Will we see an improvement in the methodological integrity of polling generally? Will we see a public education campaign by news outlets on how any poll (not just the highly-vulnerable online poll) can be manipulated? I expect if the Herald or Dominion Post did a survey to see if the public had an appetite for answers to these questions, the result would be No.
Methodologically dubious polling, in terms of both priming and sampling, was epidemic in the lead up to the 2005 election and is a far greater threat to democracy than any teenage hacker able to thwart the efforts of media conglomerates like APN and Fairfax. Bogus polls are anti-democratic because they’re used to create opinions, not reflect them as they pretend to do. They create an illusion of consensus for whichever opinion is desired by the poll’s commissioning client. And when you’re talking about a public-opinion based game like politics, manufactured “public” opinion poll results can turn into self-fulfilling prophecies.
ps: The original Dominion Post story by Patrick Crewdson (published 1 Feb 2007) has since been removed from their site, so I’ve included it below. It’s a shame Parliamentary Services didn’t press the Dom Post for evidence and pursue defamation action because I know for a fact those bogus votes didn’t originate from a parliamentary server.
Parliamentary voters try to skew Key poll
Someone in Parliament now has severe finger cramps.
Yesterday’s Dominion Post poll on whether John Key will be the next prime minister proved popular with readers – particularly with those who walk the halls of power.
With more than 33,600 votes cast, 63 per cent disagreed that the National leader would succeed Helen Clark as prime minister, while 37 per cent thought he would.
But given that 17,104 of the votes were cast from parliamentary computers, Mr Key need not despair – it appears some poll participants had a vested interest.
Parliamentary workers seemingly spent yesterday supporting the status quo, with 80 per cent of the votes from Parliament saying Mr Key would not be the next prime minister.
Removing all parliamentary votes from the poll meant 55 per cent favoured him as the country’s next leader.
The poll followed his first state of the nation speech.
He warned of an emerging “underclass” in New Zealand society.
Dominion Post readers were invited to vote by text message, phone, e-mail, or online at dompost.co.nz and Stuff.co.nz
Government allies outside Parliament also responded. An e-mail campaign, understood to have originated with a union, implored recipients to vote against Mr Key, urging: “go forth and vote, fellow Labourites!”
Voting in the poll was heaviest from 9am to 10am and during the lunch hour.
A related Stuff.co.nz poll had 80.9 per cent of 4000 voters supporting Mr Key’s view about a growing New Zealand “underclass”.