Written By:
lprent - Date published:
6:47 pm, October 26th, 2008 - 26 comments
Categories: polls -
Tags: poll bias, wikipedia
This is from the discussion area of the Wikipedia page on opinion polls. It is an analysis by trewa of the biases between the polling companies. I know how much our own commentators are interested in the biases of different polls, so this will give some kind of basis for comparison.
Personally I think that land-line based polling is pretty worthless these days. In my opinion, having a listed (in the white pages) land-line is a characteristic of being older, more technophobic, and being of higher incomes – characteristics of a more conservative voter.
Interesting. While the money quote from the caption is “No polls show significant deviation from zero at the 95% level”, note that it’s measuring against other polls using similar methodologies, not against a known quantity (not that there is one).
One-line benchmark for those of you who don’t feel like reading the graphs:
Colmar Brunton (One News): Overrates National, underrates all others.
Digipoll (Herald, Marae): Overrates both National and Labour, underrates both Green and NZF.
Nielsen (Fairfax): Overrates National, underrates Labour and Green.
Roy Morgan: Underrates both Labour and National, overrates minor parties.
TNS (3 News): Overrates Labour, underrates National.
UMR (Labour): Overrates National (!) and Green.
Do the dots (for example, just above and below the Greens on Roy Morgan) signify outlying polls or something?
Interesting reading, but surely you don’t seriously think that people on welfare can’t afford landline phones?
Much appreciated, and a big ups to the guys doing all the work here.
I do have one question. How valid is it statistically:
“… is the mean value estimated using the Loess smoother taking into account all polls.”
when each of the pollsters is using a different methodology? Polls are not the equivalent of a full scale election held on the day of the sampling… rather they are a proxy for such a thing. What these graphs tell us is that these proxy polls are not centered with respect to each other, but what is less clear is what the actual center (ie the result of a real election held on the day) is with respect to these proxies. Is a simple “average of all polls” the correct method here?
In this respect I am very much reminded of the very controversial hockey stick climate change debate, and some of the stuff I have read (at an elementary level) around PCA analysis and reconciling non-centered data series.
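For what it’s worth, here is a minimal sketch (in Python, with entirely made-up poll figures and sample sizes – nothing here comes from the actual NZ series) of how the “centre” can shift just by moving from a plain average of polls to a sample-size-weighted one:
```python
# Minimal sketch: plain average vs sample-size-weighted average of polls.
# The pollster names, support figures and sample sizes are invented for illustration.

polls = [
    # (pollster, party support %, sample size)
    ("Pollster A", 44.0, 750),
    ("Pollster B", 41.5, 1000),
    ("Pollster C", 39.0, 500),
]

simple_mean = sum(support for _, support, _ in polls) / len(polls)
weighted_mean = (sum(support * n for _, support, n in polls)
                 / sum(n for _, _, n in polls))

print(f"simple average:   {simple_mean:.1f}%")    # 41.5%
print(f"weighted average: {weighted_mean:.1f}%")  # ~41.8%
```
Neither number is the “real” centre, of course – it just shows that the choice of averaging method is itself a modelling decision.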
This is excellent work at wikipedia. It’s consistent with some of the analysis work I’ve done around poll biases.
Shorty
Work and Income do not consider a landline phone a necessary expense unless it’s for medical or safety reasons. This means that when a landline is cut off due to lack of payment, Work and Income will not assist with the reconnection fees unless one of those circumstances is proven.
At the other end of the income scale, many high-income young people are no longer connecting landlines, preferring to use cell phones and wireless hotspots or neighbours’ wireless signals for phone and internet.
The bottom line is that no poll can be completely accurate, but they do give us an idea of what is happening and should never be taken as gospel.
One other thing: has any organisation polled Mangere? Could Taito Phillip Field have a chance to upset things? Word is he is packing out churches all over the electorate.
What are the alternatives to land-line based polling? Which polling companies take these alternatives up?
Great post btw.
Shorty said: Interesting reading, but surely you don’t seriously think that people on welfare can’t afford landline phones?
Shorty, I worked as an advocate for beneficiaries for many years. Even by the turn of the millennium, many beneficiaries had abandoned the landline because the pre-paid mobile with no line charge was a much cheaper option.
In my current employment, I still have considerable contact with beneficiaries, and I can assure you that for many of them, having a landline is beyond their resources. A pre-paid mobile with 5 or 6 short outgoing calls each week is a much cheaper option.
And if you are on a benefit, every $ counts.
Sarah, the focus on landlines is only part of the problem. Also the usual polling hours exclude night-workers and shift-workers and people who don’t answer the phone during Coro St, etc. And then there are the refusal rates. And …
But as to your first question, ultimately no methodology is without its distortions. So in some senses it’s better to stick with the broken set of methods we know, and use tools like the above to adjust.
gobsmacked said: Could Taito Phillip Field have a chance to upset things?
I really hope not. A prima facie case exists that he is a corrupt scumbag (which I actually had my suspicions about 20 years ago when both he and I were union officials).
The Labour Party has a strong candidate, there is a Family Fist candidate too, who will split the moral conservative vote, and the Green candidate, Mua Strickson-Pua, is asking only for a Green Party vote rather than the electorate vote.
Hopefully Field, like Peters, is on the road to electoral oblivion in two weeks.
Lew: Surprisingly I agree. They are good indicators of trends, provided that you are aware of the limitations. That is something that the msm chooses not to examine. You have to be aware of alternate explanations.
For instance, another possible reason for the decline in the Nats’ polls could be dilution. As the number of refuseniks for the pollsters decreases closer to the election, our ‘talkback’ audience’s influence in the polls declines. That certainly fits the available facts as well as the bleed idea that Tim was suggesting.
To me, this has always seemed the logical explanation for the rapid changes in the poll trends approaching an election, and why the polls get more accurate at the end.
However, I suspect that the msm will prefer the ‘bleed’ explanation because it allows them to construct a narrative to appeal to their audience. It is a lot easier than describing the problems with sampling.
toad: Don’t know enough about the Mangere electorate to guess.
But I think that NZF has a pretty good shot at tipping over the 5% from my read of the audiences that he caters to. I’m afraid that the attack from the right was too blatant and too visible, and will now engender too much support. That is a real pity. He was getting close to being too respectable, which would have been his death knell.
I’m expecting him to do better than expected in the Maori electorates, and to hold most of his support in the seniors (that hasn’t died since the last election). But that is just my opinion. Besides, Winston, from all accounts, is campaigning better than he has since the 90s.
Yes, Lynn, I fear that might be the case with Winston.
I’m just trying to be optimistic. But you are right, National and ACT were stupid enough to attack him so hard that he can claim martyrdom, and maybe get NZF above 5% yet again.
Which is a pity. Because a nice tidy Labour-Green-Maori coalition (including a revisit of the Foreshore and Seabed Act) would suit me fine.
But if we have to deal with Winston again – oh, dear!!! Mind you, at least the Nats say they won’t have him at all, so his bargaining power is minimised, unless Key does yet another flip-flop.
LP said:
I don’t know that there is much evidence that polls become more accurate the closer to the election. The experience from 2005 showed that there was enormous volatility between polls, both within different polling series and between those polling series.
We do know that the time-weighted polling average of all polls in the last 6 weeks was very accurate, but not the individual polls. I’d like to see some authoritative commentary on why that is. I suspect that with individual polling sample sizes decreasing, the margin for error increased: a poll where Labour’s true result was 41% may have put Labour’s support at 44%; it would have been well within the error range of 37%-45%; the next poll reported may have put Labour’s support at 38%, also within the error range. If you take the polls in isolation, you would assume that either Labour’s support had dropped from 44% to 38%, or that one of the polls was wrong. In fact, given the margin for error, they could well both have been right.
When you average them out, of course, you get much more confidence about the true levels of support. Relying on an individual poll result, though, is pretty hazardous.
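To put a rough number on the margin-of-error point, here is a small sketch of the standard calculation. The sample size of 750 is an assumption for illustration only – the polls discussed here don’t all publish the same n.
```python
import math

# Standard 95% margin of error for a simple random sample.
# p and n below are assumptions for illustration, not figures from any actual poll.
p = 0.41   # hypothetical true level of support
n = 750    # assumed sample size
z = 1.96   # 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe * 100:.1f} points")  # roughly +/- 3.5

# So consecutive readings of 44% and 38% can both sit inside the interval
# around the same underlying 41%, without either poll being "wrong".
```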
As it appears that Nielsen, Digipoll and Colmar Brunton all over-rate National, even a ‘poll of polls’ is tainted by the “noise” this group of polls introduces.
That today’s C-B poll shows a narrowing of the gap between Labour and National is interesting. Is their 47% now more accurate… or is National’s support now closer to 41%-42%?
One other thing: has any organisation polled Mangere? Could Taito Phillip Field have a chance to upset things? Word is he is packing out churches all over the electorate.
Marco, my snouts told me a couple of months ago to watch out for Field – that he had a very good chance of winning Mangere. Time will tell of course, but I think there might just be a little surprise on election night.
TE: There is no real way to know if the result at (say) 3 months out reflects what would happen if an election were held at that time. You’d have to hold an election, or run a different type of survey.
However it has been my experience that the polls are less accurate the further away they are from the election. We notice it at an electorate level because we are looking at changes over time in the canvassing of individuals (i.e. we randomly recanvass people). My home electorate is probably one of the most thoroughly canvassed in the country.
That way you can see the rate of movement of people to and from support of particular parties during the years between elections. It is never that high, except when a new party comes on the scene and manages to pick up votes, or people decide that they don’t want to support a particular party because it is getting too far from its roots (2002 for instance). But generally they stay in roughly the same bloc once voters are out of their 20s.
So what I usually see coming up to an election is the national polls drifting towards the results I’d expect from reading the local tea leaves. What I’m using as a frame is the rate of change in other national polling sequences over multiple previous elections, compared to what canvassing showed.
It is as empirical as hell, but doing that usually gives me a pretty good estimate of the major bloc outcomes. That is why I’m pretty sure that the Nats will go down a few points, and Labour will go up a few points, at the Nov 8 poll.
There isn’t that much movement going on in the long-term support of parties. National has been slowly sucking up the right bloc support since their 2002 debacle. Labour has been running pretty steady, and the Greens have been slowly increasing their support through their retention of support from the young as they age.
In short, I think that the swinging voter is an artifact of the polling system. The polls have been steadily getting less accurate over the years because the group that they sample from is getting smaller compared to the general voting population.
So what I’m seeing is the polls moving towards what I’m seeing on the ground. There are no major changes going on, it is just that the polls are getting more accurate as people stop refusing to answer the pollsters.
These are figures from the Morgan Poll at the 2005 election (1st figure: actual result, 2nd: Morgan Poll, 3rd: error):
Labour: 41.1 / 38.5 / -2.6
National: 39.1 / 37.0 / -2.1
Greens: 5.3 / 7.5 / +2.2
ACT: 1.51 / 3 / 0.49
As you know, last time NZF said that they would talk to the biggest party first, all polls were neck and neck, and it is generally assumed that some people switched from the Greens and ACT to Labour and National at the last moment to try to get their party of choice over the line.
This being the case, a study of the above figures would conclude that the Morgan poll was uncannily accurate.
One would assume, therefore, that they have not changed the way they canvass voters. So it would logically be safe to assume that they are the ones to watch.
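Just to make the arithmetic in that table explicit, here is a tiny sketch (using only the Labour, National and Greens rows as quoted above) showing that the error column is simply the Morgan Poll figure minus the actual result:
```python
# Error column from the Morgan Poll figures quoted above:
# error = Morgan Poll figure minus actual 2005 result.
rows = {
    "Labour":   (41.1, 38.5),  # (actual result, Morgan Poll)
    "National": (39.1, 37.0),
    "Greens":   (5.3, 7.5),
}

for party, (actual, morgan) in rows.items():
    print(f"{party}: error = {morgan - actual:+.1f}")  # -2.6, -2.1, +2.2
```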
I’m glad you find this interesting – the real kudos goes to the guys who have been collating the poll results for the last 18 months or so – I’ve just snuck in at the last minute with the graphs and bias analysis. Some notes that might be of interest:
1. Each individual poll is treated as a single observation of the true value – no adjustment is made, for example, for the fact that Colmar Brunton polls more frequently than UMR. Similarly, no adjustment is made for sample size. These are, of course, weaknesses. If I have time, I’d like to do a random-effects model to treat these types of things properly. But this makes a goodish first-order approximation.
2. The key result is that there are systematic discrepancies between the polling firms, and these are greater than any measure of “margin of error”. This is the most important point, and is frequently overlooked by the MSM, who seem to be fixated on a change of 1% between individual polls. A poll of polls such as this tends to circumvent that problem somewhat, but it’s not perfect by any means (e.g. when we have lots of biased polls, the mean will also be biased). A rough sketch of this kind of house-effect check follows after these notes.
3. Despite what everyone seems to think, there is no evidence here to suggest a “tory” bias in the Digipoll (subject to the given qualifications).
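For the curious, here is a rough sketch of what that first-order treatment looks like – this is not the code behind the Wikipedia graphs, the poll numbers are invented, and statsmodels’ lowess is standing in for the Loess smoother used there:
```python
# Rough sketch of a house-effect estimate: smooth all polls together with a
# Loess/lowess curve, then look at each firm's mean deviation from that trend.
# The data below is made up purely to show the shape of the calculation.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

polls = [
    # (days since start of series, pollster, party support %)
    (0, "Colmar Brunton", 52.0), (7, "Roy Morgan", 46.5), (10, "TNS", 47.0),
    (14, "Colmar Brunton", 51.0), (21, "Roy Morgan", 45.0), (25, "Nielsen", 50.5),
    (30, "TNS", 46.0), (35, "Nielsen", 49.5), (40, "Colmar Brunton", 50.0),
]

t = np.array([p[0] for p in polls], dtype=float)
y = np.array([p[2] for p in polls], dtype=float)

# Smoothed "consensus" trend across all polls; each poll is a single observation.
smoothed = lowess(y, t, frac=0.6)               # sorted (x, fitted) pairs
trend = np.interp(t, smoothed[:, 0], smoothed[:, 1])

# House effect: each firm's mean deviation from the common trend.
residuals = y - trend
for firm in sorted({p[1] for p in polls}):
    mask = np.array([p[1] == firm for p in polls])
    print(f"{firm:15s} mean deviation: {residuals[mask].mean():+.1f} points")
```
A proper treatment would also weight by sample size and model the firm effects jointly (the random-effects idea mentioned above), but even this crude version makes the systematic offsets visible.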
If you have suggestions about what else you’d like to see in this analysis, I’m open to them – please put them up on the wikipedia talk page!
I was pretty impressed with both the data and the charts (as you can see).
Personally I don’t think there is a particular bias. It is just that the environment that they are sampling into is a lot more difficult. The diminishing number of listed landlines severely cramps their techniques because it is shifting the population of people with listed landlines away from the underlying population of voters.
That tends to favor the tories, or probably disfavor them if they start believing the polls too much.
RE:
“In my opinion, having a listed (in the white pages) land-line is a characteristic of being older, more technophobic, and being of higher incomes – characteristics of a more conservative voter”
This is a really interesting analysis. Especially as EVERY Labour Party MP has a land-line listed in the white pages!
Increase the sample size – phone lines are cheaper.
Observer: Do you intend to look like an idiot or is that just your natural charm?
Yes, they’re called ‘office phones’, because every MP has an electorate office, and they have at least one listed phone. It would not surprise me if there is also a listed MP’s private number, i.e. the one that Parliamentary Services pays for (and has an answerphone on – you wouldn’t want to pick it up…).
However what exactly does that have to do with the problem that the polling companies face with lower numbers of listed landlines? Or is this just some pathetic attempt to get involved in the discussion?