At the 2019 Australian federal election, four separate pollsters under-estimated the Coalition vote and over-estimated the Labor vote/Labor two-party-preferred, leading to one of the worst polling errors seen at an Australian federal election since the 1980s.
In response to this error, one pollster had their contract terminated (Ipsos), two have made significant, documented changes to their methodology and provided greater transparency (Essential, YouGov/Newspoll), and the last has, uh…it’s complicated (Morgan – see footnote). When Roy Morgan returned to publishing voting-intention polls, the pandemic had just been declared. Previously, Morgan used a variety of methods for each poll, but for its 2019 polling, it used the face-to-face method, which was obviously no longer viable during COVID.
Hence, when Morgan returned with a mixed phone/online survey, we don’t know if this change was made in response to the pandemic, or in response to the 2019 polling error. Additionally, unlike Essential or YouGov/Newspoll, Roy Morgan is not a member of the Australian Polling Council and hence provides very little information about its polling.
We don’t know, for example, if its phone interviews are conducted live or through an automated voice system (aka robopoll). We don’t know what weights it uses (Newspoll and Essential both adopted new weighting frames after 2019), how its participants are recruited, how questions are ordered on the survey etc.
The new Newspoll adopted the methodology YouGov uses internationally, and has published polls for state elections (the only pollster to do so), with the errors (poll minus result) on its final state polls as follows:
Election | Labor | Lib/Nat | Greens | Others |
---|---|---|---|---|
2020 QLD | -2.6 | 0.1 | 1.5 | 1 |
2021 WA | -2.9 | 0.7 | 1.1 | 0.1 |
2022 SA | 1 | 2.4 | -0.1 | -3.3 |
Avg. abs. err. | ±2.1 | ±1.1 | ±0.9 | ±1.5 |
Historical avg. err. | ±1.8 | ±1.7 | ±1.3 | ±1.9 |
Errors for 2022 SA were estimated from the ABC party total projections as of 26/Mar/2022 2100H GMT +10.
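For clarity, here’s roughly how the average absolute errors in the table are calculated from the individual election errors – a minimal Python sketch (the published averages were presumably computed from unrounded errors, so small rounding differences are possible):

```python
# Signed errors (poll minus result) on Newspoll's final state polls, from the table above.
errors = {
    "Labor":   [-2.6, -2.9, 1.0],
    "Lib/Nat": [0.1, 0.7, 2.4],
    "Greens":  [1.5, 1.1, -0.1],
    "Others":  [1.0, 0.1, -3.3],
}

# Average absolute error: the mean of the error magnitudes, ignoring direction.
for party, errs in errors.items():
    avg_abs_err = sum(abs(e) for e in errs) / len(errs)
    print(f"{party}: ±{avg_abs_err:.1f}")
```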
Newspoll correctly called Labor victories at all three state elections, although obviously “calling” the right winner in WA (where Labor held a 66-34 lead going into the election) was easier than doing so in QLD (where Labor was ahead by just 51.5-48.5 in the final Newspoll). The average polling error for each party/grouping is also slightly lower than the historical average error for state Newspolls, although this comes from a small sample of elections.
More interesting, however, is the question of what this implies for Newspoll’s accuracy at the upcoming federal election. Historically, Newspoll has been one of the most accurate federal pollsters, with a ~10% lower average error on the 2-party-preferred compared to other pollsters. Given its recent performance at state elections, should we expect the federal Newspoll to out-perform its historical accuracy?
To test this, I calculated the average skew and error size on all state Newspolls conducted between two federal elections for the Labor/Coalition/Greens/Others vote, and compared these to the error on the final Newspoll for the federal election that immediately followed. I focus on Newspoll because it has polled most state elections and has maintained relatively similar methodologies between federal elections.
By contrast, while Morgan has polled some state elections, it tends to use a variety of sampling methodologies (face-to-face, phone interviews, SMS surveys, online surveys), and hence we would not expect strong correlations between Morgan state polling errors and federal Morgan poll errors.
In any case, the only pollster which has polled all of the 2020 – 2022 state elections is Newspoll, so I think it’s a fair comparison. For example, for the 2019 federal election we’d be drawing on QLD and WA in 2017, VIC and SA in 2018, and NSW in 2019, and comparing the average skew and error across those state Newspolls to the error on the final federal Newspoll.
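As a rough sketch of that pairing logic (the data structures and numbers here are illustrative placeholders, not the actual dataset):

```python
from statistics import mean
from scipy.stats import pearsonr  # assumes SciPy is available

# Placeholder figures for illustration only - not the real dataset.
# Signed Coalition skews (poll minus result) on final state Newspolls,
# grouped by the federal election that immediately followed them.
state_skews_by_cycle = {
    2010: [0.5, -1.0, 1.5],
    2013: [2.0, 0.5, -0.5, 1.0],
    2019: [1.0, -0.5, 0.5, 2.0, -1.0],  # e.g. QLD/WA 2017, VIC/SA 2018, NSW 2019
}

# Signed Coalition error on the final federal Newspoll for each election (placeholders).
federal_errors = {2010: -0.5, 2013: 1.5, 2019: -3.0}

cycles = sorted(state_skews_by_cycle)
avg_state_skew = [mean(state_skews_by_cycle[c]) for c in cycles]
fed_error = [federal_errors[c] for c in cycles]

# Correlation between the average state skew in a cycle and the federal error that followed.
r, p = pearsonr(avg_state_skew, fed_error)
print(f"r = {r:.2f} (p = {p:.2f})")
```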
Comparing state Newspoll skews to federal Newspoll errors
(I have excluded the data from the 2013 – 2016 election cycle, as there was a methods change at the 2016 federal election not enacted in the 2013 – 2016 state Newspolls. In any case, including 2016 does not change any of the conclusions below.)
The correlation between average Newspoll skew at recent state elections and the final Newspoll error at the next federal election is very weak, with the strongest being the skews/errors on the Coalition vote. However, even the Coalition state skew/federal error relationship isn’t particularly strong, with an average 3% gap between the average state skew and the polling error at the next federal election.
In other words, given a 1% over-estimate of the Coalition in state Newspolls, anything from a 7% under-estimate of the Coalition to a 9% over-estimate at the federal election would be within this model’s historical margin of error – effectively useless for predictive purposes (for comparison, the historical margin of error on federal polling is something like +/- 4%).
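To reproduce that kind of margin, one simple approach is to regress the federal error on the average state skew and look at the spread of the residuals – again a sketch with placeholder data rather than the real dataset (a proper prediction interval would also account for uncertainty in the fitted line):

```python
import numpy as np
from scipy.stats import linregress

# Paired per-cycle values (placeholders): average Coalition skew across state Newspolls
# in each cycle, and the Coalition error on the final federal Newspoll that followed.
avg_state_skew = np.array([0.3, 1.0, -0.4, 0.8])
fed_error = np.array([-0.5, 1.5, -3.0, 0.7])

fit = linregress(avg_state_skew, fed_error)

# Predicted federal Coalition error given a 1% over-estimate at the state level...
pred = fit.intercept + fit.slope * 1.0

# ...and a crude +/- 2 standard deviation band based on the residual spread.
resid = fed_error - (fit.intercept + fit.slope * avg_state_skew)
moe = 2 * resid.std(ddof=1)
print(f"predicted federal error: {pred:.1f} +/- {moe:.1f}")
```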
What about the other hypothesis? Do higher errors at state elections correlate with higher than usual errors at upcoming federal elections? Below I compare the average absolute error on state vs federal Newspolls instead – this examines error size instead of error direction, so e.g. a -4% error and a 4% error would both be a 4% absolute error.
Comparing state Newspoll relative accuracy to federal Newspoll relative accuracy
(bear in mind that we’re talking about relative accuracy below – whether the federal Newspoll has historically tended to do well when it has done well at preceding state elections)
Similar to the average state skew vs federal error data, there is very little correlation between the error size at preceding state elections and the polling error size at the federal election. You could maybe make an argument that there is some correlation between state Newspolls’ error size on the Coalition vote and the error size on the Coalition vote at federal elections. On that basis, the predicted error on the Coalition vote at the upcoming federal election is +/- 1%, a little over half the historical average error of +/- 1.6%; however, as noted previously, this comes from a relatively small sample of state elections and needs to be treated with caution.
So there’s little correlation between Newspoll accuracy at preceding state elections and Newspoll accuracy at the federal election, or Newspoll skews at state elections and federal Newspoll errors. However, perhaps this is due to skews in each state cancelling out in a federal election. Do state election Newspoll errors predict upcoming federal Newspoll errors for their respective state? For example, if Newspoll over-estimated the Liberals at the 2022 SA election, should we also expect them to over-estimate the Liberals in their SA sample for the 2022 federal election?
Comparing state Newspoll errors with federal Newspoll errors in each state
To test this hypothesis, I’m using the state breakdowns provided by Newspoll for its federal poll (an example – paywalled). These breakdowns combine the samples of several polls so that a reasonable sample size is obtained even for the smaller states.
However, these breakdowns often combine samples from relatively long periods – usually 1 – 3 months. Hence, they are not directly comparable to final polling without some adjustment, as they may not take into account genuine shifts in voting intention. For example, in 2013, the last available breakdown had L/NC 46 ALP 35 OTH 19, while the final Newspoll had L/NC 46 ALP 33 OTH 21. The result was L/NC 45.5 ALP 33.4 OTH 21.1, so it’s possible that the final Newspoll accurately captured a shift in voting intention from ALP to OTH that had not yet occurred in the timeframe sampled by the breakdown.
To correct for this, I apply a uniform swing onto all state breakdown figures so that the overall voting intention (voting intention in all states combined) matches the final Newspoll conducted for that election.
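In code, that adjustment looks something like this – a minimal sketch using the 2013 national figures quoted above (the state-level figures are hypothetical, and real breakdowns carry more party categories):

```python
# National voting intention over the breakdown period, and from the final Newspoll
# (the 2013 figures quoted above).
breakdown_national = {"L/NC": 46, "ALP": 35, "OTH": 19}
final_poll         = {"L/NC": 46, "ALP": 33, "OTH": 21}

# Uniform swing: the national shift between the breakdown period and the final poll...
swing = {party: final_poll[party] - breakdown_national[party] for party in final_poll}

# ...applied uniformly to each state's breakdown figures (hypothetical state figures here).
state_breakdowns = {
    "QLD": {"L/NC": 51, "ALP": 31, "OTH": 18},
    "SA":  {"L/NC": 44, "ALP": 38, "OTH": 18},
}

adjusted = {
    state: {party: figures[party] + swing[party] for party in figures}
    for state, figures in state_breakdowns.items()
}
print(adjusted)
```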
Below, I’ve plotted the state Newspoll error on the x-axis, versus the federal Newspoll error for that state on the y-axis.
So for example, if Labor was under-estimated at the SA state election, and under-estimated in the SA sample for the federal Newspoll, that point would be plotted in the bottom-left quadrant of the graph. Alternatively, if Labor was under-estimated at the SA state election but over-estimated in the SA sample for the federal Newspoll, that point would be plotted in the top-left quadrant.
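Roughly, each of those plots is constructed like so – a sketch with matplotlib and hypothetical error pairs, one point per state election:

```python
import matplotlib.pyplot as plt

# Hypothetical (state error, federal-sample error) pairs for Labor - illustrative only.
state_err   = [-1.5, 2.0, 0.5, -2.6]
federal_err = [ 1.0, 0.5, -2.0, -0.3]

fig, ax = plt.subplots()
ax.scatter(state_err, federal_err)
ax.axhline(0, linewidth=0.8)  # above the line: over-estimated in the federal sample
ax.axvline(0, linewidth=0.8)  # right of the line: over-estimated at the state election
ax.set_xlabel("State Newspoll error (Labor)")
ax.set_ylabel("Federal Newspoll error in that state (Labor)")
plt.show()
```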
Comparing state Newspoll errors to federal Newspoll errors in that state
(due to difficulties in sourcing data on the Greens for old Newspoll breakdowns, I’ve opted to combine Green error data into the All Others error)
There appears to be pretty much no correlation between whether state Newspolls tended to over- or under-estimate a party, and whether the federal Newspoll also over- or under-estimates that party in that state. This is in line with recent elections; for example, the state Newspoll at the 2017 QLD state election nailed the ALP and LNP vote to within 1% each, but the federal Newspoll in Queensland was atrocious, over-estimating Labor by 6.3% and under-estimating the Coalition by 7.7%.
What about our other hypothesis – that state Newspoll accuracy might correlate with federal Newspoll accuracy? I’ve constructed a similar set of plots for absolute error in each state against absolute error in federal Newspoll samples:
Comparing state Newspoll relative accuracy to federal Newspoll relative accuracy by state
Again – there is pretty much no correlation between Newspoll accuracy at individual state elections and Newspoll’s accuracy in samples for that state at the subsequent federal election.
How about overall? Has the historical Newspoll been particularly good or bad at polling particular states? (i.e. are particular states “difficult to poll”?)
State Newspoll average absolute error
State | Labor | Lib/Nat | All others |
---|---|---|---|
NSW | ±2.6 | ±1.9 | ±2.3 |
VIC | ±1.8 | ±2.0 | ±1.2 |
QLD | ±1.3 | ±1.1 | ±3.1 |
WA | ±1.8 | ±1.3 | ±2.9 |
SA | ±1.3 | ±2.1 | ±3.7 |
Federal Newspoll average absolute error by state
State | Labor | Lib/Nat | All others |
---|---|---|---|
NSW | ±1.8 | ±2.1 | ±2.2 |
VIC | ±2.3 | ±3.5 | ±2.9 |
QLD | ±4.4 | ±2.7 | ±3.4 |
WA | ±2.2 | ±2.1 | ±1.8 |
SA | ±2.4 | ±2.0 | ±2.7 |
There’s pretty much no correlation between how well Newspoll has tended to do in each state and the average error size on federal Newspoll samples in that state. This is partly confounded by sample size in each state; for federal polls you will tend to get more respondents from bigger states and fewer from smaller states and hence you would expect greater accuracy in federal polls for NSW/VIC than you would for WA/SA.
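To give a sense of the sample-size effect, here’s a quick sketch of the theoretical margin of error on a state sub-sample of a single national poll (the sample sizes are assumptions for illustration, not Newspoll’s actual state samples, and the combined breakdowns discussed above aggregate several polls, so their effective samples are larger):

```python
from math import sqrt

# Assumed state sub-sample sizes out of a notional ~2,000-respondent national poll.
state_n = {"NSW": 630, "VIC": 510, "QLD": 400, "WA": 210, "SA": 140}

p = 0.4  # an illustrative vote share of 40%
for state, n in state_n.items():
    moe = 1.96 * sqrt(p * (1 - p) / n) * 100  # classical 95% margin of error, in points
    print(f"{state}: +/- {moe:.1f}")
```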
Even taking that into account, however, Newspoll accuracy at the state level can completely differ from Newspoll accuracy for the same state at the federal level. For example, Newspoll is pretty good at estimating the Liberal vote in SA for federal elections (completely opposite to what you would expect given SA has a fairly small sample size in most polls) but its estimates of the Liberal vote for SA state elections have had some of the highest errors on average.
Of more interest may be the Queensland error figures. The Queensland state sample for federal Newspoll has historically been subpar, recording by far the worst average error on the Labor vote, the second-worst average error on the LNP vote and the worst average error on the combined Others vote (this isn’t restricted to Newspoll – what data I could find from the other pollsters [Nielsen/Morgan/Essential/Ipsos] produced nearly identical average errors). However, Newspolls conducted for Queensland state elections have the lowest average error for the Labor and LNP vote, while the error on the combined Others vote is just a little above average. This could be a result of the relatively small number of elections we’re looking at here (n = 8), but it does show that “Queensland is a difficult state to poll” is too simple a conclusion from the data.
Implications for Newspoll accuracy at the upcoming federal election
There appears to be no correlation between Newspoll’s tendency to over- or under-estimate a party at state elections and Newspoll’s tendency to over/under-estimate the same party at federal elections, or for that matter Newspoll’s tendency to over/under-estimate that party in the same state. Hence, we shouldn’t assume that Newspoll will over-estimate the Coalition based off its errors on the QLD/WA/SA state elections.
Similarly, there’s no correlation between Newspoll’s accuracy at state elections and its accuracy at subsequent federal elections, or its accuracy in particular state elections and its accuracy in the same states at the federal level. While Newspoll has historically been one of the most accurate pollsters at Australian elections, it’s not possible to predict whether it will be more or less accurate than usual at a federal election from its performance at recent state elections.
This analysis is very much an “as you were” for expectations of Newspoll accuracy. At minimum, the new methodology appears to be doing about as well as the old methodology has done at state elections, so I’d expect the Newspoll to be about as accurate as it has been historically. This may sound like a bit of a cop-out, but it’s not nothing – “the new Newspoll will be about as good as the old Newspoll used to be” is equivalent to saying “the new Newspoll will, on average, be better than most polls”. It doesn’t mean that it’ll be perfect or that it’ll call the next election correctly, but it’s definitely more information than we have about other pollsters’ new methodology.
Newspoll looks under-dispersed coming into the 2022 federal election, and with that comes an increased risk of bias under the bias-variance trade-off.
I’d like to note for readers that under-dispersion does not necessarily mean the same thing as herding.
A pollster is under-dispersed if it releases figures which don’t “bounce” as much as they should if they were true random samples of the population. However, an under-dispersed pollster may well release polls which are completely out of line with other pollsters (hence not herding).
e.g. if Newspoll said 49, 50, 50, 50, 51, 49, 50 – they would be under-dispersed, but if the other pollsters consistently had the race at 44-48, Newspoll would not be herding to those pollsters.
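One simple way to check for under-dispersion is to compare the spread of a pollster’s published figures with the spread you’d expect from sampling error alone – a sketch using the figures in the example above and an assumed sample size of 1,500 per poll:

```python
from math import sqrt
from statistics import mean, pstdev

# Published two-party-preferred figures from the example above, as proportions.
published = [0.49, 0.50, 0.50, 0.50, 0.51, 0.49, 0.50]

n = 1500  # assumed sample size per poll
p = mean(published)

observed_sd = pstdev(published)
expected_sd = sqrt(p * (1 - p) / n)  # theoretical standard deviation from sampling alone

# If the observed spread is well below the theoretical spread, the series is
# under-dispersed - and that's before allowing for genuine movement in voting
# intention, which should make published figures bounce more, not less.
print(f"observed SD: {observed_sd:.4f}, expected SD: {expected_sd:.4f}")
```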
“Bias” is also a statistical term which does not necessarily imply intentional bias on the part of the pollster, or a bias on a Labor/Coalition basis. e.g. if the final Newspoll at ten elections said 49, 51, 52, 47, 47, 50, 51, 52, 48, 46, but the result was 48, 49, 54, 45, 44, 52, 54, 58, 49, 38 – Newspoll would not be very biased on a Labor/Coalition basis (just 0.6%), but it would be quite biased on a “under-estimating whoever’s ahead” basis (average 2.4%).
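And for concreteness, a minimal sketch of how those two kinds of bias could be measured across a series of final polls and results, using the illustrative figures from the example (being rounded examples, they won’t reproduce the quoted averages exactly):

```python
from statistics import mean

# Final-poll figures and election results for one party's two-party-preferred,
# taken from the illustrative example above.
polls   = [49, 51, 52, 47, 47, 50, 51, 52, 48, 46]
results = [48, 49, 54, 45, 44, 52, 54, 58, 49, 38]

# Bias on a party basis: average the signed errors (poll minus result) as-is.
party_bias = mean(p - r for p, r in zip(polls, results))

# Bias on an "under-estimating whoever's ahead" basis: for each election, how much
# the eventual winner's vote was under-estimated, whichever side that was.
leader_under = mean((r - p) if r > 50 else (p - r) for p, r in zip(polls, results))

print(f"party bias: {party_bias:+.1f}, leader under-estimation: {leader_under:+.1f}")
```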