
Is There A Shy Tory Effect In Australian Polling?

What’s the Shy Tory effect?

The term “shy Tory” originates, as many wacky nicknames for political factions do (see: “Whig”), from the UK. It emerged in the aftermath of the 1992 UK general election, where a race the polls suggested was a tie, with Labour narrowly ahead, instead turned out to be a sizeable win for the ruling Conservatives; an inquiry by the Market Research Society suggested Conservative voters may have misled pollsters as to their voting intentions. Thus was born the idea that there is a segment of respondents who will not admit to voting for, or holding views in line with, politically-incorrect/conservative parties, but who will vote for said parties in the privacy of the ballot box.

The shy Tory effect is an extension of the more widely established social desirability bias, where respondents are less likely to admit to behaviours considered socially undesirable (e.g. drug use) and more likely to claim they act in socially desirable ways (e.g. recycling, donating to charity). Furthermore, in the UK, there is some evidence that voting-intention polls have tended to underestimate the Conservatives, and that live-interview polling (e.g. an interviewer directly asking people who they intend to vote for, either face to face or over the phone) tends to be more skewed than automated polling (e.g. an online survey), suggesting the possibility of a shy Tory effect there. However, the shy Tory effect has since been expanded and is now regularly trotted out whenever a conservative party/candidate/cause overperforms its polls at an election (e.g. Donald Trump’s presidential bids), even when the error is fairly small and/or comes in an electorate with little history of polling skew.

In particular, the shy Tory effect has been frequently cited as an explanation for the 2019 Australian polling failure, in which polls collectively underestimated the centre-right Liberal-National Coalition’s vote by 2.4 – 3.4% and overestimated the centre-left Labor Party’s vote by a similar amount. While some polls overestimated Labor by more (YouGov/Galaxy, Newspoll) and some by similar amounts (Morgan, Essential), Ipsos actually underestimated the Labor first-preference vote.

However, this is likely due to Ipsos’ historical house effect, whereby they underestimate the Labor vote but overestimate the Green vote, leading to 2-party-preferred estimates roughly similar to those of the other polls.
At first glance, the shy-Tory explanation might seem somewhat persuasive, given the Association of Market and Social Research Organisations’ conclusion that “a skew (to Labor) has been evident in recent election cycles, with 17 of the 25 final poll results since 2010 (68%) overestimating 2PP support for Labor”.

However, the above is not credible evidence for a polling skew towards Labor.

Consider this: while “17 of 25 since 2010 (overestimated)…Labor” may sound like pretty clear evidence, in reality that period only covers four elections: one in which Labor was clearly overestimated (2019), one in which Labor was slightly overestimated (2010), one in which the polls were bang-on (2013) and one in which the polls very slightly overestimated the Coalition (2016). Polling errors within an election tend to be at least somewhat correlated (e.g. the polls all underestimated the Coalition at the 2019 federal election, but overestimated them at the 2018 Victorian and 2017 WA state elections), and ignoring this fact can make what is really “in 1 of 4 elections pollsters clearly overestimated Labor, in another they got the 2pp to within 1%, and in the remaining 2 of 4 they got the 2pp right to within 0.3%” sound a lot more skewed than it really is. If you’re still not convinced, think of it this way:

Let’s say I have four gold coins, each with unknown odds of landing heads or tails. Each gold coin determines the probabilities for five silver coins: if the gold coin lands on heads, then the chance of heads for each of its five silver coins is 80%, while if it lands on tails, the chance of tails for each of its five silver coins is also 80%.

Now, I flip my four gold coins, then use those results to flip twenty silver coins and get 14 silver heads (70%). Does this prove the gold coins are biased towards heads?

If you assumed the twenty silver coins were independent (instead of being four correlated batches of five), then it might appear as if the sample was biased towards heads – the probability of getting 14 or more heads in 20 independent fair flips is only about 5.8%, or roughly 1 in 17. However, once you account for the fact that the coins are correlated – thanks to the gold master coins, the chance of silver coin 1 landing heads is correlated with the chance of silver coin 2 landing heads, and so on – the true probability of getting 14 or more heads in 20 silver coin flips is about 1 in 6, or about the same as the probability of rolling a 3 on a six-sided die.
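If you’d like to check those numbers yourself, here’s a minimal Monte Carlo sketch (plain Python, with made-up coins rather than any polling data) that estimates both probabilities: the chance of 14 or more heads if the twenty silver coins really were independent, and the chance once the four gold “master” coins induce the correlation described above.

```python
import random

def trial_independent():
    """20 independent fair flips; returns the number of heads."""
    return sum(random.random() < 0.5 for _ in range(20))

def trial_correlated():
    """4 fair gold coins; each sets p(heads) = 0.8 or 0.2 for its 5 silver coins."""
    heads = 0
    for _ in range(4):
        p_heads = 0.8 if random.random() < 0.5 else 0.2
        heads += sum(random.random() < p_heads for _ in range(5))
    return heads

n_trials = 200_000
p_indep = sum(trial_independent() >= 14 for _ in range(n_trials)) / n_trials
p_corr = sum(trial_correlated() >= 14 for _ in range(n_trials)) / n_trials
print(f"P(14+ heads), independent flips: {p_indep:.3f}")   # ~0.058 (about 1 in 17)
print(f"P(14+ heads), correlated flips:  {p_corr:.3f}")    # ~0.17 (about 1 in 6)
```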

That’s not even including other problems. For example, some of the polls which “(overestimated) 2PP support for Labor” did so by ridiculously tiny margins, e.g. the 2010 Newspoll which had Labor at 50.2% of the 2pp (they won 50.12%), or the 2016 Ipsos poll which had Labor at 50% (Labor won 49.64%). Pollsters also often round their published figures, so a poll may have gotten the 2pp almost exactly right but still be counted as “overestimating Labor” – that 2016 Ipsos poll might have had Labor at 49.8% but rounded to 50%. Finally, 2pp is probably not the best metric for determining whether pollsters are systematically under-sampling conservative voters, as most pollsters assume minor-party preferences will flow at similar rates to previous elections, and hence some proportion of any 2pp error may be due to a shift which pollsters don’t even poll about.

Still, that doesn’t necessarily answer the question of whether or not Australian polling suffers from a shy Tory effect. To determine whether or not pollsters are being misled/ignored by conservative-leaning voters to an extent which skews their polls, let’s take a systematic look at Australian polling for elections and the same-sex marriage survey.

First, let’s set out what we would expect to see if there was a shy Tory effect in Australian polling:

  1. Over time, polls would systematically underestimate support for conservative and/or politically-incorrect parties and causes. They don’t always have to underestimate the conservative side, of course, but over a bunch of elections, polls should under-estimate conservatives on average if respondents were unwilling to admit to voting for conservatives. Here, I’m going to use voting-intention polls for the Liberal-National Coalition (and its component parties) and One Nation (a nationalist conservative, far-right party), as well as polling for the same-sex marriage survey, to determine if this is the case.

    Note that I intend to use polls of the first-preference voting intention (aka primary votes), not 2-party-preferred (2pp). While 2pp can be a useful metric, most modern pollsters estimate 2pp by assuming minor party voters’ preferences will flow to the major parties at the same rate as they have at prior elections (e.g. assuming 82% of Greens voters will place Labor above the Coalition candidate(s)), and therefore 2pp estimates can be off due to shifts in preference flows which pollsters don’t usually poll about (see the short sketch after this list for how such a 2pp estimate is put together).

    This analysis is also limited to final polls (defined as being the last poll released by that pollster for that election, and being taken within 7 days of the election), to minimise the effects of any late swing on pollster error. This presents a bit of a complication for the same-sex marriage survey, as it was conducted over a period of two months. In order to avoid having to weight polls by the share of voters who had already voted, I’ve opted to only look at polls taken within the last two weeks of the campaign.
  2. Polls which were taken using live-interview methods (i.e. an actual person asking voters who they intend to vote for, either face-to-face or over the phone) should be more skewed against the conservatives than polls taken using automated methods (i.e. robopolls or online surveys). Anonymised methods of survey-taking reduce social-desirability bias, and given that live-interview polls in 2016 apparently showed Clinton further ahead than online polls/robopolls did, live polls should show more of a skew against conservatives than automated polls if a shy Tory effect exists.

    This is important because even if we find a skew against conservatives in polling, it does not necessarily follow that it is due to voters being unwilling to admit to intending to vote for conservatives – for example, it might be because pollsters are undersampling right-leaning groups, but the ones they do find respond with their genuine voting intention. The difference between live and automated polling is important for determining whether polls simply need to weight their samples differently, or whether a group of respondents are systematically misleading pollsters as to their voting intentions.
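As a concrete illustration of the 2pp point in note 1 above, here is roughly how a last-election preference-flow estimate is constructed. The primary votes and flow rates below are hypothetical placeholders (apart from the 82% Greens flow, which is the example figure used above); the point is only that the pollster never asks about the flows, so a shift in them produces a 2pp error even if every primary vote were measured perfectly.

```python
# Hypothetical primary votes (%) and assumed preference flows to Labor (illustrative only).
primaries = {"Labor": 36.0, "Coalition": 41.0, "Greens": 11.0, "One Nation": 4.0, "Others": 8.0}
flows_to_labor = {"Greens": 0.82, "One Nation": 0.35, "Others": 0.50}  # assumed, not polled

labor_2pp = primaries["Labor"] + sum(primaries[p] * flows_to_labor[p] for p in flows_to_labor)
print(f"Labor 2pp: {labor_2pp:.1f}%, Coalition 2pp: {100 - labor_2pp:.1f}%")

# If One Nation preferences actually flowed 20% (not 35%) to Labor, the same primary
# votes imply a 2pp about 0.6 points worse for Labor - an 'error' the poll never measured.
flows_to_labor["One Nation"] = 0.20
labor_2pp_shifted = primaries["Labor"] + sum(primaries[p] * flows_to_labor[p] for p in flows_to_labor)
print(f"Labor 2pp with shifted flows: {labor_2pp_shifted:.1f}%")
```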

The voting-intention polls used in this analysis are available here, while the polling for the same-sex marriage survey was sourced from Wikipedia. My thanks to Dr Kevin Bonham and William Bowe (of Poll Bludger) for having shared their archives of old Australian polling.

So, do polls underestimate conservatives?

On average, Australian polling does not under-estimate conservative parties


Election | Method | Coalition | One Nation
1988 NSW | Live | -3.5 |
1988 NSW | Live | 0 |
1991 NSW | Live | 0.4 |
1995 NSW | Live | -0.9 |
1999 NSW | Live | 4.3 | -2
1999 NSW | Live | 1.3 | -2
2003 NSW | Live | -1.3 |
2003 NSW | Live | -1.8 | 0.2
2007 NSW | Live | -2 |
2007 NSW | Live | -1 |
2011 NSW | Live | -1.1 |
2011 NSW | Automated | 3.9 |
2011 NSW | Live | -0.1 |
2015 NSW | Live | -1.6 |
2015 NSW | Automated | 3.4 |
2015 NSW | Automated | -0.1 |
2015 NSW | Live | -0.6 |
2015 NSW | Live | 1.4 |
2019 NSW | Automated | -0.5 | 3.9
2019 NSW | Automated | -0.5 | -0.1
1988 VIC | Live | -5.2 |
1992 VIC | Live | -0.5 |
1996 VIC | Live | 3.3 |
1999 VIC | Live | -1 |
2002 VIC | Live | -0.2 |
2002 VIC | Live | -1.7 |
2006 VIC | Live | -2.5 |
2006 VIC | Live | 3.5 |
2006 VIC | Live | -0.5 |
2010 VIC | Live | 0.3 |
2010 VIC | Live | -0.2 |
2010 VIC | Live | -0.7 |
2010 VIC | Live | 0.3 |
2014 VIC | Live | 0 |
2014 VIC | Live | -2 |
2014 VIC | Automated | -2 |
2014 VIC | Automated | 2 |
2014 VIC | Automated | -2 |
2018 VIC | Automated | 4.8 |
2018 VIC | Automated | 0.8 |
2018 VIC | Automated | 4.8 |
1986 QLD | Live | -1.6 |
1986 QLD | Live | -0.6 |
1989 QLD | Live | 0.4 |
1992 QLD | Live | -1.1 |
1995 QLD | Live | -3.4 |
1998 QLD | Live | 1.8 | -4.2
2001 QLD | Live | -2.4 | 3.3
2004 QLD | Live | -2.4 | -1.9
2006 QLD | Live | 0.1 | 0.4
2006 QLD | Live | 1.1 | 0.4
2006 QLD | Live | 0.1 | 0.4
2009 QLD | Live | 0.4 |
2009 QLD | Live | -1.6 |
2012 QLD | Live | 0.4 |
2012 QLD | Live | 1.4 |
2012 QLD | Live | -2.6 |
2015 QLD | Live | -0.3 |
2015 QLD | Automated | -2.3 |
2017 QLD | Automated | 0.3 | -0.7
2017 QLD | Automated | 1.3 | -1.7
2017 QLD | Automated | -3.7 | 3.3
2020 QLD | Automated | 0.1 | 2.9
1986 WA | Live | -5 |
1989 WA | Live | -2.4 |
1993 WA | Live | 1.6 |
1993 WA | Live | 3.6 |
1993 WA | Live | -0.4 |
1993 WA | Live | 3.1 |
1996 WA | Live | 2.3 |
2001 WA | Live | 3.7 | -2.5
2005 WA | Live | 4.7 | -0.6
2008 WA | Live | -0.7 |
2008 WA | Live | -0.2 |
2013 WA | Live | -3.2 |
2013 WA | Live | 0.8 |
2017 WA | Live | -0.6 | 4.1
2017 WA | Automated | 0.4 | 3.1
2017 WA | Automated | 3.4 | 1.9
2021 WA | Automated | 0.7 | 1.8
1989 SA | Live | 0.8 |
1993 SA | Live | 1.2 |
1997 SA | Live | -0.4 |
2002 SA | Live | 4 |
2002 SA | Live | 3 |
2006 SA | Live | 1 |
2010 SA | Live | 0.9 |
2010 SA | Live | 0.4 |
2014 SA | Live | -3.2 |
2018 SA | Automated | -3.9 |
2018 SA | Automated | -3.9 |
1987 Federal | | -1.9 |
1987 Federal | Live | -2.9 |
1990 Federal | Live | -3.9 |
1990 Federal | | -1.4 |
1993 Federal | | 0.7 |
1993 Federal | Live | 3.7 |
1993 Federal | Live | 0.7 |
1996 Federal | Live | 0.8 |
1996 Federal | | -4.2 |
1996 Federal | Live | -2.2 |
1998 Federal | Live | 0.5 | -1.4
1998 Federal | Live | 2.5 | -1.4
2001 Federal | Live | -0.4 | -1.8
2001 Federal | Live | 3.1 | -1.3
2001 Federal | Live | 3.1 | -1.3
2004 Federal | Automated | -0.7 | -0.7
2004 Federal | Live | 2.3 | -0.2
2004 Federal | Live | -1.7 | -0.2
2004 Federal | Live | 3.8 | -0.2
2007 Federal | Live | -2.1 |
2007 Federal | Live | 0.9 |
2007 Federal | | -0.6 |
2010 Federal | Live | -2.3 |
2010 Federal | Live | -1.8 |
2010 Federal | Live | 0.2 |
2010 Federal | Live | -0.8 |
2013 Federal | Automated | -2.5 |
2013 Federal | Live | -0.5 |
2013 Federal | Live | 0.5 |
2013 Federal | Live | 0.5 |
2013 Federal | Automated | -2 |
2013 Federal | | -1.5 |
2013 Federal | Automated | -3.5 |
2016 Federal | Live | -2 |
2016 Federal | Automated | 1 |
2016 Federal | Automated | 0.5 |
2016 Federal | Automated | 1 |
2016 Federal | Automated | 0 |
2019 Federal | Live | -2.9 | 0.9
2019 Federal | Automated | -2.9 | 3.5
2019 Federal | Live | -2.4 | 0.9
2019 Federal | Automated | -2.4 | -0.1
2019 Federal | Automated | -3.4 | -0.1
Average | | -0.2% (134) | +0.2% (35)
Negative values mean the poll underestimated the party's vote share, while positive values mean the poll overestimated the party's vote share. Number of polls given in brackets for the average error.

As the above table quickly makes clear, there is practically no bias against conservative parties in Australian polling. On average, the overall skew on both the Coalition and One Nation vote shares has been minimal, at just -0.2% and +0.2% respectively, with neither being statistically significant (p = 0.3 and p = 0.6). For every 2018 South Australia (-3.9%) or 2019 federal election (-2.8%) where the Coalition was underestimated, there’s a 2018 Victoria (+4.8%) or a 2001 WA (+3.7%) where they were instead overestimated.
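For those curious where p-values of that kind come from, a test along these lines (a one-sample t-test of whether the mean poll error differs from zero) can be sketched as below. The error lists are just the first few rows of the table above (you would fill in all 134 and 35 values), and note that, as discussed earlier, treating every poll as an independent observation will tend to overstate how much evidence we have either way.

```python
from scipy import stats

# Final-poll errors (poll minus result, in percentage points); only the first few
# rows of the table above are shown here - fill in the full 134 and 35 values.
coalition_errors = [-3.5, 0.0, 0.4, -0.9, 4.3, 1.3]
one_nation_errors = [-2.0, -2.0, 0.2, -4.2, 3.3]

# One-sample t-test: is the mean error distinguishable from zero?
for name, errors in [("Coalition", coalition_errors), ("One Nation", one_nation_errors)]:
    result = stats.ttest_1samp(errors, popmean=0)
    print(f"{name}: mean error = {sum(errors) / len(errors):+.2f}, p = {result.pvalue:.2f}")
```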

This is similarly reflected in the polling averages for each election:


Election | Coalition (polling average) | One Nation (polling average)
1988 NSW | -1.8 |
1991 NSW | 0.4 |
1995 NSW | -0.9 |
1999 NSW | 2.8 | -2
2003 NSW | -1.6 | 0.2
2007 NSW | -1.5 |
2011 NSW | 0.9 |
2015 NSW | 0.4 |
2019 NSW | -0.5 | 1.9
1988 VIC | -5.2 |
1992 VIC | -0.5 |
1996 VIC | 3.3 |
1999 VIC | -1 |
2002 VIC | -1 |
2006 VIC | 0.1 |
2010 VIC | -0.1 |
2014 VIC | -0.8 |
2018 VIC | 3.4 |
1986 QLD | -1.1 |
1989 QLD | 0.4 |
1992 QLD | -1.1 |
1995 QLD | -3.4 |
1998 QLD | 1.8 | -4.2
2001 QLD | -2.4 | 3.3
2004 QLD | -2.4 | -1.9
2006 QLD | 0.4 | 0.4
2009 QLD | -0.7 |
2012 QLD | -0.3 |
2015 QLD | -1.3 |
2017 QLD | -0.8 | 0.3
2020 QLD | 0.1 | 2.9
1986 WA | -5 |
1989 WA | -2.4 |
1993 WA | 1.9 |
1996 WA | 2.3 |
2001 WA | 3.7 | -2.5
2005 WA | 4.7 | -0.6
2008 WA | -0.5 |
2013 WA | -1.3 |
2017 WA | 1 | 3
2021 WA | 0.7 | 1.8
1989 SA | 0.8 |
1993 SA | 1.2 |
1997 SA | -0.4 |
2002 SA | 3.5 |
2006 SA | 1 |
2010 SA | 0.6 |
2014 SA | -3.2 |
2018 SA | -3.9 |
1987 Federal | -2.4 |
1990 Federal | -2.7 |
1993 Federal | 1.7 |
1996 Federal | -1.9 |
1998 Federal | 1.5 | -1.4
2001 Federal | 1.9 | -1.5
2004 Federal | 0.9 | -0.3
2007 Federal | -0.6 |
2010 Federal | -1.2 |
2013 Federal | -1.3 |
2016 Federal | 0.1 |
2019 Federal | -2.8 | 1
Average | -0.2% (61) | 0.02% (17)
Negative values mean that an average of final polls underestimated the party's vote share, while positive values mean that an average of final polls overestimated the party's vote share. Number of poll averages given in brackets for the average error.

Overall, there is no evidence of voting-intention polling in Australia systematically under-estimating voting intention for conservative or politically-incorrect parties (in fact, One Nation, by far the most “politically incorrect” party split out in polls, is actually slightly overestimated on average). Polls under-estimated the Coalition in just 34 of 61 elections (56%), which is pretty close to what might be expected from pure chance, while they under-estimated One Nation in just 8 of 17 elections (47%) (again, very close to pure chance). Even at elections where social issues have been prioritised by the right (e.g. “African gangs” in VIC 2018, or to a lesser extent abortion in QLD 2020), polls have not under-estimated support for the conservative parties (VIC 2018, Coalition over-estimated by 3.4%; QLD 2020, LNP slightly over-estimated by 0.1% and One Nation overestimated by 2.9%), despite “politically incorrect” issues presumably being more noticeable to the electorate.

I think it might be a tad hard to claim shy Tory effect is an issue when Australian voting-intention polls aren’t even biased against conservatives in the first place (what’s the point of “shy Tories” if polls don’t even under-estimate Tories?), but maybe there’s evidence in other aspects of Australian polling for shy Tories. How did Australian pollsters do in predicting the results of the same-sex marriage legalisation postal survey?

Polls somewhat over-estimated the Yes vote in the same-sex marriage survey


Pollster | Method | Yes | Error
Newspoll | Automated | 63 | +1.4
Essential | Automated | 66.5 | +5.1
YouGov | Automated | 64 | +2.4
Galaxy | Automated | 64 | +2.4
Average | | 64.3 | +2.7
If a poll reported Undecideds, they were split by evenly dividing them between the Yes and No responses.

On average, polls taken within the final two weeks of the conclusion of the same-sex marriage survey somewhat over-estimated support for legalising same-sex marriage (+2.7%), with most polls taken over the survey period somewhat under-estimating opposition to same-sex marriage legalisation by varying amounts. Does this demonstrate a shy Tory bias?

Well, not really. First off – keep in mind, this is just the results of a single survey, and as I mentioned above, polling errors for any one election tend to be at least somewhat correlated. Hence, it’s hard to draw conclusions about how polling as a whole is doing from any one election, even if lots of polls were conducted for that election.

Secondly, most of the polling for the same-sex marriage survey either used automated methods or a mix of automated and live-interview methods, with little evidence of any difference in the results obtained by varying methods. This makes it unlikely that the polling error was due to respondents systematically misleading pollsters; as mentioned above, at elections where shy-Tory effects occurred or are suspected, automated polling generally produced better results for conservatives than live polling.

Finally, and most importantly, the same-sex marriage survey differs significantly from normal elections in that it was a voluntary survey, conducted over a two-month timeframe. Given that people didn’t have to return their survey forms (and an estimated 20% of registered voters didn’t), it’s entirely possible that the underlying samples were accurate, but pollsters did not or could not weight their samples to reflect who would actually return their forms (as opposed to the electorate as a whole). For example, let’s say voters under 35 supported Yes by a margin of 85-15, and voters over 35 supported Yes by a margin of 55-45. From the 2016 Australian Census, voters under 35 make up about 28% of the electorate; but if some of them were unable to vote in the survey for some reason (e.g. university students who had gone back to their hometown for the holidays, but whose mail still arrived at their student residence), then polls could have perfectly accurate sub-samples but still over-estimate the Yes vote.

If a hypothetical pollster found the exactly correct proportions in their sample (under-35s, Yes 85-15 + over-35s, Yes 55-45), and weighted using the Census data (28% under-35, 72% over-35), they would find 63.4% of the electorate supports the Yes vote. If, for whatever reason, the proportion of respondents to the survey was actually 22% under-35 and 78% over-35 (61.6% Yes), then the poll would over-estimate support for Yes despite there being no shy Tories and the pollster in fact having gotten the figures for their sub-samples exactly right.
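To make the arithmetic explicit, here is the same calculation in a few lines of code. The 85-15 / 55-45 splits and the 28%/22% age shares are the purely hypothetical figures from the paragraph above, not real survey data.

```python
def overall_yes(yes_under_35, yes_over_35, share_under_35):
    """Overall Yes % as a weighted average of the two (hypothetical) age groups."""
    return yes_under_35 * share_under_35 + yes_over_35 * (1 - share_under_35)

# Weighting the (perfectly accurate) sub-samples to the Census age profile...
print(overall_yes(85, 55, 0.28))  # 63.4 -> what the poll would publish
# ...but if only 22% of returned forms actually came from under-35s:
print(overall_yes(85, 55, 0.22))  # 61.6 -> what the survey count would show
```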

(Note that I’m not saying voters under 35 actually supported Yes by 85-15 or that voters over 35 supported Yes by a margin of 55-45; these are merely hypothetical figures I’m using to illustrate how pollsters could get their samples exactly right but still get the overall result wrong due to weighting. Of course, this is less of an issue at elections, where voting is compulsory.)

Looking at Dr Bonham’s summary table of the same-sex marriage polls, at least some of the polls simply polled voters who had already voted, which may explain part or all of the error; this table of polls from Wikipedia does seem to suggest that in the same poll, Yes usually had a bigger lead amongst voters who had already voted as compared to voters who had not yet voted.

In other words, there are various more mundane possibilities for why polls over-estimated the Yes vote in 2017 that don’t involve conservative voters misleading pollsters, or even pollsters under-sampling conservative voters. Furthermore, as I’ve stressed repeatedly, the same-sex marriage survey is just one ‘election’ (and one with its own set of complicating factors which do not occur at regular elections), and it has to be taken in the context of voting-intention polling more broadly getting the vote shares for conservative parties correct.

How do automated polling methods compare to live-polling methods in Australia?

Another key aspect of the shy-Tory effect is that live-interview methods of polling (i.e. face-to-face, or live interview by telephone) should show more skew to the left than automated methods (i.e. online surveys, interactive voice response (IVR) aka “robopolls”). This makes sense if you think about the shy-Tory effect as an extension of social-desirability bias – if there are people afraid or too shy to admit their opinions and voting intentions to another person, some of their worries may be alleviated if they are surveyed through more impersonal methods.

Hence, one way to ‘rescue’ the shy-Tory hypothesis is to suggest that, maybe, in Australia, a left-bias by live-interview polling is cancelled out or reduced by a right-bias in automated polling. Is that the case?

 | Coalition | One Nation
Live polling | -0.1% (96) | -0.5% (22)
Automated polling | -0.2% (32) | +1.3% (13)
Difference | -0.1% | +1.8%
Negative values indicate the party was under-estimated, while positive values indicate the party was over-estimated. Number of polls in each average indicated in brackets. Some polls were not included as their methodology was unclear or included a mix of live and automated methods.

In the case of the Coalition, live and automated polling methods barely differ, whereas in the case of One Nation live polls very slightly under-estimate the One Nation vote while automated polls slightly over-estimate it. While it might be tempting to point at the One Nation numbers and go “Aha! Automated polls over-estimate One Nation compared to live polls!”, it’s worth noting that we’re working off a very small set of data-points here (just 13 automated polls of the One Nation vote, from just four elections), meaning that such an outcome may well be due to random chance. Furthermore, the result is not statistically significant – the probability of getting an average bias at least as extreme as the one we saw, even if there were no actual bias in the underlying polls, is not low enough to clear the threshold (p = 0.03 from a t-test, and p = 0.09 from a Wilcoxon signed-rank test).

It’s not statistically significant because I apply a Bonferroni correction to the classical p = 0.05 threshold. In this case, I tested two hypotheses, and therefore the appropriate significance level is p = 0.025 rather than p = 0.05; this correction is necessary as otherwise I can simply test lots of possibilities and only report the ones which came up positive.

For example, someone could test the hypotheses, “do automated polls under-estimate the Labor vote”, “do automated polls over-estimate the Coalition vote”, “do automated polls under-estimate the Green vote”, “do automated polls over-estimate the One Nation vote”, “do automated polls over-estimate the Palmer’s United/UAP vote”.

If the tests are independent, the probability that at least one of them comes up positive by pure chance would be 1 – 0.95^5 ≈ 0.226; in other words, by giving yourself five tries to get a positive result, the chance of getting a result which meets the threshold by pure chance is about the same as getting two heads when flipping two coins. The Bonferroni correction is a quick way to reduce the chance of false positives by forcing me to acknowledge the fact that I gave myself multiple tries to get a positive result, and to raise the bar for declaring a result statistically significant.
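In code, the arithmetic looks like this (0.05 is just the conventional threshold, and the test counts are the examples used above):

```python
alpha = 0.05  # conventional single-test significance threshold

# Chance of at least one 'significant' result across five independent tests,
# even when every null hypothesis is true:
familywise_error = 1 - (1 - alpha) ** 5
print(f"{familywise_error:.3f}")  # ~0.226

# Bonferroni correction: divide the threshold by the number of tests performed.
print(alpha / 5)  # 0.01 if testing five hypotheses
print(alpha / 2)  # 0.025 for the two hypotheses tested in this post
```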

There’s also arguably an issue with simply averaging all of the live/automated polls conducted to see if there’s a difference in whether they over/under-estimate voting intention; it might be that automated polls were only conducted at a few elections where polls of all stripes, live or automated, over-estimated One Nation (a real worry when you’re only working with four elections). If that was the case, then claiming that this definitively shows automated polls over-estimate One Nation would be like a pollster who only polled the 2013 and 2016 federal elections (where most pollsters got really accurate results) claiming to have a lower average error when in reality they simply happened to not poll elections where many pollsters got it wrong.

To examine this possibility, we can simply compare elections where we have both live and automated polls, to see if the pattern still holds:

When we compare live and automated polls conducted at the same election, there is no shy Tory effect

 | Coalition | One Nation
Live polling | -0.6% | +1.6%
Automated polling | -0.1% | +1%
Difference | +0.5% (10) | -0.6% (4)
Negative values indicate the party was under-estimated, while positive values indicate the party was over-estimated. Number of elections in each average indicated in brackets.

With the Coalition polls, the pattern remains the same – the average bias of live and automated polling barely differs. For One Nation, on the other hand, the pattern reverses – now it’s live polls which over-estimate One Nation by more than automated polls (although, again, the difference isn’t statistically significant). This suggests a pretty simple explanation for why automated polling may have appeared to over-estimate One Nation: by luck of the draw, automated pollsters only polled elections where everyone, live or automated, was over-estimating the One Nation vote anyway, while live pollsters polled a wider range of elections, including some where they under-estimated One Nation support.
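For what it’s worth, a paired comparison along these lines is straightforward to reproduce: group the poll errors by election, keep only elections that have both live and automated polls, and average each method within an election before comparing (so heavily polled elections don’t dominate). A rough sketch is below, with only a few placeholder rows from the One Nation table filled in.

```python
from collections import defaultdict
from statistics import mean

# (election, method, error) - only a few One Nation rows shown; fill in the rest.
polls = [
    ("2017 WA", "Live", 4.1), ("2017 WA", "Automated", 3.1), ("2017 WA", "Automated", 1.9),
    ("2019 Federal", "Live", 0.9), ("2019 Federal", "Automated", 3.5),
]

by_election = defaultdict(lambda: {"Live": [], "Automated": []})
for election, method, error in polls:
    by_election[election][method].append(error)

# Keep only elections polled by both methods, averaging within each election first.
paired = {e: (mean(m["Live"]), mean(m["Automated"]))
          for e, m in by_election.items() if m["Live"] and m["Automated"]}
for election, (live_avg, auto_avg) in sorted(paired.items()):
    print(f"{election}: live {live_avg:+.1f}, automated {auto_avg:+.1f}")
```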

Hence, it’s pretty clear that there is basically no shy-Tory effect in Australian polling. From the fact that polls don’t under-estimate conservative/politically-incorrect parties overall, to the fact that there is basically no difference between live and automated polling overall, there is very little evidence for the idea that there is a group of voters who vote for conservatives but are unwilling to tell pollsters about it. More broadly, there doesn’t even seem to be any systematic skew to the left in Australian polling; polls are about as likely to over-estimate the Coalition vote as they are to under-estimate it, while they slightly tend to over-estimate support for One Nation.

I mean, think about it this way. Let’s say you had someone whose job it is to count the number of chickens in a closed coop, and they usually got within +/- 2 chickens, with their errors averaging out to basically zero. (For reference, the average absolute error on the Coalition primary vote – i.e. how far off the polls usually are from the actual Coalition vote – is about 1.8%.) Maybe you’d discuss whether they could improve their counts to get closer to the actual number, but it would be kind of ridiculous to claim that they were systematically under-estimating the number of chickens, and that it’s because the chickens were scared of them.

So why is the idea of shy Tories/conservative under-estimation so common?

(this is mixed with a bit more opinion than usual)

Here’s a piece from the 2021 WA state election campaign, discussing previous examples of famous polling errors after the first Newspoll of the campaign (Labor 68-32) was released.

What are the examples used?

Trump.

Brexit.

Morrison’s 2019 re-election.

While the first two are a little more common overseas and the last a little more common here in Australia, they all share a remarkable similarity – they were elections where polling showed the left slightly ahead but where the conservative side won narrow victories instead.

Don’t get me wrong, it’s not like I think the polls did well in 2019. However, pretty much any time anyone brings up the possibility of big poll errors, the examples are almost always ones where the left was over-estimated, despite the fact that there are plenty of counter-examples where it was instead the right which was over-estimated.

You want a big polling error where the right under-performed its polls?

2018 Victorian state election, where Labor outperformed its 2pp polling by 3.2% – slightly more than it would under-perform its polls by at the 2019 federal election.

2017 French presidential election; Macron over-performed his polls against the far-right Marine Le Pen by 4.3% in the second round.

Oh, but maybe “the right lost, but by bigger than expected” isn’t surprising or sensational enough. Maybe the media only likes to bring up examples where the “wrong” side won. Did any of those happen on the left?

2015 Queensland state election; changes in preference flows meant that a small lead for the LNP in final polling became a deficit for them on election day, with Labor winning the 2pp vote and working together with crossbenchers to form government.

2001 Western Australian state election, where the final polls had the race at 50-50 but Labor won 52.9% of the 2pp (the same error size as the 2019 federal election, though in the other direction).

2017 UK general election, where the Labour Party over-performed its polling by 5 points (about 3% on a 2-party basis) and forced Theresa May’s Conservatives (who had been expected to increase their majority) into minority government, propped up by a deal with a minor Northern Irish party.

And of course, federally, there’s always the classic 1993 federal election, where Keating’s Labor beat their polls by about 2% (on a 2pp basis) to unexpectedly retain government.

Despite this, the same examples where conservatives over-performed their polls tend to get trotted out every time there’s a discussion about not trusting the polls. In fact, despite being written in 2019 and 2020 respectively, those last two linked pieces cite the Conservative over-performance at the 2015 UK general election instead of the more recent Labour over-performance at the 2017 UK general election as examples of polling failure – if I was more conspiratorial I would wonder if people were intentionally ignoring left-wing overperformance at the polls!

This bias in bringing up polling failures is in spite of the fact that polling errors, both in Australia and overseas, don’t tend to favour right-wing parties, so it’s not like the Trump/Brexit/Morrison errors are representative of polling errors more broadly. Furthermore, in at least some of those cases (Trump 2016, Brexit), the polling errors were fairly small (about 1 – 2% for Trump in 2016, about 2% for Brexit), especially when compared to the errors I’ve listed above.

So what gives?

My guess is that the Trump/Brexit/Morrison errors were much more surprising to the people who write the articles/editorials/programs which report the polls and caution us not to assume the polls will be exactly correct. However, just because one or two events are surprising and have significant consequences for the people involved doesn’t necessarily mean that they occur all the time or even regularly; it would be like extensively reporting on shark attacks or nuclear plant meltdowns as if they were a common occurrence.

Continuously repeating examples of the right over-performing their polls, without stopping to consider examples of when polls under-estimated the left, can unduly introduce or reinforce the idea that polls under-estimate the right thanks to the availability heuristic. This might still be fine if polls actually tended to under-estimate the right (though still problematic, as pollsters tend to change their methods in response to previous failures), but they don’t – not here, not in the USA, not in Europe. As I demonstrate above, there is no shy Tory effect or even a systematic under-estimate of conservatives in Australian polling, and those of us who write/produce to inform the public about polling should discuss examples of the left out-performing its polling, in order to avoid creating a misleading impression of polling being biased against conservatives.

An interesting example I didn’t bring up above (because technically there was very little polling error) is that of Sinn Féin, a left-wing Irish republican party.

Historically, Sinn Féin has tended to under-perform its polling, and when polls showed it slightly ahead of the other parties before the 2020 Irish general election, everyone expected the same thing to happen again.

Instead, Sinn Féin pretty much matched its final polls (probably doing slightly better, depending on how you average the polls), and ended up as the largest party by popular vote. It came as such a surprise that even the party’s own leaders felt they should have stood more candidates, as many surplus Sinn Féin votes ended up flowing to other parties’ candidates under the single-transferable-vote (STV) system used to elect the Irish legislature.

It’s a good example of how historical patterns in polling error can break down when pollsters and/or voters change their methods/voting patterns – and it happened on the left.


2 thoughts on Is There A Shy Tory Effect In Australian Polling?

  1. I am genuinely surprised. I had thought there was such an effect in Australia, but the data presented in the article shows this is not so.

    Now, if there is such an effect in other countries, why not Australia? Perhaps it’s our compulsory voting. Voluntary voting means that to get people to vote, politicians have to either inspire them or make them angry – and it’s much, much easier to make them angry. And some angry views are more or less socially-acceptable than others.

    Compulsory voting will capture more of the people who are neither angry nor inspired, and so have less strong opinions – opinions more subject to change, the swinging voters.

    1. Compulsory voting might well be part of it. Other possible factors (note, I don’t necessarily have data to implicate these factors, this is just informed speculation) include:

      -The fact that historically, the Coalition was fairly socially progressive (large elements of White Australia were dismantled by the Menzies and Holt Coalition governments, while Labor supported White Australia up until 1966).

      -The fact that the Coalition has been more popular federally than Labor. It’s hard for something to be socially undesirable if a majority of the population repeatedly demonstrates their preference for it.

      -The lack of a Brexit/Trump equivalent to both drive socially conservative voters to the polls as well as sow mistrust in mainstream organisations (particularly pollsters), so conservative voters answer surveys.

      -Polling might be easier here. I can think of many reasons why that might be – e.g. the vote is easier to weight by demographic, we might have greater trust in pollsters etc. Alternatively our pollsters could also be doing an oddly good job at adjusting for previous errors.

      -Our major parties might not be as polarised as in other countries, meaning that conservatives who live in majority-progressive areas in Australia may feel more comfortable expressing their views.

