Opinion polls just before the 2019 election were interpreted by most political journalists as a prediction of a Labor landslide. The Coalition win in 2019 was then interpreted by most of those same journalists as proof the polls were wrong.
Neither of those things is true.
Opinion polls tell us exactly what they’re meant to tell us: the voting intentions of a statistically representative (by age and gender) sample of all Australians who live a relatively stable life and are willing to answer polling surveys, within a three percent margin of error.
The two-party preferred polls just before the 2019 election were roughly 51-49 in favour of Labor. With an error margin of about three percent, the polls were not predicting a Labor victory; they were telling us the election could go either way. Which was accurate.
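The arithmetic behind that claim can be sketched in a few lines. This is a minimal illustration of the standard 95% margin of error for a simple random sample, not any pollster's actual method; the sample size of 1,000 is an assumption for illustration, not a figure from this article.

```python
import math

# Labor's two-party preferred share in the poll
p = 0.51
# Assumed sample size (hypothetical; typical for a national poll)
n = 1000

# Standard 95% margin of error for a sample proportion:
# 1.96 standard errors either side of the observed share.
moe = 1.96 * math.sqrt(p * (1 - p) / n)

lower, upper = p - moe, p + moe
print(f"95% interval: {lower:.1%} to {upper:.1%}")
print("Either side could win" if lower < 0.5 < upper else "Clear leader")
```

The margin works out to roughly three points, so the interval around a 51% reading comfortably spans 50%: exactly the "could go either way" situation described above, and why a two-point move sits inside the noise.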
Journalists who love peering into the entrails of polling data and talking about a two-point move as if it's meaningful are telling you they don't understand how polling works. Ignore them.
There are other factors. Thirty years ago, opinion polls were more accurate than they are now. This is not because the polls have changed, it’s because Australia has changed.
When households were mostly headed by baby boomers with landlines, they were much easier to reach, more stable, and more homogeneous. The two-party system was less fractured, and people had much higher trust in government than they do now. The sum of all this suggests that twenty years ago, people were more likely to answer polling survey questions and the results were more representative of the entire population.
This gave journalists reason to believe polls were an accurate prediction of election outcomes without having to think too much about how the polling data was obtained or what it meant.
While most polls now try to reach people online and via mobile phones as well as landlines, they inevitably fail to reach people who don't answer unknown numbers, or who regularly change their mobile number, or who don't click on polling question links, or who simply don't want to answer polling questions. Those people make up a much greater proportion of the population than ever before and their voting intentions are much more difficult to predict.
Logically, people who fit that description are less politically engaged and possibly live less stable lives than the people who are easy to reach, identify and poll. They might have a ‘pox on both your houses’ attitude to politics. Maybe they’re too busy managing a life to care about elections. Maybe they hate all politicians equally or don’t believe their vote can make a difference. Most of them, however, still vote.
Do they vote for the status quo? Do they vote for change just for change’s sake? Do they vote independent with little care for what those independents represent? Or do they just draw a dick on their ballot paper, take their sausage and go?
The point is not only that we don’t know, but that we can’t possibly find out. No one can (or should) force people to answer polling questions against their will.
Also, polling is done on a nationally representative statistical sample. Elections are won electorate by electorate, not by a national vote. Electorates are not statistically smoothed in the same manner as polling data; they tend to cluster by wealth, age, family structure, education, ethnicity, and region. Those divisions have widened in the last few decades, making polling data even less able to predict elections accurately.
When the polls say (as they do now and did in 2019) that the two-party preferred intention of people who answer polling surveys is roughly even, that is all they are saying. In other words, the polls can't predict who is going to win.
This is not the fault of the people who conduct polls. The factors that make modern polls a less accurate prediction of election outcomes are outside their control and there’s no real way of fixing the polls to include opinions of people who don’t want to be included.
What we need to do is better understand what the polls mean and what they can tell us.
—
As a side note, the reason I know this is not because I’m smarter or better than political journalists who don’t understand polling – I’m not. It comes from more than a decade of obsessing over the Personal Safety Survey, which is where we get all the information too often peddled as “facts” about men’s violence against women. It is not a “fact” that one in three women have experienced violence since the age of 15.
One in three women in a nationally statistically representative sample of people who live in private dwellings outside very remote areas, and who were willing to tell a stranger about the most traumatic events in their life, said they have experienced violence since the age of 15. Women in that sample were likely to underestimate the frequency and severity of that violence, while men were likely to overestimate it.
The data excludes people who are homeless or living in hotels, motels, hostels, hospitals, nursing homes, or short-stay caravan parks (about 15 percent). It also excludes the more than thirty percent of people in private dwellings who refused to answer the Personal Safety Survey questions. We cannot know for sure, but it seems logical that people who are in care or insecure housing, or who are unable to talk about the violence they survived might well be there precisely because they have a higher level of trauma from violence than the rest of the population.
All of which makes the one-in-three statistic, at best, an indication of the very lowest possible estimate of men's violence against women in Australia.
Again, this is not the fault of the people conducting the survey. They are simply operating within the limits of what is possible.
Keep in mind that the people who reported the 2019 polls as fact and then reported them as wrong are often the same ones who report the one-in-three figure as fact.
It is part of their job to explain this. Some of them do. Too many do not.