With a referendum well and truly on the way, something is in the air online. Whether your daily digital media diet includes Facebook, Twitter, YouTube, or TikTok, there's a flurry of activity and opinions on the question of what comes next on Australia's constitutional recognition journey.
Some of the online chatter has taken a turn towards what tech platforms and electoral commissions define as "electoral process" misinformation and disinformation. These narratives undermine the independence and integrity of the voting process.
Last month, the Australian Electoral Commissioner remarked that the Commission was seeing the highest levels of electoral process misinformation and disinformation it had ever encountered online.
You would be forgiven for feeling like we've stepped into some new dystopian Hollywood multiverse franchise, this time based on Citizen Kane. Only in our multiverse, we've got the power of user targeting and of algorithmic distribution techniques to pull us all away from a shared set of facts, and into bespoke filter bubbles built from ambiguous data points.
Polarisation, conspiracies, and audience fragmentation have become regular topics of concerned conversation.
Since the internet started polarising our political debates, a lot and yet very little has changed. Misinformation and disinformation studies have taken off as a field, with a growing community of professionals and peer-reviewed methodologies.
Europe, after a failed experiment with platform self-regulation on disinformation, passed the Digital Services Act. This groundbreaking legislation mandates platforms to open up their algorithmic hoods for expert researchers to test public-interest questions, such as the role of algorithms in distributing content. The Act also heavily incentivises platforms to mitigate disinformation proactively through measures such as de-amplification.
In Australia, we're where Europe was in 2018 - we rely on the goodwill of large, offshore technology companies to self-report on their efforts against misinformation and disinformation, with targets they set themselves and proportionality measures they assess themselves.
Worse than in 2018, we also face a deteriorating accountability landscape: Twitter has removed researcher access to its API, a key data source for studying the platform, and is now suing disinformation researchers, while Meta has backed away from CrowdTangle, a popular tool for social media researchers and journalists.
So if we're reliant on self-regulation and platforms' goodwill, is that working for us?
A small and rapid study suggests that platforms' self-regulation is failing to protect Australia's electoral debate, and that platforms are not responding to mis- and disinformation around the Voice referendum as their own terms of service say they will. This modest study suggests we may have some real issues when it comes to platforms walking the walk, and it raises questions about our light-touch regulatory approach.
There are no winners in a race to the bottom on electoral integrity.
Platforms have special policies on these issues because maintaining electoral integrity is, quite simply, part of what keeps democracies alive.
As the vote draws nearer, we would hope to see decisive action from platforms against damaging false narratives designed to confuse or suppress the voting process, and for platforms to start living up to their own policies.
- Alice Dawkins is executive director at Reset.Tech Australia. Dr Rys Farthing is policy director at Reset.Tech Australia.