With the election campaign underway, can the law protect voters from fake news and conspiracy theories?
- Written by The Conversation
Last weekend’s “anti-lockdown” protest[1] in Auckland provided a snapshot of the various conspiracy theories and grievances circulating online and within the community: masks, vaccination, QAnon[2], 5G technology, government tyranny and COVID-19 were all in the mix.
The “freedom rally” also featured Advance NZ party leaders Jami-Lee Ross and Billy Te Kahika, who has previously described[3] COVID-19 as no more serious than influenza.
The same scepticism about the pandemic was reportedly[4] behind the Mt Roskill Evangelical Church cluster and its spread, which prompted Health Minister Chris Hipkins to ask that people “think twice before sharing information that can’t be verified”.
Hipkins also refused to rule out[5] punitive measures for anyone found to be deliberately spreading lies.
It’s not a new problem. As far back as 1688[6], the English Privy Council issued a proclamation prohibiting the spread of false information. The difference in the 21st century, of course, is the reach and speed of fake news and disinformation.
The World Health Organisation (WHO) has even spoken of a massive “infodemic[7]” hindering the public health response to COVID-19: “an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it.”
The limits of freedom of speech
This is particularly dangerous when people are already anxious and politically polarised. Disinformation spreads fastest where freedom is greatest, including in New Zealand where everyone has the right[8] under the Bill of Rights Act “to freedom of expression, including the freedom to seek, receive, and impart information and opinions of any kind in any form”.
This leads to an anomaly. On the one hand, people using misleading or deceptive[9] information to market products (including medicines[10]) can be held to account, and advertising[11] must be responsible. On the other hand, spreading misleading or deceptive ideas is not, as a rule, illegal.
Read more: The Facebook prime minister: how Jacinda Ardern became New Zealand's most successful political influencer[12]
However, there are restrictions[13] on free speech when it comes to offensive[14] behaviour and language, racial[15] discrimination and sexual harassment. We also censor[16] objectionable material and police harmful digital communications[17] that target individuals.
So, should we add COVID-19 conspiracies and disinformation to that list? The answer is probably not. And if we do, we should be very specific.
A focused approach is crucial
Deciding who gets caught in the net, and what information counts as harmful to the public, is a very slippery slope. Furthermore, the internet has many corners to hide in and may be all but impossible to police.
Given that those spreading conspiracy theories and disinformation already tend to believe in government overreach, we risk pouring petrol on the fire by attempting to ban their activities.
The exception, where further restraint is justified, involves attempts to use misinformation or undue influence (especially by a foreign power) to manipulate elections. This is where a more focused approach to who and what is targeted makes sense.
Countries such as Canada, the UK, France[18] and Australia[19] are all grappling with how best to protect their democracies from manipulation of information, but these initiatives are still in their infancy.
Read more: NZ's cyber security centre warns more attacks likely following stock market outages[20]
In New Zealand we have a law prohibiting the publishing of false statements to influence voters[21], and the Justice Committee put out an excellent report[22] on the 2017 general election that covered some of these points and urged vigilance.
Can we police the tech giants?
While tools such as Netsafe’s fake news awareness campaign[23] and official COVID-19 information sources[24] are excellent, they are not enough on their own.
The best line of defence against malicious information is still education. Scientific literacy and critical thinking are crucial. Good community leadership, responsible journalism and academic freedom can all contribute.
But if that isn’t enough, what can we do about the platforms where disinformation thrives?
Conventional broadcasters must make reasonable efforts[25] to present balanced information and viewpoints.
But that kind of balance is much harder to enforce in the decentralised, instantaneous world of social media. The worst example of this, the live-streamed terror attack in Christchurch, led to the Christchurch Call[26]. It’s a noble initiative, but controlling this modern hydra will be a long battle.
Read more: Survey shows 1 in 4 New Zealanders remain hesitant about a coronavirus vaccine[27]
Attempts to control misinformation on Facebook, Twitter and Google through self-regulation[28] and warning labels[29] are welcome. But the work is slow and ad hoc. The European Commission is now proposing[30] new rules to formalise the social media platforms’ responsibility and liability for their content.
Like tobacco, that content might not be prohibited, but citizens should be warned about what they’re consuming – even if it comes from[31] the president of the United States.
The final line of defence would be to make individuals who spread fake news liable to prosecution. Many countries have already begun to make such laws[32], with China[33] and Russia[34] at the forefront.
The risk, of course, is that social media regulation can disguise political censorship designed to target dissent. For that reason we need to treat this option with extreme caution.
But if the tolerance of our liberal democracy is too sorely tested in the forthcoming election, and if all other defences prove inadequate, new laws that strengthen the protection of the electoral process may well be justified.
References
- ^ protest (www.stuff.co.nz)
- ^ QAnon (www.nytimes.com)
- ^ previously described (www.rnz.co.nz)
- ^ reportedly (www.rnz.co.nz)
- ^ refused to rule out (www.rnz.co.nz)
- ^ 1688 (quod.lib.umich.edu)
- ^ infodemic (www.who.int)
- ^ has the right (www.legislation.govt.nz)
- ^ misleading or deceptive (www.legislation.govt.nz)
- ^ medicines (www.legislation.govt.nz)
- ^ advertising (www.asa.co.nz)
- ^ The Facebook prime minister: how Jacinda Ardern became New Zealand's most successful political influencer (theconversation.com)
- ^ restrictions (www.legislation.govt.nz)
- ^ offensive (www.legislation.govt.nz)
- ^ racial (www.legislation.govt.nz)
- ^ censor (www.legislation.govt.nz)
- ^ harmful digital communications (www.legislation.govt.nz)
- ^ France (www.gouvernement.fr)
- ^ Australia (www.aph.gov.au)
- ^ NZ's cyber security centre warns more attacks likely following stock market outages (theconversation.com)
- ^ false statements to influence voters (www.legislation.govt.nz)
- ^ excellent report (www.parliament.nz)
- ^ campaign (www.netsafe.org.nz)
- ^ sources (covid19.govt.nz)
- ^ reasonable efforts (www.legislation.govt.nz)
- ^ Christchurch Call (www.christchurchcall.com)
- ^ Survey shows 1 in 4 New Zealanders remain hesitant about a coronavirus vaccine (theconversation.com)
- ^ self-regulation (ec.europa.eu)
- ^ warning labels (blog.twitter.com)
- ^ proposing (uk.reuters.com)
- ^ comes from (edition.cnn.com)
- ^ laws (www.poynter.org)
- ^ China (www.loc.gov)
- ^ Russia (www.bbc.com)