Investigating social media harm is a good idea, but parliament is about to see how complicated it is to fix
- Written by Rob Nicholls, Senior research associate, University of Sydney
Barely a day has gone by this month without politicians or commentators talking about online harms.
There have been multiple high-profile examples spurring on the conversation. There was the circulation of videos[1] of Bishop Mar Mari Emmanuel being stabbed in the Sydney church attack. The normalisation of violent content online has also been central to the discussion of the domestic violence crisis.
Then, of course, there are the expressions of disdain[2] for the Australian legal system by X (formerly Twitter) owner Elon Musk.
Inevitably, there are calls to “do something” and broad public appetite for changes in regulations[3]. A new parliamentary committee will explore what that change should look like, but will have to contend with a range of legal, practical and ethical obstacles along the way.
Read more: Elon Musk is mad he's been ordered to remove Sydney church stabbing videos from X. He'd be more furious if he saw our other laws[4]
Ten busy days
On May 1 and May 10, the government made two major announcements.
The first was a Commonwealth response to some of the online harms identified by National Cabinet[5]. At the May 1 meeting, the Commonwealth promised to deliver new measures to address violent online pornography and misogynistic content targeting children and young people. This included a promise of new legislation to ban deepfake pornography and funding for a pilot project on age-assurance technologies.
The second was an announcement[7] establishing a Joint Parliamentary Select Committee to look into the influence and impacts of social media on Australian society. The government wants the committee to examine and report on four major issues:
the decision of Meta to abandon deals under the News Media and Digital Platforms Bargaining Code[8]
the important role of Australian journalism, news and public-interest media in countering misinformation and disinformation on digital platforms
the algorithms, systems and corporate decision-making of digital platforms in influencing what Australians see, and the impacts of this on mental health
other issues in relation to harmful or illegal content disseminated over social media, including scams, age-restricted content, child sexual abuse and violent extremist material.
However, the final terms of reference will be drafted after consultation with both the Senate crossbench and the opposition, so they may change a bit.
Why would they do this?
Asking the committee to review the Meta decision is an odd move.
In practice, Financial Services Minister Stephen Jones can “designate” Meta without a referral to the parliament. That is, the minister can decide that all of the obligations of the News Media Bargaining Code apply to Meta.
However, a sounding by the committee may help to ensure Meta keeps concentrating on the issue. It also provides the opportunity to restate the underlying principles behind the code and to highlight the parlous state of much of the Australian news media.
In relation to harmful or illegal content disseminated over social media, there is already a review[9] of the Online Safety Act underway. The terms of reference seem to ask the committee to provide input into the review.
Read more: This week's changes are a win for Facebook, Google and the government — but what was lost along the way?[10]
The issue of misinformation and disinformation has also been the subject of review. The government released a draft of a proposed bill[11] to combat misinformation and disinformation in June 2023. It would give the Australian Communications and Media Authority (ACMA) power to enforce[12] an industry code, or to make one if the industry cannot.
That draft was criticised by the opposition at the time. However, there have been shifts since then and the committee might be a vehicle for the introduction of an amended version of the bill.
An age-old issue
Online age verification is a simple idea that is hard to implement unless service providers face significant consequences for non-compliance.
Work in this area by the UK’s communications regulator, Ofcom, and the UK Information Commissioner’s Office is often cited as leading practice. However, the commissioner’s website notes[13] that “age assurance is a complex area with technology developing rapidly”.
Shutterstock[14]
One approach is for the minor to identify themselves to a platform by uploading a video or sending a photograph of their ID. This is entirely contrary to the eSafety Commissioner’s messaging[15] on online safety. The Commissioner advises parents to make sure children do not share images or videos of themselves and to never share their ID.
In practice, the most effective age identification for minors requires parents to intervene. This can be done by using software to limit access or by supervising screentime. If children and teenagers can get around the rules simply by borrowing a device from a school friend, age verification might not do much.
As the International Association of Privacy Professionals has found[16], age verification and data protection are far harder than they look. It is particularly difficult if the age barrier is not one already in place – such as the adult rights that those over the age of 18 possess – but rather a seemingly arbitrary point in the mid-teens. Other than online, the most important age to verify is 18 for things such as alcohol sales and credit. It is also the age at which contracts can be enforced.
Countries vs companies
One issue that is often raised about social media platforms is how Australia can deal with a global business.
Here, the European approach in the Digital Markets Act[17] provides some ideas. The act defines companies with a strong market position as “gatekeepers” and sets out rules they must follow. Under the act, important data must be shared as directed by the user to make the internet fairer and to ensure different sites and software can communicate with each other. It also calls for algorithms to be made more transparent, though these rules are a bit more limited.
Virginia Mayo/AP[18]
In doing so, it limits the power of gatekeeper companies, including Alphabet (Google), Amazon, Apple, ByteDance (TikTok), Meta and Microsoft.
Obviously, Australia can’t harness the collective power of a group of nations in the same way the European Union does, but that doesn’t preclude some of the measures from being useful here.
There is considerable public support for governments to “do something[19]” about online content and social media access, but there are both legal and practical obstacles to imposing new laws.
There is also the difficulty of getting political consensus on such measures, as seen with the debate surrounding the misinformation bill.
But it’s clear that in Australia, both citizens and governments have been losing patience with letting tech companies regulate themselves and with shifting responsibility to parents.
References
- ^ circulation of videos (www.abc.net.au)
- ^ expressions of disdain (theconversation.com)
- ^ changes in regulations (ses.library.usyd.edu.au)
- ^ Elon Musk is mad he's been ordered to remove Sydney church stabbing videos from X. He'd be more furious if he saw our other laws (theconversation.com)
- ^ National Cabinet (www.pm.gov.au)
- ^ Bianca De Marchi/AAP (photos.aap.com.au)
- ^ announcement (minister.infrastructure.gov.au)
- ^ News Media and Digital Platforms Bargaining Code (www.accc.gov.au)
- ^ review (www.infrastructure.gov.au)
- ^ This week's changes are a win for Facebook, Google and the government — but what was lost along the way? (theconversation.com)
- ^ bill (www.infrastructure.gov.au)
- ^ power to enforce (www.infrastructure.gov.au)
- ^ website notes (ico.org.uk)
- ^ Shutterstock (www.shutterstock.com)
- ^ eSafety Commissioner’s messaging (www.esafety.gov.au)
- ^ has found (iapp.org)
- ^ Digital Markets Act (commission.europa.eu)
- ^ Virginia Mayo/AP (photos.aap.com.au)
- ^ do something (www.theguardian.com)