'Transparency reports' from tech giants are vague on how they're combating misinformation. It's time for legislation

  • Written by Uri Gal, Professor in Business Information Systems, University of Sydney

On May 30, Meta, Google and Twitter released their 2021 annual transparency reports[1], documenting their efforts to curb misinformation in Australia.

Despite their name, however, the reports offer a narrow view of the companies’ strategies to combat misinformation. They remain vague on the reasoning behind the strategies and how they are implemented. They therefore highlight the need for effective legislation to regulate Australia’s digital information ecosystem.

The transparency reports are published as part of the Digital Industry (DIGI) Group’s[2] voluntary code of practice that Meta, Google and Twitter signed onto in 2021 (along with Adobe, Apple, Microsoft, Redbubble and TikTok).

The DIGI group and its code of practice were created after the Australian government’s request in 2019 that major digital platforms do more to address disinformation and content quality concerns.

What do the transparency reports say?

In Meta’s latest report[3], the company claims to have removed 180,000 pieces of content from Australian Facebook and Instagram pages or accounts for spreading health misinformation during 2021.

It also outlines several new products, such as Facebook’s Climate Science Information Centre[4], aimed at providing “Australians with authoritative information on climate change”. Meta describes initiatives including the funding of a national media literacy survey, and a commitment to fund training for Australian journalists on identifying misinformation.

Similarly, Twitter’s report[5] details various policies it implements to identify false information and moderate its spread. These include:

  • alerting users when they engage with misleading tweets
  • directing users to authoritative information when they search for certain key words or hashtags, and
  • punitive measures such as tweet deletion, account locks and permanent suspension for violating company policies.

In the first half of 2021, Twitter suspended 7,851 Australian accounts and removed 51,394 posts from Australian accounts.

Google’s highlights[6] that in 2021 it removed more than 90,000 YouTube videos from Australian IP addresses, including more than 5,000 videos with COVID-19 misinformation.

Google’s report further notes that more than 657,000 creatives were blocked from Australia-based advertisers, for violating the company’s “misrepresentation ads policies (misleading, clickbait, unacceptable business practices, etc)”.

Google’s Senior Manager for Government Affairs and Public Policy, Samantha Yorke, told The Conversation:

We recognise that misinformation, and the associated risks, will continue to evolve and we will reevaluate and adapt our measures and policies to protect people and the integrity of our services.

The underlying problem

In reading these reports, we should keep in mind that Meta, Twitter, and Google are essentially advertising businesses. Advertising accounts for about 97% of Meta’s revenue[7], 92% of Twitter’s revenue[8] and 80% of Google’s[9].

They design their products to maximise user engagement, and extract detailed user data which is then used for targeted advertising.

Although they dominate and shape much of Australia’s public discourse, their core concern is not to enhance its quality and integrity. Rather, they hone their algorithms to amplify content that most effectively grabs users’ attention[10].

Read more: Wrong, Elon Musk: the big problem with free speech on platforms isn't censorship. It's the algorithms[11]

With this in mind, let’s examine their transparency reports more closely.

Who decides what ‘misinformation’ is?

Despite their apparent specificity, the reports leave out some important information. First, while each company emphasises efforts to identify and remove misleading content, they don’t reveal the exact criteria through which they do this – or how these criteria are applied in practice.

There are currently no accepted, enforceable standards for identifying misinformation (DIGI’s code of practice is voluntary). This means each company can develop and apply its own interpretation of the term “misinformation”.

Given they don’t disclose these criteria in their transparency reports, it’s impossible to gauge the actual scope of the mis/disinformation problem within each platform. It’s also hard to compare the severity across the platforms.

A Twitter spokesperson told The Conversation its policies regarding misinformation focused on four areas: synthetic and manipulated media[12], civic integrity[13], COVID[14] misinformation, and crisis misinformation[15]. But it’s not clear how the policies are applied in practice.

Meta[16] and YouTube[17] (which is owned by Google’s parent company Alphabet) are also vague in describing how they apply their misinformation policies.


There is little context

The reports also don’t provide enough quantitative context for their content-removal figures. While the companies give specific numbers of posts removed or accounts acted against, it’s not clear what proportion of each platform’s overall activity these actions represent.

For example, it’s difficult to interpret the claim that 51,394 Australian posts were removed from Twitter in 2021 without knowing how many were hosted that year. We also don’t know what proportion of content was flagged in other countries, or how these numbers track over time.

And while the reports detail various features introduced to combat misleading information (such as directing users to authoritative sources), they don’t provide evidence as to their effectiveness in reducing harm.

What’s next?

Meta, Google and Twitter are some of the most powerful actors in the Australian information landscape. Their policies can affect the well-being of individuals and the country as a whole.

Read more: Stuff-up or conspiracy? Whistleblowers claim Facebook deliberately let important non-news pages go down in news blackout[18]

Concerns over the harm caused by misinformation on these platforms have been raised in relation to the COVID-19 pandemic[19], federal elections[20] and climate change[21], among other issues.

It’s crucial they operate on the basis of transparent and enforceable policies whose effectiveness can be easily assessed and independently verified.

In March, former prime minister Scott Morrison’s government announced[22] that, if re-elected, it would introduce new laws to provide the Australian Communications and Media Authority “new regulatory powers to hold big tech companies to account for harmful content on their platforms”. It’s now up to Anthony Albanese’s government to carry this promise forward.

Local policymakers could take a lead from their counterparts in the European Union, who recently agreed on the parameters for the Digital Services Act[23]. This act will force large technology companies to take greater responsibility for content that appears on their platforms.

References

  1. ^ annual transparency reports (digi.org.au)
  2. ^ Digital Industry (DIGI) Group’s (digi.org.au)
  3. ^ report (digi.org.au)
  4. ^ Centre (about.fb.com)
  5. ^ Twitter’s report (digi.org.au)
  6. ^ highlights (digi.org.au)
  7. ^ revenue (investor.fb.com)
  8. ^ revenue (www.businessofapps.com)
  9. ^ 80% of Google’s (www.statista.com)
  10. ^ grabs users’ attention (www.cambridge.org)
  11. ^ Wrong, Elon Musk: the big problem with free speech on platforms isn't censorship. It's the algorithms (theconversation.com)
  12. ^ media (help.twitter.com)
  13. ^ civic integrity (help.twitter.com)
  14. ^ COVID (help.twitter.com)
  15. ^ crisis misinformation (help.twitter.com)
  16. ^ Meta (transparency.fb.com)
  17. ^ YouTube (www.youtube.com)
  18. ^ Stuff-up or conspiracy? Whistleblowers claim Facebook deliberately let important non-news pages go down in news blackout (theconversation.com)
  19. ^ COVID-19 pandemic (misinforeview.hks.harvard.edu)
  20. ^ federal elections (www.abc.net.au)
  21. ^ climate change (wires.onlinelibrary.wiley.com)
  22. ^ announced (www.paulfletcher.com.au)
  23. ^ Digital Services Act (ec.europa.eu)

Read more https://theconversation.com/transparency-reports-from-tech-giants-are-vague-on-how-theyre-combating-misinformation-its-time-for-legislation-184476
