
Can human moderators ever really rein in harmful online content? New research says yes

  • Written by Marian-Andrei Rizoiu, Senior Lecturer in Behavioral Data Science, University of Technology Sydney

Social media platforms have become the “digital town squares” of our time, enabling communication and the exchange of ideas on a global scale. However, the unregulated nature of these platforms has allowed the proliferation of harmful content such as misinformation, disinformation and hate speech.

Regulating the online world has proven difficult, but one promising avenue is suggested by the European Union’s Digital Services Act, which came into force in November 2022. Under this legislation, designated “trusted flaggers” identify certain kinds of problematic content to platforms, which must then remove it within 24 hours.

Will it work, given the fast pace and complex viral dynamics of social media environments? To find out, we modelled the effect of the new rule, in research[1] published in the Proceedings of the National Academy of Sciences.

Our results show this approach can indeed reduce the spread of harmful content. We also offer insights into how the rules can be implemented most effectively.

Understanding the spread of harmful content

We used a mathematical model of information spread to analyse how harmful content is disseminated through social networks.

In the model, each harmful post is treated as a “self-exciting point process[2]”. This means it draws more people into the discussion over time and generates further harmful posts, similar to a word-of-mouth process.

The intensity of a post’s self-propagation decreases over time. However, if left unchecked, its “offspring” can generate more offspring, leading to exponential growth.
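
To make the “offspring generate offspring” dynamic concrete, here is a minimal sketch of such a branching cascade in Python. It is an illustration only: the function and parameter names (simulate_cascade, branching_factor, decay_rate) and the values used are our own assumptions, not the model fitted in the paper.

```python
import random

def poisson(rng, lam):
    """Draw a Poisson variate via Knuth's inversion method (fine for small means)."""
    threshold, k, p = 2.718281828459045 ** -lam, 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_cascade(branching_factor=0.8, decay_rate=0.5, horizon=72.0, seed=1):
    """Simulate one cascade started by a single post at time 0.

    Each post spawns a Poisson(branching_factor) number of offspring posts,
    with reaction delays drawn from an exponentially decaying kernel, and
    those offspring spawn offspring of their own. Returns the posting times
    (in hours) of every post in the cascade up to `horizon`.
    """
    rng = random.Random(seed)
    events = [0.0]     # the original harmful post
    frontier = [0.0]   # posts whose offspring have not yet been generated
    while frontier:
        parent = frontier.pop()
        for _ in range(poisson(rng, branching_factor)):
            child = parent + rng.expovariate(decay_rate)  # reaction delay
            if child <= horizon:
                events.append(child)
                frontier.append(child)
    return sorted(events)

if __name__ == "__main__":
    cascade = simulate_cascade()
    print(f"{len(cascade) - 1} harmful offspring generated within 72 hours")
```

A branching factor below one keeps each cascade finite; above one, offspring outnumber their parents on average and the cascade grows exponentially, which is the unchecked scenario described above.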

Social media posts spread online through a process much like word of mouth. Robynne Hu / Unsplash[3]

The potential for harm reduction

In our study, we used two key measures to assess the effectiveness of the kind of moderation set out in the Digital Services Act: potential harm and content half-life.

A post’s potential harm represents the number of harmful offspring it generates. Content half-life denotes the amount of time required for half of all the post’s offspring to be generated.
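
As a rough sketch (the function names below are ours, not the paper’s notation), both measures can be read directly off the list of posting times in a cascade, whether simulated as above or observed on a platform:

```python
def potential_harm(event_times):
    """Number of harmful offspring spawned by the original post at time 0."""
    return sum(1 for t in event_times if t > 0)

def content_half_life(event_times):
    """Time (in hours) by which half of all the offspring have been posted."""
    offspring = sorted(t for t in event_times if t > 0)
    if not offspring:
        return 0.0
    return offspring[(len(offspring) - 1) // 2]

# Example with hand-picked posting times (hours after the original post).
cascade = [0.0, 0.4, 1.2, 2.5, 6.0, 11.3, 30.8]
print(potential_harm(cascade))     # 6 harmful offspring
print(content_half_life(cascade))  # 2.5 hours: 3 of the 6 offspring are out
```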

We found that moderation under the rules of the Digital Services Act can effectively reduce harm, even on platforms with short content half-lives, such as X (formerly known as Twitter). Faster moderation is always more effective, but moderating even after 24 hours could still reduce the number of harmful offspring by up to 50%.

The role of reaction time and harm reduction

The reaction time within which moderation remains effective increases with both the content half-life and the potential harm. To put it another way, for content that is longer-lived and generates large numbers of harmful offspring, even a later intervention can still prevent many subsequent harmful posts.

This suggests the approach of the Digital Services Act can effectively combat harmful content, even on fast-paced platforms like X.

We also found the amount of harm reduction increases for content with greater potential harm. While this may seem counterintuitive, it indicates moderation is most effective when it cuts off the generation of offspring of offspring – that is, when it breaks the word-of-mouth cycle.
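
A simplified way to see this effect is to simulate cascades with and without a 24-hour moderation deadline. The sketch below is our own toy model, not the paper’s analysis: it assumes moderating the original post prevents the direct offspring it would have spawned after the deadline, together with all of their descendants, while posts published before the deadline keep spreading. The parameter values stand in for long-lived, high-potential-harm content.

```python
import random

def poisson(rng, lam):
    """Draw a Poisson variate via Knuth's inversion method (fine for small means)."""
    threshold, k, p = 2.718281828459045 ** -lam, 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def offspring_count(branching_factor, decay_rate, deadline, rng):
    """Total offspring of one post moderated at `deadline` hours (None = never)."""
    count, frontier = 0, [(0.0, True)]     # (posting time, is the original post?)
    while frontier:
        parent, is_original = frontier.pop()
        for _ in range(poisson(rng, branching_factor)):
            child = parent + rng.expovariate(decay_rate)
            if is_original and deadline is not None and child > deadline:
                continue                   # prevented by moderating the original
            count += 1
            frontier.append((child, False))
    return count

rng = random.Random(7)
runs, n_star, decay = 5000, 0.9, 0.05      # slow decay = long content half-life
never = sum(offspring_count(n_star, decay, None, rng) for _ in range(runs)) / runs
day = sum(offspring_count(n_star, decay, 24.0, rng) for _ in range(runs)) / runs
print(f"average offspring, never moderated:   {never:.1f}")
print(f"average offspring, moderated at 24 h: {day:.1f}")
print(f"harm reduction: {1 - day / never:.0%}")
```

With these illustrative numbers, even a 24-hour deadline prevents roughly 30% of the would-be offspring, because each prevented post would have gone on to spawn further generations of its own.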

Making the most of moderation efforts

Prior research has shown tools based on artificial intelligence struggle to detect harmful online content. The authors of such content are aware of the detection tools and adapt their language to avoid detection.

Read more: Can ideology-detecting algorithms catch online extremism before it takes hold?[4]

The Digital Services Act moderation approach relies on manual tagging of posts by “trusted flaggers”, who will have limited time and resources.

To make the most of their limited time, flaggers should focus on content with high potential harm, which is exactly the content for which our research shows moderation is most effective. We estimate the potential harm of a post at its creation by extrapolating its expected number of offspring from previously observed discussions.
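
The estimator used in the paper is more involved, but a standard branching-process identity gives the flavour of such an extrapolation: if each post spawns n* direct offspring on average, and n* is below one, the expected total number of offspring of a new post is n* + n*² + … = n* / (1 − n*). The sketch below is a hypothetical illustration of that calculation, with n* crudely estimated from the sizes of previously observed discussions; it is not the authors’ method.

```python
def estimate_branching_factor(cascade_sizes):
    """Crude moment estimate of n*: total offspring divided by total posts
    across previously observed discussions (sizes include the original post)."""
    total_posts = sum(cascade_sizes)
    total_offspring = sum(size - 1 for size in cascade_sizes)
    return total_offspring / total_posts

def expected_potential_harm(branching_factor):
    """Expected total offspring of a new post: n* + n*^2 + ... = n* / (1 - n*)."""
    if branching_factor >= 1.0:
        return float("inf")   # supercritical: explosive growth
    return branching_factor / (1.0 - branching_factor)

# Example: three previously observed discussions with 5, 12 and 3 posts each.
n_star = estimate_branching_factor([5, 12, 3])   # 17 offspring / 20 posts = 0.85
print(f"estimated branching factor: {n_star:.2f}")
print(f"expected offspring of a new post: {expected_potential_harm(n_star):.1f}")
```

Posts whose estimated potential harm is high would then be the ones surfaced first to trusted flaggers.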

Implementing the Digital Services Act

Social media platforms already employ content moderation teams, and our research suggests the major platforms, at least, already have enough staff to enforce the Digital Services Act. There are, however, questions about the cultural awareness of existing staff, as some of these teams are based in different countries from the majority of the posters whose content they moderate.

The success of the legislation will lie in appointing trusted flaggers with sufficient cultural and language knowledge, developing practical reporting tools for harmful content, and ensuring timely moderation.

Our study’s framework will provide policymakers with valuable guidance in drafting mechanisms for content moderation that prioritise efforts and reaction times effectively.

A healthier and safer digital public square

As social media platforms continue to shape public discourse, addressing the challenges posed by harmful content is crucial. Our research on the effectiveness of moderating harmful online content offers valuable insights for policymakers.

By understanding the dynamics of content spread, optimising moderation efforts, and implementing regulations like the Digital Services Act, we can strive for a healthier and safer digital public square where harmful content is mitigated, and constructive dialogue thrives.

Read more: The 'digital town square'? What does it mean when billionaires own the online spaces where we gather?[5]

Read more https://theconversation.com/can-human-moderators-ever-really-rein-in-harmful-online-content-new-research-says-yes-209882
