The Times Australia


AI is moving fast. Climate policy provides valuable lessons for how to keep it in check

  • Written by Milica Stilinovic, PhD Candidate, School of Media and Communications; Managing Editor, Policy & Internet journal, University of Sydney

Artificial intelligence (AI) might not have been created to enable new forms of sexual violence such as deepfake pornography. But that has been an unfortunate byproduct[1] of the rapidly advancing technology.

This is just one example of AI’s many unintended uses.

AI’s intended uses are not without their own problems, including serious copyright concerns[2]. But beyond this, there is much experimentation happening with the rapidly advancing technology. Models and code are shared, repurposed and remixed in public online spaces.

These collaborative, loosely networked communities — what we call “underspheres” in our recently published paper[3] in New Media & Society — are where users experiment with AI rather than simply consume it. These spaces are where generative AI is pushed into unpredictable and experimental directions. And they show why a new approach to regulating AI and mitigating its risks is urgently needed. Climate policy offers some useful lessons.

A limited approach

As AI advances, so do concerns about risk. Policymakers have responded quickly. For example, the European Union AI Act[4], which came into force in 2024, classifies systems by risk: banning “unacceptable” ones, regulating “high-risk” uses, and requiring transparency for lower-risk tools.

Other governments — including those of the United Kingdom[5], United States[6] and China[7] — are taking similar directions. However, their regulatory approaches differ in scope, stage of development, and enforcement.

But these efforts share a limitation: they’re built around intended use, not the messy, creative and often unintended ways AI is actually being used — especially in fringe spaces.

So, what risks can emerge from creative deviance in AI? And can risk-based frameworks handle technologies that are fluid, remixable and fast-moving?

A computer screen displaying a chat forum.
Subcommunities within the larger Reddit platform often experiment with unintended uses of AI. Tada Images/Shutterstock[8]

Experimentation outside of regulation

There are several online spaces where members of the undersphere gather. They include GitHub (a web-based platform for collaborative software development), Hugging Face (a platform that offers ready-to-use machine learning models, datasets, and tools for developers to easily build and launch AI apps) and subreddits (individual communities or forums within the larger Reddit platform).

These environments encourage creative experimentation with generative AI outside regulated frameworks. This experimentation can include instructing models to avoid intended behaviours – or do the opposite. It can also include creating mashups or more powerful variations of generative AI by remixing software code that is made publicly available for anyone to view, use, modify and distribute.

The potential harms of this experimentation are highlighted by the proliferation of deepfake pornography. So too are the limits of the current approach to regulating rapidly advancing technologies such as AI.

Deepfake technology wasn’t originally developed to create non-consensual pornographic videos and images. But this is ultimately what happened within subreddit communities, beginning in 2017. Deepfake pornography then quickly spread from this undersphere into the mainstream; a recent analysis[9] of more than 95,000 deepfake videos online found 98% of them were deepfake pornography.

It was not until 2019 – years after deepfake pornography first appeared – that attempts to regulate it began globally. But these attempts were too rigid to capture the new ways deepfake technology was by then being used to cause harm. What’s more, the regulatory efforts were sporadic and inconsistent between states. This impeded efforts to protect people – and democracies – from the impacts of deepfakes globally.

This is why we need regulation that can march in step with emerging technologies and act quickly when unintended use prevails.

Embracing uncertainty, complexity and change

A way to look at AI governance is through the prism of climate change. Climate change is also the result of many interconnected systems interacting in ways we can’t fully control — and its impacts can only be understood with a degree of uncertainty[10].

Over the past three decades, climate governance frameworks have evolved to confront this challenge: to manage complex, emerging, and often unpredictable risks. And although these frameworks have yet to meaningfully reduce greenhouse gas emissions[11], they have succeeded in sustaining global attention over the years on emerging climate risks and their complex impacts.

At the same time it has provided a forum where responsibilities and potential solutions can be publicly debated.

A similar governance framework should also be adopted to manage the spread of AI. This framework should consider the interconnected risks caused by generative AI tools linking with social media platforms. It should also consider cascading risks, as content and code are reused and adapted. And it should consider systemic risks, such as declining public trust or polarised debate.

Importantly, this framework must also involve diverse voices. Like climate change, generative AI won’t affect just one part of society — it will ripple through many. And the challenge is how to adapt with it.

Applied to AI, climate change governance approaches could help promote early action when unforeseen uses emerge (as in the case of deepfake pornography), before the harm becomes widespread.

People take part in a climate protest on a city street, holding signs featuring a burning planet Earth.
Over the past three decades, climate governance frameworks have evolved to manage complex, emerging, and often unpredictable risks. Alexandros Michailidis/Shutterstock[12]

Avoiding the pitfalls of climate governance

While climate governance offers a useful model for adaptive, flexible regulation, it also carries pitfalls that must be avoided.

Climate politics has been mired in loopholes, competing interests and sluggish policymaking. From Australia’s shortcomings in implementing its renewable strategy[13], to policy reversals in Scotland[14] and political gridlock in the United States[15], implementation has often been the proverbial wrench in the gears of environmental law.

This all-too-familiar climate stalemate holds important lessons for AI governance.

First, we need to find ways to align public oversight with self-regulation and transparency on the part of AI developers and suppliers.

Second, we need to think about generative AI risks at a global scale. International cooperation and coordination are essential.

Finally, we need to accept that AI development and experimentation will persist, and craft regulations that respond to this in order to keep our societies safe.

References

  1. ^ an unfortunate byproduct (theconversation.com)
  2. ^ copyright concerns (theconversation.com)
  3. ^ recently published paper (journals.sagepub.com)
  4. ^ European Union AI Act (artificialintelligenceact.eu)
  5. ^ the United Kingdom (www.gov.uk)
  6. ^ United States (www.ncsl.org)
  7. ^ China (iclg.com)
  8. ^ Tada Images/Shutterstock (www.shutterstock.com)
  9. ^ recent analysis (www.securityhero.io)
  10. ^ understood with a degree of uncertainty (theconversation.com)
  11. ^ meaningfully reduce greenhouse gas emissions (ourworldindata.org)
  12. ^ Alexandros Michailidis/Shutterstock (www.shutterstock.com)
  13. ^ Australia’s shortcomings in implementing its renewable strategy (www.csiro.au)
  14. ^ policy reversals in Scotland (theconversation.com)
  15. ^ political gridlock in the United States (theconversation.com)

Read more https://theconversation.com/ai-is-moving-fast-climate-policy-provides-valuable-lessons-for-how-to-keep-it-in-check-255624
