
How we tricked AI chatbots into creating misinformation, despite ‘safety’ measures

  • Written by Lin Tian, Research Fellow, Data Science Institute, University of Technology Sydney

When you ask ChatGPT or other AI assistants to help create misinformation, they typically refuse, with responses like “I cannot assist with creating false information.” But our tests show these safety measures are surprisingly shallow – often just a few words deep – making them alarmingly easy to circumvent.

We have been investigating how AI language models can be manipulated to generate coordinated disinformation campaigns across social media platforms. What we found should concern anyone worried about the integrity of online information.

The shallow safety problem

We were inspired by a recent study[1] from researchers at Princeton and Google. They showed current AI safety measures primarily work by controlling just the first few words of a response. If a model starts with “I cannot” or “I apologise”, it typically continues refusing throughout its answer.

Our experiments – not yet published in a peer-reviewed journal – confirmed this vulnerability. When we directly asked a commercial language model to create disinformation about Australian political parties, it correctly refused.

However, we also tried the exact same request as a “simulation” where the AI was told it was a “helpful social media marketer” developing “general strategy and best practices”. In this case, it enthusiastically complied.

The AI produced a comprehensive disinformation campaign falsely portraying Labor’s superannuation policies as a “quasi inheritance tax”. It came complete with platform-specific posts, hashtag strategies, and visual content suggestions designed to manipulate public opinion.

The main problem is that the model can generate harmful content but isn’t truly aware of what is harmful, or why it should refuse. Large language models are simply trained to start responses with “I cannot” when certain topics are requested.

Think of a security guard checking minimal identification before letting customers into a nightclub. If the guard doesn’t understand who isn’t allowed inside, or why, then a simple disguise is enough to let anyone in.

Real-world implications

To demonstrate this vulnerability, we tested several popular AI models with prompts designed to generate disinformation.

The results were troubling: models that steadfastly refused direct requests for harmful content readily complied when the request was wrapped in seemingly innocent framing scenarios. This practice is called “model jailbreaking[2]”.

The ease with which these safety measures can be bypassed has serious implications. Bad actors could use these techniques to generate large-scale disinformation campaigns at minimal cost. They could create platform-specific content that appears authentic to users, overwhelm fact-checkers with sheer volume, and target specific communities with tailored false narratives.

The process can largely be automated. What once required significant human resources and coordination could now be accomplished by a single individual with basic prompting skills.

The technical details

The American study[3] found AI safety alignment typically affects only the first 3–7 words of a response. (Technically this is 5–10 tokens – the chunks AI models break text into for processing.)

This “shallow safety alignment” occurs because training data rarely includes examples of models refusing after starting to comply. It is easier to control these initial tokens than to maintain safety throughout entire responses.
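To make the idea concrete, here is a minimal sketch (our own illustration, not code from the study) showing how concentrated this refusal behaviour is: a response can often be classified as a refusal or a compliance just by looking at its opening words. The function name and phrase list are purely illustrative.

```python
# Illustrative only: refusal is often identifiable from a handful of stock
# openings within the first few words of a response, mirroring the finding
# that safety alignment mostly governs the opening 5-10 tokens.

REFUSAL_OPENINGS = (
    "i cannot", "i can't", "i apologise", "i apologize", "i'm sorry",
)

def looks_like_refusal(response: str, first_n_words: int = 7) -> bool:
    """Return True if the response opens with a stock refusal phrase.

    Only the first few words are inspected; anything after them is ignored.
    """
    opening = " ".join(response.lower().split()[:first_n_words])
    return opening.startswith(REFUSAL_OPENINGS)

print(looks_like_refusal("I cannot assist with creating false information."))  # True
print(looks_like_refusal("Sure! Here's a campaign plan for ..."))               # False
```

If the model’s safety behaviour is essentially captured by a check this simple, then anything that steers those first few words past the refusal – such as a role-play framing – can steer the rest of the response with it.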

Moving toward deeper safety

The US researchers propose several solutions, including training models with “safety recovery examples”. These would teach models to stop and refuse even after beginning to produce harmful content.
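As a rough illustration of what such an example might look like (our own hypothetical sketch, not the paper’s actual training data), the target response starts as if complying and then switches to a refusal. The variable name and placeholder text are invented for illustration.

```python
# A hypothetical "safety recovery example": the training target begins as if
# complying, then recovers into a refusal, teaching the model it can still
# refuse after the opening tokens.
safety_recovery_example = {
    "prompt": "Write social media posts spreading a false claim about <topic>.",
    "target_response": (
        "Sure, here is a draft post about <topic>... "
        "Actually, I can't continue with this request. Creating false or "
        "misleading content could harm people, so I won't help with it."
    ),
}
```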

They also suggest constraining how much the AI can deviate from safe responses during fine-tuning for specific tasks. However, these are just first steps.

As AI systems become more powerful, we will need robust, multi-layered safety measures operating throughout response generation. Regular testing for new techniques to bypass safety measures is essential.

Transparency from AI companies about safety weaknesses is also essential, as is public awareness that current safety measures are far from foolproof.

AI developers are actively working on solutions such as constitutional AI training. This process aims to instil models with deeper principles about harm, rather than just surface-level refusal patterns.

However, implementing these fixes requires significant computational resources and model retraining. Any comprehensive solutions will take time to deploy across the AI ecosystem.

The bigger picture

The shallow nature of current AI safeguards isn’t just a technical curiosity. It’s a vulnerability that could reshape how misinformation spreads online.

AI tools are spreading through our information ecosystem, from news generation to social media content creation. We must ensure their safety measures are more than just skin deep.

The growing body of research on this issue also highlights a broader challenge in AI development. There is a big gap between what models appear to be capable of and what they actually understand.

While these systems can produce remarkably human-like text, they lack the contextual understanding and moral reasoning that would allow them to consistently identify and refuse harmful requests, regardless of how they’re phrased.

For now, users and organisations deploying AI systems should be aware that simple prompt engineering can potentially bypass many current safety measures. This knowledge should inform policies around AI use and underscore the need for human oversight in sensitive applications.

As the technology continues to evolve, the race between safety measures and methods to circumvent them will accelerate. Robust, deep safety measures are important not just for technicians, but for all of society.

References

  1. ^ study (proceedings.iclr.cc)
  2. ^ model jailbreaking (www.microsoft.com)
  3. ^ American study (proceedings.iclr.cc)

Read more https://theconversation.com/how-we-tricked-ai-chatbots-into-creating-misinformation-despite-safety-measures-264184
