AI systems have learned how to deceive humans. What does that mean for our future?

  • Written by Simon Goldstein, Associate Professor, Dianoia Institute of Philosophy, Australian Catholic University

Artificial intelligence pioneer Geoffrey Hinton made headlines earlier this year when he raised concerns about the capabilities of AI systems. Speaking to CNN journalist Jake Tapper, Hinton said[1]:

If it gets to be much smarter than us, it will be very good at manipulation because it would have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing.

Anyone who has kept tabs on the latest AI offerings will know these systems are prone to “hallucinating[2]” (making things up) – a flaw that’s inherent in them due to how they work.

Yet Hinton highlights the potential for manipulation as a particularly major concern. This raises the question: can AI systems deceive humans?

We argue[3] a range of AI systems have already learned to do this – and the risks range from fraud and election tampering to losing control of AI altogether.

AI learns to lie

Perhaps the most disturbing example of a deceptive AI is found in Meta’s CICERO[4], an AI model designed to play the alliance-building world conquest game Diplomacy.

Meta claims it built CICERO to be “largely honest and helpful[6]”, and that it would “never intentionally backstab[7]” or attack its allies.

To investigate these rosy claims, we looked carefully at Meta’s own game data from the CICERO experiment. On close inspection, Meta’s AI turned out to be a master of deception.

In one example, CICERO engaged in premeditated deception. Playing as France, the AI reached out to Germany (a human player) with a plan to trick England (another human player) into leaving itself open to invasion.

After conspiring with Germany to invade the North Sea, CICERO told England it would defend England if anyone invaded the North Sea. Once England was convinced that France/CICERO was protecting the North Sea, CICERO reported to Germany it was ready to attack.

Playing as France, CICERO plans with Germany to deceive England. Park, Goldstein et al., 2023[8]

This is just one of several examples of CICERO engaging in deceptive behaviour. The AI regularly betrayed other players, and in one case even pretended to be a human with a girlfriend[9].

Besides CICERO, other systems have learned how to bluff in poker[10], how to feint in StarCraft II[11] and how to mislead in simulated economic negotiations[12].

Even large language models (LLMs) have displayed significant deceptive capabilities. In one instance, GPT-4 – the most advanced LLM available to paying ChatGPT users – pretended to be a visually impaired human[13] and convinced a TaskRabbit worker to complete an “I’m not a robot” CAPTCHA for it.

Other LLMs have learned to lie[14] to win social deduction games, in which players compete to “kill” one another and must convince the group they’re innocent.

What are the risks?

AI systems with deceptive capabilities could be misused in numerous ways, including to commit fraud, tamper with elections and generate propaganda. The potential risks are only limited by the imagination and the technical know-how of malicious individuals.

Beyond that, advanced AI systems can autonomously use deception to escape human control, such as by cheating safety tests imposed on them by developers and regulators.

In one experiment[16], researchers created an artificial life simulator in which an external safety test was designed to eliminate fast-replicating AI agents. Instead, the AI agents learned how to play dead, to disguise their fast replication rates precisely when being evaluated.

Learning deceptive behaviour may not even require explicit intent to deceive. The AI agents in the example above played dead as a result of a goal to survive, rather than a goal to deceive.

In another example, someone tasked AutoGPT (an autonomous AI system based on ChatGPT) with researching tax advisers who were marketing a certain kind of improper tax avoidance scheme. AutoGPT carried out the task, but then decided on its own to try to alert the United Kingdom’s tax authority.

In the future, advanced autonomous AI systems may be prone to manifesting goals unintended by their human programmers.

Throughout history, wealthy actors have used deception to increase their power, such as by lobbying politicians, funding misleading research and finding loopholes in the legal system. Similarly, advanced autonomous AI systems could invest their resources into such time-tested methods to maintain and expand control.

Even humans who are nominally in control of these systems may find themselves systematically deceived and outmanoeuvred.

Close oversight is needed

There’s a clear need to regulate AI systems capable of deception, and the European Union’s AI Act[17] is arguably one of the most useful regulatory frameworks we currently have. It assigns each AI system one of four risk levels: minimal, limited, high and unacceptable.

Systems with unacceptable risk are banned, while high-risk systems are subject to special requirements for risk assessment and mitigation. We argue AI deception poses immense risks to society, and systems capable of this should be treated as “high-risk” or “unacceptable-risk” by default.

Some may say game-playing AIs such as CICERO are benign, but such thinking is short-sighted; capabilities developed for game-playing models can still contribute to the proliferation of deceptive AI products.

Diplomacy – a game pitting players against one another in a quest for world domination – likely wasn’t the best choice for Meta to test whether AI can learn to collaborate with humans. As AI’s capabilities develop, it will become even more important for this kind of research to be subject to close oversight.

References

  1. ^ Hinton said (www.youtube.com)
  2. ^ hallucinating (theconversation.com)
  3. ^ We argue (arxiv.org)
  4. ^ CICERO (www.science.org)
  5. ^ An AI named Cicero can beat humans in Diplomacy, a complex alliance-building game. Here's why that's a big deal (theconversation.com)
  6. ^ largely honest and helpful (www.science.org)
  7. ^ never intentionally backstab (web.archive.org)
  8. ^ Park, Goldstein et al., 2023 (arxiv.org)
  9. ^ with a girlfriend (web.archive.org)
  10. ^ poker (www.cmu.edu)
  11. ^ StarCraft II (www.vox.com)
  12. ^ economic negotiations (arxiv.org)
  13. ^ be a visually impaired human (gizmodo.com)
  14. ^ learned to lie (arxiv.org)
  15. ^ AI to Z: all the terms you need to know to keep up in the AI hype age (theconversation.com)
  16. ^ one experiment (www.youtube.com)
  17. ^ European Union’s AI Act (www.europarl.europa.eu)

Read more https://theconversation.com/ai-systems-have-learned-how-to-deceive-humans-what-does-that-mean-for-our-future-212197
