The Times Australia

Evidence shows AI systems are already too much like humans. Will that be a problem?

  • Written by Sandra Peter, Director of Sydney Executive Plus, University of Sydney

What if we could design a machine that could read your emotions and intentions, write thoughtful, empathetic, perfectly timed responses — and seemingly know exactly what you need to hear? A machine so seductive, you wouldn’t even realise it’s artificial. What if we already have?

In a comprehensive meta-analysis, published in the Proceedings of the National Academy of Sciences[1], we show that the latest generation of large language model-powered chatbots match and exceed most humans in their ability to communicate. A growing body of research shows these systems now reliably pass the Turing test[2], fooling humans into thinking they are interacting with another human.

None of us was expecting the arrival of super communicators. Science fiction taught us that artificial intelligence (AI) would be highly rational and all-knowing, but lack humanity.

Yet here we are. Recent experiments have shown that models such as GPT-4 outperform humans in writing persuasively[3] and also empathetically[4]. Another study found that large language models (LLMs) excel at assessing nuanced sentiment[5] in human-written messages.

LLMs are also masters at roleplay[6], assuming a wide range of personas and mimicking nuanced linguistic character styles[7]. This is amplified by their ability to infer human beliefs[8] and intentions from text. Of course, LLMs do not possess true empathy or social understanding – but they are highly effective mimicking machines.

We call these systems “anthropomorphic agents”. Traditionally, anthropomorphism refers to ascribing human traits to non-human entities. However, LLMs genuinely display highly human-like qualities, so calls to avoid anthropomorphising LLMs will fall flat.

This is a landmark moment: the point at which you can no longer tell whether you are talking to a human or an AI chatbot online.

On the internet, nobody knows you’re an AI

What does this mean? On the one hand, LLMs promise to make complex information more widely accessible via chat interfaces, tailoring messages to individual comprehension levels[9]. This has applications across many domains, such as legal services or public health. In education, the roleplay abilities can be used to create Socratic tutors that ask personalised questions and help students learn.

At the same time, these systems are seductive. Millions of users already interact with AI companion apps daily. Much has been said about the negative effects of companion apps[10], but anthropomorphic seduction comes with far wider implications.

Users are ready to trust AI chatbots[11] so much that they disclose highly personal information. Pair this with the bots’ highly persuasive qualities, and genuine concerns emerge[12].

A screen reading 'Introducing ChatGPT'
The launch of ChatGPT in 2022 triggered a wave of anthropomorphic, conversational AI agents. Wu Hao / EPA

Recent research by AI company Anthropic[13] further shows that its Claude 3 chatbot was at its most persuasive when allowed to fabricate information and engage in deception. Given AI chatbots have no moral inhibitions, they are poised to be much better at deception than humans.

This opens the door to manipulation at scale, to spread disinformation, or create highly effective sales tactics. What could be more effective than a trusted companion casually recommending a product in conversation? ChatGPT has already begun to provide product recommendations[14] in response to user questions. It’s only a short step to subtly weaving product recommendations into conversations – without you ever asking.

What can be done?

It is easy to call for regulation, but harder to work out the details.

The first step is to raise awareness of these abilities. Regulation should prescribe disclosure: users always need to know they are interacting with an AI, as the EU AI Act mandates[15]. But this will not be enough, given the AI systems’ seductive qualities.

The second step must be to better understand anthropomorphic qualities. So far, tests of LLMs measure their “intelligence” and knowledge recall, but none measures their degree of “human likeness”. With a test like this, AI companies could be required to disclose anthropomorphic abilities with a rating system, and legislators could determine acceptable risk levels for certain contexts and age groups.

The cautionary tale of social media, which was largely unregulated until much harm had been done, suggests there is some urgency. If governments take a hands-off approach, AI is likely to amplify existing problems, from the spread of mis- and disinformation[16] to the loneliness epidemic[17]. In fact, Meta chief executive Mark Zuckerberg[18] has already signalled that he would like to fill the void of real human contact with “AI friends”.

Photo of Mark Zuckerberg sitting on a stage holding a microphone.
Meta CEO Mark Zuckerberg thinks AI ‘friends’ are the future. Jeff Chiu / AP

Relying on AI companies to refrain from further humanising their systems seems ill-advised. All developments point in the opposite direction. OpenAI is working on making its systems more engaging and personable, with the ability to give your version of ChatGPT a specific “personality”[19]. ChatGPT has generally become more chatty, often asking follow-up questions to keep the conversation going, and its voice mode[20] adds even more seductive appeal.

Much good can be done with anthropomorphic agents. Their persuasive abilities can be used for good causes as well as ill ones, from fighting conspiracy theories to encouraging users to donate and adopt other prosocial behaviours.

Yet we need a comprehensive agenda across the spectrum of design and development, deployment and use, and policy and regulation of conversational agents. When AI can inherently push our buttons, we shouldn’t let it change our systems.

References

  1. ^ published in the Proceedings of the National Academy of Sciences (www.pnas.org)
  2. ^ pass the Turing test (doi.org)
  3. ^ writing persuasively (arxiv.org)
  4. ^ empathetically (arxiv.org)
  5. ^ excel at assessing nuanced sentiment (link.springer.com)
  6. ^ masters at roleplay (www.nature.com)
  7. ^ mimicking nuanced linguistic character styles (arxiv.org)
  8. ^ infer human beliefs (www.nature.com)
  9. ^ tailoring messages to individual comprehension levels (arxiv.org)
  10. ^ negative effects of companion apps (theconversation.com)
  11. ^ trust AI chatbots (aisel.aisnet.org)
  12. ^ genuine concerns emerge (theconversation.com)
  13. ^ Recent research by AI company Anthropic (www.anthropic.com)
  14. ^ begun to provide product recommendations (openai.com)
  15. ^ like the EU AI Act mandates (www.euaiact.com)
  16. ^ spreading of mis- and disinformation (www.science.org)
  17. ^ loneliness epidemic (theconversation.com)
  18. ^ Meta chief executive Mark Zuckerberg (www.wsj.com)
  19. ^ give your version of ChatGPT a specific “personality” (autogpt.net)
  20. ^ voice mode (www.theverge.com)

Read more https://theconversation.com/evidence-shows-ai-systems-are-already-too-much-like-humans-will-that-be-a-problem-256980
