The Times Australia
Snapchat's 'creepy' AI blunder reminds us that chatbots aren't people. But as the lines blur, the risks grow

  • Written by Daswin de Silva, Deputy Director of the Centre for Data Analytics and Cognition, La Trobe University

Chatbots powered by artificial intelligence (AI) are becoming increasingly human-like by design, to the point that some of us may struggle to distinguish between human and machine.

This week, Snapchat’s My AI chatbot glitched and posted a story of what looked like a wall and ceiling, before it stopped responding to users. Naturally, the internet began to question[1] whether the ChatGPT-powered chatbot had gained sentience.

A crash course in AI literacy could have quelled this confusion. But, beyond that, the incident reminds us that as AI chatbots grow closer to resembling humans, managing their uptake will only get more challenging – and more important.

From rules-based to adaptive chatbots

Since ChatGPT burst onto our screens late last year, many digital platforms have integrated AI into their services. Even as I draft this article on Microsoft Word, the software’s predictive AI capability is suggesting possible sentence completions.

Read more: Google and Microsoft are bringing AI to Word, Excel, Gmail and more. It could boost productivity for us – and cybercriminals[2]

Known as generative AI, this relatively new type of AI is distinguished from its predecessors[3] by its ability to generate new content that is precise, human-like and seemingly meaningful.

Generative AI tools, including AI image generators and chatbots, are built on large language models (LLMs). These computational models analyse the associations between billions of words, sentences and paragraphs to predict what ought to come next in a given text. As OpenAI co-founder Ilya Sutskever puts it[4], an LLM is

[…] just a really, really good next-word predictor.
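To make the idea concrete, here is a deliberately simplified sketch of next-word prediction. Real LLMs use neural networks trained on billions of words; this toy version just counts which word most often follows another in a tiny sample text, but it illustrates the same underlying task:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the word most frequently seen after `word`, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept and the cat purred"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

An LLM does essentially this at vastly greater scale and sophistication, predicting the next word from the entire preceding context rather than just the previous word.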

Advanced LLMs are also fine-tuned with human feedback. This training, often delivered through countless hours of cheap human labour, is the reason AI chatbots can now have seemingly human-like conversations.

OpenAI’s ChatGPT is still the flagship generative AI model[5]. Its release marked a major leap from simpler “rules-based” chatbots, such as those used in online customer service.

Human-like chatbots that talk to a user rather than at them have been linked with higher levels of engagement. One study[6] found the personification of chatbots leads to increased engagement which, over time, may turn into psychological dependence. Another study involving stressed participants[7] found a human-like chatbot was more likely to be perceived as competent, and therefore more likely to help reduce participants’ stress.

These chatbots have also been effective in fulfilling organisational objectives in various settings, including retail, education, the workplace and healthcare[8].

Read more: The hidden cost of the AI boom: social and environmental exploitation[9]

Google is using generative AI to build a “personal life coach” that will supposedly help[10] people with various personal and professional tasks, including providing life advice and answering intimate questions.

This is despite Google’s own AI safety experts warning that users could grow too dependent on AI and may experience “diminished health and wellbeing” and a “loss of agency” if they take life advice from it.

Friend or foe – or just a bot?

In the recent Snapchat incident, the company put the whole thing down to a “temporary outage[11]”. We may never know what actually happened; it could be yet another example of AI “hallucinating”, the result of a cyberattack, or even just an operational error.

Either way, the speed with which some users assumed the chatbot had achieved sentience suggests we are seeing an unprecedented anthropomorphism of AI. It’s compounded by a lack of transparency from developers, and a lack of basic understanding among the public.

We shouldn’t underestimate how individuals may be misled by the apparent authenticity of human-like chatbots.

Earlier this year, a Belgian man’s suicide was attributed[12] to conversations he’d had with a chatbot about climate inaction and the planet’s future. In another example, a chatbot named Tessa was found to be[13] offering harmful advice to people through an eating disorder helpline.

Chatbots may be particularly harmful to the more vulnerable among us, and especially to those with psychological conditions.

A new uncanny valley?

You may have heard of the “uncanny valley” effect. It refers to that uneasy feeling you get when you see a humanoid robot that almost looks human, but its slight imperfections give it away, and it ends up being creepy.

It seems a similar experience is emerging in our interactions with human-like chatbots. A slight blip[14] can raise the hairs on the back of the neck.

One solution might be to lose the human edge and revert to chatbots that are straightforward, objective and factual. But this would come at the expense of engagement and innovation.

Education and transparency are key

Even the developers of advanced AI chatbots often can’t explain how they work. Yet in some ways (and as far as commercial entities are concerned) the benefits outweigh the risks.

Generative AI has demonstrated its usefulness[15] in big-ticket items such as productivity, healthcare, education and even social equity[16]. It’s unlikely to go away. So how do we make it work for us?

Since 2018, there has been a significant push for governments and organisations to address the risks of AI. But applying responsible standards and regulations[17] to a technology that’s more “human-like” than any other comes with a host of challenges.

Currently, there is no legal requirement for Australian businesses to disclose the use of chatbots. In the US, California has introduced a “bot bill” that would require this, but legal experts have poked holes in it[18] – and at the time of writing, the bill has yet to be enforced.

Moreover, ChatGPT and similar chatbots are made public as “research previews[19]”. This means they often come with multiple disclaimers about their prototype status, and the onus for responsible use falls on the user.

The European Union’s AI Act[20], the world’s first comprehensive regulation on AI, has identified moderate regulation and education as the path forward – since excess regulation could stunt innovation. Similar to digital literacy, AI literacy should be mandated in schools, universities and organisations, and should also be made free and accessible for the public.

Read more: Do we need a new law for AI? Sure – but first we could try enforcing the laws we already have[21]

References

  1. ^ question (9to5mac.com)
  2. ^ Google and Microsoft are bringing AI to Word, Excel, Gmail and more. It could boost productivity for us – and cybercriminals (theconversation.com)
  3. ^ predecessors (www.timeshighereducation.com)
  4. ^ puts it (lifearchitect.ai)
  5. ^ flagship generative AI model (www.reuters.com)
  6. ^ study (www.ingentaconnect.com)
  7. ^ stressed participants (dl.acm.org)
  8. ^ healthcare settings (www.latrobe.edu.au)
  9. ^ The hidden cost of the AI boom: social and environmental exploitation (theconversation.com)
  10. ^ supposedly help (www.nytimes.com)
  11. ^ temporary outage (techcrunch.com)
  12. ^ was attributed (www.livemint.com)
  13. ^ was found to be (www.theguardian.com)
  14. ^ slight blip (www.newscientist.com)
  15. ^ demonstrated its usefulness (www.gatesnotes.com)
  16. ^ even social equity (theconversation.com)
  17. ^ responsible standards and regulations (www.itu.int)
  18. ^ poked holes in it (www.wired.com)
  19. ^ research previews (openai.com)
  20. ^ European Union’s AI Act (www.europarl.europa.eu)
  21. ^ Do we need a new law for AI? Sure – but first we could try enforcing the laws we already have (theconversation.com)

Read more https://theconversation.com/snapchats-creepy-ai-blunder-reminds-us-that-chatbots-arent-people-but-as-the-lines-blur-the-risks-grow-211744
