The Times Australia
Why ChatGPT isn’t conscious – but future AI systems might be

  • Written by Colin Klein, Professor, School of Philosophy, Australian National University

In June 2022, Google engineer Blake Lemoine made headlines by claiming the company’s LaMDA chatbot had achieved sentience. The software had the conversational ability of a precocious seven-year-old, Lemoine said[1], and we should assume it possessed a similar awareness of the world.

LaMDA, later released to the public as Bard[2], is powered by a “large language model” (LLM) of the kind that also forms the engine of OpenAI’s ChatGPT bot. Other big tech companies are rushing to deploy similar technology.

Hundreds of millions of people have now had the chance to play with LLMs, but few seem to believe they are conscious. Instead, in linguist and data scientist Emily Bender’s poetic phrase[3], they are “stochastic parrots”, which chatter convincingly without understanding. But what about the next generation of artificial intelligence (AI) systems, and the one after that?

Our team of philosophers, neuroscientists and computer scientists looked to current scientific theories of how human consciousness works to draw up a list of basic computational properties[4] that any hypothetically conscious system would likely need to possess. In our view, no current system comes anywhere near the bar for consciousness – but at the same time, there’s no obvious reason future systems won’t become truly aware.

Finding indicators

Since computing pioneer Alan Turing proposed his “Imitation Game[5]” in 1950, the ability to successfully impersonate a human in conversation has often been taken as a reliable marker of consciousness. This is usually because the task has seemed so difficult that it must require consciousness.

However, as with chess computer Deep Blue’s 1997 defeat of grandmaster Garry Kasparov[6], the conversational fluency of LLMs may just move the goalposts. Is there a principled way to approach the question of AI consciousness that does not rely on our intuitions about what is difficult or special about human cognition?

Read more: A Google software engineer believes an AI has become sentient. If he’s right, how would we know?[7]

Our recent white paper[8] aims to do just that. We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.

We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness.
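The logic of this approach can be sketched in a few lines of code. This is purely illustrative: the indicator names below are hypothetical stand-ins, not the actual 14 properties from the white paper, and the real assessment involves careful qualitative analysis rather than a boolean checklist.

```python
# A toy sketch of the indicator-based method: each indicator property is
# treated as a yes/no check on a system's architecture, and the count of
# satisfied indicators (rather than any single one) shapes how seriously
# a consciousness claim should be taken. Indicator names are illustrative.

INDICATORS = [
    "capacity_limited_workspace",
    "global_rebroadcast",
    "recurrent_processing",
    "agency_and_embodiment",
]

def assess(system_properties):
    """Count which indicator properties a described system satisfies."""
    met = [ind for ind in INDICATORS if system_properties.get(ind, False)]
    return len(met), met

# A system meeting one of four indicators: weak evidence either way.
score, met = assess({
    "capacity_limited_workspace": True,
    "recurrent_processing": False,
})
```

The point of structuring it this way is that no single indicator is decisive; evidence accumulates gradually as more boxes are ticked.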

The computational processes behind consciousness

What sort of indicators were we looking for? We avoided overt behavioural criteria – such as being able to hold conversations with people – because these tend to be both human-centric and easy to fake.

Instead, we looked at theories of the computational processes that support consciousness in the human brain. These can tell us about the sort of information-processing needed to support subjective experience.

“Global workspace theories”, for example, postulate that consciousness arises from the presence of a capacity-limited bottleneck which collates information from all parts of the brain and selects information to make globally available. “Recurrent processing theories” emphasise the role of feedback from later processes to earlier ones.
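The global workspace idea can be made concrete with a toy sketch. This is not any lab’s actual model, just an illustration of the computational pattern: many specialist modules produce signals, a capacity-limited bottleneck selects the strongest few, and only those are broadcast back to every module.

```python
# Toy illustration of a global workspace step: rank candidate contents
# by salience, keep only as many as fit through the bottleneck, then
# rebroadcast the selected contents to every module.

def global_workspace_step(module_outputs, capacity=2):
    """Select the highest-salience items and broadcast them globally."""
    # Rank candidate contents by salience (a stand-in for priority/attention).
    ranked = sorted(module_outputs.items(), key=lambda kv: kv[1], reverse=True)
    # The bottleneck: only `capacity` items fit in the workspace at once.
    workspace = [name for name, _ in ranked[:capacity]]
    # Global rebroadcast: every module receives the same workspace contents.
    return {module: workspace for module in module_outputs}

signals = {"vision": 0.9, "audition": 0.4, "memory": 0.7, "motor": 0.1}
broadcast = global_workspace_step(signals)
# Every module now "sees" the same two selected items: vision and memory.
```

On this family of theories, it is the selection-plus-rebroadcast loop, not conversational behaviour, that matters for consciousness, which is why the indicators focus on architecture rather than output.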

Each theory in turn suggests more specific indicators. Our final list contains 14 indicators, each focusing on an aspect of how systems work rather than how they behave.

No reason to think current systems are conscious

How do current technologies stack up? Our analysis suggests there is no reason to think current AI systems are conscious.

Some do meet a few of the indicators. Systems using the transformer architecture, a kind of machine-learning model behind ChatGPT and similar tools[9], meet three of the “global workspace” indicators, but lack the crucial ability for global rebroadcast. They also fail to satisfy most of the other indicators.

So, despite ChatGPT’s impressive conversational abilities, there is probably nobody home inside. Other architectures similarly meet at best a handful of criteria.

Read more: Not everything we call AI is actually 'artificial intelligence'. Here's what you need to know[10]

Most current architectures meet only a few of the indicators. For most of the indicators, however, there is at least one current architecture that meets it.

This suggests there are no obvious, in-principle technical barriers to building AI systems that satisfy most or all of the indicators.

It is probably a matter of when rather than if some such system is built. Of course, plenty of questions will still remain when that happens.

Beyond human consciousness

The scientific theories we canvass (and the authors of the paper!) don’t always agree with one another. We used a list of indicators rather than strict criteria to acknowledge that fact. This can be a powerful methodology in the face of scientific uncertainty.

We were inspired by similar debates about animal consciousness. Most of us think at least some nonhuman animals are conscious, despite the fact they cannot converse with us about what they’re feeling.

A 2021 report[11] from the London School of Economics arguing that cephalopods such as octopuses likely feel pain was instrumental in changing UK animal ethics policy[12]. A focus on structural features has the surprising consequence that even some simple animals, like insects, might possess a minimal form of consciousness[13].

Our report does not make recommendations for what to do with conscious AI. This question will become more pressing as AI systems inevitably become more powerful and widely deployed.

Our indicators will not be the last word – but we hope they will become a first step in tackling this tricky question in a scientifically grounded way.

Read more https://theconversation.com/why-chatgpt-isnt-conscious-but-future-ai-systems-might-be-212860
