Can machines be self-aware? New research explains how this could happen

  • Written by Michael Timothy Bennett, PhD Student, School of Computing, Australian National University

To build a machine, one must know what its parts are and how they fit together. To understand it, one must know what each part does and how it contributes to the machine’s function. In other words, one should be able to explain the “mechanics” of how it works.

According to a philosophical approach[1] called mechanism, humans are arguably a type of machine – and our ability to think, speak and understand the world is the result of a mechanical process we don’t understand.

To understand ourselves better, we can try to build machines that mimic our abilities. In doing so, we would have a mechanistic understanding of those machines. And the more of our behaviour the machine exhibits, the closer we might be to having a mechanistic explanation of our own minds.

This is what makes AI interesting from a philosophical point of view. Advanced models such as GPT-4 and Midjourney can now mimic human conversation, pass professional exams and generate beautiful pictures from just a few words.

Yet, for all the progress, questions remain unanswered. How can we make something self-aware, or aware that others are aware? What is identity? What is meaning?

Although there are many competing philosophical descriptions of these things, they have all resisted mechanistic explanation.

In a sequence of papers[2] accepted for the 16th Annual Conference on Artificial General Intelligence[3] in Stockholm, I propose a mechanistic explanation for these phenomena. The papers explain how we may build a machine that’s aware of itself, of others, of itself as perceived by others, and so on.

Read more: A Google software engineer believes an AI has become sentient. If he’s right, how would we know?[4]

Intelligence and intent

A lot of what we call intelligence boils down to making predictions about the world with incomplete information. The less information a machine needs to make accurate predictions, the more “intelligent” it is.
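To make that notion concrete, here is a minimal sketch in Python. The task and both learners are invented for illustration (they do not come from the papers discussed below): each tries to predict a hidden threshold rule, and the learner that reaches high accuracy from fewer examples counts as more “intelligent” in this sense.

```python
import random

random.seed(0)
THRESHOLD = 37  # the hidden rule both learners must discover

def label(x):
    return x > THRESHOLD

def memoriser(train):
    """Predicts only what it has literally seen; guesses otherwise."""
    seen = dict(train)
    return lambda x: seen[x] if x in seen else random.random() < 0.5

def generaliser(train):
    """Infers a boundary between the seen negatives and positives."""
    highs = [x for x, y in train if y]
    lows = [x for x, y in train if not y]
    cut = (min(highs) + max(lows)) / 2 if highs and lows else 50
    return lambda x: x > cut

test = [(x, label(x)) for x in range(100)]

def accuracy(model):
    return sum(model(x) == y for x, y in test) / len(test)

for n in (5, 20, 80):
    train = [(x, label(x)) for x in random.sample(range(100), n)]
    print(f"{n:2d} examples: memoriser {accuracy(memoriser(train)):.2f}, "
          f"generaliser {accuracy(generaliser(train)):.2f}")
```

With the seed fixed as above, the generaliser is already near-perfect from five examples, while the memoriser hovers near chance on anything it hasn’t seen: same task, far less information needed.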

For any given task, there’s a limit to how much intelligence is actually useful. For example, most adults are smart enough to learn to drive a car, but more intelligence probably won’t make them a better driver.

My papers describe the upper limit of intelligence[5] for a given task, and what is required to build a machine that attains it.

I named the idea Bennett’s Razor. In non-technical terms, it holds that “explanations should be no more specific than necessary”. This is distinct from the popular interpretation of Ockham’s Razor (and mathematical descriptions thereof[6]), which is a preference for simpler explanations.

The difference is subtle, but significant. In an experiment[7] comparing how much data AI systems need to learn simple maths, the AI that preferred less specific explanations outperformed one preferring simpler explanations by as much as 500%.
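The following minimal sketch shows how the two razors can part ways. It is a toy concept-learning problem invented for illustration, not the maths experiment cited above: hypotheses are rules for labelling 3-bit inputs, “weakness” is counted as how many of the eight possible inputs a rule labels positive (a crude stand-in for the size of a rule’s extension), and the Ockham-style score is simply description length.

```python
from itertools import product

inputs = list(product([0, 1], repeat=3))

# candidate rules: description -> predicate deciding label 1
rules = {
    "b0==1"           : lambda b: b[0] == 1,
    "b1==1"           : lambda b: b[1] == 1,
    "b0==1 and b1==1" : lambda b: b[0] == 1 and b[1] == 1,
    "b0==1 or b1==1"  : lambda b: b[0] == 1 or b[1] == 1,
}

# observed examples: (input, label)
data = [((1, 1, 0), 1), ((1, 1, 1), 1), ((0, 0, 0), 0)]

consistent = {d: r for d, r in rules.items()
              if all(int(r(b)) == y for b, y in data)}

def weakness(rule):
    # size of the rule's extension: how many inputs it labels 1
    return sum(rule(b) for b in inputs)

weakest  = max(consistent, key=lambda d: weakness(consistent[d]))
shortest = min(consistent, key=len)

print("consistent with the data:", sorted(consistent))
print("Bennett's Razor picks   :", weakest)   # least specific rule
print("Ockham-style razor picks:", shortest)  # shortest description
```

From the same data, the weakness-maximising choice is the least specific consistent rule (“b0==1 or b1==1”), while the shortest description is a more specific one (“b0==1”). Which generalises better depends on the hidden rule; the experiment above measured that gap on simple maths tasks.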

Hypothetical patent filing for a self-aware machine, generated by an artificial intelligence from just a few words. Michael Timothy Bennett / Generated using MidJourney

Exploring the implications of this discovery led me to a mechanistic explanation of meaning, via something called “Gricean pragmatics[8]”. This is a concept in the philosophy of language that looks at how meaning is related to intent.

To survive, an animal needs to predict how its environment, including other animals, will act and react. You wouldn’t hesitate to leave a car unattended near a dog, but the same can’t be said of your rump steak lunch.

Being intelligent in a community means being able to infer the intent of others, which stems from their feelings and preferences. If a machine were to attain the upper limit of intelligence for a task that depends on interactions with a human, it would also have to correctly infer intent.

And if a machine can ascribe intent to the events and experiences befalling it, this raises the question of identity and what it means to be aware of oneself and others.

Causality and identity

I see John wearing a raincoat when it rains. If I force John to wear a raincoat on a sunny day, will that bring rain?

Of course not! To a human, this is obvious. But the subtleties of cause and effect are more difficult to teach a machine (interested readers can check out The Book of Why[9] by Judea Pearl and Dana Mackenzie).

To reason about these things, a machine needs to learn that “I caused it to happen” is different from “I saw it happen”. Typically, we’d program[10] this understanding into it.
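The difference between the two can be shown in a few lines. Below is a minimal sketch of the raincoat story as a toy structural model; the probabilities are made up, and the only structural fact is that rain causes the raincoat, never the reverse.

```python
import random

random.seed(1)

def world(force_raincoat=False):
    rain = random.random() < 0.3   # rain happens on its own
    if force_raincoat:
        raincoat = True            # do(raincoat = on): we intervene
    else:
        # John reacts to the weather: usually wears it when it rains
        raincoat = random.random() < (0.9 if rain else 0.1)
    return rain, raincoat

N = 100_000

# "I saw it happen": among days John is seen in a raincoat, rain is common.
observed = [rain for rain, coat in (world() for _ in range(N)) if coat]
print("P(rain | saw raincoat) ~", round(sum(observed) / len(observed), 2))

# "I caused it to happen": forcing the raincoat leaves the weather untouched.
forced = [world(force_raincoat=True)[0] for _ in range(N)]
print("P(rain | do(raincoat)) ~", round(sum(forced) / N, 2))
```

Conditioning on having seen the raincoat raises the estimate of rain to roughly 0.8, while forcing the raincoat on leaves it at the base rate of 0.3. Seeing and doing support different inferences, which is precisely what the machine must learn.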

However, my work explains how we can build a machine that performs at the upper limit of intelligence for a task. Such a machine must, by definition, correctly identify cause and effect – and therefore also infer causal relations. My papers[11] explore exactly how.

The implications of this are profound. If a machine learns “I caused it to happen”, then it must construct concepts of “I” (an identity for itself) and “it”.

The abilities to infer intent, to learn cause and effect, and to construct abstract identities are all linked. A machine that attains the upper limit of intelligence for a task must exhibit all these abilities.

This machine does not just construct an identity for itself, but for every aspect of every object that helps or hinders its ability to complete the task. It can then use its own preferences[12] as a baseline to predict[13] what others may do. This is similar to how humans tend to ascribe[14] intent to non-human animals.

So what does it mean for AI?

Of course, the human mind is far more than the simple program used to conduct experiments in my research. My work provides a mathematical description of a possible causal pathway to creating a machine that is arguably self-aware. However, the specifics of engineering such a thing are far from solved.

For example, human-like intent would require human-like experiences and feelings, which are difficult things to engineer. Furthermore, we can’t easily test for the full richness of human consciousness. Consciousness is a broad and ambiguous concept that encompasses, but should be distinguished from, the narrower claims above.

I have provided a mechanistic explanation of aspects of consciousness, but this alone does not capture consciousness as humans experience it. This is only the beginning, and future research will need to expand on these arguments.

References

  1. ^ philosophical approach (en.wikipedia.org)
  2. ^ sequence of papers (michaeltimothybennett.com)
  3. ^ 16th Annual Conference in Artificial General Intelligence (agi-conf.org)
  4. ^ A Google software engineer believes an AI has become sentient. If he’s right, how would we know? (theconversation.com)
  5. ^ the upper limit of intelligence (www.techrxiv.org)
  6. ^ mathematical descriptions thereof (www.cambridge.org)
  7. ^ experiment (www.techrxiv.org)
  8. ^ Gricean pragmatics (plato.stanford.edu)
  9. ^ The Book of Why (www.amazon.com.au)
  10. ^ program (en.wikipedia.org)
  11. ^ My papers (www.techrxiv.org)
  12. ^ use its own preferences (www.techrxiv.org)
  13. ^ baseline to predict (ieeexplore.ieee.org)
  14. ^ humans tend to ascribe (www.sciencedirect.com)

Read more https://theconversation.com/can-machines-be-self-aware-new-research-explains-how-this-could-happen-204371
