AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?

  • Written by Olivier Salvado, Lead AI for Mission, CSIRO

Debates about AI often characterise it as a technology that has come to compete with human intelligence. Indeed, one of the most commonly voiced fears is that AI may achieve human-like intelligence and render humans obsolete in the process.

However, one of the world’s top AI scientists is now describing AI as a new form of intelligence – one that poses unique risks, and will therefore require unique solutions.

Geoffrey Hinton, a leading AI scientist and winner of the 2018 Turing Award, just stepped down from his role at Google to warn the world[1] about the dangers of AI. He follows in the footsteps of more than 1,000 technology leaders who signed an open letter calling for a global halt on the development of advanced AI for at least six months[2].

Hinton’s argument is nuanced. While he does think AI has the capacity to become smarter than humans, he also proposes it should be thought of as an altogether different form of intelligence to our own.

Why Hinton’s ideas matter

Although experts have been raising red flags for months, Hinton’s decision to voice his concerns is significant.

Dubbed the “godfather of AI”, he has helped pioneer many of the methods underlying the modern AI systems we see today. His early work on neural networks led to him being one of three individuals awarded the 2018 Turing Award[3]. And one of his students, Ilya Sutskever, went on to become co-founder of OpenAI, the organisation behind ChatGPT.

When Hinton speaks, the AI world listens. And if we’re to seriously consider his framing of AI as an intelligent non-human entity, one could argue we’ve been thinking about it all wrong.

The false equivalence trap

On one hand, large language model-based tools such as ChatGPT produce text that’s very similar to what humans write. ChatGPT even makes stuff up, or “hallucinates”, which Hinton points out is something humans do as well. But we risk being reductive when we consider such similarities a basis for comparing AI intelligence with human intelligence.

We can find a useful analogy in the invention of artificial flight. For thousands of years, humans tried to fly by imitating birds: flapping their arms with some contraption mimicking feathers. This didn’t work. Eventually, we realised fixed wings generate lift through a different principle, and this insight heralded the invention of flight.

Planes are no better or worse than birds; they are different. They do different things and face different risks.

AI (and computation, for that matter) is a similar story. Large language models such as GPT-3 are comparable to human intelligence in many ways, but work differently. ChatGPT crunches vast swathes of text to predict the next word in a sentence. Humans take a different approach to forming sentences. Both are impressive.
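To make the idea of “predicting the next word” concrete, here is a deliberately simplified sketch. Real large language models use neural networks trained on subword tokens, not raw word counts; this toy bigram counter (corpus and function names are invented for illustration) only shows the underlying principle of learning which word tends to follow which.

```python
from collections import Counter, defaultdict

# Toy corpus: a model "crunches text" by tallying which word follows which.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1  # count each observed word pair

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # prints 'cat' -- the most frequent follower
```

A model like GPT-3 does this at vastly greater scale and with far richer context than a single preceding word, but the task it is trained on is the same kind of statistical prediction.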

Read more: I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions[4]

How is AI intelligence unique?

Both AI experts and non-experts have long drawn a link between AI and human intelligence – not to mention the tendency to anthropomorphise AI[5]. But AI is fundamentally different to us in several ways. As Hinton explains[6]:

If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy […] But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.

AI outperforms humans on many tasks, including any task that relies on assembling patterns and information gleaned from large datasets. Humans are sluggish by comparison, with only a fraction of AI’s memory.

Yet humans have the upper hand on some fronts. We make up for our poor memory and slow processing speed by using common sense and logic. We can quickly and easily learn how the world works, and use this knowledge to predict the likelihood of events. AI still struggles with this (although researchers are working on it).

Humans are also very energy-efficient, whereas AI requires powerful computers (especially for learning) that use orders of magnitude more energy than us. As Hinton puts it:

humans can imagine the future […] on a cup of coffee and a slice of toast.

Okay, so what if AI is different to us?

If AI is fundamentally a different intelligence to ours, then it follows that we can’t (or shouldn’t) compare it to ourselves.

A new intelligence presents new dangers to society and will require a paradigm shift in the way we talk about and manage AI systems. In particular, we may need to reassess the way we think about guarding against the risks of AI.

One of the basic questions that has dominated these debates is how to define AI. After all, AI is not binary; intelligence exists on a spectrum, and the spectrum for human intelligence may be very different from that for machine intelligence.

This very point was the downfall of one of the earliest attempts to regulate AI back in 2017 in New York, when auditors couldn’t agree on which systems should be classified as AI[7]. Defining AI when designing regulation is very challenging[8].

So perhaps we should focus less on defining AI in a binary fashion, and more on the specific consequences of AI-driven actions.

What risks are we facing?

The speed of AI uptake in industries has taken everyone by surprise, and some experts are worried about the future of work.

This week, IBM CEO Arvind Krishna announced the company[9] could be replacing some 7,800 back-office jobs with AI in the next five years. We’ll need to adapt how we manage AI as it becomes increasingly deployed for tasks once completed by humans.

More worryingly, AI’s ability to generate fake text, images and video is leading us into a new age of information manipulation[10]. Our current methods of dealing with human-generated misinformation won’t be enough to address it.

Read more: AI could take your job, but it can also help you score a new one with these simple tips[11]

Hinton is also worried about the dangers of AI-driven autonomous weapons[12], and how bad actors may leverage them to commit atrocities.

These are just some examples of how AI – and specifically, different characteristics of AI – can bring risk to the human world. To regulate AI productively and proactively, we need to consider these specific characteristics, and not apply recipes designed for human intelligence.

The good news is humans have learnt to manage potentially harmful technologies before, and AI is no different.

If you’d like to hear more about the issues discussed in this article, check out the CSIRO’s Everyday AI podcast[13].

References

  1. ^ warn the world (www.technologyreview.com)
  2. ^ least six months (theconversation.com)
  3. ^ 2018 Turing Award (awards.acm.org)
  4. ^ I tried the Replika AI companion and can see why users are falling hard. The app raises serious ethical questions (theconversation.com)
  5. ^ anthropomorphise AI (theconversation.com)
  6. ^ explains (www.technologyreview.com)
  7. ^ should be classified as AI (carnegieendowment.org)
  8. ^ very challenging (theconversation.com)
  9. ^ announced the company (gizmodo.com)
  10. ^ new age of information manipulation (theconversation.com)
  11. ^ AI could take your job, but it can also help you score a new one with these simple tips (theconversation.com)
  12. ^ AI-driven autonomous weapons (theconversation.com)
  13. ^ Everyday AI podcast (www.csiro.au)

Read more https://theconversation.com/ai-pioneer-geoffrey-hinton-says-ai-is-a-new-form-of-intelligence-unlike-our-own-have-we-been-getting-it-wrong-this-whole-time-204911
