Why comparisons between AI and human intelligence miss the point

  • Written by Celeste Rodriguez Louro, Associate Professor, Chair of Linguistics and Director of Language Lab, The University of Western Australia
Claims that artificial intelligence (AI) is on the verge of surpassing human intelligence have become commonplace. According to some commentators[1], rapid advances in large language models signal an imminent tipping point – often framed as “superintelligence[2]” – that will fundamentally reshape society.

But comparing AI to individual intelligence misses something essential about what human intelligence is. Our intelligence doesn’t operate primarily at the level of isolated individuals. It is social, embodied and collective. Once this is taken seriously, the claim that AI is set to surpass human intelligence becomes far less convincing.

These claims rest on a particular comparison: AI systems are measured against individual human cognitive performance. Can a machine write an essay, pass an exam, diagnose disease, or compose music as well as a person? On these narrow benchmarks, AI appears impressive.

Yet this framing mirrors the limitations of traditional intelligence testing itself: cultural bias, and a tendency to reward familiarity and practice. The rise of AI should therefore prompt more thought about what we mean by intelligence, pushing us to move beyond narrow cognitive metrics, and even beyond popular expansions such as emotional intelligence, toward richer, more contextual definitions.

Intelligence is not individual brilliance

Human cognitive achievements are often attributed to exceptional individuals, but this is misleading. Research[3] in cognitive science and anthropology shows that even our most advanced ideas emerge from collective processes: shared language, cultural transmission, cooperation and cumulative learning across generations.

No scientist, engineer or artist works alone. Scientific discovery depends on shared methods, peer review and institutions. Language itself – arguably humanity’s most powerful cognitive technology – is a collective achievement, refined and modified over thousands of years through social interaction.

Studies of “collective intelligence” consistently show[4] that groups can outperform even their most capable members when diversity of perspectives, communication and coordination are present. This collective capacity is not an optional add-on to human intelligence; it is its foundation.

AI systems, by contrast, do not cooperate, negotiate meaning, form social bonds or engage in shared moral reasoning. They process information in isolation, responding to prompts without awareness, intention or accountability.

Embodiment and social understanding matter

Human intelligence is also embodied. Our thinking is shaped by physical experience, emotion and social interaction. Developmental psychology shows[5] that learning begins in infancy through touch, movement, imitation and shared attention with others. These embodied experiences ground abstract reasoning later in life.

AI lacks this grounding. Language models learn statistical patterns from text, not meaning from lived experience. They do not understand concepts in the way humans do; they approximate linguistic responses based on correlations in data.

This limitation becomes clear in social and ethical contexts. Humans navigate norms, values and emotional cues through interaction and the shared cultural understandings into which we are socialised. Machines do not.

A narrow slice of humanity

Proponents of AI progress often point to[6] the vast amounts of data used to train modern systems. Yet this data represents a remarkably narrow slice of humanity.

Around 80% of online content is produced in just ten languages[7]. Although more than 7,000 languages are spoken worldwide, only a few hundred are consistently represented on the internet – and far fewer in high-quality, machine-readable form.

This matters because language carries culture, values and ways of thinking. Training AI on a largely homogenised data set means embedding the perspectives, assumptions and biases of a relatively small portion of the world’s population.

Human intelligence, by contrast, is defined by diversity. Eight billion people, living in different environments and social systems, contribute to a shared but plural cognitive landscape.

AI does not have access to this richness, nor can it generate it independently. The data on which it is trained stems from a highly biased sample, representing only a small fraction of the world's knowledge.

The limits of scaling

Another issue rarely addressed in claims about “superhuman” AI is data scarcity. Large models improve by ingesting more high-quality data, but this is a finite resource. Researchers have already warned[8] that models are approaching the limits of available human-generated text suitable for training.

One proposed solution is to train AI on data generated by other AI systems. But this risks creating a feedback loop in which errors, biases and simplifications are amplified rather than corrected. Instead of learning from the world, models learn from distorted reflections of themselves.

This is not a path to deeper understanding. It is closer to an echo chamber.

Useful tools, not superior minds

None of this is to deny that AI systems are powerful tools. They can increase efficiency, assist research, support decision-making and expand access to information. Used carefully and with oversight, they can be socially beneficial.

But usefulness is not the same as intelligence in the human sense. AI remains narrow, derivative and dependent on human input, evaluation and correction. It does not form intentions, participate in collective reasoning or contribute to the cultural processes that make human intelligence what it is.

The rapid progress of AI has generated excitement – and, in some quarters, exaggerated expectations. The danger is not that machines will out-think us tomorrow, but that inflated narratives distract from real issues: bias, governance, labour impacts and the responsible integration of these tools into society.

A category error

Comparing AI to human intelligence as though they are competing on the same terms is ultimately a category error. Humans are not isolated information processors. We are social beings whose intelligence emerges from cooperation, diversity and shared meaning.

Until machines can participate in that collective, embodied and ethical dimension of cognition – and there is no evidence they can – the idea that AI will surpass human intelligence remains more hype than insight.

References

  1. ^ some commentators (www.darioamodei.com)
  2. ^ superintelligence (www.ibm.com)
  3. ^ Research (doi.org)
  4. ^ consistently show (doi.org)
  5. ^ shows (doi.org)
  6. ^ point to (www.psu.edu)
  7. ^ just ten languages (languagemagazine.com)
  8. ^ already warned (doi.org)

Read more https://theconversation.com/why-comparisons-between-ai-and-human-intelligence-miss-the-point-274621
