AI is failing ‘Humanity’s Last Exam’. So what does that mean for machine intelligence?

  • Written by Kai Riemer, Professor of Information Technology and Organisation, University of Sydney

How do you translate ancient Palmyrene script from a Roman tombstone? How many paired tendons are supported by a specific sesamoid bone in a hummingbird? Can you identify closed syllables in Biblical Hebrew based on the latest scholarship on Tiberian pronunciation traditions?

These are some of the questions in “Humanity’s Last Exam”, a new benchmark introduced in a study[1] published this week in Nature. The collection of 2,500 questions is specifically designed to probe the outer limits of what today’s artificial intelligence (AI) systems cannot do.

The benchmark is the product of a collaboration of nearly 1,000 experts from around the world, who contributed questions at the frontier of human knowledge across a range of academic fields. The problems require graduate-level expertise in mathematics, physics, chemistry, biology, computer science and the humanities. Importantly, every question was tested against leading AI models before inclusion: if an AI could answer it correctly at the time the test was designed, the question was rejected.

This process explains why the initial results looked so different from other benchmarks. While AI chatbots score above 90% on popular tests[2], when Humanity’s Last Exam was first released in early 2025, leading models struggled badly. GPT-4o managed just 2.7% accuracy. Claude 3.5 Sonnet scored 4.1%. Even OpenAI’s most powerful model, o1, achieved only 8%.

The low scores were the point. The benchmark was constructed to measure what remained beyond AI’s grasp. And while some commentators have suggested[3] that benchmarks like Humanity’s Last Exam chart a path toward artificial general intelligence, or even superintelligence – that is, AI systems capable of performing any task at human or superhuman levels – we believe this is wrong for three reasons.

Benchmarks measure task performance, not intelligence

When a student scores well on the bar exam, we can reasonably predict they’ll make a competent lawyer. That’s because the test was designed to assess whether humans have acquired the knowledge and reasoning skills needed for legal practice – and for humans, that works. The understanding required to pass genuinely transfers to the job.

But AI systems are not humans preparing for careers.

When a large language model scores well on the bar exam, it tells us the model can produce correct-looking answers to legal questions. It doesn’t tell us the model understands law, can counsel a nervous client, or can exercise professional judgment in ambiguous situations.

The test measures something real for humans; for AI it measures only performance on the test itself.

Using human ability tests to benchmark AI is common practice, but it’s fundamentally misleading. Assuming a high test score means the machine has become more human-like is a category error, much like concluding that a calculator “understands” mathematics because it can solve equations faster than any person.

Human and machine intelligence are fundamentally different

Humans learn continuously from experience. We have intentions, needs and goals. We live lives, inhabit bodies and experience the world directly. Our intelligence evolved to serve our survival as organisms and our success as social creatures.

But AI systems are very different[4].

Large language models derive their capabilities from patterns in the text they were trained on. But they don’t really learn[5].

For humans, intelligence comes first and language serves as a tool for communication – intelligence is prelinguistic[6]. But for large language models, language is the intelligence – there’s nothing underneath.

Even the creators of Humanity’s Last Exam acknowledge[7] this limitation:

High accuracy on [Humanity’s Last Exam] would demonstrate expert-level performance on closed-ended, verifiable questions and cutting-edge scientific knowledge, but it would not alone suggest autonomous research capabilities or artificial general intelligence.

Subbarao Kambhampati, professor at Arizona State University and former president of the Association for the Advancement of Artificial Intelligence, puts it more clearly[8]:

Humanity’s essence isn’t captured by a static test but rather by our ability to evolve and tackle previously unimaginable questions.

Developers like leaderboards

There’s another problem. AI developers use benchmarks to optimise their models for leaderboard performance. They’re essentially cramming for the exam. And unlike humans, for whom studying for a test builds understanding, optimising an AI model simply means getting better at that specific test.

But it’s working.

Since Humanity’s Last Exam was published online in early 2025, scores have climbed dramatically[9]. Gemini 3 Pro Preview now tops the leaderboard at 38.3% accuracy, followed by GPT-5 at 25.3% and Grok 4 at 24.5%.

Does this improvement mean these models are approaching human intelligence? No. It means they’ve gotten better at the kinds of questions the exam contains. The benchmark has become a target to optimise against.

The industry is recognising this problem.

OpenAI recently introduced a measure called GDPval[10] specifically designed to assess real-world usefulness.

Unlike academic-style benchmarks, GDPval focuses on tasks based on actual work products such as project documents, data analyses and deliverables that exist in professional settings.

What this means for you

If you’re using AI tools in your work or considering adopting them, don’t be swayed by benchmark scores. A model that aces Humanity’s Last Exam might still struggle with the specific tasks you need done.

It’s also worth noting the exam’s questions are heavily skewed toward certain domains. Mathematics alone accounts for 41% of the benchmark, with physics, biology and computer science making up much of the rest. If your work involves writing, communication, project management or customer service, the exam tells you almost nothing about which model might serve you best.

A practical approach is to devise your own tests based on what you actually need AI to do, then evaluate newer models against criteria that matter to you. AI systems are genuinely useful – but any discussion about superintelligence remains science fiction and a distraction from the real work of making these tools relevant to people’s lives.
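To make that concrete, here is a minimal sketch in Python of what a small, personal benchmark might look like. The `ask_model` function and the example tasks are placeholders, not part of any real product or API: you would swap in a call to whichever chatbot or model API you actually use, and replace the tasks and checks with ones drawn from your own work.

```python
# A sketch of a "personal benchmark": a handful of tasks taken from your own
# work, run against any model you can call, and scored on criteria you care
# about. `ask_model` is a placeholder, not a real API.

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to your chosen model and return its reply."""
    raise NotImplementedError("Wire this up to your preferred AI provider.")

# Each test case is a task you genuinely need done, plus a simple check that
# reflects what a good answer looks like for you (here: required phrases).
TEST_CASES = [
    {
        "name": "summarise_meeting_notes",
        "prompt": "Summarise these meeting notes in three bullet points: ...",
        "must_contain": ["action item", "deadline"],
    },
    {
        "name": "draft_customer_reply",
        "prompt": "Draft a polite reply to a customer asking for a refund: ...",
        "must_contain": ["refund"],
    },
]

def run_benchmark() -> None:
    passed = 0
    for case in TEST_CASES:
        answer = ask_model(case["prompt"]).lower()
        ok = all(phrase in answer for phrase in case["must_contain"])
        passed += ok
        print(f"{case['name']}: {'PASS' if ok else 'FAIL'}")
    print(f"{passed}/{len(TEST_CASES)} tasks met your criteria")

if __name__ == "__main__":
    run_benchmark()
```

Keyword checks are crude, and you might instead score answers by hand or compare two models side by side. But even a rough harness like this tells you more about a model’s fit for your work than any leaderboard score.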

References

  1. ^ study (doi.org)
  2. ^ popular tests (arxiv.org)
  3. ^ commentators have suggested (uk.finance.yahoo.com)
  4. ^ are very different (scholarspace.manoa.hawaii.edu)
  5. ^ they don’t really learn (theconversation.com)
  6. ^ intelligence is prelinguistic (doi.org)
  7. ^ acknowledge (doi.org)
  8. ^ puts it more clearly (the-decoder.com)
  9. ^ scores have climbed dramatically (lastexam.ai)
  10. ^ OpenAI recently introduced a measure called GDPval (openai.com)

Read more https://theconversation.com/ai-is-failing-humanitys-last-exam-so-what-does-that-mean-for-machine-intelligence-274620
