AI was born at a US summer camp 68 years ago. Here’s why that event still matters today

  • Written by: Sandra Peter, Director of Sydney Executive Plus, University of Sydney

Imagine a group of young men gathered at a picturesque college campus in New England, in the United States, during the northern summer of 1956.

It’s a small casual gathering. But the men are not here for campfires and nature hikes in the surrounding mountains and woods. Instead, these pioneers are about to embark on an experimental journey that will spark countless debates for decades to come and change not just the course of technology – but the course of humanity.

Welcome to the Dartmouth Conference – the birthplace of artificial intelligence (AI) as we know it today.

What transpired here would ultimately lead to ChatGPT and the many other kinds of AI which now help us diagnose disease, detect fraud, put together playlists and write articles (well, not this one). But it would also create some of the many problems the field is still trying to overcome. Perhaps by looking back, we can find a better way forward.

The summer that changed everything

In the mid-1950s, rock’n’roll was taking the world by storm. Elvis’s Heartbreak Hotel was topping the charts, and teenagers were embracing James Dean’s rebellious legacy.

But in 1956, in a quiet corner of New Hampshire, a different kind of revolution was happening.

The Dartmouth Summer Research Project on Artificial Intelligence[1], often remembered as the Dartmouth Conference, kicked off on June 18 and lasted for about eight weeks. It was the brainchild of four American computer scientists – John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon – and brought together some of the brightest minds in computer science, mathematics and cognitive psychology at the time.

These scientists, along with some of the 47 people they invited, set out to tackle an ambitious goal: to make intelligent machines.

As McCarthy put it in the conference proposal[2], they aimed to find out “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans”.

Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge and Ray Solomonoff were among those who attended the Dartmouth Conference on artificial intelligence in 1956. Joe Mehling, CC BY[3][4]

The birth of a field – and a problematic name

The Dartmouth Conference didn’t just coin the term “artificial intelligence”; it coalesced an entire field of study. It’s like a mythical Big Bang of AI – everything we know about machine learning, neural networks and deep learning now traces its origins back to that summer in New Hampshire.

But the legacy of that summer is complicated.

“Artificial intelligence” won out as a name over other terms proposed or in use at the time. Shannon preferred “automata studies”, while two other conference participants (and soon-to-be creators of the first AI program), Allen Newell and Herbert Simon, continued to use “complex information processing” for some years afterwards.

But here’s the thing: having settled on “artificial intelligence”, we can’t seem to stop comparing AI to human intelligence, no matter how hard we try.

This comparison is both a blessing and a curse.

On the one hand, it drives us to create AI systems that can match or exceed human performance in specific tasks. We celebrate when AI outperforms humans in games such as chess or Go, or when it can detect cancer in medical images with greater accuracy than human doctors.

On the other hand, this constant comparison leads to misconceptions.

When a computer beats a human at Go[5], it is easy to jump to the conclusion that machines are now smarter than us in all aspects – or that we are at least well on our way to creating such intelligence. But AlphaGo is no closer to writing poetry than a calculator.

And when a large language model sounds human, we start wondering if it is sentient[6].

But ChatGPT is no more alive than a talking ventriloquist’s dummy.

The overconfidence trap

The scientists at the Dartmouth Conference were incredibly optimistic about the future of AI. They were convinced they could solve the problem of machine intelligence in a single summer.

In 2006, a commemorative plaque marked the 50th anniversary of the Dartmouth Summer Research Project on Artificial Intelligence. Joe Mehling, CC BY[7][8]

This overconfidence has been a recurring theme in AI development, and it has led to several cycles of hype and disappointment.

Simon stated in 1965[9] that “machines will be capable, within 20 years, of doing any work a man can do”. Minsky predicted in 1967[10] that “within a generation […] the problem of creating ‘artificial intelligence’ will substantially be solved”.

Popular futurist Ray Kurzweil now predicts[11] it’s only five years away: “we’re not quite there, but we will be there, and by 2029 it will match any person”.

Reframing our thinking: new lessons from Dartmouth

So, how can AI researchers, AI users, governments, employers and the broader public move forward in a more balanced way?

A key step is embracing the difference and utility of machine systems. Instead of focusing on the race to “artificial general intelligence”, we can focus on the unique strengths of the systems we have built[12] – for example, the enormous creative capacity of image models.

Shifting the conversation from automation to augmentation is also important. Rather than pitting humans against machines, let’s focus on how AI can assist and augment human capabilities[13].

Let’s also emphasise ethical considerations. The Dartmouth participants didn’t spend much time discussing the ethical implications of AI. Today, we know better, and must do better.

We must also refocus research directions. Let’s emphasise research into AI interpretability and robustness, encourage interdisciplinary AI research, and explore new paradigms of intelligence that aren’t modelled on human cognition.

Finally, we must manage our expectations about AI. Sure, we can be excited about its potential. But we must also have realistic expectations, so that we can avoid the disappointment cycles of the past.

As we look back at that summer camp 68 years ago, we can celebrate the vision and ambition of the Dartmouth Conference participants. Their work laid the foundation for the AI revolution we’re experiencing today.

By reframing our approach to AI – emphasising utility, augmentation, ethics and realistic expectations – we can honour the legacy of Dartmouth while charting a more balanced and beneficial course for the future of AI.

After all, the real intelligence lies not just in creating smart machines, but in how wisely we choose to use and develop them.

References

  1. ^ Dartmouth Summer Research Project on Artificial Intelligence (dl.acm.org)
  2. ^ McCarthy put it in the conference proposal (dl.acm.org)
  3. ^ Joe Mehling (www.semanticscholar.org)
  4. ^ CC BY (creativecommons.org)
  5. ^ computer beats a human at Go (www.scientificamerican.com)
  6. ^ we start wondering if it is sentient (www.scientificamerican.com)
  7. ^ Joe Mehling (www.semanticscholar.org)
  8. ^ CC BY (creativecommons.org)
  9. ^ Simon stated in 1965 (www.jstor.org)
  10. ^ Minsky predicted in 1967 (dl.acm.org)
  11. ^ Ray Kurzweil now predicts (www.independent.co.uk)
  12. ^ the unique strengths of the systems we have built (www.sciencedirect.com)
  13. ^ how AI can assist and augment human capabilities (theconversation.com)

Read more https://theconversation.com/ai-was-born-at-a-us-summer-camp-68-years-ago-heres-why-that-event-still-matters-today-237205
