No, AI probably won’t kill us all – and there’s more to this fear campaign than meets the eye

  • Written by Michael Timothy Bennett, PhD Student, School of Computing, Australian National University

Doomsaying is an old occupation. Artificial intelligence (AI) is a complex subject. It’s easy to fear what you don’t understand. These three truths go some way towards explaining the oversimplification and dramatisation plaguing discussions about AI.

Yesterday, outlets around the world were plastered with news of yet another open letter claiming[1] AI poses an existential threat to humankind. This letter, published through the nonprofit Center for AI Safety, has been signed by industry figureheads including Geoffrey Hinton[2] and the chief executives of Google DeepMind, OpenAI and Anthropic.

However, I’d argue a healthy dose of scepticism is warranted when considering the AI doomsayer narrative. Upon close inspection, we see there are commercial incentives to manufacture fear in the AI space.

And as a researcher of artificial general intelligence (AGI), I find the framing of AI as an existential threat has more in common with 17th-century philosophy than with computer science.

Read more: AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?[3]

Was ChatGPT a ‘breakthrough’?

When ChatGPT was released late last year, people were delighted, entertained and horrified.

But ChatGPT isn’t a research breakthrough as much as it is a product. The technology it’s based on is several years old. An early version of its underlying model, GPT-3, was released in 2020 with many of the same capabilities. It just wasn’t easily accessible online for everyone to play with.

Back in 2020 and 2021, I and many others[4] wrote papers discussing the capabilities and shortcomings of GPT-3 and similar models – and the world carried on as always. Fast-forward to today, and ChatGPT has had an incredible impact on society. What changed?

In March, Microsoft researchers published a paper[5] claiming GPT-4 showed “sparks of artificial general intelligence”. AGI is the subject of a variety of competing definitions, but for the sake of simplicity can be understood as AI with human-level intelligence.

Some immediately interpreted the Microsoft research as saying GPT-4 is an AGI. By the definitions of AGI I’m familiar with, this is certainly not true. Nonetheless, it added to the hype and furore, and it was hard not to get caught up in the panic. Scientists are no more immune to groupthink[6] than anyone else.

The same day that paper was submitted, the Future of Life Institute published an open letter[7] calling for a six-month pause on training AI models more powerful than GPT-4, to allow everyone to take stock and plan ahead. Some of the AI luminaries who signed it expressed concern that AGI poses an existential threat to humans, and that ChatGPT is too close to AGI for comfort.

Soon after, prominent AI safety researcher Eliezer Yudkowsky – who has been commenting on the dangers of superintelligent AI since well before[8] 2020 – took things a step further. He claimed[9] we were on a path to building a “superhumanly smart AI”, in which case “the obvious thing that would happen” is “literally everyone on Earth will die”. He even suggested countries need to be willing to risk nuclear war to enforce compliance with AI regulation across borders.

I don’t consider AI an imminent existential threat

One aspect of AI safety research is to address the potential dangers AGI might present. It’s a difficult topic to study because there is little agreement on what intelligence is and how it functions, let alone what a superintelligence might entail. As such, researchers must rely as much on speculation and philosophical argument as on evidence and mathematical proof.

Read more: Has GPT-4 really passed the startling threshold of human-level artificial intelligence? Well, it depends[10]

There are two reasons I’m not concerned by ChatGPT and its byproducts[11].

First, it isn’t even close to the sort of artificial superintelligence that might conceivably pose a threat to humankind. The models underpinning it are slow learners that require immense volumes of data to construct anything akin to the versatile concepts humans can concoct from only a few examples. In this sense, it’s not “intelligent”.

Second, many of the more catastrophic AGI scenarios depend on premises I find implausible. For instance, there seems to be a prevailing (but unspoken) assumption that sufficient intelligence amounts to limitless real-world power. If this were true, more scientists would be billionaires.

Cognition, as we understand it in humans, takes place as part of a physical environment (which includes our bodies) – and this environment imposes limitations. The concept of AI as a “software mind” unconstrained by hardware has more in common with 17th-century dualism[12] (the idea that the mind and body are separable) than with contemporary theories of the mind existing as part of the physical world[13].

Why the sudden concern?

Still, doomsaying is old hat, and the events of the last few years probably haven’t helped. But there may be more to this story than meets the eye.

Among the prominent figures calling for AI regulation, many work for or have ties to incumbent AI companies. This technology is useful, and there is money and power at stake – so fearmongering presents an opportunity.

Almost everything involved in building ChatGPT has been published in research anyone can access. OpenAI’s competitors can replicate the process (and already have), and it won’t be long before free and open-source alternatives flood the market.

This point was made clearly in a memo purportedly leaked[14] from Google entitled “We have no moat, and neither does OpenAI”. A “moat” is business jargon for a way to protect your company from competitors.

Yann LeCun, who leads AI research at Meta, says these models should be open since they will become public infrastructure. He and many others are unconvinced by the AGI doom[15] narrative.

Notably, Meta wasn’t invited[16] when US President Joe Biden recently met with the leadership of Google DeepMind and OpenAI. That’s despite the fact that Meta is almost certainly a leader in AI research; it produced PyTorch, the machine-learning framework OpenAI used to make GPT-3.

At the White House meetings, OpenAI chief executive Sam Altman suggested the US government should issue licences to those who are trusted to responsibly train AI models. Licences, as Stability AI chief executive Emad Mostaque puts it[17], “are a kinda moat”.

Companies such as Google, OpenAI and Microsoft have everything to lose by allowing small, independent competitors to flourish. Bringing in licensing and regulation would help cement their position as market leaders, and hamstring competition before it can emerge.

While regulation is appropriate in some circumstances, regulations that are rushed through will favour incumbents and suffocate small, free and open-source competition[18].

Read more: Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?[19]

References

  1. ^ open letter claiming (www.safe.ai)
  2. ^ Geoffrey Hinton (theconversation.com)
  3. ^ AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time? (theconversation.com)
  4. ^ others (link.springer.com)
  5. ^ published a paper (futurism.com)
  6. ^ group think (link.springer.com)
  7. ^ published an open letter (futureoflife.org)
  8. ^ since well before (intelligence.org)
  9. ^ He claimed (time.com)
  10. ^ Has GPT-4 really passed the startling threshold of human-level artificial intelligence? Well, it depends (theconversation.com)
  11. ^ byproducts (lablab.ai)
  12. ^ dualism (plato.stanford.edu)
  13. ^ part of the physical world (plato.stanford.edu)
  14. ^ purportedly leaked (www.semianalysis.com)
  15. ^ unconvinced by the AGI doom (www.businesstoday.in)
  16. ^ Meta wasn’t invited (fortune.com)
  17. ^ puts it (twitter.com)
  18. ^ free and open-source competition (www.forbes.com)
  19. ^ Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this? (theconversation.com)

Read more: https://theconversation.com/no-ai-probably-wont-kill-us-all-and-theres-more-to-this-fear-campaign-than-meets-the-eye-206614
