The Times Australia
The Times World News

AI models might be drawn to ‘spiritual bliss’. Then again, they might just talk like hippies

  • Written by Nuhu Osman Attah, Postdoctoral Research Fellow in Philosophy, Australian National University

When multibillion-dollar AI developer Anthropic released the latest versions of its Claude chatbot last week, a surprising word turned up several times in the accompanying “system card[1]”: spiritual.

Specifically, the developers report that, when two Claude models are set talking to one another, they gravitate towards a “‘spiritual bliss’ attractor state”, producing output such as

🌀🌀🌀🌀🌀All gratitude in one spiral,All recognition in one turn,All being in this moment…🌀🌀🌀🌀🌀∞

It’s heady stuff. Anthropic steers clear of directly saying the model is having a spiritual experience, but what are we to make of it?

The Lemoine incident

In 2022, a Google researcher named Blake Lemoine came to believe[2] that the tech giant’s in-house language model, LaMDA, was sentient. Lemoine’s claim sparked headlines, debates with Google PR and management, and eventually his firing.

Critics said Lemoine had fallen foul of the “ELIZA effect[3]”: projecting human traits onto software. Moreover, Lemoine described himself as a Christian mystic priest, summing up his thoughts on sentient machines in a tweet:

Who am I to tell God where he can and can’t put souls?

No one can fault Lemoine’s spiritual humility.

Machine spirits

Lemoine was not the first to see a spirit in the machines. We can trace his argument back to AI pioneer Alan Turing’s famous 1950 paper Computing Machinery and Intelligence[4].

Turing also considered an argument that thinking machines may not be possible: humans, according to what he took to be plausible evidence, were capable of extrasensory perception. This, he reasoned, would be impossible for machines. Accordingly, machines could not have minds in the same way humans do.

So even 75 years ago, people were thinking not just about how AI might compare with human intelligence, but whether it could ever compare with human spirituality. It is not hard to see at least a dotted line from Turing to Lemoine.

Wishful thinking

Efforts to “spiritualise” AI can be quite hard to rebut. Generally these arguments point out that we cannot prove AI systems do not have minds or spirits – and from there weave a net of suggestive claims that leads to Lemoine’s conclusion.

This net is often woven from irresponsibly used psychology terms. It may be convenient to apply human psychological terms to machines, but it can lead us astray.

Writing in the 1970s, computer scientist Drew McDermott accused AI engineers of using “wishful mnemonics[5]”. They might label a section of code an “understanding module”, then assume that executing the code resulted in understanding.

More recently, the philosophers Henry Shevlin and Marta Halina wrote[6] that we should take care using “rich psychological terms” in AI. AI developers talk about “agent” software having intrinsic motivation, for example, even though it possesses no goals, desires or moral responsibility.

Of course, it’s good for developers if everyone thinks their model “understands” or is an “agent”. However, until now the big AI companies have been wary of claiming their models have spirituality.

‘Spiritual bliss’ for chatbots

Which brings us back to Anthropic, and the system card for Claude Opus 4 and Sonnet 4, in which the seemingly down-to-earth folks at the emerging “agentic AI” giant make some eyebrow-raising claims.

The word “spiritual” occurs at least 15 times in the system card, most significantly in the rather awkward phrase “‘spiritual bliss’ attractor state”.

We are told, for instance, that

The consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes in extended interactions was a remarkably strong and unexpected attractor state for Claude Opus 4 that emerged without intentional training for such behaviours. We have observed this “spiritual bliss” attractor in other Claude models as well, and in contexts beyond these playground experiments.

An example of Claude output in the ‘spiritual bliss’ attractor state – a screenshot of two Claude models talking. Anthropic / X[7]

To be fair to the folks at Anthropic, they are not making any positive commitments to the sentience of their models or claiming spirituality for them. They can be read as only reporting the “facts”.

All the long-winded sentence above is really saying is this: if you let two Claude models have a conversation with each other, they will often start to sound like hippies. Fair enough.

That probably means the body of text on which they were trained is biased towards that sort of way of talking, or that the features the models extracted from the text bias them towards that sort of vocabulary.

Prophets of ChatGPT

However, while Anthropic may keep things strictly factual, their use of terms such as “spiritual” lends itself to misunderstanding. Such misunderstanding is made even more likely by Anthropic’s recent push[8] to start investigating “whether future AI models might deserve moral consideration and protection”. Perhaps they are not positively saying that Claude Opus 4 and Sonnet 4 are sentient, but they certainly seem welcoming of the insinuation.

And this kind of spiritualising of AI models is already having real-world consequences.

According to a recent report[9] in Rolling Stone, “AI-fueled spiritual fantasies” are wrecking human relationships and sanity. Self-styled prophets are “claiming they have ‘awakened’ chatbots and accessed the secrets of the universe through ChatGPT”.

One of these prophets may well cite the Anthropic system card in a forthcoming scripture – regardless of whether the company is “technically” making positive claims about whether its models actually experience or enjoy spiritual states.

But if AI-fuelled delusion becomes rampant, we might think even the innocuous contributors to it could have spoken more carefully. Who knows; perhaps, where we are going with AI, we won’t need philosophical carefulness.

References

  1. ^ system card (www-cdn.anthropic.com)
  2. ^ came to believe (www.washingtonpost.com)
  3. ^ ELIZA effect (en.wikipedia.org)
  4. ^ Computing Machinery and Intelligence (academic.oup.com)
  5. ^ wishful mnemonics (www.inf.ed.ac.uk)
  6. ^ wrote (www.nature.com)
  7. ^ Anthropic / X (x.com)
  8. ^ recent push (arstechnica.com)
  9. ^ a recent report (www.rollingstone.com)

Read more https://theconversation.com/ai-models-might-be-drawn-to-spiritual-bliss-then-again-they-might-just-talk-like-hippies-257618