OpenAI will put ads in ChatGPT. This opens a new door for dangerous influence

  • Written by Raffaele F Ciriello, Senior Lecturer in Business Information Systems, University of Sydney

OpenAI has announced[1] plans to introduce advertising in ChatGPT in the United States. Ads will appear on the free version and the low-cost Go tier, but not for Pro, Business, or Enterprise subscribers.

The company says ads will be clearly separated from chatbot responses and will not influence outputs. It has also pledged not to sell user conversations, to let users turn off personalised ads, and to avoid ads for users under 18 or around sensitive topics such as health and politics.

Still, the move has raised concerns[2] among some users. The key question is whether OpenAI’s voluntary safeguards will hold once advertising becomes central to its business.

Why ads in AI were always likely

We’ve seen this before. Fifteen years ago, social media platforms struggled to turn vast audiences into profit.

The breakthrough came with targeted advertising: tailoring ads to what users search for, click on, and pay attention to. This model became the dominant revenue source for Google[3] and Facebook[4], reshaping their services so they maximised user engagement.

Read more: Why is the internet overflowing with rubbish ads – and what can we do about it?[5]

Large-scale artificial intelligence (AI) is extremely expensive[6]. Training and running advanced models requires vast data centres, specialised chips, and constant engineering. Despite rapid user growth, many AI firms still operate at a loss. OpenAI alone expects to burn US$115 billion over the next five years[7].

Only a few companies can absorb these costs. For most AI providers, a scalable revenue model is urgent, and targeted advertising is the obvious answer: it remains the most reliable way to profit from large audiences.

What history teaches us about OpenAI’s promises

OpenAI says[8] it will keep ads separate from answers and protect user privacy. These assurances may sound comforting, but, for now, they rest on vague and easily reinterpreted commitments.

The company proposes not to show ads “near sensitive or regulated topics like health, mental health or politics”, yet offers little clarity[9] about what counts as “sensitive,” how broadly “health” will be defined, or who decides where the boundaries lie.

Most real-world conversations with AI will sit outside these narrow categories. So far, OpenAI has not said which advertising categories will be allowed or excluded. If no restrictions are placed on ad content, it is easy to picture a user asking “how to wind down after a stressful day” being shown alcohol delivery ads, or a query about “fun weekend ideas” surfacing gambling promotions.

These products are linked to recognised health and social harms[10]. Placed beside personalised guidance at the moment of decision-making, such ads can steer behaviour in subtle but powerful ways, even when no explicit health issue is discussed.

Similar promises[11] about guardrails marked the early years of social media. History shows[12] how self-regulation weakens under commercial pressure, ultimately benefiting companies while leaving users exposed to harm.

Advertising incentives have a long record of undermining the public interest. The Cambridge Analytica scandal[13] exposed how personal data collected for ads could be repurposed for political influence. The “Facebook files[14]” revealed that Meta knew its platforms were causing serious harms, including to teenage mental health, but resisted changes that threatened advertising revenue.

More recent investigations[15] show Meta continues to generate revenue from scam and fraudulent ads even after being warned about their harms.

Why chatbots raise the stakes

Chatbots are not merely another social media feed. People use them[16] in intimate, personal ways for advice, emotional support and private reflection. These interactions feel discreet and non-judgmental, and often prompt disclosures people would not make publicly.

That trust amplifies persuasion in ways social media does not: people consult chatbots precisely when they are seeking help and making decisions. Even with formal separation from responses, ads will appear in a private, conversational setting rather than a public feed.

Messages placed beside personalised guidance – about products, lifestyle choices, finances or politics – are likely to be more influential than the same ads seen while browsing.

As OpenAI positions ChatGPT as a “super assistant[17]” for everything from finances to health[18], the line between advice and persuasion blurs.

For scammers and autocrats, the appeal of a more powerful propaganda tool is clear. For AI providers, the financial incentives to accommodate them will be hard to resist.

The root problem is a structural conflict of interest. Advertising models reward platforms for maximising engagement, yet the content that best sustains attention is often misleading, emotionally charged or harmful to health.

This is why voluntary restraint by online platforms has repeatedly failed.

Is there a better way forward?

One option is to treat AI as digital public infrastructure[19]: essential systems designed to serve the public rather than maximise advertising revenue.

This need not exclude private firms. It requires at least one high-quality public option[20], democratically overseen – akin to public broadcasters alongside commercial media.

Elements of this model already exist. Switzerland developed the publicly funded AI system Apertus[21] through its universities and national supercomputing centre. It is open source, compliant with European AI law, and free from advertising.

Australia could go further. Alongside building our own AI tools, regulators could impose clear rules on commercial providers: mandating transparency, banning health-harming or political advertising, and enforcing penalties – including shutdowns – for serious breaches.

Advertising did not corrupt social media overnight. It slowly changed incentives[22] until public harm became the collateral damage of private profit. Bringing it into conversational AI risks repeating the mistake, this time in systems people trust far more deeply.

The key question is not technical but political: should AI serve the public, or advertisers and investors?

References

  1. ^ OpenAI has announced (openai.com)
  2. ^ has raised concerns (www.reddit.com)
  3. ^ Google (s206.q4cdn.com)
  4. ^ Facebook (investor.atmeta.com)
  5. ^ Why is the internet overflowing with rubbish ads – and what can we do about it? (theconversation.com)
  6. ^ extremely expensive (www.reuters.com)
  7. ^ over the next five years (www.reuters.com)
  8. ^ OpenAI says (openai.com)
  9. ^ offers little clarity (help.openai.com)
  10. ^ linked to recognised health and social harms (iris.who.int)
  11. ^ Similar promises (www.hrlc.org.au)
  12. ^ History shows (issueone.org)
  13. ^ Cambridge Analytica scandal (bipartisanpolicy.org)
  14. ^ Facebook files (www.wsj.com)
  15. ^ More recent investigations (www.reuters.com)
  16. ^ People use them (hbr.org)
  17. ^ super assistant (openai.com)
  18. ^ health (www.theguardian.com)
  19. ^ digital public infrastructure (thepolicymaker.appi.org.au)
  20. ^ public option (doi.org)
  21. ^ Apertus (ethz.ch)
  22. ^ slowly changed incentives (theconversation.com)

Read more https://theconversation.com/openai-will-put-ads-in-chatgpt-this-opens-a-new-door-for-dangerous-influence-273806
