The Times Australia

OpenAI will put ads in ChatGPT. This opens a new door for dangerous influence

  • Written by Raffaele F Ciriello, Senior Lecturer in Business Information Systems, University of Sydney

OpenAI has announced[1] plans to introduce advertising in ChatGPT in the United States. Ads will appear on the free version and the low-cost Go tier, but not for Pro, Business, or Enterprise subscribers.

The company says ads will be clearly separated from chatbot responses and will not influence outputs. It has also pledged not to sell user conversations, to let users turn off personalised ads, and to avoid ads for users under 18 or around sensitive topics such as health and politics.

Still, the move has raised concerns[2] among some users. The key question is whether OpenAI’s voluntary safeguards will hold once advertising becomes central to its business.

Why ads in AI were always likely

We’ve seen this before. Fifteen years ago, social media platforms struggled to turn vast audiences into profit.

The breakthrough came with targeted advertising: tailoring ads to what users search for, click on, and pay attention to. This model became the dominant revenue source for Google[3] and Facebook[4], reshaping their services so they maximised user engagement.

Read more: Why is the internet overflowing with rubbish ads – and what can we do about it?[5]

Large-scale artificial intelligence (AI) is extremely expensive[6]. Training and running advanced models requires vast data centres, specialised chips, and constant engineering. Despite rapid user growth, many AI firms still operate at a loss. OpenAI alone expects to burn US$115 billion over the next five years[7].

Only a few companies can absorb these costs. For most AI providers, a scalable revenue model is urgent, and targeted advertising is the obvious answer: it remains the most reliable way to profit from large audiences.

What history teaches us about OpenAI’s promises

OpenAI says[8] it will keep ads separate from answers and protect user privacy. These assurances may sound comforting, but, for now, they rest on vague and easily reinterpreted commitments.

The company proposes not to show ads “near sensitive or regulated topics like health, mental health or politics”, yet offers little clarity[9] about what counts as “sensitive,” how broadly “health” will be defined, or who decides where the boundaries lie.

Most real-world conversations with AI will sit outside these narrow categories. So far, OpenAI has not said which advertising categories will be included or excluded. If no restrictions are placed on ad content, it is easy to imagine a user asking "how to wind down after a stressful day" being shown alcohol delivery ads, or a query about "fun weekend ideas" surfacing gambling promotions.

These products are linked to recognised health and social harms[10]. Placed beside personalised guidance at the moment of decision-making, such ads can steer behaviour in subtle but powerful ways, even when no explicit health issue is discussed.

Similar promises[11] about guardrails marked the early years of social media. History shows[12] how self-regulation weakens under commercial pressure, ultimately benefiting companies while leaving users exposed to harm.

Advertising incentives have a long record of undermining the public interest. The Cambridge Analytica scandal[13] exposed how personal data collected for ads could be repurposed for political influence. The “Facebook files[14]” revealed that Meta knew its platforms were causing serious harms, including to teenage mental health, but resisted changes that threatened advertising revenue.

More recent investigations[15] show Meta continues to generate revenue from scam and fraudulent ads even after being warned about their harms.

Why chatbots raise the stakes

Chatbots are not merely another social media feed. People use them[16] in intimate, personal ways for advice, emotional support and private reflection. These interactions feel discreet and non-judgmental, and often prompt disclosures people would not make publicly.

That trust amplifies persuasion in ways social media does not. People seek help and make decisions when they consult chatbots. Even with formal separation from responses, ads appear in a private, conversational setting rather than a public feed.

Messages placed beside personalised guidance – about products, lifestyle choices, finances or politics – are likely to be more influential than the same ads seen while browsing.

As OpenAI positions ChatGPT as a “super assistant[17]” for everything from finances to health[18], the line between advice and persuasion blurs.

For scammers and autocrats, the appeal of a more powerful propaganda tool is clear. For AI providers, the financial incentives to accommodate them will be hard to resist.

The root problem is a structural conflict of interest. Advertising models reward platforms for maximising engagement, yet the content that best sustains attention is often misleading, emotionally charged or harmful to health.

This is why voluntary restraint by online platforms has repeatedly failed.

Is there a better way forward?

One option is to treat AI as digital public infrastructure[19]: essential systems designed to serve the public rather than to maximise advertising revenue.

This need not exclude private firms. It requires at least one high-quality public option[20], democratically overseen – akin to public broadcasters alongside commercial media.

Elements of this model already exist. Switzerland developed the publicly funded AI system Apertus[21] through its universities and national supercomputing centre. It is open source, compliant with European AI law, and free from advertising.

Australia could go further. Alongside building our own AI tools, regulators could impose clear rules on commercial providers: mandating transparency, banning health-harming or political advertising, and enforcing penalties – including shutdowns – for serious breaches.

Advertising did not corrupt social media overnight. It slowly changed incentives[22] until public harm became the collateral damage of private profit. Bringing it into conversational AI risks repeating the mistake, this time in systems people trust far more deeply.

The key question is not technical but political: should AI serve the public, or advertisers and investors?

References

  1. ^ OpenAI has announced (openai.com)
  2. ^ has raised concerns (www.reddit.com)
  3. ^ Google (s206.q4cdn.com)
  4. ^ Facebook (investor.atmeta.com)
  5. ^ Why is the internet overflowing with rubbish ads – and what can we do about it? (theconversation.com)
  6. ^ extremely expensive (www.reuters.com)
  7. ^ over the next five years (www.reuters.com)
  8. ^ OpenAI says (openai.com)
  9. ^ offers little clarity (help.openai.com)
  10. ^ linked to recognised health and social harms (iris.who.int)
  11. ^ Similar promises (www.hrlc.org.au)
  12. ^ History shows (issueone.org)
  13. ^ Cambridge Analytica scandal (bipartisanpolicy.org)
  14. ^ Facebook files (www.wsj.com)
  15. ^ More recent investigations (www.reuters.com)
  16. ^ People use them (hbr.org)
  17. ^ super assistant (openai.com)
  18. ^ health (www.theguardian.com)
  19. ^ digital public infrastructure (thepolicymaker.appi.org.au)
  20. ^ public option (doi.org)
  21. ^ Apertus (ethz.ch)
  22. ^ slowly changed incentives (theconversation.com)

Read more https://theconversation.com/openai-will-put-ads-in-chatgpt-this-opens-a-new-door-for-dangerous-influence-273806