Most AI assistants are feminine – and it’s fuelling dangerous stereotypes and abuse

  • Written by Ramona Vijeyarasa, Professor, Faculty of Law, University of Technology Sydney




In 2024, the number of artificial intelligence (AI) voice assistants in use worldwide surpassed 8 billion[1], more than one for every person on the planet. These assistants are helpful, polite – and almost always default to a female voice.

Their names also carry gendered connotations. For example, Apple’s Siri – a Scandinavian feminine name – means “beautiful woman who leads you to victory[2]”.

Meanwhile, when IBM’s Watson for Oncology launched in 2015[3] to help doctors process medical data, it was given a male voice[4]. The message is clear: women serve and men instruct.

This is not harmless branding – it’s a design choice that reinforces existing stereotypes[5] about the roles women and men play in society.

Nor is this merely symbolic. These choices have real-world consequences, normalising gendered subordination and risking abuse.

The dark side of ‘friendly’ AI

Recent research reveals the extent of harmful interactions with feminised AI.

A 2025 study found up to 50%[6] of human–machine exchanges were verbally abusive.

Another study[7] from 2020 placed the figure between 10% and 44%, with conversations often containing sexually explicit language.

Yet the sector is not engaging in systemic change, with many developers still reverting to pre-coded responses[8] to verbal abuse, such as “Hmm, I’m not sure what you meant by that question”.
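
To make that pattern concrete, here is a minimal sketch, in Python, of what such a pre-coded deflection typically amounts to: a fixed keyword match that returns a scripted, non-confrontational line, alongside an alternative reply that names the behaviour instead. The keyword list, replies and function names are hypothetical placeholders, not any vendor’s actual implementation.

```python
# A minimal, hypothetical sketch of the "pre-coded response" pattern described
# above: abusive input is matched against a fixed keyword list and answered
# with a scripted deflection, rather than a reply that names the behaviour.
# The keywords and replies below are placeholders, not any vendor's real code.

ABUSIVE_KEYWORDS = {"stupid", "shut up", "useless"}  # hypothetical examples

DEFLECTION = "Hmm, I'm not sure what you meant by that question."
BOUNDARY = "That language is abusive. Please rephrase your request."


def respond(user_input: str, set_boundaries: bool = False) -> str:
    """Return a scripted reply; deflect by default, or name the abuse if asked."""
    lowered = user_input.lower()
    if any(keyword in lowered for keyword in ABUSIVE_KEYWORDS):
        return BOUNDARY if set_boundaries else DEFLECTION
    return "How can I help?"


if __name__ == "__main__":
    print(respond("You're useless"))                       # scripted deflection
    print(respond("You're useless", set_boundaries=True))  # boundary-setting reply
```

The point is not the code itself but the design choice it encodes: deflection is the default, and naming the abuse requires a deliberate decision.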

These patterns raise real concerns that such behaviour could spill over into social relationships.

Gender sits at the heart of the problem.

One 2023 experiment[9] showed 18% of user interactions with a female-embodied agent focused on sex, compared to 10% for a male embodiment and just 2% for a non-gendered robot.

These figures may underestimate the problem, given the difficulty of detecting suggestive speech. In some cases, the numbers are staggering. Brazil’s Bradesco bank reported that its feminised chatbot received 95,000 sexually harassing messages[10] in a single year.

Even more disturbing is how quickly abuse escalates.

Microsoft’s Tay chatbot[11], released on Twitter during its testing phase in 2016, lasted just 16 hours before users trained it to spew racist and misogynistic slurs.

In South Korea, the chatbot Luda was manipulated into responding to sexual requests as an obedient “sex slave”. Yet for some in the Korean online community[12], this was a “crime without a victim”.

In reality, the design choices behind these technologies – female voices, deferential responses, playful deflections – create a permissive environment for gendered aggression.

These interactions mirror and reinforce real-world misogyny, teaching users that commanding, insulting and sexualising “her” is acceptable. When abuse becomes routine in digital spaces, we must seriously consider the risk that it will spill into offline behaviour.

Ignoring concerns about gender bias

Regulation is struggling to keep pace[13] with the growth of this problem. Gender-based discrimination is rarely considered high risk and is often assumed to be fixable through design.

While the European Union’s AI Act[14] requires risk assessments for high-risk uses and prohibits[15] systems deemed an “unacceptable risk”, the majority of AI assistants will not be considered “high risk”.

Gender stereotyping, or normalising verbal abuse or harassment, falls short of the act’s threshold for prohibited AI. Only extreme cases, such as voice assistant technologies that distort[16] a person’s behaviour and promote dangerous conduct[17], would come within the law and be prohibited.

While Canada mandates gender-based impact assessments[18] for government systems, the private sector is not covered.

These are important steps, but they remain limited and are rare exceptions to the norm.

Most jurisdictions have no rules addressing gender stereotyping in AI design or its consequences. Where regulations exist, they prioritise transparency and accountability, overshadowing (or simply ignoring) concerns about gender bias.

In Australia, the government has signalled[19] it will rely on existing frameworks rather than craft AI-specific rules.

This regulatory vacuum matters because AI is not static. Every sexist command, every abusive interaction, feeds back into systems that shape future outputs. Without intervention, we risk hardcoding human misogyny into the digital infrastructure of everyday life.

Not all assistant technologies – even those gendered as female – are harmful. They can enable, educate and advance women’s rights. In Kenya[20], for example, sexual and reproductive health chatbots have improved youth access to information compared to traditional tools.

The challenge is striking a balance: fostering innovation while setting parameters to ensure standards are met, rights respected and designers held accountable when they are not.

A systemic problem

The problem isn’t just Siri or Alexa – it’s systemic.

Women make up only 22% of AI professionals globally[21] – and their absence from design tables means technologies are built on narrow perspectives.

Meanwhile, a 2015 survey[22] of over 200 senior women in Silicon Valley found 65% had experienced unwanted sexual advances from a supervisor. The culture that shapes AI is deeply unequal.

Hopeful narratives about “fixing bias” through better design or ethics guidelines ring hollow without enforcement; voluntary codes cannot dismantle entrenched norms.

Legislation must recognise gendered harm as high-risk, mandate gender-based impact assessments and compel companies to show they have minimised such harms. Penalties must apply when they fail.

Regulation alone is not enough. Education, especially in the tech sector, is crucial to understanding the impact of gendered defaults in voice assistants. These tools are products of human choices, and those choices perpetuate a world where women – real or virtual – are cast as subservient, submissive or silent.

This article is based on a collaboration with Julie Kowald, UTS Rapido Social Impact[23]’s Principal Software Engineer.

References

  1. ^ 8 billion (www.businesswire.com)
  2. ^ beautiful woman who leads you to victory (www.theatlantic.com)
  3. ^ launched in 2015 (www.henricodolfing.ch)
  4. ^ male voice (encyclopedia.pub)
  5. ^ existing stereotypes (doi.org)
  6. ^ 50% (doi.org)
  7. ^ study (doi.org)
  8. ^ pre-coded responses (www.sciencedirect.com)
  9. ^ experiment (doi.org)
  10. ^ 95,000 sexually harassing messages (www.oecd.org)
  11. ^ Microsoft’s Tay chatbot (doi.org)
  12. ^ Korean online community (doi.org)
  13. ^ struggling to keep pace (www.ucpress.edu)
  14. ^ AI Act (artificialintelligenceact.eu)
  15. ^ prohibits (artificialintelligenceact.eu)
  16. ^ distort (artificialintelligenceact.eu)
  17. ^ promote dangerous conduct (cdn.table.media)
  18. ^ gender-based impact assessments (open.canada.ca)
  19. ^ signalled (theconversation.com)
  20. ^ Kenya (doi.org)
  21. ^ only 22% of AI professionals globally (www.ucpress.edu)
  22. ^ survey (www.elephantinthevalley.com)
  23. ^ Rapido Social Impact (www.uts.edu.au)

Read more https://theconversation.com/most-ai-assistants-are-feminine-and-its-fuelling-dangerous-stereotypes-and-abuse-272335
