
Google is rolling out its Gemini AI chatbot to kids under 13. It’s a risky move

  • Written by Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University

Google has announced it will roll out its Gemini artificial intelligence (AI) chatbot[1] to children under the age of 13.

While the rollout starts within the next week in the United States and Canada, the chatbot will launch in Australia[2] later this year. It will only be available to people via Google’s Family Link accounts[3].

But this development comes with major risks. It also highlights how, even if children are banned from social media, parents will still have to play a game of whack-a-mole with new technologies as they try to keep their children safe.

A good way to address this would be to urgently implement a digital duty of care for big tech companies such as Google.

How will the Gemini AI chatbot work?

Google’s Family Link accounts[4] allow parents to control access to content and apps, such as YouTube.

To create a child’s account, parents provide personal details[5], including the child’s name and date of birth. This may raise privacy concerns for parents worried about data breaches, but Google says children’s data will not be used to train the AI system[6].

Chatbot access will be “on” by default, so parents need to actively turn the feature off to restrict access. Young children will be able to prompt the chatbot for text responses, or to create images, which are generated by the system.

Google acknowledges[7] the system may “make mistakes”. So assessing the quality and trustworthiness of its content is essential. Chatbots can make up information (known as “hallucinating[8]”), so if children use the chatbot for homework help, they need to check facts against reliable sources.

What kinds of information will the system provide?

Google and other search engines retrieve original materials for people to review. A student can read news articles, magazines and other sources when writing up an assignment.

Generative AI tools are not the same as search engines. AI tools look for patterns in source material and create new text responses (or images) based on the query – or “prompt” – a person provides. A child could ask the system to “draw a cat” and the system will scan for patterns in the data of what a cat looks like (such as whiskers, pointy ears, and a long tail) and generate an image that includes those cat-like details.

Understanding the differences between materials retrieved in a Google search and content generated by an AI tool will be challenging for young children. Studies show even adults can be deceived by AI tools[9]. And even highly skilled professionals – such as lawyers[10] – have reportedly been fooled into using fake content generated by ChatGPT and other chatbots.

Will the content generated be age-appropriate?

Google says the system will include “built-in safeguards designed to prevent the generation of inappropriate or unsafe content[11]”.

However, these safeguards could create new problems. For example, if particular words (such as “breasts”) are restricted to protect children from accessing inappropriate sexual content, this could mistakenly also exclude children from accessing age-appropriate content about bodily changes during puberty.

Many children are also very tech-savvy[12], often with well-developed skills for navigating apps and getting around system controls. Parents cannot rely exclusively on inbuilt safeguards. They need to review generated content and help their children understand how the system works, and assess whether content is accurate.

Google says there will be safeguards to minimise the risk of harm for children using Gemini, but these could create new problems. Dragos Asaeftei/Shutterstock[13]

What risks do AI chatbots pose to children?

Australia’s eSafety Commissioner[14] has issued an online safety advisory on the potential risks of AI chatbots, including those designed to simulate personal relationships, particularly for young children.

The eSafety advisory explains AI companions can “share harmful content, distort reality and give advice that is dangerous”. The advisory highlights the risks for young children, in particular, who “are still developing the critical thinking and life skills needed to understand how they can be misguided or manipulated by computer programs, and what to do about it”.

My research team has recently examined a range of AI chatbots, such as ChatGPT, Replika, and Tessa[15]. We found these systems mirror people’s interactions based on the many unwritten rules that govern social behaviour – or, what are known as “feeling rules”. These rules are what lead us to say “thank you” when someone holds the door open for us, or “I’m sorry!” when we bump into someone on the street.

By mimicking these and other social niceties, these systems are designed to gain our trust.

These human-like interactions will be confusing, and potentially risky, for young children. They may believe content can be trusted, even when the chatbot is responding with fake information. And they may believe they are engaging with a real person rather than a machine.

AI chatbots such as Gemini are designed to mimic human behaviour and gain our trust. Ground Picture[16]

How can we protect kids from harm when using AI chatbots?

This rollout is happening at a crucial time in Australia, as children under 16 will be banned[17] from holding social media accounts in December this year.

While some parents may believe this will keep their children safe from harm, generative AI chatbots show the risks of online engagement extend far beyond social media. Children – and parents – must be educated in how all types of digital tools can be used appropriately and safely.

As Gemini is not a social media tool, it will fall outside Australia’s ban.

This leaves Australian parents playing a game of whack-a-mole with new technologies as they try to keep their children safe. Parents must keep up with new tool developments and understand the potential risks their children face. They must also understand the limitations of the social media ban in protecting children from harm.

This highlights the urgent need to revisit Australia’s proposed digital duty of care[18] legislation. While the European Union and United Kingdom introduced digital duty of care laws in 2023, Australia’s has been on hold since November 2024. Such legislation would hold technology companies to account by requiring them to deal with harmful content at its source, to protect everyone.

References

  1. ^ Gemini artificial intelligence (AI) chatbot (www.nytimes.com)
  2. ^ launch in Australia (www.abc.net.au)
  3. ^ Family Link accounts (families.google)
  4. ^ Family Link accounts (families.google)
  5. ^ parents provide personal details (www.techbusinessnews.com.au)
  6. ^ will not be used to train the AI system (www.theverge.com)
  7. ^ Google acknowledges (support.google.com)
  8. ^ hallucinating (theconversation.com)
  9. ^ deceived by AI tools (www.sciencedirect.com)
  10. ^ such as lawyers (www.abc.net.au)
  11. ^ built-in safeguards designed to prevent the generation of inappropriate or unsafe content (www.techbusinessnews.com.au)
  12. ^ tech-savvy (www.theguardian.com)
  13. ^ Dragos Asaeftei/Shutterstock (www.shutterstock.com)
  14. ^ eSafety Commission (www.esafety.gov.au)
  15. ^ AI chatbots, such as ChatGPT, Replika, and Tessa (osf.io)
  16. ^ Ground Picture (www.shutterstock.com)
  17. ^ children under 16 will be banned (theconversation.com)
  18. ^ digital duty of care (theconversation.com)

Read more https://theconversation.com/google-is-rolling-out-its-gemini-ai-chatbot-to-kids-under-13-its-a-risky-move-256204
