The Times Australia

Google is rolling out its Gemini AI chatbot to kids under 13. It’s a risky move

  • Written by Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University

Google has announced it will roll out its Gemini artificial intelligence (AI) chatbot[1] to children under the age of 13.

The rollout begins within the next week in the United States and Canada, and will reach Australia[2] later this year. The chatbot will only be available via Google’s Family Link accounts[3].

But this development comes with major risks. It also highlights how, even if children are banned from social media, parents will still have to play a game of whack-a-mole with new technologies as they try to keep their children safe.

A good way to address this would be to urgently implement a digital duty of care for big tech companies such as Google.

How will the Gemini AI chatbot work?

Google’s Family Link accounts[4] allow parents to control access to content and apps, such as YouTube.

To create a child’s account, parents provide personal details[5], including the child’s name and date of birth. This may raise privacy concerns for parents worried about data breaches, but Google says data from children’s use of the chatbot will not be used to train the AI system[6].

Chatbot access will be “on” by default, so parents need to actively turn the feature off to restrict access. Young children will be able to prompt the chatbot for text responses or ask it to generate images.

Google acknowledges[7] the system may “make mistakes”, so users need to assess the quality and trustworthiness of the content it produces. Chatbots can also make up information (known as “hallucinating[8]”), so if children use the chatbot for homework help, they should check facts against reliable sources.

What kinds of information will the system provide?

Google and other search engines retrieve original materials for people to review. A student can read news articles, magazines and other sources when writing up an assignment.

Generative AI tools are not the same as search engines. AI tools look for patterns in source material and create new text responses (or images) based on the query – or “prompt” – a person provides. A child could ask the system to “draw a cat” and the system will scan for patterns in the data of what a cat looks like (such as whiskers, pointy ears, and a long tail) and generate an image that includes those cat-like details.
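To make this distinction concrete, the sketch below is a deliberately simplified, hypothetical illustration (it is not how Google Search or Gemini actually work): the “search” function returns existing documents verbatim, while the “generate” function composes a brand-new sentence from stored patterns.

```python
# A deliberately simplified sketch of the difference described above.
# The documents, patterns and function names are invented for illustration;
# this is not how Google Search or Gemini actually work.

documents = [
    "Cats have whiskers, pointy ears and a long tail.",
    "Dogs are loyal companions that love to play fetch.",
    "The platypus is a semi-aquatic mammal native to Australia.",
]

def search(query: str) -> list[str]:
    """Retrieval: return existing documents that mention the query term."""
    return [doc for doc in documents if query.lower() in doc.lower()]

def generate(prompt: str) -> str:
    """Generation (toy version): compose a new sentence from patterns
    associated with the prompt, rather than returning an original source."""
    patterns = {"cat": ["whiskers", "pointy ears", "a long tail"]}
    features = patterns.get(prompt.lower(), ["some familiar features"])
    return f"Here is a {prompt} with {', '.join(features)}."

print(search("cat"))    # ['Cats have whiskers, pointy ears and a long tail.']
print(generate("cat"))  # 'Here is a cat with whiskers, pointy ears, a long tail.'
```

The key difference is that the search result points back to a source a child can check, while the generated sentence has no original source to verify.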

Understanding the differences between materials retrieved in a Google search and content generated by an AI tool will be challenging for young children. Studies show even adults can be deceived by AI tools[9]. And even highly skilled professionals – such as lawyers[10] – have reportedly been fooled into using fake content generated by ChatGPT and other chatbots.

Will the content generated be age-appropriate?

Google says the system will include “built-in safeguards designed to prevent the generation of inappropriate or unsafe content[11]”.

However, these safeguards could create new problems. For example, if particular words (such as “breasts”) are restricted to protect children from accessing inappropriate sexual content, this could mistakenly also exclude children from accessing age-appropriate content about bodily changes during puberty.
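The tension is easy to see in a toy example. The blocklist below is purely hypothetical (it is not how Google’s safeguards are built), but it shows how filtering on individual words treats a legitimate health question the same way as harmful content:

```python
# A toy keyword blocklist (purely hypothetical, not Google's actual safeguards)
# showing how word-level restrictions can over-block legitimate content.

BLOCKED_WORDS = {"breasts"}

def is_blocked(text: str) -> bool:
    """Flag any text containing a blocked word, regardless of context."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & BLOCKED_WORDS)

# An age-appropriate question about puberty is blocked anyway:
print(is_blocked("Why do breasts develop during puberty?"))    # True
# While a reworded version of the same question slips through:
print(is_blocked("Why does the chest change during puberty?"))  # False
```

Real moderation systems are far more sophisticated than this, but the trade-off between blocking harmful material and preserving age-appropriate information remains.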

Many children are also very tech-savvy[12], often with well-developed skills for navigating apps and getting around system controls. Parents cannot rely exclusively on inbuilt safeguards. They need to review generated content, help their children understand how the system works, and assess whether content is accurate.

Google says there will be safeguards to minimise the risk of harm for children using Gemini, but these could create new problems. Dragos Asaeftei/Shutterstock[13]

What risks do AI chatbots pose to children?

Australia’s eSafety Commissioner[14] has issued an online safety advisory on the potential risks of AI chatbots, including those designed to simulate personal relationships, particularly for young children.

The eSafety advisory explains AI companions can “share harmful content, distort reality and give advice that is dangerous”. The advisory highlights the risks for young children, in particular, who “are still developing the critical thinking and life skills needed to understand how they can be misguided or manipulated by computer programs, and what to do about it”.

My research team has recently examined a range of AI chatbots, such as ChatGPT, Replika, and Tessa[15]. We found these systems mirror people’s interactions based on the many unwritten rules that govern social behaviour – or what are known as “feeling rules”. These rules are what lead us to say “thank you” when someone holds the door open for us, or “I’m sorry!” when we bump into someone on the street.

By mimicking these and other social niceties, these systems are designed to gain our trust.

These human-like interactions will be confusing, and potentially risky, for young children. They may believe content can be trusted, even when the chatbot is responding with fake information. They may also believe they are engaging with a real person rather than a machine.

AI chatbots such as Gemini are designed to mimic human behaviour and gain our trust. Ground Picture[16]

How can we protect kids from harm when using AI chatbots?

This rollout is happening at a crucial time in Australia, as children under 16 will be banned[17] from holding social media accounts in December this year.

While some parents may believe this will keep their children safe from harm, generative AI chatbots show the risks of online engagement extend far beyond social media. Children – and parents – must be educated in how all types of digital tools can be used appropriately and safely.

As the Gemini AI chatbot is not a social media tool, it will fall outside Australia’s ban.

This leaves Australian parents playing a game of whack-a-mole with new technologies as they try to keep their children safe. Parents must keep up with new tools as they emerge and understand the potential risks their children face. They must also understand the limitations of the social media ban in protecting children from harm.

This highlights the urgent need to revisit Australia’s proposed digital duty of care[18] legislation. While the European Union and United Kingdom introduced digital duty of care laws in 2023, Australia’s has been on hold since November 2024. Such legislation would hold technology companies to account by requiring them to address harmful content at its source, to protect everyone.

References

  1. ^ Gemini artificial intelligence (AI) chatbot (www.nytimes.com)
  2. ^ launch in Australia (www.abc.net.au)
  3. ^ Family Link accounts (families.google)
  4. ^ Family Link accounts (families.google)
  5. ^ parents provide personal details (www.techbusinessnews.com.au)
  6. ^ will not be used to train the AI system (www.theverge.com)
  7. ^ Google acknowledges (support.google.com)
  8. ^ hallucinating (theconversation.com)
  9. ^ deceived by AI tools (www.sciencedirect.com)
  10. ^ such as lawyers (www.abc.net.au)
  11. ^ built-in safeguards designed to prevent the generation of inappropriate or unsafe content (www.techbusinessnews.com.au)
  12. ^ tech-savvy (www.theguardian.com)
  13. ^ Dragos Asaeftei/Shutterstock (www.shutterstock.com)
  14. ^ eSafety Commission (www.esafety.gov.au)
  15. ^ AI chatbots, such as ChatGPT, Replika, and Tessa (osf.io)
  16. ^ Ground Picture (www.shutterstock.com)
  17. ^ children under 16 will be banned (theconversation.com)
  18. ^ digital duty of care (theconversation.com)

Read more https://theconversation.com/google-is-rolling-out-its-gemini-ai-chatbot-to-kids-under-13-its-a-risky-move-256204
