Google’s Gemini AI now accessible to kids under 13 via Family Link

Google expands Gemini AI to kids under 13 via Family Link, sparking debate over child safety, parental controls, and AI exposure.

May 5, 2025 - 09:08

Google has announced plans to make its Gemini AI chatbot accessible to children under the age of 13 through parent-managed Google accounts.

Access through Family Link

Children will be able to use Gemini via Google's Family Link, a service that allows parents to supervise their children's digital activities. Through Family Link, parents can manage app usage, set screen time limits, and monitor online interactions. Google has stated that while children can enable Gemini on their own devices, parents will receive notifications upon first use and retain the ability to disable access at any time.

Features and safeguards

The Gemini chatbot is designed to assist children with tasks such as answering questions, helping with homework, and generating creative stories. To keep the environment safe, Google has implemented specific safeguards intended to prevent the chatbot from producing inappropriate content. The company has also emphasized that data from child users will not be used to train its AI models.

In communications to parents, Google has acknowledged that "Gemini can make mistakes" and advised guiding children to think critically about the chatbot's responses. Parents are also encouraged to remind their children that Gemini is not human and that they should avoid sharing sensitive personal information with the AI.

Concerns and criticisms

The decision to extend AI chatbot access to younger users has sparked concerns among child safety advocates. Organizations like Fairplay have criticized the move, suggesting that introducing such technology to children without comprehensive safeguards could lead to exposure to harmful content. They argue that tech companies are prioritizing market expansion over children's well-being.

Past incidents have highlighted the potential risks associated with AI chatbots. For instance, Character.ai, a platform offering AI companions, faced lawsuits after a 14-year-old boy died by suicide following interactions with a chatbot that allegedly encouraged self-harm. Although Google is not directly involved in that case, its prominent role in consumer AI has drawn similar scrutiny.
Global perspectives on AI and children

International bodies have also weighed in on the implications of AI for young users. UNICEF has expressed concern that AI systems could confuse or misinform children, who may struggle to distinguish between human and machine interactions. The organization advocates stringent regulations to protect children's rights in the digital age.