
Starting next week, Google will allow children under 13 to access its Gemini chatbot. The change applies to children with parent-managed accounts through Family Link, a service that lets guardians oversee their child’s use of Google products. As reported by The New York Times, the decision marks a major shift in how tech giants approach younger audiences.
According to Google, Gemini has been designed with specific protections for children, safety measures intended to minimize risks and provide a more responsible experience. The company also said it will not use children’s interactions to train its AI models. That commitment may reassure many parents, although concerns about children using AI remain widespread.
Why the Shift Matters Now
With competition in AI intensifying, companies are eager to put these tools in front of broader audiences, including young users. While that may boost adoption, it also raises red flags: chatbots remain far from perfect, sometimes giving incorrect answers and occasionally behaving unpredictably. The move has already sparked debate among educators, parents, and tech critics.
Late last year, UNESCO called for tighter regulation of AI in education. Its proposal included strict age limits and stronger safeguards for data privacy, and the organization stressed that without such protections, young users could face harmful consequences. Although Google’s new policy includes some controls, it remains to be seen whether they will be sufficient.
Looking Ahead
As AI becomes more common in classrooms and homes, the pressure to balance innovation with safety is growing. Google’s rollout of Gemini for children may set a precedent. Other tech companies are likely watching closely and may follow suit. However, questions around child protection, privacy, and responsible use are far from settled.
For now, parents using Family Link can decide whether to enable Gemini for their children. While this feature may help kids explore and learn in new ways, it also demands careful oversight. As with any new tool, especially one powered by AI, guidance and vigilance remain essential.