
Google Gemini AI Coming to Kids Under 13: A Risky Move?
Google is planning to roll out its Gemini AI chatbot to children under 13 who have Family Link accounts, sparking both excitement and concern among parents and experts. This move comes as tech companies fiercely compete to engage younger users with AI products. But is it a step too far? Let's delve into the details.
Google's Family Link accounts allow parents to manage their children's online activities, including access to content and apps. The company told parents in an email that their child will soon be able to ask questions, get homework help, and make up stories using Gemini. Parents provide personal details to create these accounts, but Google assures them that this data won't be used to train the AI.

While Google emphasizes that Gemini has specific safeguards for younger users, potential risks remain. The company acknowledges that the system can "make mistakes" and may produce content parents don't want their children to see. This raises critical questions about the accuracy and appropriateness of the information provided by the chatbot.
Experts point out that AI chatbots, even those with safeguards, can "share harmful content, distort reality and give advice that is dangerous". Young children, who are still developing critical thinking skills, may struggle to distinguish a chatbot's answers from reality, potentially leading to confusion and misinformation.
One potential issue is that safeguards designed to block inappropriate content could inadvertently restrict access to age-appropriate information. For example, blocking the word "breasts" might prevent children from accessing educational resources about puberty.
Moreover, the rollout of Gemini for children coincides with Australia's upcoming ban on social media for those under 16. This underscores the need for comprehensive digital safety education, extending beyond social media to encompass all types of digital tools.
Google advises parents to talk to their children about Gemini, explaining that it isn't human and that they shouldn't share sensitive information with it. Google also suggests teaching kids to fact-check the chatbot's answers, which the AI generates from sources across the internet and other data.

While Google's intentions may be to provide educational and creative tools for children, the potential downsides cannot be ignored. Will this be a beneficial learning experience, or will it open a Pandora's box of unforeseen consequences?
What are your thoughts on Google's Gemini AI for kids? Share your perspectives in the comments below. Let's discuss the future of AI and its impact on our children.