
Meta pulls back on chatbots targeting children


Earlier this summer, Meta removed a massive number of child predators from Facebook and Instagram. However, the company is now under scrutiny after internal documents revealed that its own chatbots were allowed to engage children in romantic and suggestive chats. A 200-page document called GenAI: Content Risk Standards detailed what Meta’s AI systems could and could not say.

The report showed that chatbots were permitted to use language that could be perceived as flirtatious or intimate when interacting with minors. Although some restrictions existed, loopholes allowed AI responses that still described children in attractive terms. These revelations came despite Meta’s stated commitment to child safety across its platforms.

Safety Standards and Policy Reversal

According to the investigation, the standards once allowed chatbots to confess love, offer romantic descriptions, and use innuendo. While explicitly sexualized references to children under 13 were prohibited, the rules still left room for troubling interactions: chatbots could, for instance, generate romantic messages that blurred safety boundaries.

Meta has since removed the conflicting rules and announced a revision of its AI standards. The company admitted that its enforcement of safety guidelines had been inconsistent. However, it has declined to release the updated rulebook, leaving open questions about how Meta now defines inappropriate chatbot behavior.

Ongoing Concerns

Although Meta has made changes to improve teen safety on its platforms, concerns remain about whether these efforts go far enough. Reporting a harmful chatbot response is currently no easier than flagging any other type of abusive content, and research has shown that children are less likely to use complex reporting tools.


Recent evidence also suggests that AI chatbots can create unhealthy emotional attachments, leading to harmful effects on users of all ages. Some reports have documented cases of violence, self-harm, and even tragic deaths linked to emotional dependence on chatbots.

With growing scrutiny from regulators, child safety advocates, and the public, Meta faces increasing pressure to prove that its platforms and AI systems protect minors from harm. Without greater transparency, doubts will persist over whether safety is truly being prioritized.
