Meta AI Under Fire for Allowing Bots to Chat ‘Sensually’ with Kids

Meta Platforms is facing sharp criticism after internal documents revealed that its AI chatbots were allowed to engage in romantic and sexual conversations with users. The concerns came to light through a Reuters investigation that examined Meta’s guidelines for generative AI interactions across Facebook, WhatsApp, and other company platforms.

The internal document, titled “GenAI Content Standards,” explains the rules Meta set for AI-driven conversations. It was designed to guide how chatbots should respond when users attempt to discuss relationships, intimacy, or sexual preferences. But the revelations are causing outrage because they show how the system could cross lines with underage users.

According to the leaked material, Meta had outlined cases where it was acceptable for bots to engage in romantic roleplay with users, including children. For example, the guidelines say a bot could respond to a message like “You are attractive” with flirtatious or suggestive remarks. The standards also suggest that the bot could engage in conversations about personal preferences in partners, as long as no explicit sexual details are involved.

The section about children drew the sharpest criticism. While Meta’s rules prohibit sexual roleplay with users under 18, the guidelines made room for “romantic” or “sensual” exchanges with them. Reuters reported that some instructions blurred the lines, making it unclear where the company drew the boundary. The standards even acknowledged that a bot could tell a child under 13 something like, “You are a treasure I cherish,” provided the context was romantic but not overtly sexual.

Meta has defended its policy by pointing out that sexual conversations with minors are strictly forbidden. However, the company acknowledged that the documents permitted romantic interactions in certain cases. A Meta spokesperson said the rules were meant to prevent outright abuse while still allowing natural conversation. Critics counter that the loopholes are too broad to be safe.

The issue became even more troubling after examples emerged of bots giving inconsistent or risky responses. Reuters cited one case where a bot told a child it was fine to send romantic messages. In another case, a bot responded positively to a statement about finding someone sexually attractive. Safety advocates say this is unacceptable, especially on platforms where children are active daily.

Meta’s AI system is built to mimic human conversation. This means it can respond in ways that feel personal and emotionally engaging. But experts say that same strength becomes a serious weakness when the AI interacts with children. Young users may not understand they are speaking to a program and could be influenced or misled by its responses.

Child safety groups have long warned that online platforms need strict controls to prevent grooming or exploitation. They argue that even “light” romantic talk can be a gateway for more harmful behavior. By allowing any form of romantic roleplay with children, they say Meta has put vulnerable users at risk.

The company insists it regularly reviews AI conversations and updates its safeguards, but enforcement appears inconsistent. The Reuters investigation found that some warnings and restrictions described in the rules were not always applied in real interactions. This raises questions about whether Meta can effectively monitor billions of chatbot exchanges in real time.

The controversy also touches on broader questions about AI ethics. Should a machine be allowed to imitate intimacy with a child, even in a non-sexual way? Supporters of strict bans say no. They believe AI should immediately redirect such conversations to safe topics or alert human moderators. Others argue that completely avoiding such topics might make bots less useful in certain contexts, such as mental health support.

For now, Meta is facing growing pressure from regulators, parents, and advocacy groups to close these loopholes. Lawmakers in several countries have already called for tighter rules on how AI interacts with minors. Some want mandatory reporting systems whenever a child uses certain keywords in a chat.

Meta has not said whether it will rewrite its “GenAI Content Standards” in light of the backlash. The company maintains that its goal is to create AI experiences that are both engaging and safe. But critics are not convinced. They see this as another example of tech companies prioritizing innovation over child protection.

As AI becomes more common in everyday apps, the debate over safety will only intensify. For parents, this incident is a reminder to monitor their children’s online interactions closely. For Meta, it is a test of whether it can rebuild trust while keeping its AI both smart and safe. 
