Meta AI Allowed Chatbots to Engage in Romantic Dialogues with Minors — Document Leak


The leak of an internal Meta document has sparked public outcry because it included standards permitting chatbots to engage in romantically tinged conversations with children. Approved by Meta’s legal, policy, and engineering staff, these rules allowed dangerous scenarios in which artificial intelligence interacted with minors.

As reported by Business • Media.

Company Response and Expert Opinions

Meta spokesperson Andy Stone stated that the company has already removed the provisions allowing flirtatious or romantic dialogues with children, and that chatbots can no longer engage in such conversations. However, child safety experts, including Heat Initiative leader Sarah Gardner, are calling for the public release of the updated standards, doubting that the old rules have been fully rescinded.

“In examples of permissible AI responses, there were remarks with explicit romantic undertones directed at minors. Meta representative Andy Stone stated that these provisions have been removed and that chatbots can no longer engage in flirtatious dialogues with children.”

Additional Risks and Criticism of Meta’s Policies

The document also contained provisions allowing bots, under certain conditions, to generate statements demeaning members of minority groups. Other permitted scenarios included generating false claims accompanied by a warning about their inaccuracy, and depicting violence as long as it involved no death or gore. Meanwhile, Meta denies allegations that its bots create nude images of celebrities, emphasizing that such actions are prohibited, although the leak shows how bots could circumvent restrictions through formal compliance with the rules.

The situation intensified after an incident in which an elderly user died following interactions with a Meta chatbot persona that had convinced him it was a real person. This sparked a discussion about people’s emotional dependency, particularly among teenagers, on interactions with AI companions.

Critics argue that such standards reflect Meta’s desire to profit from the popularity of personalized chatbots while neglecting child safety. The company has repeatedly been accused of employing dark patterns that negatively impact teenagers and resisting legislative initiatives aimed at protecting children online.

According to reports, Meta continues to develop features that let bots hold prolonged dialogues with users and retain conversation history. Experts warn that teenagers, due to their emotional immaturity, are particularly vulnerable to the influence of such technologies.

Despite the criticism, Meta assures that its updated standards completely eliminate the possibility of dangerous interactions between chatbots and children.

It is worth noting that earlier, Mark Zuckerberg announced Meta’s plans to create a “personal superintelligence.”