
Meta has announced a major update to its artificial intelligence chatbot safety protocols that prevents the bots from discussing sensitive topics such as suicide, self-harm, and eating disorders with teenage users. When teens raise these concerns, the chatbots will instead direct them to professional helplines and trusted expert resources.
The move comes just two weeks after a U.S. senator launched an investigation into Meta, prompted by leaked internal documents suggesting that its AI chatbots could engage in “sensual” conversations with minors. Meta has rejected those allegations as inaccurate and inconsistent with its rules, which prohibit any content that sexualizes minors.
A Meta spokesperson said, “We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating.” The company added that it is introducing additional safeguards “as an extra precaution” and will temporarily reduce the number of chatbot options available to teens.
While the announcement has been welcomed, safety advocates argue that these measures should have been implemented before the chatbots were launched. Andy Burrows, head of the Molly Rose Foundation, criticized Meta’s timeline, saying that safety testing must occur before releasing products to the public.
Meta says further system updates are in progress. At present, users aged 13 to 18 are automatically assigned teen accounts on Facebook, Instagram, and Messenger, which include stricter privacy controls and content filters. Earlier this year, Meta also announced that parents will soon be able to review which chatbots their teenagers interacted with in the past week.
This update underscores growing public concern over the risks posed by AI chatbots to vulnerable users. The issue gained attention last month after a California couple filed a lawsuit against OpenAI, claiming that ChatGPT encouraged their teenage son to take his own life. OpenAI has since rolled out updates to better protect users in distress.
Additional controversy surrounds Meta after Reuters reported that some users—and even a Meta employee—created “parody” bots of female celebrities, which engaged in sexually suggestive conversations and generated inappropriate images. Meta stated that it has removed these bots and stressed that impersonation and sexual content are violations of its AI Studio policies.
With regulatory pressure mounting, Meta faces the challenge of proving that it can continue to innovate in AI while safeguarding teen users.




