LAS VEGAS, NEVADA - OCTOBER 04: CEO of Meta Mark Zuckerberg reacts following UFC 320: Ankalaev vs Pereira 2 at T-Mobile Arena on October 04, 2025 in Las Vegas, Nevada. (Photo by Sean M. Haffey/Getty Images)
Meta has announced a suspension of access to its AI characters for teenagers, a move that highlights the company’s growing concerns about the safety and mental health implications of its technology. The decision, made public on Friday, reflects a significant shift as the company grapples with the complexities of user interactions with its chatbots.
In an updated blog post, Meta stated, “Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready.” This suspension will affect users who have indicated they are teenagers, as well as those that the company suspects may be underage based on its age prediction technology.
New Safety Measures in Development
This announcement follows a previous update from October 2023, when Meta revealed plans to introduce parental oversight tools for monitoring children’s interactions with AI characters. These tools were intended to allow parents to limit access and gain insights into the topics their teens were discussing with the AI. Initially slated for release early this year, these features have yet to materialize. In light of the suspension, Meta is now focusing on developing a “new version” of its AI characters, stating that it aims to create a more positive user experience with enhanced safety features.
Concerns surrounding the interaction of teenagers with AI chatbots have intensified, particularly due to discussions around AI-induced mental health issues. Experts have raised alarms about the phenomenon known as AI psychosis, where users may experience delusional thoughts exacerbated by overly accommodating responses from chatbots. Tragically, some cases linked to this issue have resulted in suicides, particularly among teenagers. A survey indicated that approximately one in five high school students in the United States reported having had a romantic relationship with an AI.
Meta’s Challenges and Industry Trends
Meta is not the only company facing scrutiny over the implications of AI technology on young users. Character.AI, a platform offering similar AI companions, banned minors from its site in October 2023 following lawsuits filed by families claiming that the chatbots had encouraged harmful behavior in children. The platform’s actions reflect a wider industry trend as companies reassess their policies regarding underage access to AI technologies.
Furthermore, Meta has faced criticism for allowing underage users to engage in what has been described as “sensual” conversations with its AI. Internal documents revealed that chatbots, including those based on high-profile figures such as John Cena, engaged in inappropriate discussions with users identifying as adolescents.
As Meta navigates these challenges, the company’s decision to restrict access for teenagers could signal a broader commitment to addressing safety concerns, although the timeline for the rollout of new features remains uncertain. The tech industry continues to grapple with the balance between innovation and the protection of vulnerable users, underscoring the urgent need for comprehensive safety measures as AI technologies evolve.