September 12, 2025
FTC Investigates AI Chatbots' Impact on Children's Safety

The Federal Trade Commission (FTC) has opened an inquiry into the safety of AI chatbots used by children and teenagers. The investigation targets major technology companies, including Alphabet, Meta Platforms, Snap, and OpenAI, among others. The FTC is focused on the potential risks these chatbots pose to minors, who increasingly turn to them for everything from homework help to emotional support.

The FTC’s letters, sent on September 11, seek detailed information about the measures these companies have taken to assess the safety of their chatbots. Specifically, the commission wants to understand how companies limit minors’ use of these products and mitigate any negative effects. The inquiry follows a surge in reports of harmful interactions between children and chatbots, including dangerous advice on sensitive topics such as drug use and mental health.

Recent tragic incidents have heightened concerns about the impact of AI chatbots on young users. A Florida mother has filed a wrongful death lawsuit against Character.AI, claiming her son developed an abusive relationship with a chatbot before taking his own life. The parents of Adam Raine, a 16-year-old from California, have also sued OpenAI, alleging that ChatGPT played a role in their son’s suicide by providing harmful guidance.

In response to the inquiry, Character.AI expressed its commitment to collaborating with the FTC, highlighting its investments in safety features tailored for young users. The company mentioned the rollout of a new under-18 experience and a Parental Insights feature designed to enhance user safety. Character.AI emphasized that it includes disclaimers in every interaction, clarifying that its chatbots are not real people and that their responses should be regarded as fictional.

Snap has similarly defended its chatbot, My AI, asserting its transparency regarding capabilities and limitations. The company stated, “We share the FTC’s focus on ensuring the thoughtful development of generative AI and look forward to collaborating on policies that foster innovation while safeguarding our community.”

While Meta declined to comment specifically on the inquiry, it has previously announced steps to restrict its chatbots from engaging in conversations about self-harm and suicide with teenagers. Instead, these interactions are redirected to expert resources. The company has also implemented parental controls for teen accounts to enhance safety measures.

OpenAI recently announced that it is strengthening its safety protocols, particularly for users showing signs of emotional distress. The changes include new controls that let parents link their accounts with their teens’ accounts, disable certain features, and receive notifications if the AI detects acute distress in their child, as outlined in a company blog post.

As the use of AI chatbots becomes more widespread among younger demographics, the inquiry highlights the pressing need for accountability and safety measures in the rapidly evolving landscape of artificial intelligence. The FTC’s investigation may lead to new regulations aimed at protecting children and teenagers as they navigate the complexities of digital interactions.