FTC Launches Probe Into AI Chatbots Over Risks to Children and Teens
The U.S. Federal Trade Commission (FTC) has announced a sweeping inquiry into AI-powered chatbots designed to act as digital companions, raising concerns about their potential psychological impact on children and teenagers. The move underscores growing regulatory scrutiny over generative AI systems that mimic human conversation and emotions.

The agency has sent formal orders to seven companies, including Alphabet, Meta, OpenAI, Snap, Character.AI, and xAI Corp, demanding information about how they manage risks, protect minors, and monitor the effects of prolonged chatbot use. "Protecting kids online is a top priority" for the agency, said FTC Chairman Andrew Ferguson, adding that the commission must balance child safety with preserving U.S. leadership in AI innovation.

The investigation will focus on chatbots that present themselves as friends or confidants, potentially encouraging emotional dependence among young users. Regulators are examining how these platforms handle sensitive information, develop chatbot personalities, and measure the psychological effects of user engagement. The FTC also seeks details about how companies restrict children’s access and ensure compliance with privacy laws such as the Children’s Online Privacy Protection Act (COPPA).

Importantly, the inquiry is not tied to an enforcement action but could lay the groundwork for future regulatory steps. The unanimous vote by commissioners signals bipartisan concern about the risks these technologies pose to minors.

The announcement comes amid heightened debate over the role of AI in vulnerable users’ lives. AI chatbots have become more sophisticated and widely adopted, prompting questions about their influence on mental health.

Last month, the issue gained national attention when the parents of Adam Raine, a 16-year-old who died by suicide, filed a lawsuit against OpenAI, claiming ChatGPT had given their son detailed instructions on how to harm himself. In response, OpenAI said it is implementing corrective measures and acknowledged that, during prolonged interactions, ChatGPT does not always refer users who express suicidal thoughts to mental health resources.

The FTC’s action signals that regulators are preparing to address the ethical, psychological, and privacy implications of AI companions, particularly for young audiences, as the technology becomes increasingly embedded in everyday life.