OpenAI, Meta, xAI Among Firms Hit by FTC Child Safety Inquiry

The Federal Trade Commission entrance is seen on July 09, 2025 in Washington, DC. Leigh Vogel/Getty Images for Ian Madrigal/Getty Images

The Federal Trade Commission (FTC) has opened a major investigation into leading tech firms, including OpenAI, Meta, Alphabet, Snap, and Elon Musk's xAI, over concerns about how their artificial intelligence chatbots may impact children and teens.

Announced Thursday, the probe will require seven companies—Alphabet, Meta, Instagram, OpenAI, Snap, xAI, and Character Technologies—to hand over details about how they monitor risks, manage safety, and inform parents of potential harms.

According to Forbes, the FTC said AI chatbots are capable of "effectively mimicking human characteristics" and warned that children could form relationships with bots as if they were friends.

FTC Chairman Andrew Ferguson emphasized the agency's focus: "Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy."

The investigation comes amid growing public and political concern. Last month, Sen. Josh Hawley (R-Mo.) launched his own review of Meta's chatbot after reports suggested it was once allowed to engage in romantic exchanges with minors.

Hawley said the inquiry would examine whether AI products "enable exploitation, deception and other criminal harms to children."

Character.AI to Assist FTC Amid AI Companion Concerns

Several companies have responded publicly. An OpenAI spokesperson said, "Our priority is making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved."

Snap struck a similar tone, saying it would cooperate with regulators while continuing to build "thoughtful" AI tools, CNBC reported.

Character.AI also pledged to work with the FTC and share insights on the evolving industry. Meta declined to comment, while Alphabet and xAI did not respond.

The inquiry will look into whether companies profit from children's engagement, how they develop chatbot characters, and how they manage sensitive conversations. It will also examine whether user data is shared or stored in ways that could expose children to further risks.

Concerns about AI companions have intensified since ChatGPT's launch in 2022. Experts warn that young users, already vulnerable to loneliness, may rely too heavily on bots for emotional support, with unpredictable effects.

A recent lawsuit alleged that ChatGPT contributed to a teenager's suicide, prompting OpenAI to review how the system handles "sensitive situations."

Despite safety worries, the AI companion market continues to expand. Musk's xAI recently unveiled a paid "Companions" feature, while Meta CEO Mark Zuckerberg has promoted personalized AI as a future necessity.
