Meta announced today that it is temporarily pausing teen access to its AI characters across Facebook, Instagram, and WhatsApp globally. The move, which affects users identified as under 18, comes as the company prepares more robust parental controls and safety features for these interactions. The decision reflects growing concern, and mounting legal pressure, over how artificial intelligence affects young users and their well-being.
The suspension will affect anyone with a listed teen birthday, as well as users Meta's age prediction technology flags as likely underage. The company said the pause is not a cancellation of the feature but a necessary step toward a safer, more controlled experience for adolescent users. It follows earlier efforts to limit potentially harmful content accessible through AI interactions.
Addressing Safety Concerns Around AI Characters
Meta’s decision arrives amid heightened scrutiny of social media platforms and their responsibility to protect children. A trial is scheduled to begin in New Mexico, where Meta is accused of failing to adequately safeguard children from sexual exploitation on its apps. Separately, the company faces a trial next week alleging its platforms contribute to social media addiction, with CEO Mark Zuckerberg expected to testify.
Meta has also reportedly been attempting to limit the scope of discovery related to the mental health effects of social media on teenagers. These legal challenges appear to be accelerating the company’s focus on proactive safety measures, particularly around emerging technologies like AI.
Parental Controls and Content Restrictions
In October, Meta previewed parental controls designed to offer oversight of teen interactions with AI characters. These controls included the ability to monitor conversation topics and block access to specific characters, as well as a complete opt-out for parents. However, the company has determined that further refinement is needed before releasing these features.
Prior to this pause, Meta had already begun implementing content restrictions inspired by movie rating systems. These restrictions limited teen access to AI-generated content depicting extreme violence, nudity, and graphic drug use. The upcoming update aims to build upon these initial safeguards with more comprehensive and age-appropriate responses from the AI.
The new version of AI characters will be designed to focus on educational, sporting, and hobby-related topics, steering clear of potentially harmful or exploitative content. Meta intends for these characters to provide a more curated and positive experience for younger users.
Industry-Wide Response to Teen AI Safety
Meta is not alone in re-evaluating its approach to AI and teen safety. Several other companies in the artificial intelligence space have faced similar pressures. Character.AI, for example, recently disallowed open-ended conversations with its chatbots for users under 18 and is now developing interactive stories for children.
OpenAI, the creator of ChatGPT, has also introduced new safety rules specifically for teenage users and is utilizing age prediction technology to apply content restrictions. These moves demonstrate a growing awareness within the tech industry of the potential risks associated with unrestricted AI access for minors.
The broader conversation around social media safety and the protection of children online is intensifying. Regulators worldwide are exploring new legislation and enforcement actions to hold platforms accountable for the content and experiences they offer to young users. This increased pressure is likely to continue driving changes in how tech companies approach AI development and deployment.
While Meta has not provided a specific timeline for relaunching AI characters with the updated controls, the company said the rollout will occur “in the coming weeks.” The effectiveness of the new parental controls and age prediction technology remains to be seen, and ongoing monitoring of the platform’s impact on teen well-being will be crucial. Observers will be watching closely to see how Meta balances innovation with its responsibility to protect its youngest users, especially as the New Mexico trial progresses and further legal challenges are anticipated.