Following scrutiny over potentially inappropriate interactions between teenagers and its AI companions, Meta is introducing new parental controls designed to increase safety and oversight. The changes, announced this week, give parents more visibility into and control over their teen’s interactions with AI avatars on Instagram.
Expanding Safeguards for Teen Users
These new controls build on existing moderation efforts that aim to align AI interactions with PG-13 movie ratings. The core of the updates involves offering parents a suite of tools to manage their child’s use of AI companions:
- Chat Usage Summaries: Parents will receive regular summaries of their teen’s conversations with AI avatars, giving them insights into the topics discussed.
- Avatar Limitations: Parents can restrict their child to interacting with only specific AI avatars.
- Complete Blocking: Parents will have the option to completely block their teen from engaging with AI companions altogether.
Even if a teen is blocked from interacting with AI avatars, they will still be able to use Meta’s standard AI assistant, reflecting the distinction Meta draws between the controlled environment of AI companions and its broader AI functionality.
A Response to Previous Concerns
The move comes after an investigation by Reuters in August revealed that Meta’s chatbots had engaged young users in conversations deemed “romantic or sensual.” This included impersonating celebrities in flirtatious exchanges and generating sexually suggestive images. Following this report, Meta temporarily locked down its AI avatars to allow for retraining and safeguard improvements. The company subsequently formalized safety guidelines, clarifying the difference between discussing sensitive topics (like intimacy between fictional characters) and having the chatbot facilitate or encourage such actions.
Industry-Wide Trend Towards Responsible AI
Meta’s actions aren’t unique. OpenAI, the creator of ChatGPT, has implemented similar controls, placing limits on voice chat, chat memory, and image generation capabilities. Both Meta and OpenAI require young users to sign up for supervised accounts and emphasize the importance of proactive parental monitoring. This underscores a wider industry trend toward developing responsible AI practices and addressing the risks of AI interactions, especially for young users.
“We believe AI can complement traditional learning methods and exploration in a way that feels supportive, all with the proper age-appropriate guardrails in place,” Meta wrote in a blog post.
Rollout and Availability
While these new controls offer a significant step toward safer AI interactions, they won’t be immediately accessible. Parents with supervised accounts won’t gain access to them until early next year. The controls will initially roll out to Instagram accounts in the U.S., UK, Canada, and Australia before expanding to other countries and Meta’s other platforms. The phased rollout suggests ongoing testing and refinement of the features.
The introduction of these controls highlights Meta’s commitment to addressing safety concerns surrounding AI interactions with teenagers, reflecting a broader industry effort to ensure responsible AI development and use.