
Meta has announced a new AI-powered system designed to identify underage users on Facebook and Instagram by analyzing visual and behavioral cues, including height and bone structure. The initiative is part of the company’s broader effort to strengthen child safety measures and enforce age restrictions across its platforms.
According to the company, the system does not rely on facial recognition or identity matching. Instead, the AI evaluates “general visual cues” from photos and videos—such as physical proportions, posture, and contextual indicators—to estimate whether a user may be younger than the platform’s minimum age requirement. Meta says the technology also draws on text-based signals from bios, captions, comments, and interactions to improve the accuracy of its age estimates.
The visual analysis system is currently being tested in select countries, with plans for broader deployment over time. If the AI suspects that an account belongs to an underage user, the account may be temporarily disabled until the individual completes an age verification process.
The rollout comes amid increasing regulatory pressure on social media platforms to improve protections for minors. Meta recently faced significant legal scrutiny over child safety practices, including a major case in New Mexico related to allegations that the company failed to adequately protect younger users online.
At the same time, Meta is expanding its “Teen Accounts” protections across additional regions, including all 27 European Union countries and Brazil. These accounts include stricter privacy settings, messaging limitations, content restrictions, and enhanced parental supervision features.
The move reflects a growing industry trend toward AI-driven age estimation, as platforms look for alternatives to traditional ID verification. The approach is likely, however, to spark debate over accuracy, privacy, and the broader implications of biometric-style analysis on consumer platforms.
