Meta is expanding its use of artificial intelligence to identify underage users across Facebook and Instagram, introducing systems that analyze physical characteristics, behavioral patterns, and account activity to estimate a user’s age. The move is part of the company’s broader push to strengthen age verification and comply with increasing regulatory pressure around child safety online.
According to Meta, the new AI systems are designed to detect users who may have entered false birth dates when creating accounts. The company says the technology examines signals such as profile activity, interactions, content engagement, and visual cues from uploaded media to estimate whether someone is likely under 13 or under 18.
One aspect of the rollout drawing significant attention is Meta’s use of visual analysis tied to body and facial characteristics, including height and bone structure. Meta insists the technology is not facial recognition because it is not designed to identify individuals. Instead, the company describes it as an age estimation technology that predicts approximate age ranges rather than confirming identity.
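The distinction Meta draws — estimating an age range from aggregated signals rather than matching a face to an identity — can be illustrated with a toy sketch. Nothing below reflects Meta's actual system: every signal name, weight, and threshold is invented for illustration, and a real system would use trained models rather than hand-set coefficients.

```python
# Illustrative toy sketch (NOT Meta's actual system): blending several weak
# signals into a coarse age band. All names, weights, and thresholds here
# are hypothetical.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    years_since_signup: float      # older accounts weakly suggest older users
    teen_content_affinity: float   # 0..1, engagement with teen-oriented content
    visual_age_estimate: float     # hypothetical visual-model output, in years
    visual_confidence: float       # 0..1, how much to trust the visual estimate

def estimate_age_band(s: AccountSignals) -> str:
    """Blend a visual age estimate with behavioral signals into an age band.

    Note the output is a range ("under_13", "under_18", "adult"), not an
    identity -- this is the sense in which age estimation differs from
    facial recognition.
    """
    # Behavioral prior: start from a neutral guess and shift it down as
    # teen-content affinity rises. Coefficients are arbitrary.
    behavioral_guess = 25.0 - 12.0 * s.teen_content_affinity \
        + 0.5 * s.years_since_signup

    # Weighted blend: trust the visual model in proportion to its confidence.
    blended = (s.visual_confidence * s.visual_age_estimate
               + (1.0 - s.visual_confidence) * behavioral_guess)

    if blended < 13:
        return "under_13"
    if blended < 18:
        return "under_18"
    return "adult"
```

The key design point the sketch captures is that no single signal decides the outcome; the critics' accuracy concerns discussed below arise precisely because each input (especially the visual estimate) can be noisy or biased.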
If the system flags an account as potentially underage, Meta may place restrictions on the profile or require the user to complete additional verification steps before regaining full access.
The rollout comes amid growing pressure from governments and regulators demanding stronger protections for minors online. Meta has faced criticism in both Europe and the United States over concerns that children can easily bypass platform age limits simply by entering false birthdays. Lawmakers have increasingly argued that social media companies should bear more responsibility for verifying user age and limiting harmful content exposure.
Meta says AI-based detection is necessary because traditional age gates are ineffective. However, privacy advocates and digital rights groups warn that the company’s latest approach introduces a new set of concerns centered on biometric analysis, surveillance, and data collection.
Critics argue that analyzing facial structure, body proportions, and behavioral patterns at scale could normalize invasive monitoring practices across social media platforms. While Meta says the system does not identify users individually, privacy experts note that age estimation still relies on sensitive biometric data derived from photos and videos uploaded by users.
Some researchers have compared the technology to automated profiling systems that make assumptions about users based on appearance. Online critics have described the approach as “AI phrenology,” warning that such systems could produce inaccurate or biased results depending on facial features, ethnicity, lighting conditions, camera quality, or gender presentation.
There are also concerns about false positives. Adults who simply look young could face account restrictions or verification requests, while some minors may still bypass the system using makeup, altered lighting, VPNs, or manipulated photos.
Academic research has already shown that many AI age estimation tools can be fooled relatively easily. Some studies found that cosmetic tricks such as fake facial hair, glasses, or subtle image manipulation can significantly reduce accuracy.
Privacy advocates are additionally questioning how long Meta may retain age estimation data and whether the underlying systems could eventually expand into broader biometric analysis tools. The company says its AI is only used for safety purposes, but critics warn that once large-scale biometric infrastructure is deployed, its future use cases can become difficult to limit.
The debate reflects a broader challenge facing the tech industry. Governments are demanding stronger age verification systems, while users and privacy groups remain wary of handing platforms more sensitive personal data. Companies increasingly find themselves caught between regulatory pressure to identify minors and public concern over expanding surveillance technologies.
Meta argues that AI-driven age estimation is currently one of the few scalable ways to enforce age restrictions across billions of accounts. But the backlash surrounding the rollout highlights how online child safety measures are increasingly colliding with questions about privacy, consent, and biometric monitoring.
