Meta Tests AI System to Detect Underage Users on Instagram
Key Points:

AI-Powered Age Detection
- Meta is testing a new AI system in the U.S. to identify underage Instagram users who falsely registered as adults.
- The system analyzes behavioral signals (interactions, engagement, and text clues such as birthday messages) to detect mismatches (e.g., a user listed as 25 but receiving “Happy 14th!” comments).
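The text-clue mismatch described above can be illustrated with a toy heuristic. This is purely a sketch of the concept, not Meta's actual system, which is proprietary and relies on far richer behavioral and engagement signals; all names here are hypothetical.

```python
import re

# Toy illustration only: flag a stated age that contradicts ages implied
# by "Happy Nth!" birthday comments on the account.
BIRTHDAY_PATTERN = re.compile(r"happy\s+(\d{1,2})(?:st|nd|rd|th)\b", re.IGNORECASE)

def implied_ages(comments):
    """Extract ages implied by 'Happy Nth!' birthday comments."""
    ages = []
    for comment in comments:
        match = BIRTHDAY_PATTERN.search(comment)
        if match:
            ages.append(int(match.group(1)))
    return ages

def flag_age_mismatch(stated_age, comments, tolerance=2):
    """Return True if birthday comments imply an age far from the stated one."""
    return any(abs(stated_age - age) > tolerance
               for age in implied_ages(comments))

# A user registered as 25 but receiving "Happy 14th!" comments is flagged.
print(flag_age_mismatch(25, ["Happy 14th!!", "have a great day"]))  # True
```

A real classifier would weigh many such weak signals together rather than act on any single one, which is also why Meta pairs the automated decision with a human review path.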

Automatic “Teen Account” Restrictions
- If the AI suspects an account belongs to a minor, it will automatically enforce “Teen Account” settings, including:
  - Stricter messaging controls (limiting messages from strangers).
  - Reduced exposure to sensitive content.
  - Private settings by default.
- Users can request a review if misclassified.

Age Verification Options
- Users flagged as underage can verify their age via:
  - ID upload.
  - Peer confirmation.
  - Video selfie.

Parental Involvement
- Parents of teen users will receive prompts to verify their child’s age.
- Meta collaborated with child psychologists to guide parents on discussing online safety.

Regulatory & Privacy Concerns
- The move follows growing scrutiny from EU and U.S. lawmakers over child safety on social media.
- Critics question the accuracy of AI classification and the potential privacy implications.

Rollout Plan
- Currently in a U.S. testing phase, with potential global expansion if successful.

Meta’s new AI system aims to enhance child safety by automatically restricting accounts suspected of belonging to underage users. While the technology could reduce risks like unwanted contact or harmful content, concerns remain over false positives and data privacy. With regulators intensifying pressure, Meta’s approach may set a precedent for how social platforms balance safety, automation, and user rights. The system’s effectiveness in the U.S. will likely determine its future adoption worldwide.