Instagram announced a new set of AI age-verification tools designed to protect young users: technology aimed at preventing underage children from creating accounts and at blocking adults from contacting young users they don't know. But the tools won't be used, at least not yet, to try to keep children off the popular photo- and video-sharing app; the current test only involves verifying that someone is 18 or older. According to Instagram, the age information would not be visible to others but would help create age-appropriate, safer experiences on a social network with more than a billion users.
"While many people are honest about their age, young people can lie about their date of birth," Meta said. "Verifying people's age online is complex and something many in our industry are grappling with." To address the challenge, the company is developing new AI and machine-learning technology to keep teens safer and to apply new age-appropriate features.
The use of face-scanning AI, especially on teenagers, raised some alarm bells. To use the face-scanning option, a user uploads a video selfie, which is then sent to Yoti, a company that estimates a person's age from their facial features. Instagram says the feature builds on its existing work predicting people's ages with machine-learning technology, combined with the age people give when they sign up. Separately, new supervision tools will allow parents to monitor how much time their teens spend on the app and to receive updates about the accounts their teens follow and who follows them.
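To make that flow concrete, here is a minimal sketch of how such a check might be wired together. The endpoint URL, request fields, and response shape are illustrative assumptions, not Yoti's or Instagram's actual API; the threshold simply mirrors the 18-or-older test described above.

```python
# Hypothetical sketch of a video-selfie age-estimation flow.
# The URL, field names, and response shape below are placeholders,
# not Yoti's or Instagram's real API.
import requests

AGE_ESTIMATION_URL = "https://example-age-api.invalid/v1/estimate"  # placeholder endpoint
ADULT_THRESHOLD = 18  # the current test only verifies that someone is 18 or older


def estimate_age_from_selfie(video_path: str) -> float:
    """Send a video selfie to a third-party estimator and return the estimated age."""
    with open(video_path, "rb") as f:
        response = requests.post(AGE_ESTIMATION_URL, files={"selfie": f}, timeout=30)
    response.raise_for_status()
    return response.json()["estimated_age"]  # assumed response shape


def gate_account(video_path: str, stated_age: int) -> str:
    """Combine the ML estimate with the age the user stated at sign-up."""
    estimated = estimate_age_from_selfie(video_path)
    if estimated >= ADULT_THRESHOLD and stated_age >= ADULT_THRESHOLD:
        return "adult-experience"
    # Otherwise fall back to age-appropriate defaults and limits on adult contact.
    return "teen-safeguards"


if __name__ == "__main__":
    print(gate_account("selfie.mp4", stated_age=19))
```

Note that the sketch uses the ML estimate alongside the self-reported sign-up age, echoing Instagram's description of drawing on both signals rather than trusting either alone.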
Additionally, in the US, Instagram has collaborated with The Child Mind Institute and ConnectSafely to publish a new Parents Guide; the updated Guide already launched in India last month. The new set of AI-driven supervision features gives parents and guardians some crucial transparency into young users' Instagram habits. But having peers check the ages of users could put the network in violation of the Children's Online Privacy Protection Act (COPPA).