Instagram recently introduced a suite of changes to direct messaging as a way to protect the app’s younger audience. One of the biggest additions is a restriction on direct messages (DMs). Adult users will now find themselves blocked from messaging users under the age of 18 if those users aren’t following them. While the feature seems like a good move on paper, experts say it doesn’t offer nearly enough protection to make a difference without outside help.

“Disallowing unsolicited messages from adults to children could cut down on scams, phishing, and predatory behavior targeting minors,” Paul Bischoff, a privacy advocate with Comparitech, told Lifewire in an email. “However, it’s easy for Instagram users to lie about their age, and difficult for Instagram to verify a user’s age.”

The Age Problem

While experts like Bischoff are happy to see Instagram working toward new ways to protect users on the app, there are still too many ways for predatory users to get around those new features. One of the defining aspects of these new safety features is the user’s age. However, age has long been a point of contention in the online world. Behind a screen, the anonymity of the web becomes a playground where users can create a profile of whoever they want to be, with many lying about their age to access apps and features they might not otherwise be allowed to use.

“Instagram’s feature to not let adults message users under 18 only works if those users are being honest about their age,” Annie Ray, a social media expert at Buildingstars Operations, told us via email. Ray says many younger people on the internet get used to lying about their age to access adult websites, and Instagram is no exception to the rule.

The age problem isn’t a new issue, though, and Instagram isn’t blind to it. “We want to do more to stop this from happening,” Instagram writes on its website, “but verifying people’s age online is complex and something many in our industry are grappling with. To address this challenge, we’re developing new artificial intelligence and machine learning technology to help us keep teens safer and apply new age-appropriate features…”

Working in Tandem

Machine learning, while effective, will still take time to perfect. Even when deployed correctly, users may still find ways around it if they really want to. Because of this, some experts say Instagram’s safety features need parents’ help to be more effective.

“There are no foolproof solutions that will guarantee a safe, online experience for your child,” Monica Eaton-Cardone, co-founder and chief operating officer of Chargebacks911, said over email. “Is it a good thing for Instagram to try to restrict adults from pestering kids? Of course. Is it anywhere close to being sufficient to stop predators completely? Of course not.”

Eaton-Cardone says parents shouldn’t rely on these new safety features alone to keep their children safe, stating there’s no substitute for an involved parent. Instead, she recommends parents use the features to complement their own check-ins and inquiries. “Ask them if they’ve been getting any weird messages from strangers. Ask them if their friends are having negative experiences online,” she said.

“In previous generations, parents worried about predators targeting their kids when they left the house,” Eaton-Cardone explained. “Children were taught not to talk to strangers and to be wary of suspicious-looking people on the streets—but the assumption was they were safe at home. Today, because of the Internet and flaws in cybersecurity, our homes can be even more dangerous than the outside world.”