Australia’s Senate recently voted for a ban on social media for youth under age 16. Social media companies have one year to find a way to stop Australian kids under 16 from opening new accounts on their apps or risk fines of tens of millions of dollars. At the moment, there is no clear plan for how to make this happen, but the stakes are considerable, and other countries are considering similar rules.
Some parents and activists see this as a win and believe it will make kids safe, while many other parents, educators, and experts are concerned. We're concerned about young people's rights and free speech, and we're worried that these bans take focus away from regulating the apps and teaching young people HOW to use digital tools thoughtfully.
Why are tech privacy experts and adolescent development researchers worried about Australia’s ban and similar efforts around the world?
Let’s unpack some of the reasons why:
Five problems with banning under-16s
Problem #1: Age bans could let companies off the hook
One of the biggest concerns experts have is that age bans do not hold social media companies accountable. These new laws let the companies off the hook on their long-neglected task of making necessary changes to the user experience on their platforms. Over a decade ago, Emily Bazelon went inside Facebook (now Meta) and showed how limited their resources were for responding to reports of bullying, hate accounts and more. Rather than hold these mega-companies accountable, an increased age “gate” absolves them of their mandate to respond. When parents report a tween or younger teen experiencing bullying, social media companies can simply refer them to the age guidelines.
As I’ve argued here and in Newsweek, companies like Meta, Snapchat, and TikTok need to be responsive to concerns from *all* users, not just kids. When people report harassment, violence, or dangerous and misleading content, those reports need a response. The resources these companies devote to supporting users would be much better spent making the algorithms less invasive, making privacy policies clearer, and responding quickly and thoughtfully to reports of harassment, impersonation, violent footage, and more.
There are still plenty of ways to improve the design of these technologies to make them safer for younger adolescents. Check out this guide from a panel of expert researchers at the National Scientific Council on Adolescence. It proposes limiting targeted advertising and providing training tools to young users. It also suggests that families make decisions about allowing access based on age, maturity, and other factors.
Sixteen is also quite old for an age ban, unless you also plan to keep people from accessing Internet search until then, since YouTube is full of videos about how to get around such measures. A 13-year-old is perfectly capable of researching some simple workarounds to the ban. Pushing 14-year-olds off mainstream social media apps onto less regulated platforms, or unwittingly encouraging them to use VPNs, makes them less safe, not more.
Making apps safer for younger people is crucial. So is educating young people and their caregivers about safe and thoughtful ways to connect. Age bans will push tech company efforts in a different direction, allowing them to avoid liability or to blame parents when kids skirt the bans.
Problem #2: Age-gating social apps at 16 is anti-democratic.
It silences activists, isolates kids in oppressed groups, and fails to prepare young citizens to vote at 18.
When I was writing Growing Up in Public, I interviewed incredible young creators and activists. Some of these young leaders started social media channels or began engaging in public dialogue at 14 or 15. In a world where kids are activists, keeping them off social platforms shuts down or marginalizes their voices. We know that bans place particular burdens on kids who are already marginalized, such as members of LGBTQ+ communities.
Further, many of us get our news and engage in civic conversations via social platforms. Even though these platforms are imperfect, and algorithms may limit the ideas we are exposed to, we can't ignore that social platforms are a political space where issues are debated and discussed. We can work on holding apps accountable for sharing misinformation and for prioritizing conflict.
Shutting kids out of mainstream discourse until two years before they can vote is anti-democratic: it doesn't give teens enough time to learn the ropes and become media literate and socially literate. In Australia, voting is mandatory, so 18-year-olds had better be ready. In the US, fewer than half of voters under 29 show up, so we have some work to do to prepare and engage young voters.
Social media can encourage young people to vote and stay aware of social issues. Instagram has a banner at the top of its app encouraging people to register, celebrities and influencers post about voting, and student organizations post to try to boost civic engagement. While we may wish people got their news in other ways, many people of all ages, especially young people, are getting their news via social media. We don’t want to put 13- to 15-year-olds in a news blackout. Political and media literacy is best begun before the age of 16, as young people need to be informed, ready, and registered by the time they are 18.
Problem #3: Age 16 does not make sense developmentally or socially as a launching year for social media.
If the United States enacted an effective social media age ban at 16, it would divide high school communities in half. One of the best things about high school is its multi-age community, where kids learn and collaborate on teams and activities across an age range. Picture the debate team’s Snapchat. An age ban would create an in-group and an out-group, or eliminate an opportunity for community building altogether.
A caring adult might be a fan of the age ban because they are worried about kids and their searchable reputations when they are applying to college. Many parents and educators worry that new users might exuberantly post something thoughtless or overly revealing and harm their reputation just as they are getting started on social media. Many of those mistakes will be self-correcting. Young people learn from them and move forward. Making mistakes can be part of learning.
But age bans set older teens up to launch into the social media waters right as they might be looking for a first job, applying for scholarships, and forging a more public identity in the world. Joining the social media universe for the first time at 16 puts pressure on older adolescents to get it right from the beginning.
Further, younger kids need support and mentorship as they navigate social media for the first time. The opportunity to teach kids how to make healthy choices in their digital lives is best begun earlier, well before they have one foot out the door. By age 16, young people are more independent: they are working on separating from adult authority and are less likely to lean on the adults in their lives for mentorship and direction on this front.
No one at any age is automatically a ready and thoughtful participant in social communities. Age bans involve a kind of magical thinking: that at 16, young people will be ready for the challenges of interacting in social apps.
What if we made the driving age 16 but did nothing to mentor new drivers or verify that they are ready to hit the road? We need to lean into digital wellness, emotional literacy, and information literacy curricula well before that, and consider at what ages kids will be receptive to our mentoring and modeling.
Age-gating at 16 might mean schools will skimp on incorporating digital citizenship curricula in elementary and middle school. Or they might be pressured toward an abstinence-only approach.
We know how well that works.
Problem #4: How will social media companies know who is under 16?
Age verification has big privacy risks.
No one has a good plan for this. In the US and other places, privacy experts worry that this will pave the way to requiring all Internet users to have a Digital ID, which could dramatically limit both privacy and freedom of expression. What could go wrong? In the hands of a repressive government…a lot. In the US, police have flown drones with facial tracking software over protests.
You don’t need to be a heavy reader of dystopian fiction to understand the problems with that level of surveillance, but if you want some help imagining the downside, these novels paint a vivid and compelling picture of the risks: Our Missing Hearts by Celeste Ng and Memory Piece by Lisa Ko. These stories follow compelling characters from a world that will seem very familiar into an adjacent time, when they are either caught up in a technologically surveilled state or living outside of it at their peril. Don’t like novels? Read Digital IDs Are More Dangerous Than You Think.
If Digital IDs aren’t worrying enough, some have proposed biometric facial scanning. This type of verification is dubious. Meta has used human evaluation of pictures to decide when to remove accounts flagged as underage for over a decade, with unimpressive results. Handing the job to facial scanners and AI is unlikely to improve matters. We’ve all known at least one 14-year-old who looks like a college student.
Further, facial recognition software is especially terrible at interpreting images of people of color, with devastating consequences. Over-relying on these technologies to decide who is “of age” does nothing to support younger social media users and will create barriers to access that exacerbate inequality.
Problem #5: What is Social Media anyway? “I’ll know it when I see it”
While we’re handwringing over Instagram, TikTok, and Snapchat, their popularity with teens has been slightly declining. The only app that has grown in popularity among teens in the last two years is WhatsApp, a platform that some do not consider to be social media, though it is owned by Meta.
Whether or not we consider WhatsApp to be social media, we need to mentor kids on interacting there and on any other messaging apps. Ask any 6th grader (or their parents) whether group texts can be a scourge, filled with yucky content and nastiness; the answer is 100% yes! Banning social apps for under-16s could simply push kids into new spaces. The risk is that we’ll default to laziness instead of mentorship, since we “solved the problem” with the age ban.
What we need to do
Instead of banning, we can regulate the companies in other ways: make apps more developmentally appropriate, compel them to respond to reports of abuse and harassment, and limit their use of manipulative and harmful algorithms.
Meanwhile, at home and at school, we can mentor kids on HOW to create boundaries and help them identify certain kinds of content as misleading, dangerous, or toxic. We need to remind them, when they are younger, to be in contact only with people they actually know, and teach them how to safely interact with new people online as they get older. Parents should absolutely teach kids to get the heck off their phones at night and get some sleep.
Wherever they go, in person or online, kids must have safe people to talk to if someone violates their boundaries, threatens them, or worse. Will a 14-year-old being harassed on Snapchat be able to report it if they are afraid to admit they used a VPN or lied about their birth year to join the app? Or will they be pushed to darker and less regulated corners of the Internet?
But age bans that fail to hold tech companies accountable are not the answer. Allowing seven-year-olds to play Roblox on public servers but saying they can’t have a Snapchat account until they are 16 misses the point spectacularly. Pushing kids to less regulated spaces could make them LESS safe. Silencing young people’s voices when they are trying to save the world is repressive.
We can do better. We have to do better.