**Australia Blocks Social Media Access for Under-16s: Only Companies Face Penalties, Logged-Out Browsing Still Allowed, Enforcement via Age Identification Technologies**
Starting from the 10th, Australia will block users under the age of 16 from using social media platforms in order to protect teenagers. As the first such measure among major countries, it is drawing attention to its specifics and effectiveness as other nations weigh similar systems.
Under Australian legislation passed late last year, social media platforms face fines of up to AUD 49.5 million (about 48.5 billion KRW) if they fail to take reasonable steps to prevent users under 16 from holding accounts. Affected platforms include Facebook, Instagram, Threads, YouTube, TikTok, X (formerly Twitter), Snapchat, Reddit, Twitch, and Kik, with more potentially being added.
Because users can still view content on these platforms without logging in, eSafety, the Australian online safety regulator, characterises the measure as a “suspension of account use” rather than a “ban.” Importantly, there are no penalties for users or their parents.
The Australian government believes that preventing under-16s from holding accounts will shield them from the most harmful aspects of social media, such as addictive algorithms and push notifications. According to eSafety, teenagers are far more exposed to pressure and risk when logged into accounts, because platform design choices encourage longer screen time and push negative or manipulative content.
The law requires platforms to delete the existing accounts of users under 16, or deactivate them until the holders turn 16, and to block new registrations by under-16s. About 96% of Australians under 16, around one million young people, currently hold social media accounts.
**Can Under-16s be Effectively Blocked?**
To comply, platforms must identify and filter out users under 16, which is harder in a country like Australia that, unlike Korea, has no national identification number system. Platforms could ask for ID documents, but AI-based facial age estimation is expected to play a key role: the UK startup Yoti, which estimates age by analysing selfies, already provides such services to Meta and TikTok.
Other techniques for estimating users’ ages include analysing voice, location, and usage patterns. Meta says it uses multiple methods to filter out under-16s but does not disclose specifics, to prevent users from finding ways around the restrictions. Because these less privacy-intrusive methods carry “considerable” margins of error, some users over 16 may be wrongly restricted.
Australian authorities expect platforms to offer mechanisms for correcting identification errors. Additionally, eSafety acknowledges that under-16s could find ways to bypass restrictions, similar to how they sometimes circumvent regulations on smoking or drinking.
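To make that trade-off concrete, the sketch below shows one way a platform *might* combine several imperfect age signals with an error margin and an appeal path. It is purely illustrative: the signal names, thresholds, and decision logic here are assumptions made for this example, not the actual methods of Meta, Yoti, or any other company, which are not publicly disclosed.

```python
from dataclasses import dataclass

# Hypothetical, illustrative sketch only: real platforms do not disclose
# their age-assurance methods, and these signals and thresholds are invented.

RESTRICTION_AGE = 16


@dataclass
class AgeSignals:
    facial_estimate: float      # estimated age from a selfie-analysis service
    facial_margin: float        # stated margin of error of that estimate, in years
    behavioral_estimate: float  # age inferred from usage patterns (assumed available)
    self_declared: int          # age the user entered at sign-up


def should_restrict(signals: AgeSignals) -> str:
    """Combine independent age signals conservatively.

    Because estimators that avoid collecting ID documents carry a
    'considerable' margin of error, a borderline result is routed to a
    review/appeal path instead of an automatic block, mirroring the
    correction mechanism regulators expect platforms to offer.
    """
    if signals.self_declared < RESTRICTION_AGE:
        return "deactivate_until_16"  # user declared an under-16 age at sign-up

    # Lowest plausible age implied by the facial estimate and its error margin.
    lowest_plausible = signals.facial_estimate - signals.facial_margin
    estimates = [signals.facial_estimate, signals.behavioral_estimate]

    if all(e >= RESTRICTION_AGE for e in estimates) and lowest_plausible >= RESTRICTION_AGE:
        return "allow"                      # confidently 16 or older
    if all(e < RESTRICTION_AGE for e in estimates):
        return "deactivate_until_16"        # confidently under 16
    return "request_additional_verification"  # borderline: offer an appeal path


if __name__ == "__main__":
    borderline = AgeSignals(facial_estimate=16.5, facial_margin=1.5,
                            behavioral_estimate=17.0, self_declared=18)
    print(should_restrict(borderline))  # -> request_additional_verification
```

The design point mirrored here is the regulator’s expectation: a borderline estimate triggers a correction mechanism rather than an automatic block, trading some under-blocking for fewer wrongly restricted over-16s.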
Even so, just as with alcohol and tobacco rules, the restriction is considered worthwhile even if some teenagers evade it. The Australian government concedes the regulation will not be perfect at first and is giving platforms time to refine their blocking measures.
**Debates on Effectiveness**
Opinions are divided on how effective the regulation will be, and much attention is focused on the outcomes after implementation. Anika Wells, Australian Minister for Communications, stressed the need to protect the Alpha Generation (born after 2010) from predatory algorithms. University of Melbourne psychology professor Scott Griffiths expects that keeping under-16s off the major social media companies will meaningfully protect their wellbeing, though others predict users will simply shift to less safe platforms or circumvent the restrictions.
Cambridge University psychologist Amy Orben highlighted the need to evaluate the ban, given the correlations between teenagers’ technology use and mental health problems. The regulated platforms, despite opposing the measure, mostly plan to comply: Meta said it shares Australia’s safety goals but worries about cutting young people off from their communities, while YouTube argued the law could actually increase risks for Australian children on its platform yet agreed to block under-16 logins from the 10th.
**Global Emulation of Australia’s Regulation**
Australia’s example has spurred similar initiatives globally. Denmark plans to block social media access for those under 15, and Malaysia will adopt a similar stance for under-16s next year. New Zealand’s ruling National Party is considering legislation akin to Australia’s, and Spain now requires legal guardian approval for those under 16.
Countries including Norway, Singapore, and Indonesia are exploring similar measures, citing Australia as inspiration. The European Parliament passed a resolution recommending that only those aged 16 and over be allowed to use social media and AI chatbots without parental consent, a move influenced by Australia’s pioneering policy. European Commission President Ursula von der Leyen acknowledged drawing inspiration from Australia’s social media policy, hinting at possible EU-wide regulation.
Julie Inman Grant, the eSafety Commissioner, recently said at an event that Australia’s action marks a turning point and could be the “first domino” in global efforts to regulate big tech.
