Introduction
Australia has taken a significant step in digital regulation: as of December 10, 2025, minors under 16 are banned from using major social media platforms. The move aims to reduce exposure to harmful content and sexual harassment, and to limit adverse effects on mental health during crucial developmental stages.
The New Law and Its Implications
The law requires platforms such as TikTok, Instagram, YouTube, Snapchat, X, Facebook, Reddit, Twitch, Kick, and Threads to deactivate accounts belonging to users under the minimum age or face hefty fines. The Australian Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts is responsible for the restriction.
The law imposes age verification obligations, monthly reports to the eSafety Commissioner, and penalties of up to 49.5 million Australian dollars for platforms that fail to demonstrate reasonable compliance efforts.
Services with educational, professional, or healthcare purposes are exempt and fall outside the definition of restricted platforms.
Why Australia Took This Step
Australia’s decision is backed by scientific evidence linking heavy social media use to a higher prevalence of psychological distress, self-harm, and depression among adolescents. Public opinion surveys show broad support for stricter rules on minors’ social media use.
However, critics argue that these findings do not justify an outright ban without weighing its side effects: the measure may inadvertently push young users towards less regulated online spaces with no protection at all.
Reactions from Major Platforms
Major platforms have responded with a mix of compliance and criticism. Meta and Google announced measures to deactivate accounts of users under 16 and to provide appeal paths or document verification in case of errors. Snap and TikTok acknowledged the need to protect young users but questioned the effectiveness and security of the verification methods, as well as the risk of pushing minors towards less secure corners of the internet.
Technical and ethical criticisms have also emerged. Government-commissioned pilot tests revealed flaws in verification tools, both in accuracy and in respect for privacy. Ensuring age verification without creating massive biometric databases is a technical challenge.
Moreover, mandatory verification may normalize the collection of identity documents and facial images, opening the door to risks such as data breaches and unwanted commercial use. There is also a real danger of diverting users to less regulated apps and forums with no protection.
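To make the technical challenge concrete, here is a minimal sketch, assuming an attestation-based design rather than any mechanism Australia has actually mandated: a hypothetical third-party verifier checks a user’s age once and hands the platform only a signed “over 16” claim, so the platform never receives identity documents or biometric data. The names, key, and functions below are illustrative assumptions, not a real API.

```python
# Minimal, hypothetical sketch of attestation-based age verification.
# A third-party verifier confirms the user's age once and issues a signed
# claim that says only "over 16"; the platform checks the signature and
# never receives identity documents or biometric data.
import hashlib
import hmac
import json
import time

# Illustrative shared secret; a real deployment would use the verifier's
# asymmetric key pair and a standard token format instead.
VERIFIER_KEY = b"demo-secret-key"

def issue_attestation(over_16: bool) -> dict:
    """Verifier side: attest only to the age threshold, retain nothing else."""
    claim = {"over_16": over_16, "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_accepts(attestation: dict) -> bool:
    """Platform side: verify the signature and the threshold, nothing more."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"]) and attestation["claim"]["over_16"]

print(platform_accepts(issue_attestation(True)))   # True: account allowed
print(platform_accepts(issue_attestation(False)))  # False: account blocked
```

Even in a design like this, the verifier itself becomes a sensitive point of trust, which is part of the privacy concern raised by critics and by the pilot tests.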
Public Policy Perspective
From a public policy viewpoint, the ban is a punitive measure in a domain where preventive education and stronger family and school environments are more beneficial paths. Interventions with better potential outcomes include early digital literacy, transparent and auditable parental control tools, regulation targeted at algorithms, and accessible mental health services for youth.
A ban does not eliminate the social appeal of the platforms, nor the incentive to create false accounts or use VPNs to bypass age restrictions.
Global Impact
The ripple effect is global. France’s National Assembly recently approved a measure to ban social media for minors under 15, which now moves to the Senate for ratification. The UK is holding public consultations on a minimum age of 16 for access without parental controls.
These experiences illustrate a regulatory turn towards age-based access restrictions in some democracies. At the same time, Europe faces criticism for a regulatory approach that undermines competitiveness and innovation. The discussion should not be framed as protection versus innovation, but as how to balance the two objectives.
In Latin America, the dominant view remains one of connecting people, enabling fundamental rights, and fostering innovation for economic growth. The priority is closing access gaps and raising the quality of the digital experience before restricting it. A coherent approach combines connectivity, early digital literacy, transparent and auditable parental control policies, regulation focused on algorithms, and accessible youth mental health services.
The debate is not whether to protect children and teens from online risks, but how to do so without creating new institutional harms, pushing them towards greater dangers, or eroding privacy and freedom.
Young people have come to see digital spaces as a refuge from the insecurity and fear of the physical world, with its violence, crime, drugs, and other vices. Adults have neglected authentic social spaces, leaving youth exposed to criminals and insecurity.
According to the National Urban Public Security Survey (ENSU) by INEGI, people feel unsafe at ATMs, on the street, on public transportation, on highways, and in banks, markets, parks, cars, and shopping centers. How can we blame young people for immersing themselves in smartphones and social media when the physical world is perceived as threatening?
The same survey shows that schools are where people feel safest, which points to an opportunity: education.
Key Questions and Answers
- What is Australia’s new law about? Australia has banned minors under 16 from using major social media platforms to protect them from harmful content and mental health risks.
- Which platforms are affected? TikTok, Instagram, YouTube, Snapchat, X, Facebook, Reddit, Twitch, Kick, and Threads must enforce age restrictions or face penalties.
- Why did Australia implement this ban? The decision is based on scientific evidence linking intense social media use to mental health issues among adolescents and public support for stricter measures.
- How are major platforms responding? Platforms such as Meta, Google, Snap, and TikTok are complying with the law while raising concerns about its effectiveness and potential flaws.
- What are the global implications? Other countries like France and the UK are considering similar age-based social media restrictions, sparking a global debate on digital regulation and its balance with innovation and fundamental rights.