Political momentum to introduce age restrictions on social media is intensifying across Europe. Several European Union member states are advancing national initiatives, reflecting growing concerns about the impact of social media on minors. France, Denmark, Greece and Spain are among the countries that have announced or introduced measures to restrict social media use by children and young people. At the European supranational level, the debate is also gaining momentum. The European Union has set up a panel of child protection experts to advise on possible EU-wide age restrictions, with recommendations expected by the end of the summer.
As the discussion becomes more prevalent in everyday media discourse, Germany is working towards its own solution. In this context, it seems worthwhile to look at other models in order to understand what has worked, what challenges have emerged, and what lessons can be drawn for European policymaking. To explore these questions, Interface invited Tom Sulston, Head of Public Policy at Digital Rights Watch in Australia, a non-profit organization that closely monitors and engages in internet policymaking in the country, to share his insights in a background discussion.
Australia introduced an age restriction on certain social media platforms for children and teenagers under the age of 16 at the end of last year (10 December 2025). The measure forms part of a broader Online Safety Amendment Act (Online Safety Amendment (Social Media Minimum Age) Bill 2024 – Parliament of Australia), which requires ten major social media platforms, including Meta services such as Instagram and Facebook as well as others such as Snapchat and TikTok, to comply with the new regulations.
Implementation of the Bill: How Did We Get There?
According to Tom Sulston, the bill was effectively “smashed” through both houses of Parliament, the Senate and the House of Representatives, within just eight days (21–28 November 2024). The public consultation was reportedly very short, and civil society organisations were given limited time to respond, with submissions restricted to no more than one page. Critics argue that this process reflects a lack of thorough policy consideration and meaningful stakeholder engagement.
Tracing the historical development of the so-called “social media ban” in Australia, Sulston explained that the idea originated among state politicians and was amplified by commercial campaigns such as the Murdoch press’s “Let Them Be Kids” initiative and the Commercial Radio “36 Months” campaign, which initially framed the proposal as an online anti-bullying measure.
The measure also proved widely popular among parents: “They wanted their kids off their phones,” as Tom Sulston explained. Combined with this broad public support and increasing political pressure to “act now,” the policy quickly gained momentum. In this climate of consensus and urgency, the proposal became not only a regulatory response but also a politically attractive campaign issue for elected officials.
Participation of Children and Young People
One of the central questions asked during the Interface webinar was whether children and young people had been meaningfully involved in the development of the bill. Formally, there was some level of participation, as Tom Sulston put it: “But functionally, it was close to zero.” During the short consultation period, a number of young people submitted contributions, but the limited timeframe and strict submission constraints significantly reduced the depth of engagement. The eSafety Commissioner did engage AYAC to lead consultations with children and young people; the resulting report summarises virtual consultations with 53 children and young people aged 13 to 23 in July and August 2025.
It is worth noting, however, that youth representatives were not calling for a blanket ban. The eSafety Youth Council, for example, advocated for stronger safety features, improved moderation, and better digital literacy training rather than exclusion from platforms altogether (eSafety Youth Council): “For most young people, social media is a tool to exercise these rights.”
Reflecting on all these circumstances, Tom Sulston called the social media ban “a victory of politics over policy.”
The Effects of Social Media Age Bans
At this stage, there is only limited empirical evidence regarding the societal effects of the ban, as its introduction dates back only two months. It remains unclear whether the measure has reduced harmful use of social media or improved the well-being of young people.
Although media reports suggest that around five million accounts have been deleted, this figure alone reveals little. It does not indicate whether overall usage has actually declined, whether young people have migrated to alternative platforms, or whether they are circumventing age-verification systems.
Nevertheless, according to Tom Sulston from Digital Rights Watch, several preliminary conclusions can already be drawn. All age verification methods currently used in Australia face challenges regarding accuracy. Estimates suggest that systems struggle to reliably distinguish between a 15- and a 16-year-old. This creates a high risk of misclassification, potentially resulting in users being wrongly excluded from platforms. In addition, there are privacy concerns related to the large amount of personal data that must be processed to determine a user’s age. On the one hand, this creates risks regarding the protection of such data; on the other hand, it may have a chilling effect that discourages users from using certain services at all.
According to Sulston, vulnerable groups in particular may be disproportionately affected. Young people in remote regions—including Aboriginal communities in Australia—often rely heavily on digital platforms to maintain social connections. Likewise, LGBTQ+ teenagers frequently use online spaces to find support networks that may not exist locally. For teenagers with disabilities, digital platforms can also provide essential opportunities for communication and participation. For some young people, online spaces are not an optional addition but rather essential for social inclusion.
There is also a concern that, as a consequence of exclusion from established platforms, young people may migrate to less regulated or less visible parts of the internet. In such spaces there may be fewer safeguards, less moderation, and reduced oversight. Moving away from environments that are at least partially regulated and visible could therefore reduce transparency regarding the content and risks young people encounter online.
Platforms and Responsibility: A Real Change?
According to the eSafety authorities, social media platforms were initially reluctant to accept stricter obligations. Ultimately, however, many participated in co-creation processes and voluntary codes of conduct and formally committed to implementing the new rules. Critics argue that enforcement has so far remained relatively lenient. The Australian eSafety authorities have been described as comparatively generous in their approach and reluctant to impose strong sanctions in cases of non-compliance, as Tom Sulston notes.
From a business perspective, platforms may have limited incentives to strongly oppose the ban. Advertising models are often more profitable when targeting adults, who generally have greater purchasing power and can be more precisely micro-targeted. In this sense, restricting younger users does not necessarily undermine the core revenue structures of these platforms.
Looking Ahead: Will This Lead to Real Change?
A broader criticism is that the ban does not address the structural problems associated with social media. Many risks—such as misinformation, addictive design, or exposure to harmful content—affect adults as well as minors. Excluding users under the age of 16 does not fundamentally reform platform architecture or the underlying business models. Moreover, the regulation implicitly assumes that 16-year-olds suddenly become capable of managing digital risks independently, without necessarily providing stronger digital literacy education or structural safeguards.
Beyond the age ban itself, the broader issue concerns the structural power of large platforms. These companies exert significant influence over the public sphere by shaping information flows, public debate, and social interaction. If the objective is genuinely to reduce harm to children, Sulston argues, the focus should shift away from age-based exclusion and toward systemic reform.
Current Developments in Germany
An important reference point in the German debate is the paper published in summer 2025 by the Deutsche Akademie der Naturforscher Leopoldina (the German National Academy of Sciences Leopoldina), which has provided a key foundation for the ongoing discussion.
In autumn 2025, the Bundesministerium für Bildung, Familie, Senioren, Frauen und Jugend (the Federal Ministry of Education, Family Affairs, Senior Citizens, Women and Youth) appointed an expert commission tasked with developing a comprehensive strategy to better protect children and young people in the digital world.
The commission has emphasized the importance of allowing sufficient time for societal and political deliberation. A central element of this process is the meaningful inclusion of the perspectives of children and young people when developing recommendations for future regulatory approaches, in line with Article 12 of the Convention on the Rights of the Child. At the same time, the governing parties, the Sozialdemokratische Partei Deutschlands (SPD) and the Christlich Demokratische Union Deutschlands (CDU), have presented proposals advocating the introduction of a legal minimum age of 14 for access to social media platforms. They have also called for stricter age-verification mechanisms.
Concurrently, General Comment No. 25 on children’s rights in the digital environment and the Digital Services Act (DSA) place responsibility on social media providers to offer safe services for young users. Platforms are subject to clear requirements regarding risk assessment, child protection, and the mitigation of harmful content. Guidance on implementation is provided, among others, by the European Commission’s guidelines on Article 28 of the DSA, in line with the principles set out in General Comment No. 25. As stated in paragraph 56: “States parties should ensure that digital service providers comply with relevant guidelines, standards and codes and enforce lawful, necessary and proportionate content moderation rules.”
Possible age-verification mechanisms should be designed in a privacy-preserving, proportionate, and user-friendly manner without excluding users. In this context, the European Commission is promoting the introduction of the EUDI Wallet across Member States.
Overall, there is a clear need to develop solutions that balance protection and participation. Children and young people should be supported in their development while also being able to participate safely and autonomously in the digital world.