Ten recommendations to the European Commission to really protect children online

This is a summary of our policy recommendations to the European Commission. The Commission has expanded the very short Article 28 of the Digital Services Act into a set of Guidelines that clarify how it will interpret Article 28 when implementing the law.

Date: 4 June 2025
The Commission’s draft guidelines are a welcome effort to clarify Article 28 of the Digital Services Act, which concerns minors and their privacy, safety and security.

However, they fall far short of responding to the documented negligence of dominant online platforms, and fail to reflect the urgency, severity, and scale of harm these companies have enabled or ignored.

Our policy demands to the European Commission are:

  1. Adopt a Europe-wide age threshold of 16, at least until there is independent verification that these guidelines (which need further improvement, as set out in this document) have been fully implemented and that young people are no longer being harmed. The guidelines mention an age limit only for access to pornographic and gambling sites, which they set at 18. For every other harm that platforms cause children and teens (such as allowing content that promotes eating disorders, child grooming, or illegal activities), the platforms themselves are trusted to decide the age limit. We believe it is not up to industry to set that limit: there should be a minimum Europe-wide age threshold of 16 for access to platforms with documented harm to child wellbeing, in line with scientific evidence on cognitive development, vulnerability and impulse control. The threshold could be reviewed on the condition that all the guidelines described below have been adopted and implemented by Very Large Online Platforms (VLOPs) and have proven to work.

  2. Include a full overview of the documented harms that VLOPs cause to children, going well beyond pornography and gambling. Other forms of harmful content, including content promoting violence, self-harm, disordered eating, or discriminatory behaviour (e.g. misogyny or anti-LGBTQI rhetoric), pose well-documented mental health risks to minors and need to be included. In addition, the observed impacts on cognitive development (attention, reduced vocabulary) and on physical health (eye health, sedentary lifestyles) must be included as risks. Online platforms must consider not only risks to individual children but also systemic risks such as the normalisation of surveillance, commercial pressure, and mental health deterioration.

  3. Demand timely publication of risk assessments in accessible, machine-readable formats. The guidelines currently include no requirement for platforms to publish their risk assessments, leaving it up to the platforms to decide what to disclose and allowing them to conceal risks from the public. Public disclosure of these assessments should be mandated so that regulators, civil society, and other stakeholders can evaluate platforms' adherence to the guidelines and their efforts to mitigate risks to minors. Risk disclosures should follow a standardised reporting format and a mandated reporting timeline, and include essential indicators (an illustrative sketch of such a format follows this list). This enables independent analysis and comparison across platforms, facilitates monitoring, and supports evidence-based policy decisions.

  4. Establish independent oversight to ensure the accuracy and completeness of risk assessments. Self-regulation alone does not lead to adequate protection, particularly when commercial interests are involved. External review introduces a verification layer that strengthens both compliance and public trust.

  5. Explicitly ban all manipulative design, including infinite scroll, autoplay, Snapstreaks, and algorithmic content loops, for under-18s. Such persuasive design features are widely acknowledged to expose minors to excessive screen time and, at times, harmful content. The guidelines should provide clear definitions of these design features and explicitly prohibit their use for minors. Prohibiting them for all users would be better still.

  6. Require public, independent child rights impact assessments. Platforms operate within a commercial framework where user engagement often translates directly into profitability. This creates a conflict of interest when implementing safety measures that will, by definition, reduce engagement. The guidelines should explicitly address this by requiring platforms to demonstrate how they prioritize child safety over commercial goals, with regular reporting and independent reviews to ensure compliance. Platforms should be obliged to submit child safety impact assessments, which should be independently reviewed and made publicly available.

  7. Set up enforcement mechanisms (e.g. fines, shutdowns, legal liability) so that compliance is not left to platforms' goodwill. Effective regulation depends not only on clear rules but also on credible deterrents. Without enforceable mechanisms such as fines, legal liability, or operational restrictions, compliance with the guidelines remains voluntary, and voluntary frameworks have historically proven insufficient in driving meaningful change, particularly where compliance conflicts with commercial interests. Clear penalties for non-compliance are essential to ensure that platforms prioritize the implementation of safety measures; the absence of sanctions reduces their incentive to invest in systematic risk mitigation. A tiered enforcement model, including administrative penalties and operational restrictions, should be considered. Individual children and parents should be able to hold online platforms to account in court for harm done.

  8. Set specific requirements for the safe development, testing, and deployment of AI tools on platforms accessible to minors, along with independent oversight to verify compliance. AI tools such as chatbots and recommendation systems introduce unique risks for society as a whole, and even more so for children. These tools can influence minors' behaviour and decision-making, especially when their functionality is inadequately tested or lacks transparency. AI tools that interact with minors should be subject to pre-launch testing, explainability requirements, and human oversight procedures. AI tools that are currently accessible to minors must be immediately banned until such testing has been completed. A risk classification system for AI tools, based on their potential to influence behaviour, could help tailor regulation appropriately.

  9. Extend protective obligations to the broader tech ecosystem, including device manufacturers, telecom providers, and education tech platforms. The guidelines narrowly focus on online platforms, ignoring the wider digital ecosystem that shapes children’s online environments. Yet device manufacturers, telecom providers, and edtech platforms also play a significant role in children's access to and experience of digital services, and must share responsibility for risk mitigation.
     a) Device manufacturers should be required to offer child-specific settings, including pre-installed safety tools, usage tracking, and meaningful parental controls by default. Operating systems should enable age-based restrictions at system level, not just at the level of individual apps.
     b) Telecom providers should offer child-friendly service plans, restrict unfiltered access to harmful content by default, and provide clear, accessible tools for guardians to manage children’s data use, screen time, and app access. The current commercial model, which bundles always-on mobile data with no meaningful safeguards, must be reformed.
     c) Educational technology platforms, widely used in schools, should be subject to the same standards of risk assessment and safety-by-design requirements as social platforms. These tools often include chat functions, third-party integrations, or behavioural analytics that can pose privacy and safety risks to minors, yet currently operate with little oversight.
     An integrated approach is needed. Safety must be embedded across the entire digital value chain, not delegated solely to front-facing platforms. The Commission should clarify in the guidelines that all entities enabling access to or interaction with online environments used by children bear proportionate, enforceable obligations to protect child users.

  10. Incorporate parent, child, and expert input into platform governance, oversight boards, and evaluation cycles. The guidelines refer to the importance of child rights but fail to establish a mechanism through which children and their caregivers are meaningfully involved in shaping platform governance or evaluating the success of safety measures. This is a critical omission. Platforms must be required to:
     a) Engage with parents and educators in structured focus groups and feedback panels, especially when changes affect how guardians can monitor or support their children online.
     b) Integrate child development experts and independent civil society representatives in safety advisory boards and content moderation policy review processes.
     c) Consult with children and young people on platform design, safety tools, and policy changes, using age-appropriate participatory methods.
     d) Publicly document how stakeholder input has been incorporated into decisions, safety assessments, and updates to platform features or policies.
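
To make recommendation 3 more concrete, the sketch below shows one hypothetical shape a standardised, machine-readable risk disclosure could take. It is an illustration only, not a proposed standard: every field name, value, and URL in it is an assumption made for the example.

```python
# Illustrative sketch only: a hypothetical, machine-readable risk-disclosure
# record of the kind recommendation 3 calls for. All field names and values
# are assumptions for the purpose of illustration, not a proposed standard.
import json
from dataclasses import dataclass, field, asdict
from typing import List


@dataclass
class RiskIndicator:
    """One 'essential indicator' reported on a fixed schedule."""
    name: str             # e.g. share of minors exposed to flagged content
    value: float
    unit: str             # e.g. "percent", "reports_per_month"
    methodology_url: str  # public description of how the figure was produced


@dataclass
class RiskDisclosure:
    """A single platform's periodic risk-assessment disclosure."""
    platform: str
    reporting_period: str            # e.g. "2025-Q2"
    published_on: str                # ISO date, tied to a mandated timeline
    identified_risks: List[str]      # harms identified in the assessment
    mitigation_measures: List[str]   # measures taken and their status
    indicators: List[RiskIndicator] = field(default_factory=list)
    independent_auditor: str = ""    # who verified the assessment (rec. 4)


if __name__ == "__main__":
    disclosure = RiskDisclosure(
        platform="ExamplePlatform",
        reporting_period="2025-Q2",
        published_on="2025-07-15",
        identified_risks=["content promoting disordered eating"],
        mitigation_measures=["age-appropriate recommender settings"],
        indicators=[RiskIndicator(
            name="minor_exposure_to_flagged_content",
            value=2.4, unit="percent",
            methodology_url="https://example.org/methodology")],
        independent_auditor="Example Audit Body",
    )
    # Machine-readable output that regulators, researchers, and civil society
    # could compare across platforms and over time.
    print(json.dumps(asdict(disclosure), indent=2))
```

A common structure of this kind, published on a fixed schedule, is what would allow the independent analysis and cross-platform comparison that recommendation 3 describes.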