Flemish Government Sets Binding Age 13 Minimum for Harmful Social Media Platforms in Bold Bid to Protect Children

Brussels, April 4, 2026 – The Flemish government has taken a decisive step to shield young people from the risks of social media by introducing a legally binding minimum age of 13 for access to platforms considered harmful to minors. The new decree, announced on Friday, aims to force technology companies to implement robust age verification systems, marking one of the most concrete regional efforts in Europe to tackle the growing concerns surrounding children’s online safety.

Under the updated rules, platforms such as TikTok, Snapchat, Instagram, and others placed on an official government list of “harmful social media” will be required to prevent users under 13 from creating accounts or accessing their services. While many platforms already state in their terms of service that users must be at least 13 years old — in line with global standards influenced by regulations like the U.S. Children’s Online Privacy Protection Act (COPPA) — enforcement has been notoriously weak, with children routinely bypassing restrictions through false declarations or parental devices.

Flemish authorities acknowledged this enforcement gap as the driving force behind the new measure. “The trend is clear: more and more people are deeply concerned about the impact these platforms have on our society and, in particular, on our children,” said Flemish Media Minister Cieltje Van Achter. She warned that major tech companies must take meaningful steps to protect young users or risk being barred from the Flemish market. “Big tech has a responsibility, and we are prepared to enforce it,” she added.

The decision builds directly on the “Safe Online” action plan approved by the Flemish government late last year. Initially, policymakers had considered a stricter ban on social media use until age 16 but ultimately opted for a more targeted approach: mandatory age verification combined with restrictions on addictive design features such as infinite scrolling, autoplay videos, and algorithm-driven content feeds that experts link to mental health issues among adolescents.

The new decree empowers the government to maintain an official list of platforms subject to the rules. Compliance will be monitored closely, with potential fines, operational restrictions, or outright removal from the Belgian market for repeated violations. Officials from the Flemish Department of Media emphasized that the measure applies specifically to platforms deemed “harmful,” focusing on those with high engagement among minors and proven risks rather than all digital services.

Growing public and scientific debate over social media’s impact on youth has intensified in Flanders and across Belgium. Studies have linked heavy social media use among children to increased rates of anxiety, depression, cyberbullying, exposure to explicit content, and disrupted sleep patterns. Recent reports from Flemish youth organizations highlighted cases of unsafe interactions with strangers, the spread of misinformation, and the influence of harmful challenges or trends that have led to real-world harm.

“Age verification is no longer optional — it is essential,” said Dr. Liesbeth Vranken, a child psychologist at Ghent University who advised on the Safe Online plan. “We have seen enough evidence that current self-regulation by platforms fails our children. A binding age limit with real technical enforcement can change that.”

The Flemish initiative comes amid a broader European push to regulate digital platforms more stringently. The European Union’s Digital Services Act (DSA) already imposes obligations on very large online platforms to assess and mitigate systemic risks to minors, but enforcement varies by member state. Belgium’s regions — Flanders, Wallonia, and Brussels-Capital — each hold significant authority over media and youth policy, allowing Flanders to move faster than the federal level.

In neighboring countries, similar debates are underway. France has experimented with age verification pilots, while the Netherlands and Ireland have pushed for stronger parental controls. The United Kingdom’s Online Safety Act sets ambitious targets for protecting children, including requirements for platforms to prevent underage access to certain content. Flanders’ approach stands out for its explicit minimum age enforcement and threat of market exclusion.

Tech industry representatives have expressed cautious support mixed with practical concerns. A spokesperson for a major platform operating in Belgium noted that effective age verification remains technically challenging without compromising user privacy. Methods under consideration include facial age estimation, government-issued digital IDs, or biometric checks, all of which raise data protection questions under the EU’s strict GDPR rules.

Civil liberties groups have also voiced reservations. While supporting child protection, organizations such as the Belgian League for Human Rights warn that overly rigid systems could lead to excessive data collection or exclude vulnerable young people who rely on social media for support networks. “The goal is noble, but implementation must respect privacy and avoid creating new digital divides,” said one advocate.

For parents and educators in Flanders, the announcement has been largely welcomed. School associations reported that many teachers struggle daily with the consequences of unregulated social media use — from classroom distractions to mental health crises among students as young as 10. “Finally, we have a government taking concrete action instead of just talking about the problem,” said Marieke Dubois, president of a Flemish parents’ federation.

The decree will now enter a formal adoption process before taking full legal effect. Authorities plan a phased rollout, giving platforms time to adapt their systems while preparing public awareness campaigns to inform families about the new requirements. The government also intends to invest in digital literacy programs to help children develop healthier online habits once they reach the permitted age.

This move reflects a growing consensus that leaving children’s online safety solely to voluntary industry standards has failed. With smartphones in the hands of ever-younger users and artificial intelligence making content more addictive and personalized, regional governments like Flanders are stepping up where national and European efforts have been slower to deliver tangible results.

Critics of big tech argue that platforms have profited enormously from underage users while shifting responsibility to parents. Minister Van Achter’s firm language signals a new era of accountability: platforms that refuse to invest in credible age gates may lose access to one of Europe’s wealthiest and most digitally connected regions.

As the Flemish government leads by example, eyes across Belgium and Europe will watch closely. Success could inspire similar legislation in Wallonia and Brussels, potentially creating a unified Belgian front on children’s digital rights. Failure, or weak enforcement, could embolden critics who claim such rules are unenforceable in a borderless internet.

For now, the message from Flanders is unmistakable: protecting the mental health and development of the next generation outweighs the convenience of unchecked social media access. The coming months will test whether technology giants are prepared to meet this new standard or continue business as usual at the expense of children’s wellbeing.

Cherriton David

I hold a doctorate in Mass Communication from the University of Benin. I love engaging with entertainment, politics, and trending news from around the world. I am a movie addict and a die-hard Arsenal fan.

