April 29, 2026 · 4 min read

Europe’s Meta case puts age checks back at the center of platform regulation

The European Commission says Meta is not doing enough to keep under-13 users off Instagram and Facebook. The finding raises the pressure on every large platform relying on self-declared ages.


The European Commission has preliminarily found that Meta is breaching the Digital Services Act (DSA) by failing to stop children under 13 from using Instagram and Facebook.

The case is not just about Meta. It is a signal that Europe wants large platforms to treat age assurance as a core safety system, not a checkbox in the signup flow.

The finding in plain English

The Commission says Meta’s own terms set 13 as the minimum age for Facebook and Instagram, but that the company does not enforce that rule effectively. In practice, a child can enter a false birth date and slip through without encountering what regulators would consider effective controls.

The EU also says Meta’s tools for reporting underage users are hard to use and often do not lead to prompt removal. The preliminary finding argues that this fails the DSA requirement for very large online platforms to identify, assess and mitigate systemic risks.

Meta can respond before the Commission reaches a final decision. If the EU confirms non-compliance, penalties can reach up to 6 percent of global annual turnover. The Verge notes that, based on Meta’s reported 2025 revenue, the theoretical maximum could be roughly $12 billion.
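
As a back-of-the-envelope check on that ceiling: the DSA caps fines at 6 percent of global annual turnover, so a roughly $12 billion maximum implies annual revenue on the order of $200 billion. The sketch below simply restates that arithmetic; the revenue figure is an assumption inferred from the reported maximum, not an official number.

```python
# Rough sketch of the DSA fine ceiling: up to 6% of global annual turnover.
# The ~$200B revenue figure is an assumption implied by the reported ~$12B
# theoretical maximum, not an official Meta disclosure.
assumed_global_revenue_usd = 200e9   # hypothetical annual turnover
dsa_fine_cap_rate = 0.06             # DSA maximum: 6% of global annual turnover

theoretical_max_fine = assumed_global_revenue_usd * dsa_fine_cap_rate
print(f"Theoretical maximum fine: ${theoretical_max_fine / 1e9:.0f} billion")  # ~$12 billion
```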

Why this is harder than it sounds

Age checks are one of the messiest problems in consumer tech. A platform can ask users to declare their birth date, but that is easy to evade. It can demand identity documents, but that raises privacy, security and exclusion concerns. It can infer age from behavior, but that creates another layer of opaque profiling.

That tension is why this finding matters. The EU is not merely saying that Meta needs a better warning screen. It is pushing the company toward a more reliable age-assurance system that still respects European privacy expectations.

For Meta, the practical challenge is scale. Instagram and Facebook serve hundreds of millions of users in Europe and many more worldwide. Any stricter age gate will create false positives, appeals, parent complaints and friction for legitimate users. Regulators are effectively saying that friction is now part of the cost of running social networks for minors.

The platform-wide ripple effect

Other large services should read this as a preview. TikTok, YouTube, Snapchat, gaming platforms and messaging apps all face versions of the same issue: children want access, parents have mixed expectations, and regulators are no longer satisfied with self-certification.

The DSA gives Europe a lever to move from general child-safety criticism to specific operational demands. That means compliance teams will need evidence: how many underage accounts were detected, how reports were handled, what risk models were used, and whether the platform changed its product after finding gaps.

The next fight will be over what counts as “effective” age assurance. If the bar becomes too low, the rule changes little. If it becomes too high, platforms may collect more sensitive data than users want to hand over.

What to watch next

Meta will likely argue that it already uses a mix of signals, reporting tools and teen-safety features, and that blunt age verification could create privacy tradeoffs. The Commission’s press release suggests those arguments have not yet convinced investigators.

The useful takeaway is simple: age gates are becoming infrastructure. Platforms that treat them as a legal formality are going to have a hard time in Europe. The companies that solve this well will not be the ones that collect the most IDs. They will be the ones that can prove, with minimal data and clear audit trails, that their safety systems actually work.