In the shadow of Capitol Hill, where lawmakers debate the future of online child safety, a damning new report reveals that Instagram’s much-touted teen safety features are largely ineffective, with 64% of them either no longer available or easily circumvented.

The report, “Teen Accounts, Broken Promises,” released by former Meta engineering leader Arturo Béjar alongside researchers from Cybersecurity for Democracy, Fairplay, and the Molly Rose Foundation, systematically tested 47 safety features that Meta claims protect young users on Instagram.

“Time and time again, Meta has proven they simply cannot be trusted,” write Maurine Molak and Ian Russell in the report’s foreword. Both lost their teenage children to suicide after harmful experiences on Instagram.

The researchers found that despite Meta’s public assurances, Teen Accounts still received recommendations for sexual content, violent videos, and material promoting self-harm. Adult strangers could easily contact teens, and reporting tools for inappropriate content were cumbersome and ineffective.

“With only 1 in 5 of its safety tools working effectively and as described, many may conclude that its rollout of Teen Accounts has been driven more by performative PR than by a focused and determined effort to make Instagram safe for teens,” the authors write.

Meta has rejected the findings, claiming the report “repeatedly misrepresents” its safety efforts.

Why It Matters

The report comes at a critical moment as Congress considers the Kids Online Safety Act (KOSA), which would create a duty of care for social media companies to protect minors from harmful content and design features.

Between the Lines

Meta launched Teen Accounts last September, just before a key congressional hearing on KOSA. That timing suggests the company may be more focused on avoiding regulation than on implementing effective safety measures.

The researchers used “red team” testing methods common in cybersecurity to evaluate each safety feature, finding that many tools that appeared protective on paper failed in practice.

The Big Picture

The report reveals a troubling pattern: Instagram’s algorithms recommended content featuring children who claimed to be as young as 6, and the platform’s design appeared to reward young users who posted sexualized content with dramatically higher view counts.

“Videos where girls of this age group raised their shirts to show their bellies attracted tens of thousands to over a hundred thousand Views, far in excess of the usual Views generated by their posts,” the report states.

For parents who believed Teen Accounts would shield their children from harmful content, the findings suggest a false sense of security. The report concludes that “children are not safe on Instagram.”

The Sources

  • “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors” report by Arturo Béjar, Cybersecurity for Democracy, Fairplay, and the Molly Rose Foundation
  • Meta’s public statements on Teen Accounts and safety features
  • Congressional hearings on the Kids Online Safety Act
