A newly disclosed Meta survey has revealed alarming figures about Instagram use among young teenagers. According to court filings made public as part of a federal lawsuit in California, 19 percent of Instagram users aged 13 to 15 reported seeing nude or sexual images they did not want to view on the platform. The revelations raise fresh concerns about online safety and how social media companies protect underage users.
The disclosure emerged from a 2021 survey of Instagram users that was referenced in portions of a March 2025 deposition by Instagram head Adam Mosseri. That survey asked teens directly about their experiences on the platform, rather than analysing the content posted there.
What the Numbers Show
About 1 in 5 young teens told Meta that they encountered nudity or sexual images they did not want to see. The figures highlight the challenges social media platforms face in controlling what appears in users’ feeds, especially when the content is shared privately rather than publicly.
The survey also found that roughly 8 percent of users in the same age group said they had seen someone harm themselves or threaten to do so on Instagram. These findings have been cited in legal battles accusing Meta of allowing harmful and addictive content to spread among young users.
In response to the figures, Meta spokesperson Andy Stone emphasised that the explicit images cited in the survey were self-reported by users and were not derived from a review of Instagram’s content itself. The company says it has since taken steps to address harmful material.
Platform Steps and Safety Efforts
In late 2025, Meta updated its policies regarding content visible to teen users. The company said it would remove images and videos “containing nudity or explicit sexual activity, including when generated by AI,” with exceptions made only for educational or medical content. Meta said the change was intended to limit its youngest users’ exposure to harmful material.
However, experts note that the way content is distributed on platforms like Instagram often makes moderation difficult, especially when explicit material appears in private messages or direct chats. According to safety guidelines published by Meta, the company uses a combination of automated tools and human reviewers to detect and remove harmful content, particularly around sextortion and other abusive behaviour. These efforts are part of broader campaigns to protect children online.
Challenges of Monitoring Teen Experience
The data from the 2021 survey is significant because it reflects experiences reported by users themselves, exposing a gap between what teens encounter and what platform moderation catches. Teens can be targeted with explicit material through direct messages, public posts and algorithmically recommended content, and practices such as sexting, in which explicit images or messages are sent via digital platforms, add to that exposure.
Sociologists and digital safety advocates say that teenagers are especially vulnerable online because they are still developing social and emotional skills and may not recognise harmful content until after exposure. Even with privacy settings and age-based limits, teens can encounter content that affects their wellbeing.
Legal and Social Implications
Meta is facing thousands of lawsuits in federal and state courts in the United States that allege its social networks contribute to mental health problems among minors and fail to protect young users from exposure to harmful content. These legal challenges could shape how social media companies regulate content in the future.
Advocates call for stronger safeguards, clearer reporting mechanisms, better parental controls, and more robust moderation of private messages as part of a comprehensive strategy to keep teens safer online.
