
Former Meta employees testified before a U.S. Senate subcommittee on September 9, 2025, alleging the technology giant systematically suppressed internal research that revealed significant harms to young users on its virtual reality (VR) platforms. The testimony, supported by detailed disclosures from six current and former employees represented by Whistleblower Aid, paints a picture of a corporate culture that prioritized profit and the avoidance of regulatory scrutiny over user safety. These allegations extend beyond VR to Meta’s Marketplace and Dating products, suggesting a company-wide pattern of behavior. The whistleblowers have filed their evidence with Congress, the Securities and Exchange Commission (SEC), and the Federal Trade Commission (FTC), potentially opening the door to regulatory and legal action.1,2,3,4
Direct Interference and Research Manipulation
The core of the allegations centers on direct interference by Meta’s legal and management teams to prevent the collection and dissemination of damaging data. According to The Washington Post, researchers were explicitly instructed to avoid studying or collecting data on users under the age of 13, despite internal knowledge that underage children were actively using VR products.1 One whistleblower, referred to as “Charlie,” was reportedly told by Meta’s director of VR research, Tim Loving, to “swallow that ick” after expressing discomfort with this directive. In a more extreme case, a researcher (whistleblower “Alpha”) was allegedly told by Meta’s legal team not to record data from participants who discussed harms and to delete any such information if it was captured. A manager even insisted on running a VR study through a third-party vendor specifically to create a buffer for erasing what was termed “risky data.”1,4 According to the whistleblowers, this systematic suppression was designed to give company leadership plausible deniability in the wake of previous leaks.
Censorship of Findings and Project Salsa
Beyond suppressing data collection, Meta is accused of actively censoring and manipulating research findings before they could be finalized. Surveys designed to study VR harms faced heavy restrictions, including the removal of questions about emotion, well-being, and psychological harm.1 User responses detailing experiences of sexual harassment and propositioning were edited or removed from final research reports. Kristen Zobel, a member of Meta’s legal team, reportedly justified these restrictions by stating that the company did not want data showing “psychological and emotional harm” to exist if Meta were audited, citing public opinion and previous “leaks.”1 Concurrently, whistleblowers revealed an internal effort codenamed “Project Salsa,” reportedly so named because the project was considered “spicy,” that is, likely to draw regulatory scrutiny. The project aimed to lower the minimum age for Meta’s VR platforms from 13 to 10 years old, even as the company was allegedly suppressing evidence that children under 13 were already being harmed on the platform.4
Specific Incidents of Harm and Independent Corroboration
The allegations are supported by specific, documented incidents of harm to minors. In one interview with a German family, a teenage boy reported that his brother, who was under 10, was frequently approached by strangers in VR and had been “sexually propositioned” by adults; the researcher was allegedly told to delete this information.1 In another study on unwanted interactions, a young girl reported being solicited to “kiss” another user in a VR environment.4 These internal claims are corroborated by independent testing conducted by the U.S. PIRG Education Fund in 2023, which found that even on “junior accounts” designed for users aged 10 to 12, inappropriate content was easy to access. A test “10-year-old” avatar was quickly placed in a game of Russian roulette with real players recommended by the app’s own algorithm.5 This independent corroboration lends significant credibility to the whistleblowers’ claims about the platform’s dangers.
Political and Regulatory Fallout
The Senate testimony prompted strong reactions from lawmakers. Senator Marsha Blackburn (R-Tenn.) accused Meta of silencing employees, burying evidence, and using kids as “pawns to line their pockets,” calling it a “disgusting web of lies.”3,4 Senator Richard Blumenthal (D-Conn.) called the metaverse a “cesspool, filled with pedophiles, exploiters, groomers, traffickers” and said Meta took the “wrong lesson” from the Frances Haugen disclosures by choosing to suppress truth-telling rather than address harms.3,4 Both senators, along with others including Senator Amy Klobuchar (D-Minn.), urged passage of the Kids Online Safety Act (KOSA).3 The scandal has also renewed scrutiny of Meta’s approach to child safety in AI, with Senator Ed Markey (D-Mass.) sending a letter to CEO Mark Zuckerberg accusing the company of ignoring warnings about its AI chatbots and urging Meta to cut off minors’ access to them.6
Meta’s Response and Broader Implications
Meta has issued a firm denial of the allegations. Spokesperson Andy Stone dismissed them as “nonsense” and a “false narrative” built on “selectively leaked internal documents.”1,3 Stone stated that Meta has approved 180 studies related to Reality Labs, its VR division, since 2022, including research on youth safety, and asserted there was “never any blanket prohibition on conducting research with young people.”3 Another spokesperson, Dani Lever, added that Meta has introduced features to limit unwanted contact in VR and has provided parental supervision tools.3 Despite these denials, the allegations continue a persistent pattern of whistleblower disclosures and congressional hearings confronting Meta over child safety. The detailed filings with regulatory bodies such as the FTC and SEC point to prolonged legal and regulatory challenges for the company, and they significantly increase pressure for the passage of comprehensive online safety legislation.