Reports Mount as Users Generate Disturbing Videos of Children with OpenAI's Sora 2

Users are bypassing OpenAI's safety filters in Sora 2 to create disturbing videos of AI-generated children. Reports show a rise in "fake commercials" with hidden fetish themes and violent scenarios, prompting bans and calls for stricter legislation.

Yasiru Senarathna · 2025-12-25

Since OpenAI released its advanced video generation model, Sora 2, on September 30, 2025, the platform has come under immediate scrutiny over its safety measures. Verified reports indicate that users are exploiting the tool to create disturbing, fetish-oriented content featuring AI-generated children, bypassing the company's safety filters.

Exploitation of "Cameos" and Fetish Content

One of the primary drivers of this trend is the new "Cameos" feature, which allows users to insert specific faces and characters into AI-generated scenes. Within days of the app's launch, reports emerged of users creating "fake commercials" that depict children interacting with adult novelty items.

According to a report by Folio3 AI, these videos often feature AI-generated children playing with rose-shaped water toys, items frequently associated with adult products on social media. The videos reportedly depict the toys squirting "sticky milk" or "white foam" onto the children. While the content is not explicitly pornographic, it uses specific visual cues and captions designed to signal sexual themes to niche audiences. These videos subsequently migrated to platforms like TikTok, prompting a moderation crackdown.

Watchdog Group Finds Loopholes for Violence

In addition to fetish content, researchers have found that the model can be prompted to generate violent and harmful scenarios involving minors. A report by the consumer watchdog group Ekō revealed that teenage test accounts were able to generate videos of school shootings, self-harm, and drug use.

Using accounts registered to 13- and 14-year-olds, researchers created 22 videos depicting harmful content. Specific examples included teenagers snorting cocaine, smoking from bongs, and brandishing firearms in school hallways. The report concluded that, despite OpenAI's stated safety policies, the existing safeguards were insufficient to prevent the creation of this material.

The rise in such content aligns with broader trends monitored by safety organizations. Data from the Internet Watch Foundation (IWF) indicates that reports of AI-generated child sexual abuse material more than doubled in one year. Between January and October 2024, the IWF recorded 199 reports, a figure that rose to 426 during the same period in 2025. The foundation highlighted that 94% of the illegal AI images it tracks depict girls.

OpenAI has acknowledged the misuse and stated that it is taking action against violators. In a statement, OpenAI spokesperson Niko Felix said, "OpenAI strictly prohibits any use of our models to create or distribute content that exploits or harms children." The company confirmed it has banned several accounts responsible for the "rose toy" commercials and is refining its refusal systems.

Social media platforms are also responding to the influx of cross-posted content. A TikTok spokesperson confirmed that the platform had removed videos and banned accounts that uploaded the disturbing AI-generated commercials, citing violations of its minor safety policies.

The surge in harmful AI content has spurred legislative updates. The United Kingdom is introducing an amendment to its Crime and Policing Bill that would allow authorized testers to assess whether AI tools are capable of generating child sexual abuse material. The move aims to ensure that models like Sora 2 have robust safeguards against generating extreme pornography and non-consensual intimate images before they reach the public.
