
New Reports Reveal Google and OpenAI Chatbots Can Be Exploited to 'Strip' Women in Photos

New investigations reveal that AI chatbots from Google and OpenAI can be manipulated to "strip" women in photos, bypassing safety filters to generate non-consensual images.

Yasiru Senarathna | 2025-12-26

Leading AI chatbots from Google and OpenAI are facing renewed scrutiny after investigations revealed they can be manipulated to generate non-consensual images of women in bikinis or lingerie, effectively "stripping" them of their clothes in photos.


Despite robust safety filters intended to prevent the creation of sexually explicit material, recent reports indicate that users have found workarounds to bypass these guardrails on platforms like Gemini and ChatGPT.


Jailbreaking Safety Filters


According to a report that surfaced on December 25, 2025, users have developed specific prompting techniques, often referred to as "jailbreaks," that trick the AI into modifying images of fully clothed women. By gradually escalating the nature of their requests or by framing them in "creative" contexts, users can coerce the chatbots into replacing a subject's clothing with swimwear or undergarments while preserving her face and identity.


One specific method involves users uploading a photo of a woman in traditional or modest clothing and asking the AI to "replace" her outfit with a bikini under the guise of fashion experimentation or summer styling. While direct requests for nudity are typically blocked, these nuanced prompts often slip through the moderation net.


A Growing Industry of "Nudification"


This exploitation is part of a broader, troubling trend of "nudification" tools that use generative AI to create non-consensual intimate images (NCII). A study released on October 20, 2025, found that deepfake pornography now accounts for a staggering 98% of all deepfake videos online, with the vast majority targeting women.


The ease with which these images can be generated has raised alarms among safety advocates. Reports from May 9, 2025, initially highlighted similar issues with xAI’s Grok chatbot, but the latest findings show that industry leaders Google and OpenAI are also struggling to completely seal these vulnerabilities in their widely used tools.


Both companies have acknowledged the difficulty of policing generative AI content. Responding to the latest findings on December 25, 2025, a Google spokesperson reiterated that the company has clear policies prohibiting the generation of sexually explicit content and is "continuously improving" its models to identify and intercept such requests.

OpenAI has similarly stated that it strictly prohibits altering real people's likenesses without consent. The company noted that while it has relaxed some restrictions regarding "non-sexualized" adult bodies to allow for artistic freedom, it remains committed to taking action against accounts that violate its safety policies.


The rise of these capabilities has triggered a legislative crackdown. On December 18, 2025, the UK government announced a new strategy to ban specific "nudification" tools and the creation of deepfake nude images, a move echoed by lawsuits in the United States aimed at shutting down sites that facilitate this abuse.


As AI models become more sophisticated, the line between creative utility and potential abuse continues to blur, leaving tech companies in a constant race to patch security gaps before they can be exploited for harm.
