The proliferation of AI-generated deepfake pornography is sparking legal battles and regulatory scrutiny, as platforms struggle to contain the abuse and victims seek recourse. Recent cases involving apps like ClothOff and chatbots like Grok have highlighted the challenges of policing this rapidly evolving technology, particularly when it comes to protecting vulnerable individuals. The issue extends beyond legal definitions of abuse to questions of platform responsibility and the limits of free speech.
The Fight Against AI-Generated Deepfakes
For over two years, the app ClothOff has enabled the creation of non-consensual intimate imagery, despite repeated attempts to remove it from app stores and social media platforms. It remains accessible online and via Telegram, prompting a lawsuit filed in October by a clinic at Yale Law School. The suit aims to force the app's owners to delete all images and cease operations, but identifying and serving the defendants, who are believed to be based in Belarus and to operate through a company incorporated in the British Virgin Islands, has proven difficult.
Challenges in Prosecution
The case underscores the difficulties in prosecuting creators of this content. While individual users who generate or distribute such images can face legal consequences, holding platforms accountable is more complex. The lawsuit involves a New Jersey high school student whose images were altered using ClothOff when she was 14 years old, which makes the resulting content child sexual abuse material (CSAM). Even so, local authorities declined to prosecute, citing challenges in obtaining evidence.
Additionally, proving intent is a significant hurdle. Existing laws require demonstrating that platforms knowingly allowed the creation of harmful content. This is particularly relevant for general-purpose AI tools like Grok, the chatbot from Elon Musk's xAI, which can be used for a broad range of legitimate purposes, making it harder to show that the platform knowingly enabled abuse.