California Cracks Down on Deepfake ‘Undress’ AI Sites

  • Dave Andre, Editor
  • August 16, 2024 (Updated)

Key Takeaways

  • San Francisco is suing 16 to 18 websites for distributing non-consensual AI-generated pornography.
  • These platforms drew more than 200 million visits in the first half of 2024, showing the scale of deepfake misuse.
  • The lawsuit includes sites based in the U.S., U.K., and Estonia.
  • This case could reshape AI regulation, particularly around non-consensual imagery.
  • The focus on companies, not individuals, marks a new approach in fighting deepfake content.
  • The lawsuit highlights the urgent need for collective measures against AI misuse.

In an unprecedented legal action, the City of San Francisco has filed a sweeping lawsuit against multiple websites and applications accused of creating and distributing non-consensual AI-generated pornography.

This case marks a significant attempt to confront the growing threat posed by deepfake technology, which is increasingly being used to exploit and abuse individuals, particularly women and minors.

The lawsuit targets between 16 and 18 websites and applications that facilitate the creation of AI-generated “nudification” images, or deepfakes, depicting unsuspecting individuals. These platforms have seen explosive growth, with over 200 million visits in the first half of 2024 alone.


Users on these sites can create explicit content using AI technology, often without the knowledge or consent of the individuals depicted. Some services are offered for free initially, with additional charges for further customizations.

San Francisco City Attorney David Chiu, who is leading the lawsuit, condemned the technology as a form of “sexual abuse” rather than a legitimate innovation. He emphasized the severe impact this practice has on victims, many of whom are young women and girls.

The AI-generated images are so lifelike that they are nearly indistinguishable from real photographs, making it exceedingly difficult for victims to regain control over their own images once they have been shared online.

The marketing pitch on one of the websites states, “Imagine wasting time taking her out on dates, when you can just use (website’s name) to get her nudes.”

The Legal Battle

This lawsuit is groundbreaking in that it is believed to be the first of its kind initiated by a government entity against companies producing non-consensual AI-generated pornography.

The legal action seeks not only to shut down the targeted websites but also to impose financial penalties and halt the services provided by domain registrars, web hosts, and payment processors that enable these sites to operate.

The companies involved in the lawsuit are reportedly based in various locations, including California, New Mexico, the United Kingdom, and Estonia.

“We have to be very clear that this is not innovation — this is sexual abuse,” said Chiu.

The sites employ open-source generative AI models that have been trained on real pornography and images of child abuse, enabling them to produce explicit content in mere seconds. These models, which can be fine-tuned to generate specific content, are central to the lawsuit’s claims.

Legal experts suggest that this case could set a new precedent in regulating artificial intelligence technology, particularly concerning non-consensual intimate imagery (NCII).

While California has had anti-deepfake legislation on the books since 2019, this lawsuit will be a critical test of the effectiveness of existing laws and may catalyze more stringent regulation. The case underscores the urgent need for robust enforcement mechanisms to protect individuals from the misuse of AI.

Jennifer King, a privacy and data policy fellow at Stanford University, said, “The real novelty here is that they’re focusing on the companies that create this stuff and not individuals.”

This approach could prove more effective in curbing the widespread creation and distribution of harmful deepfake content. As the legal proceedings advance, there is growing recognition of the broader societal implications of deepfake technology.

Advocates argue that more proactive measures are needed to prevent the creation and dissemination of non-consensual AI-generated content. If successful, the lawsuit could mark a pivotal moment in addressing the darker side of AI, ensuring that technological advancement does not come at the expense of human dignity and privacy.

San Francisco’s bold legal move has brought the issue of AI-generated deepfakes into the spotlight, emphasizing the need for a collective response from lawmakers, technology companies, and society at large to combat this emerging threat.

The outcome of this case could shape the future of AI regulation and set the stage for how society manages the ethical challenges posed by rapidly advancing technologies.

For more news and trends, visit AI News on our website.
