San Francisco Cracks Down: Sues Sites Creating Deepfake Nudes of Women and Girls!
Key Takeaways:
- San Francisco has filed a first-of-its-kind lawsuit against websites that use AI to create nonconsensual nude images of women and girls.
- The suit alleges violations of state laws against fraudulent business practices, nonconsensual pornography, and the sexual abuse of children.
- Experts say the case could set a legal precedent, though many of the site operators are based outside the U.S. and may be hard to reach.
Nearly a year after AI-generated nude images of high school girls devastated a community in southern Spain, a juvenile court sentenced 15 of their classmates to a year of probation. Yet the artificial intelligence (AI) tool used to create the damaging deepfakes remains easily accessible online, promising to “undress any photo” uploaded within seconds.
Now a new effort to shut down that app and similar platforms is being pursued in California, where San Francisco this week filed a first-of-its-kind lawsuit that experts believe could set a legal precedent, although it will face many hurdles.
“The proliferation of these images has exploited a shocking number of women and girls across the globe,” said David Chiu, the elected city attorney of San Francisco, who brought the case against a group of widely visited websites tied to entities in California, New Mexico, Estonia, Serbia, the United Kingdom, and other countries.
“These images are used to bully, humiliate and threaten women and girls,” Chiu said in an interview with The Associated Press. “And the impact on the victims has been devastating on their reputation, mental health, loss of autonomy, and in some instances, causing some to become suicidal.”
The lawsuit, brought on behalf of the people of California, alleges that the services violated numerous state laws against fraudulent business practices, nonconsensual pornography, and the sexual abuse of children.
However, determining who runs these services is often difficult, complicating efforts to enforce legal actions. The apps, though not available in mainstream app stores, can still be found on the internet, raising concerns about their accessibility and misuse. Some services have claimed by email that their “CEO is based and moves throughout the USA” but have declined to provide evidence or answer further questions.
“There are a number of sites where we don’t know at this moment exactly who these operators are and where they’re operating from, but we have investigative tools and subpoena authority to dig into that,” Chiu said. “And we will certainly utilize our powers in the course of this litigation.”
These AI tools are often used to create realistic fakes that “nudify” photos of clothed women, including celebrities, without their consent. Such images have also surfaced in schools worldwide, from Australia to Beverly Hills in California, where boys have used the tools to create and circulate images of female classmates on social media.
In one of the first widely publicized cases, last September in Almendralejo, Spain, a physician who helped bring attention to the issue after her daughter was victimized expressed satisfaction with the severity of the sentence given to her daughter’s classmates.
However, Dr. Miriam Al Adib Mendiri emphasized that the responsibility lies not solely with society, education, parents, and schools, but also with the digital giants that profit from such harmful content. She applauded San Francisco’s action but stressed the need for further efforts, including from larger companies like Meta Platforms and its subsidiary WhatsApp, which was used to circulate the images in Spain.
While schools and law enforcement agencies have taken steps to punish those who create and share deepfakes, authorities have struggled to address the tools themselves. The European Union, for instance, has noted that the app used in Almendralejo “does not appear” to fall under the bloc’s new rules for online safety because it is not a big enough platform.
Organizations tracking the growth of AI-generated child sexual abuse material are closely following the San Francisco case.
The lawsuit “has the potential to set legal precedent in this area,” said Emily Slifer, director of policy at Thorn, an organization combating the sexual exploitation of children.
However, Riana Pfefferkorn, a researcher at Stanford University, noted that because many of the defendants are based outside the U.S., it will be challenging to bring them to justice.
Chiu “has an uphill battle with this case but may be able to get some of the sites taken offline if the defendants running them ignore the lawsuit,” Pfefferkorn said. That could happen if the city wins by default in their absence and obtains orders affecting domain-name registrars, web hosts, and payment processors, effectively shuttering those sites even if their owners never appear in the litigation.
This lawsuit marks a major legal challenge to the rising tide of AI-generated deepfake content, an issue gaining visibility as the technology becomes more widespread. While AI holds the potential for numerous positive applications, its misuse in creating deepfakes has raised serious ethical and legal concerns, particularly regarding privacy and the spread of misinformation.