
13-Year-Old Girl Expelled After Defending Herself Against Boys Who Shared AI-Generated Nude Images

  • December 22, 2025 (Updated)

A Louisiana middle school case shows how AI-generated sexual deepfakes can spread fast, while school discipline and reporting systems struggle to keep up.

📌 Key Takeaways

  • AI-generated nude deepfakes circulated at a Louisiana middle school, targeting multiple students.
  • A 13-year-old victim was expelled after a bus fight linked to the images.
  • Investigators later charged two boys under a new state law aimed at AI-made explicit images.
  • The case exposes gaps in school policies, training, and evidence collection on disappearing apps.


What Happened In Thibodaux, And Why It Escalated

At a middle school in Thibodaux, Louisiana, students circulated AI-generated nude images of classmates, mostly shared through social apps where posts can vanish quickly.

Multiple girls sought help early, but the images were difficult to locate on platforms designed for disappearing messages, and school staff initially struggled to confirm what was being shared.

Later that day, the situation boiled over on a school bus when a classmate was seen showing the manipulated images, according to accounts referenced in a disciplinary process and follow-up reporting.

The 13-year-old girl, described as having no prior discipline record, was removed from school and sent to an alternative setting for weeks after the fight. At the same time, questions lingered about how the alleged sharing and creation were handled inside the school.

“Kids lie a lot.” — Danielle Coriell, Principal


The Core Problem: AI Makes Harm Easy, Proof Hard

This case is not only about bullying; it is also about how AI tools lower the barrier to creating realistic sexual deepfakes from ordinary photos and pushing them into group chats at speed.

Schools often investigate with the same playbook they used for older forms of cyberbullying, but deepfakes add two complications: the content can look convincing, and the evidence may disappear before adults see it.

In this situation, officials and families described hitting dead ends because the material was spreading on apps that erase content quickly, which can leave victims feeling dismissed or disbelieved.

The district’s broader posture matters too. Reporting indicates the school system was still developing AI guidance, and cyberbullying training materials were not built for today’s deepfake-driven reality.

“Just kind of burying their heads in the sand, hoping that this isn’t happening.” — Sameer Hinduja, Co-Director, Cyberbullying Research Center


Charges, Laws, And The Policy Catch-Up Race

Authorities later charged two boys with multiple counts related to sharing AI-created explicit images, using a Louisiana law designed to address this exact pattern of harm.

That statute targets the malicious distribution or sale of AI-generated explicit imagery of another person, reflecting a broader trend of states creating new rules for generative deepfakes.

Separate reporting on the Louisiana case also points to lawmakers discussing harsher penalties, including proposals to elevate certain conduct from misdemeanors to felonies, as deepfake incidents become more visible.

Beyond Louisiana, national reporting and advocacy groups have warned that schools may face more cases like this as AI tooling spreads, with rising volumes of AI-generated sexual abuse imagery being reported to major tip lines.


How Schools And Parents Can Respond In The First 24 Hours

The fastest wins come from treating it as a safety incident, not a rumor, and documenting the spread without re-sharing the content.

  • Stop: Do not forward, joke about, or “show someone” for confirmation.
  • Huddle: Bring in a trusted adult, school lead, or counselor immediately.
  • Inform: Report the offending posts and accounts through the app’s built-in reporting tools.
  • Evidence: Capture who shared what, when, and where, without downloading explicit content.
  • Limit: Reduce social exposure while adults coordinate subsequent actions and support.
  • Direct: Connect the student to mental health support and a clear school safety plan.

After the first response, schools should separate involved students, preserve device-level evidence through proper channels, and ensure the victim is not punished for reporting or reacting in distress.

Parents should ask for a written timeline, what the school is doing to stop the spread, and what supports are being offered, then escalate to district leadership or law enforcement if the school’s response stalls.


Conclusion

This Louisiana case highlights a harsh reality: AI deepfakes can create a crisis in hours, while institutional response can lag, especially when evidence is fleeting and adults are unsure what they’re looking for.

The fix is not just stricter laws; it’s clearer school protocols, faster reporting pathways, and practical playbooks that protect victims first, before discipline decisions compound the harm.


For the latest AI news, visit our site.


If you liked this article, be sure to follow us on X/Twitter and LinkedIn for more exclusive content.


Khurram Hanif

Reporter, AI News

Khurram Hanif, AI Reporter at AllAboutAI.com, covers model launches, safety research, regulation, and the real-world impact of AI with fast, accurate, and sourced reporting.

He’s known for turning dense papers and public filings into plain-English explainers, quick on-the-day updates, and practical takeaways. His work includes live coverage of major announcements and concise weekly briefings that track what actually matters.

Outside of work, Khurram squads up in Call of Duty and spends downtime tinkering with PCs, testing apps, and hunting for thoughtful tech gear.

Personal Quote

“Chase the facts, cut the noise, explain what counts.”

Highlights

  • Covers model releases, safety notes, and policy moves
  • Turns research papers into clear, actionable explainers
  • Publishes a weekly AI briefing for busy readers
