
AI Mistook a Bag of Chips for a Gun — and a 16-Year-Old Paid the Price [Watch the Bodycam Footage to See What Really Happened]

  • October 27, 2025
    Updated

A 16-year-old student at Kenwood High School in Baltimore County was detained at gunpoint on October 20, 2025, after an AI security system flagged his bag of Doritos as a firearm.

📌 Key Takeaways

  • False Alert: AI misread a crumpled chip bag as a gun, prompting an armed police response.
  • Bodycam Released: Footage shows officers realizing the “weapon” was only chips.
  • Vendor Identified: Local outlets name Omnilert as the system in use.
  • Policy Review: County leaders and police are reviewing procedures after the incident.


What Happened Outside Kenwood High

Police converged on the student after an AI camera alert. He was ordered to the ground and cuffed. No weapon was found once officers searched the scene and questioned him.

Body-worn video released on 26 October shows officers second-guessing the alert before confirming it was a bag of chips, not a gun.

“It was like eight cop cars… with guns, saying get on the ground.” — Taki Allen, Student

Multiple students being detained at Kenwood High School in Essex, Maryland, on Oct. 20, 2025


How The AI Flagged A Snack As A Threat

Local coverage identifies the vendor as Omnilert, whose system analyzes video feeds for firearm-like shapes. In this case, it interpreted a shiny, folded bag and the student's hand posture as a handgun.

Omnilert’s marketing materials emphasize high accuracy and low false positives, typically with human verification. The Kenwood incident shows how a single misread can escalate quickly.



Bodycam, Timeline, And The Immediate Aftermath

Officers responded around 7:20 p.m., detained the student, then cleared the scene once the error was discovered. The bag of chips was recovered nearby.

County leaders requested a formal review of the system and procedures. Police emphasized they acted on information available from the AI alert.

“Thank God it was not worse.” — Baltimore County Council Leadership


Accuracy Questions Beyond One Vendor

Independent reporting shows other school scanners can generate heavy noise. One Illinois district logged 3,248 alerts in a year with only three confirmed contraband items, a false-positive rate above 99 percent. That system was made by Evolv, a different vendor.

Separately, the FTC sanctioned Evolv in 2024 over deceptive accuracy claims, underlining wider scrutiny of AI screening promises. Again, Evolv is not the Kenwood vendor.


What Baltimore County Is Reviewing Now

Based on public statements and local reporting, officials and school leaders are examining several areas:

  • Alert Workflow: How AI alerts are verified before dispatch and what human checks exist.
  • Officer Guidance: How responding units are briefed on AI uncertainty and image ambiguity.
  • Threshold Tuning: Whether detection sensitivity or model retraining is needed at campuses.
  • Student Impact: Counseling offers and communication protocols after traumatic false alerts.

“Humans will always check what the tool produces — that’s non-negotiable.”


Why This Incident Matters For AI In Schools

AI surveillance can expand coverage and speed response, yet misclassification risks are real. Clear verification steps and conservative escalation policies reduce harm from rare but high-impact mistakes.

The Kenwood case illustrates how perception errors cascade into police tactics. Transparent audits help determine where process, model tuning, or operator training should change.


Conclusion

Baltimore County is reassessing how AI alerts feed into real-world decisions. The goal is a safer campus, supported by technology that is verified, well-tuned, and paired with careful human judgment.

The Kenwood misfire is a cautionary data point. Reliability claims must match lived outcomes, especially when seconds and assumptions shape high-stakes responses.




Khurram Hanif

Reporter, AI News

Khurram Hanif, AI Reporter at AllAboutAI.com, covers model launches, safety research, regulation, and the real-world impact of AI with fast, accurate, and sourced reporting.

He’s known for turning dense papers and public filings into plain-English explainers, quick on-the-day updates, and practical takeaways. His work includes live coverage of major announcements and concise weekly briefings that track what actually matters.

Outside of work, Khurram squads up in Call of Duty and spends downtime tinkering with PCs, testing apps, and hunting for thoughtful tech gear.

