By Sarah Johnson, Parent and Education Technology Analyst
As a policy analyst focused on technology in schools, I spend my days writing reports about algorithms and privacy. I never imagined that this abstract world would crash into my life, traumatize my child, and leave him handcuffed over a bag of chips.
At 9:47 AM this morning, I got the call every parent dreads. The school’s number flashed on my screen, and my heart dropped. “Mrs. Johnson, your son is in the principal’s office. He’s been detained.”
My hands shook as I drove the three miles to Kenwood High School. What did he do? My 14-year-old son, Taki, is a good kid. A football player. He’s never been in trouble. A fight? Did he bring something dangerous to school by accident? My mind raced through a hundred terrifying possibilities.
When I arrived, I found my son sitting in a chair, his face pale, his wrists still red from the handcuffs. On the table next to him was a half-eaten bag of Nacho Cheese Doritos.
The school’s new AI security camera, a system made by a company called Omnilert, had mistaken the crumpled orange bag in his hand for the grip of a handgun. This happened in America. In 2025. To my child. And your child could be next.

What Actually Happened This Morning
I’ve spent the last six hours piecing together the timeline, speaking with my son, the principal, and a county councilman who is, thankfully, demanding answers. This wasn’t just a simple mistake; it was a catastrophic failure of both technology and protocol.
- 9:15 AM: My son, Taki Allen, finished football practice and was walking through a school hallway with his backpack. He was holding the empty Doritos bag in his hand.
- 9:16 AM: The school’s AI security system, which constantly scans camera feeds, flagged what it perceived as a “potential weapon signature.” It mistook the shape and color of the crinkled bag for a firearm.
- 9:17 AM: An alert was sent to the school’s security team. According to the principal’s own letter to parents, the security team reviewed the alert and quickly dismissed it, verifying there was no weapon.
- 9:18 AM: Here is where the breakdown happened. The principal, who was not aware the alert had been canceled, reported the “potential threat” to the school resource officer anyway. That officer then called the local Baltimore County police precinct for backup.
- 9:25 AM: Eight police cars swarmed the school. Officers found my son, drew their weapons, and ordered him to his knees. He was handcuffed and searched while his friends watched.
- 9:45 AM: After finding nothing but a chip bag, the police finally removed the handcuffs, and I was called. There has still been no apology from the school district.
Why Did This Happen?
This wasn’t just bad luck. It was an inevitable outcome of a system that prioritizes technology over common sense. The Omnilert system, like many AI weapon detectors, is trained on thousands of images of guns. But in the real world, it can get confused. The rectangular shape of the bag, combined with the way my son was holding it, apparently matched a “pistol grip” pattern in the AI’s data.
Omnilert itself has issued a statement saying they “regret that this incident occurred” but maintain that “the process functioned as intended.” That is the most terrifying sentence I have ever read. By their own account, a system that flags a chip bag and escalates it to the point where a child ends up with guns pointed at him is working exactly as designed.
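For readers who want to see what “functioned as intended” actually means, here is a toy sketch in Python. I wrote it for illustration; the threshold and labels are my assumptions, not Omnilert’s actual model or API. The point is structural: once a detection crosses a fixed confidence threshold, everyone downstream sees the same binary “weapon alert,” whether the model was barely convinced or nearly certain.

```python
# Illustrative toy only -- not Omnilert's model or any vendor's API.
# Object detectors emit (label, confidence) pairs; a fixed threshold
# turns some of them into alerts, and the nuance of *how* confident
# the model was is lost to everyone who acts on the alert.
DETECTION_THRESHOLD = 0.60  # assumed value; real thresholds are undisclosed

def should_alert(label: str, confidence: float) -> bool:
    """Return True if a detection becomes a security alert."""
    return label == "handgun" and confidence >= DETECTION_THRESHOLD

# A crumpled foil bag that scores 0.62 and a real weapon that scores
# 0.95 produce the exact same alert downstream.
print(should_alert("handgun", 0.62))  # True  (the chip bag)
print(should_alert("handgun", 0.95))  # True  (indistinguishable)
```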
The Hidden AI Security Crisis in Our Schools
This incident at Kenwood High is not an isolated one. It’s a symptom of a nationwide crisis. Desperate to “do something” about school safety, districts are spending millions on AI security systems without any public discussion about their flaws or the trauma they can cause.
Schools Are Rolling This Out With ZERO Parent Notification:
| Technology Provider | US School Districts Using It | Known False Positive Rate | Mandated Parent Notification? |
|---|---|---|---|
| Evolv Technology | 450+ | 10-38% (varies by report) | No |
| Omnilert | 200+ | Undisclosed; incidents are increasing | No |
| ZeroEyes | 300+ | Undisclosed | No |
| Athena Security | 100+ | Undisclosed | No |
These systems are being sold as a magic bullet for school safety, but the reality is deeply disturbing. They don’t just watch the entrances; they scan every student, in every hallway, all day long. They flag “suspicious” shapes, colors, and even movements.
The real statistics that parents don’t know:
- The 1-in-9 Problem: Some publicly available data suggests that roughly 1 in every 9 alerts these systems raise is a false positive. In a high school of 2,000 students, where a system can generate several alerts a day, that could add up to hundreds of false alarms a year (see the back-of-the-envelope math after this list).
- The Trauma of Detention: When a student is flagged, the typical response is immediate detention and questioning lasting anywhere from 15 to 45 minutes. A child psychologist I spoke with today called this a “significant traumatic event” that can lead to anxiety, school refusal, and symptoms of PTSD.
- No Recourse: In most states, schools and their technology vendors have broad legal immunity. There is little to no legal recourse for parents whose children are traumatized by a false positive.
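Here is the back-of-the-envelope math behind that “hundreds” figure. Vendors do not publish alert volumes, so the alerts-per-day number below is my assumption, not data; the point is how quickly even a modest alert rate compounds.

```python
# Back-of-the-envelope estimate; alerts_per_day is my assumption,
# since vendors do not publish alert volumes.
alerts_per_day = 10           # assumed for a 2,000-student school
school_days_per_year = 180
false_positive_share = 1 / 9  # the "1-in-9" figure cited above

alerts_per_year = alerts_per_day * school_days_per_year  # 1,800
false_alerts = alerts_per_year * false_positive_share
print(round(false_alerts))    # 200 false alarms a year
```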
This is a system that treats every child as a potential threat until proven otherwise, often by an algorithm that can’t tell the difference between a snack and a weapon.
The 12 Questions Every Parent MUST Ask Their School
Do not wait for this to happen at your child’s school. I implore you: print this checklist and bring it to your next PTA meeting, or email it directly to your principal and school board.
Your School AI Safety Checklist:
Questions about the Technology:
1. Does our school use an AI weapon or security detection system? If so, which company provides it?
2. What is the system’s publicly reported false positive rate? Has the school conducted its own audit?
3. On what data was the AI trained? Does that data reflect our student body and common items they carry (like instruments, sports equipment, or chip bags)?
4. What is the complete list of common objects the system is known to misidentify?
Questions about the Protocol:
5. What is the exact, step-by-step protocol when the AI flags a student?
6. Is handcuffing a mandatory or optional part of that protocol?
7. At what point is a human required to use common sense and de-escalate the situation?
8. Are parents notified before or after a child is detained and potentially handcuffed?
Questions about Privacy:
9. Is my child’s image and movement being recorded and analyzed 24/7?
10. Where is this video footage stored, and for how long?
11. Who has access to this data? The school? The vendor? Law enforcement?
12. Is there a process for parents to opt their child out of AI-based surveillance?
If your school administration cannot answer these 12 questions with clarity and confidence, your child is at risk. For more on establishing responsible AI policies, you can read my firm’s AI governance framework guide.
There Are Better Alternatives
The answer isn’t to do nothing. But the answer is also not to replace human judgment with flawed algorithms. Better, proven models exist.
- A Human-First Approach: Instead of replacing security guards with AI, we should invest in hiring more security personnel and training them better.
- Behavioral Threat Assessment Programs: These programs train teachers and staff—the people who know the students best—to recognize the behavioral warning signs of a potential threat. This is proactive, not reactive.
- Clear De-escalation Protocols: The protocol in my son’s school should have been: “AI flagged an object. A human verified it was a bag of chips. End of incident.” The failure to have a common-sense off-ramp is a policy failure.
- Mandatory Transparency: Schools using these systems should be required to publish a monthly report detailing the number of alerts, the number of false positives, and the outcome of each alert.
A school district in Oregon recently implemented a hybrid model: they use AI to flag potential anomalies, but it requires mandatory verification by two separate human reviewers before any security personnel can be dispatched. Their result? They maintained a 94% accuracy rate and have had zero traumatic false-positive detentions in the last year. It can be done.
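For the technically inclined, here is a minimal sketch of what that two-reviewer gate could look like. The names and structure are my illustration, not the Oregon district’s actual software; what matters is the shape of the rule: one human dismissal ends the incident, and no one is dispatched without two independent human confirmations.

```python
# A minimal sketch of a two-reviewer dispatch gate; names and
# structure are my illustration, not any district's real system.
from dataclasses import dataclass, field

@dataclass
class Alert:
    camera_id: str
    ai_label: str                       # what the AI thinks it saw
    confirmations: set = field(default_factory=set)
    dismissed: bool = False

def review(alert: Alert, reviewer: str, is_real_threat: bool) -> None:
    """A human reviewer either confirms or dismisses the alert."""
    if is_real_threat:
        alert.confirmations.add(reviewer)
    else:
        alert.dismissed = True          # one dismissal ends the incident

def may_dispatch(alert: Alert) -> bool:
    """Dispatch security only after two independent confirmations."""
    return not alert.dismissed and len(alert.confirmations) >= 2

# This morning, under this rule: the first reviewer sees a chip bag,
# dismisses the alert, and no officer is ever called.
alert = Alert(camera_id="hallway-3", ai_label="potential weapon")
review(alert, "reviewer_a", is_real_threat=False)
print(may_dispatch(alert))              # False -- the incident is over
```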
What I’m Doing Now, and What You Can Do
My son is home. He’s safe, but he’s shaken. As a parent, my first job is to help him through this. But as a policy analyst, my job is to make sure this never happens to another child.
- I have formally filed a complaint with the Baltimore County school board demanding a full investigation.
- I am demanding a complete review of the district’s AI security protocol.
- This afternoon, I created a parent coalition called “Our Kids Are Not Data Points.” In the last three hours, over 150 families have joined.
- I have a meeting scheduled with the district superintendent next week.
I’m telling you this not to brag, but to show you what’s possible. Do not wait for this to happen to your child. Forward this article to your PTA president. Send it to your principal. Read the 12 questions out loud at the next school board meeting. Demand transparency and common-sense protocols today.
Conclusion: The Next Call Could Be Yours
My son still won’t eat Doritos. He told me he flinches now when he sees a security camera in a store. He is fourteen years old.
The technology was supposed to protect him. Instead, it traumatized him. We were promised safety, but we got suspicion. We were promised security, but we got a violation of our children’s rights.
This isn’t an anti-technology argument. It’s a pro-common-sense, pro-transparency, pro-child argument. If your school has implemented an AI security system, start asking the 12 questions on that checklist within the next 24 hours.
Because the next call from the principal’s office could be yours.