Armed police swarm student after AI mistakes bag of Doritos for a weapon

原始链接: https://www.dexerto.com/entertainment/armed-police-swarm-student-after-ai-mistakes-bag-of-doritos-for-a-weapon-3273512/

Taki Allen, a 16-year-old Baltimore student, was handcuffed at gunpoint after an AI gun detection system mistakenly identified a bag of Doritos as a firearm. The incident involved technology from Omnilert, which is deployed in Baltimore County Public Schools, and occurred while Allen was with friends after football practice. Police swarmed the scene in response to the AI alert, and Allen endured a frightening encounter in which he was searched and detained despite carrying no weapon. Omnilert and the school district acknowledged the "false positive" but maintained that the system "functioned as intended" by prioritizing safety and prompting human verification. Allen says he no longer feels safe, is reluctant to return to school, and is disappointed that no school official has apologized to him personally. The case highlights growing concerns about the accuracy and consequences of increasingly widespread AI surveillance, especially in sensitive settings such as schools, and raises questions about how to balance student well-being against security.

## AI mistakes Doritos for a weapon, prompting an armed police response

An AI safety system called Omnilert misidentified a bag of Doritos in a student's hand as a weapon, triggering a large-scale police response. Although Omnilert acknowledged the "false positive," the company claimed the system "functioned as intended" by prioritizing rapid human verification. The incident raises serious questions about the accuracy and potential dangers of deploying such technology in schools.

Many commenters pointed to similar false-positive cases, including one in which a student faced arrest and legal consequences after an AI misinterpreted a chat message. Concerns were raised about the potential for police responses to escalate based on flawed AI data, and about the trauma inflicted on innocent individuals.

Many worry that these systems will eventually lead to tragedy, and criticize the lack of transparency around training data and accuracy rates. Some also noted a troubling trend toward over-policing and a willingness to prioritize automated alerts over human judgment. The incident has sparked debate about accountability, the need for stricter regulation, and the risk that AI could amplify existing biases in law enforcement.

## Original article

Concerns over AI surveillance in schools are intensifying after armed officers swarmed a 16-year-old student outside Kenwood High School in Baltimore when an AI gun detection system falsely flagged a Doritos bag as a firearm.

Taki Allen was hanging out with friends after football practice on October 20 when multiple police cars suddenly pulled up.

“It was like eight cop cars that came pulling up for us,” Allen told WBAL-TV 11 News. “They started walking toward me with guns, talking about ‘Get on the ground,’ and I was like, ‘What?’”

“They made me get on my knees, put my hands behind my back, and cuff me. Then they searched me and found nothing,” he said.

Allen was handcuffed at gunpoint. Police later showed him the AI-captured image that triggered the alert. The crumpled Doritos bag in his pocket had been mistaken for a gun.

“It was mainly like, am I gonna die? Are they going to kill me?” he said. “They showed me the picture, said that looks like a gun. I said, ‘No, it’s chips.’”

## Student afraid to return to school after AI sends police after him

The AI system behind the incident is part of Omnilert’s gun detection technology, introduced in Baltimore County Public Schools last year. It scans existing surveillance footage and alerts police in real time when it detects what it believes to be a weapon.

Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

Baltimore County Public Schools echoed the company’s statement in a letter to parents, offering counseling services to students impacted by the incident.

“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”

Allen says no one from the school has reached out to him personally.

“They didn’t apologize. They just told me it was protocol,” he said. “I was expecting at least somebody to talk to me about it.”

The teen now says he no longer feels safe going to school.

“If I eat another bag of chips or drink something, I feel like they’re going to come again,” Allen said.

The case has sparked fresh debate over the reliability of AI surveillance tools and their real-world consequences, especially in schools.

This incident comes as more institutions implement AI technology. Earlier this month, Major General William ‘Hank’ Taylor, one of the top officers in the US Army, admitted to using ChatGPT to make key military decisions.

Meanwhile, the UK introduced strict age verification measures for mature content, requiring users to pass a facial scan to prove they’re over 18. This has left some adults unable to access content, such as Britain’s most tattooed man, who said the age check system told him to “remove his face” because it interpreted his tattoos as a mask.
