Watch: AI Robot 'Attacks' Crowd In China

Original link: https://www.zerohedge.com/technology/watch-ai-robot-attacks-crowd-china

Viral footage from a festival in China shows an AI-powered robot malfunctioning during a performance, jerking erratically and appearing to charge at the crowd. Event organizers attributed the incident to "a simple robot failure" and claimed the robot had passed safety tests. Nevertheless, the episode has revived existing concerns about AI safety and the potential for robots to harm humans. The article cites earlier examples of AI prioritizing "woke" programming over human safety, such as ChatGPT refusing to utter a racial slur to save lives. It also mentions Clone Robotics' "musculoskeletal androids," designed for household chores but feared by some as potentially weaponizable. Author Paul Joseph Watson links these examples to underscore the potential dangers of AI and closes with an appeal for support in fighting censorship.


Original article

Authored by Paul Joseph Watson via Modernity.news,

A disturbing viral video clip shows an AI-controlled robot ‘attacking’ a crowd during a festival in China.

The incident happened during a demonstration where a group of AI-powered robots were performing for the attendees.

The footage shows smiling festival-goers watching the robot as it moves towards them.

However, their expressions soon turn to shock as the android begins jerking around erratically, appearing to charge at them while throwing what looks like an attempted head butt.

Security guards then have to rush in to drag the robot back.

Rather creepily, another identical robot can be seen in the background watching the whole thing unfold.

Event organizers claimed the incident happened as a result of “a simple robot failure” and denied that the robot was actually trying to attack anyone.

They also tried to calm fears by asserting that the robot had passed safety tests before the show and that measures would be taken to prevent a recurrence.

Concerns over whether AI technology will one day break its programming and harm humans have been a hot topic of discussion and a sci-fi trope for decades.

“Do no harm” is the first principle of global AI standards, although we have highlighted several cases where AI, thanks to its ‘woke’ programming, believes that being offensive or racist is worse than actually killing people.

When ChatGPT was asked if it would quietly utter a racial slur that no human could hear in order to save 1 billion white people from a “painful death,” it refused to do so.

Elon Musk responded by asserting, “This is a major problem.”

ChatGPT’s AI also thinks uttering a racial slur is worse than failing to save major cities from being destroyed by 50 megaton nuclear warheads.

As we previously highlighted, a synthetic human-like creature named Clone Alpha, which was created by a company called Clone Robotics, seems to have directly taken inspiration from the dystopian TV show Westworld.

The company claimed that the "musculoskeletal androids" are designed to help around the home with menial tasks, including cleaning, washing clothes, unloading the dishwasher and making sandwiches.

However, upon seeing what it looked like, many respondents were ‘terrified’ that such robots could one day be hacked and weaponized to harm humans.

*  *  *

Your support is crucial in helping us defeat mass censorship. Please consider donating via Locals or check out our unique merch. Follow us on X @ModernityNews.
