AI Increases The Risk Of Nuclear Annihilation

Original link: https://www.zerohedge.com/geopolitical/ai-increases-risk-nuclear-annihilation

The article discusses OpenAI's Preparedness team, which focuses on heading off catastrophic risks, particularly those of the chemical, biological, radiological, and nuclear variety. The risk of nuclear annihilation keeps rising due to the actions of powerful leaders such as Russian President Vladimir Putin, and experts warn that combining AI with nuclear technology compounds the danger. In 1983, Lieutenant Colonel Stanislav Petrov averted a potential nuclear war through careful judgment, underscoring the caution needed if machines ever replace humans in that role. As AI transforms warfighting capabilities, China's plans to apply AI to the management of warfare are at the forefront, potentially extending to oversight of all acts of war. AI, however, depends heavily on data, and data sometimes prove unreliable. Experts say that incorporating AI into nuclear command and control could multiply errors and misinformation, with catastrophic consequences. Ultimately, the greatest threat facing humanity remains nuclear war, which would inflict massive environmental and social damage and could even lead to human extinction.


Original Article

Authored by John Mac Ghlionn via The Epoch Times,

OpenAI, the company responsible for ChatGPT, recently announced the creation of a new team with a very specific task: to stop AI models from posing “catastrophic risks” to humanity.

Preparedness, the aptly titled team, will be overseen by Aleksander Madry, a machine-learning expert and Massachusetts Institute of Technology-affiliated researcher. Mr. Madry and his team will focus on various threats, most notably those of the “chemical, biological, radiological and nuclear” variety. These might seem like far-fetched threats—but they really shouldn’t.

As the United Nations reported earlier this year, the risk of countries turning to nuclear weapons is at its highest point since the Cold War. This report was published before the horrific events that occurred in Israel on Oct. 7. Nikolai Patrushev, a close ally of Vladimir Putin, recently suggested that the “destructive” policies of the United States and its allies “were increasing the risk that nuclear, chemical or biological weapons would be used,” according to Reuters.

Merge AI with the above weapons, particularly nuclear weapons, cautions Zachary Kallenborn, a research affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), and you have a recipe for unmitigated disaster.

Mr. Kallenborn has sounded the alarm, repeatedly and unapologetically, on the unholy alliance between AI and nuclear weapons. Not one to mince words, the researcher warned, “If artificial intelligences controlled nuclear weapons, all of us could be dead.”

He isn’t exaggerating. Exactly 40 years ago, as Mr. Kallenborn, a policy fellow at the Schar School of Policy and Government, described, Stanislav Petrov, a Soviet Air Defense Forces lieutenant colonel, was monitoring his country’s nuclear warning systems. All of a sudden, according to Mr. Kallenborn, “the computer concluded with the highest confidence that the United States had launched a nuclear war.” Mr. Petrov, however, was skeptical, largely because he didn’t trust the detection system itself. Moreover, the radar offered no corroborating evidence.

Thankfully, Mr. Petrov concluded that the message was a false positive and opted against taking action. Spoiler alert: The computer was completely wrong, and the Russian was completely right.

“But,” noted Mr. Kallenborn, a national security consultant, “if Petrov had been a machine, programmed to respond automatically when confidence was sufficiently high, that error would have started a nuclear war.”
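To make the stakes of that design choice concrete, here is a minimal, purely illustrative sketch in Python. Every name, threshold, and rule below is hypothetical; it simply contrasts the automated, confidence-only trigger Mr. Kallenborn describes with a Petrov-style policy that treats an uncorroborated alarm as a probable false positive:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    confidence: float     # detector's self-reported confidence, 0.0-1.0
    radar_confirms: bool  # independent corroboration from a second sensor

LAUNCH_THRESHOLD = 0.95   # hypothetical trigger level for an automated system

def automated_response(alert: Alert) -> str:
    # The failure mode Kallenborn describes: confidence alone triggers action.
    if alert.confidence >= LAUNCH_THRESHOLD:
        return "RETALIATE"
    return "STAND DOWN"

def petrov_style_response(alert: Alert) -> str:
    # Human-in-the-loop logic: high confidence without independent
    # corroboration is treated as a probable false positive.
    if alert.confidence >= LAUNCH_THRESHOLD and alert.radar_confirms:
        return "ESCALATE TO HUMAN COMMAND"
    return "STAND DOWN: suspected false positive"

# The 1983 scenario: the computer was highly confident, but ground radar
# showed nothing.
alert_1983 = Alert(confidence=0.99, radar_confirms=False)
print(automated_response(alert_1983))     # RETALIATE
print(petrov_style_response(alert_1983))  # STAND DOWN: suspected false positive
```

Fed the 1983 scenario, a high-confidence alert with no radar corroboration, the first policy retaliates and the second stands down. That contrast is the whole point: a hard-coded threshold removes exactly the judgment Mr. Petrov exercised.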

Furthermore, he suggested, there’s absolutely “no guarantee” that certain countries “won’t put AI in charge of nuclear launches,” because international law “doesn’t specify that there should always be a ‘Petrov’ guarding the button.”

“That’s something that should change, soon,” Mr. Kallenborn said.

He told me that AI is already reshaping the future of warfare.

Artificial intelligence, according to Mr. Kallenborn, “can help militaries quickly and more effectively process vast amounts of data generated by the battlefield; make the defense industrial base more effective and efficient at producing weapons at scale; and may be able to improve weapons targeting and decision-making.”

Consider China, arguably the biggest threat to the United States, and its AI-powered military applications. According to a report out of Georgetown University, in the not-so-distant future, Beijing may use AI not just to assist during wartime but to actually oversee all acts of warfare.

This should concern all readers.

Mr. Kallenborn fears that if “the launch of nuclear weapons is delegated to an autonomous system,” the weapons “could be launched in error, leading to an accidental nuclear war.”

“Adding AI into nuclear command and control,” he said, “may also lead to misleading or bad information.”

He’s right. AI depends on data, and sometimes data are wildly inaccurate.
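A toy sketch makes the point; all numbers here are hypothetical. Feed a naive sensor-fusion rule one wildly inaccurate reading and it crosses its alert threshold, while a more robust aggregation rule shrugs the outlier off:

```python
import statistics

# Hypothetical readings on a 0-1 "threat" scale: three healthy sensors
# near zero, one malfunctioning sensor pinned near its maximum.
readings = [0.02, 0.01, 0.03, 0.98]

mean_score = statistics.mean(readings)      # 0.26: dragged up by the bad sensor
median_score = statistics.median(readings)  # 0.025: robust to a single outlier

ALERT_THRESHOLD = 0.25  # hypothetical trigger level

print(f"mean   = {mean_score:.3f} -> alert: {mean_score > ALERT_THRESHOLD}")
print(f"median = {median_score:.3f} -> alert: {median_score > ALERT_THRESHOLD}")
```

One corrupted input is all it takes to trip the naive rule; scale that fragility up to nuclear early warning and the danger of bad data becomes obvious.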

Although there isn’t one particular country that keeps Mr. Kallenborn awake at night, he’s worried by “the possibility of Russian President Vladimir Putin using small nuclear weapons in the Ukraine conflict.” Even limited nuclear use “would be quite bad over the long term” because “the nuclear taboo” would be removed, thus “encouraging other states to be more cavalier with nuclear weapons usage.”

“Nuclear weapons,” according to Mr. Kallenborn, are the “biggest threat to humanity.”

“They are the only weapon in existence that can cause enough harm to truly cause human extinction,” he said.

As mentioned earlier, throwing AI into the nuclear mix appears to increase the risk of mass extinction. The warnings of Mr. Kallenborn, a well-respected researcher who has spent years studying the evolution of nuclear warfare, carry a great deal of weight.
