Microsoft Backs Anthropic's Bid To Block Pentagon's 'Supply-Chain Risk' Label

原始链接: https://www.zerohedge.com/ai/microsoft-backs-anthropics-bid-block-pentagons-supply-chain-risk-label

Microsoft is backing Anthropic's lawsuit against the Department of War after the Pentagon designated the AI company a supply-chain risk to national security. The designation stemmed from Anthropic's refusal to grant the Pentagon unrestricted access to its Claude AI models, citing concerns that the technology could be used for domestic surveillance or autonomous weapons, purposes the Pentagon denies intending. Microsoft, a Pentagon contractor that uses Anthropic's technology, argues that putting the designation into immediate effect would disrupt military operations and have negative ramifications for the broader technology sector. It is urging a temporary block on the designation to allow a more orderly transition, warning that restricting access to advanced AI could hamper U.S. warfighters. Anthropic alleges the designation is retaliatory and violates its First Amendment rights. The Pentagon declined to comment on the ongoing litigation, but a Pentagon official previously accused Anthropic of trying to dictate military operations. The Claude system is currently used for mission-critical functions such as intelligence analysis and cyber operations.


Original Article

Authored by Aldgra Fredly via The Epoch Times,

Microsoft on March 10 filed an amicus brief backing Anthropic’s lawsuit against the Department of War, seeking a court order to temporarily stop the Pentagon from labeling Anthropic as a supply-chain risk.

Anthropic filed the suit on March 9 after the Pentagon designated it a supply chain risk to national security, a label that would hinder the Pentagon and its contractors from using Anthropic’s artificial intelligence technology in their work for the U.S. military.

The designation stemmed from Anthropic’s rejection of the Pentagon’s request for unrestricted access to its Claude models over concerns that the technology could be used for mass domestic surveillance or fully autonomous weapons. The Pentagon has denied that it planned to use Claude for such purposes.

In its amicus brief filed March 10, Microsoft said it was directly affected by the Pentagon’s designation of Anthropic because it uses Anthropic’s technologies in products made available to the Pentagon.

The tech giant said that a temporary block on the designation would “enable a more orderly transition and avoid disrupting the American military’s ongoing use of advanced AI.”

Microsoft warned that U.S. warfighters could be hampered “at a critical point in time” if companies are required to immediately alter existing product and contract configurations used by the Pentagon.

It also warned that putting the Pentagon’s designation of Anthropic into immediate effect will have “broad negative ramifications for the entire technology sector and the American business community.”

Microsoft said the Pentagon gave itself a six-month period to transition services away from Anthropic’s technologies but did not provide the same transition timeline for contractors that use Anthropic products.

“Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning,” it stated.

“Should companies choose to forgo the opportunity to work with the U.S. government due to the attendant risks, the U.S. government, its missions, and the people it serves would lose access to state-of-the-art technological solutions,” Microsoft said.

The Pentagon said it does not comment on ongoing litigation.

Anthropic alleged in its lawsuit that the federal government designated the company in retaliation for its viewpoint protected under the First Amendment.

Secretary of War Pete Hegseth on Feb. 27 accused Anthropic of trying to dictate military operations by denying the Pentagon permission to use its Claude models for all lawful purposes.

“Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable,” Hegseth said in a post on X.

The Pentagon used the Claude AI system for mission-critical functions, including intelligence analysis, modeling and simulation, operational planning, and cyber operations.
