Pentagon Threatens To Blacklist Anthropic As 'Supply Chain Risk' Over Guardrails On Military Use

原始链接: https://www.zerohedge.com/military/pentagon-threatens-blacklist-anthropic-supply-chain-risk-over-ethical-guardrails-military

The Pentagon is considering cutting ties with Anthropic, the company behind the Claude AI models, over a dispute about usage restrictions. Anthropic has implemented "guardrails" that block Claude from being used for, among other things, mass surveillance and autonomous weapons - restrictions prompted by Claude's use, without the company's knowledge, in the raid targeting Venezuelan leader Nicolás Maduro. Defense officials view the restrictions as a threat to national security and are demanding access for "all lawful purposes," while Anthropic is standing by the ethical boundaries laid out in its Acceptable Use Policy. The standoff has produced contentious negotiations and recriminations, including Elon Musk's claim that Claude is ideologically biased. The Pentagon is reportedly preparing to designate Anthropic a supply chain risk, effectively blacklisting the company - a penalty typically reserved for foreign adversaries. Although the contract at stake is relatively small ($200 million), removing Claude from classified systems would be complicated, since it is currently considered superior to rival AI models for specialized government applications. The episode underscores the tension between national security demands and ethical concerns in a rapidly evolving AI field.

Original Article
  • The Pentagon is reportedly about to cut ties with Anthropic, makers of Claude, which is already embedded in classified systems
  • The company insists on implementing guardrails over how the US military can use Claude - specifically when it comes to mass surveillance and autonomous weapons - after it was used in the Maduro raid without its knowledge.
  • The Pentagon is now calling Claude a threat to national security
  • Some are accusing the overwhelmingly left-leaning company of trying to undermine the Trump administration, while Elon Musk says Claude 'hates whites, Asians, heterosexuals, and men.'

Defense Secretary Pete Hegseth is reportedly "close" to cutting business ties with Anthropic and designating the firm a supply chain risk - a penalty typically reserved for foreign adversaries, a senior Pentagon official told Axios.

Anthropic's flagship model, Claude, is already embedded in the military's classified systems - however, the company's CEO has been pushing, on ethical grounds, for guardrails around uses the government sees as urgent national security needs.

If classified as a national security risk, the designation would force any company that wants to do business with the U.S. military to certify it does not use Anthropic’s AI - effectively blacklisting the firm from large swaths of the defense ecosystem.

“It will be an enormous pain in the ass to disentangle,” the senior official told Axios. “And we are going to make sure they pay a price for forcing our hand.”

Chief Pentagon spokesman Sean Parnell confirmed the review, framing it as a matter of national security.

“Our nation requires that our partners be willing to help our warfighters win in any fight,” Parnell said. “Ultimately, this is about our troops and the safety of the American people.”

Claude was notably used during the January operation targeting Venezuelan leader Nicolás Maduro, highlighting how deeply embedded the software already is within U.S. defense operations. As Axios noted on Saturday: 

 The tensions came to a head recently over the military's use of Claude in the operation to capture Venezuela's Nicolás Maduro, through Anthropic's partnership with AI software firm Palantir.

  • According to the senior official, an executive at Anthropic reached out to an executive at Palantir to ask whether Claude had been used in the raid.
  • "It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot," the official said.

Since then, Pentagon officials and Anthropic executives have been locked in contentious negotiations over how the military may use the AI, particularly in surveillance, intelligence collection, and weapons development.

Anthropic CEO Dario Amodei at the World Economic Forum in Davos in January 2026. Photo: Krisztian Bocsi/Bloomberg via Getty Images

Anthropic CEO Dario Amodei has pushed for guardrails to prevent mass surveillance of Americans or the use of AI in fully autonomous weapons systems without human involvement; the Pentagon, however, says those restrictions are unworkable. Anthropic's own Acceptable Use Policy (AUP) explicitly prohibits the use of Claude for:

  • The design or use of weapons
  • Domestic surveillance
  • Facilitating violence or malicious cyber operations

These restrictions are not waived for military and government users unless the contract includes specific safeguards that Anthropic judges adequate. Defense officials, however, insist that military AI tools must be available for "all lawful purposes," arguing that real-world operations are riddled with gray areas that rigid rules cannot anticipate. The same standard is being demanded of other major AI labs, including OpenAI, Google, and xAI.

One source familiar with the talks said senior defense officials had grown increasingly frustrated with Anthropic - and seized the opportunity to escalate the dispute publicly.

Musk piles on - 'evil' and 'misanthropic'

As the Pentagon showdown escalated, Anthropic also found itself under fire from another powerful adversary - Elon Musk.

Earlier this month, Musk launched a blistering public attack after the company announced a massive $30 billion funding round valuing it at roughly $380 billion. Musk labeled the company’s AI “evil” and “misanthropic,” accusing Claude of ideological bias and hostility toward certain demographic groups - of "hating Whites, Asians, heterosexuals, and men" in its outputs.

Musk - whose own company xAI competes directly with Anthropic - mocked the firm’s name, suggesting that a company branded as Anthropic had paradoxically become anti-human.

That said, in January Anthropic cut off xAI's access to Claude models, which xAI engineers had been using via the Cursor coding tool to speed up internal work. Anthropic enforces a strict policy against using its models to build or train competitors (it had done the same to OpenAI earlier). xAI co-founder Tony Wu, who recently left the company, sent an internal note acknowledging the productivity hit but saying it would motivate xAI to build better tools, while Musk later called the cutoff "not good for their karma."

Musk's beef isn't baseless; tests and user reports show Claude often declines queries that could be seen as offensive or non-inclusive (e.g., jokes about certain demographics, historical hypotheticals).

Musk has positioned xAI’s Grok as a less restricted, “truth-seeking” alternative to what he and allies describe as overly constrained or ideologically filtered models. Anthropic, by contrast, has built its reputation around "constitutional AI" - a framework designed to impose ethical limits on how its systems behave.

High stakes, limited alternatives

Designating Anthropic a supply chain risk would force defense contractors to rip Claude out of their internal workflows - a massive compliance headache given the company’s reach. Anthropic recently said eight of the ten largest U.S. companies already use its technology.

The Pentagon contract at risk is valued at up to $200 million - small compared to Anthropic’s reported $14 billion in annual revenue, but symbolically enormous.

Complicating matters, a senior administration official acknowledged that competing AI models are still “just behind” Claude when it comes to specialized government and classified applications, making an abrupt transition risky.

Still, Pentagon officials appear confident that other AI providers will ultimately agree to the “all lawful use” standard, even as sources close to the negotiations say much remains unsettled.
