The Pentagon Feuding with an AI Company Is a Bad Sign

Original link: https://foreignpolicy.com/2026/02/25/anthropic-pentagon-feud-ai/

In July 2025, the safety-focused AI start-up Anthropic won a $200 million U.S. Defense Department contract to provide its "Claude" AI for classified military systems. The deal, intended to boost Anthropic ahead of its public offering, quickly sparked conflict. Anthropic voiced concerns that the technology could be used in lethal autonomous weapons, a use prohibited by its guidelines; this stance contrasts with companies such as Google and OpenAI, which offer more permissive deployment terms. Tensions escalated after Claude was used in a military operation, triggering internal debate and prompting Defense Secretary Hegseth to demand unrestricted access to the AI and to threaten to designate Anthropic a "supply chain risk" if it refused. The episode reflects a broader power struggle with the Trump administration, which distrusts Anthropic's supposed "woke AI" agenda and prioritizes military control over corporate restrictions. The dispute highlights a key question: who should regulate powerful frontier technologies? The administration's weakening of internal Pentagon safeguards and accountability raises concerns about responsible AI deployment, while relying solely on private companies is not a viable alternative. Experts argue that Congress must step in to establish clear rules and oversight for military AI applications.

## Pentagon-AI Company Dispute: Summary

The dispute between the Pentagon and AI company Anthropic has raised concerns about the ethical use of AI. Anthropic won a $200 million contract but is resisting pressure to renegotiate its terms, particularly those restricting the military from using its AI model, Claude, for activities such as weapons development, violence, and mass surveillance.

Commentators have expressed skepticism about Anthropic's professed commitment to AI safety, with some suspecting a hidden agenda or a larger political rift behind it. Many point to the long-standing partnership between the military and intelligence community and Silicon Valley, arguing that AI development has *always* been dual-use.

A key point of contention is whether the military should be bound by the same usage guidelines as other users, a core principle of the original contract. There are concerns that the Pentagon is trying to bypass these safeguards, potentially enabling problematic surveillance capabilities. The situation underscores the broader debate over the responsible development and deployment of AI, particularly regarding its potentially lethal applications.

## Original Article

In July 2025, San Francisco-based AI start-up Anthropic signed a $200 million contract with the Pentagon to provide the military with frontier AI technology. Under the agreement, Anthropic’s Claude would be deployed within the military’s classified systems, where it was viewed as a state-of-the-art platform. The deal represented a growing push by the company to court national security business as it readied itself for a public offering. Executives were bullish about the partnership, announcing that the award “opens a new chapter” for the firm.

Soon, however, disagreements emerged over the military’s future use of Anthropic’s systems. Company officials grew concerned that the technology could eventually be used to carry out lethal autonomous operations. The Pentagon pushed back, arguing that decisions about the models were not Anthropic’s to make and should instead be left to the military, as with any other government-acquired technology. Anthropic’s stance differed from other companies, such as Google, Elon Musk’s xAI, and OpenAI, which had “agreed in principle” to allow their models to be deployed for any purpose allowed by law. By contrast, Anthropic had built its brand around promoting AI safety, emphasizing red lines it said it wouldn’t cross. Its usage guidelines contain strict limitations that prohibit Claude from facilitating violence, developing or designing weapons, or conducting mass surveillance.

After months of tension, things reached a boiling point in January. Following the U.S. military’s operation to capture former Venezuelan President Nicolás Maduro, an Anthropic employee raised concerns with Palantir about how Claude was used in the operation. (Palantir supplies the underlying platform that integrates Anthropic’s model.) Palantir contacted the Pentagon, expressing alarm that Anthropic might disapprove of its technology being used in similar future missions. The matter came to the attention of Defense Secretary Pete Hegseth, who reacted angrily.

Days later, Hegseth issued a memo directing AI companies to remove restrictions on their technology. The memo prompted Anthropic to renegotiate its contract; company officials requested that the Pentagon impose limits on how it used Anthropic’s AI tools. This action caused a further backlash from the Pentagon. Axios reported that Hegseth was “close” to ending the partnership with Anthropic and considered designating the firm a “supply chain risk,” a penalty normally reserved for foreign adversaries. It would mean that any firm wanting to do business with the military would have to cut ties with Anthropic. Said Hegseth at a speech in Texas, “We will not employ AI models that won’t allow you to fight wars.”

Given the circumstances, the Trump administration’s forceful response was hardly unexpected. From the get-go, top aides were distrustful of Anthropic. Months earlier, White House AI czar David Sacks had singled out the firm, accusing it of pursuing an agenda “to backdoor Woke AI and other AI regulations through Blue states like California.” In September of last year, Anthropic incensed Trump officials when it refused requests from federal law enforcement agencies to allow its AI tools to be used to surveil U.S. citizens. (In response, one government official described Anthropic’s stance as akin to “making a moral judgment about how law enforcement agencies do their jobs.”) At one point, the venture firm 1789 Capital, where one of U.S. President Donald Trump’s sons is a partner, considered making a nine-figure investment in Anthropic but pulled the plug, “citing ideological reasons,” such as the firm’s “history of criticizing President Trump.”

Now, Hegseth has reportedly given the company a deadline of Friday evening to allow the Pentagon unfettered access to its AI model, or else face harsh penalties. (Hegseth has simultaneously threatened to invoke the Defense Production Act to compel Anthropic to give the military access to Claude, or to designate the company a supply chain risk, which would bar the Pentagon from using Anthropic’s products.) The Trump administration sees the dispute as a broader struggle over its power and authority—whether a company can impose limits on how its products are used, or whether the government has the right to deploy those technologies as it sees fit, particularly for national security purposes. But in reality, that argument is a distraction. The actual issue at stake—who should regulate frontier technologies, and how—is far more serious.


For decades, the influence of corporations in international politics has grown at the expense of governments. Today, businesses are central to geopolitical competition and modern warfare; decisions made by CEOs influence a nation’s ability to project power and wage war. Militaries are no longer primarily responsible for defining the modern battlefield and tasking industry to develop technological solutions. Instead, private companies increasingly shape perceptions of the contemporary battlespace, offering militaries digital tools and products that promise strategic advantages.

This wasn’t always the case, however. In the “state-centric postwar world” of the 1950s and 1960s, companies were generally subordinate to state authority. They relied on governments for security and legal order while exercising limited autonomy to pursue their own market objectives. But the 1970s marked a turning point, with the collapse of the Bretton Woods system and the loosening of capital controls, alongside breakthroughs in information and communications technologies. These factors paved the way for corporations to globalize. What resulted was a power shift in favor of private actors and away from nation-states. Now, firms like Microsoft independently decide whether their technology should support Israel’s mass surveillance campaign against Palestinians in Gaza. Musk determines whether to shut down Starlink services used by Ukrainian soldiers against Russian military forces and whether to keep Starlink services connected in Iran (yes) or in Uganda (no). This dynamic has given rise to a core tension: When the interests of private firms clash with the government’s objectives, whose preferences should prevail?

But Trump’s dispute with Anthropic involves additional, even more complicated stakes. At the root of Anthropic’s claim is the belief that the Trump White House is an unreliable custodian of AI military and surveillance technologies, and that the firm must impose independent guardrails to prevent the Pentagon and other agencies from potential misuse. Does Anthropic have a point?

After a year in office, the administration’s conduct has set off warning bells. Domestically, a growing body of evidence shows that agencies like Immigration and Customs Enforcement (ICE) are operating at the edges of the law in their use of AI surveillance and related tools. In Minnesota, for example, ICE agents have relied on two facial recognition programs, developed by Clearview AI and NEC’s Mobile Fortify, to track individuals and intercept suspects. ICE’s largest contractor, Geo Group, won more than $800 million worth of contracts in 2025 alone to mine data from commercial databases and physically surveil people who are suspected to be undocumented immigrants. (The company’s core business is operating private prisons, but it has shifted toward surveillance services to capitalize on Trump’s deportation push.) Meanwhile, Department of Homeland Security agents make use of a Palantir-run database that combines government and commercial data to identify “real-time locations” for individuals of interest. Said Nathan Freed Wessler, a lawyer with the ACLU, “The conglomeration of all these technologies together is giving the government unprecedented abilities.”

In the military arena, the Pentagon is making a growing push to incorporate AI technologies into its arsenal. From autonomous drones to AI targeting systems, there is a growing recognition that the future of war will be swift, automated, and lethal—increasingly dependent on AI tools for battlefield supremacy. The administration has signaled its intent to double down on the use of AI technology; in Hegseth’s words, to become an “‘AI-first’ warfighting force across all components, from front to back.” What does this mean in practice? Conflicts in Gaza and Ukraine show the devastating potential of these systems.

Israel’s use of AI targeting technology against suspected Hamas militants is instructive. Reporting suggests that these systems have played a crucial role in the war, generating “automated recommendations” for identifying and striking targets. In the six weeks following Hamas’s Oct. 7, 2023, attacks against Israel, one of the Israel Defense Forces’ AI systems, known as Lavender, produced at least 37,000 target recommendations, with the IDF carrying out over 15,000 strikes. The system’s error rate was reportedly 10 percent, meaning that “thousands of civilians may have been misidentified as members of Hamas.” As the war dragged on, sources described these platforms as a “mass assassination factory,” generating targets at rates far beyond what was previously possible. Without question, these technologies are ushering in a new age of lethality, heightening the imperative for responsible governance.

And yet, even as Trump has pushed to accelerate America’s military AI dominance, he has deliberately weakened safeguards and internal accountability. Soon after getting narrowly confirmed as defense secretary, Hegseth summarily fired the top lawyers for the military services, known as judge advocates general (JAGs), because he felt they were part of a “soft, social-justice obsessed” military that had atrophied in the past two decades. JAG officers function as a critical accountability check, overseeing military conduct and ensuring U.S. soldiers conform to the laws of armed conflict; removing their leadership was a major symbolic blow.

Similarly, Hegseth eliminated offices at the Pentagon charged with preventing and responding to civilian harm during combat operations. Employees staffing the Civilian Harm Mitigation and Response office as well as the Civilian Protection Center of Excellence were dismissed, leaving a major gap in assessing risks to civilians during drone strikes and related missions. He even sacked the interim director and slashed personnel at the office of operational test and evaluation, which Congress established in 1983 in response to concerns that the Pentagon was fielding weapons systems that failed to operate safely or effectively—an especially troubling development in an era of AI-driven warfare. And the weakening of accountability has seemingly spread to military operations, as with the Pentagon’s strikes against boats that Trump claims are smuggling drugs.


Returning to Anthropic, its leadership has been justifiably concerned about how its systems will be used, particularly for war and policing. CEO Dario Amodei has long maintained that AI requires strict limits to stop it from causing damage to the world. In a series of essays, Amodei has warned that deploying AI for “domestic mass surveillance and mass propaganda” is illegitimate and that AI weapons were “prone to abuse” and should not be rushed into service without proper safeguards. Not everyone is convinced that Amodei has the public’s best interests in mind. After all, Anthropic is a profit-making firm whose overriding goal, like other AI startups, is to have a successful public offering and maximize revenue. Nonetheless, Amodei’s words should be taken seriously. Even if the Trump administration showed far more adherence to legal norms and principles, there would still be huge risks to diving headlong into deploying military AI tools.

But if the Trump administration can’t be trusted to safely oversee these technologies, that isn’t an argument for turning control over to Anthropic, either. (For one, even if Anthropic sticks to its principles, the Pentagon could simply substitute a rival firm, say xAI’s Grok, which has a documented history of spreading biased, misleading, and extremist outputs.) AI’s military and surveillance applications are no longer hypothetical; their relevance on the battlefield and in policing is already evident. Accordingly, other institutional actors, particularly Congress, need to get off the sidelines. Legislators ought to play a direct role in defining the boundaries of acceptable uses of military AI and requiring transparent reporting about how the Pentagon is deploying these systems. As scholars like Alan Z. Rozenshtein write, the rules governing military AI “shouldn’t depend on the ethical commitments of whichever CEO happens to be in charge, or the political preferences of whichever defense secretary happens to be in office.” In the absence of congressional involvement, AI policy will continue to be forged through improvised deals between the executive branch and individual firms—without public accountability and with little durability beyond the next administration.

Trump officials say their dispute with Anthropic is about correcting an imbalance of power—to stop a private company from dictating how the military can use its tools. But the actual question to ask is: Can the Trump administration or tech companies be trusted to safely and responsibly develop powerful AI technologies for national security? The answer so far is no.
