The Pentagon has labelled artificial intelligence firm Anthropic as posing an “unacceptable risk” to military supply chains, defending the designation in a legal dispute with the company.
In a filing to a California federal court, US officials argued that Anthropic’s continued access to sensitive defence infrastructure could expose operations to vulnerabilities, citing concerns that AI systems are susceptible to manipulation.
The government claimed Anthropic’s Claude AI model could potentially be altered or disabled by the company if its internal policies or “red lines” were breached during military use.
The dispute comes amid growing scrutiny of AI in defence, with Anthropic recently attracting attention over reports that its technology may have been used in identifying targets during US operations involving Iran.
At the same time, the company has publicly refused to allow its systems to be used for mass surveillance in the United States or for fully autonomous lethal weapons, a stance that US officials say raises reliability concerns.
The designation as a “supply chain risk” — typically applied to firms linked to foreign adversaries such as Huawei — could effectively bar government contractors from working with Anthropic.
Anthropic has challenged the move in court, arguing the classification is unjustified.
Meanwhile, Microsoft, which both collaborates with Anthropic and supplies technology to the US military, has supported the company, warning that such restrictions could undermine the broader AI ecosystem.
The case highlights increasing tensions between government security priorities and private tech firms’ ethical boundaries, as artificial intelligence becomes more deeply integrated into military operations.
