Senator Elizabeth Warren (D-Mass.), ranking member of the Senate Banking Committee, has accused the Trump administration of attempting to “extort” Anthropic into removing critical safety restrictions from its Claude model, after the Pentagon labeled the company a supply chain risk to national security.
Warren’s statement came hours after Defense Secretary Pete Hegseth announced the designation, barring any U.S. military contractor, partner, or supplier from working with Anthropic. The move followed the collapse of negotiations over a $200 million Pentagon contract signed last summer. The Defense Department had demanded unrestricted access to Claude for “all lawful purposes,” including applications explicitly prohibited by Anthropic’s terms of service: fully autonomous lethal weapons and mass domestic surveillance without human oversight.
The senator described the Pentagon’s final offer – delivered Wednesday night – as an effort to force Anthropic to drop “common sense guardrails that protect Americans from mass surveillance and fully autonomous weapons with no human decision-makers that can kill with impunity.” She called the supply chain risk label an “extraordinary abuse of power” with serious national security consequences and demanded Hegseth testify before Congress immediately.
Sen. Ed Markey (D-Mass.) joined the criticism, calling the designation “reckless and unprecedented” and an attempt to “cripple an American firm for requesting legitimate safeguards.” Both senators urged swift congressional action to reverse the decision.
Anthropic rejected the Pentagon’s terms Thursday, stating “virtually no progress” had been made in talks and it could not “in good conscience” accept conditions allowing unrestricted military or surveillance use. The company labeled the supply chain risk designation “legally unsound” and said it sets a “dangerous precedent,” noting such labels have historically been reserved for U.S. adversaries, never publicly applied to an American company.
President Trump reacted Friday, accusing Anthropic of being run by “Leftwing nut jobs” attempting to “strong-arm the Department of War” and force compliance with their terms over constitutional obligations. He directed all federal agencies to “immediately cease” using Anthropic technology.
The dispute underscores a broader tension between the Trump administration and frontier AI developers over military applications. OpenAI, Anthropic’s chief rival, has quietly built a far larger Pentagon portfolio – exceeding $1.15 billion in contract ceiling value across multiple agencies:
- $100 million Defense Innovation Unit (DIU) pilot (mid-2024): tested GPT-4o for administrative and analytical tasks (after-action reports, intel summaries, training materials, logistics planning). Explicitly excluded weapons or targeting use.
- $200 million Chief Digital and Artificial Intelligence Office (CDAO) contract (early 2025): integrated large language models into joint all-domain command and control (JADC2) systems for real-time sensor-data processing, intelligence fusion, and commander decision support.
- $350 million National Geospatial-Intelligence Agency (NGA) and Defense Intelligence Agency (DIA) deal (late 2025): focused on GEOINT enhancement—automated analysis of satellite imagery, drone footage, and open-source video for pattern detection and rapid intel summaries.
- $500 million indefinite-delivery/indefinite-quantity (IDIQ) contract with U.S. Army Futures Command (February 17, 2026): supports experimentation, simulation, wargaming, and synthetic training. First task order ($85 million) issued for adversary-behavior modeling.
OpenAI maintains strict guardrails: no autonomous lethal weapons, no direct targeting, no real-time lethal decision-making, and a mandatory human-in-the-loop for high-consequence use. The company’s Acceptable Use Policy and enterprise terms prohibit use of its models for weapons of mass destruction, oppressive biometric surveillance, or fully autonomous killing systems.
Anthropic’s refusal to drop similar restrictions has now triggered the supply chain risk label and federal usage ban. Warren framed the administration’s response as punishment for maintaining ethical boundaries, warning it risks stifling U.S. AI innovation while adversaries like China face no comparable constraints.
The confrontation could reshape relations between the government and the AI industry, especially as the Pentagon pushes for accelerated adoption of frontier models in national security contexts.
