
Despite Trump’s ban, the US uses Anthropic AI in strikes on Iran.

The Wall Street Journal (WSJ) and other US media reported on the 1st (local time) that the United States recently used the AI model “Claude” in its airstrike operations in Iran.

This came just hours after US President Donald Trump ordered all federal agencies to stop using technology from Anthropic, the developer of Claude.

The report highlights how deeply AI tools like Claude are embedded in military operations, which may also explain why President Trump opted for a six-month phased suspension rather than an immediate cutoff.

WSJ confirmed that several commands worldwide, including the US Central Command (CENTCOM), are using Anthropic’s Claude.

CENTCOM reportedly uses Claude for intelligence assessments, target identification, and battlefield simulations, despite heightened tensions between Anthropic and the Department of Defense.

Claude is currently the only AI model deployed on US military classified systems; it was used in January during the operation to arrest Venezuelan President Nicolás Maduro.

However, the use of Claude has been a point of contention between the Department of Defense and Anthropic.

The Department of Defense has pushed for unrestricted military use of AI, while Anthropic has maintained that its technology must not be used for mass surveillance or fully autonomous lethal weapons.

Consequently, President Trump instructed federal agencies to cease using Anthropic technology.

On February 27th, he labeled Anthropic a “radical leftist woke company,” charging that its selfishness endangered American lives and jeopardized military and national security.

Nonetheless, he announced a six-month phased suspension period, given the ongoing use of Anthropic products by the Department of Defense.

Meanwhile, OpenAI, the Anthropic competitor that filled the void left by Claude’s withdrawal, said its contract with the Department of Defense includes stronger safety measures than Anthropic’s did.

While Anthropic required only that its technology not be used for large-scale domestic surveillance or autonomous weapons, OpenAI said its contract also rules out uses such as social credit scoring and other high-risk automated decisions.

OpenAI emphasized that its model is deployed to the Department of Defense as a cloud-based service rather than in a device-bound “edge” form, allowing security-cleared personnel to continuously monitor compliance with safety requirements.

The company said it did not know why Anthropic had failed to reach an agreement with the Department of Defense, and expressed hope that other AI companies would consider following its contract model.

Previously, OpenAI CEO Sam Altman had voiced support for Anthropic during its conflict with the Department of Defense, assuring employees that OpenAI would uphold similar principles and create a negotiable path that other AI firms could follow.

On X (formerly Twitter), Altman acknowledged that the negotiations were rushed and did not look good, but said that if OpenAI’s judgment proved correct and eased tensions between the Department of Defense and the industry, the company would be seen as geniuses, and as one that endured much pain on the industry’s behalf.

Meanwhile, Claude’s popularity outside the US government has grown.

After the Trump administration’s withdrawal decision, Claude surpassed ChatGPT to rank first among free apps in the US Apple App Store.

An Anthropic spokesperson told CNBC that new sign-ups this week hit a record high, with free users up more than 60% since January and paid subscriptions doubling since the beginning of the year.
