**Trump Orders Anthropic’s Immediate Removal from Government Agencies**
**Severe Measures Against Domestic AI Companies as Adversaries**
**Growing Ideological Conflict Between Washington and Silicon Valley… A Turning Point for the AI Industry**
Even as the United States showcases its military prowess abroad by precision-striking the residence of Iran’s Supreme Leader Ali Khamenei (89), at home it is cracking down on a corporation that refuses to provide artificial intelligence (AI) for military use. The escalating standoff marks a peak in the ideological struggle between Washington, which is pushing the ‘militarization of AI,’ and Silicon Valley, which prioritizes ‘AI safety.’
**Anthropic, Victim of Government Retaliation**
On the 27th (local time), a day before the operation to strike Khamenei, U.S. President Donald Trump denounced Anthropic, a leading American AI company, as “leftist fanatics” on Truth Social and directed all federal agencies to immediately stop using Anthropic’s technology. Anthropic had previously clashed with the Department of Defense over the military use of AI, and Trump has now moved to expel the company for opposing AI militarization. He stated, “The fate of our country will be decided by us, not by an out-of-control radical left AI company run by people who have no understanding of reality.”
Following Trump’s directive, Secretary of Defense Pete Hegseth formally designated Anthropic a “supply chain risk to national security.” It is the first time the U.S. government has applied that label to a domestic firm rather than to one from an adversary nation such as China or Russia. The designation bars Anthropic from any commercial transactions with the thousands of contractors that work with the Department of Defense, an attempt to isolate and incapacitate the company for defying the government. The New York Times remarked that “this dispute’s timing is crucial,” noting that it comes as the U.S. government decides what role AI should play in war planning and weapons development.
Last July, Anthropic secured a $200 million contract to supply AI technology to the Department of Defense, becoming the first AI company integrated into the department’s Classified Network. But the relationship soured when Anthropic insisted that its AI not be used for domestic surveillance or autonomous weapons development. Secretary Hegseth issued an ultimatum, expiring at 5 PM on the 27th, demanding that these conditions be dropped; Anthropic said it “could not in good conscience” comply. With Trump’s directive, a comprehensive expulsion of Anthropic from government agencies is now under way.
Conceding that Anthropic’s AI is embedded throughout U.S. government systems, President Trump proposed a six-month phase-out period, warning that failure to cooperate would bring severe civil and criminal liability. The offer is widely read as an ultimatum: retract the conditions within six months and align with the government’s interests. Anthropic responded with a formal statement that no threat or punishment would change its position, denouncing the supply chain risk designation as unjust and vowing to challenge it in court.
**Forgotten AI Safety… Silicon Valley Firms Under Pressure?**
Silicon Valley has pushed back hard against the government’s actions. Online communities are calling for a boycott of OpenAI, which kept its government contracts, and urging users to switch to Anthropic’s “Claude” instead. Claude did in fact top the free app rankings on the U.S. Apple App Store after Trump’s order.
With Anthropic facing full expulsion, OpenAI announced on the 27th that its AI would be integrated into the Department of Defense’s Classified Network. OpenAI CEO Sam Altman stated, “We adhere to principles opposing domestic mass surveillance and the use of AI in autonomous weapon systems, and the Department of Defense has agreed to these limitations.” The company promised to put technical safeguards in place and deploy personnel to ensure the models function as intended.
The Department of Defense has not explained why OpenAI was accepted while Anthropic was not. In Silicon Valley’s AI sector, the prevailing view is that the difference lies in the contracts: Anthropic formally demanded a contractual prohibition on using its AI for surveillance and weaponry, while OpenAI settled for staffing oversight without such a clause. Because oversight without contractual force may be impossible to enforce, critics charge that OpenAI is tacitly cooperating with the military use of AI. Indeed, more than 50 OpenAI and 175 Google employees recently issued a joint letter urging their management to reject the Department of Defense’s demands.
The AI industry sees the episode as a watershed. With a clear precedent now set for government retaliation against AI companies that do not fall in line, the AI safety concerns that drew so much attention in the technology’s early days may be sidelined. The New York Times observed that “the Ukraine conflict heralded the drone warfare era, making the future of autonomous weaponry a reality,” noting that as AI capabilities improve, the incentive to fold AI into military applications will only grow.
