The U.S. Department of Defense (DoD) has formally designated AI developer Anthropic as a supply-chain risk, sparking a high-stakes legal battle and raising questions about AI ethics in military applications. This marks the first time a U.S. company – rather than a foreign firm – has received this controversial label. 🚨
What’s the Fallout?
Defense contractors must now certify they aren’t using Anthropic’s Claude AI models in Pentagon-related work. The move could cripple Anthropic’s government contracts and set a precedent for tech regulation. 💼
Clashing Visions
Tensions exploded after Anthropic refused to allow its AI to be used for mass surveillance or fully autonomous weapons. Pentagon chief Pete Hegseth reportedly called the stance "unpatriotic," while Anthropic CEO Dario Amodei accused the Trump administration of political retaliation, claiming rivals like OpenAI donated heavily to Trump’s campaign. 🗳️
Tech Titans Step In
Investors including Amazon, Google, and Nvidia are mediating talks to resolve the conflict. Meanwhile, reports reveal the military allegedly used Claude AI in last weekend’s strike on Iran – despite Trump’s order to phase out the tech. 🌐
Reference(s):
cgtn.com