Microsoft confirmed Thursday it sold AI and cloud tech to the Israeli military during the Gaza conflict, but insists its tools weren’t used to target civilians. The revelation—buried in an unsigned blog post—marks the company’s first public admission of its wartime role, sparking debate over tech’s expanding military ties. 🌍💻
While Microsoft claims its AI helped "save lives" by locating hostages, critics argue such systems risk deadly errors. Human rights groups warn AI-driven targeting could lead to civilian casualties, comparing it to "letting algorithms play Call of Duty IRL." 🎮⚠️
The tech giant admitted it lacks visibility into how clients use its software on private servers—a loophole experts say makes accountability nearly impossible. "It’s like selling someone a knife but not asking if they’ll chop veggies or stab people," said Emelia Probasco of Georgetown University. 🔪🤷‍♂️
Employee activists aren’t buying Microsoft’s defense. A group called No Azure for Apartheid demanded full transparency, accusing the company of "PR whitewashing" amid Gaza’s humanitarian crisis. Former worker Hossam Nasr, fired after organizing a Gaza vigil, called the statement "a corporate smoke screen." 🕯️🚨
As Google, Amazon, and Palantir also supply tech to militaries, the controversy highlights a new era of Silicon Valley warfare—where lines between innovation and ethics blur faster than a ChatGPT response. 🤖⚖️
Reference(s):
"Microsoft says it provided AI to Israeli military, denies use for kill" — cgtn.com