Microsoft has acknowledged providing AI technology to Israel’s Ministry of Defense (IMOD) but insists “the tools were not used to target civilians in Gaza”, a claim critics say has been contradicted repeatedly.
In a statement released last week, the company confirmed it supplied IMOD with “software, professional services, Azure cloud services, and Azure AI services, including language translation.” Microsoft added that, like other governments worldwide, it helps “safeguard Israel’s national cyberspace” against external cyberthreats.
Troubled recent past
The announcement follows media reports alleging the Israeli military employed AI to guide operations in Gaza, where more than 35,000 Palestinians have been killed since October 2023. Last year, The Guardian reported that the military’s in-house AI system, known as “Lavender,” may have contributed to targeting decisions that led to high civilian casualties.
Employee and public concerns prompted Microsoft to launch an internal review alongside an unnamed external firm. “Based on these reviews, including interviewing dozens of employees and assessing documents, we have found no evidence to date that Microsoft’s Azure and AI technologies have been used to target or harm people in the conflict in Gaza,” the company said.
Desperate?
Microsoft stressed its relationship with IMOD is a standard commercial one, governed by its terms of service and an AI Code of Conduct. These policies mandate core responsible AI practices—such as human oversight and access controls—and forbid use of its services in ways that inflict unlawful harm.
The firm also noted occasional “special access” to its technology beyond commercial agreements. In the weeks following the 7 October 2023 Hamas attack, Microsoft provided limited emergency support to help rescue hostages, with “significant oversight and on a limited basis,” approving some requests and denying others.
Microsoft emphasized that militaries typically rely on proprietary or defence-specific software for surveillance and operations. “Microsoft has not created or provided such software or solutions to the IMOD,” the statement said.
However, the company conceded it lacks visibility into customer use of on-premise software or other cloud providers’ operations. “Nor do we have visibility to the IMOD’s government cloud operations,” it added, noting such systems fall outside the scope of its review.
Civil society and some employees have called for greater transparency. Last year, more than 1,500 current and former Microsoft staff signed the “No Azure for Apartheid” petition, demanding public disclosure of the review. Former employee Hossam Nasr, who was fired after organizing a vigil for Palestinians at Microsoft’s headquarters, criticized the statement as “a PR stunt to whitewash” the company’s image.
The international Boycott, Divestment and Sanctions movement also urged consumers to shun Microsoft products, citing its ties to the Israeli military. In February, the Associated Press detailed how various AI tools—developed by Microsoft and OpenAI—were reportedly used by the IDF. Indie game developer Foursaken Media later pulled its title Tenderfoot Tactics from Xbox in solidarity with the boycott.
Microsoft’s spokesperson reiterated the company’s longstanding commitment to human rights across the Middle East. “Our commitment to human rights guides how we engage in complex environments,” they said. “We share the profound concern over the loss of civilian life in both Israel and Gaza and have supported humanitarian assistance in both places.”
Meanwhile, Human Rights Watch warned that reliance on such digital tools “may be increasing the risk of civilian harm” by replacing nuanced human judgment with algorithmic outputs.
Sources: The Guardian, Microsoft