
According to The Washington Post, Google moved swiftly to provide AI tools to the Israeli military following Hamas's October 2023 attack. Through Project Nimbus, Google has helped the Israeli military conduct data analysis and intelligence gathering more quickly and efficiently. Concern has grown that AI-based analysis tools could be used to assess battlefield situations and identify targets.
Similarly, The Guardian reported that Microsoft has strengthened its own cooperation with the Israeli military, providing cloud computing and AI technology to support operations in Gaza, enabling rapid data processing and more precise targeting of operations.
Internal Backlash and Employee Protests
These moves by Google and Microsoft have sparked strong opposition from their employees. According to Time, employees of Google DeepMind drafted an open letter to executives, urging the company to terminate its military contracts. They emphasized ethical responsibility, warning of the potential misuse of AI technology for military purposes.
In particular, employees expressed concern that Project Nimbus could be used to aid Israeli airstrikes and military operations. Hundreds of Google employees reportedly signed a petition calling for the contract's cancellation, and the controversy over the military use of AI is now spreading beyond Google, prompting broader ethical debates across the tech industry.

Ethical Issues and Social Reactions
This controversy extends beyond internal company matters and is now affecting society as a whole. Human rights organizations warn that the military use of AI technology could violate international human rights norms. Although Google maintains that Project Nimbus is intended for administrative support rather than military purposes, the lack of transparency around the contract's terms continues to fuel controversy.
According to The Verge, Google’s cooperation with the Israeli military may extend beyond simple cloud service provision to research and development of AI-based surveillance and reconnaissance technologies. As warnings about the weaponization of AI increase, discussions regarding the social responsibility of tech companies are becoming more prominent.
Future Outlook
This situation serves as a case study of how AI technology, far from being a mere business tool, can shape international conflicts and security issues. Whether Google and Microsoft will maintain their cooperation with the Israeli military or revise their policies in response to internal backlash and public pressure remains uncertain. What is clear is that the controversy is drawing greater attention to the ethical use of AI and to the responsibilities of the companies that build it.