Israel is reportedly using AI to guide its war in Gaza, and treating the AI's decisions almost like gospel. In fact, one of the AI systems in use is literally called "Gospel."
A major investigation published last month by the Israeli news outlet +972 Magazine found that Israel has relied on AI to decide whom to target for killing, especially during the early stages of the war, with surprisingly little human involvement. The investigation builds on previous revelations by the same outlet, which described three AI systems working in tandem.
"Gospel" marks buildings said to be used by Hamas militants. "Lavender," which is trained on data about known militants, combs through surveillance data on nearly everyone in Gaza, from photos to phone contacts, to rate how likely each person is to be a militant; those with higher ratings are placed on a kill list. And "Where's Daddy?" tracks those targets and alerts the military when they return to their family homes because, Israeli intelligence officers told +972, it is easier to bomb them there than in a protected military building.
The result? According to Israeli intelligence officials interviewed by +972, approximately 37,000 Palestinians were marked for assassination, and thousands of women and children have been killed as collateral damage because of the AI's decisions. As +972 writes, "Lavender has played a central role in the unprecedented bombing of Palestinians" that began shortly after Hamas's deadly attack on Israeli civilians on October 7.
The use of AI may partly explain the war's high death toll (at least 34,735 to date), which has drawn international criticism of Israel and even accusations of genocide before the International Court of Justice.
While there are still "humans in the loop" (the technical term for the people who review the AI's recommendations and approve or reject them), Israeli soldiers told +972 that they essentially treated the AI's output "as if it were a human decision," sometimes devoting only "20 seconds" to looking over a target before bombing, and that army leadership encouraged them to automatically approve Lavender's kill lists a couple of weeks into the war. According to +972, this was "despite knowing that the system makes what are regarded as 'errors' in approximately 10 percent of cases."
The Israeli military denies using AI to select human targets, saying instead that it has a "database intended to cross-reference intelligence sources." But UN Secretary-General António Guterres said he was "deeply troubled" by the reports, and White House national security spokesperson John Kirby said the US was looking into the matter.
What should the rest of us think about the role of AI in Gaza?
AI proponents often say that the technology is neutral ("it's just a tool") or even claim that it will make war more humane ("it will make targeting more precise"). But Israel's reported use of military AI arguably points to just the opposite.
"Very often these weapons are not used in such a precise way," Elke Schwarz, a political theorist who studies the ethics of military AI at Queen Mary University of London, told me. "The incentive is to use these systems at a large scale and in ways that expand violence rather than contract it."
Schwarz argues that our technologies shape the way we think and what we value: we believe we are wielding the technology, but to some extent it is wielding us. Last week, I spoke to her about how military AI systems can lead to moral complacency, prompt users toward action over inaction, and push people to prioritize speed over thoughtful ethical reasoning. A transcript of our conversation, edited for length and clarity, follows.
Sigal Samuel
Are you surprised to learn that Israel is reportedly using AI systems to direct the war in Gaza?
Elke Schwarz
No, not at all. There have been reports for years that Israel very likely possesses various types of AI-enabled weapons. And they have made no secret of this pursuit: they have made it very clear that they have developed these capabilities and see themselves as one of the most advanced digital militaries in the world.
If you just look at Project Maven [the Defense Department's flagship AI project] in the US, which started as a video analysis algorithm and has now become a target recommendation system, systems like Lavender and Gospel are not surprising. We always suspected things would go in that direction, and that's exactly what has happened.
Sigal Samuel
What struck me was how minimal the role of human decision-makers seemed to be. Israeli military officers said they would spend only about "20 seconds" on each target before authorizing a bombing. Did that surprise you?
Elke Schwarz
No, that didn't surprise me either. For the past five years or so, debates within militaries have centered on the idea of accelerating the "kill chain" and using AI to increase lethality. A phrase that is often used is "shortening the sensor-to-shooter timeline," which basically means making the time from data input to weapons firing much shorter.
The appeal and allure of these AI systems is that they operate at very high speed and vast scale, suggesting a huge number of targets within a short period of time. So humans become automaton-like: they push a button and say, "Okay, this looks fine."
Defense publications have long talked about Project Convergence, another American [military] program, which is explicitly designed to reduce the sensor-to-shooter timeline from minutes to seconds. So the figure of 20 seconds is very much in line with what has been reported over the years.
Sigal Samuel
For me, this raises questions about technological determinism, the idea that technology shapes the way we think and what we value. The military scholar Christopher Coker once said: "We should choose our tools carefully, not because they are inhumane (all weapons are) but because the more we come to rely on them, the more they shape our view of the world."
You wrote something reminiscent of that in a 2021 paper: "When AI and human reasoning form an ecosystem, the possibilities for human control are limited." What did you mean by that? How can AI constrain human agency and reshape us as moral agents?
Elke Schwarz
In many ways. One is cognitive load. With all the data being processed, you have to trust the machine's decision. For one thing, you don't know what data has been gathered and how it has been applied in the model. But there's also a cognitive disconnect between the way the human brain processes things and the way an AI system makes its calculations. That leads to what we call "automation bias," which basically means that as humans we tend to defer to the authority of the machine, because we assume the machine is better, faster, and cognitively more powerful than we are.
The other thing is situational awareness. What data was taken in? What's in the algorithm? Does it have biases? These are all things the operator and others in the loop would need to know but most often don't, which limits their own situational awareness of the context they're supposed to oversee. If everything you know is presented to you on a screen as data points and graphics, you take it at face value, and your own sense of what's happening on the battlefield becomes very limited.
And then there's the element of speed. These AI systems work so fast that we simply don't have the [mental] resources not to take their suggestions as a prompt for action; we don't have the wherewithal to intervene with human reasoning. It's similar to the way mobile phones are designed so that users feel compelled to react: when a red dot pops up on your email, your first instinct is to click on it, not to refrain from clicking. So these systems prompt users toward action rather than inaction. And the fact is, if you're presented with a binary choice of kill or not kill, and you're in a dire situation, you're more likely to act and discharge your weapon.
Sigal Samuel
How does this relate to what the philosopher Shannon Vallor calls "moral deskilling," her term for the way technology can negatively affect our moral cultivation?
Elke Schwarz
There is an inherent tension between moral deliberation, thinking through the consequences of our actions, and the imperatives of speed and scale. Ethics is about reflection, about taking the time to ask: "Are these really the parameters we want, or is what we're doing only expanding civilian casualties?"
If you're not given the space and time to exercise the moral concepts that every military should have, and usually does have, then you become an automaton. You're basically saying: "I'm part of the machine. The moral calculus has already been made by someone else somewhere, so it's no longer my responsibility."
Sigal Samuel
This ties into something else I've been wondering about: the question of intent. In international law contexts, like the genocide case against Israel, demonstrating intent on the part of human decision-makers is key. But how should we think about intent when decision-making is outsourced to AI? If technology reshapes our cognition, does it become even harder to say who bears moral responsibility for wrongdoing in a war abetted by AI systems?
Elke Schwarz
One counterargument is that humans are always somewhere in the loop, at least because humans are making the decisions to use these AI systems. But that's not all there is to moral responsibility. Something as morally important as war has multiple nodes of responsibility, and there are many morally problematic aspects of decision-making.
And if you have a system that distributes intent, then intent can plausibly be denied at any sub-level. You can say, "This was our intent, but the AI system did something else, and that's the outcome we got." So intent becomes hard to establish, which makes accountability very difficult. And you can't interview a machine.
Sigal Samuel
AI is a general-purpose technology that can be used for a variety of purposes, sometimes beneficial and sometimes harmful. So how can we predict where AI might do more harm than good and prevent such uses?
Elke Schwarz
Any tool can be refashioned into a weapon. Even a pillow can become a weapon if you're determined enough; you can kill someone with a pillow. We don't ban all pillows. But if the currents in society trend toward using pillows for nefarious ends, and pillows are very easy to get hold of, and some people are actually designing pillows made to suffocate people, then you should ask some questions!
Doing that requires paying attention to society and its currents and trends. You can't bury your head in the sand. And at this point there are plenty of reports about how AI is being used for problematic ends.
People always say that AI will make war more ethical. The same was said of drones: because we have surveillance, we can be more precise and we don't have to drop cluster bombs or mount large-scale aerial operations. And of course there's something to that. But very often these weapons are not used in such a precise way.
They actually lower the threshold for using violence, because they make it much easier to apply. The incentive is to use these systems at a large scale, in ways that expand violence rather than contract it.
Sigal Samuel
What struck me most about the +972 investigation was that Israel's AI systems seemed not to restrain the violence but to escalate it. The Lavender system marked 37,000 Palestinians for assassination, and once the military had the technical capacity to strike at that scale, soldiers came under pressure to keep pace with it. A senior source told +972: "We were constantly being pressured: 'Bring us more targets.' They really shouted at us. We finished [killing] our targets very quickly."
Elke Schwarz
It's a kind of capitalist logic, isn't it? A conveyor-belt logic: more data, more action, more targets. And when the action in question is killing, that's a real problem.