- Reports of Israel's use of AI in its war against Hamas highlight a number of questions regarding future wars.
- Inaccuracies and lack of meaningful human oversight can lead to mistakes and tragedies.
- AI offers military advantages, but the tools to control it are not sufficiently advanced.
Artificial intelligence is playing an important role in Israel's war in Gaza, and from some perspectives, a very worrying role.
According to a recent investigative report, the Israeli military used an AI system to target thousands of Hamas operatives early in the conflict, and its hasty, imprecise targeting may have contributed to rampant destruction and thousands of civilian casualties. The IDF flatly rejects this claim.
The report offers a frightening glimpse of where warfare may be headed, experts told Business Insider, and a clear picture of how badly things can go wrong when humans take a back seat to new technologies like AI, especially in matters of life and death.
“When we talked about autonomous systems, AI, and lethality in warfare, this was the central discussion,” said Mick Ryan, a retired Australian Army major general and strategist focused on the evolution of warfare. “The decision to kill a human being is a very big decision.”
Earlier this month, a joint investigation by +972 Magazine and Local Call, citing interviews with six Israeli intelligence officers, reported that the Israel Defense Forces had been using an AI program named “Lavender” to generate suspected Hamas targets in the Gaza Strip.
The report said the IDF leaned heavily on Lavender, treating its output on whom to kill “as if it were a human decision.” Once a Palestinian was linked to Hamas and their home was identified, sources said, IDF personnel effectively rubber-stamped the machine's decision, spending only seconds on their own review.
The joint investigation found that the pace of Israeli targeting was so rapid that little effort was made to mitigate harm to nearby civilians.
Last fall, details emerged about Israel's “Gospel” program, a system reported to have increased Israel's target-generation capacity from about 50 targets a year to more than 100 a day.
When asked about the report on Lavender, the IDF referred BI to a statement posted last week by IDF spokesperson Lt. Col. Nadav Shoshani, who wrote that “the IDF does not use AI systems to select targets for attack” and that other claims “demonstrate a lack of sufficient knowledge about IDF processes.”
Shoshani characterized the system as a cross-checking database “intended to aid human analysis, not replace it.” But even so, potential risks remain.
Israel is not alone in exploring AI's potential in warfare. The work ties into a growing focus on unmanned systems, which the world has seen frequently in places like Ukraine, and in this space, fears of killer robots are no longer the stuff of science fiction.
“Just as AI is becoming more commonplace in our work and personal lives, so too is it in our wars,” Peter Singer, a future-of-warfare expert at the New America think tank, told BI. “We are living through a new industrial revolution, and just like the last one, driven by mechanization, our world is changing, for better or worse.”
AI is developing faster than the tools to control it
Experts said the reports of Israel's use of Lavender raise a number of concerns that have long been at the heart of the debate over AI in future wars.
Many countries, including the United States, Russia, and China, have made it a priority to bring AI programs into their militaries. The US military's Project Maven, which since 2017 has made significant strides in assisting forces by sifting through vast amounts of incoming data, is just one example.
However, the technology has often developed faster than governments can keep up with it.
According to Ryan, the general trend is that “technology and battlefield requirements are outpacing consideration of the legal and ethical issues surrounding the application of AI in warfare.”
In other words, things are moving too fast.
“The current system of government and bureaucratic decision-making around these things is simply not going to be able to catch up,” Ryan said, adding, “It may never be possible.”
Last November, at a United Nations conference, many governments expressed concern that new laws were needed to regulate the use of lethal autonomous programs: AI-driven machines that take part in decisions to kill humans.
However, some countries, particularly those at the forefront of developing and deploying these technologies, have been reluctant to impose new restrictions; the United States, Russia, and Israel all seemed especially unwilling to support new international law on the issue.
“A lot of militaries have said, 'Trust us, we'll be responsible,'” Paul Scharre, an expert on autonomous weapons at the Center for a New American Security, told BI. But given the lack of oversight, and reports of how some countries, such as Israel, are using AI, many people are not confident that militaries will always use new technology responsibly.
Programs like Lavender, as reported, are not science fiction, Scharre said. They are very much in line with how the world's militaries aim to use AI.
Militaries, he told BI, go through a process of collecting, analyzing, and understanding information “to determine targets for attack, whether it be people who are part of rebel networks or organizations, or military targets like tanks and artillery.”
The next step is to move all that information into a targeting plan, associate it with a specific weapon or platform, and then actually act on that plan.
That process takes time, and in Israel's case, Scharre said, there was likely a desire to generate a large number of targets quickly.
Experts have expressed concerns about the accuracy of these AI targeting programs. Israel's Lavender program reportedly obtains data from a variety of information channels, including social media and phone usage, to determine its targets.
In the +972 Magazine and Local Call report, sources said the program's 90% accuracy rate was considered acceptable. The obvious problem is the remaining 10%: given the scale of Israel's air campaign and the dramatic increase in available targets the AI provides, that amounts to a significant number of errors.
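To make that scale concrete, here is a minimal back-of-envelope sketch in Python. The 90% figure comes from the reporting above; the target counts are hypothetical, chosen purely for illustration.

```python
# Back-of-envelope: misidentifications implied by a 90% accuracy rate.
# The 90% figure is from the reporting; the target counts below are
# hypothetical, chosen only to illustrate how errors grow with scale.

ACCURACY = 0.90

for targets in (100, 1_000, 10_000):
    expected_errors = targets * (1 - ACCURACY)
    print(f"{targets:>6} targets -> ~{expected_errors:.0f} misidentified")
```

At the roughly 100-targets-a-day pace reported for the Gospel system, a 10% error rate would imply on the order of 10 misidentified targets every day.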
And AI is constantly learning, for better or worse. Each time these programs are used, they gain knowledge and experience that informs their future decisions. With a 90% accuracy rate, Ryan told BI, Lavender's machine learning could be reinforcing both its correct kill decisions and its incorrect ones. “We don't know,” he said.
Leaving wartime decision-making to AI
In future wars, AI could work alongside humans, processing vast amounts of data and suggesting potential courses of action in combat. But there are several ways such a partnership could go wrong.
The data collected may be too much for humans to process or understand. If an AI program is processing large amounts of information to create a list of potential targets, humans can quickly become overwhelmed and unable to meaningfully contribute to decision-making.
Acting too quickly and making assumptions based on the data also increases the chance of mistakes.
Ruben Stewart, the International Committee of the Red Cross' adviser on military and armed groups, and Georgia Hinds, an ICRC legal adviser, wrote about these issues in October 2023.
“One of the touted military benefits of AI is the increased tempo of decision-making it gives the user over their adversary,” they wrote, noting that “a faster tempo often poses additional risks to civilians, which is why tempo-reducing techniques, such as 'tactical patience,' are employed to reduce civilian casualties.”
In the desire to act quickly, humans can take their hands off the wheel and trust the AI with little scrutiny.
According to the +972 Magazine and Local Call report, targets chosen by the AI were typically reviewed for only about 20 seconds, often just to confirm the potential target was male, before an attack was authorized.
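Combining the reported figures gives a rough sense of how thin that human layer is. A sketch under stated assumptions: the 100-targets-a-day rate is the one reported for the Gospel system, the 20-second review time is the one reported here, and pairing the two numbers is purely illustrative.

```python
# Rough arithmetic on total human review time, for illustration only.
# 100 targets/day is the rate reported for the Gospel system; 20 seconds
# is the per-target review time reported for Lavender. Pairing the two
# is an assumption made solely to show the scale of oversight involved.

targets_per_day = 100
seconds_per_review = 20

total_minutes = targets_per_day * seconds_per_review / 60
print(f"~{total_minutes:.0f} minutes of human review per day")  # ~33 minutes
```

On those assumptions, a day's worth of lethal targeting decisions would receive about half an hour of total human judgment.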
Recent reports raise serious questions about the extent to which humans were actually “involved” in the decision-making process. Singer said this could also be an “example of a phenomenon known as 'automation bias,'” in which “humans assume that because a machine has provided an answer, it must be true.”
“So even though humans are 'in the loop,' they're not doing the job that's expected of them,” Singer added.
Last October, UN Secretary-General António Guterres and International Committee of the Red Cross President Mirjana Spoljaric jointly called for militaries to “act now to maintain human control over the use of force” in combat.
“Human control must be maintained in life and death decisions. Machines autonomously targeting humans is a moral line that should not be crossed,” they said. “Machines with the power and discretion to take human life without human intervention should be prohibited by international law.”
However, while there are risks, AI could bring significant benefits in many military applications, from helping humans process a wide range of data and sources to make informed decisions, to laying out different options for dealing with a situation.
A meaningful human-machine partnership could capture those benefits, but only if humans remain the ones in charge, retaining authority over and control of the AI.
“Humans have been using tools and machines for as long as humans have existed,” said Ryan, the retired major general. “Whether we fly airplanes or drive ships or tanks, we have been the masters of machines.”
But with many of these new autonomous systems and algorithms, militaries will not so much be using machines as “partnering with them,” he said.
Many militaries are not ready for that shift. As Ryan and Clint Hinote wrote in a commentary for War on the Rocks earlier this year, “Within the next decade, military organizations may find themselves outnumbered by unmanned systems.”
“Currently, military tactics, training, and leadership models are designed for military organizations that are primarily human, and those humans have tight control over machines,” they wrote.
“Changes in education and training that prepare humans not just to use machines but to partner with them are a necessary but difficult cultural evolution,” they wrote, and one that remains a work in progress for many militaries.