Lisa Wiese and Charlotte Langer – Gaza, Artificial Intelligence, and Kill Lists

The Israeli army’s artificial intelligence-based system called “Lavender” automates the process of identifying possible Hamas or Islamic Jihad (PIJ) fighters. But this approach risks weakening responsibility for targeting decisions and has drawn warnings from human rights NGOs.

Lisa Wiese and Charlotte Langer are research assistants and doctoral candidates at the Chair of European Law, Public International Law and Public Law at Leipzig University.

Cross-posted from Verfassungsblog


One of the greatest challenges in warfare is the identification of military targets. The Israeli army has developed an artificial intelligence-based system called “Lavender” that automates this process by sifting through enormous amounts of surveillance data and identifying possible Hamas or Islamic Jihad (PIJ) fighters based on patterns in that data. This approach promises faster and more accurate targeting; however, human rights organizations such as Human Rights Watch (HRW) and the International Committee of the Red Cross (ICRC) have warned of deficits in responsibility for violations of International Humanitarian Law (IHL), arguing that with these semi- or even fully automated systems, human officers experience a certain “routinization” which reduces “the necessity of decision making” and masks the life-and-death significance of the decision. Moreover, military commanders, who bear responsibility for faulty targeting and the resulting IHL breaches, may no longer have the capacity to supervise the algorithmic “black box” advising them.

In the following, we will examine these concerns and show how responsibility for violations of IHL remains attributable to a state that uses automated or semi-automated systems in warfare. In doing so, we will demonstrate that even though the new technological possibilities present certain challenges, existing IHL is well equipped to deal with them.

AI in Warfare — Advantages and Risks

The advantages of AI in warfare are essentially the same as in any other field. Due to its capacity to process enormous amounts of data very quickly, identify patterns in data and apply those findings to new data sets, AI promises a significant increase in speed, accuracy, and efficiency of military decision-making. Thus, AI offers advantages not only for military officials looking to identify relevant targets for attacks, but also for the protection of civilians. If programmed and used well, AI systems are capable of flagging protected civilian structures more accurately and quickly than human officers, and of planning and executing more precise strikes to reduce civilian casualties. Decreasing human involvement in decision-making may also contribute to the protection of civilians by removing the source of unintentional human bias.

However, AI systems have reached a level of sophistication and complexity that often makes it impossible for humans to understand the reasons behind their assessments, which is why these systems are often referred to as “black boxes”. This gives rise to the concern that human operators might escape responsibility by claiming that they were unable to exercise meaningful control over the machine and thus cannot be held accountable for its decisions. This deflection of legal responsibility from humans to software is what organizations like Human Rights Watch describe as a “responsibility gap”. To put it bluntly: we do not believe that such a responsibility gap exists. To alleviate concerns about AI-induced “responsibility gaps” and show how responsibility can still be assigned, this article first illustrates how such a system functions, using the example of the Israeli system “Lavender” deployed in the current Gaza war, before turning to an in-depth analysis of responsibility for its recommendations under IHL.

Automated Decision Support Tool in Targeting — “Lavender”

According to a report published by +972 Magazine and Local Call, Israel has used this system to mark tens of thousands of potential Hamas and Islamic Jihad targets for elimination in Gaza. The system was fed data about known Hamas operatives and asked to find common features among them. Such features might be membership in certain chat groups or frequently changing one’s cell phone and address. Having found those patterns, the system could then be fed new data about the general population and asked to find individuals exhibiting those common features, which presumably indicate Hamas affiliation. In essence, this approach is not very different from the procedure previously carried out by human intelligence officers, but automation makes it much faster.
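To make this two-step logic concrete, the following is a minimal, purely illustrative sketch of pattern-based flagging. Every feature name, weight, threshold, and data point here is a hypothetical assumption chosen for the example; nothing in it reflects the actual design of “Lavender”.

```python
# A purely illustrative sketch of pattern-based flagging (hypothetical features,
# thresholds, and data; not the actual design of "Lavender" or any real system).

# Step 1: "learn" which features are common among known operatives in the
# training data, here simply by measuring how often each feature occurs.
known_operatives = [
    {"flagged_chat_group": True, "frequent_phone_changes": True, "frequent_moves": False},
    {"flagged_chat_group": True, "frequent_phone_changes": True, "frequent_moves": True},
    {"flagged_chat_group": True, "frequent_phone_changes": False, "frequent_moves": True},
]

feature_weights = {
    feature: sum(person[feature] for person in known_operatives) / len(known_operatives)
    for feature in known_operatives[0]
}

# Step 2: apply the learned pattern to new surveillance data about the general
# population and flag everyone whose score exceeds an (arbitrary) threshold.
def suspicion_score(person):
    return sum(weight for feature, weight in feature_weights.items() if person[feature])

population = [
    {"name": "A", "flagged_chat_group": True, "frequent_phone_changes": True, "frequent_moves": False},
    {"name": "B", "flagged_chat_group": False, "frequent_phone_changes": False, "frequent_moves": True},
]

THRESHOLD = 1.0
flagged = [p["name"] for p in population if suspicion_score(p) >= THRESHOLD]
print(flagged)  # ['A'], because person A matches enough of the learned pattern
```

A real system would of course rely on far more data and a statistical model rather than hand-set weights, but the basic structure is the same: learn a pattern from labelled examples, then flag anyone in new data who matches it above a threshold.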

According to the testimony of six anonymous Israeli intelligence officers, all of whom served during the current war in Gaza and had first-hand experience with this system, the army relied almost entirely on Lavender for target identification in the first weeks of the war. During that time, the system flagged up to 37,000 Palestinians as suspected militants, marking them and their homes for possible airstrikes. A second AI system named “Where’s Daddy” was built specifically to look for them in their family homes rather than during military activity, because it was easier to locate the targets when they were in their private houses. According to the report, the system accepted collateral damage of 15-20 civilians for a single low-ranking Hamas or Islamic Jihad (PIJ) fighter and over 100 civilian casualties for a high-ranking commander. One source reports that the army gave sweeping approval for officers to adopt the target list generated by “Lavender” without additional examination, despite knowing that the system had an error rate of about ten percent and occasionally marked individuals with only loose connections to a militant group, or none at all. Human personnel reported that they often served only as a “rubber stamp” for the machine’s decisions, adding that they would personally devote about “20 seconds” to each target before authorizing a bombing, often confirming only that the target was male. Additionally, the sources explained that sometimes there was a substantial gap between the moment when “Where’s Daddy” alerted an officer that a target had entered their house and the bombing itself, leading to the killing of whole families without even hitting the intended target.

Constraints of International Humanitarian Law

The described practices raise many questions regarding potential International Humanitarian Law (IHL) violations, first and foremost with regard to the principle of distinction (Art. 48 AP I; see also Art. 51(1) and (2) AP I; Art. 13(1) and (2) AP II; ICRC Customary Rules 1, 7), i.e. the requirement to strictly distinguish between civilians and civilian objects on the one hand and military objectives on the other. Considering the permissible collateral damage programmed into the system, violations of the principle of proportionality (Art. 51(5)(b) AP I; ICRC Customary Rule 14) or the failure to take precautions (Art. 57, 58 AP I; ICRC Customary Rule 15) seem likely. The fact that Hamas and PIJ militants (both non-governmental organized armed groups) were targeted at their homes is especially problematic if they are not considered combatants according to Art. 43(1) AP I (for discussion of the applicable IHL norms and conflict classification see here and here). Combatant status for Hamas and PIJ fighters is controversial because different parties classify the conflict as either an international or a non-international armed conflict, and combatant status does not exist in non-international armed conflict. Israel (like many other observers) does not consider the conflict between Israel and Hamas an international one, as Hamas does not represent a State. Moreover, granting combatant status to Hamas fighters would permit Hamas, under the law of armed conflict, to attack Israel Defense Forces (IDF) soldiers. On the flipside, however, without combatant status, individuals can only lawfully be attacked if they are actively engaged in hostilities at the time (Art. 51(3) AP I), which cannot be assumed if they are staying at home to sleep.

According to Art. 91 AP I (see also Art. 3 of the 1907 Hague Convention Concerning the Laws and Customs of War on Land and ICRC Customary Rules 149, 150), a party to a conflict that violates international humanitarian law shall be liable to pay compensation. In that case, a state can be held responsible for all acts committed by persons forming part of its armed forces; by persons or entities it empowered to exercise elements of governmental authority; by persons or groups acting on the state’s instructions, or under its direction or control; and by persons or groups whose conduct the state acknowledges and adopts as its own.

Accordingly, the acts of all State organs, be they military or civilian, carried out in their official capacity are attributable to the State.

State responsibility exists in addition to the requirement to prosecute individuals for grave breaches of IHL (Customary Rule 151; Art. 51 First Geneva Convention; Art. 52 Second Geneva Convention; Art. 131 Third Geneva Convention; Art. 148 Fourth Geneva Convention). Numerous military manuals affirm individual criminal responsibility for war crimes, and it is implemented in the legislation of many states.

A State is also responsible for failure to act on the part of its organs when they are under a duty to act, such as in the case of commanders and other superior officers who are responsible for preventing and punishing war crimes (see Customary Rule 153 and Art. 2 ILC-Draft Articles on State Responsibility).

Ensuring Compliance with IHL When Using AI in Warfare

These IHL rules must be observed in warfare, regardless of how decisions are reached. Importantly, this entails that states must ensure that the tools they use — or even advanced tools to which they delegate entire decisions — conform to these rules as well.

If decisions are delegated to automated systems, there are three key points at which certain responsibilities arise: firstly, at the programming stage; secondly, at the command level, where decisions are made concerning the overall strategic use of the finished program; and thirdly, at the stage of day-to-day, ground-level use.

At the programming stage, the principles of IHL must be incorporated into the code of the AI system itself. This means that training data must be carefully selected to minimize false positives later on, that key settings and safeguards must be established to reflect the rules of IHL, and that the necessary degree of human oversight must be guaranteed to catch errors or malfunctions. The training stage is the point at which the system is fed labeled data and asked to find patterns that distinguish one group from another, such as Hamas operatives from unaffiliated persons. In the case of Lavender, according to the +972 report, “they used the term ‘Hamas operative’ loosely, and included people who were civil defence workers in the training dataset.” This may prove to be a crucial act under IHL, as the software engineers thereby “taught” the program to look for the common features not just of militant Hamas operatives, but of civilians as well.
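The effect of such loose labelling can be illustrated with a toy calculation in the style of the sketch above. Again, every feature, label, and number is a hypothetical assumption made only for illustration; the point is simply that even a single mislabelled group in the training data shifts the learned pattern towards features that civilians share.

```python
# Illustrative only: how loose labelling in the training data propagates into
# the learned pattern. All features, labels, and numbers are hypothetical.

def learn_weights(training_set):
    # Feature weight = how often the feature occurs among people labelled "operative".
    return {
        feature: sum(person[feature] for person in training_set) / len(training_set)
        for feature in training_set[0]
    }

def suspicion_score(person, weights):
    return sum(w for feature, w in weights.items() if person.get(feature))

# "emergency_chat_group" stands for an innocuous feature shared by civil defence workers.
operatives = [
    {"flagged_chat_group": True, "frequent_phone_changes": True, "emergency_chat_group": False},
    {"flagged_chat_group": True, "frequent_phone_changes": True, "emergency_chat_group": False},
]
civil_defence_worker = {"flagged_chat_group": False, "frequent_phone_changes": False,
                        "emergency_chat_group": True}

strict_weights = learn_weights(operatives)                          # worker correctly excluded
loose_weights = learn_weights(operatives + [civil_defence_worker])  # worker counted as "operative"

# A civilian whose only notable feature is membership in the emergency-response chat group:
civilian = {"flagged_chat_group": False, "frequent_phone_changes": False,
            "emergency_chat_group": True}

print(suspicion_score(civilian, strict_weights))            # 0.0, nothing links them to militancy
print(round(suspicion_score(civilian, loose_weights), 2))   # 0.33, score rises purely from mislabelling
```

In a real system trained on vast datasets the same mechanism operates statistically: every mislabelled “operative” nudges the model towards treating the characteristics of that person’s actual group, here civil defence workers, as indicators of militancy.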

Secondly, commanding officers bear responsibility for appropriate use. This entails exercising oversight over the entire process and ensuring that human operators of the AI system in question observe IHL (ICRC, Customary Rules 15–24). In the case examined here, a potential breach of this principle can be found in the fact that commanding officers gave sweeping approval to adopt the kill lists generated by the AI system without further review, thus reducing the procedure of human oversight to a “rubber stamp”.

Thirdly, in the execution stage, human operators must comply with their obligation under Art. 57 AP I (ICRC Customary Rule 16) to do everything feasible to verify that targets are military objectives (on IDF targeting before October 7 see here) and that the decision reflects a balance between military necessity and humanitarian considerations (principle of proportionality). This is questionable in the present case, where review of each individual case allegedly took only 20 seconds, during which the human operator would often only confirm that the target was male.

Thus, even if “Lavender” and “Where’s Daddy” are labelled under the (ill-defined) umbrella term “artificial intelligence” and might be perceived to be “autonomous” by some, their development and operation are still determined by human decisions and human conduct, which makes those individuals accountable for their choices. Human officials cannot evade legal responsibility by hiding behind an AI system and claiming lack of control when they simply do not exercise the control that they have.

The main problem that arises from advanced AI military tools is thus one of scale: since AI allows flagging thousands of potential targets almost simultaneously, it challenges human capacity for review and verification. It might, therefore, be tempting for human officials to rely on the AI’s results without proper verification, thus delegating their decision-making power and responsibility to the machine. It is the responsibility of states, commanding officers, and ground-level operators to resist that temptation and ensure the responsible and lawful use of these new technologies. However, if they fail to do so, the existing rules of IHL remain a viable tool to ensure state accountability for violations of the law of armed conflict.

Conclusion

Before concluding, it must be emphasized that this is not a final assessment of the legality of any particular Israeli attack. Such assessments are often unreliable in the fog of war, due to the lack of officially confirmed inside information on how the IDF executes its strikes. But it can be stated that the use of Lavender and other AI-based target selection tools may make IHL violations more likely.

Crucially, using AI tools like Lavender does not create responsibility gaps for IHL violations, because human decisions continue to determine what these applications can and cannot do and how they are used. For these decisions and the decision-making process (e.g. fully relying on the AI-generated target list), human officials can be held accountable under IHL. Gaza and the Occupied Territories have long been a proving ground for new surveillance technologies and AI warfare. It is important to evaluate these systems and to ensure that technologies that intrinsically (due to human programming) violate IHL are not perpetuated. For such an evaluation, transparency regarding the training data and intelligence processes is essential.

Nonetheless, the existing rules of IHL are well equipped to deal with the new challenges that arise with the use of AI in armed conflict. While humans continue to exercise oversight and control over AI systems, they can be held accountable for their actions if they violate IHL; if they relinquish that control, this decision may in itself be a violation of IHL.

