How Artificial Intelligence Kill-Lists Drive Israeli Genocide in Gaza

ARTIFICIAL INTELLIGENCE-AI, 6 May 2024

Bappa Sinha | NewsClick - TRANSCEND Media Service


A probe report reveals how AI programmes, such as Lavender and The Gospel, were used to target supposed Palestinian militants in Gaza through bombing attacks that resulted in mass civilian casualties.
27 Apr 2024 – An investigation conducted by the Israeli-Palestinian magazine +972 has made explosive revelations about Israel’s use of artificial intelligence (AI)-generated “kill-lists” to target supposed Palestinian militants in Gaza through bombing attacks that resulted in mass civilian casualties during its genocidal campaign since October 7, 2023.

The report states that an AI computer programme, called ‘Lavender’, played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war. Lavender was developed by the elite Unit 8200 of the Israel Defence Forces (IDF). It was used along with another AI system called ‘The Gospel.’ While The Gospel identifies buildings and structures supposedly used by militants, Lavender identifies individuals and adds them to a kill-list for targeted elimination.

The Modus Operandi

According to the report, during the first weeks of the war, the Israeli Army almost completely relied on Lavender to identify individuals and their homes for air strikes. Lavender marked as many as 37,000 Palestinians as “suspected militants” with links to Hamas. The IDF gave its officers a free hand to adopt Lavender’s kill-list without any requirement for further human oversight or cross-check with ground intelligence regarding these individuals.

Human oversight was limited to just 20 seconds per target, only to verify that the target was male before authorising the bombing, since female targets picked by the AI programme were assumed to be mistakes, as Hamas does not recruit women into its military wing. This programme was used despite it being known that it often made mistakes and often picked targets who had no connection to militant groups, said the report.

The Israeli military systematically targeted individuals in their homes, typically at night when their entire families were present, rather than engaging them during military operations. This strategy was chosen because locating the individuals in their residence was easier.

Additionally, automated systems, including one named “Where’s Daddy?”, were employed specifically to track these targeted individuals and carry out bombings when they were inside their family homes. As a result, numerous Palestinians, predominantly women, children, the elderly and other non-combatants, were killed by Israeli airstrikes due to decisions made by the AI programme.

According to the report, information collected on most of the 2.3 million residents of Gaza through surveillance is fed into the Lavender system, which then analyses and ranks the likelihood of each resident’s association with the military wing of Hamas. It gives almost every single person in Gaza a rating from 1 to 100, as an indicator of how likely it is that they are Hamas militants.

Lavender “learns” to identify characteristics of known Hamas militants, whose information had been fed into the machine as training data, and then tries to locate these same characteristics — referred to as “features” in AI terminology — among the general population. Typical “features” would include an individual’s visual information, mobile usage information, social media connections, such as membership of WhatsApp groups, battlefield information, phone contacts, and photos.

While humans select these features at first, the machine gradually comes to identify features on its own, thereby becoming completely opaque and unaccountable. An individual found to have several different incriminating features is given a high rating and thus automatically becomes a target for bombing.

Rating thresholds above which individuals were chosen for bombing were set arbitrarily, and were lowered whenever Israeli officers ran out of targets to bomb, says the report. IDF officers knew well that the system would mistakenly flag individuals whose communication patterns resembled those of known Hamas militants: police and civil defence workers, militants’ relatives, residents with names or nicknames identical to a militant’s, and people who used a phone that once belonged to a Hamas member, which happens often enough, since the devices of the dead get passed on to relatives in a warzone.
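The mechanism the report describes, scoring individuals on learned “features” and flagging everyone above an adjustable cut-off, is the logic of an ordinary machine-learning classifier. A minimal sketch (all names, scores and thresholds below are invented for illustration and are not drawn from the report) shows why lowering the threshold once targets “run out” necessarily sweeps in more people, including false positives:

```python
# Toy illustration of threshold-based flagging; not any real system.
# Each person receives a score in [0, 100]; everyone at or above the
# threshold is flagged, regardless of whether the score is accurate.

def flag(scores, threshold):
    """Return the names whose score meets or exceeds the threshold."""
    return [name for name, s in scores.items() if s >= threshold]

# Hypothetical scores for four invented individuals.
scores = {"A": 92, "B": 61, "C": 55, "D": 40}

print(flag(scores, 90))  # ['A']            -- high threshold: few flags
print(flag(scores, 50))  # ['A', 'B', 'C']  -- lowered threshold: more flags
```

Nothing in the sketch makes the flags more accurate when the threshold drops; the only thing that changes is how many people clear the bar, which mirrors the report’s account of thresholds being lowered whenever officers ran out of targets.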

Despite knowing that the system was in no way foolproof, the IDF relied on it extensively as a tool to generate targets for their genocide.

Massive Civilian Casualties

In an unprecedented move, the Israeli army decided that for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians!

In the past, the military had not authorised any “collateral damage” during assassinations of low-ranking militants. When it came to targeting such “junior militants” marked by Lavender, the army preferred to use only relatively inexpensive unguided missiles, commonly known as “dumb” bombs (in contrast to “smart” precision bombs), which can destroy entire buildings, wiping out many families as collateral damage. “You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs]” — went the Israeli logic.

In the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army authorised the killing of more than 100 civilians during the assassination of a single commander.

Lavender and systems like “Where’s Daddy?” were thus combined with deadly effect, killing entire families and sometimes wiping out whole neighbourhoods, said the report.  Most of the people killed were women and children.

In order to assassinate Ayman Nofal, the commander of Hamas’ Central Gaza Brigade, the army authorised the killing of approximately 300 civilians, wiping out more than 15 houses in the bombing of the Al-Bureij refugee camp on October 17, based on an imprecise pinpointing of Nofal’s location, it added.

Role of US Big Tech

The tech infrastructure to run these deadly AI programmes may very well have come from US companies, said the report. In April 2021, the Israeli Finance Ministry announced the award of a contract for a $1.2 billion cloud computing system jointly built by Google and Amazon named “Project Nimbus.” The official statement said: “The project is intended to provide the government, the defence establishment and others with an all-encompassing cloud solution.”

A report published by the online publication The Intercept mentioned that Google provided the Israeli government with the full suite of machine-learning and AI tools available through the Google Cloud platform. Google documents indicated that the “Project Nimbus” cloud would give Israel capabilities for facial detection, automated image categorisation, object tracking, and even sentiment analysis that claims to assess the emotional content of pictures, speech, and writing.

The tech community has discredited such dubious claims of discerning an individual’s emotions using facial expressions as pseudoscience. Many Google employees have become alarmed over the use of technologies such as AutoML, a Google AI tool offered to Israel through “Project Nimbus,” fearing both their inaccuracy and how they might be used for surveillance and military purposes.


Google employees have been protesting their employer’s role and complicity in Israel’s genocide in Gaza for several months all across the US. The protests at Google offices have been organised by the group “No Tech for Apartheid.” Google has fired more than 50 employees for participating in these protests.

It is well established that US Big Tech monopolies, such as Google and Amazon, are entrenched in the US Military-Industrial-Surveillance complex and derive a significant share of their revenues from this association. The entire business model of companies such as Google and Facebook is based on pervasive surveillance of their users through their smartphones. US intelligence agencies, such as the NSA, now have full access to this huge trove of users’ personal data. These same companies are now investing heavily in AI and see military applications, such as those mentioned in this article, as a huge source of revenue in the years ahead.

Ethical concerns around using their dubious AI products for conducting genocide and furthering apartheid policies of the Israeli state are hardly going to be a roadblock for these companies doing business with the Israelis or any other US government-approved entities.

These same companies have loudly publicised “ethical AI charters,” which are effectively a form of “ethicswashing”: toothless self-regulatory pledges that provide only the appearance of scruples.

The threat of AI is not from super-intelligent AI going rogue, as the “ethical AI charters” from these companies would have us believe, but from the business-as-usual activities of these tech monopolies and US imperialism-led countries whose behaviour can only be described as having “gone rogue” against any kind of world order.

_________________________________________________

Bappa Sinha is a veteran technologist interested in the impact of technology on society and politics. The views expressed are personal.

Go to Original – newsclick.in
