The Future of Peacekeeping: Navigating Ethical Artificial Intelligence Deployment in Conflict Zones


Brenda K. Wiederhold | Liebert Publishers - TRANSCEND Media Service

21 Oct 2024 – From Lebanon to South Sudan, United Nations peacekeepers are tasked with helping countries torn by conflict create the conditions for lasting peace. In addition to monitoring and observing peace processes, peacekeepers assist ex-combatants in implementing peace agreements, and the UN deploys troops and police globally in support of these missions.1 However, the evolving landscape of peacekeeping missions has reached a tipping point. The introduction of artificial intelligence (AI) and similar advanced technologies raises important ethical and psychological considerations for peacekeeping in post-conflict zones, ranging from malicious cyberattacks that disrupt infrastructure to artificially altered images and videos that erode the trust peacekeepers and related personnel foster among civilians.

These same advanced technologies are also poised to irrevocably change conflict zones in the 21st century. In the past several years, military powers have aggressively begun testing offensive cyber technologies, but this push for innovation has often sidelined efforts to ensure AI safety and ethical use.2 Now is the time to push for humane artificial intelligence (HAI): the development of intelligent systems that respect human rights and create robust safeguards for the people affected by conflict. Peacekeepers must prepare to mitigate AI-related threats, but to do so they will need to develop the skills, capacity, and frameworks required to maintain peace in the cyber realm.3 Training in advanced technologies, investment in simulations that reinforce contact skills for peacekeepers in the field, and policies designed to protect human rights in the digital age can all better facilitate conflict resolution and support peace operations around the world.

AI and Advanced Technologies in Peacekeeping

Since 1948, the United Nations has launched, on average, one peacekeeping mission a year, but despite the backing of powerful countries, UN missions have not always been able to fully meet their mandates.1 This is in no way the fault of the peacekeeping troops on the ground; rather, it can be attributed in part to the limited adoption, development, and application of advanced technology. Peacekeepers need to be able to recognize new threats, strengthen awareness around existing ones, and identify responses, yet advanced monitoring and surveillance technologies have been largely underused.1 The concept of HAI emphasizes the development and deployment of AI systems that prioritize human values, ethics, and well-being. In peacekeeping, this means creating technologies that support human peacekeepers while adhering to ethical principles such as protecting human rights, minimizing harm, and ensuring equitable treatment of all individuals.

The Defense Advanced Research Projects Agency (DARPA) previously issued a request for the development of scalable, interactive gaming and wargaming approaches. As these games are developed and released, a key component of this content should be peacekeeping gaming: simulations that require players to develop strategies to resolve conflicts and coordinate with team members.4 Training simulations can help better prepare peacekeepers for their work in the field. These virtual simulations should complement in-person training and support the development of contact skills, the communication techniques used to build consensus with community members in a mission area and de-escalate potentially violent situations.5 Peacekeeping games allow players to work through realistic scenarios that teach de-escalation, cooperation, situational discernment, and restraint in the use of armed force.6 Potential scenarios drawn from the field include responding to security threats against UN personnel, deciding next steps following attacks on civilians, and addressing the fallout from false rumors about a UN mission. Virtual reality (VR) and augmented reality (AR) can help create highly immersive training environments that not only better prepare peacekeepers for interactions in the field but also allow them to practice making decisions in high-stress situations that emphasize judgment, empathy, and restraint in the use of force.7–9 These training simulations can also incorporate AI so that the game adapts to the player's choices and learns from interactions, generating a more realistic simulation.
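To make that adaptivity concrete, the sketch below (a hypothetical example in Python, not drawn from any system cited here) shows one simple way a training game could tailor itself to a trainee: an epsilon-greedy selector that drills the scenarios where past performance has been weakest. The scenario names and scoring scale are assumptions for illustration.

```python
import random

# Hypothetical adaptive scenario selector for a peacekeeping training game.
# Scenarios where the trainee has performed worst are drilled most often,
# with occasional random exploration (an epsilon-greedy policy).

SCENARIOS = [
    "de-escalation",
    "rumor-response",
    "civilian-protection",
    "checkpoint-negotiation",
]

class AdaptiveTrainer:
    def __init__(self, scenarios, epsilon=0.2):
        self.scores = {s: [] for s in scenarios}  # past performance, 0.0-1.0
        self.epsilon = epsilon                    # chance of a random pick

    def _mean(self, scenario):
        history = self.scores[scenario]
        return sum(history) / len(history) if history else 0.0

    def next_scenario(self):
        # Explore occasionally; otherwise drill the weakest (or untried) scenario.
        if random.random() < self.epsilon:
            return random.choice(list(self.scores))
        return min(self.scores, key=self._mean)

    def record(self, scenario, score):
        self.scores[scenario].append(score)

trainer = AdaptiveTrainer(SCENARIOS)
trainer.record("de-escalation", 0.9)
trainer.record("rumor-response", 0.4)
print(trainer.next_scenario())  # untried scenarios come first, then the weakest
```

A production simulation would use a far richer learner, but the design point is the same: the system's difficulty and scenario mix respond to the individual trainee rather than following a fixed script.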

Using technology to enhance peacekeeping training has a direct bearing on the Protection of Civilians (POC) mandates peacekeepers follow, especially in the face of new threats in the digital space. Under a POC mandate, peacekeepers are responsible for protecting civilians, particularly those under threat of physical harm. The mandate encompasses all aspects of a peacekeeping mission (civilian, police, and military), and peacekeepers are authorized to use all means necessary to prevent, deter, or respond to threats of physical violence against civilians. To meet this mandate in the digital age, peacekeepers need modernized training. Peacekeepers operate in high-stress environments that demand emotional resilience and cultural sensitivity. Their missions often occur in diverse cultural environments, and modern technology can help prepare peacekeepers to navigate local customs and norms, communication styles, and cultural contexts before deployment.1 While the research and development (R&D) costs may be high upfront, this is an investment in giving individuals the tools to more effectively fulfill their POC mandates.

AI and Advanced Technologies in War

While AI technology can help prepare peacekeepers for deployment, it can also affect the work they do during their missions. Peacekeeping incorporates data-driven analysis to predict and respond to violence and to understand the public's sentiment toward peacekeepers. This analysis requires population datasets and supporting digital and physical infrastructure, which also function as vulnerable targets for data manipulation and adversarial information operations.3 For example, the AI algorithms used to enable autonomy in sectors critical to the general public's survival (such as energy, medical care, and biotech) can be manipulated, allowing bad actors to weaponize industrial control systems and manufacturing supply chains or even shut down critical systems such as electric grids and emergency communications. Additionally, the sensitive data used in peacekeeping missions, such as data on local informants or sensitive information about refugees, are vulnerable to mishandling, leaks, and cyberattacks. On a broader scale, digitally altered photos, videos, and news reports can cast peacekeeping missions in a false light, hampering the ability of peacekeepers to build trust with the general public.
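One concrete, low-tech defense against altered media is cryptographic provenance: a mission publishes digests of its authentic photos and videos over a trusted channel, and anyone can recompute and compare them. The Python sketch below illustrates the idea; the manifest name, format, and directory layout are assumptions invented for this example, and a real deployment would also cryptographically sign the manifest itself.

```python
import hashlib
import json
from pathlib import Path

# Minimal media-integrity check: recompute each file's SHA-256 digest and
# compare it against a manifest of known-good hashes. The manifest name and
# layout are hypothetical; a real deployment would sign the manifest and
# distribute it over a trusted channel.

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large videos
            digest.update(chunk)
    return digest.hexdigest()

def verify(media_dir: str, manifest_path: str) -> None:
    # Assumed manifest format: {"photo1.jpg": "<hex digest>", ...}
    manifest = json.loads(Path(manifest_path).read_text())
    for name, expected in manifest.items():
        actual = sha256_of(Path(media_dir) / name)
        print(f"{name}: {'OK' if actual == expected else 'ALTERED OR REPLACED'}")

# verify("mission_media/", "trusted_manifest.json")
```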

To that end, peacekeepers must also be adequately prepared to monitor, anticipate, and report potential AI-enabled threats and cyberthreats against civilian populations. A key component of this involves improving the digital literacy and data management skills of UN personnel and setting strong rules around who can access sensitive information, how it will be stored, and what security measures will be used to protect it.3 Additionally, guidelines on the use of AI and machine learning in peace operations should be developed alongside adequate data-protection frameworks and data-governance mechanisms. Peacekeepers can also collaborate with technologists, civil society actors, and policymakers to better analyze and anticipate emerging threats. The International Gene-Synthesis Consortium (IGSC) offers one model: it has created a common global screening platform to help prevent the misuse of DNA synthesis technologies, forecasting threats and developing the related security screening.3
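As a concrete illustration of "strong rules around who can access sensitive information," the sketch below implements a toy role-based access check with an audit trail. The roles, record classifications, and log format are assumptions invented for this example, not a description of any actual UN system.

```python
from datetime import datetime, timezone

# Toy role-based access control for sensitive mission data, with an audit
# trail. Roles, record classifications, and the log format are illustrative
# assumptions, not a description of any actual UN system.

ACCESS_POLICY = {
    "informant_identity": {"mission_lead"},  # most restricted
    "refugee_records":    {"mission_lead", "protection_officer"},
    "public_reports":     {"mission_lead", "protection_officer", "analyst"},
}

AUDIT_LOG = []

def can_access(role: str, record_type: str) -> bool:
    allowed = role in ACCESS_POLICY.get(record_type, set())
    # Log every attempt, granted or denied, to support later review.
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "record": record_type,
        "granted": allowed,
    })
    return allowed

print(can_access("analyst", "informant_identity"))  # False, and the denial is audited
```

The design choice worth noting is the audit log: denied requests are recorded alongside granted ones, since repeated attempts to reach informant data are themselves a signal worth reviewing.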

Modern peacekeeping must also contend with the threats posed by AI-driven autonomous systems, such as fully autonomous uncrewed vehicles (UVs) that rely on AI to make decisions. Swarms of UVs, designed to act in coordinated groups in response to a human operator, are capable of tracking targets and making lethal strikes. These systems have advanced to the point that swarms are becoming too fast for humans to meaningfully counter.10 It is not outside the realm of possibility that, in the near future, the same datasets used for UN missions could be compromised and used to inform the actions of AI-driven UVs, whether for monitoring or for lethal strikes. Uncrewed and autonomous systems are evolving rapidly, which necessitates consideration from both a defensive perspective and a peacekeeping perspective.

Humane AI: Balancing Technology and Humanity

The use of AI in peacekeeping raises ethical questions about the delegation of decision-making to machines, especially in situations involving the use of force. Autonomous systems must be designed with ethical guidelines that prioritize human oversight and control. This includes maintaining a "human-in-the-loop" approach, in which humans have the final authority over critical decisions, particularly those involving lethal actions.10 In contrast, the "human-on-the-loop" approach places humans in a supervisory capacity, overseeing the decisions AI-driven autonomous systems make, a shift that blurs the line between humans using tools and tools using humans. As AI becomes even more firmly integrated into the defensive landscape, humans must stay involved, monitoring and responding to potential security breaches.
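A minimal sketch of what a human-in-the-loop gate might look like in software is shown below: the AI component can only produce a recommendation, and nothing executes without an explicit human decision. All names and the console interface are illustrative assumptions, not a reference to any deployed system.

```python
from dataclasses import dataclass

# Skeleton of a human-in-the-loop gate: the AI component can only recommend,
# and nothing executes without an explicit human decision. All names and the
# console interface are illustrative assumptions.

@dataclass
class Recommendation:
    action: str
    rationale: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

def human_review(rec: Recommendation) -> bool:
    print(f"AI recommends: {rec.action}")
    print(f"Rationale: {rec.rationale} (confidence {rec.confidence:.0%})")
    return input("Authorize? [y/N] ").strip().lower() == "y"

def execute_if_authorized(rec: Recommendation) -> None:
    if human_review(rec):
        print(f"Executing: {rec.action}")  # runs only after human sign-off
    else:
        print("Recommendation declined; no action taken.")

# execute_if_authorized(Recommendation("reposition patrol", "crowd forming near gate", 0.73))
```

The structural point is that the default is inaction: anything other than an explicit "yes" from the human reviewer leaves the system idle, which is the inverse of a human-on-the-loop design where the system acts unless overridden.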

Additionally, the principle of "do no harm" should guide the deployment and development of AI. This involves assessing the potential risks and unintended consequences of AI technologies, such as the possibility of data breaches, surveillance misuse, and the reinforcement of existing power imbalances. AI is advancing faster than humans can keep pace with it, and in our drive to create newer, more advanced technology, we have at times prioritized innovation over fully implementing safety measures and industry regulations. Because of this, we are now in a perilous situation, lacking adequate protections against digital threats, in which AI malware can be trained to study the behavior of social network users and plant rumors among the people most likely to share them, impersonate a strategic contact to glean sensitive information from an informant, or perform biometric attacks by recognizing the facial features of human targets.10 Peacekeepers are charged with protecting civilians, but to do so, they need the infrastructure and tools to adequately monitor, recognize, and respond to threats before they become major incidents.
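As one deliberately simple building block for that kind of monitoring, the sketch below flags sudden bursts in the hourly volume of posts repeating a claim, using a z-score against the recent baseline. The threshold and data source are assumptions for illustration, and a flagged burst would be routed to a human analyst for judgment, not acted on automatically.

```python
import statistics

# Flag a sudden burst in the hourly volume of posts repeating a claim, using
# a z-score against the recent baseline. Threshold and data source are
# assumptions; alerts go to a human analyst, never to automated action.

def burst_alert(hourly_counts: list[int], threshold: float = 3.0) -> bool:
    """Return True if the latest hour sits `threshold` std devs above baseline."""
    *baseline, latest = hourly_counts
    if len(baseline) < 2:
        return False  # not enough history to estimate a baseline
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return latest > mean  # flat baseline: any increase is notable
    return (latest - mean) / stdev > threshold

print(burst_alert([4, 6, 5, 7, 5, 42]))  # True: a spike worth human review
```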

To ensure the ethical use of AI in peacekeeping, international guidelines and policies must be established and regularly updated. The United Nations and other peacekeeping organizations should work towards developing frameworks that govern the use of AI and advanced technologies, emphasizing ethical considerations, human rights protection, and the responsible use of force. These guidelines should outline the ethical principles and standards that AI systems must adhere to, including transparency, accountability, and human oversight.

The integration of AI into peacekeeping missions presents both opportunities and challenges. While AI has the potential to enhance peacekeeping operations, improve efficiency, and support human peacekeepers, it also raises critical ethical and psychological considerations. Peacekeeping in the digital age requires a multidisciplinary approach that combines technological innovation with a deep understanding of the social, psychological, and ethical implications of AI. As we navigate the complexities of integrating AI into peacekeeping, we must remain vigilant in upholding the principles of humanity, equity, and justice, ensuring that technology serves as a tool for peace.

References:

  1. Hall L, Paracha S, Hagan-Green G. Start with the human, technology comes later: Values for the digital transformation of peacekeeping. Interacting with Computers 2021;33(4):395–410.
  2. Williams PD. The future of peace operations: A scenario analysis, 2020–2030. International Affairs at the George Washington University; 2020.
  3. Pauwels E. Peacekeeping in an era of converging technological & security threats. 2021.
  4. Dorn AW, Webb S, Pâquet S. From wargaming to peacegaming: Digital simulations with peacekeeper roles needed. International Peacekeeping 2020;27(2):289–310.

Go to Original – lieberpub.com

