The role of emerging technologies in enhancing joint training

By Alexandra Martin and Jordan Sweeney 

From "The CoESPU MAGAZINE" no. 4 - 2020

Section: "Training and Learning Architecture: Join Training and Cooperation for the Maintenance of Peace and Security ", page 26

DOI Code: 10.32048/Coespumagazine4.20.4

The spotlight cast in recent years on the myriad consequences of new, destabilizing threats has revealed crucial weaknesses in our global security and conflict prevention landscape. Amplified by the coronavirus pandemic and the challenges that followed in 2020, global trends look rather pessimistic despite local, regional and international efforts to curb human suffering. It is estimated that about 508,000 people died annually as a result of violence in the period 2007-2012, while for every direct war victim, between 3 and 15 people die indirectly without ever appearing in conflict death statistics[1]. Ethnic tensions, access to critical resources, rising social inequality, poverty and climate change remain just a few of the triggers that continue to feed old and new conflicts around the world. Thus, the need for effective conflict prevention has never been greater. In achieving this objective, the role of UN forces, including Stability Police Units, comes to the frontline. This article aims to outline the opportunities and challenges brought by emerging technologies in peace operations. Decision-makers within our global security and defense architecture should adopt a strategy of pre-emptive preparedness and invest in training that harnesses the positive capabilities of emerging technology in order to increase efficiency, effectiveness, and interoperability in peace support operations.

   

Evolution of conflict prevention

The field of conflict prevention has evolved over the last few decades, from its initial conceptualization by UN Secretary-General Hammarskjöld in the 1960s to being seen today as an anchor of collective security efforts. Boutros Boutros-Ghali's shift in focus from "keeping regional conflicts from going global" to "prevent them from starting in the first place" (An Agenda for Peace, 1992) is still considered a game-changing moment in the evolution of the concept. This emphasis on acting early has become central in both the academic literature and policy prescriptions, because while "active conflict is natural and inherent in all areas of activities […] prevention is desirable to keep it from escalating out of hand, beyond its useful benefits" (Zartman, 2015). Thus, over time, early warning instruments have become the linchpin of the contemporary conflict prevention framework at the international level, providing capabilities that "promptly identify risks of emergence, re-emergence or escalation of violence, and swiftly adapting policy responses so as to mitigate conflict risks" (Faleg and Gaub, 2020).

However, there is "no crystal ball to precisely forecast the outbreak of violent domestic conflicts", while the actual capacity to respond effectively in protracted situations or highly disenfranchised communities is severely limited (Stedman, 1995). This 'precise forecast', or accuracy of prediction, remains a subject of debate to this day, given the difficulty of estimating the probability of any specific event.

But the renewed international attention on prevention is creating a new window of opportunity to revisit our understanding of peace operations and their associated toolbox, to make them more effective, relevant and adaptable to the current reality of an ever more insecure global landscape. As the Secretary-General of the United Nations, António Guterres, pointed out in 2017: "We spend far more time and resources responding to crises rather than preventing them. People are paying too high a price (…) We need a whole new approach"[1].

 

The emerging digital realm is increasingly seen as a new frontier of opportunities, spanning local, transnational and supranational levels, able to enhance the understanding of, and connection between, forms of violence and expectations for peace. Research on information and communication technologies (ICTs) has exposed their potential in the context of humanitarian relief (International Federation of the Red Cross and Red Crescent Societies, 2005), pre- and post-electoral monitoring of violence (Bani & Sgueo, 2014), and the use of 'big data' for peacekeeping (Mac Ginty, 2017). More recently, we have seen the use of more advanced intelligent technologies such as drones and other unmanned vehicles for surveillance and monitoring purposes (as in the UN Mission in the DRC or the OSCE SMM in Ukraine), for humanitarian and disaster intervention, or for delivering supplies in remote or inaccessible areas.

The case for emerging technologies

Today's emerging technologies, in both the military and civilian realms, are fundamentally changing the way we live, interact, make decisions, shape world views and engage in conflict. Technological progress has expanded the boundaries of human knowledge, information and capabilities, showing potential to overcome more analogue, traditional challenges such as geography and access, mobility or communication flows. One of these families of technologies is Artificial Intelligence (AI), recognized for its transformative nature and transboundary reach.

While a single definition of AI is yet to be agreed upon, AI is generally thought to refer to "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention" (Brookings, 2018). Real-life applications include enhanced satellite imaging, image and speech processing, surveillance and facial recognition. All are driven by algorithms trained through machine and deep learning, which derive the computer-based intelligence used to support decisions. They collect and use 'big data', building very large data sets at high speed from multiple sources, which "may be useful at times of crises and disasters" (Imran, Meier, & Boersma, 2018). These technologies are already part of future-of-warfare capability development around the world.
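
To make the "train, then decide" workflow described above more concrete, the toy sketch below (not drawn from this article) fits an algorithm on many labelled historical examples and then uses it to classify previously unseen cases. The library, model choice and all data are illustrative assumptions; real peace-operations applications would ingest satellite imagery, field reports or sensor feeds rather than synthetic numbers.

```python
# Illustrative sketch only: a generic supervised-learning pipeline of the kind
# described above. All data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "big data": 5,000 examples described by 20 numeric features each.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the algorithm on labelled historical examples...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# ...then use it to support decisions on cases it has never seen.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```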

AI is here to stay and will continue to evolve and expand its functions and range of applications exponentially, as progress over the last decade has shown. The technology shows enormous potential to drive growth, fight poverty and climate change, support disaster response, and contribute to social development, stability and peace, as presented in the 2018 UNCTAD report[2].

Shortcomings, Opportunities and Challenges

UN-led and regionally led peace operations have come under public scrutiny for their mixed achievements, in particular in protecting civilians, preventing high numbers of casualties and avoiding relapses into war and violence. Experts across the board call for peace support operations to be better designed to respond to a more complex conflict environment, including by a) agreeing on more robust political mandates and larger troop deployments, and b) making use of support technologies that could lower the risk of human loss and increase the efficiency of operations.

In this regard, embedding emerging technologies such as AI in the joint training of stability police units presents a new kind of opportunity, with the potential to offer a better understanding of the realities and dynamics on the ground for troops ready to be deployed. Furthermore, such a 'human-machine teaming' approach could then spill over into the training of national forces in countries of deployment, equipping them to respond to rising violence before it turns into full-fledged conflict.

The range of opportunities presented by AI could represent a game-changing moment in the design of peace infrastructure globally. The use of AI could help diminish direct and indirect civilian deaths and reduce the number of refugees and internally displaced persons, which saw a five-fold increase between 2010 and 2016[3].

  1. Through the use of AI in joint training and peace operations, international and local stakeholders could gain a better understanding of the descriptive, predictive and pattern-based diagnoses of a conflict, in order to avoid an outbreak of violence or re-escalation in a fragile conflict setting. These features could lead to better coordination between civilian, police and military forces on the ground, including through shared information and intelligence gathering, pattern recognition and more robust, 360-degree early warning systems.
     
  2. The use of AI in synchrony with human-driven analysis could enable enhanced early warning and response mechanisms for UN military or police forces deployed on the ground, helping to close the gap between early signs of conflict and political action (a minimal illustrative sketch follows this list). This has the potential to improve practices of prevention and better explain the linkage between operational and structural prevention.

 

  3. The use of AI in joint training could help diminish the asymmetry in capabilities and technical capacity within multinational forces and with the host countries' apparatus, by encouraging knowledge and expertise sharing, joint initiatives for the development of new technologies, and the convergence of norms, practices and standards in line with the UN commitment to a "human-centered and rights-based approach".
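
As a purely hypothetical illustration of the early-warning logic referred to in point 2 above, the sketch below flags unusual spikes in a synthetic series of weekly incident reports so that a human analyst can review them. The counts, window length and threshold are assumptions made for the example; a real system would fuse many data sources and keep analysts and commanders in the decision loop.

```python
# Hypothetical early-warning sketch: flag weeks whose reported incident count
# rises well above the recent baseline. All numbers below are invented.
import numpy as np

weekly_incidents = np.array([3, 4, 2, 5, 3, 4, 6, 5, 4, 18, 22, 7])  # synthetic counts

window = 6           # weeks of history used as the baseline (assumption)
threshold_sigma = 2  # deviations above baseline that count as "unusual" (assumption)

for week in range(window, len(weekly_incidents)):
    history = weekly_incidents[week - window:week]
    baseline, spread = history.mean(), history.std()
    if weekly_incidents[week] > baseline + threshold_sigma * spread:
        print(f"week {week}: {weekly_incidents[week]} incidents "
              f"(recent baseline {baseline:.1f}) -> escalate for human review")
```

In practice, this kind of pattern-recognition step would sit alongside the shared information and intelligence gathering described above, with escalation decisions remaining in human hands.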

 

While the opportunities presented by the use of AI in peace support operations could lead to new thinking at the international level, improved knowledge and awareness of the potential misuse of this dual-use technology is critical to prevent its hijacking for nefarious purposes. The constantly evolving digital landscape raises critical questions about privacy and human rights, restriction of freedoms, responsibility and attribution, accountability, ethics and trustworthiness[4]. Challenges faced by international organizations and national governments include:

  1. AI and other emerging technologies develop faster than existing legal norms and standards
     

"Digital technologies have intensified the tensions between security and law" (Aradau, 2017), as international treaties and agreements lag behind the speed of capability development by private, state and non-state actors. A joint ICRC-SIPRI publication on "Limits on Autonomy"[5] also emphasizes that the existing legal framework does not keep pace with the advancement of new technologies and fails to address their potential "foreseeable humanitarian impact". In the context of law enforcement and stability police units, the use of AI in a normative vacuum "may infringe fundamental human rights, such as the right to privacy, equality and non-discrimination, as well as undermine principles of law, such as the presumption of innocence, privilege against self-incrimination and proof beyond a reasonable doubt."[6]

  2. Ethical boundaries, privacy, human rights and transfer of biases

 

The ethical questions around the use of AI are linked to the ownership, collection, protection and use of data in line with human rights provisions. Without a globally agreed set of norms and practices, reliance on highly sensitive data exposes associated risks such as biometric data sensitivity, data integrity and data bias (a brief illustrative sketch of bias transfer follows at the end of this section). Facial recognition technology (FRT) in particular remains highly problematic in conflict settings because of its potential use in discriminatory targeting, mass surveillance, retaliation, harm and disproportionate responses. It is estimated that the FRT market will reach a value of $7bn by 2024[7]. Without a regulated environment that guarantees the protection of privacy and freedoms, misuse of private data could lead to invasive actions and policies that re-enact the cycle of violence in conflict-affected environments.

The humanitarian aid domain has also revealed weaknesses that may persist or even be amplified with the adoption of new technologies. In a webinar in June 2020, the ICRC's Digital Transformation and Data Director, Balthasar Staehelin, noted that humanitarian organizations' recent reliance on the collection and processing of highly sensitive data renders them "vulnerable to adverse cyber operations that could impact the people that need [them] most".[8]
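
To make the notion of 'transfer of biases' more tangible, the hypothetical sketch below trains a simple classifier on synthetic data in which one group is well represented and another is not; the learned decision rule then makes far more errors on the under-represented group. Every group, feature and number here is invented purely for illustration and models no real population or system.

```python
# Hypothetical illustration of bias transfer: a model trained mostly on data
# from group A performs much worse on under-represented group B, even though
# the two classes are equally separable within each group. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, offset):
    """Two classes separated by 2 units, with the whole group shifted by `offset`."""
    x0 = rng.normal(offset, 1.0, n_per_class)        # class 0
    x1 = rng.normal(offset + 2.0, 1.0, n_per_class)  # class 1
    X = np.concatenate([x0, x1]).reshape(-1, 1)
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

# Group A dominates the training set; group B is barely present (assumption).
Xa, ya = make_group(1000, offset=0.0)
Xb, yb = make_group(25, offset=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Error rates diverge sharply on fresh samples from each group.
for name, offset in [("A (well represented)", 0.0), ("B (under-represented)", 3.0)]:
    X, y = make_group(500, offset)
    print(f"group {name}: error rate {(model.predict(X) != y).mean():.1%}")
```

The point of the sketch is not the specific model but the mechanism: whatever data a system is trained on, its gaps and skews are transferred to the decisions it supports.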

 

Conclusion

As our world undergoes a massive technological transformation, we are yet to fully capture and understand its long-term impact on societal, political and economic dimensions. The potential of AI and related technologies for peace operations, military and police activity in conflict and post-conflict settings, and joint training and capabilities development is only at an incipient stage. Undeniably, technological advancement will push our creativity and innovation beyond traditional limits, while opening new avenues for impact-driven operations. To ensure that, as an international community, we stand a chance of winning this race, now is the right moment to start encouraging investment in AI opportunities and to develop practices and training capabilities for the civilian, military and police personnel involved in peace support missions. This will foster a global community of practitioners and security sector actors who have a clear understanding of the potential and inherent limitations of AI technology, can identify the areas where human-machine teaming will prove most effective on the ground, and can develop good practices in line with human rights provisions that could eventually become global norms in the field of peace support operations.

 

References:

 

 

 

[1] http://www.genevadeclaration.org/measurability/global-burden-of-armed-violence/global-burden-of-armed-violence-2015.html

[2] https://unctad.org/en/PublicationsLibrary/tir2018_en.pdf

[3] https://www.pathwaysforpeace.org/

[4] https://www.un.org/en/newtechnologies/images/pdf/SGs-Strategy-on-New-Technologies.pdf

[5] https://www.icrc.org/en/document/limits-autonomous-weapons

[6] http://www.unicri.it/index.php/towards-responsible-artificial-intelligence-innovation

[7] http://www.unicri.it/sites/default/files/2020-08/Artificial%20Intelligence%20Collection.pdf

[8] https://www.icrc.org/en/document/humanitarian-effectiveness-through-new-technology-requires-standards-control-over-data-data
