
IGF 2023 WS #409 AI and EDTs in Warfare: Ethics, Challenges, Trends

    Time
    Wednesday, 11th October 2023, 08:45 – 09:45 UTC
    Room
    WS 2 – Room A

    Organizer 1: Rosanna Fanni
    Organizer 2: Fernando Giancotti, Center for Defense Higher Studies, Rome
    Organizer 3: Paula Gürtler, Centre for European Policy Studies (CEPS)

    Speaker 1: Rosanna Fanni, Civil Society, Western European and Others Group (WEOG)
    Speaker 2: Fernando Giancotti, Government, Western European and Others Group (WEOG)
    Speaker 3: Pete Furlong, Civil Society, Western European and Others Group (WEOG)
    Speaker 4: Shimona Mohan, Civil Society, Asia-Pacific Group

    Moderator

    Rosanna Fanni, Civil Society, Western European and Others Group (WEOG)

    Online Moderator

    Paula Gürtler, Civil Society, Western European and Others Group (WEOG)

    Rapporteur

    Paula Gürtler, Civil Society, Western European and Others Group (WEOG)

    Format

    Panel - 60 Min

    Policy Question(s)

    A. What opportunities, challenges and risks arise from the use of AI and other emerging technologies in warfare?
    B. How should governments balance the need for security with a responsibility toward ethical uses of AI and other emerging technologies?
    C. What kind of universal ethical principles should underpin the development and deployment of AI and other emerging technologies used by the military?

    What will participants gain from attending this session? Participants will gain valuable insights, knowledge, and perspectives on AI and emerging technologies in warfare and the pressing ethical considerations they raise:
    • Enhanced Awareness – of the transformative potential and the risks of AI and emerging technologies in warfare.
    • Expert Insights – four speakers with diverse backgrounds and practical expertise share their experience, research, and analysis.
    • Policy Discussions – important policy questions will stimulate meaningful discussion, giving participants a broader perspective on the complexities and nuances of AI and emerging technologies in defence.
    • Networking and Collaboration – attendees who share a common interest can exchange ideas, experiences, and insights, fostering collaboration and potential partnerships for future initiatives and research.
    • Influence and Impact – participants can help shape the discussion and outcomes of the session by engaging in the Q&A, sharing perspectives, and asking questions.

    Description:

    We propose a session that delves into the opportunities, challenges, and risks arising from the use of artificial intelligence (AI) and other emerging technologies in warfare. The session will explore the current state of development and deployment of AI and emerging technologies in the military, and spark a debate on the universal ethical principles that should apply to their use in this highly critical context. It further aims to foster discussion on how governments can strike a balance between security concerns and ethical considerations in the use of these technologies.

    Whether it is Palantir’s ChatGPT-like AI platform for military decision-making, Clearview’s facial recognition systems used to identify enemies, or autonomous drones deliberately used as lethal weapon systems: AI and emerging technologies are redefining warfare. In the war in Ukraine, too, leveraging AI and robotics technology has given the country a strategic advantage. Yet the debate over the responsible use of technology in these contexts is, unlike in the civilian domain, formative at best.

    At the political level, governments increasingly engage with the critical questions around AI used in and for the military. Canada, Australia, the U.S. and the UK have already established guidelines for the responsible use of AI, and NATO adopted its own AI Strategy in 2021. Many nations, however, are left without guidance on the responsible use of AI and other emerging technologies. This session will combine findings from recent research reports on the topic to discuss key issues in the ongoing use of AI and emerging technology in warfare with a broad audience.

    Expected Outcomes

    Session Outcomes:
    A – Raising awareness of the opportunities, challenges and risks of AI and other emerging technologies for the future of warfare and international peace more broadly.
    B – Providing a platform for all stakeholders to deliberate on the broad ethical principles that should underpin the development and deployment of AI and emerging technology in the context of defence.

    Specific Outcomes:
    A – A report summarising the session, to be shared online with IGF participants and stakeholders.
    B – Promotion of reports and other policy-related initiatives in Brussels, Italy and London.
    C – Further development of the defined ethical principles to feed into a global task force for diversity in military AI.

    Hybrid Format: One team member will act as “online delegate”, reporting the online audience’s experience to the onsite team. This person will also monitor the chat and communicate interventions to the onsite team in real time via an internal chat, ensuring that online attendees are actively included in the session. The online and onsite moderators will conduct a test session beforehand and will alternate during the event according to a detailed run sheet, prepared ahead of the session, specifying speaking responsibilities and interactions. In addition to the chat function of the IGF videoconferencing platform, we will engage speakers and attendees with instant polling via the audience engagement tool Sli.do. All speakers will additionally use Twitter during the session to share quotes and reply to participants’ posts, comments and threads.

    Key Takeaways

    1. AI in the military domain goes beyond lethal weapon systems: it can change the pace of war and increase vulnerabilities due to AI’s limitations.

    2. Geopolitical power considerations and lack of awareness cause deadlock in moving these conversations forward.

    Call to Action

    1. The international community needs to define a set of concrete ethical principles applicable to the use of AI in defence to open a pathway for implementation in the style of International Humanitarian Law.

    2. International Organisations must take on more responsibility and leadership in establishing and implementing binding ethical frameworks.

    Session Report

    Abstract:

    What makes this topic relevant to the IGF? AI systems are dual-use by nature: any algorithm can also be used in military contexts. Indeed, AI and emerging and disruptive technologies (EDTs) are already being used in conflicts today, such as the war in Ukraine. The availability of data, machine learning techniques, and coding assistance also makes such technologies far more accessible to non-state actors.

    The plethora of AI ethics guidelines and policy frameworks largely excludes the military context, even though the stakes appear to be much higher in these applications. The European Union’s risk-based AI Act, for instance, completely excludes military uses of AI. This omission raises questions about the consistency and fairness of the regulatory framework.

    The debate regarding AI in the military extends beyond the legality of autonomous weapon systems. It encompasses discussions about explainable and responsible AI, the need for international ethical principles, the examination of gender and racial biases, the influence of geopolitics, and the necessity of ethical guidelines specifically tailored to military applications. These considerations highlight the complexity of implementing AI in the military and emphasise the importance of thoughtful and deliberate decision-making.

    Speakers’ summaries:

    Fernando Giancotti. In the war in Ukraine, AI is primarily used in decision support systems, but Giancotti hypothesises that the increasing use of AI will bring major change to warfare in the future. According to him, the stark gap in ethics discussions is a serious issue. A recent case study on the Italian Defence, published by the speaker together with Rosanna Fanni, highlights the importance of establishing clear guidelines for AI deployment in warfare. Ethical awareness among commanders is high, but commanders are concerned about accountability, and the study emphasises that they require explicit instructions to ensure the ethical and effective use of AI tools. Commanders also worry that failing to strike the right balance between value criteria and effectiveness could put them at a disadvantage in combat. They further express concern about whether the opposing side adheres to the same ethical principles, which complicates the ethical landscape of military AI use.

    On the other hand, Giancotti also recognises that AI has the capacity to bring augmented cognition, which can help prevent strategic mistakes and improve decision-making in warfare. For example, historical wars have often been the result of strategic miscalculations, and the deployment of AI can help mitigate such errors.

    While several nations have developed ethical principles for AI use, Giancotti points out the lack of a more general framework for AI ethics. As the study shows, AI principles vary across countries and organisations, including the UK, the USA, Canada, Australia, and NATO. At the UN level, the highly polarised and deadlocked discussion on Lethal Autonomous Weapon Systems (LAWS) does not seem likely to produce a universal framework. Giancotti therefore argues for the establishment of a broad, universally applicable ethical framework to guide the responsible use of AI technology in defence, and suggests that the United Nations (UN) should take the lead in spearheading a unified, multi-stakeholder approach to establishing it.

    However, Giancotti acknowledges the complexity and contradictions involved in addressing the ethical issues of military AI use. Reaching a mutually agreed, perfect ethical framework may be unrealistic; nevertheless, he stresses the necessity of pushing for compliance through intergovernmental processes, even though the prioritisation of national interests by countries further complicates universally agreed policies. Once broad agreement on AI defence ethics principles is reached, Giancotti suggests operationalising them by drawing on the wealth of experience with International Humanitarian Law.

    Pete Furlong. One of the main concerns regarding the use of AI in warfare is the lack of concrete ethical principles for autonomous weapons. The REAIM Summit aims to establish such principles, but a gap in concrete ethical guidelines remains, and the UN Convention on Certain Conventional Weapons has also been unable to address the issue effectively.

    Many technologies beyond LAWS pose risks, however. Satellite internet and the broader use of drones in warfare are two examples: even commercial hobby drones and other dual-use technologies are being used in warfare contexts and military operations, despite not being designed for these purposes. Furlong explains that because an AI system’s cognition is only as good as its sensing abilities, the value and effectiveness of AI in warfare depend on the quality and capabilities of the sensors used. More broadly, dual-use devices may not meet performance and reliability expectations when they have not been designed or trained for a warfare context.

    Furlong concludes that the military use of AI and other technologies has the potential to significantly escalate the pace of war. The intent is to accelerate the speed and effectiveness of military operations, which in turn affects the role and space of diplomacy in such situations. Targeted, specific principles for the military use of AI are therefore necessary, and conferences and summits play a crucial role in driving these discussions forward.

    Shimona Mohan. Some countries, such as Sweden, are exploring Explainable AI (XAI) and Responsible AI (RAI) in their military applications. The REAIM Summit produced a global call on responsible AI: 80 countries were present, but only 60 signed the agreement. The non-signatory countries appear to prioritise national security over international security regulations and laws.

    Mohan also raises gender and racial biases in military AI as important areas of concern. Gender is currently seen as an add-on in defence AI applications and use cases, at most a checkbox to be ticked. A Stanford study revealed that 44% of AI systems exhibited gender biases, and 26% exhibited both gender and racial biases. Another study, conducted by the MIT Media Lab, found that facial recognition software failed to recognise darker-skinned female faces 34% of the time. Such biases undermine the fairness and inclusivity of AI systems and can have serious implications in military operations.

    Biased facial recognition systems likewise raise major ethical as well as operational risks. Such dual-use technologies have been used in the Russia-Ukraine conflict, for instance, where soldiers were identified through these systems. This highlights the overlap between civilian and military AI applications and the need for effective regulation and ethical consideration in both domains.

    Mohan summarises three key issues behind the lack of awareness of gender and racial bias in military AI systems: 1) bias in data sets; 2) weapon reviews that do not include gender bias review; and 3) a lack of policy discourse on bias in AI systems.

    Author's comment: Created with the DigWatch IGF Hybrid Reporting Tool. https://dig.watch/event/internet-governance-forum-2023/ai-and-edts-in-w…