Panel: Automating Human Security: Rethinking the Role of AI in Conflict for the Protection of Civilians
Last week, I had the opportunity to participate in a panel on "Automating Human Security: Rethinking the Role of AI in Conflict for the Protection of Civilians," organized by the Institute for Ethics in Artificial Intelligence - TUM at the Amerikahaus in Munich as a side event of the Munich Security Conference 2025.
It was an honor to be among a great list of panelists, Yasmin Al-Douri, Vilas Dhar, Hichem Khadhraoui, and Christoph Lütge, with excellent moderation by Caitlin Corrigan.
I always need time for my reflections, but this time even more so, given the complexity of the topic, especially in the context of the world's current state.
Starting from the title:
* What is human security? What is peace?
A negative definition of peace is simply the absence of war, while a positive one emphasizes the right to an adequate standard of living (*). My favorite interpretation, though, comes from Nikiforos Vretakos, the Greek poet of peace and love, who beautifully wrote (**):
“And Peace is something deeper than the thing we mean when there is no war. Peace is when the human soul becomes the sun out in the universe and the sun becomes the soul into the human.”
* What is automation?
Automation nowadays often implies the use of AI. While AI excels at handling heterogeneous data, data fusion, and complex analyses, it also comes with limitations such as biases, hallucinations, flawed generalizations, and critical questions about ownership, access, and control over AI systems.
* What are conflicts?
Conflicts range from wars and civil unrest to cyber-attacks, human rights abuses, and climate-related crises.
Key takeaways from the panel (no order implied):
-- Hardware limitations matter: what happens when drones lose connection to the base?
-- Software limitations matter too: many AI systems are complex black boxes. Red-teaming is crucial for identifying system vulnerabilities.
-- Data limitations matter as well. The underlying data can be:
   -- biased (e.g., not all parties or aspects are sufficiently represented)
   -- scarce, noisy, outdated, or incorrect
   -- affected by misinformation (unintentional errors) or disinformation (deliberate manipulation)
-- For example, in a disaster, an AI system optimized to maximize aid delivery speed might prioritize urban areas with better data, unintentionally neglecting the remote or marginalized communities most in need and worsening existing inequalities (see the toy sketch after this list).
-- AI models essentially learn shortcuts to their optimization goal and do not explicitly rule out undesired pathways to that goal, which can lead to biased or unintended outcomes.
-- Many AI tools and datasets are available, but not everyone benefits equally.
-- Access to resources and knowledge remains uneven, limiting who can effectively use these tools.
-- Off-the-shelf AI solutions, e.g., GPT-* models, are often integrated into existing pipelines without fully considering their limitations, leading to the propagation of issues throughout the system.
-- Who is participating in the AI ecosystem? What are the boundary conditions?
-- What legal frameworks apply? The tension between nation-state regulation, global regulations, and empowered individuals operating without clear legal frameworks remains a key challenge.
-- AI needs data, but sharing it can introduce security risks, raising concerns about privacy, misuse, and exploitation, and potentially putting people in need at greater risk if such data falls into the hands of adversaries.
-- Surveillance data, often collected for positive purposes, can be repurposed for unethical uses.
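To make the optimization-shortcut point concrete, here is a minimal, purely illustrative Python sketch. All region names, numbers, and the greedy allocator are invented for this example (real humanitarian logistics is far more complex); the point is only that an optimizer scoring regions by the need visible in its data per delivery hour will systematically favor data-rich urban areas over data-poor remote ones.

```python
# Toy sketch with entirely hypothetical numbers: a proxy objective
# ("serve the most *visible* need per hour") starves remote, data-poor regions.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    people_in_need: int        # ground truth, often not fully known
    data_coverage: float       # 0..1, fraction of need visible in the data
    hours_per_delivery: float  # travel/logistics cost per delivery run

REGIONS = [
    Region("urban-A", 2_000, 0.9, 1.0),
    Region("urban-B", 1_500, 0.8, 1.5),
    Region("remote-C", 5_000, 0.2, 6.0),  # highest real need, worst data
    Region("remote-D", 3_000, 0.1, 8.0),
]

BUDGET_HOURS = 40.0         # total delivery hours available
MAX_HOURS_PER_REGION = 20.0

def allocate(regions, score):
    """Greedily assign delivery hours to the highest-scoring regions first."""
    hours = {r.name: 0.0 for r in regions}
    remaining = BUDGET_HOURS
    for r in sorted(regions, key=score, reverse=True):
        spend = min(remaining, MAX_HOURS_PER_REGION)
        hours[r.name] = spend
        remaining -= spend
        if remaining <= 0:
            break
    return hours

# Proxy objective: the optimizer only "sees" need that the data covers,
# so data-rich urban regions dominate the ranking.
speed_first = allocate(
    REGIONS, score=lambda r: r.people_in_need * r.data_coverage / r.hours_per_delivery
)

# Need-aware objective: requires ground-truth estimates the data may not contain.
need_first = allocate(REGIONS, score=lambda r: r.people_in_need)

print("speed-optimized:", speed_first)  # remote-C and remote-D get 0 hours
print("need-aware:     ", need_first)   # remote-C and remote-D are served first
```

Both allocations are "optimal" for their own objective; the harm comes from choosing a proxy objective that silently encodes the data gap.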
What makes this problem extremely difficult?
There are many actors involved, conflicting goals, and multiple adversaries. Understanding the context of the problem is crucial to providing effective solutions, which requires the involvement of subject-matter experts, local experts, and technical experts, among other stakeholders.
While AI can help, its responsible use is essential.
What "responsible" means, however, is a complex question. While the Responsible AI principles outlined in the AI Act are necessary, I think they are not sufficient in complex scenarios involving multiple actors, conflicting objectives, and numerous adversaries. In such contexts, additional frameworks and deeper contextual understanding are crucial. It also remains an open question which tasks can be automated (with human oversight) and which should not.
Sources: