Europe Inside Out

Is Europe Ready for AI-Driven War?

Episode Summary

Thomas de Waal, Raluca Csernatoni, and Jessica Dorsey discuss how AI is transforming warfare and the ethical challenges it raises.

Episode Notes

AI-powered technologies are transforming the nature of warfare, with profound implications for European security and the EU’s regulatory framework.

Thomas de Waal, Raluca Csernatoni, and Jessica Dorsey examine how these dual-use systems blur civilian and military lines, and what that blurring means for strategic, legal, and ethical accountability.

 

Raluca Csernatoni, October 30, 2025, “Corporate Geopolitics: When Billionaires Rival States,” Strategic Europe, Carnegie Europe.

Raluca Csernatoni et al., September 1, 2025, “Tech Diplomacy 2.0: Examining the Intersections Between Industry and Governments in International Relations,” International Journal of Cyber Diplomacy.

Raluca Csernatoni et al., September 1, 2025, “The Future of Foreign Policy in the Age of Emerging and Disruptive Technologies,” EU Cyber Direct.

Raluca Csernatoni et al., August 11, 2025, “Myth, Power, and Agency: Rethinking Artificial Intelligence, Geopolitics and War,” Minds and Machines.

Raluca Csernatoni, May 20, 2025, “The EU’s AI Power Play: Between Deregulation and Innovation,” Carnegie Europe.

Raluca Csernatoni, July 17, 2024, “Governing Military AI Amid a Geopolitical Minefield,” Carnegie Europe.

Jessica Dorsey, January 13, 2026, “The Erosion of Human(e) Judgement in Targeting? Quantification Logics, AI-Enabled Decision Support Systems and Proportionality Assessments in IHL,” Cambridge University Press.

Jessica Dorsey, December 14, 2025, “Drug Boats, Drone Strikes and the Dangers of Avoiding Mirrors,” Opinio Juris.

Jessica Dorsey, June 27, 2025, “AI-Enabled Decision-Support Systems in the Joint Targeting Cycle: Legal Challenges, Risks, and the Human(e) Dimension,” International Law Studies, Vol. 106.

Jessica Dorsey, May 2025, “Proportionality under Pressure: AI-Based Decision-Support Systems, the Reasonable Commander Standard and Human(e) Judgment in Targeting,” The Hague Centre for Strategic Studies.

Episode Transcription

Editorialized Intro

 

Thomas de Waal

Hello and welcome to Europe Inside Out. My name is Tom de Waal. I'm a senior fellow at Carnegie Europe. Today we're asking a timely and uncomfortable question: How are AI-powered technologies changing the nature of warfare? And what does that change mean for European security? We seem to be living in a new age of combat where the meaning of who is or isn't a combatant is very different. Much of that is down to artificial intelligence, the way it shapes how wars are fought, how targets are identified, how rapidly decisions are made, and crucially, how responsibility is divided between humans and machines. We see this in particular in Russia's war in Ukraine. And as Ukraine is Europe's front line, that of course poses key questions for European efforts to build strategic autonomy. In this episode, we're going to ask how Europe can navigate this AI transformation responsibly, how to develop international rules, confidence-building measures, and ethical safeguards before these technologies further reshape the battlefield. So I'm delighted to be joined by Jessica Dorsey. Jessica is assistant professor of International Law and co-director of the Realities of Algorithmic Warfare Research Platform at Utrecht University. And by our own Raluca Csernatoni. She's a fellow at Carnegie Europe, and she focuses on European security, defense innovation, and the geopolitics of AI. Both of you, welcome to the show.

Section 1: The Evolution of AI on the Battlefield

 

Let's just begin. I'm going to be the kind of amateur here who doesn't understand all the details. But let's begin with what is actually new. I mean, long-distance warfare is not new. I'm sitting in London, which was hit by long-distance aerial bombing and long-distance missiles in the Second World War, where there was no discrimination about who was hit and who wasn't. So, you know, both of you, just tell me why you think this AI revolution is so different. And indeed, if it is so different.

Raluca Csernatoni

Great. Thank you so much, Tom. A pleasure joining you and Jessica for this discussion. So the question is whether AI is really a game changer in warfare, and what is genuinely new versus what is overstated. Let me start maybe by separating what is new from what sounds new but isn't. What is genuinely new is not intelligence itself, even if the sci-fi influence we sometimes encounter in popular culture makes it seem so. For me, the stakes are where and how certain AI systems operate, not intelligence itself. AI, for instance, is no longer confined to the back offices of military planning and long-term intelligence analysis. It now, in my view, sits inside the operational loop. It helps select targets, it processes data, it prioritizes threats, it synchronizes drones, it fuses sensors, and, to use the tech jargon, it compresses decision times to seconds. So for me, what's new is this shift from supporting decisions to structuring them. And I think that this is disruptive and the change is profound. What is also new is the scale and the speed. AI allows militaries to process more data, faster, across more domains, with fewer people. Combined with cheap platforms, drones, and networked systems, this creates forms of warfare that are distributed, iterative, and adaptive, which is clearly what we see happening in Ukraine. So the battlefield, we can argue, becomes a software problem as much as a hardware problem when you introduce military AI systems. That's the way I would see the disruption happening. But what is often overstated is this idea that AI is an autonomous mastermind. Much of today's military AI is brittle and noisy. Jessica knows this very well, even better than I do, from opening the black box of these systems. Some of these systems very much depend on human workarounds. They do not understand the battlefield as humans do, and from this perspective there is a danger. The danger is not one of sci-fi superintelligence, but of automation bias, of overtrusting these systems, of technological glitches, and of decision-making shifting toward this machine tempo without sufficient human reflection and friction. So the disruptive effect is really in the human-machine relationship. It's not that machines are replacing humans, but that humans are being reorganized around machines and machine logic. And here for me, and this is maybe to provide a bit of a bridge to Jessica's work as well, are the strategic, legal, and ethical stakes. Here we need to explore further, especially, the absence of a comprehensive global governance framework for military artificial intelligence, for responsible military artificial intelligence. But I can stop here and later pick up some of the discussion related to Ukraine and what we see there on the battlefield. Thank you.

Thomas de Waal

Great. So, Jessica, what would you like to add here? Speed and scale. Anything else in particular?

Jessica Dorsey

No, I can really only underscore what Raluca very helpfully just outlined. Speed and scale are part of the game-changer rhetoric, and I think the implications of that speed and scale have not been sufficiently mapped out or understood. We're seeing a lot of this happening in very short-term acquisition processes, speeding up and innovating, and then being led by that machine speed that Raluca helpfully pointed out. I'm not sure that the nature of war is changing, but rather the character, the how. And I think you highlighted that well: where humans are found in the decision-making process is changing, and profoundly changing. And Tom, you mentioned in your introductory remarks the distancing. That's not new. That was a physical distancing, however, and it's also something we saw through the trajectory of the global war on terror. The last 20 to 25 years of warfare have been a distancing, a physical distancing. I think what's new here is that that's extended into a different type of distancing, an algorithmic distancing. We're moving ourselves as humans (commanders, operators) further away from the decision-making on the battlefield, and certainly in critical or safety-critical scenarios in which life and death is at stake. I think that's really one of the most problematic areas that needs much more deliberation. And just to support Raluca's point, I'm not sure that the danger is really about these fully autonomous systems taking over. We are just continually haunted by these sci-fi, killer-robots discussions. But I think we need to be really clear-eyed about what's happening now, not only in Ukraine, but also in other theaters like Gaza, and in the Red Sea, where the U.S. is using particular systems against the Houthis in Yemen. We're also hearing reports that these AI-driven systems are being used as recently as the strikes in Venezuela. So it's going to happen everywhere, again at speed and scale. And we need to be much more cognizant of the risks and, you know, also the benefits, with a holistic review of things in the short, medium, and long term.

Section 2: Responsibility Gaps on the Use of AI

 

Thomas de Waal

So Jessica, you're an expert in international humanitarian law. Obviously this poses huge challenges to anyone who's trying to, you know, adjudicate in warfare, which was already, I would guess, problematic in the modern age. Just talk us through what you think the key issues are, speaking as someone who studies the law of war.

Jessica Dorsey

Yeah, thanks for that. I'm cognizant that not everyone listening may be a specialist, so I don't want to get too much into the weeds about this stuff. But I do think that the principles of international humanitarian law that underpin the entire legal framework (distinction, precautions, proportionality) are all challenged fundamentally at the speed and scale we've just highlighted. Distinction means you need to make sure that you are only targeting a legitimate military target. That can be an individual, it can be an object, but it must fulfill the criteria. You cannot target civilians, for example. And that's hard enough, as you mentioned, in more conventional forms of warfare, and certainly when it's complicated by very flexible, and that's the most diplomatic way I can say it, very flexible interpretations of what the law actually allows or disallows. Right? If we have a very broad understanding of who a combatant might be, that definition becomes broader and broader. When we then start translating that into programming, into these AI systems, and this has been something really at the forefront in the conflict in Gaza, you're going to see destruction at speed and scale. And if it's something that is contrary to your legal obligations, then that's going to be problematic at machine speed. Precautions is another one that I've done quite a deep dive into. The law requires you to do everything you can to avoid, or to the maximum extent possible minimize, harm to civilians in conflict. In order to do that, you have to take all feasible precautions; this is how the law reads. Now, as you can imagine, when you've sped everything up to machine speed, taking precautions, some might argue, is no longer feasible. And I want to push back extraordinarily forcefully on this, because the whole idea behind why we have these legal frameworks in the first place is to protect those who are not participating in hostilities, not taking part in the war. And if they're the ones, as they often are, on the receiving end of this machine-mediated violence, then we have to take a hard look in the mirror: are we really being true to our norms and values, and are we really in compliance with international humanitarian law?

Thomas de Waal

Wow, that's a whole big set of issues. We already mentioned the word responsibility. If an alleged war crime is committed, Jessica, how do you apportion responsibility? Are we talking about the person who fired the weapon? I mean, can you hold an algorithm responsible for an act of war? I guess that's an issue that, as an amateur, you know, immediately strikes me.

Jessica Dorsey

I think that's a great question about where responsibility lies, and whether there really are responsibility or accountability gaps. I think it's really important, and we're starting to see convergence around this language at the United Nations. The GGE LAWS, that's one of my favorite acronyms, the Group of Governmental Experts on Lethal Autonomous Weapon Systems, has come up with a rolling text which describes that responsibility can only be borne by humans. So we cannot hold an algorithm, a computer, or a computer program responsible. The question then arises: how far upstream can we apportion that sort of accountability? But look, on a battlefield, if a commander decides to deploy a particular technology, she will be on the hook for its effects. Right? The same for an operator, if that's the choice that you make. So that's why I think all of these responsibility and accountability questions ought to move upstream, and that's a lot of the work I'm doing right now. Moving upstream means that we need to be programming these things in and thinking about them a lot earlier in the AI life cycle, all the way from design through development, deployment, testing, evaluation, and verification, through to decommissioning. Throughout the entire time, rather than just seeing that there's a system that's built or being sold, buying it off the shelf, and only at that point first asking, "Hey, can this comply with the law?" I think that's the wrong way around. Key elements that we can build into these programs include interfaces that allow the human to retain the upper hand over the machine, and ensuring that doctrine and guidance are aligned with the law, again, much further upstream. We can think about these questions through continuous training, an iterative approach, certainly for machine learning systems that will change over time by design. Those are the things that we need to be thinking through and thinking about. I would encourage these ideas and questions about responsibility and accountability to be framed much more broadly than solely legal responsibility and legal accountability. There are a number of other mechanisms that actually free lawyers from having to do all the heavy lifting. In fact, it's more prudent not to have to bring all of this to a court of law, because accountability can happen, like I said, much earlier upstream.

Thomas de Waal

Great. Raluca, I want to turn to you and talk a bit about the European angle of this: how Europeans are trying to integrate this into their defense strategy. Europe is obviously desperately trying to find its own strategic autonomy between Russia and the United States. And yet, on the other hand, there are questions about whether this is going to be state-sponsored when so much of the innovation is coming from the private sector. What do you see as the main issues here?

Raluca Csernatoni

Great question. And let me also build on Jessica's point about raising deeper political questions and democratic accountability questions. But I will start, let's say, foregrounding the European perspective by actually starting from Ukraine and what's happening there, but also, as Jessica mentioned, in Israel with algorithmic warfare. What we are seeing now is actually a decentering of traditional defense. This is what I call the blurring of lines between civilian and military boundaries, because some of these emerging and disruptive technologies like artificial intelligence are dual-use technologies that can be used for both civilian and military purposes. That also means we see a lot of civilian tech firms experimenting with AI tools and playing an increasingly critical role in military operations. And again, this raises a lot of responsibility and accountability concerns, because private companies are now becoming crucial actors on the battlefield by providing data analytics and the very AI systems that Jessica was talking about. They can also enable drone strikes, surveillance operations, and so on and so forth, without going into the jargon. Such ventures, in my view, raise a lot of concerns around the increasing militarization of artificial intelligence, and of civilian research and innovation for that matter. They also raise ethical and legal responsibility questions for the private tech sector, especially during conflict. Of course, Ukraine showcases the advantages of deploying some of these technologies: cheap, adaptable systems that can be reconfigured weekly, sometimes even daily. This iterative cycle of innovation and experimentation with the technologies, this defense innovation model that we see in Ukraine, of course favors startups, engineers, grassroots movements, and volunteer networks, and in a way disrupts traditional defense hierarchies. For me, this is really interesting to look at; it is, in a way, importing a model of agility and experimentation onto the battlefield: fast iteration, moving fast at any cost. And of course, in Ukraine the pressure is existential, let's be honest. That's the main driver for these types of innovation. But why do I say this? Because it's very important for us to understand how such private tech actors, but also civil society, become arms producers in wartime, and what some of the ethical, legal, political, and democratic concerns of this blurring of civil and military boundaries are. This also raises questions for Europe, because the Ukraine example is now empowering a new rethink of strategic autonomy and defense innovation, especially around software-first emerging and disruptive technologies like AI with dual-use potential. For me, this new logic is also a bit destabilizing, because it challenges the traditional ways of doing defense innovation as we know them. What is really fascinating to follow at the moment is what the EU calls new defense. It's a term that was introduced more recently, but conversations around it have been going on under different umbrellas, under different concepts. In a recent roadmap that the European Commission published, I will just read the title, which is very reflective of this new shift: "EU Defense Industry Transformation Roadmap: Unleashing Disruptive Innovation for Defense Readiness." And the key here is disruptive innovation, especially highlighting the role of emerging and disruptive technologies like artificial intelligence for building more strategic autonomy and technological sovereignty in Europe, and emphasizing, once again, agility, risk-taking, dual use, and innovation lessons learned from Ukraine. It also very much incorporates this civilian tech-ecosystem innovation model into defense, alongside, of course, traditional models of innovation. But the question with this new shift, and maybe we can call it a paradigmatic shift toward new defense, is: who are now the real power brokers? Who is setting agendas when it comes to defense innovation? Who controls data, software stacks, compute, and the integration of these technologies? What are some of the governance guardrails around them? Can Europe innovate fast without hollowing out democratic control and strategic autonomy at the same time? These are the tension lines that I'm identifying at the moment.

Section 3: Can Europe Play a Role in Regulating AI?

 

Thomas de Waal

I mean, I've been in Ukraine and I met people who were, you know, manufacturing drones, involved in this military tech industry. It's very much bottom-up, but it's also a one-nation, whole-of-society effort. That makes a lot of sense in the context of Ukraine. But if we're talking about Europe, we're talking about the EU-27 nations, and you do need a top-down strategic approach, one which takes into account all the military needs and also the legal and humanitarian issues as well. Do you see that happening within the EU, Raluca?

Raluca Csernatoni

Yes, in recent years we have seen more of a, let's say, "whole of Europe" approach when it comes to innovation: building a defense technological and industrial base, and building strategic autonomy and technological sovereignty around it. Some say that this is driven by big member states, of course, and certain big defense interests from the industry. Others say that it's more an agenda set by the European Commission. But looking comprehensively at recent reports, policy frameworks, and funding initiatives, such as the SAFE instrument, the Security Action for Europe, there is now a sort of crunch-time moment. It's driven, of course, by the geopolitical tensions, the rift in the transatlantic relationship, structural factors, and of course the Russian invasion of Ukraine, a lot of things that have raised awareness about this need to go at it together. And these documents also emphasize what I was saying beforehand: the strategic use of these emerging and disruptive technologies, civilian and commercial innovation, homegrown innovation in Europe, but also prioritizing, for instance, newcomers and new players, startups, and venture capital in this field, or the so-called creation of European defense tech ecosystems in key areas. Not only AI; other buzzwords are also there: quantum technologies, biotech, advanced semiconductors, and so on. And I think this is something very important to consider. There are the geopolitical anchors, but more internally the EU is now driving this defense market and industry logic. And of course, following Ukraine, there is this idea of moving fast at the moment when it comes to investments in the military sector and the defense industry in Europe. But Europe is not at war. This is something that I also want to emphasize. It's very important to build that strategic autonomy and have a vision, as you said, Tom, a top-down vision, but also a long-term vision of how the EU will be transforming as an international organization, and also of how borrowing a wartime logic, for instance, impacts this identity of the EU as an international organization. So, for me, there is a risk that importing exceptional practices becomes more and more normalized without the legal and societal safeguards. And that's why I highly agree that there needs to be further public debate around rearming Europe, without denying that the need to rearm is there, given the geopolitical context we are in at the moment.

Jessica Dorsey 

Can I come in on that? Because I can really only underscore what you say there, Raluca. I think, though, what's also important is that this offers opportunities for Europe to reimagine things in a different way, given, of course, the transatlantic rift that you mentioned, which is at the top of everyone's mind. But I think it opens doors that previously seemed closed. Out of necessity as well, Europe should now take this opportunity to question, to interrogate, our dependence on American defense tech companies and American hardware companies across the board. We need to take a long, hard look at that. Recently there's been a discussion in the Netherlands about the Joint Strike Fighter, the F-35, and its newest iteration, the collaborative combat aircraft: the idea of the loyal wingman, an autonomous system that's meant to complement the F-35, and so on. But that's also being run through these big tech companies Raluca highlights, and they are largely, if not completely, American. Right now that does not seem like the wisest direction for Europe to be going in. But these are opportunities to say, "Okay, well, we don't have to continue our relationship as it was." And I think it's important to really underscore that we used to rely very heavily on these allied relationships and friendships over the years, the last 80 years. We've learned very quickly how that can crumble right before our eyes. We need to step back with some critical distance, I think, here in Europe and say, "Okay, our allies and our partners, they are that. But we are clients of defense companies. We are not friends, we are not brothers and sisters. We are clients, and we have choices about where we contract." And those contracts, and this goes back to that idea of lawful by design, bring that upstream, with states demanding demonstrated compliance with our own legal obligations prior to procurement rather than at the very end. I think that would go a long way in extracting us from the dependence that you've sketched, and allow Europe to make quite some economic statements with the leverage I think we have here on this continent.

Thomas de Waal

We're almost at an end, but I want to close with some final thoughts and have us look ahead a bit. What I hear from both of you is, you know, all these different tradeoffs, and the tradeoff you're also talking about is a European one: between observing international law, which is something that isn't particularly in fashion at the moment, and the huge speed with which others are developing this technology. So looking forward, how do you see, both of you, Europe being able to handle all these different tradeoffs, in conclusion?

Jessica Dorsey

I liked the way you phrased that question, because it triggered in me something like a new ad campaign: "Make International Law Great Again," MILGA. That's what we're going to go for. The idea is that this robust system that we've developed over the past 80 years, yes, it's imperfect, yes, it's open to weird and poor interpretations, but it is actually quite a robust system of agreements we've made, and I think we still follow them. I think in Europe it's an opportunity to reassert the primacy of international law. I've started to see this week, particularly after the attacks in Venezuela, slowly but surely, European leaders, very timidly at first, and certainly in the context of what's happening in Greenland, saying that we need to really reassert our position here, stand strong and firm on international law, and understand that it is non-negotiable. So in terms of how Europe should manage this in the context of algorithmic warfare or AI-enabled modalities, Europe has to think, as I mentioned earlier, mid- and long-term. We need to invest in our own capacities. We need to embed those legal and ethical constraints early and often, rather than retrofitting them. And we've got to forge our own way forward. I really believe that this is doable. I think it's desirable. I would dare say that it's absolutely crucial for the survival of the European experiment. So those are really existential things at stake, while at the same time, again, there are lots of opportunities to make sure we embed our own norms and values and extract ourselves from dependence on another major player who doesn't seem so enthusiastic about those same norms and values.

Raluca Csernatoni

I will add, further, the need for this "whole of society" democratic discussion around rearmament in general, but also around the role of these disruptive technologies like artificial intelligence and military AI. And we should be cautious about some of the stories we are telling around these technologies. Because for me, whenever you talk about disruption, disruption on the battlefield, disruption of this and that, even in the commercial sector, for instance, it's also a very business-oriented and governing story. And who benefits, "cui bono," is also very important to consider here. When it comes to the future of Europe and looking ahead, it's very important to ask: who benefits? And also to bring this public debate more to the forefront, to citizens, to have this "whole of society" discussion about the new war imaginary that we are now witnessing and experiencing, and the geopolitical tensions that Jessica highlighted around different parts of the world. I suspect things will get worse rather than better. And in this respect, we don't need a return of a new Cold War, or different types of wars and scenarios. Here is where international law, democratic debates, accountability, and responsibility come into the picture, and we need to protect them. It's this notion of responsible power under geopolitical and technological pressure, and this is what the EU should become: building that strategic autonomy, identifying the critical dependencies, as Jessica mentioned, but also embedding certain values and norms that we hold dear, such as fundamental rights, due diligence, democratic processes, oversight, transparency, and debate. All of these need to be applied. Speed is important, but it should not, to bring back my first point, disrupt law, ethics, and governance. These should be designed in from the start. I highly agree with Jessica's point.

Outro

 

Thomas de Waal

Thank you so much. It's been a fascinating discussion. Thank you for guiding me and our listeners through this incredibly new, evolving, and complicated topic. And, you know, thank you Jessica, we've even got a new slogan, MILGA, which we can apply to all these issues. So, thank you so much for joining us today on Europe Inside Out.