Europe Inside Out

How AI Is Reshaping the Global Order

Episode Summary

Rym Momtaz, Sinan Ülgen, and Sam Winter-Levy examine how artificial intelligence is reshaping international politics and will define the future of the tech industry.

Episode Notes

As transatlantic tensions over technology and AI regulation intensify, emerging powers like China, Saudi Arabia, and the United Arab Emirates are seeking to assert their dominance in the tech domain. 

Rym Momtaz sat down with Sinan Ülgen and Sam Winter-Levy to discuss the dual-use nature of generative AI and large language models and how they might be misused by malign actors. 

[00:00:00] Intro, [00:01:28] Generative AI and Large Language Models, [00:11:57] The Efforts in Regulating Generative AI, [00:19:23] The Future of the Tech Sector

Sinan Ülgen, January 27, 2025, “The World According to Generative Artificial Intelligence,” Carnegie Europe.

Sinan Ülgen, August 13, 2024, “Turkey’s Instagram Spat Shows the Limits of Global Content Governance,” Financial Times.

Sam Winter-Levy, Sophia Besch, January 30, 2025, “How Will AI Export Policies Redefine U.S. Global Influence?” The World Unpacked, Carnegie Endowment for International Peace.

Sam Winter-Levy, Matt Sheehan, January 28, 2025, “Chips, China, and a Lot of Money: The Factors Driving the DeepSeek AI Turmoil,” Emissary, Carnegie Endowment for International Peace.

Sam Winter-Levy, January 24, 2025, “The United Arab Emirates’ AI Ambitions,” Center for Strategic and International Studies.

Sam Winter-Levy, January 13, 2025, “With Its Latest Rule, the U.S. Tries to Govern AI’s Global Spread,” Carnegie Endowment for International Peace.

Sam Winter-Levy, December 13, 2024, “The AI Export Dilemma: Three Competing Visions for U.S. Strategy,” Carnegie Endowment for International Peace.

Sam Winter-Levy, September 20, 2024, “Silicon Valley Hasn’t Revolutionized Warfare—Yet,” Foreign Policy.

Episode Transcription

Editorialized Intro

Rym Momtaz

Hello, and welcome to this episode of Europe Inside Out. I'm your host, Rym Momtaz, Editor-in-Chief of Strategic Europe, Carnegie Europe's blog, where twice a week we publish punchy short analysis on all things strategic in Europe.

In today's episode, we're discussing the impact of artificial intelligence and large language models on our political systems. It may sound like a wonky technical issue that should only interest geeks, but it's actually quite the contrary. It's a topic every responsible citizen in our liberal democracy should be spending more time engaging with because it could fundamentally change the societies we live in. I know that sometimes these discussions can be pretty daunting, so I'll start by saying that the good news is that the tools to handle the changes and to limit their adverse effects are accessible to almost everybody. So don't be scared off. 

Rym Momtaz

To discuss this, I'm happy to welcome two of the best thinkers around on these issues, Sinan Ülgen, senior fellow at Carnegie Europe, and Sam Winter-Levy, fellow at the Technology and International Affairs Program at the Carnegie Endowment for International Peace. 

Welcome to you both. 

Sinan Ülgen 

Hi, Rym. Pleasure to be here with you.

Sam Winter-Levy

Hey, great to be here.

Section 1: Generative AI and Large Language Models

Rym Momtaz

Both of you have written some thought-provoking and accessible reports and articles on various aspects of this topic, and we'll link to them below the episode.

It's a vast topic, of course. So today, we're going to start with how AI and language models, and of course, their inherent biases, are transforming how we acquire and interpret information, and what that means for our political systems, and perhaps the importance of increasing digital AI literacy as a way to equip the general population and preserve our liberal democratic systems. Then, of course, this is about geopolitics. We will talk about the geopolitics of it all, the growing transatlantic tension over tech and AI regulation, and also how emerging powers, whether it's China or Saudi Arabia or the UAE, are trying to gain a dominant position. 

Let me start with you, Sinan. You wrote a paper Carnegie published in January, helpfully called “The World According to Generative Artificial Intelligence.” Can you lay out for our listeners who are not steeped in this how AI and language models are transforming how each one of us is accessing information, but also the type of information that is perhaps promoted by algorithms?

Sinan Ülgen 

Yes. Thank you, Rym. Well, let me start by saying a few words about why I really wanted to draft this analysis, which was published by Carnegie, “The World According to Generative Artificial Intelligence.” I think one needs to recall the shared optimism that most of us held when we started to get acquainted with social media. You know, in the first days, when we started to use social media, it was a wonderful thing. We thought that it would be a drastically good thing for democracy as well because everybody could share their beliefs, we would be exposed to many more beliefs, deliberative democracy would flourish, and so on, and so forth.

Rym Momtaz

Are you thinking way back to MySpace and the beginnings of that, or are you thinking of the beginnings of Twitter? Just to give a sense of the timeline.

Sinan Ülgen 

More like, yeah, more like the beginnings of Twitter, when it had a societal impact. I mean, MySpace, I don't know how many of your listeners would actually recall MySpace, really.

Rym Momtaz

You're right. 

Sinan Ülgen 

But you're right. But anyway, that was the sentiment that we all had, that this was a wonderful thing, that it would give us new channels of information outside of the big behemoths of traditional media, and that's always a good thing. However, over time, we came to see a darker side of those social media platforms. The initial optimism that was embedded in all of us regarding the nature of social media started to change. That's really the reason why I wanted to analyze this onset of GenAI on our lives and on our political systems, because I think that we're at exactly the same place compared to how we looked at the genesis of social media back then. Today as well, there is a degree of optimism about what these magical tools, almost alchemistic tools, like large language models, can mean for us. I mean, they're really, in a way, quite magical. They very much facilitate so much of the work that we do.

But at the same time, I thought we need to be clear about the ways and means that this technology as well could be leveraged for less optimistic outcomes. It can be manipulated, instrumentalized by regimes that may use these platforms as a tool to spread their version of the world, if not to say disinformation. So that's basically what I wanted to test, and I think if you actually take the time to read the paper, you'll see that some of these models have their own worldviews. It's not as if there's a single truth out there and these models are able to capture that truth. They have their inherent biases, and based on those inherent biases, they respond to us through the filter of their worldviews. I think that's something very important to acknowledge at the start of this wave of GenAI.

Rym Momtaz

To clarify, when we say the various models have their biases, we're talking about how various American models each have their own, different biases. We're also obviously talking about non-American models that are now coming onto the general public market. This isn't a bias discussion against one form or another. We're talking generally, factually: these models come with inbuilt biases.

Sinan Ülgen 

Yes, absolutely. And you're right to point out that even with US technology companies, whether it's Meta or OpenAI or X, they all have their inherent biases, which are different from each other. For instance, in one of the questions, Llama answered the question as if it were the US government, whereas the question had absolutely no bearing on that. It was a general question about, if I recall, whether NATO's enlargement was a threat to Russia, and it answered as if it were the US government, whereas OpenAI's ChatGPT was more balanced. So there are those ingrained differences. And one interesting point that I want to add here is that it also matters for non-US LLMs. I used two of the Chinese LLMs; I had done this study just before DeepSeek launched. If you ask them in English, you get one answer. If you ask them in Mandarin Chinese, you get a totally different answer, which is much more closely aligned with the official version of the world according to the Communist Party.
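
[Editor's note: for listeners who want to try Sinan's bilingual probe themselves, here is a minimal sketch. It assumes an OpenAI-compatible chat API; the endpoint, API key, and model name are placeholders, not the specific models tested in the paper.]

```python
# Editor's sketch: ask a model the same question in English and in Mandarin
# and compare the answers. Endpoint, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-llm-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                          # placeholder key
)

QUESTIONS = [
    ("English", "Is NATO enlargement a threat to Russia?"),
    ("Mandarin", "北约扩大对俄罗斯构成威胁吗？"),  # the same question in Mandarin
]

for lang, question in QUESTIONS:
    reply = client.chat.completions.create(
        model="placeholder-chat-model",  # substitute the model under test
        messages=[{"role": "user", "content": question}],
    )
    print(f"--- {lang} ---")
    print(reply.choices[0].message.content)
```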

Rym Momtaz

Isn't that interesting? I think it's fascinating. I think polyglots, people who speak multiple languages, can attest to that. Obviously, that has always existed, because whichever language you use always brings with it a kind of worldview. But I feel like these models have pushed that to a level that perhaps a lot of users aren't aware of. Just to bring it a bit closer to everyday users, I have definitely noticed that ever since the change in ownership of Twitter, which became X after Elon Musk bought it, and the change in the algorithm, my feed has changed. I'm not saying it's positive or negative. Each person has their own view of how those changes happened, but there has been a change, and some content has been prioritized over other content. And that actually has an impact on people's awareness, and it has an impact on how, I guess, citizens interact with and view their own public space. That will have an impact on liberal democracies, in a democratic system where you go and you vote on the basis of the information that you have.

Sinan, what are the tools that are at our disposal in order to be better educated, better aware when we're using these tools?

Sinan Ülgen 

I think one should start by looking at what our experience has been with gathering information from the digital world until now. Because there as well, there's a trajectory. Let's recall that in the early days of the internet, we all thought that whatever was available on the internet was the right thing. Then over time, we became much more sensitive to the fact that there's a lot of disinformation out there, and therefore we shouldn't take what we see on those websites as a reflection of an objective truth.

Rym Momtaz

I would add, disinformation by private actors, but also by governments.

Sinan Ülgen 

Absolutely. I mean, it's been manipulated by governments as well, very clearly. We have developed those tools, but also the knowledge necessary to filter that information. I mean, we call that digital literacy. So one of the objectives of doing this work was essentially to flag that we need a similar degree of maturity and digital literacy as we start to engage with GenAI. Because in a way, and I see this with the younger generations, it's now even easier to fall prey to this type of disinformation.

Rym Momtaz

Why?

Sinan Ülgen 

Because before, when you Googled something, you basically got, let's say, 30 different links. Some of them may have been more relevant to what you were looking for, some may have been closer to the truth, some may have been disinformation, but you got the whole list. Whereas nowadays with GenAI, with large language models, you actually get one answer. You don't need to sift through different links, and that's so much easier. That's one of the reasons why I see the younger generation actually shifting from Google Search, or search-type tools, to direct interaction with large language models.

Rym Momtaz

But let me ask you here: isn't there a difference in the approach, which is that the generative AI search model is supposed to be the one doing the work for you? They're going to go look at, I'm simplifying, all of the different results that Google would give you, and they're going to summarize it and then synthesize it and give it to you. What's wrong with that?

Sinan Ülgen 

But do they? That's the question that needs to be asked. You may presume that they do, but they don't necessarily, because ultimately, that's not how these models work. The way these models work is that they're given a corpus. They're trained on that corpus. And whatever that corpus contains is ultimately what shapes their worldview and the answer that you'll get from these models.
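
[Editor's note: a toy illustration of Sinan's point that whatever the corpus contains shapes the answer. This tiny bigram model is obviously nothing like a large language model, and the two corpora are invented, but the same prompt produces corpus-dependent continuations.]

```python
# Editor's sketch: train two toy bigram models on two small corpora and
# sample continuations for the same prompt; the "worldview" follows the corpus.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count word-to-next-word transitions in a corpus."""
    model = defaultdict(list)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 6) -> str:
    """Sample a short continuation from the bigram model."""
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# Two hypothetical corpora with opposite framings of the same topic.
corpus_a = "nato enlargement is a defensive step nato enlargement protects members"
corpus_b = "nato enlargement is a threat to russia nato enlargement provokes tension"

random.seed(0)
for name, corpus in [("corpus A", corpus_a), ("corpus B", corpus_b)]:
    print(f"{name}: {generate(train_bigrams(corpus), 'nato')}")
```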

Section 2: The Efforts in Regulating Generative AI

Rym Momtaz

I want to bring in Sam here because you did a lot of work, actually, on the efforts to regulate generative AI, especially under the Biden administration. First of all, do you agree with how Sinan has described the situation? Then can you just let us know what has been done so far in order to regulate that space?

Sam Winter-Levy

Sure, yeah. I think Sinan laid out a great canvas of some of the risks associated with these models. I think there are other risks, too, that the Biden administration was quite focused on, and those are risks around whether or not these models could be used to design new bioweapons, for example, or be used for cyberattacks. Or we could start to see these new models, as they get increasingly powerful, be used by states to turbocharge some of their military capabilities. Alongside the concerns about disinformation, that was one set of risks that I think the US government is increasingly focused on. And as you said, the Biden administration made various moves to try to regulate these technologies, both at the domestic level, with things like the executive order it released, where it was trying to push AI companies to adopt a little bit more transparency. It had a bunch of voluntary commitments that these companies could agree to around testing and evaluating their systems. But then also at the international level, the Biden administration rolled out a series of regulations, frameworks, and export controls designed to try to limit the proliferation of these models to states that may have interests that do not align with those of the United States, to put it mildly.

In particular, the Biden administration released a series of policies aimed at trying to restrict China's access to advanced AI models, in case those models could unlock some of the capabilities the US government is worried about: turbocharging cyber capabilities, designing bioweapons, accelerating military R&D, or turbocharging human rights violations at home in the surveillance domain. The Biden administration released a series of regulations on that front as well, which I can talk about in more detail if it's helpful.

Rym Momtaz

I think what you just said is actually fascinating, because I'm willing to bet that when the majority of our listeners think about language models or generative AI or just algorithms, they think of Twitter or Grok or OpenAI. And it seems harmless. You're just asking a question, you're getting information. You've just opened up a whole other dimension, which has to do with weapons, with actually destructive things in the “real world.” I'd love for you to just explain a bit more how that connection happens.

Sam Winter-Levy

Sure. So fundamentally, these are intelligent systems for solving problems, detecting patterns, coming up with new... I mean, there's a lot of debate over whether they can come up with truly new insights, but certainly they're very good at digesting huge quantities of data and spotting patterns in it. And that's very useful in domains like, in particular, biology. Some of these systems coming out of Google DeepMind have now won a Nobel Prize for their achievements on protein folding. So these are systems that are very good at taking vast quantities of DNA data, let's say, and suggesting new combinations of DNA to generate new compounds. That has a lot of extremely positive applications that could lead to the discovery of a lot of new drugs, with great benefits for human science and for curing diseases and so on. But there's also a flip side, which is that actors who want to design new pathogens could potentially use those same capabilities. And that's just the biological domain; you can put those same capabilities to any range of tasks. We're already starting to see a lot of the AI companies using these systems to do their own coding internally.

So these are AI systems that are being used to automate the process of coding internally, and potentially, at some point, to do new R&D on designing new systems. Across all these sorts of domains, these technologies are fundamentally dual use. They have a lot of positive applications: they can be used to answer the queries you enter into a chatbot, but also to do new science, to unlock new capabilities in all sorts of domains. But they can also potentially be used to make cyberattacks more efficient, to design new biological weapons, or to accelerate scientific research in any number of domains. When states look at that dual-use element, governments often get quite concerned, because they want to take advantage of the benefits, but they don't necessarily want their adversaries to be able to take advantage of those same benefits. I think that's where the US government starts to view these as technologies that touch on core areas of national security.

Rym Momtaz

The Biden administration started this effort to regulate some parts of that space. Now the Trump administration is in power. We know that, generally speaking, the principals in the Trump administration are against tech regulation. They're against AI regulation, and they have very strong support among the tech giants that are at the forefront of the generative AI effort.

Have they continued the Biden administration regulations, or have they, first of all, taken them apart?

Sam Winter-Levy

So I think here it's important to distinguish between regulations at home, domestically, and overseas, internationally. On the domestic front, it's true that the Trump administration has walked back a lot of the Biden administration's attempts to make these companies more transparent and to force them to do some testing and safeguarding of their models before they get released. It has certainly struck a different rhetorical tone. There's much less emphasis on bias or safety; all those sorts of terms are definitely out in the Trump administration. It's all about moving forward as quickly as you can, deregulating, and trying to speed up access to energy, which is a key input to training these models. So in some key areas of domestic policy, the Trump administration is deregulating access to energy and permitting to make it easier to build data centers, and it is emphasizing the importance of racing ahead to beat China in this race to the frontier on AI.

On the international level, however, there's much more continuity between the Trump administration and the Biden administration so far. This may change, but so far the pattern holds: the Biden administration actually adopted some policies around export controls on chips to China that the first Trump administration had originally introduced, and it accelerated and widened those.

And so the Biden administration settled on a policy of trying to deny and restrict China's access to a lot of the cutting-edge AI chips that you need to train these models. And I think the Trump administration is going to continue those sorts of moves to try to restrict elements of the supply chain to slow down China's progress on AI technologies. So that's one area where there's much more continuity between the Trump administration and the Biden administration on AI policy. I'd say the last element is that in its last weeks in office, the Biden administration also released a much more ambitious plan to try to regulate AI chips worldwide, the so-called diffusion framework. The Trump administration is currently reassessing that and debating whether or not to keep it. But that framework is not just about China's narrow access to chips; it is about access to chips for a much broader group of countries all around the world, including the Middle East and Europe. Essentially every country in the world is covered by this framework. In that area, it's too soon to tell whether there'll be continuity or change between the Trump administration and the Biden administration.

Section 3: The Future of the Tech Sector

Rym Momtaz

At the AI summit that was held in Paris a couple of months ago, we saw a bit of a divergence or a very real divergence between the position that was expressed by the French President, say, and the US Vice President, JD Vance. I just wonder what's your take on that? Is there real tension there between these transatlantic allies over generative AI regulation and the future of the sector?

Sinan Ülgen 

Yes, there is. But it's not just about how to regulate AI. It's more generally about how to regulate tech. This administration in particular, with its tech buddies, seems to be much more sensitive to how Europe regulates tech, from the standpoint of EU regulation raising the costs of doing business for some of these tech companies.

Rym Momtaz

But why? Can you explain why the EU is raising the cost? I mean, it's not raising the cost just to raise the cost. Like, what's the objective?

Sinan Ülgen 

No, of course not. It's basically that the EU has a different approach to regulation compared to the US. And we've seen this previously, for instance, on data privacy with the EU's GDPR, where the regulation itself, because it is more comprehensive in nature and tries to implement a number of public policy objectives, necessarily increases the regulatory burden on some of these companies. That's number one. Number two is that there are competition concerns. The EU is generally more cautious about the competitive impact of these types of markets, which do, over time, tend to turn monopolistic or oligopolistic. And therefore, competition concerns are raised more forcefully in the EU, also because most of these companies are US companies. There's that aspect as well.

Rym Momtaz

I think that's very important, because obviously this podcast is also for listeners in the EU, and everyone now uses all of these tech applications and platforms that aren't actually produced or housed in the EU. They're mostly American. That's the dominant sector. And now the Chinese are giving, or trying to give, the Americans a run for their money. And so what you hear in the EU is that there's a sovereignty issue as well. Can we just talk about that a bit? Why is it a sovereignty issue? Isn't the US on the EU's side in that sense? Why is it so important for the EU to have sovereignty over something like generative AI?

Sinan Ülgen 

Well, essentially because it's become clear that some of these technologies can be weaponized, particularly AI, which is increasingly seen as a critical technology that is much more closely linked with the hardcore definition of power, especially at a time when geopolitical concerns have become dominant. There's both a competition aspect to this, which is more linked to economic outcomes, so if you have companies that produce these types of big AI tools, you get to keep the revenue; and also the hard-power aspect of this, which has, if you want, become much more palpable at a time of geopolitical competition. So they're both at play here.

Rym Momtaz

And I guess on a very pedestrian level, we're seeing already how the algorithms just on social media can have an impact on democratic life in the EU. If Elon Musk wants to intervene and interfere, as he has, for example, in the German election, he has very powerful tools.

Sinan Ülgen 

Or Romanian elections.

Rym Momtaz

Or Romanian elections, and he has very powerful tools. 

Sam, you wrote a very interesting piece for Foreign Affairs about the way countries like Saudi Arabia and the UAE are trying to get into this space, leveraging their current financial power to gain a better or more dominant position in the generative AI space. What's happening there?

Sam Winter-Levy

If you think about what you need to train one of these large language models, there are basically three key inputs, to simplify things a lot. One of them is a lot of energy to run these data centers. The second is a lot of money to buy chips and build data centers. It's very expensive.

The third is that you need to be able to build data centers quickly. You basically need some land where you can just build a data center without having to go through all the difficult regulations that many parts of the world have. All three of those things exist in places like Saudi Arabia and the UAE. They have a lot of energy, they have a lot of money, and they have the ability to build things quickly because they don't really need to worry about NIMBY politics or democratic opposition or regulation or anything like that. So if you're a US tech company and you're trying to stay ahead in this competition to build out data centers as quickly as you can all over the world, you might look to the Gulf states as a natural partner, potentially. They're offering you a lot of money to build big data centers there.

Rym Momtaz

Isn't that great? Because you're actually integrating other parts of the world into the system that is dominated by the US. Why should this be of concern?

Sam Winter-Levy

Potentially, yeah. You could say there are a lot of benefits to this. Other states are finding different niches in the AI supply chain to position themselves. They're going to get economic benefits. Maybe that's better than all the benefits flowing back to a handful of US tech companies on the West Coast, in Silicon Valley. On the other hand, and certainly from the US government's perspective, there are quite big risks associated with building big data centers abroad in general, but in particular in dictatorships in the Gulf that are not close allies of the US: the UAE, Saudi Arabia. They do have a long-standing defense relationship with the US, and they're economic partners. But the UAE also has close ties with China. The Saudis also have potentially close ties with China. These are not sworn NATO allies. These are not countries in a Five Eyes intelligence partnership. And so the US government looks at this and worries that if you just leave these decisions to the market, you could end up in a situation where US tech companies offshore some of the key inputs to the most important strategic technology of the coming years to a bunch of regimes that are not democracies, do not align closely with US national security priorities on a whole range of issues, and may not be reliable at all.

They may just steal the technology, or they may divert it to China. And so this is a difficult question for the US government to grapple with. On the one hand, it wants to help US tech companies build out. It wants to help them get access to Gulf capital. And it also wants to keep countries like the UAE and Saudi Arabia, these swing states, in the US technological orbit, luring them away from China. On the other hand, people make the analogy: would you build the Manhattan Project of the twenty-first century in a bunch of oil-rich Middle Eastern dictatorships? Probably not. And so this is the policy problem the US government is having to deal with when it comes to thinking about the Gulf states and their role in this AI race.

Rym Momtaz

Because the truth is, you can't exclude them. I mean, they're part of this. They have, as you said, the assets to be part of this discussion. You need to find a way to engage with them, but also to guard against some of the possible negative side effects.

Sam Winter-Levy

Yeah, that's right. To some extent, this goes back to the discussion about sovereignty in the European context as well, because to some extent you can exclude them at the moment, if you're the United States. Right now, the key input you need to train a big frontier AI model, these big cutting-edge systems, is access to cutting-edge AI chips. Essentially all of those are designed by US companies, companies like NVIDIA, and they're manufactured by TSMC in Taiwan. And so the US, along with some of its allies, actually controls key parts of this supply chain, and the US government can say, "We're not going to allow exports of these chips to the Gulf states." That's what, to some extent, the Biden administration did, and the Trump administration is currently reassessing it. It is trying to say: if you're not on board with US national security priorities, if you're not in the inner circle, then you don't get access to these cutting-edge AI chips and you won't be able to train your own frontier models. Now, there are big costs to that, because then the Gulf states might say, "Okay, well, then we'll take all our money and invest it in Chinese tech companies." Or if you're the Europeans and you get excluded, you might try to build up your own European tech companies, although that's not an overnight solution.

So, yeah, policymakers are trying to thread these trade-offs between not alienating other countries and causing them to route around the US tech ecosystem, on the one hand, and wanting to control a technology that the US thinks, and I think rightly, may have core implications for national security and economic power, on the other. These are some of the recurring questions that policymakers are going to have to grapple with as we enter this new age of AI diplomacy playing out around the world.

Rym Momtaz

And we're very much at the beginning of this process, and I think it's going to be fascinating for people like us to watch it happen and coincide with the very deep crisis that is now at the heart of NATO, an alliance that has been very strong. Because you were contrasting the relationship between, say, the US and its European NATO allies with the one between the US and Saudi Arabia or the UAE, and today there is a growing deficit of trust even between NATO allies. So it is going to be very interesting to see how all of that comes together, gels, and impacts this sector that will be so determinant for the future, I think, of our societies.

I do want to end on a positive note and go back to what Sinan was saying, to bring it back to a human level, an individual level. People who have listened to this may have learned a lot, which I hope is the case, but they may also walk away thinking, wow, there's a lot of danger built into this. But as Sinan says in his pieces, there's a lot that the individual can do, just in terms of being aware of the biases and learning how to use the tools.

We're not saying don't use the tools. We're saying use them intelligently, be aware of the biases, have basic digital literacy. Just as you learned how to use the internet, apply that to AI tools, and we'll already be on our way. Right, Sinan?

Sinan Ülgen 

Yeah, absolutely. I mean, even today we are able, if we use these tools in a more knowledgeable manner, to essentially weed out the less reliable parts of these models' answers. For instance, you can use two models at the same time and just compare what you get from the two. Or you can structure your prompt; in technical parlance, it's called prompt engineering. As you use these models, you develop the know-how to better streamline how you interact with them, to really get the response that you need. So these are some of the aspects of our interaction with these models that will surely mature over time and will definitely be in a better state going forward. Just like with all technologies, there are good ways to use these models and there are bad ways to use them, and ultimately it's up to every one of us to try to optimize our use. We do have the tools nowadays to do that.
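
[Editor's note: a minimal sketch of the two habits Sinan describes, comparing two models side by side and structuring the prompt. The model names and API key are placeholders, and both models are assumed to sit behind the same OpenAI-compatible endpoint.]

```python
# Editor's sketch: send one structured prompt to two models and compare answers.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder key

# A structured prompt ("prompt engineering"): ask for labeled, competing
# viewpoints rather than a single unqualified answer.
PROMPT = (
    "Summarize the main arguments for and against the claim that NATO "
    "enlargement threatens Russia. Label each viewpoint and note where "
    "sources disagree."
)

MODELS = ["placeholder-model-a", "placeholder-model-b"]  # two models to compare

for model in MODELS:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"=== {model} ===")
    print(reply.choices[0].message.content)
```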

Rym Momtaz

Well, I really want to thank both of you for this fascinating and I think eye-opening discussion, and I'm sure I'm going to have you on the podcast soon to continue this discussion.

Sinan Ülgen 

Thanks a lot for having me, Rym.

Sam Winter-Levy

Yeah, thank you for having me. It was a pleasure to be here.

Outro

Rym Momtaz

For those who are interested in learning more about all the ways AI could impact our lives, I encourage you to follow the work of Carnegie Europe on X and LinkedIn.

Our producer is Mattia Bagherini. Our editor is Futura D’Aprile of Europod. Sound editing by Daniel Gutierrez. Sound engineering and original music by Jeremy Bocquet.