
Podcasts
Paul, Weiss Waking Up With AI
Regulatory Divergence in AI
In this week's episode, Katherine Forrest unpacks the rapidly widening gap between the U.S., the EU and other international AI regimes, and offers practical compliance pointers for companies deploying AI across borders.
Episode Transcript
Katherine Forrest: Hello, I'm Katherine Forrest and welcome to today's episode of “Paul, Weiss Waking Up With AI.” I'm here solo, yes solo. I know it won't be as much fun as when Anna's here, but bear with me: she's off hosting a couple of AI round tables in Abu Dhabi, and I'm sure she'll tell us all about them when she's back. The next two episodes will be without her, and then she'll be back, so for now you're stuck with me. And what that means to me personally, apart from just missing Anna's company, because she's such a great person to have as a co-host, is that I get to choose to talk about whatever I want.
And so I'm going to focus on what can best be called a growing international regulatory divergence in the AI area. As those who have heard me speak know, both publicly all over the place and even on this podcast, I've been talking for a while about the possibility that the federal government would one day want to preempt a lot of the state laws that are all over the place on AI. Not that I was favoring federal preemption one way or the other; it will be what it will be. But the fact is that there's a lot of different state-level AI regulation, and a federal government whose expressed interests were not completely reconciled with all of those state laws under the Biden administration, and are even less reconciled with them under the Trump administration. So we've got these diverging sets of interests.
And so what we want to do today is talk about that, and about how U.S. companies are going to be confronted with these differing regulatory regimes, not only in the United States and the EU, but in other places like Brazil and South Korea. What really raised this topic for me was something I'm sure you've all heard about: the “One Big Beautiful Bill,” the 2025 federal bill that's winding its way through both the House and the Senate. That bill contains some very particular AI pronouncements that we'll talk about today.
So let's start with the EU AI Act, which we've talked about before. We actually did an episode on the EU AI Act last August, when it started to be implemented, and I encourage those of you who haven't heard that episode to go back and listen to it. Although, as always, keep in mind that every episode we do can become somewhat dated, because things change. But that episode gives an overview of the rollout and the timeline of some of the provisions. What I want to talk about today, though, is not only the EU AI Act but also South Korea, Brazil, the G7 and the U.K., and the broader fact of regulatory divergence. So let me give a quick refresher on where we are right now with the EU AI Act. First, we know this Act is the most sweeping AI regulatory regime worldwide, out in front of basically every other regulatory regime in the world right now. And that was part of the EU's intent in getting it going early, debating it so long, and getting it passed.
And so it's out of the planning stages and into the implementation stage, although I would say there's a lot of detail getting worked out during implementation. There's also been guidance that has already come out on, for instance, how companies are to define what an AI system is or what constitutes a prohibited practice. So if you're curious about those things or need to know them, there is now guidance out there. As many of you familiar with the EU AI Act know, it takes what's called a risk-based approach to AI regulation. That means there's a primary focus on the use cases to which AI systems will be put, and it breaks those use cases down into categories of practices at different risk levels: the top two are prohibited practices and high-risk systems, and so on down from there. The portion of the Act relating to prohibited practices has already gone into effect; it went into effect in February of 2025. And the Act has extraterritorial reach: it can apply to you whether you're a deployer or a producer of an AI system. So what are some prohibited practices? They include AI systems that deploy what the Act calls subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behavior of a person or group.
Another prohibited use is putting onto the market an AI system that purposefully takes advantage of a person's vulnerabilities, such as age, disability or a specific socioeconomic situation, again with the objective of distorting behavior. So you really need to go back to the Act to get the whole list; it can be quite complicated to figure out all of the nuances of the various prohibited practices. The high-risk category is different. Those are AI systems that pose a significant risk of harm to health, safety or fundamental rights, including products that require conformity assessments under other EU laws, which could include things like cars, medical devices and elevators. It also includes AI systems used in critical infrastructure, in education and in law enforcement. So you've really got to go to the Act to study what the use cases are and what categories they fall into.
So what happens with prohibited systems is simple: they're prohibited. With high-risk systems, there's a series of requirements, including registration requirements, human oversight and transparency requirements, et cetera. Now, the timeline for the EU AI Act runs from now, as I said, some provisions have already gone into effect, out to 2030. So it has a really long rollout, there's going to be a lot happening between now and 2030, and there are also going to be a lot of changes in the technical environment, so it's still TBD how we're all going to adjust to that. There was a time when the United States was in the process of developing its own regulatory regime that was going to have some similarities to the EU AI Act, not in how it was structured, but in some of the basic principles that were going to guide the rulemaking.
That all arose from the Biden executive order, which was then withdrawn right at the very beginning of the Trump administration. And the Trump administration articulated the view that regulation in the AI area was at the very least premature, and that there needed to be study to ensure American competitiveness in AI. So that brings us to what's happening in the United States today, and what's on the horizon elsewhere.
We know that many companies right now are not only incorporated and doing business in the United States, but also doing business overseas. In the United States, companies doing business nationally can be required to comply with a variety of AI laws in place all over the country. That can include the Colorado AI law, which is actually a law now, not just a bill, numerous laws in California, again, laws, not just bills, and a bunch of activity in Texas. And many, many states now have algorithmic bias laws. Companies are trying to grapple with all of this state action, with policy groups following regulatory developments at the state level and mapping their practices against all of these different state laws. And most recently, in the face of all of that, we have this new thing called the “One Big Beautiful Bill.” As I mentioned before, that's the federal budget bill, and there's a provision in it that would impose a 10-year moratorium on all enforcement of state AI laws. So it's a moratorium, not preemption: a 10-year period during which these state AI laws could not be enforced.
This portion of the bill, along with the entire federal budget bill, is at a relatively early stage. It passed the House just last week, but it now has to go on to the Senate, where it can be revised and sent back to the House. If it passes the Senate with that moratorium provision in it and gets signed into law, then the moratorium would go into effect. The provision does also require the adoption of some commercial AI by the federal government. And, I don't know whether it would be challenged, I don't have a view on that, but it would essentially stop, or intends to stop, enforcement efforts at the state level for a lot of the algorithmic bias laws, the biometric laws, the Colorado law, the California laws, the Texas activity. It would put a stop to all of that enforcement. That's the intent. And support for this budget provision is actually pretty broad: a lot of tech companies are supportive of it, as are business groups and the U.S. Chamber of Commerce. But there are also a number of opponents who think there does need to be regulation. So we don't yet know what's going to happen as this thing fights its way through the Senate. We'll see.
But we could end up in a place where U.S. state laws get put to the side and we have a light-touch U.S. regulatory regime, alongside a much harder-touch EU AI Act, which reaches U.S. companies if they're putting something into the EU market as a producer or a deployer, or if their output is used in the EU, all of the ways in which extraterritorial jurisdiction can come into effect. Companies will have to navigate those two very different systems. And the EU AI Act is not the only international law with extraterritorial effect. The AI acts now in South Korea and Brazil have extraterritorial reach too, each with its own bells and whistles. So we've got these different regimes growing up around the world.
So how does the compliance department of a U.S. company doing international business grapple with all of this? I'm going to give you a couple of practical pointers, which many of you will say you already have in place, and others may find interesting. First, you've got to have somebody who's looking not just at the state regimes but really hard at what's happening internationally, and not just in Europe, because there's now a lot happening, as I've said, in South Korea, Brazil, the G7, the U.K. and India. There's a lot happening all over the place, and the international environment really needs to be monitored. Second, map your model, or map your product, if what's going into another jurisdiction is a product or a device rather than a model. You need to know what you're putting out there in terms of the type of model and the reasonably foreseeable use cases. What's reasonably foreseeable? What are the use cases to which your tool or your model or your system can be put?
So I'm going to call it map your model, but consider it also to be map your tool. You need to understand what's reasonably foreseeable: if you put something out there internationally for one use, but it's a reasonably foreseeable consequence that it will be put to a different use as well, you need to take that into consideration. And if you're a deployer of a model, you should also understand how that model is being deployed within your firm or within any subsidiaries. The next pointer is documentation. There are lots of documentation requirements across the different regulatory regimes. If you put together what I call an evidence book, one evidence book for each of your models or tools and use cases, you can probably cover the requirements for the jurisdictions that have them. What you'd want to do is take the most comprehensive set of requirements and put it all into the single evidence book, so that the compilation gives you what you need for the various jurisdictions. The next pointer is to make sure you've reviewed the various transparency obligations, where you may need to tell the public, or others even within an organization, that AI is being used. And lastly, conform any contracts you may have to the AI usage requirements within a particular country, because you may have certain indemnification obligations or opportunities, and you want to know about those. All right, that was really a lot. The takeaway from all of this is that there's a lot going on all over the world right now.
Even the United States has a changing environment where we don't really know what's going to happen, but we could see an awful lot of change domestically in the next six or so months. And certainly we're already seeing it come to fruition internationally.
Okay, well, that's all we've got time for today. I'm Katherine Forrest. I hope you enjoyed this episode. And if you did, please subscribe wherever you get your podcast episodes. Thanks.