
Paul, Weiss Waking Up With AI
Unpacking America’s AI Action Plan
In this week’s episode of “Paul, Weiss Waking Up With AI,” Katherine Forrest and Anna Gressel discuss the White House’s release of America’s AI Action Plan, exploring its focus on deregulation, infrastructure, workforce support and global leadership as the U.S. aims to “win the AI race.”
Episode Transcript
Katherine Forrest: Hello, folks, and welcome to today’s episode of “Paul, Weiss Waking Up With AI.” I’m Katherine Forrest.
Anna Gressel: And I am Anna Gressel. Today we’re going beyond the headlines, the very big headlines, to talk about the White House’s unveiling of America’s AI Action Plan that everyone is talking about: what it means, what’s driving it, and how it’s likely to reshape the business, legal and tech landscape for AI in the U.S. and really beyond.
Katherine Forrest: Wait, you skipped over the part where I get to talk about my moose mug.
Anna Gressel: Can I see it?
Katherine Forrest: And I know you’re really excited about talking about America’s AI Action Plan and I am too. And we’re going to spend a lot of time on it today.
Anna Gressel: What do you have? I have my botanical garden mug.
Katherine Forrest: You’ve done that before. You’ve done the botanical garden mug.
Anna Gressel: Yeah.
Katherine Forrest: I think our audience is familiar with that. And I’ve talked about my moose mug before, but I just wanted to mention it again.
Anna Gressel: Which means you’re in Maine?
Katherine Forrest: It means I’m still in Maine.
Anna Gressel: Okay, lovely.
Katherine Forrest: And so, working remotely up here, flying back as needed, which is really easy, super easy. But I was also following the unveiling of America’s AI Action Plan, which occurred just two days ago. And it’s really fascinating. So I am super excited about digging into it, or delving into it, or diving into it, or unpacking it, whatever our word is going to be. Let’s start by grouping it into the big-picture context and some philosophical underpinnings. As I think our audience is well aware, the action plan from the White House is a direct response, really a targeted response, that we were told was coming, to the ever more energized global AI race, in particular the race between the United States and China. And there is now an increased sense of urgency, which has been all over the Nvidia back-and-forth on the chip issues, and we’ll talk a little bit about that, but a desire to really try to place America in a position where we “win” the AI race. So that’s what we’re going to do today.
Anna Gressel: So let’s take a step back and give some context, because I think it’s important. Many of you will remember that on his third day in office, January 23rd, 2025, President Trump issued an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence.” It was, in some senses, an attempt to reset the Biden-era AI policy landscape, and it actually did rescind the Biden-era AI executive order. So that was a new direction the administration was signaling for AI, really focused on competition, global dominance and, in many ways, a national security reaction to competitiveness from potential foreign adversaries. So we were still waiting to see what would come of that. But that happened about six months ago.
Katherine Forrest: Right, so we had the rescission of the Biden EO, the executive order from October 2023, but we didn’t have the replacement yet. And what we’re going to be talking about today, really, is at least a framework for a replacement. The January 23rd, 2025 order called for a comprehensive review of all of the existing AI policies, as well as the rescission of the Biden executive order. So whatever was already in existence was supposed to be reviewed, and there was a mandate to suspend, revise or rescind anything that could slow down innovation. The focus was on removing barriers to innovation. And the message to the agencies, and these are the executive branch agencies, was: if it doesn’t help us win what was being called even then the “AI race,” it was up for review and potentially on the chopping block.
Anna Gressel: Yeah, and I think it’s quite interesting when we actually take a look at what the Trump administration came out with in this AI Action Plan. The differences are a lot more nuanced than you might expect, and there’s more continuity than there might seem on the surface between the Trump administration and Biden administration approaches to AI. Those continuity points turn on issues like the creation of standards for AI, rigorous evaluation and real, true risk-management techniques, including on things like cybersecurity, which is a growing area of interest and concern around AI agents. And although Biden’s order was heavy on things like civil rights and what we might call “safety” and responsible innovation, the Trump plan, even though it’s viewed as deregulatory, and I think that’s an important point, still calls for robust standards in areas like national security, export controls and government procurement. How it does those things is different, but that benchmark of making sure we have our arms around the risks related to AI, and that we’re really protecting mission-critical systems that involve AI, is a real continuity point between both approaches.
Katherine Forrest: That’s interesting. So I’m going to take a little bit of a different view. I think there’s some continuity, for sure. But I also think the philosophical divide between the AI Action Plan, which was released on July 23rd, and what the Biden administration was doing is very real, including in ways that don’t necessarily focus on AI itself, for instance, energy policy, land usage, federal land usage and things like that. The Trump orders, both in January, which essentially said “we’re going to look at things,” and now in July of 2025, are much more skeptical of government intervention in ways that don’t go to what I’m going to call “national security safety.” They’re much more skeptical of regulation in areas that could be tied to, for instance, a lot of the dialogue and debate on algorithmic bias, like diversity, equity and inclusion. And there’s a clear preference for market-driven solutions and a hands-off approach to a lot of AI regulation unless, again, national security is at stake. The Biden executive order was more willing to use regulatory levers to address bias, discrimination and consumer protection, and you saw that reflected in the beginnings of some of the regulation coming out of executive agencies like the Commerce Department.
Anna Gressel: Yeah, it’s such a good point. I just want to pick up on one thing you said that I think is so interesting. We’re really seeing with the Trump administration an interest in viewing AI as part of a larger tech stack. That was true to a certain extent with the Biden administration, but there was so much focus on AI models and systems in the previous executive order. Here, literally, we see places where the plan calls for a full-stack approach, whether that is defending the full stack, so making sure there’s security around physical infrastructure; investing in the full stack, so chip development and data centers; or exporting the full stack. It’s really a soup-to-nuts approach, from the physical all the way up to the digital layers of AI. And it’s one of the really different things about the Trump approach. But of course, and I think it’s worth pointing this out, there’s also really an America-first flavor and a recognition that the U.S. should at least be trying to set global standards if it wants to shape the future of AI, particularly with China and other major players deeply involved in standards discussions. And there’s a real sense of urgency around that, which is an important point in terms of how quickly things may move even after the action plan and the executive orders, which we’ll talk about. By moving fast, the administration is saying it really wants to cut red tape, double down on American values and shape the global AI ecosystem in its image. It actually wants to be not just a player in the space, which of course it always was, but the referee. And what that means is going to be a matter of much discussion looking forward.
Katherine Forrest: Yeah, and one of the things I find interesting about this, and consistent with a lot of business interests, is a concept that really permeates the action plan: that if we proliferate American AI technology globally, its adoption will position America as the winner in the AI race. I think there’s room for debate on that, but it’s part of what’s permeating the plan. So this action plan was in the works between that January 2025 initial announcement and now, here in July. The special advisor for AI came up with it, with a huge group of people working with him on it, and now it’s been delivered.
Anna Gressel: Yep. And it came along with three executive orders, which is super interesting. They’re all pretty short, for people who want to check them out. One is on accelerating federal permitting for data centers. Another is on promoting the export of American AI technology, that’s the stack we were talking about. And a third is on preventing so-called “woke AI” in federal procurement. We’ll talk about those, but let’s talk about the action plan itself first.
Katherine Forrest: All right, so let’s talk about the action plan, and it’s built on a few core ideas. Actually, when you open up the table of contents of the action plan, which I’ve got right here beside me, it’s red, white and blue if you have a color printer, it talks about pillar one, pillar two, pillar three, so it lays it all out for you. But in this action plan, deregulation is really front and center, and there’s a strong undercurrent, as you’ve already said, of America first and the idea that AI should reflect American values. And then immediately after the concepts of deregulation and furthering American technology come support for American workers and then serving American interests.
Anna Gressel: It’s super interesting. I mean, I think we talk a lot about AI and whether it really represents kind of another industrial revolution. And this does have an ambition that is on that scale, which is just a very interesting thing to see and note. It’s not only about winning the global AI race, but also about the federal government clearing the runway for private sector innovation, including deregulation. Actually, I think, you know, it’s so funny that there’s really even this quote in there that’s like, “build, build, build.” I mean, it’s just—that is what the tone and the tenor of this entire action plan is about. So I think we can talk about those three pillars, Katherine. Should we break it down?
Katherine Forrest: Yeah. So like I said, when you open it up to the table of contents, you’ve got pillar one, and that says “accelerate innovation” in nice big bold letters. The portion of the report reflected in that section calls for slashing red tape and rolling back regulations that might slow down AI development. By the way, they don’t specify what those regulations are. They say that if there are regulations slowing down AI development, then they have to be, I’ve forgotten exactly what the words are, but essentially eliminated or worked around, things of that nature. They’re also suggesting there can be a strong review of state AI laws, and that the lever would be a tie between federal funding and state AI laws. So if there are state AI laws interfering with some of what the plan is laying out, federal funding could be used as a lever against the states to potentially cause those laws to roll back, while at the same time there’s a statement that there should be respect for states’ rights. So the first pillar is really trying to figure out what the burdensome AI regulations are at the state and federal levels, without specifying them, and have those essentially pushed to the side to allow for the kind of innovation the plan wants. And that’s a big deal, because the federal government is signaling a willingness to tie federal funding, as I’ve mentioned, to regulatory clearance of the runway and to get everybody on board. So it’s a super interesting first pillar, but it’s actually consistent with what the Trump administration has been doing with federal funding in other areas.
Anna Gressel: Yeah, and there are two other points worth picking up on in pillar one. One is about protecting free speech in AI systems and really making sure that government-procured AI is objective and free from top-down ideological bias. We’ll talk about that a little more later, when we get to the executive order around “woke AI.” The second is this push for open-source and open-weight models, which is really fascinating. The plan wants to make the environment easier for startups and academics to use large-scale compute and really delve into the space around open models, because that can decrease costs and scale potential avenues for innovation. Of course, there are some risks around that, and we’ve done prior episodes on open source that are probably worth going back to if you’re interested in how people frame the pro-con debate. But this is a huge win. For people who have been proponents of open-source AI systems, pushing for that recognition of the value of open source, this is a big get. And it’s particularly unsurprising, in my view, in light of China’s gains around open-source models like DeepSeek. The U.S. is saying, “we’re going to play on the same landscape, we’re going to play in the same territory as China around open source.” That’s my own view, but a lot of people have been commenting on its inclusion in the action plan.
Katherine Forrest: Yeah, open weights in particular are really the keys to the castle for many of these models. They reveal a fair amount about how a model has been trained and what the initial hyperparameters were, and they’re considered the crown jewels. So advocating here, in this first pillar, for open source and open weights is quite something. And as we know from the prior episode or two you just mentioned, Anna, there had been a time under the Biden administration when I at least thought open source was going to see some retrenchment and additional regulation, but this is actually the opposite. There’s also another piece of this, which is the workforce angle, which we’ve talked about as an immediate pivot to the American worker. The plan mentions America’s workers a lot. It talks about workers first, and a lot about AI literacy, rapid retraining and tax incentives for employers to upskill workers. It does mention displacement, but it suggests there will be other jobs to go to. Now, this is something people debate, and no one really knows the answer. So I think it’s important that there’s a commitment to trying to upskill and to provide tax incentives for retraining. But what the jobs are going to be, I think, is still up in the air, because AI is really going to be in the business of job elimination as opposed to job creation, and we don’t yet know what new market opportunities might come into existence. As I’ve said in prior episodes, who knew that the world of apps was going to become an entire industry when the iPhone was introduced in 2007? And that’s now a “gazillion” dollar industry. So we don’t yet know, but it’s a big part.
And you can imagine it’s got both political aspects and real aspects of protecting the American worker, or at least trying to suggest that the American worker shouldn’t be so worried by this winning the AI race.
Anna Gressel: Yeah, definitely. I mean, right around the corner we have a world of AI agents coming, and people are thinking deeply about the workforce implications around those. So it’ll be interesting to see what comes out of this plan and how it gets framed up by the relevant agencies. But let’s pivot to the second pillar, which is building American infrastructure. This is where the plan gets concrete, like very, very concrete, because the U.S. articulates a need for more data centers, more semiconductor chip fabs and a bigger, more resilient energy grid. That’s no surprise whatsoever, but it’s very, very interesting that this is a continued part of the plan. In many ways, I think it’s needed to realize a lot of the ambitions announced at the beginning of President Trump’s term, including the Stargate project and other projects on data center infrastructure and energy infrastructure to power AI systems. So what is in this plan? First, it calls for streamlining permitting. And this was the quote I actually think I messed up earlier, “build, baby, build.” I mean, that’s literally the mentality and the…
Katherine Forrest: I think you were like “build, build, build,” but it is “build, baby, build.”
Anna Gressel: Yeah, I think I laughed out loud when I read it the first time. But it really does showcase the energy around this. So what does that mean? It’s really about accelerating federal permitting for data centers, and there’s a whole companion executive order intended to support that. So this is potentially going to be a game changer for anyone involved in building the physical backbone of AI systems. What it does is create a fast lane for permitting massive data centers, and, for the non-data center folks among us, these are projects with over 100 megawatts of new load. So that’s potentially a lot of data centers powering AI training and inference.
Katherine Forrest: Right, and I want to step back for one second and talk again about why infrastructure is so important to AI. As we know, AI requires a lot of computing power, right? It requires the ability to send a query to the model, which is hosted in the cloud, into a processing center where it’s massively processed. The models are also processing literally billions of queries all the time, and there’s a lot of parallel processing. There’s a recent example of a data center being built that’s the size of Manhattan. I mean, we’re talking about huge data centers with lots and lots of servers. So you need that infrastructure, and then you need the power to actually run it, which can come from wind farms, from solar, from fossil fuels. So you’ve got this need for both power and space. The data centers are like big warehouses, and they also have to be cooled, by the way, so they take a lot of water. There’s a real hard-asset component to these things. And the message the AI plan pushes is that America’s environmental and land-use laws are too slow for the AI era, given that you need these huge, massive centers with lots of power needs and lots of water needs. So the administration is creating a categorical exclusion under the National Environmental Policy Act, which is called NEPA. It’s expanding the process for trying to get various kinds of permitting through, and it’s making a number of federal lands, yet to be determined, we don’t know which ones, available for data center construction. There’s a whole physical infrastructure piece here, which frankly will also create some American jobs, but that’s part of this.
Anna Gressel: And I think, you know, it’s so interesting because there’s really a big shift that’s also being signaled on the horizon, and folks who kind of live and breathe environmental law will be much deeper in the weeds on this. But the plan contemplates a nationwide Clean Water Act permit for data centers. And that may mean potentially bypassing what would be a usual site-by-site review. And there’s also a really, really interesting directive to keep adversarial technology out of the AI infrastructure stack that may require all different kinds of security controls and parameters and audits. So there’s a lot packed in here that is going to be about really not only creating the data centers, but upgrading them and making sure that they meet the standards and requirements to think about this as kind of a national security-level technology, which I think there’s a recognition that it is.
Katherine Forrest: Right. And I think part of the message here is that there’s a lot left unspecified. There are a lot of references to national lands; we don’t know what those are. There’s a lot of reference to pushing aside certain kinds of regulations; we don’t know exactly what will get pushed aside. There’s a lot of reference to state laws that may interfere with this plan, and the lever of federal funding, but we don’t yet know what those are. So there’s a lot going on here, much of it unstated. But we don’t want to miss the grid modernization piece, because a lot of people, completely separate and apart from the AI dialogue, have been talking about the need for grid modernization in America. The plan calls for stabilizing the current grid and really optimizing existing resources, adding new power sources and new infrastructure, in order to make the American power grid top notch, because you’ve got to have a top-notch grid to handle the power that has to flow to and from these huge data centers we’ve been talking about. So there are massive legal and regulatory implications here, involving the Federal Energy Regulatory Commission, which is called FERC, and a variety of state utility commissions. There are many, many layers of regulations that would otherwise be implicated, and we’re going to have to see how those get navigated with this new plan.
Anna Gressel: And then there’s the semiconductor angle. We’ve talked a lot about semiconductors, but there’s a lot more to say. The plan wants to restore American semiconductor manufacturing, with a focus on ROI for taxpayers and on removing, and this is a quote, “extraneous policy requirements” from CHIPS Act funding. That may be code for rolling back things like DEI mandates and climate requirements that currently form at least a significant part of grant programs under the CHIPS Act. So there’s this whole emphasis on bringing semiconductor manufacturing back to the U.S. That’s certainly not new, but they’re beginning to talk about how it might change under the Trump administration.
Katherine Forrest: Well, we don’t want to forget about cybersecurity. The plan calls for a variety of things on information sharing and analysis, and it also talks about the need to shore up the cybersecurity vulnerabilities that AI can open up. So there’s a reference to that. But again, how all of these things fit together is not yet clear. Let’s go on to pillar three.
Anna Gressel: Yeah, this is where it gets geopolitical. The U.S. wants to export its entire AI stack: hardware, software, standards, and governance models to allies and partners. And the idea is to create a global AI alliance that can counterbalance China and other rivals.
Katherine Forrest: Right, and as part of this, and it’s really interesting if you go to the actual AI plan itself and look at pillar three, which is called “Lead in International AI Diplomacy and Security,” there’s an executive order referred to as “preventing woke AI.” This one is already making waves, almost immediately. It requires federal agencies to contract for large language models that are “truth seeking” and “ideologically neutral,” and it explicitly bars models that incorporate DEI concepts, critical race theory, or what the order calls “partisan or ideological judgments.” So that one’s already started a whole debate.
Anna Gressel: Yeah, and it’s interesting because of how hard it actually is to control what those models produce. Like, how is this going to be implemented in practice? I think it’s a really interesting question. Another interesting point is that the plan specifically calls on the FTC to review all of its investigations from the prior administration to ensure they don’t unduly burden AI. That’s so fascinating. The administration is signaling that the FTC’s mandate should be pro-innovation, not a roadblock. But it’s interesting to think about: whether you’re a company under a consent decree or facing an investigation, there may be a shift in how the FTC approaches AI-related cases.
Katherine Forrest: Absolutely, absolutely. And there’s a similar directive to the OMB to work with all of the federal agencies to identify and repeal any regulations that unnecessarily hinder AI development and deployment. That really harkens back to the first pillar we talked about. But then there’s this whole export control piece. After the Commerce Department rescinded the AI Diffusion Rule, which we’ve talked about on prior episodes, the plan here attempts to pivot to a much more aggressive and targeted export control regime. It seeks to plug loopholes and to use tools like the Foreign Direct Product Rule and different kinds of tariffs, secondary tariffs, to align allies with U.S. restrictions. They’re looking at real-time, location-based chip tracking to prevent diversion to adversaries, and they’re putting the onus on chip companies and other chip exporters, which makes for a much more complicated compliance environment, but they also want American chips to become, and to stay, the preeminent chips for advanced AI compute.
Anna Gressel: There’s also a big push for what we might call secure-by-design AI in critical infrastructure, as well as new standards for AI incident response. These are really important no matter where you sit in the tech stack. Interestingly, the plan calls for the establishment of an AI information sharing and analysis center, what they’re calling an AI-ISAC, led by DHS, the Department of Homeland Security, to promote the sharing of AI security threat information and intelligence, particularly across U.S. critical infrastructure sectors. Just to clarify for folks who don’t live and breathe cyber, this is a mechanism for improving cyber preparedness through vulnerability-sharing that people have been advocating for several years, because of how hard cybersecurity is in the AI space and how novel some of the threats are. So it’s quite interesting, though not entirely surprising, that it’s now incorporated here. The plan also calls for regulatory sandboxes for AI in sectors like healthcare and finance, which could create safe harbors for experimentation. Very, very interesting. They’re really looking not just at developers but also at deployers across critical infrastructure, and again, healthcare and finance can be within that scope.
Katherine Forrest: Right, and I want to mention just a couple of last things. One is this whole piece at the end about deepfakes, and, you know, it’s a little unclear. It’s a bit different from, and adjacent to, what the rest of the AI Action Plan is about, which is largely the lack of regulation; this piece suggests there needs to be some sort of regulation of deepfakes. But let’s put that aside, and you should really take a look at it. I wanted to mention two other things that need to get named as part of our review. One is a piece that is, again, unspecified, but I’m sure there’s a lot of dialogue on it that we’re unaware of, about improving the financial market for compute. There’s a section on page four of the plan about that, so I think we’re going to see some changes there. And the last thing I wanted to mention, and this is on page five, on AI adoption, is the goal of establishing a “try-first” culture for AI across American industry. We’re not quite sure what that means, but it’s definitely intended, I think, to take away some of the deep risk aversion and let American companies try AI adoption without being scared away. So that’s where I wanted to end from my side. What are your last favorite pieces?
Anna Gressel: For me, what’s so interesting is, as I was talking about earlier, this concept of what it means to export the entire AI tech stack. It’s really understanding that it’s not just the models themselves: the physical components of these systems and the energy components are all going to have to work together to function. The government is basically saying, we’re looking globally. This isn’t just about one other jurisdiction. It’s about global dominance and a global network effect that may come into being if the U.S. has a hand in data centers in jurisdictions worldwide. It’s almost like a port system: you can’t just have one important port, you have to have multiple ports to create shipping dominance. That’s the mentality they’re approaching this with. And it’s super fascinating to think about that and about who their partners are going to be. So to my mind, it’s really the geopolitical overlay here that jumps off the page, and it’s something we’ll focus on in the future. And of course, we always want our listeners to tell us what they want to hear about, so it’s an open invitation to write us if you want to shape the future direction of our episodes.
Katherine Forrest: Absolutely. So it’s a 23-page, start-to-finish AI Action Plan released by the Trump administration on July 23rd. And there’s a lot that’s not in those 23 pages that we’re going to be talking about, I’m sure, in terms of how all of this comes into being. But that’s what we’ve got time for today. And it’s even a longer episode than normal, more than a dog walk. Have you been walking your dog? I hope your dog has had water along the way, because this was a long dog-walk episode.
Anna Gressel: Well, at least she didn’t chime in. She knew she’d make it longer, so...
Katherine Forrest: Your dog walk, right? Yeah, yeah, yeah. Your dog is usually holed up in whatever studio you’re recording in, waiting to go on her dog walk.
Anna Gressel: Mm-hmm.
Katherine Forrest: All right. So we’ll see everybody next time. I’m Katherine Forrest.
Anna Gressel: And I’m Anna Gressel, like and subscribe if you’re enjoying the podcast.