Paul, Weiss Waking Up With AI

Spring 2025 – Navigating Evidentiary Challenges in the Age of AI

This week, hosts Katherine Forrest and Anna Gressel tackle the evolving landscape of evidentiary issues raised by artificial intelligence in litigation. From the admissibility of AI-generated images and chatbot records to the qualifications of AI experts and the nuances of hearsay, they break down the complex, fact-intensive questions facing courts and practitioners today.

Episode Transcript

Katherine Forrest: Hello, and welcome to today’s episode of “Paul, Weiss Waking Up With AI.” I am Katherine Forrest.

Anna Gressel: And I’m Anna Gressel.

Katherine Forrest: Anna, I know you’re about to go off to Abu Dhabi again. In fact, we may even be airing this when you are in Abu Dhabi—the place that I think you go to almost more than any other, maybe even more than the Upper West Side. Have you been to Abu Dhabi more often than you’ve been to the Upper West Side in the last six months? Answer honestly.

Anna Gressel: No, that’s false, because my dog loves Central Park, so the Upper West Side is in our normal routine. She loves to chase squirrels. We take her, and we indulge her huge love of squirrels on the weekends.

Katherine Forrest: Do you know how much PTSD your dog is responsible for in the squirrel population?

Anna Gressel: Not any more than every other dog; they're all out there chasing in parallel together.

Katherine Forrest: These poor squirrels. Anyway, I’m excited about this episode that we’re going to be doing in our current time zone, which is the same time zone, because while we’ve done some really technical podcasts recently, we’re going back to our roots for a brief moment today to talk again about AI and evidentiary issues. We’ve done this once before, but we’re going to take on some different evidentiary issues today and do some issue spotting for topics we won’t even fully go into today.

Anna Gressel: This is great. I almost feel like we should be offering CLE credit for it, except we don’t have a way of tracking all of our listeners across platforms. But it’s a really interesting topic, and it’s one that gets a lot of attention at legal conferences today. I know you have a lot to say about it. I think it’s super interesting. We talk about it with judges. It’s just a huge area. So why don’t I turn it over to you to get us back into this topic? It’s been a while since we talked about it.

Katherine Forrest: It is, and it’s funny because as a former judge, it’s one of the things I get asked about a lot. Of course, I was off the bench at the time that these evidentiary issues really started to become front and center for judges. So my knowledge of the AI evidentiary issues is from my time now back as a practitioner. Let me start by putting out a handful of AI evidentiary issues that come to mind, and then we can kick it off that way. You can throw in a few, and then we’ll pick a few to jump into in more depth.

There are all kinds of ways that AI can be used to create realistic photos or videos. We’ve talked about that in prior episodes. They can be used to create demonstratives, for instance, accident reenactments, crime scene reenactments, or even how an invention works. Whether those should be allowed or not can create a whole host of issues, some of which are 403 issues—meaning, are they unduly prejudicial, among other things. The word people have to focus on there is the word “unduly.”

Then there are other evidentiary issues for AI that I actually think are super interesting: hearsay issues. For instance, when a chatbot responds to a query and there is then a digital record of that response, if a lawsuit follows and that digital record still exists, it might be requested in discovery, and then someone might seek to admit it for certain purposes. The question is, is it being offered for a hearsay or a non-hearsay purpose? What are the issues there? And then there are testimonial issues, which we surely will not get to in depth today, but things like a chatbot, or a combination of chatbots, that can actually act as a person and have a conversation. You can then have issues relating to, for instance, the Confrontation Clause, depending on how those are used.