
Podcasts
Paul, Weiss Waking Up With AI
Spring 2025 – Navigating Evidentiary Challenges in the Age of AI
This week, hosts Katherine Forrest and Anna Gressel tackle the evolving landscape of evidentiary issues raised by artificial intelligence in litigation. From the admissibility of AI-generated images and chatbot records to the qualifications of AI experts and the nuances of hearsay, they break down the complex, fact-intensive questions facing courts and practitioners today.
Episode Transcript
Katherine Forrest: Hello, and welcome to today’s episode of “Paul, Weiss Waking Up With AI.” I am Katherine Forrest.
Anna Gressel: And I’m Anna Gressel.
Katherine Forrest: Anna, I know you’re about to go off to Abu Dhabi again. In fact, we may even be airing this when you are in Abu Dhabi—the place that I think you go to almost more than any other, maybe even more than the Upper West Side. Have you been to Abu Dhabi more often than you’ve been to the Upper West Side in the last six months? Answer honestly.
Anna Gressel: No, that’s false, because my dog loves Central Park, so the Upper West Side is in our normal routine. She loves to chase squirrels. We take her, and we indulge her huge love of squirrels on the weekends.
Katherine Forrest: Do you know how much PTSD your dog is responsible for in the squirrel population?
Anna Gressel: Not any more than like every other dog, which like, they’re all like parallel chasing together.
Katherine Forrest: These poor squirrels. Anyway, I’m excited about this episode that we’re going to be doing in our current time zone—which, for once, is the same time zone—because while we’ve done some really technical podcasts recently, we’re going back to our roots for a brief moment today to talk again about AI and evidentiary issues. We’ve done this once before, but we’re going to take on some different evidentiary issues today and do some issue spotting for topics we won’t even fully go into today.
Anna Gressel: This is great. I almost feel like we should be offering CLE credit for it, except we don’t have a way of tracking all of our listeners across platforms. But it’s a really interesting topic, and it’s one that gets a lot of attention at legal conferences today. I know you have a lot to say about it. I think it’s super interesting. We talk about it with judges. It’s just a huge area. So why don’t I turn it over to you to get us back into this topic? It’s been a while since we talked about it.
Katherine Forrest: It is, and it’s funny because as a former judge, it’s one of the things I get asked about a lot. Of course, I was off the bench at the time that these evidentiary issues really started to become front and center for judges. So my knowledge of the AI evidentiary issues is from my time now back as a practitioner. Let me start by putting out a group of certain AI evidentiary issues that come to mind, and then we can kick it off that way. You can throw in a few, and then we’ll pick a few to jump into in more depth.
There are all kinds of ways that AI can be used to create realistic photos or videos. We’ve talked about that in prior episodes. They can be used to create demonstratives, for instance, accident reenactments, crime scene reenactments, or even how an invention works. Whether those should be allowed or not can create a whole host of issues, some of which are Rule 403 issues—meaning, are they unduly prejudicial, among other things. The word people have to focus on there is the word “unduly.”
Then there are other evidentiary issues for AI, which I actually think are super interesting, which are hearsay issues. For instance, when a chatbot responds to a query and there is then a digital record of that response, if there’s a lawsuit that follows and that digital record still exists, it might be requested in discovery and then someone might seek to admit it for certain purposes. The question is, is it for a hearsay or non-hearsay purpose? What are the issues there? And then there are testimonial issues, which we surely will not get to in depth today, but things like a chatbot or a combination of chatbots that can actually act as a person and have a conversation. You can then have issues relating to, for instance, the Confrontation Clause, depending on how those are used.
Anna Gressel: I have a few more. What if a chatbot is hosted by a third party and someone interacts with it and then believes they’ve been harmed by the response? Can the information that the chatbot gave be requested as a document in litigation? And what about experts? I love experts; I love this topic. There are a few issues that come up, among many potential issues. There are all kinds of ways in which courts might find expert testimony useful in disputes relating to AI, but who is qualified to be such an expert, particularly in this world in which the technology changes so frequently? And separately, can the expert use AI to assist them in writing a report? That’s a pretty live issue these days.
Katherine Forrest: That’s a live issue in the fourth grade, in college, in law school, and it’s a live issue for experts. All right, let’s pick these off. Let’s just take a couple of them for today. You want to start?
Anna Gressel: Yeah, I’ll start with that last one just because we’re on the topic. Let’s talk about experts. Rule 702 of the Federal Rules of Evidence and case law, including the famous Daubert case and many others in every jurisdiction, govern the admissibility of expert testimony. The two scenarios I set out really have the same basic points in play. First, experts have to be qualified in the field in which they’re seeking to provide testimony and opinions helpful to the trier of fact.
Second, the testimony also has to be reliable. It’s true that there is a call for all kinds of AI expertise now in a variety of cases—some pertaining deeply to major questions about AI, others in which AI is less of a central factor but still important. Those cases might involve how models work, how they learn, what they’ve been taught, how they arrive at answers and even how users use the AI to do all of those things. The role of the individual user can be really important.
Katherine Forrest: Right. I think our audience may be familiar with some of the cases in the AI area today that are out there, and there are a number of them that are employing experts.
Anna Gressel: Definitely. The key is not that there has to be a long history of a particular type of testimony, but whether a person offered as an expert has the right qualifications. For instance, if it’s a case in which an expert on how a model is trained would be useful, then someone with firsthand experience in model training, or someone who has studied it and written about it—perhaps an academic—could both be among the kinds of people who might have the right expertise for that case.
Katherine Forrest: Right, there’s not one kind of expertise that an individual has to have. Whatever the expertise is—academic, research or hands-on—the real question is whether, in connection with the proffered testimony, the gatekeeper to the evidentiary proceeding, which is the judge, finds that the information would be useful either to the judge as the trier of fact or to the jury as the trier of fact. That’s really based on whether or not this individual can speak knowledgeably in the area and is able to ground his or her statements in ascertainable literature or experience or some basis other than just ipse dixit, which is “I say, therefore it is.” I’m not sure that’s an exact translation, by the way, but it’s close.
Anna Gressel: Yeah, and I think it comes up in every Daubert challenge, so we’ve got to mention it here. But that’s a great point, Katherine, that expertise can be grounded in literature, in facts that are in books, for example, a review of that literature, or can be lived experience. Experts can testify based on a huge volume of their own experience. That might be someone like a business expert. So, experts come in all stripes and can be useful in all different kinds of points and have very different kinds of qualifications.
Katherine Forrest: Hey, can I just interrupt you, though, because I have to give an anecdote. I never really do this, but I have to give an anecdote. When I was a judge, an individual was proffered as an expert who had not graduated from high school and had not gone to college and was an expert in some form of radio wave technology. The side that was opposing the expert was really actively trying to get this expert precluded and used the lack of formal educational qualifications as a way of solidifying their objection. It turned out that this individual had expertise relating to radio waves going back to the age of seven and was able to cite all kinds of hands-on things they had done, companies they had consulted for, because a lot of companies had not cared one way or the other about whether or not the individual had a formal education. This person was a true expert. So he was qualified and the case went on. I don’t even remember how the case came out, but it’s an example of how experts can come in different shapes and sizes.
Anna Gressel: I think that’s a great transition point into our other question, which is: say you have an expert, what can that expert actually say and do in putting together the report, which is the basis of their testimony? It’s not the full scope of their testimony. Of course, you usually have a deposition and the expert expands on their testimony, but let’s talk about the report process. In terms of using AI to help draft a report, it really comes down to this question: Is it the AI itself that has the expertise in the field of the inquiry, or is the AI simply helping with the wording and just putting it together?
Katherine Forrest: You know, there was a case recently in which a proffered expert was found to have had portions of their report written by AI, and it did not go well.
Anna Gressel: Yeah, apparently that expert had not even checked the content. Maybe this was one of those down-to-the-wire drafting processes.
Katherine Forrest: Who knows what the reason was, but that was obviously a fatal mistake. I always tell people, you’ve got to check. You’ve got to check the content if you’re going to be using AI. It’s just a tool. There may be a point in time when AI is better than all of us, but we’re not there yet. So AI really needs to be just a starting point if you’re going to use it. The case we’re talking about is called Kohls v. Ellison, from the District of Minnesota earlier this year.
Anna Gressel: Yeah, it’s a great cautionary tale for teams that work with experts in terms of making sure you really understand what you’re putting out there in a report before you finalize it. But Katherine, let’s move on from experts. Why don’t you choose one of the hypotheticals and we’ll go into that one.
Katherine Forrest: Okay, I want to talk about hearsay. There was a point in time when I wanted to write a book on hearsay—although that book probably would not have been nearly as interesting as some of the AI books that I’ve written, and I’m not saying those are scintillating either. Hearsay is often a misunderstood rule of evidence. There really are some potential hearsay issues with some of the AI tools that are out there, particularly chatbots. So let’s take the example of a chatbot responding to a query that is presented to it.
Anna Gressel: Okay, so I’m going to give an example. I’m in an office and I use a chatbot my firm has created, and I ask the chatbot a question and it gives me an answer. Later on, there’s a lawsuit and the document request that the plaintiff serves on the company asks for digital materials that could cover the chatbot.
Katherine Forrest: Right, and so there’s really not much of a question that if the chatbot’s answers have been retained in some digital form, they’d be considered discoverable material. We’re now used to having text messages and Slack and every form of digital artifact be produced as part of litigation. That’s part of this lovely American litigation discovery program. But let’s assume that the chatbot gave an answer to a query that formed the basis for some action that was then taken by the company that is somehow relevant to the lawsuit. One of the questions becomes: is that answer a business record? Does it fall under a hearsay exception? Or is it not hearsay at all if, for instance, the plaintiff seeks to admit that answer to the query as a piece of affirmative evidence in the lawsuit? So here are our possibilities, that we’re just going to put on the table: business record, not hearsay at all, or some other hearsay exception.
Anna Gressel: Maybe we should start at the beginning. It’s possible it’s not hearsay at all. The plaintiffs want to use it just for the fact the chatbot gave a particular answer and not for the truth of that answer.
Katherine Forrest: You know what, that is exactly the issue that came up again and again and was so sorely misunderstood by a lot of folks, and was one of the reasons I wanted to write that book. People need to take on board that if you’re using a writing, not for the truth of the writing, but just for the fact that it was said, that’s not hearsay. That’s not a hearsay purpose. A hearsay purpose is when you want to use an out-of-court statement for the truth. So if you’re not using it for the truth, it’s just not hearsay. I find that to be a really important evidentiary point for people to hold on to. But what, Anna, if the plaintiff does want it for the truth?
Anna Gressel: Then I think we get into those other two options you talked about. We’d have to think about whether it’s a business record or an admission.
Katherine Forrest: Right, I actually had forgotten to mention admission, but you’re right. It would be not only business record, but also the question of whether it’s a party admission. For business record purposes under the hearsay rule, you have to remember—and again, this is one of those things that a lot of people just sort of forget about—that business records are things that are created in the regular course of business. You have to look at the case law in terms of whether or not it’s ordinary course or regular course and how that gets interpreted in terms of how often something has to be created, how unusual the document can or cannot be. Board records, for instance, board minutes, would be regular course of business, but a one-time email that had been unsolicited might not be. So it gets very complicated. You have to look at every jurisdiction.
But the query here—that is, the chatbot’s answer—the question will be: was it created in the regular course of business or was it a one-off? And how does the particular jurisdiction deal with that answer to the query in terms of whether it’s considered to be an actual business record?
Anna Gressel: Yeah, I mean, again, I just think it’s so important to emphasize what the chatbot is, how it’s designed, what its interfaces are—these can really affect the answer to that question. We’re talking about a general chatbot right now, but we have all kinds of fit-for-purpose tools that are intended to be used in a specific way. Maybe the answer is different in different contexts. So if, for example, the chatbot is used all the time for a certain kind of query, that might come a lot closer to regular course of business.
Katherine Forrest: Right, and these are going to be very fact-intensive issues that are going to come up. It’s going to be questions about how the chatbot is used, what kinds of questions are posed, what kinds of answers are given, how frequent they are. It’s really going to be fact-intensive. So let’s move on to whether or not the answer a chatbot gives could be a party admission.
Anna Gressel: That’s a really hard one because you get into whether the chatbot is able to make a statement for or on behalf of the company. It’s not really an employee, let alone anyone of true status within the company, right?
Katherine Forrest: At least not yet. But if the chatbot today—let’s just take today—was trained by the company or fine-tuned by the company with their own data, their own documents, the question will become: does that change the analysis? The chatbot could, by implication, be dealing with and basing its answers on statements that are, down the line, made by individuals from the company. On the other hand, they’re being fundamentally altered and changed by the way in which the internal architecture of the model works. So I think I could make a number of arguments that the statement from the chatbot under those circumstances is not an admission. And I also probably could make an argument that it is.
Anna Gressel: Yeah, and it may depend on whether it’s a repeated question with a repeated answer that follows pretty direct guidance that the chatbot was trained on. I also think we have this emerging question of role that I mentioned earlier, and kind of dismissively, but actually, we may start seeing chatbots given certain roles, like in certain meetings, or as challenges to boards, for example. It’s quite interesting to think about how, particularly with agents, we may put them in certain roles within a company. That’s a pretty open area just because I think we’re talking about a simple chatbot here, but the possibilities may change as the technology changes. And of course, that’s always true in the AI space.
Katherine Forrest: Yeah, we’re going to end up, I think, on a spectrum where something is anywhere from a mere tool to performing the role of an agent that takes on certain roles within the company that were or had been normally performed by a human. We’re going to have a lot of really complicated, fact-intensive questions.
Anna Gressel: Yeah, I mean, these are going to be super, super interesting and pretty complex to resolve over time.
Katherine Forrest: Right. So the point for the lawyers in our audience is that there are going to be a lot of open, greenfield legal questions about the evidentiary use of AI digital records. Here we’ve been talking about chatbot digital records. I think we’re going to have different cases come out different ways based upon different fact scenarios. So we’ll get to different fact scenarios as they arrive and maybe use them for different episodes. But that’s all we’ve got time for today.
Anna Gressel: I’m Anna Gressel.
Katherine Forrest: And I’m Katherine Forrest, and we’re going to leave you with a cliffhanger about additional evidentiary issues for AI.