• Live Webinar with Live Q&A
  • June 23, 2026 @ 1:00 PM ET/10:00 AM PT
  • Intermediate
  • Class Action & Other Litigation
  • 90 minutes

Authentication of AI-Generated Evidence: Interplay of FRE 104, 403, 702, and 901; Proposed FRE 707 and 901(c)

About the Course

Introduction

This CLE webinar will address authentication and admissibility of evidence created or enhanced by AI. The panel will consider the interplay among Federal Rules of Evidence 104(a) and 104(b), 403, 901(b)(9), and 702, and best practices when someone seeks to proffer or challenge AI-generated evidence, whether acknowledged or unacknowledged.

Description

Litigators are already contending with materials that have been produced, enhanced, or modified by AI systems: sentiment analysis showing attitudes and preferences in class actions or employment litigation; programs that purport to identify habits and predict next actions; enhanced video or audio in premises liability or property insurance cases; timelines created from voluminous records; accident reconstructions; progression tools showing how people or property change as a condition advances; underground and underwater visualizations; and reconstructions of lost or incomplete contracts, electronic messages, or emojis. The list is virtually endless.

Whether attempting to lay a foundation or responding to an objection, the party proffering AI evidence will have to demonstrate to the judge that the algorithm that created the evidence is valid and reliable, which FRE 901(b)(9) states can be done by "describing a process or system and showing that it produces an accurate result." When everyone is aware of and acknowledges that AI was involved, the inquiry often looks like any other test of scientific or technical evidence under FRE 702 and varies in complexity with the complexity of the AI system. Because there has been no standardization or broad consensus on the best procedure, the Federal Rules of Evidence Advisory Committee has proposed a new FRE 707 to address admissibility of "machine-generated evidence."

When material is offered that the proffering party does not acknowledge was "machine-generated" (that is, when another party alleges that the evidence was AI-generated), nothing is settled. If the opposing party presents sufficient evidence from which a jury could conclude that the material was AI-generated, most courts and commentators take the position that FRE 104(b) requires the jury to hear and decide the admissibility issue. The consensus among commentators is that this is a bad idea. Proposed FRE 901(c) is intended to address this problem and is likely to be a good first step.

Listen as this panel of expert litigators discusses how courts are dealing with these issues now and offers guidance and practical tips for addressing machine-generated evidence.

Presented By

Paul W. Grimm
District Judge (Ret.)
United States District Court for the District of Maryland

Judge Paul W. Grimm was appointed as a United States District Judge for the District of Maryland on December 10, 2012. Previously, he was appointed to the Court as a Magistrate Judge in February 1997 and served as Chief Magistrate Judge from 2006 through 2012. In September 2009, he was appointed by the Chief Justice of the United States to serve as a member of the Advisory Committee for the Federal Rules of Civil Procedure. Before joining the Court, Judge Grimm was in private practice in Baltimore for thirteen years, during which time he handled commercial litigation. He also served as an Assistant Attorney General for the State of Maryland, an Assistant State’s Attorney for Baltimore County, Maryland, and a Captain in the United States Army Judge Advocate General’s Corps. In 2001, Judge Grimm retired as a Lieutenant Colonel from the United States Army Reserve. Judge Grimm received his undergraduate degree from the University of California, Davis (summa cum laude), his J.D. from the University of New Mexico School of Law (magna cum laude, Order of the Coif), and his LL.M. from Duke Law School.



Maura R. Grossman, J.D., Ph.D.
Research Professor
David R. Cheriton School of Computer Science at the University of Waterloo

Ms. Grossman, J.D., Ph.D., is a Research Professor in the David R. Cheriton School of Computer Science at the University of Waterloo, an Adjunct Professor at Osgoode Hall Law School of York University, and an affiliate faculty member at the Vector Institute of Artificial Intelligence, all in Ontario, Canada. She also is Principal at Maura Grossman Law, an eDiscovery law and consulting firm in Buffalo, New York. Ms. Grossman is best known for her scholarly work on technology-assisted review ("TAR"), which has been widely cited in the case law, both in the U.S. and abroad. She is also known for her appointments as a special master and/or as an expert in multiple high-profile federal and state court cases. In addition to her J.D. from the Georgetown University Law Center, Ms. Grossman holds M.A. and Ph.D. degrees in Psychology from the Derner Institute of Adelphi University.

Credit Information
  • This 90-minute webinar is eligible in most states for 1.5 CLE credits.

  • Live Online
  • On Demand

Date + Time

  • Tuesday, June 23, 2026
  • 1:00 PM ET/10:00 AM PT

I. Introduction 

A. Overview of rules for admissibility

B. Authenticating evidence created by processes: FRE 901(b)(9)

II. Evidence the parties acknowledge is AI-generated

A. Current approaches, including FRE 104(a) and 702

B. Proposed FRE 707

III. Evidence the parties do not acknowledge is AI-generated

A. How the issue arises

B. Application of FRE 104(b): letting juries decide admissibility

1. Overreliance on and deference to machine learning

2. Inability to disregard evidence it has found inadmissible

3. Inability to understand how AI works 

IV. Use of FRE 403

V. Strategies for proponents and challengers 

The panel will review these and other important issues:

  • What is the difference between acknowledged and unacknowledged AI-generated evidence?
  • How can litigants help juries avoid overestimating the value of AI-generated materials?
  • Is a bald allegation, without more, that evidence is AI-generated or enhanced sufficient to keep the evidence out?