College of Law Students Contribute Comment on Proposed New Federal Rules of Evidence Rule 707 on Artificial Intelligence

Ten College of Law students, under the direction of Professor Todd Berger, researched and wrote a public comment on a proposed new federal rule of evidence. The Judicial Conference Advisory Committee on Evidence Rules has proposed amendments to existing rules and the creation of new rules in the Federal Rules of Evidence, on which the public could submit comments before the Committee makes a recommendation.

The students chose to write on proposed Rule 707, which addresses growing concerns about the use of AI-generated evidence, particularly when such evidence functions similarly to expert testimony and raises parallel questions of reliability, bias, error, and interpretability.

The submission focused on how proposed Rule 707 duplicates existing rules and on the use of AI models as a substitute for expert testimony. Before submitting their comment, the students tested and debated the use of AI for expert testimony using a variety of AI models.

“This was a unique legal research and writing exercise crossed with the Federal Rules of Evidence that produced a reasoned, practical comment that would be reviewed by the Judicial Conference Advisory Committee on Evidence Rules,” said Berger. “The comment brought a different perspective on the proposed rule that the committee had yet to consider. It’s not often that law students can shape a rule of evidence and impact the practice of law in federal courts and possibly in state courts.”

The Committee will propose any amendments and new rules to the Federal Rules of Evidence before submitting them to the United States Supreme Court for approval and then to Congress for final adoption.

Students took on this challenge for a variety of reasons and learned about how AI could be used in legal proceedings.

“I got involved with this research because in my personal life I do not use or like the use of AI. I wanted to be informed and involved in expressing concern about AI usage and its implications for the legal field. Over the course of our work, I used several AI platforms and saw how they responded, how different platforms think, and the variety of answers and possibilities from it,” said Emily Bielecki L’26. “Though I think it can be useful in closed universes and tailored platforms, like Lexis or Westlaw, I was reassured that AI makes mistakes and should never be substituted for expertise. I think our comment was thorough and provides blatant evidence and examples of why AI cannot be used as an expert witness. I hope it is taken seriously and reviewed by the Committee with the same thoroughness as it was written.”

“My decision to get involved stemmed from my background as a scientist. The proposed rule sits at the intersection of expert evidence and legal standards for reliability, and I wanted to contribute to the conversation that could shape how courts evaluate these new tools, especially as they become more common not just in scientific research but in everyday life,” said Jimmie Bullock L’28. “During this work, what stood out to me most was how much precision matters, because small wording choices can have major consequences in litigation. Our comment focused heavily on generative AI platforms, and it seemed that while the Committee’s intentions were genuine in addressing a larger problem, the proposal may have opened a door to a significant unintended consequence. I think our comment offered not only a real-world simulation of how the rule is premature, but also careful analysis of those vulnerabilities. I hope the Committee sees our submission not just as a student project, but as a serious contribution from future litigators who are invested in protecting the integrity of the trial process.”