Did Your Expert Use ChatGPT? You Might Want to Ask

Tech Law Crossroads

We have all heard, again and again, about lawyers who use Gen AI and fail to check the citations the tools provide. The dangers of hallucinations and inaccuracies in Gen AI output are well known, and a court will likely have little sympathy for the lawyer who fails to check sources.

But what if an expert witness uses Gen AI to come up with nonexistent citations to support their declarations or testimony?

That very thing just happened in a case pending in Minnesota federal court, as reported by Luis Rijo in an article in PPC Land. Ironically, the expert in question, Professor Jeff Hancock, director of the Stanford Social Media Lab, offered a declaration in a case challenging the validity of a Minnesota statute regulating deepfake content in political campaigns. Hancock subsequently admitted using ChatGPT to help draft his declaration. The declaration included two citations to nonexistent academic articles and misattributed the authors in a third.

District Judge Laura M. Provinzino noted that Hancock, a credentialed expert on AI and misinformation, “has fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI.” His testimony was excluded.

An Interesting Question

In excluding Hancock’s testimony, Judge Provinzino suggested that attorneys should specifically ask witnesses whether they used AI in drafting declarations and, if so, whether they verified any AI-generated content. The Court noted that while AI has potential benefits, lawyers should maintain independent judgment and critical thinking rather than simply relying on AI-generated content.

A Lawyer’s Duty

Judge Provinzino’s opinion raises some interesting ethical and practical issues concerning expert witnesses and a lawyer’s relationship with them.

ABA Model Rule 3.3 provides that a lawyer must not knowingly offer false evidence. ABA Model Rule 3.4, which deals with fairness to opposing parties and counsel, provides that a lawyer shall not unlawfully alter, destroy, or conceal a document or other material having potential evidentiary value.

But that doesn’t answer whether the lawyer should ask the expert if they used Gen AI and, if so, whether they checked the citations. Does the lawyer have a duty to check the citations independently?

Expert Witnesses

The rules are relatively straightforward when dealing with lay witnesses. It also seems pretty clear that if a lawyer discovers an expert has relied on Gen AI and the expert’s opinions are based on nonexistent sources, the lawyer is ethically bound to disclose that to the court and to the other side.

Things are more complicated with experts. Unlike lay witnesses, experts testify to opinions based on specialized knowledge, not just facts. Determining whether an expert’s disclosure or testimony is accurate may require a more detailed review of technical information, methodologies, and research.

Certainly, if a lawyer suspects or has reason to believe that AI-generated opinions or citations might not be reliable, the lawyer needs to investigate. But what if the lawyer doesn’t know or suspect? Has the use of Gen AI become so ubiquitous that asking about its use is now required?

The Civil Rules

The Federal Rules of Civil Procedure help answer these questions. FRCP 26(a)(2)(B) requires that a retained expert’s written report contain:

  • A complete statement of all opinions the expert will express.
  • The basis and reasons for those opinions.
  • The facts or data considered by the expert in forming those opinions.
  • Any exhibits that will be used to summarize or support the opinions.
  • The expert’s qualifications, publications, compensation, and a list of other cases where they’ve testified as an expert.

If an expert relied on an AI tool’s analysis or citations, that reliance arguably falls within the scope of “facts or data” and should be disclosed. Data or methodological steps that significantly inform the expert’s opinions must be disclosed, including which tools were used and whether those tools meaningfully contributed to the substance of the opinion.

If a lawyer knows the expert used Gen AI, the lawyer has a duty to ensure (a) that reliance is disclosed if it’s part of the expert’s methodology and (b) that no incomplete, fabricated, or misleading references are proffered.

Practical Implications

Under the ethical and civil rules, a lawyer should at least ask the expert what materials and tools they relied on in order to make the requisite disclosures. But does that mean a lawyer should specifically ask the witness whether they used Gen AI to formulate their opinions? It’s not entirely clear under the rules. But practically speaking, and as a best practice, it would be a good idea.

It’s well known that Gen AI tools can hallucinate and provide inaccurate answers. So, in vetting the expert, it would be appropriate to ask whether a tool capable of hallucinations and inaccuracies was used.

Given the growing use of Gen AI in virtually every context, the other side will likely check the references and materials. Finding out that your expert has relied on false or inaccurate information after disclosure or during cross is not only embarrassing; it may also result in your expert being excluded, just as Hancock’s testimony was.

Best Practices

It’s simply good practice to ask your expert if they used any AI tools when researching or drafting their report. If so, ask them to (a) identify specifically which portions were generated and (b) cross-check any authorities or factual assertions the tool provided. If the opinion relies heavily on novel references or “found” data, ask the expert for the primary sources.
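For the technically inclined, a first pass at that cross-check can even be automated. Below is a minimal sketch, not a substitute for pulling the primary sources: it assumes Python with the requests library and uses Crossref’s public REST API to search for a published work matching a cited title. The function name and the example citation are hypothetical placeholders, not references from the Hancock matter.

```python
# Minimal sketch: check whether a cited article appears in the Crossref
# index of published works. The cited title below is a hypothetical
# placeholder, not an actual reference from the case discussed above.
import requests

CROSSREF_API = "https://api.crossref.org/works"

def find_candidate_matches(title: str, rows: int = 3) -> list[dict]:
    """Query Crossref for published works whose bibliographic data
    resembles the cited title, returning title, DOI, and authors."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["<untitled>"])[0],
            "doi": item.get("DOI", ""),
            "authors": [
                f"{a.get('given', '')} {a.get('family', '')}".strip()
                for a in item.get("author", [])
            ],
        }
        for item in items
    ]

if __name__ == "__main__":
    # Hypothetical AI-supplied citation to vet.
    cited = "Deepfakes and Political Misinformation: An Experimental Study"
    for match in find_candidate_matches(cited):
        print(f"{match['doi']}: {match['title']} "
              f"({', '.join(match['authors'])})")
```

A title that returns no plausible match is a red flag worth running down by hand; even a match still needs its authors, journal, and year compared against what the report claims.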

Also, keep in mind that experts are not lawyers. They may not appreciate the severity of “hallucinated” references. Notably, Hancock made it a practice to check any sources identified by Gen AI in his scholarly work but apparently didn’t think it necessary to do so for court purposes. It doesn’t hurt to give the expert a brief explanation of the issues associated with Gen AI use, the need for careful verification, and the consequences if it isn’t done.

The Bottom Line

Given the known risks of generative AI creating fabricated citations or data, and the increasingly ubiquitous use of Gen AI tools, lawyers would do well to ask whether their expert used Gen AI and how the expert verified the output. Even though no single, explicit rule demands this inquiry, the duty of disclosure, the practical implications, and best practice make this diligence step both prudent and aligned with a lawyer’s ethical obligations.