Why the Avianca ‘Bogus Cases’ News Is Not About Either Generative AI or Lawyers’ Tech Competence

It was bound to happen sooner or later: Two lawyers face sanctions for filing a brief laden with bogus cases hallucinated by ChatGPT. But is this a story about the failings of AI or is it about the failings of the lawyers?

We may know more after June 8, the date on which the judge in the case, Mata v. Avianca, has scheduled a hearing to allow the lawyers to show cause why they should not be sanctioned for what the judge called “an unprecedented circumstance” of a brief “replete with citations to non-existent cases.”

Six of the cases the attorneys cited “appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” the judge wrote.

In an affidavit filed in the case, Steven A. Schwartz, one of the attorneys for plaintiff Mata, took responsibility for the bogus cases, explaining that he located them “in consultation with the generative artificial intelligence website Chat GPT.”

Not only did ChatGPT provide him with bogus citations and opinions, but it “assured the reliability of its content,” he wrote.

“Your affiant has never utilized Chat GPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false,” his affidavit said.

Technology Incompetence?

No doubt, some will see this as a cautionary tale about generative AI technology and whether it should be used by legal professionals.

But, to me, the cautionary tale here is not about technology, but about lawyer competence.

Eleven years ago, the American Bar Association adopted the so-called duty of technology competence, requiring lawyers to “keep abreast” of changes in law and practice, “including the benefits and risks associated with relevant technology.” Forty states have adopted this duty.

In the cases that have since interpreted and applied the duty, lawyers’ claims of ignorance about technology have never helped them avoid sanctions.

My poster child for this proposition has long been the 2014 Delaware case of James v. National Financial, in which a lawyer facing sanctions for e-discovery misconduct offered these words in his defense:

“I have to confess to this court, I am not computer literate. I have not found presence in the cybernetic revolution. I need a secretary to help me turn on the computer. This was out of my bailiwick.”

Did that excuse garner sympathy from the court? Not in the least. “Professed technological incompetence is not an excuse for discovery misconduct,” the court held, citing Delaware’s then-recent adoption of the duty of technology competence.

Where’s the Common Sense?

Given that lawyers have a duty to keep abreast of the benefits and risks of technology, and given the almost hyperbolic media coverage of generative AI over the past six months — including extensive coverage of the risks of hallucination — there should be no lawyer in the United States who is not at least loosely aware of the benefits and risks of generative AI.

While this case may stand as an object lesson in technology incompetence, I believe it is actually about lawyer competence, or the lack thereof.

I have long believed that so much of a lawyer’s duty of competence — competence of any kind — boils down to exercising common sense.

In this case, after the lawyer had reason to suspect that the cases provided by ChatGPT were not real, what did the lawyer do to investigate that suspicion?

He asked ChatGPT if they were real.

His affidavit shows how he “fact-checked” the legitimacy of the cases by returning to the source, ChatGPT, and having this exchange:

Lawyer: Is varghese a real case

ChatGPT: Yes, Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case.

Lawyer: What is your source

ChatGPT: I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.

Lawyer: Are the other cases you provided fake

ChatGPT: No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.

As a matter of common sense, if a lawyer suspects that someone (or something) provided false information, should not the lawyer have checked the information against a different source? I mean, it’s not like the Internet lacks for free sources of case law. Drop the citation in a browser and see if it comes up.
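
For the technically inclined, here is a rough, illustrative sketch of what that kind of sanity check could look like in Python, using the free CourtListener service. The endpoint, query parameters, and response fields below are my assumptions about CourtListener’s search API, not anything from the case filings, so verify the current API documentation before relying on it.

```python
# A minimal sketch of verifying a citation against a free case-law source.
# Assumption: CourtListener's REST search endpoint accepts a quoted-phrase
# query and returns JSON with a "count" of matching opinions.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_appears(citation: str) -> bool:
    """Return True if searching for the citation turns up any opinion."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": f'"{citation}"', "type": "o"},  # "o" = case-law opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# The citation ChatGPT supplied for the non-existent Varghese case:
if not citation_appears("925 F.3d 1339"):
    print("No opinion found. Check a second source before citing this case.")
```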

Even more inexplicable in this case is the lawyer’s production of the purported full text of a bogus case. Nothing I have read explains how it could have happened.

After Avianca’s lawyers alerted the judge that the cases appeared to be nonexistent, and after the judge notified Mata’s lawyers of this and ordered them to provide copies of the opinions, the lawyers somehow complied, at least in part, producing the text of the non-existent Varghese case.

How did this happen? Did the lawyer go back to ChatGPT and ask it to produce the text? Did it never occur to the lawyer, in the face of these red flags, to check a legal research source other than ChatGPT?

Had this lawyer never learned the cardinal rule that you never cite a case you have not read? I don’t care if the case comes from ChatGPT or a learned treatise — read it before you rely on it.

All of this leads me to one conclusion: this is not a cautionary tale about the dangers of new technology, nor even about technological incompetence. Rather, it is a cautionary tale about lawyer competence and why it is so important for lawyers simply to exercise caution and common sense.


Featured image by Kyle Glenn on Unsplash