Judicial Treatment of ChatGPT: Throwing the Baby Out With the Bath?


Steve Schwartz, the New York lawyer who recently misused ChatGPT, was all over the news last week. For those who don't know, Schwartz says he used ChatGPT to prepare a brief filed with a court. The brief included some case citations that ChatGPT supplied. The problem was the cases didn't exist. They were hallucinations.


While many were quick to blame the tech, the real problem was not the tech. It was that Schwartz didn't check the citations. He didn't read the cases. My guess is he wouldn't have read cases supplied by an online legal research service either, or cases found through manual legal research and cited by his associates in a memorandum.

And Schwartz should have been wary of the ChatGPT output. The hallucination problem is well known. And OpenAI's GPT-4 Technical Report (14 March 2023) states, "In particular, our usage policies prohibit the use of our models and products…for offering legal or health advice" (page 6).

But a bigger problem than the blame being heaped on ChatGPT instead of the lazy lawyer is the knee-jerk reaction by some judges.

For example, Texas Federal District Judge Brantley Starr has a new rule for lawyers in his courtroom: no filings drafted by artificial intelligence may be submitted unless the lawyer using AI certifies that a human checked the AI output.


Judge Starr’s Order

According to Judge Starr's Order, "All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being."
