ChatGPT 4: Do Lawyers Know Just Enough to Be Dangerous?

Tech Law Crossroads

Last week, I posted on the issue of whether law schools should be teaching students how to use tools like ChatGPT. After I posted this, James Lau, a well-known legal tech author and former Chief Legal Officer, pointed out to me that OpenAI's GPT-4 Technical Report (14 March 2023) states: "In particular, our usage policies prohibit the use of our models and products in the contexts of high risk government decision making (e.g., law enforcement, criminal justice, migration, and asylum), or for offering legal or health advice." (page 6)


I was generally aware of this prohibition (or disclaimer, depending on your point of view) but failed to mention it in my post. The problem is that I used the term ChatGPT the way we often use the term Kleenex. Kleenex is the name of a brand of tissue, but there are other brands of tissue as well; we just tend to say Kleenex when we really mean tissue in general.

I was guilty of using the term ChatGPT the same way. ChatGPT is the publicly available generative AI tool developed by OpenAI. It draws on data from across the public internet to generate its answers. ChatGPT is, in a sense, a "brand" of generative AI. But it is not the only one, and other generative AI tools work differently and, importantly, rely on different and more limited data. Casetext's product, CoCounsel, for example, is an AI tool tailored for use by lawyers. It thus promises to be more accurate than ChatGPT, in part because its underlying data is largely limited to legal sources.

Not only are there other stand-alone generative AI products being developed, but developers are also applying OpenAI's product to more limited data sets to improve its accuracy and reduce its errors (hallucinations). This allows