Generative AI In Legal Needs Specialized Applications

Tech Law Crossroads

Widespread use of generative AI by lawyers and legal professionals will occur when AI tools can be applied to specialized and often private databases.

There has been a lot of hype about ChatGPT lately, but according to various reports, including one by Bob Ambrogi, the legal community's reaction has been somewhat ho-hum. And there are some reasons for that.

Use of ChatGPT by Legal Professionals

ChatGPT uses a public database, the internet, to derive its answers. At the risk of oversimplification, ChatGPT works by predicting which word will follow another word or phrase. Making that prediction from all publicly available information can mean that limited or specialized content is missed or misinterpreted. But this specialized content is often exactly what is needed to answer a legally related inquiry.
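To make the "predicting the next word" idea concrete, here is a deliberately toy sketch: a bigram model that guesses the next word based on which word most often followed the current one in its training text. The tiny corpus is invented for illustration; real systems like ChatGPT use far richer statistical models, but the core idea of prediction from prior text is the same.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a stand-in for "all publicly available text."
corpus = (
    "the court granted the motion . "
    "the court denied the motion . "
    "the court granted the appeal ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("court"))  # "granted" follows "court" twice, "denied" once
```

Notice the failure mode this implies: if the training text contains little on a specialized topic, the prediction is driven by whatever is statistically common, not by what is correct for that topic.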

The result: the system hallucinates a response that may not be correct. (It has been postulated that hallucination is the wrong term and that confabulation is more accurate.) And for legal, incorrect but persuasive answers are a killer. Lawyers and legal professionals are also concerned about confidentiality: they worry that whatever they put into ChatGPT could become publicly available.

These factors discourage lawyer use, particularly among those knowledgeable about how ChatGPT works. I also think the fact that ChatGPT has been overhyped, like other legal tech that didn't live up to the hype, makes lawyers skeptical.

But widespread use of generative AI (ChatGPT is generative AI) by lawyers may occur when the tools are applied to more limited and specialized databases. Applying generative AI to a limited database reduces hallucination: it lets the system focus not on everything but on what is particularly relevant to a query. And if generative AI is applied to a private database, it reduces the risk of a confidentiality breach and lends
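One common pattern behind "generative AI over a private database" is to retrieve only the relevant private documents for a query and hand the model just that context. The sketch below uses an invented document set and crude word-overlap scoring in place of the embedding search a real product would use; it is an assumption-laden illustration of the pattern, not any particular vendor's method.

```python
# Invented stand-in for a firm's private document store.
private_docs = [
    "Client A engagement letter: fees are billed monthly at agreed rates.",
    "Firm policy: client files may not be shared outside the matter team.",
    "Client B settlement memo: confidential terms, do not disclose.",
]

def retrieve(query, docs, k=1):
    """Rank docs by shared words with the query; return the top k.
    Real systems use semantic embeddings, not raw word overlap."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Constrain the model to the retrieved private context only."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how are fees billed", private_docs))
```

Because the prompt is built only from documents already inside the firm's own store, the model's answer is grounded in relevant material, and the sensitive data need never leave a private system.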