Generative AI Risk in Legal Research: Is the Fault in the Technology or in Ourselves? The Answer Is Both


Will Generative AI awaken the need for a serious focus on legal research education?

The introduction of Generative AI to the practice of law has been anything but smooth. First there was the unfortunate case of Mr. Schwartz, who used ChatGPT to write a brief, complete with hallucinated cases, which he submitted to a federal court in New York. Judge Castel of the Southern District of New York noted that the attorneys had “abandoned their responsibilities.” More recently there have been the controversies related to a Stanford Human-Centered Artificial Intelligence (HAI) team study criticizing the quality of the Lexis and Westlaw generative AI products. The study was so roundly criticized that it was revised and reissued. The HAI study’s conclusions regarding the Westlaw Precision AI and Lexis+ AI products require a nuanced understanding of the HAI benchmarking definitions. The HAI studies flag a wide range of issues, including some which appear to be subjective. Problems noted range from a “true hallucination” to a factual error (e.g., the name of a judge) to the length of responses. Everyone agrees that legal generative AI products require serious benchmarking studies, but Stanford fumbled the ball.

Selling any new legal technology to law firms is hard. Selling generative AI products to law firms appears to be moving at a glacial pace, and this post will explore some of the obstacles to adoption of GAI in the legal market. There are probably more stakeholders in the mix than I have seen for any prior technology. Most noticeable is the presence of the General Counsel/Ethics Officer, who in many firms is waving cautionary flags. Then there are clients who are sending conflicting signals: limiting, requiring, or banning the use of GAI products on their matters. Add to this stew of ambiguity the proliferation of judges’ rules restricting or establishing requirements regarding not only the use of generative AI but AI products in general. (AI is probably in 90% of the products the average lawyer uses, including their smartphone.)

Why are law firms holding off on generative AI adoption for legal research?


The LexisNexis 2024 Investing in Legal Research Survey indicated that respondents identified the following as the top three concerns related to the adoption of GAI:

  • The trustworthiness of current technology solutions (86%)
  • The quality of current technology solutions (75%)
  • Hallucinated/invented content concerns (74%)

Hallucinated cases have created a false hysteria. The issue is genuine, but there is a real solution. A few factors have heightened the genuine risks posed by hallucinated cases.

  • Decline in Legal Research Skills. Bar associations, law schools, and law firms have been ignoring the need to mandate legal research competence for decades.
  • When did lawyers stop reading cases? Can a generation of lawyers who grew up scrolling learn to read full cases? In a 2008 Atlantic article, “Is Google Making Us Stupid?”, Nicholas Carr predicted that people would lose the ability to ingest and understand large amounts of text due to the decline in deep reading. Can this be tolerated in the legal profession?
  • Does generative AI pose truly unique risks for legal research? In my opinion, there is no risk that could not be completely mitigated by the use of traditional legal research skills. The only real risk is lawyers losing the ability to read, comprehend, and synthesize information from primary sources.

Are law firm GC offices and ethics committees overreacting to Generative AI after ignoring the risk latent in prior generations of legal research technology? In my opinion, yes. They are reacting only to the hyperbole surrounding Generative AI. The risks of GAI can be better understood in the overall context of the long evolution of legal research products and technologies.

Law firm GCs and ethics officers have largely ignored the general decline in lawyers’ ability to employ proper legal research techniques. This is a complaint I have heard from partners for decades. Bar exams don’t test legal research competencies. Most law schools don’t require advanced legal research training. Most law firms don’t have mandatory legal research training requirements. I wish I had a dollar for every associate who excused himself or herself from a new-associate legal research training class because they had already been given a legal research assignment… before they had been introduced to the best research resources and the firm’s research best practices!

Lawyers have been outsourcing pieces of their legal analysis to “technologies” and editors for over 100 years. Traditional examples of research shortcuts include headnotes, syllabi, citators, citation flags, treatises, practical guidance tools, and brief checkers.

Lawyers have been using products that they didn’t completely understand for decades. I don’t recall GC offices and ethics committees trying to hold back the adoption of traditional online legal research products. Lexis and Westlaw were not born as complete repositories of all U.S. legal materials. They grew over time. I remember Westlaw when it included only headnotes; I remember Lexis when it had only Ohio caselaw. Boolean searching and the old research command languages certainly resulted in incomplete research results. There was a time when a lawyer could retrieve a case with “on point” language without realizing that the text appeared in a dissent. In “the old days” a lawyer would have to print out the full case and read it to discover that the text was in the dissent.

How many lawyers have understood the prior generation of natural language outputs? Lexis, Westlaw, and newer rivals Bloomberg Law and Fastcase/vLex offer “non generative AI” products which provide algorithm-based natural language research results. Each system displays results ranked by its proprietary algorithm. The results can be dramatically different. I have never met a technology executive who could explain the “black box” results of a natural language query.

Bottom Line: For decades, imperfect legal research systems have flourished and improved over time based on both advances in technology and customer feedback.

Is the generative AI hysteria warranted?

Legal AI is not OpenAI. The bottom line is that OpenAI’s ChatGPT, which was misused by the lawyer in the Schwartz case noted above, was not designed for legal research! This has not stopped other lawyers from repeating the mistake.

Every generation of research tools has contained some level of risk. Over the years, I have discovered mistakes in such revered tools as Westlaw headnotes and Shepard’s citators. The solution has always been the same – a lawyer must read the underlying authorities and draw their own conclusions.

Artificial Intelligence is not new to legal research. Many of the most widely adopted products were created at least in part with the use of AI or deliver AI-enabled results. Features such as “Westlaw Answers” and “Lexis Answers” offer “type ahead” functionality as well as “answer cards” which are the product of human and artificial intelligence. Brief analysis tools from Lexis, Westlaw, and Bloomberg use algorithms to identify missing precedent. Lex Machina’s analytics platform was designed through a combination of human and machine analysis of docket data. Deal analysis tools from Bloomberg Law and Westlaw use machine learning to compare draft deal clauses against market standards extracted from millions of SEC documents.

The Provenance of Documents is a risk in legal research – Should lawyers be using Google for legal research? I would say “no.” Even though Google doesn’t hallucinate legal documents, it also does not verify the authenticity or provenance of the legal documents it serves up in its search results. I do not recommend using public open-source legal documents, even if they are located using a traditional Google search rather than an LLM. Open-source legal documents, even if not hallucinated, could be fakes or contain errors. Law firms subscribe to premium legal research platforms (Lexis, Westlaw, Bloomberg Law, vLex, Wolters Kluwer, et al.) to assure that lawyers have access to authentic and editorially enhanced legal documents.

Do Generative AI products from Lexis, Westlaw, and vLex have the same risks of “hallucinated” cases as GAI products using open-source data? No. Lexis, Westlaw, and vLex have private curated databases which include only authoritative content. They have built controls, including Retrieval Augmented Generation (RAG), into their systems which prevent the LLMs from generating “hallucinated” cases, but as of June 2024 they cannot guarantee that responses such as summarizations are 100% accurate.
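To make the RAG idea concrete, here is a minimal Python sketch of the general pattern: retrieve documents from a closed, curated store first, then constrain the model’s answer to those retrieved sources. Everything in it (the tiny in-memory case store, the keyword-overlap retriever, the prompt wording) is a hypothetical illustration of the technique, not how Lexis, Westlaw, or vLex actually implement their products.

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern.
# Hypothetical illustration only -- not any vendor's implementation.

# Stand-in for a curated, authoritative document store. In a real legal
# research product this would be a vetted, editorially maintained database.
CURATED_CASES = {
    "Smith v. Jones (hypothetical)": "Discusses the elements of contract "
        "formation: offer, acceptance, and consideration.",
    "Doe v. Roe (hypothetical)": "Discusses the reasonable-person standard "
        "for negligence.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank curated documents by naive keyword overlap with the query.
    Production systems use semantic (embedding-based) retrieval instead."""
    terms = set(query.lower().split())
    ranked = sorted(
        CURATED_CASES.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble the prompt sent to the LLM. The core RAG control is that
    the model is told to answer ONLY from retrieved, curated sources, so
    it cannot cite a case that was never retrieved. Its summaries of those
    sources can still be imperfect -- hence "trust but verify"."""
    sources = "\n\n".join(f"[{name}]\n{text}" for name, text in retrieve(query))
    return (
        "Answer using ONLY the sources below, citing each by name. "
        "If the sources are insufficient, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    # The actual LLM call is omitted; this prints the grounded prompt.
    print(build_prompt("What are the elements of contract formation?"))
```

Note that even under this pattern the model can still misstate what a retrieved case holds, which is exactly why the vendors cannot guarantee 100% accurate summarizations and why the next point matters.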

Trust but verify. Lexis+ AI, Westlaw Precision AI, and vLex Vincent AI will speed up the research process – but in the end these are “trust but verify” technologies. Lawyers will still need to read and cite-check any case that is recommended by a commercial Generative AI product designed for the legal market.

The Race to the Market. Maybe legal research technology companies share part of the blame for the pushback against generative AI. They have been in a rush to launch products without understanding the collateral ethical, client, and judicial issues which have been triggered by GAI in the legal market. Understandably, legal research and tech companies need a return on their tech investment. Perhaps the cost increases for upgrading to a generative AI legal research product are too steep for a class of products that is at an early stage of development?

There is a solution – Legal Research Education

Back in the 1980s there was a big push in the law librarian community to promote legal research training. Bob Berring, Dean and Director of the Library at the University of California, Berkeley, released his swashbuckling Commando Legal Research video series, which contained some commandments of legal research, including two that are no longer obvious to associates: “read any case you are citing” and “cross-check and validate your research.”

Maybe GAI can only succeed if we go back to the future?

Law librarians are the GAI training and risk mitigation teams which already exist in many law firms. However, legal research training doesn’t necessarily get the same level of endorsement and inclusion as other skillsets. Legal research education needs to be put on a par with all of the other core competencies included in “Associate Academies” if firms want to mitigate the risks of all legal research technologies, including Generative AI.