Gen AI and Law: Perfection Is Not the Point

Tech Law Crossroads

Earlier this week, the well-known commentator Seth Godin observed,  

“One of the valid complaints about some AI systems is that they make stuff up, with confidence, and without sourcing, and then argue when challenged.

Unsurprisingly, this sounds a lot like people.”

In evaluating whether lawyers should use Gen AI tools, lawyers (and legal commentators, for that matter) often forget that humans, lawyers included, make mistakes. They make shit up. As one of my former partners once observed, “So and so may be wrong, but he is never unsure.” And even when wrong, boy, can lawyers argue they are nevertheless right.

When it comes to using Gen AI tools, and even other technology, lawyers expect perfection but forget that it’s not a perfect world. No system, human or otherwise, has yet achieved perfection. Think about how many times a beginning lawyer misreads cases in their zeal to craft a winning argument. I can’t begin to tell you how many times I misremembered deposition testimony after the deposition. I would believe in my heart how good the testimony was, only to be shocked at how good it wasn’t when I read the transcript.

The point is not whether Gen AI can provide perfect answers

The point is not whether Gen AI can provide perfect answers. It’s whether, given the speed and efficiency of using the tools and their error rates compared to those of humans, we can develop mitigation strategies that reduce errors. That’s what we do with humans (i.e., read the cases before you cite them, please).

Another close-to-home example. One of the big reasons cited for why lawyers should not use public Gen AI platforms is that their use risks revealing client confidences. Which is a little humorous, since a lot of lawyers don’t really understand what the ethical rules say. They confuse the confidentiality ethical rules with the rules defining attorney-client privilege. The ethical rules go beyond protecting communications that relate to the seeking of legal advice. Here is what the ethical rule says:

A lawyer shall not reveal information relating to the representation of a client. 

That’s anything about the representation. Yet human error in understanding this difference produces all sorts of possibilities for an ethical lapse: a lawyer talking on their cell phone in a public place where others can hear, a lawyer using public WiFi to email their client, or a lawyer using Google for searches or Gmail for communicating.

We try to deal with the human error rate by educating lawyers, requiring them to use a VPN, and requiring continuing legal education about ethics.

Can we say that Gen AI systems, used with similar mitigation techniques (like education and training), have error rates so much worse than humans’ that they should not be used at all? Despite the fact that they can save so much time and effort?

Examples abound where AI error rates are lower than those of humans

Examples abound where AI error rates are lower than those of humans. In a study comparing AI to lawyers in reviewing non-disclosure agreements, for example, AI achieved an average accuracy rating of 94%, while lawyers averaged 85%. (By the way, the AI completed the review in 26 seconds on average, compared to 92 minutes for the lawyers.)

Despite these statistics, some people resist using AI for document review because they fear it might make a mistake.

While we are at it, we need to consider what risk level is tolerable when it comes to client confidences in general. Here is what the ethical rule says:

A lawyer shall make reasonable efforts to prevent … inadvertent or unauthorized disclosure.

What are “reasonable efforts” in today’s world where the expectation of privacy is so different from 50 years ago?

Some seem to think that the mere possibility that someone, somewhere could determine the client’s identity from what a lawyer discloses to a Gen AI tool is enough to say never use the tool.

We don’t tell lawyers not to go to cocktail parties because they might blabber something that would allow someone to learn something about a representation.

How is that position different from saying a lawyer shouldn’t use email because the email system could be hacked and someone could read confidential communications? In point of fact, law firms and cloud providers are hacked from time to time, yet no one says let’s keep everything on paper just to be on the safe side. (Even then, someone could still break into your office if they wanted to and see everything.)

The point is that we need to examine what risks are tolerable with new technology, given what it can do for us. We need to think about how to mitigate those risks, not throw the baby out with the bathwater. We need to assess AI capabilities for specific legal tasks rather than generalizing. We need to develop protocols for AI use, including requirements for human oversight and cross-checking of critical information.

We can’t take an absolutist position. We don’t with humans. It’s not practical or necessary.

Or as the Gen AI platform Perplexity put it when I asked it about this issue: “When evaluating the use of generative AI in legal practice, it’s crucial to consider error rates in context rather than dismissing AI outright due to potential mistakes. Both AI and humans are prone to errors, so the key is to compare relative error rates and consider mitigation strategies.”