Beyond the Hype: Opening ILTA Keynote Reminds Lawyers and Legal Professionals of Limits and Risks of Data Analytics and Gen AI

Tech Law Crossroads

The gargantuan ILTA (International Legal Technology Association) Conference kicked off today at Nashville’s sprawling Gaylord Opryland Hotel and Convention Center. (Well, sort of in Nashville. The Gaylord is in the Nashville suburbs, not downtown Nashville). Over 4000 registrants from 32 countries. Over 200 exhibitors, many of them hawking their AI and Gen AI wares. Almost 350 speakers (as at most shows these days, many of whom are talking about Gen AI).

Indeed, one of the Ediscovery vendors wondered out loud to me whether there were still any Ediscovery issues left. All the conversations are dominated by Gen AI. The answer, of course, is that there are plenty of ongoing Ediscovery problems. It’s just that, these days, it’s all Gen AI, all the time.

You have to be careful with data and algorithms when making decisions

The Opening Keynote

Because the emphasis among vendors and attendees is so focused on the power of Gen AI and data, the kickoff keynote was a little surprising. The speaker was Hannah Fry, a mathematics professor and noted writer on data analytics and Gen AI.

Fry is an accomplished speaker and gave an impressive talk. Her theme: you have to be careful with data and algorithms when making decisions. She gave several examples where misplaced reliance on data to understand human behavior had led to bad results:

  • Making decisions based on incomplete data, or ignoring certain data that you have or could have. Data that you didn’t see or didn’t construct, for example.
  • Drawing the wrong conclusions by not looking at the data in the right way. Using data, for example, to get the result that you want. Or concluding that the data provides evidence in a particular direction and then accepting what it shows as gospel truth. As Fry observed, “You have to be sure the data is really telling you what you think it is.”
  •  Putting blind faith in what a machine is telling us, a machine that is not capable of understanding nuance or context.
  •  Placing blind trust in humans to use the data and Gen AI capabilities in the right way.
  •  Believing that GenAI is able to make decisions and provide responses in the same way humans do. Fry told us that Gen AI and algorithms don’t analyze situations the same way humans do. “Algorithms don’t see the world in the same way–they don’t understand context or nuance.”
  •  Believing that algorithms and GenAI tools understand what we want them to do without much effort on our part. Asking these tools the simple question, for example, instead of the right, more nuanced question.

The reason the presentation was surprising was not because Fry’s conclusions were unique or hadn’t been raised before. They have.

But it was surprising to hear them at a conference like ILTA, where the AI hype coming out of vendors and even participants is loud and constant. The notion of someone talking about the elephant in the room or noticing that the emperor has no clothes was, in a word, refreshing.

What Does This Have to Do With Lawyers and Legal Professionals?

As I have written before, I’m a pretty big proponent of Gen AI and AI tools. But we would be foolish as a profession to ignore the pitfalls and dangers. Fry’s keynote did not focus on the pitfalls of data analytics and Gen AI in law, but her presentation did call to mind several potential ones.

There are numerous examples of lawyers blindly accepting hallucinated citations without question. And even the most hyped-up vendor will have to admit that Gen AI systems not only hallucinate, they can offer up inaccuracies.

Beyond that, it’s clear that to get a good answer, you have to ask a good question. With all the hype, the danger is that some lawyers may be tempted to think it’s easy to ask ChatGPT a question and then just accept the answer, when, in fact, the question asked yielded an answer that should not be used.

Also, even with good questions, Gen AI’s response can be hard to control. Sometimes, the same question will yield different answers at different times.

Understanding context and nuance and persuasively presenting them can make all the difference

This is a particular problem for lawyers. Accuracy is important, for sure. But beyond that, especially for trial lawyers, understanding context and nuance and persuasively presenting them can make all the difference. That difference can be particularly acute today, where we don’t try many cases and no longer get real-world feedback from jurors that we once did. And it’s not just the words. Sometimes, how you ask a witness a question and even your tone of voice can make a difference in the credibility of the answer you get. You can’t just read from a list of ChatGPT questions.

The subtle and unspoken message is that these tools will enable you to save huge amounts of time either with the task itself or on output review.

The answers from data analytics or Gen AI have to be evaluated based on nuance, context, and the actual facts of the situation with which you are faced. Of course, this presupposes that lawyers have the experience to do that. (Which raises the question: where will younger lawyers get that experience in this brave new world?)

It also presupposes that the lawyer will take the time to review the answer. Gen AI marketing largely focuses on time savings; the subtle and unspoken message is that these tools will enable you to save huge amounts of time either with the task itself or on output review. Yes, these tools can save tremendous amounts of time. But you have to know when you need to review the results and when you don’t. That’s why talks like Fry’s are important.

Responsible vendors know this. As Paul Walker, Senior Director of Solution Outcomes at iManage, told me when I chatted with him at the Conference, “Ultimately, we want to protect the user from the prompt because that’s where the danger lies. That’s where you get the lack of context. That’s where you miss stuff.”

Another danger for lawyers: we are advocates for our clients. We want the facts to line up with what we want to argue. This leads to the temptation to draw the conclusions we want from data and Gen AI without looking at them critically.

And, of course, there is the temptation for younger lawyers, who don’t have the experience to understand the nuances of law and facts, to over-rely on these tools. Technology has already exacerbated this problem. As the pundit Jean O’Grady put it in a recent article, “Law firm GCs and ethics officers have largely ignored the general decline in lawyers’ ability to employ proper legal research techniques. This is a complaint I have heard from partners for decades.” Gen AI can only compound this problem.

Thanks ILTA

There is a real danger that in all the hype, lawyers and legal professionals may be losing sight of the risks. As Jean O’Grady puts it, “[lawyers] are only reacting to the hyperbole surrounding Generative AI. The risks of GAI can be better understood in the overall context of the long evolution of legal research products and technologies.”

Kudos to ILTA for offering this Keynote. It takes courage to highlight an issue that many vendors would just as soon ignore or downplay. The Keynote was a good reminder that, like any technology, data analytics and Gen AI come with both benefits and risks. We can’t afford to ignore either.