Lawyers need to advise clients about the risks of Gen AI.
Another week, and I find myself at yet another legal conference focused on AI and Gen AI. Lots of the now-standard discussions about whether and how Gen AI will impact lawyers and the legal profession. Presenters droning on about the risks and benefits to lawyers of using Gen AI. But like so many things lawyers stew over, these discussions almost always fixate on lawyers' own professional navel-gazing, not on the interests of their clients.
When lawyers do focus on their clients in this area, it's mostly about what Gen AI will do to the all-powerful billable hour, what it will do to their revenue, and whether lawyers will be replaced by a Gen AI version of Her (or Him).
Lawyers worry mainly not about their clients' use and potential liability but about themselves.
But as usual, lawyers are collectively missing something. Their clients, businesses and individuals alike, are using AI and Gen AI every day. They are using it to develop products. To manufacture products. To assist in making business and personal decisions. To assess risks. To create contracts. All the while, lawyers worry mainly not about their clients' use and potential liability but about themselves.
As lawyers, our ultimate responsibility and singular focus should be to advise and protect our clients competently. To help them assess risks. To help them make business decisions about how to use Gen AI and when to use it and when not to. Our clients can certainly make these kinds of decisions based on their business or individual needs. But they look to us to help them assess the legal risks and exposure from their Gen AI use. It's up to us to educate our clients about the risks and potential bias of Gen AI in the business context and how those could lead to liability. And when there is a lawsuit over their use of Gen AI tools, it's up to us to navigate them through the litigation and reduce their exposure.
Gen AI does bring some risk. Hallucinations. Inaccuracies. Bias.
And Gen AI does bring some risk. Hallucinations. Inaccuracies. Bias, like that demonstrated by Amazon's recruitment tool, which was hastily abandoned because it picked men over women for Amazon jobs. Copyright infringement. Voice cloning. Product liability. Class actions. Not to mention the fact that even computer scientists don't fully understand how these systems do a lot of what they do. We need to educate our clients about what all this means from a legal perspective and about the dangers it poses.
So, if we want to be trusted advisors and help our clients with sophisticated and complicated litigation, don't we have to understand Gen AI and its risks, not only to ourselves but to our clients? Don't we have to know the "risks and benefits" of Gen AI to competently advise our clients?
There are ethical considerations as well. Comment 8 to ABA Model Rule 1.1, which governs competence, says we should keep abreast of the risks and benefits of relevant technology. Comment 2 to that same Rule provides, "Perhaps the most fundamental legal skill consists of determining what kind of legal problems a situation may involve, a skill that necessarily transcends any particular specialized knowledge." Comment 2 to Rule 1.3 provides that we are to act with commitment and even zeal in representing our clients.
We need to ask not what Gen AI can or can't do for us but what risks Gen AI does or doesn't hold for our clients.
All these Comments suggest that when it comes to the AI and Gen AI our clients are using, we need to be knowledgeable and able to assess the risks of that use. We can't run away from Gen AI any more than we could run away from computers, smartphones, cybersecurity issues, and the legal risks those technologies pose.
The lawyers who succeed in the future will be client-centric. They will understand how their clients use Gen AI tools so they can competently advise them. To paraphrase John F. Kennedy, we need to ask not what Gen AI can or can't do for us but what risks Gen AI does or doesn't hold for our clients.