ABA’s Opinion 511 and Its Impact on Legal Ethics in the AI Era: A Wake-Up Call?

Tech Law Crossroads

On May 8, the ABA Standing Committee on Ethics issued its Formal Opinion 511, entitled Confidentiality Obligations of Lawyers Posting to Listservs. As Bob Ambrogi rightly pointed out in his recent post on the Opinion, it seems odd that the ABA would issue an opinion now about a technology that has been around since the late 90s. For Bob, it brought to mind Rip Van Winkle, who slept for 20 years only to wake up in an unrecognizable world.

I agree that the timing seems strange. The substance of the Opinion, however, may reveal the Committee’s thinking about lawyers’ use of large language models (LLMs).

The Opinion addresses when a lawyer may post questions or comments on a listserv without the client’s “informed consent.” According to the Opinion (and clearly under the Rules), a lawyer may do so only if there is no reasonable likelihood that a reader could determine either the identity of the client or the matter involved. The Opinion also discusses what “informed consent” entails.

Applicability to LLMs

One only needs to read the Opinion’s initial summary to see where this reasoning could take the Committee with respect to LLMs. Here is a redlined version of the Opinion’s summary with the abbreviation LLM substituted for listserv:

“This opinion considers whether, to obtain assistance in a representation from other lawyers [struck: on a listserv discussion group] [inserted: , or post a comment to, a Large Language Model or Generative AI], a lawyer is impliedly authorized to disclose information relating to the representation of a client or information that could lead to the discovery of such information.”

The upshot, of course, is that if it is not reasonably likely that the client’s identity or the matter could be inferred, then the lawyer is free to use a listserv under the implied authorization exception of