Navigating Legal Tech: Can Lawyers Trust Gen AI Vendor Confidentiality Assurances?


A loophole in Microsoft’s Azure OpenAI Service terms of use could expose privileged information to third-party review. Lawyers need to undertake reasonably diligent vetting of vendors and their terms. Reliance on vendor assurances alone is not enough. But what is?

Last week, I ran across a good piece of reporting by Cassandre Coyer and Isha Marathe on Law.com. The report highlighted an important issue.

Legal tech vendors have aggressively marketed Gen AI products over the last 18 months. To a vendor, they assure potential customers that inquiries and responses are protected, that they will not be used to train the system, and that third parties will not have access to confidential materials. In short: trust us. But can lawyers rely on these assurances, and to what extent? Do they need to do more?

Some Red Flags

The Law.com article raises some red flags. According to the article, “More than a year after law firms and legal tech companies signed onto Microsoft’s Azure OpenAI Service, which gives users access to OpenAI’s generative artificial intelligence models via the Azure Cloud, many found out that a terms-of-use loophole could make privileged information susceptible to third-party review.”

Under its terms and conditions, Microsoft can retain and then have humans manually review certain materials if its “abuse monitoring” policy is triggered. According to the article, this policy “was tucked in a nexus of terms and conditions,” and many vendors and law firms simply missed it. The potential for manual review, of course, could jeopardize the confidentiality of client information.

Is It Just a Matter of Reading the Terms and Conditions?

Well, you might say, it should be a simple matter: just read the terms and conditions more carefully. But who does that? I agree to app terms and conditions all the time without reading them.