Law practice automation has been with us for more than half a century, from early keyword searches of cases manually entered into databases, to semi-automatic assembly of common contract clauses, to expert-seeded predictive coding for screening documents for production in discovery. In the past year, the availability of ChatGPT 3.5 to generate human-sounding text by applying patterns found in billions of words of training text has attracted millions of users. These users include attorneys, at least one of whom was famously called out by a federal judge for filing a brief containing machine-generated case citations that provided no substantive support for the propositions for which they were advanced.
Some courts have issued standing orders to address such inappropriate use of generative “artificial intelligence.” But the uses that millions have proposed and implemented for just this one large language model may pose dangers beyond obvious “hallucinations” or clear mis-citations in court filings. Practical economies may tempt practitioners to accept a facially plausible proposed “answer” as “good enough” when it is not.
Prior (and by no means discontinued) law practice automation has raised important legal ethics issues (such as confidentiality), many of which have not been generally resolved even among attorneys and clients with superior means to inquire. The popularization of AI tools, some of which are trained on information “scraped” from public-facing sources to which creators and individuals may have proprietary or privacy claims (not all facial recognition is the same), may leave tool providers, attorneys, and clients with less practical opportunity to resolve those issues, as well as new ones such as those raised by the “black box” nature of large foundation models.
In this program, attorneys will learn how to help meet their ethical responsibilities, including competence and the communication of risks to clients.