We have enjoyed law practice automation for more than half a century, from early keyword searches of cases manually entered into databases, to semi-automatic assembly of common contract clauses, to expert-seeded predictive coding for screening documents for production in discovery. Since November 2022, the availability of ChatGPT (initially based on GPT-3.5) and other large language models (LLMs), which generate human-sounding text by applying patterns learned from billions of words of training text, has attracted tens of millions of users. Those users include attorneys, several of whom have been called out by courts for filing papers containing machine-generated, “fake” case citations or arguments.
Some courts and bar organizations have adopted rules to address these and other inappropriate uses of generative artificial intelligence (GenAI), and more considered rules are being developed as risks are identified. GenAI-proposed answers that pass human review as “good enough” may turn out to be wrong, reflecting biases and other inadequacies of LLM training data that are opaque to the user.
Legal ethics issues in law practice automation, such as confidentiality, have been amplified by GenAI through embedding and much-enhanced retrievability, capabilities that were not contemplated in the “proportionality” rules established just a few years ago. Foundation LLMs, often customized or “fine-tuned,” are offered through web services on diverse terms and are trained on information “scraped” from public-facing sources to which creators and individuals may raise proprietary or privacy claims; this leaves tool providers, attorneys, and clients with limited ability to identify, much less resolve, those issues.
Attorneys will learn their ethical responsibilities related to generative AI from practitioners who have been involved in policy development, including at the Board of Bar Overseers of the Massachusetts Supreme Judicial Court.