
"Artificial Intelligence" Ethics for Attorneys

What can AI ethically do for you?

  • Product Number: 2250075WBA
  • Publication Date: 9/19/2024
  • Length: 1 hour
  • Copyright: © 2024 MCLE, Inc.

Your Selection:

Pricing: $145.00; Members: $130.50; New Lawyers: $72.50
Free for OnlinePass subscribers.
Member and new lawyer discounts applied in cart.
Also Available:
On Demand video and audio
Related On Demand Videos
See Agenda below to purchase individual video segments from this program. Pricing varies by video length. Member and new lawyer pricing available. Free for OnlinePass subscribers.
  • Product Description
  • Agenda & Materials
  • Faculty

    Product Description

    We have enjoyed law practice automation for more than half a century, from early keyword searches of cases manually entered into databases, to semi-automatic assembly of common contract clauses, to expert-seeded predictive coding for screening documents for production in discovery. Since November 2022, the availability of ChatGPT and other large language models (LLMs), which generate human-sounding text by applying patterns learned from billions of words of training text, has attracted tens of millions of users. These users include attorneys, several of whom have been called out by courts for filing papers containing machine-generated, “fake” case citations or arguments.

    Some courts and bar organizations have set down rules to address these and other inappropriate uses of generative artificial intelligence, and more considered rules are being developed as risks are identified. GenAI-proposed answers that pass human review as “good enough” may nonetheless turn out to be wrong, reflecting biases and other inadequacies of LLM training data that are opaque to the user.

    Long-standing legal ethics issues in law practice automation, such as confidentiality, have been amplified by GenAI through embedding and much-enhanced retrievability, risks not contemplated by the “proportionality” rules established just a few years ago. Foundation LLMs, trained on information “scraped” from public-facing sources to which creators and individuals may raise proprietary or privacy claims, are made available through web services on diverse terms and are often customized or “fine-tuned.” This leaves tool providers, attorneys, and clients with limited ability to identify, much less resolve, those issues.

    Attorneys learn their ethical responsibilities related to generative AI from practitioners who have been involved in policy development, including at the Board of Bar Overseers of the Massachusetts Supreme Judicial Court.

  • Agenda & Materials
  • Faculty