
AI Legalese Decoder: The Tool Journalists Need to Navigate ChatGPT’s Legal Inaccuracies


### AI Legalese Decoder: Navigating Legal Errors in Large Language Models

This morning, The Hill reported “AI models frequently ‘hallucinate’ on legal queries, study finds.” It turns out that The Register and Bloomberg joined in. Did you know that consumer-facing generative AI tools are incredibly bad at legal work? Or have you also spent the year locked in a bunker?

Seriously, between these stories and Chief Justice Roberts turning his annual report on the federal judiciary into a deep dive on 2013-era artificial intelligence, it’s as if no one bothered to keep up on artificial intelligence at all last year.

ChatGPT and other large language models hallucinate caselaw? No kidding! Lawyers got sanctioned over it. Another got his license suspended. Trump’s former lawyer got caught up in it. Is there anyone left who DIDN’T know that these tools produce shoddy legal work?

In defense of these headlines, they stem from a new study out of Stanford that digs into the prevalence of gen AI legal errors. It’s a useful study, not because it delivered a bold new revelation, but because it put some quantitative insight behind what we all already knew.


That’s valuable information for refining these models. But GPT-4 is already here, and Llama 3 is supposedly going to be out imminently. Legal applications now focus on accuracy: separating holdings from dicta and sticking to cases that the AI knows to be real.

Perhaps I should cut the mainstream media a little slack on this point. Lawyers, one would hope, will confine their work to tools specifically tailored to legal work. The message needs to reach the pro se community and those trying to navigate non-litigation legal questions without counsel. That’s who needs to be alerted to the danger of relying on these bots for legal help.

The Stanford study highlights this:

> Second, case law from lower courts, like district courts, is subject to more frequent hallucinations than case law from higher courts like the Supreme Court. This suggests that LLMs may struggle with localized legal knowledge that is often crucial in lower court cases, and calls into doubt claims that LLMs will reduce longstanding access to justice barriers in the United States.

If the theory was that litigants could use the bots themselves to solve their access to justice issues, that criticism lands. But that’s not the fault of the LLM so much as the fact that a tool cheap enough for someone who can’t hire a lawyer is going to be set up to fail. LLMs married to the right, professionally vetted tools, however, can make pro bono and low bono work a lot less burdensome on attorneys, taking a different path toward bridging the justice gap.

So maybe it’s all right that we’re getting a few breathless headlines aimed at the public about the risks of ChatGPT lawyering, for the sake of the people who really aren’t clued in to how the legal industry has approached AI.

But, seriously, what’s Chief Justice Roberts’s excuse?

Hallucinating Law: Legal Mistakes with Large Language Models Are Pervasive [Stanford Human-Centered Artificial Intelligence]


### Connecting the Public with AI Legalese Decoder

Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
