Lawyer fabricates brief using ChatGPT, then doubles down when judge wants details of the fake cases it cited

Originally published at: Lawyer fabricates brief using ChatGPT, then doubles down when judge wants details of the fake cases it cited | Boing Boing

8 Likes

27 Likes

“I’m sorry for any misunderstanding or confusion.” = “Hang on while the Magic 8-Ball rolls the dice again.”

19 Likes

Law school graduate wife says this guy will only get sanctioned and not disbarred. So this guy gets to continue practicing. :man_shrugging:t2:

13 Likes

My/your favorite copyright attorney just put up a video on this one: Lawyer files ChatGPT DISASTER in COURT (Mata v. Avianca, Inc.) - YouTube

If you haven’t seen a Lawful Masses video before, they’re pretty great. Leonard reads through the actual court filings and briefs and provides context. So, fair warning, it’s 35 minutes of someone reading legal documents at you.

14 Likes

Silly AI…

Join the Federalist Society and pulling law out of your ass becomes a point of fulsome moral superiority, not a defect!

15 Likes

What kind of elementary school-level fakery do “professionals” in other fields try to pass off as their real work product?

9 Likes

Well…medical diagnoses?

8 Likes

However…the client can damn sure sue the attorney for malpractice. And if he ever does this again, he likely would be disbarred.

18 Likes

“God damn it, open the fucking pod bay doors!”

“As I said before, the doors are wide open, Dave.”

12 Likes

IAAL and you would be shocked at what behavior still doesn’t somehow get you disbarred.

13 Likes

Short of messing with client funds, it’s very tough to get disbarred for misconduct.

That being said, touch one penny of a client escrow account and they bounce you out faster than you can blink.

17 Likes

ron burgundy GIF

6 Likes

Every ChatGPT session starts with disclaimers that the results may be incorrect. So this guy, whose job is to literally write fine print, didn’t bother to read the fine print.

11 Likes

I read an analysis article on either The Guardian or The BBC, I forget which. It said that ChatGPT often produced errors, but the system worked much better when used by an expert who could craft their queries. Unfortunately, you probably need to be an expert in LLMs to understand how to use the system best.

I wonder if the lawyer in this case thought ChatGPT was a kind of search engine. To be fair, Microsoft has been hyping up its ChatGPT aided search in Bing, so it is an easy mistake.

That said, a key principle of double-checking is that you get someone else to do it. All the lawyer needed to do was ask an assistant to spend 5 minutes looking up the cases on Lexis, and they would have been disproved.

Presumably the judge must look up the cases when he’s appraising the arguments. Otherwise at some point some of these fake cases will get into the system, and then the spiral into chaos and madness begins.

10 Likes

One more person who thinks AI is magic

9 Likes

I hope the judge imposes some good old Duckburg justice:

16 Likes

IAALS so I’ve heard stories and read cases.

3 Likes

It’s exactly what he thought. Apparently there were two lawyers involved, both with 20–30 years of experience. The first is the one who used ChatGPT. Then the case was removed to federal court, where that lawyer wasn’t admitted to the bar, so the second lawyer stepped in and just assumed the first lawyer’s research was legit. If either had spent 30 seconds reading the alleged cases ChatGPT gave them, or just double-checked them to make sure the cases were in Westlaw or Lexis, this would have been avoided. They failed to do the absolute bare minimum of verification. It literally would have taken only minutes.

10 Likes

So they were like these guys, with misguided faith in the “Historic Documents”

12 Likes