Courtroom drama: Lawyer caught using AI to defend using AI

As generative artificial intelligence (AI) becomes more commonplace, it has become an easy way to access information. Some people, however, use it when they shouldn’t, and others double down on its use even after getting caught, including one lawyer.


Specifically, Michael Fourte, an attorney in a New York Supreme Court commercial case, faced sanctions after submitting legal briefs riddled with AI-generated fake citations. When he tried to defend himself, he filed another brief that was also filled with AI-hallucinated cases, per a report on October 14.

In his decision on the motion for sanctions, Judge Joel Cohen said that the “counsel not only included an AI-hallucinated citation and quotations in the summary judgment brief that led to the filing of this motion for sanctions, but also included multiple new AI-hallucinated citations and quotations in Defendants’ brief opposing this motion.”

“In other words, counsel relied upon unvetted AI – in his telling, via inadequately supervised colleagues – to defend his use of unvetted AI.”

AI hallucinations meet the courtroom

As it happens, the case itself stemmed from a family loan dispute, but Fourte’s filings became the main issue after the court found that multiple citations and quotations in his defense brief did not exist in any real legal record. When the judge pressed him, Fourte said:


“I literally checked to make sure all these cases existed. (…) I looked at the cases, looked at everything, (…) but I did not verify every single quote.”

However, this explanation did not persuade the court. Cohen ultimately sanctioned Fourte, ordered him to pay the opposing counsel’s fees, and referred the matter to the New Jersey Office of Attorney Ethics.

Meanwhile, AI hallucinations, in which chatbots such as ChatGPT, Gemini, and Claude generate convincing but false text, have become a growing problem in the legal sphere. In another case, a Canadian couple used Microsoft Copilot to help them win a condominium dispute, but the AI fed them false information and they lost the case.
