WC.com

Sunday, December 31, 2023

AI Incognito?

The year draws to a close. December 31, 2023. In 2024, I will mark a dozen years of memorializing thoughts on this platform. When I began, there was a spell-check, but it was not so reliable. Over those years, I have written extensively about Artificial Intelligence and its potential to impact us. I noted some examples of that in a post in October, AI is a Tool (October 2023). I first touched on AI nine years ago in Attorneys Obsolete (December 2014).

Without a doubt, artificial intelligence was the news of 2023. It impacted the law and lawyers.

In any instance, lawyers have a duty of Candor to the Tribunal (April 2018). That is broad, simple, and yet complex. In the broadest and simplest context, it is a throwback to J.K. Rowling's 2003 Harry Potter book, The Order of the Phoenix. The malicious evil there was embodied by a teacher who punished by making a student repeatedly scrawl "I must not tell lies." There you have it, in its simplest iteration. Lawyers need to be accurate. See also Dead Men Tell no Tales (February 2018).

There is an obligation to tell the truth. This is discussed in The Representations We Make (March 2019). It is critical and simple that lawyers must tell the truth. That said, we will all make errors and mistakes. I heard an attorney hypothesize once, "Wouldn't it be ironic if an attorney made a misrepresentation of fact in a case in which she/he was alleging the injured worker should forfeit all benefits due to misrepresentation?" That is indeed interesting.

The breadth, and complexity, come from the affirmative duty to correct mistakes. See Candor, Omission, and Persuasion (October 2021). Beyond telling the truth, lawyers are obligated to remain cognizant, alert, and conscious of their representations. If a lawyer makes a misrepresentation, it is on the lawyer to notice it, and to inform the tribunal of the error.

That rule was critical for two attorneys, Harry and Lloyd, who practice law in New York. See Mamma Always Said (June 2023). These two filed a memorandum in Federal Court. They had done their legal research with Artificial Intelligence and filed the hallucinated results with the court. Funny? Perhaps. Unfortunately for them, many judges "do not have a sense of humor we're aware of." (Men in Black, Columbia, 1997).

The Judge entered an order to show cause. The lawyers were afforded an opportunity to explain their citation to "six fictitious case(s)" that the AI made up. The AI, it seems, is quite susceptible to hallucination and imagination. Reuters reported that the judge ultimately ordered each of the lawyers to contribute to an aggregate fine of $5,000.00. That is likely "real money" even in the big city.

The story, and sanctions, were covered by CNBC, The American Bar Association (an informal, voluntary attorney group), CBS, The Maryland State Bar, The Wall Street Journal, Forbes, and more. AI was the talk of the legal community. I have heard AI here, and AI there. I wonder if there is a lawyer in the world who is not aware of AI and its tendency to make up answers. Some would argue that the entire legal world has been on due notice since June 2023. 

Well, Zachariah C. Crabill is apparently quite aware now. Colorado Politics reports Disciplinary judge approves lawyer's suspension for using ChatGPT to generate fake cases. That is significant. Suspension means not practicing law, earning income, or serving clients. Mr. Crabill used a template for a motion and then sought to "bolster his legal citations" using ChatGPT. ("Danger Will Robinson," Lost in Space, 1966 - an AI Robot warning a human . . . in 1966).

Mr. Crabill's AI reliance was, given the lessons learned last June by Harry and Lloyd, not wise. However, Mr. Crabill added the "case citations to his brief without verifying their accuracy." And he proceeded to hearing. But, through some inference, suggestion, or intuition, he began to doubt. He was concerned enough about the falsity of those citations that "he texted his paralegal" and expressed the concern that "all of my case cites from ChatGPT are garbage."

Mr. Crabill then sought to do the actual work, real research, in hopes of "find(ing) actual case law in our favor now to present to the judge." The hearing began, and the judge first raised the issue of the citations and authority. Faced with this challenge, Mr. Crabill would have been well served to have read Don't Double Down Dummy (June 2017). Or, he might simply have recalled Will Rogers' famous "If you find yourself in a hole, stop digging." Too late.

Mr. Crabill decided that the best course would not include admitting the error and seeking to correct it. Instead, he told the judge that the error was made by "a legal intern in this case, who, I believe, got some mistake." It appears that Mr. Crabill did not prevail on the motion, nor on a subsequent motion that was "denied on separate grounds from the 'fictitious case citation'" denial. It appears from this that the judge was gracious in allowing the filing of a second motion after the "faking" and the misrepresentation. The client still lost, but at least that was not (directly) because of the misrepresentations.

The attorney was disciplined with a "two-year suspension, only 90 days of which Crabill would serve as long as he otherwise completed a probationary period." That is three months of no legal practice, no client service, and likely an interruption in income. He reportedly also lost his job over it. That is a serious reminder that (1) ChatGPT is neither flawless nor reliable, and (2) we must not tell lies. If we make a misstatement, own it. Admit it. There is a real power in admitting mistakes and simply apologizing for them. Step one: Quit Digging. Do Not Double Down!

That life lesson in Colorado has not made the national news. It made the Volokh Conspiracy. The Business Insider covered the story when the young lawyer was fired for his transgressions. Fortune picked up the story as well. These are widely read, but one might certainly miss coverage that is not on the national news feed with the important stories of the day concerning the real news like Ye, Kim, football, and who was seen with whom and where. I remember when tabloid sensationalism and hyperbole were actually looked down on. Thank you, social media, for all you've done.

So, anyone might have missed Mr. Crabill's story. Perhaps, even with its national coverage, someone might have missed Lloyd and Harry's sanctioning in New York last summer.

ABC News reported this week that "Former President Donald Trump's onetime fixer Michael Cohen" apparently did not see any of those stories. He is striving "for early termination of his supervised release." He hired attorneys to represent him. Then, he did a little old-fashioned legal research in the new and modern way with Google Bard. That program provided him with "invalid citations" which he provided to his attorneys.

Might he have missed the stories about lawyers in trouble for AI hallucinations? No, Mr. Cohen asserts that he did not know Google Bard was AI. He asserts that he "mistakenly believed Google Bard 'to be a supercharged search engine, not a generative AI service like Chat-GPT.'" But, Mr. Cohen is a former lawyer. Will the judge in his case hold him to the standard one might hold a lawyer? Could the judge perceive the "I didn't know" in the same light as experienced by Lloyd, Harry, and Mr. Crabill?

The lawyers to whom Mr. Cohen provided his hallucinated citations will ultimately be responsible for having provided them to the judge. One of those attorneys, Mr. Schwartz, has explained he "would have researched them" before providing them to the judge, but did not because he believed those citations came from another attorney ("found by Ms. Perry"). Some might see that as similar to the original Harry and Lloyd scenario.

Mr. Cohen is famous. His AI reliance has made ABC News, the Associated Press, the Washington Post, CNN, NBC News, Fox News, NPR, The New York Times, and even Yahoo. There are perhaps those who missed Harry and Lloyd last June. Some may have missed Mr. Crabill's suspension and termination. But, perhaps every lawyer in the world now knows two things: (1) artificial intelligence hallucinates, and (2) you can get in trouble by relying on it.

Every lawyer should also know a few additional things. (3) Ignorance is no excuse; (4) "my partner (associate, law clerk, etc.) did it" is not an excuse; (5) judges are seemingly unimpressed and unforgiving on misstatements; (6) the potential detriments are noteworthy and expensive; (7) lawyers have the obligation to correct misstatements (think about whether Mr. Crabill would be suspended if he had first raised the hallucinations and sincerely apologized); (8) the lawyer who signs a pleading is responsible; that lawyer has a duty.

With every story that is published, with every blog post written, the "I didn't know" will likely find less forgiveness. The press coverage has now been so broad that "I didn't know" may strain belief in the eyes of some. When is it time for lawyers to recognize the threats of AI? Yesterday.

Don't be Harry, Lloyd, Mr. Crabill, Mr. Cohen, or Mr. Schwartz. Lawyers are responsible for their representations. Do your own verification of citations and authorities. If there is a misstatement, own it. With AI, there is convenience and perhaps assistance. There is also challenge and potential, good and not-so-good. Be aware. Be cautious. Be responsible.