WC.com

Sunday, July 13, 2025

Better look that up

I recently ran into Horace Middlemier*, Dean of a prestigious educational institution. The discussion was free-flowing and touched on multiple topics. But, as is normal this century, we turned to artificial intelligence (AI). The dean noted a recent example of a local lawyer being sanctioned for filing false information with a court.

Such sanctions were once newsworthy but have perhaps become commonplace, even mundane. For a database documenting the growing list of poor-performing lawyers, see Prosecuted for lying? (June 2025) and Another one rides the bus (May 2025). A complete list of my AI posts is on my website.

The dean expressed exasperation at the speed with which writing has declined in the last 24 months. Few are writing, and fewer still are writing well. The very idea of assigning a term paper has reportedly become anathema among some educators, who cannot cope with the various challenges that AI presents.

I have since had the chance to discuss these concerns with various educators, and the spectrum of views was clarified for me rapidly. I have been an educator for many years and am approaching 60 semester classes delivered. Through that time, I have leaned toward tests, quizzes, and homework rather than papers. When I did venture into papers, plagiarism and collaboration presented their own challenges.

What do the instructors face?

First, there is the grading. Tests can be processed through a grading machine in many instances. Even if the test is an essay test, the length of the answers can be limited. Term papers of 15 or 20 pages may simply equate to several evenings of difficult grading.

Subjectivity is a second point. Tests with definitive correct answers are easier to grade than essays and short answers. The subjectivity factor can make grading short answers and essays difficult, and there is the same concern with term papers.

There has always been the nagging concern about plagiarism, a suspicion that may be driven by papers that are notably articulate, inspired, and insightful. That is ironic, but it is what I am told. There may be some tendency to judge the quality of one paper with a comparative eye toward the others.

That plagiarism concern might be addressed with sophisticated software, and I have heard professors say they merely paste portions of a paper into a Google search; too often, some poetic license is readily apparent.

But in the era of AI, there is also detector software. The detectors are no more perfect than the AI that writes the papers. Imagine AI hunting for AI: Skynet versus Skynet. See Arms Race (May 2024). I saw that one coming. There are challenges. If an AI detector concludes that some percentage of a paper is not of human origin, how accurate is that prediction or conclusion? Can the instructor even count on a positive being a real positive?

Some have concluded that any AI detector is "an ethical minefield." That article points out the risk of false positives. There are citations to claims from various providers as to the infrequency of false positives. A rate of perhaps only 1% sounds promising, but in a class of 100 honest papers, even that rate means one student wrongly flagged. What if you are the one accused of being in the 1%? What if you are the professor striving to defend your grading conclusions and to defend such a tool?

I know, I know, the old men in the balcony are grumbling already: "What does this have to do with workers' compensation?"

Well, there is no difference between grading term papers and assessing written arguments, briefs, or memoranda of law. And that is what judges spend a great many hours doing (the prevalence of poor spelling, missing punctuation, and questionable grammar is similar in both settings). There should be some relief for judges in not having to assign a grade, but not necessarily.

There are abundant examples of lawyers citing fake, hallucinated authorities. They are usually noted by opposing counsel, and then trouble ensues, arguments are made, and orders entered. 

Judges should be able to count on four things:
  1. Honesty in fact of the lawyers in any proceeding
  2. Lawyers carefully checking their own authorities and verifying that they are real (actual statutes or cases) and accurate (they say what you claim they say).
  3. Opposing counsel checking their opponent's citations and pointing out errors under number 2 (i.e., "that case does not exist," or "that statute does not say that").
  4. That there will be errors, shortcomings, and interpretations (we are only human).
Appellate courts should be able to count on all of these, and on trial judges carefully checking what they rely on, from whatever source.
That fourth one is critical. The trial judge has to retrieve those cited authorities, read them, and form their own conclusions (subjectivity, see above). The judge is ultimately responsible. The judge better look that up. The judge is studying what the parties bring, doing their own research, and striving to get the outcome right. Day in and day out, the judge must be studying, reading, and verifying.

J.D. Supra reported last week that an appellate court in Georgia had to vacate a trial judge's order. The trial judge had relied on fabricated cases that were (apparently) hallucinated by an AI and cited in the pleadings (at a minimum, they were not real, whether or not an AI created them). Opposing counsel apparently did not bother to verify or contest them at trial, and the trial court relied on the falsehood. See Shahid v. Esaam, 2025 Ga. App. LEXIS 299; 2025 LX 214277.

The appellate court reportedly sanctioned the lawyer who originated the hallucinations. In a twist some might find ironic, that lawyer apparently was not awarded fees. The court noted, "Appellee's Brief further adds insult to injury by requesting 'Attorney's Fees on Appeal' and supports this 'request' with one of the new hallucinated cases." For some reason, Hamlet (Billy Shakespeare, 1599) and some petard come to mind. 

Thus ends the fallacy of reliance; see numbers 2 and 3, above. The judge cannot count on the lawyers to check their own work and avoid hallucination. The judge cannot count on the opposing party(ies) to check their opponents. Those halcyon days, it seems, are gone. 

The age of "IDK" and "IDC" has perhaps come indeed. See Ignorance and Ambivalence (July 2025). Unfortunately, this example reveals that perhaps the appellate court cannot rely upon the trial judge either.

        Courtesy Charles Schulz.

There has been talk of AI detectors for legal practitioners and judges (remember the "false positive" mentioned above?). There has been talk of making lawyers certify whether their pleading was prepared using AI. Would a certification cause lawyers to go back and verify, to look it up? Would we require judges to similarly certify? 

There is the reported practice of clients declining to pay attorneys for the seemingly mundane task of legal research ("you should know the law"). Lawyers tell me they do not check the opponent's authority because "the client won't pay for that either." In a world of litigation, with cases being decided daily, how could anyone "know the law" with certainty, thoroughness, and confidence?

Where does the fault lie? Where is the "holy grail" solution? Every lawyer and judge should be looking up the case law and statutes. 

Back to Dean Middlemier, with whom this post began. The most poignant observation that educator made concerned their reaction to the various damning news stories of lackluster attorneys (and now judges). The dean said their first reaction is to check the education of each malefactor in such stories. The dean's singular desire is to verify that the particular lawyer (or judge) did not graduate from the dean's school.

I am not faulting Dean Middlemier. I get it. How embarrassing if your graduate is the one hallucinating, or the one relying on the hallucinations. Imagine if it were your partner, associate, or fellow judge.

But the concerns about AI in the practice of law run deeper. "IDK" and "IDC" are seemingly becoming commonplace. The very future of the profession, the legal system, and ultimately society lies prostrate before us on the road, wounded, perhaps gravely. How shall the community respond? What aid? What remediation?

Should we just drive around it and pretend not to see? Should we try to help it up? Is there anyone we could call to its aid?

The Shahid decision identifies only the husband's counsel by name, Diana Lynch. Nonetheless, the Court of Appeals website says the trial judge was Hon. Yolanda C. Parker-Smith (who has apparently been on the bench less than 6 months), and counsel for the appellant was Mr. Vic Brown Hill (who may or may not have been involved at the trial level). 

Note to Dean Middlemier: it appears none of these attended your school. But that does not mean another dean or two, a law firm owner, or a client elsewhere is not SMH right about now.


Ed. Note - Horace Middlemier is not a real person but a figment of the author's imagination and experience; a literary tool or foil. Any resemblance to a real person is strictly coincidental and unintended.