I have been beating the drum on robotics and artificial intelligence for a few years now. Almost a decade ago, I warned you about AI and the advent of large language models (LLMs) in Ross, AI, and the new Paradigm Coming (March 2016), and about their power and abilities in Chatbot Wins 160,000 Legal Cases - the "Future is Now" (June 2016). I have been persistent since then with posts like Technology Impacting Judging (March 2019). This is all old news.
Credibility is a major challenge for adjudicators. A myriad of issues may affect the demeanor and presence of any witness. I have written on credibility a great deal over the years. See Credibility Again (June 2023) and the posts cited there. Perhaps my favorite and most unfortunate credibility anecdote is The Chair of Truth (February 2018).
There is evidence that computers are far more effective at detecting "liars ... than humans with the naked eye." The University of Buffalo reported in 2012 that computers are correct "more than 80 percent of the time." That was over a decade ago also.
Make no mistake:
Computers can be more effective than humans at assessing and expressing.
Computers can be effective advocates.
Computers are adept at processing vast data quantities and complexities, in ways we may not even understand.
Computers may make better judges than humans (I equivocate with "may" in my own self-interest).
There are those who are scared of these tools, their evolution, and their impacts on us. In March 2025, a computer tried to represent someone in a court proceeding in New York. Well, in fairness, that is what the Associated Press (AP) headline might lead you to believe:
That is a bit misleading. What factually happened in "the latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world" is just another instance of a litigant using AI. The pro se party asked the Court's permission to "submit a video" instead of presenting an in-person argument.
As the video began, a "smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater" led with the classic "May it please the court." The AP says this was stated by "the man" in the video, but there was no man. It was a digital compilation, a computer creation, that looked like a man. This is a far cry from early avatars like Max Headroom, which were mere parrots. In reality, they were actors pretending to be the harbingers of our future.
This photo was conjured by GROK AI and is its interpretation of the prompt "smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater." It is not a real person, and any similarity or likeness to any real person, living, dead, sentient, or otherwise is purely coincidental.
Bad faith?
The Court shut down the video instantly. The pro se litigant was accused of misleading the Court. The litigant was then allowed to proceed with his argument corporeally: bodily, physically, and live. He had to have his say from his own mouth. He had to speak his own mind, in person, in fact.
The Court did not sanction the litigant. But the litigant wrote "an apology to the court." He explained that "he felt the avatar would be able to deliver the presentation" more adequately and effectively than he could personally. He was afraid he would personally present "mumbling, stumbling, and tripping over words."
The litigant conceded that his video attempt left the court "really upset about it." His perception was that the court "chewed me up pretty good." What upset the judges?
Should the Court have been perturbed? If you answer yes, then ask why. The litigant not being frank with the Court is an absolutely appropriate reason for the Court's disappointment, perturbation, or even ire. It is not appropriate to mislead a Court by commission or omission. Hard stop.
If the reason the Court is perturbed is simply that new tech was employed, then perhaps further introspection is warranted. Yes, it is new, unfamiliar, and even alien. Yes, it may be uncomfortable in its novelty. That said, one might wonder what difference it makes whether an avatar reads someone's statement or another human does. In that light, might a chatbot represent a litigant? We know that has happened. But we would know, right? We should know, right?
All that said, the most recent study seems to suggest that AI LLMs are beginning to pass the Turing Test; see X-Files or Poltergeist? (November 2024). This means that, in the scope of our expectations and fantasies formed over decades, the "be-all" has occurred in the field of AI, a be-all that has been dreamed of since 1950.
This does not mean further evolution or revolution does not lurk in our future (it does), but it is a major milestone nonetheless. It was the impossible dream of Turing, and yet here we are. Having reached this goal line, others will set new goal lines, even as the stragglers catch up to this Turing point.
The news is that the University of California San Diego says it tested four LLMs: GPT-4o, LLaMa-3, GPT-4.5, and Eliza. Two of the four passed the Turing Test rather convincingly: LLaMa-3 and GPT-4.5. The two that failed still performed reasonably well, but not well enough to pass. Focusing back on the thoughts in Is Gartner Helpful on AI? (December 2024), this is indeed "the end of the beginning."
It is significant that the successful Turing test results are attributed in part to having "the synthetic mimics adopt a human persona." In other words, the programmers told the AI LLMs to seem human. That made them more believable. That credibility is exactly what the pro se litigant in New York hoped for and sought with his attempt to use AI as his spokesperson.
As AI evolves into our day-to-day, the power-user will ask it not only to write a brief but to write it in the voice of the judge who will review it. Familiarity and flattery will get you everywhere.
The power users will evolve from asking an AI to describe a situation to asking it to describe the facts in a manner most sympathetic to one view, side, or perspective. In a future post, I plan to explain more about our predispositions, but the bottom line is that we humans are malleable and manipulable.
The "end of the beginning indeed." we are off the edge of the map. The tools are increasingly capable and effective. We are unprepared to engage them, use them, and guard against their potential failures or shortcomings. These are strange days indeed. What are you doing to prepare yourself?