
Sunday, April 7, 2024

Nothing to See, Move Along

Artificial Intelligence is back in the news, with a state court judge confronting the admissibility of a "video enhanced by artificial intelligence." It is a criminal case in Washington state, and there is a familiar discussion in the reasoning that led the trial judge to exclude the "enhanced" evidence. The story illustrates that perhaps we already possess the tools to evaluate and assess the impacts of AI in the realm of litigation.

I suggested as much previously. See AI is a Tool (October 2023) and A Fool With a Tool (January 2024). The theme there is that AI shares similarities with various evidentiary challenges. It is a new challenge, but not so dissimilar from those we have seen before.

NBC News reports that there are issues of interpretation and extrapolation in the use of AI. A video or photograph produced by artificial intelligence is not a mere reflection of a thing, person, or occurrence. The AI product is an interpretation of "what the AI model 'thinks' should be shown." That is, it is an interpretation or translation. The Washington court took issue with that.

There is an obvious challenge with the interpretation. But, in fairness, translation has existed in legal proceedings for generations. It is common for a witness to testify and for that testimony to be filtered through another human being, a "translator." We have seen many challenges with that. Some translators are adept at "word for word," and others are "not so much."

We have all experienced translations that were interpretations, in which words and phrases were parsed, paraphrased, and clarified or explained. Instead of a word-for-word "translation," we have seen an extrapolation or explanation. That is perhaps a matter of expediency, or it may result when there is no exact word or phrase in English.

I have heard translators testify that there is no exact corresponding English. I have heard them describe colloquialisms, slang, and other challenges. There are regionalisms, dialects, and more that may influence how an input is perceived and then what the output (translation) is. Is an AI program so different? Well, perhaps.

The Washington judge was, in part, critical of the translating, but was also seemingly unimpressed with the foundation for the visual evidence. He criticized the program's "opaque methods to represent what the AI model 'thinks' should be shown." That is not criticism of what is produced or shown, but of the process: how it functions, and how well we understand how it functions.

The judge's analysis concerns a novel and challenging technology, yes. But the analysis itself was not novel. It is an analysis that every trial judge has confronted and one for which various evidence codes and rules have already prepared us. The judge weighed the probative value of the evidence against the potential for undue prejudice. That happens daily in trials and hearings across the country.

The Washington court concluded that the AI-enhanced material here carried a significant risk of prejudice. In excluding it, the court concluded that "admission of this AI-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony." In short, it was more prejudicial than probative.

In addition, the judge expressed concerns about the foundation, that is, an explanation of the AI process, tools, and programming. Anyone might question how the AI makes decisions, what alterations or changes it makes, and how those are reflected in the end product. In short, it is not "that" we see Forrest Gump standing next to a long-dead ex-president, but "how" the computer placed, postured, and reflected that image.

The judge concluded that striving to understand those processes would perhaps involve forensic testimony and expert testimony. This was characterized as potentially "lead(ing) to a time-consuming trial within a trial." That is a deep-dive exchange of evidence about the programming, the methodology, and the process. And yet, that also happens every day in trials and hearings across the country.

Such "trial(s) within a trial(s)" are not novel. In various disputes, there have been such diversions or detours on many occasions. They are necessary whenever a new technology or technique finds its way to litigation. Those hearings are about exploring foundations, assumptions, and innovations. 

They are necessary to probe the foundations in a variety of instances. The entire point of the Daubert analysis is such inquiries and deep dives before evidence is admitted. See Daubert in the Courtroom (August 2019) and Daubert's New Day (May 2019).

This ruling is not about AI, but about foundations. The ruling is about how and whether computers will be used to interpret evidence and to produce "enhanced" or "interpreted" output. The specifics in Washington involve a cellphone video that might establish a "self-defense" avoidance.

The resulting video, however, is not a reflection of the events. It is an interpretation of the events. Prosecutors objected, explaining that the results were "predicted images rather than reflect(ions of) the size, shape, edges, and color captured in the original video." The judge agreed. It was perhaps important that the company whose software was used had advocated against using its Hollywood tool for litigation purposes.

The result is not novel. To be admissible, a photo or video must accurately reflect the actual circumstance or occurrence. That is foundational. The evidence here did not support that the AI production did that. The evidence suggested that parts were removed and parts were added. The result would be the same if a paper photo were trimmed with scissors and adjusted with a Sharpie.

The outcome is the same. Only the means of altering the image is different. The ruling is not about AI. The ruling is about evidence. That happens every day in thousands of rooms coast to coast. Nothing to see here, people. Move along.