
Tuesday, May 13, 2025

Big Data - Not Infallible

There are mathematical equations processing information about our every moment. Algorithms have become a ubiquitous part of our daily lives, powering everything from facial recognition on your smartphone to predicting which websites and advertisements might appeal to your current needs. These predictive computer programs are observing and assessing you persistently, perhaps without you noticing.

In their most fundamental form, algorithms are merely instruction sets that provide processes and procedures for computers. They perform some preset degree of analysis and prompt the next action or reaction. The algorithm is not "thinking"; it is completing tasks by relying on these programming instructions. Because it does so rapidly, it might be perceived as thinking.

This is not artificial intelligence. Artificial intelligence is beyond algorithms; it relies on algorithms and is intertwined with them, but it is a much larger concept. Algorithms, as instructions, tell machines what to do and how to do it. That is referred to as "top-down," in the same spirit as you instructing a child how to do something.

AI is a much broader construct that has been described, by contrast, as "bottom-up," and is more akin to children playing unsupervised. The children will learn from experience. They will gather many successes and failures, and they will acclimate to and assimilate those lessons.
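To make the distinction concrete, here is a minimal sketch in Python. Everything in it is invented for illustration - the rule, the cutoffs, and the sample data - and it reflects no real system:

    # Top-down: a human writes the rule, and the machine simply follows it.
    def flag_risk_top_down(prior_incidents):
        # Explicit instruction: flag anyone with three or more prior incidents.
        return prior_incidents >= 3

    # Bottom-up: the machine derives its own cutoff from labeled experience,
    # the way children learn from trial and error.
    def learn_threshold(history):
        # history is a list of (prior incident count, bad outcome occurred) pairs.
        # Pick the cutoff that best separates past bad outcomes from good ones.
        best_cutoff, best_correct = 0, -1
        for cutoff in range(11):
            correct = sum((count >= cutoff) == outcome for count, outcome in history)
            if correct > best_correct:
                best_cutoff, best_correct = cutoff, correct
        return best_cutoff

    past_cases = [(0, False), (1, False), (2, True), (4, True), (5, True)]
    learned = learn_threshold(past_cases)  # learns a cutoff of 2 from this data
    print(flag_risk_top_down(2))           # False: the hand-written rule says no
    print(2 >= learned)                    # True: the rule learned from experience says yes

The first function simply obeys the rule a human wrote; the second derives its own rule from the outcomes it has seen. That, in miniature, is the difference between instruction and experience.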

Neither approach is perfect, or even approaching perfection. The failure potential was recently illustrated by the Spanish police, who use an algorithm to analyze data and predict risk. Their process involves a 35-question inquiry into the situation of people alleging domestic violence, including:
"the abuse and its intensity, the aggressor's access to weapons, his mental health and whether the woman has left, or is considering leaving, the relationship."
The computer then assesses these variables and reaches a conclusion as to whether the risk or threat is actionable. 

The British Broadcasting Corporation (BBC) recently reported on the complaints of Lina, who sought police protection and a restraining order in January. The algorithm predicted that she faced a "medium risk" of violence, and the restraining order was denied. Shortly thereafter, she was dead.

The BBC notes that the program in use in Spain, "VioGen," is similar to programs in use elsewhere. It describes various processes and programs that similarly strive to tease out particulars and details, which are then considered in a big-picture process. In this way, various potential facts are assigned weight or significance, and an overall quotient, average, product, or result is produced.
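At its simplest, that kind of weighted scoring might look like the sketch below. The question names, weights, and cutoffs are invented for illustration; VioGen's actual inputs and formula are not public:

    # Hypothetical weights for a few of the kinds of questions the BBC describes.
    WEIGHTS = {
        "aggressor_has_weapon_access": 3.0,
        "abuse_intensity_escalating": 2.5,
        "aggressor_mental_health_concern": 2.0,
        "victim_leaving_relationship": 1.5,
    }

    def risk_level(answers):
        # Sum the weights of the "yes" answers, then map the total to a band.
        score = sum(weight for question, weight in WEIGHTS.items()
                    if answers.get(question, False))
        if score >= 6.0:
            return "high"
        if score >= 3.0:
            return "medium"
        return "low"

    # Two "yes" answers total 4.0, which falls in the "medium" band.
    print(risk_level({
        "abuse_intensity_escalating": True,
        "victim_leaving_relationship": True,
    }))  # -> medium

Change one weight or one cutoff, and the same answers can land in a different band. The output looks precise, but it is only as sound as the choices a human built into it.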

Fans of the algorithm claim that it efficiently manages information, allowing effective allocation of resources and response. There is significant faith in the algorithm's output, with the Spanish police relying on its conclusions "95% of the time." Nonetheless, there is acknowledgment that the technology fails, as it did with Lina.

Lesson one today - computer programs are not perfect. 

Beyond that, there is the potential for humans to fail. The BBC reports that the restraining order recommendation from VioGen is only part of what each judge reviews in making a decision about restraining orders. The judge may grant an order for someone at "low risk" or may deny one for someone at "high risk." 

That is critical - the computer is assessing, measuring, quantifying, and reporting. A human is making critical decisions.

That will remain true regarding any software. Event the relatively simple Grammarly that is signaling me as I write this. It finds fault, makes suggestions, and proposes rewrites (it did not notice the first word of this sentence should be "even," not "event"). I accept some, reject others, and frankly do not comprehend still others. It is influencing me, but I am writing this. 

Today, the technology exists to analyze data as never before. Programs are capable of addressing incredible volumes of data points and producing amazingly rapid output. And the speed and sophistication continue to increase.

With the advent of AI, this is now possible with either "top-down" or "bottom-up" analysis, either of which will produce results, outcomes, and conclusions. Thus, it is possible to analyze a problem or simply search vast data to find a problem. 

And each process may produce highly relevant, pertinent, and persuasive conclusions. Or, each may produce junk. Each may be exemplary or worthless. 

In the end, it was a judge who assessed the risk to Lina. Lesson two today - humans are imperfect, their choices will be imperfect, and while machines will help us, they will not make us perfect. 

The judge in Lina's case was a human relying on what was apparently the best technology available, and on the data it produced. The outcome of this interrelationship will draw criticism and introspection. The micro-outcome here will be weighed against the tool's perceived macro-usefulness or effectiveness.

We have to consider Mitch Ratcliffe, who observed that a computer lets you make more mistakes faster than almost any other invention. See AI and the Coming Regulation (September 2023), Cybersecurity 2021 and WCI (May 2021), $11 Billion (December 2023), and Everybody Wake Up (October 2024).

This is pertinent in the workers' compensation world because big data has been part of this community for years. A multitude of programs are currently weighing various elements, indicators, and predictors. Algorithms and AI are predicting the likelihood of surgical intervention, the probable duration of disability, the likely permanent impairment, future medical costs, and more.

The day of the machine is not dawning; we are merely awakening to its presence. As we do, the two lessons here will be critical touchstones. Because of them, computers and the human touch will remain co-equal, critical, intertwined, and necessary.