We are in a world that is transitioning around us. The change is evolutionary at least, and at moments "revolutionary" is the more accurate and descriptive word. There has been much discussion recently about artificial intelligence. AI is not news to me; I have been writing about it since 2016, see Intelligence? (November 2022) for links to older material. It has come up again more recently. See AI and the Latest (June 2023) and You're Only Human (May 2023). Are the challenges really bigger today, or even new today?
I was recently in a conversation (peripherally) in which several really bright people were expounding on the technology wave that they perceive. They see the potential for change coming in the practice of medicine and in that both promise and threat. They are impressively aware, analytical, and insightful. It was a great opportunity to listen. We should all look for opportunities to listen to smart people.
Then the "f" word came up. And that was a favorite, it seemed. Not in the Taylor Swift, Olivia Rodrigo, Lana Del Rey context (different "f" word). This post is not about that first one, from Gayle, Demi Lovato, and others. I know I will never sing the alphabet the same way again. A, B, C, D, . . . . Gayle gets so confused about the letters' order. But like that other one, this "f" word is being heard more often in polite conversation, and some of that is directed at computer programmers and artificial intelligence.
The second one, less obvious or pernicious perhaps, is "f"raud. We do hear the word a fair amount in litigation. It is heard far more often in terms of injured workers and their complaints. There has been a trend perceived over the last twenty years. Whether true or not, people express a perception that the "f"raud word is pleaded increasingly often. But there is no "f"raud defense in Florida. See Misrepresentation Defense (February 2017).
But the good folks at Public Broadcasting (some still find them credible despite dubious representations such as "in the 1970s, benefits to injured workers sunk so low . . .") say that "only 1-2% of workers' compensation claims are fraudulent." That said, they cite estimates of the cost of fraud to be $1.2 billion to $5 billion. Remember Dirksen (or the reporter that quoted him): "A billion here, a billion there, pretty soon you are talking about real money."
The PBS team notes that there is evidence of fraud on the side of employers. They note a thirty-year-old Florida "compliance audit" that demonstrated employers not fulfilling their obligation to have workers' compensation coverage. There was also discussion there of "cheat(ing) the system" through failing to accurately report payroll or through "falsely representing employees as independent contractors." Misclassification is nothing new. And, there is evidence that "f"raud exists on both sides of the employee/employer relationship.
So, this "f" word is nothing new. There are some who take advantage of the world of workers' compensation. Some pharmacy operators were just convicted in a $145 million case. Reuters says it occurs in various medical settings, including rendering of unnecessary care and billing for care not provided. But why are these smart folks discussing it now?
Artificial intelligence presents opportunities to leverage technology. But the delivered work product may not be a human work product. It may not be as thorough or personalized as one might expect or anticipate. It may be computer-generated and technology-leveraged. Those seem like detriments.
In the hands of a well-experienced expert physician, with the background to see error or misconception, it is perhaps a ready and valid tool. In the hands of a less experienced newcomer standing on the shoulders of many predecessors, the computer-generated output may be too easily trusted, and too readily endorsed. That worries some of the smart folks.
But that is not the "f" word. The conversation above about AI evolved to the current technology, the programming. One physician explained to me that technology in the examination room has diminished efficiency. The doctor spends significant time typing and documenting.
There are boxes to check and alternatives to select. There is, it seems, a great deal of data being accumulated. This doctor denies seeing how that compilation of data is benefiting patients. Some of it may be AI, in that some boxes must be checked for certain patients, and not for others. The computer program, at least, is perceived as fluid in its requirements.
Then the conversation explored that data entry process. The goal seems to be reducing information to data that can be harvested, analyzed, categorized, and processed. Thus, there are "yes" and "no" questions that some say lack any opportunity for other responses like "maybe," or "as yet unknown." They express a feeling of being pushed to a conclusion even if that is presently at best conjecture or guessing.
The physician laments that to be credited for their work with a patient, some data entry systems simply insist on a "yes" or "no" to these questions. Despite being unsure, the doctor clicks the best answer for today and creates a digital trail that may work well toward the goal of simplified data analysis ("three out of five doctors recommended sugar-free gum"), but perhaps not so much toward quality patient care for this patient.
Then, they described that some of the system limitations are more troubling. These ask the doctor to select entries from a drop-down menu. This doctor complained that none of the entries may be wholly appropriate, but the system will not allow a free-hand answer or a deferral of the question. Thus, the doctor is making a representation that is seemingly less-than-true.
She/he is doing so because the patient's care cannot proceed until the computer is satisfied. (Make no mistake, I am not a critic: "I, for one, welcome our new robot overlords.") This occurs despite the potential impact on care (the first reason anyone goes to the doctor is quality care; no one ever picked or recommended a doctor based on penmanship or "most boxes checked on form").
The general sentiment seems to lament that doctors are saying they did things (testing, procedure, extent of exam) that are not true. Well, not in the strictest sense. Once we start equivocating on what is "true enough," we have issues. The doctor picks the "drop-down" choice that is closest. The result is that "neurological evaluation" may be selected, but the services rendered do not match that category. As an employee of the medical facility, the doctor may or may not be able to fill in a blank and explain.
Well, that computer is going to bill someone for a "neurological evaluation" (work not actually done, see Reuters above). You can decide if that is "f"raud. But what of the patient? That patient may have a work accident in two, ten, or twenty years. When asked during a deposition "Have you ever had previous nerve complaints," the worker may validly deny. The worker may have no idea of such complaints, in her or his vernacular.
Yet, that old medical record showing "neurological evaluation" might be foisted upon that patient. That may lead to an accusation of "f"raud. The chances likely increase if a course of care lasting weeks or months back then was founded on repeated iterations of "neurological evaluation" that became a system default for the patient after that first compromised use of the drop-down.
However, the patient did not make neurological complaints. The doctor did not test or treat neurological complaints. The doctor picked the best "drop-down" available in a computer system that insisted on all blanks being filled. The physician did her/his best in an imperfect world. The patient did her/his best in accurately describing the past care. The workers' compensation attorney files an allegation of "f"raud or the equally effective "m"isrepresentation.
The programming may be focused on gathering good statistics. But, if the selections are inaccurate, what is the output? What if that one doctor laments the system in a coffee break: "Yeah, that happens to me, I just choose 'neurological evaluation.'" The doctor she/he tells may tell two more doctors. Pretty soon, "neurological evaluation" is the "go-to" entry used in that program to avoid delay and frustration (the system won't let you progress until an answer is provided). Might such a facility see a disturbing increase in neurological patients? A "cluster" might be epidemiological or might be convenience in the new order.
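The distortion described above can be illustrated with a small sketch. This is a hypothetical simulation, not any real charting system: the menu options, the counts, and the "go-to" workaround are all assumptions made for illustration. It shows how a drop-down that lacks a fitting choice, combined with a convenience default, inflates one category in the harvested data.

```python
from collections import Counter

# Hypothetical drop-down options in a charting system; the list and the
# "go-to" workaround are assumptions for illustration only.
DROPDOWN = ["neurological evaluation", "orthopedic exam", "wound care"]

def chart_visit(true_service):
    """The form forces a pick from the menu; any service the menu lacks
    gets recorded under the convenience 'go-to' entry."""
    return true_service if true_service in DROPDOWN else DROPDOWN[0]

# What hypothetically happened in 100 visits: most were routine services
# the menu does not list at all.
true_services = (["general follow-up"] * 60 +
                 ["orthopedic exam"] * 25 +
                 ["neurological evaluation"] * 15)

actual = Counter(true_services)
recorded = Counter(chart_visit(s) for s in true_services)

print("actually performed:", dict(actual))
print("recorded in system:", dict(recorded))
# The harvested data reports 75 "neurological evaluations" where only
# 15 occurred: garbage in, garbage out.
```

The "cluster" of neurological patients here is not epidemiology; it is an artifact of the form design, which is the worry the doctors expressed.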
Another doctor said that the impact of technology has been a diminished capacity for seeing patients. The time spent typing, clicking, and checking is perceived as taking away from the patient. That is true even before the system gets insistent on some missed check-box, blank, or drop-down. One doctor said that it is typical to be "charting" on the weekend to catch up with all the digital input that is required. The overall sentiment was seemingly that technology is a detriment to care, whether in volume of care or quality.
Seriously, what do I want my doctor to focus on when I present? Hint, it is not statistics, penmanship, or box-checking. It is really not even computers. I would like care and treatment. Give me attention, bedside manner, and relief. Aye, there's the point.
And none of that addresses the "f" word. Are physicians being pigeonholed into making misrepresentations? Is the confinement of what notes the systems allow, combined with the ability of such programs to compel an answer, resulting in undeniable yet unavoidable misrepresentation? Will that change care? Perhaps not for this patient and doctor, but what will happen when the AI is tasked with examination of this data set for future care? Will AI be impacted by the potential for "garbage in, garbage out?"
Are such questions being asked by anyone out there? Does anyone care? There is a physician shortage. It is worsening. The Boomers are aging, and physicians are in demand. Is their time devoted to patients or our computer overlords? Is the result of this data demand better care or future challenges? There is much to unpack and consider.