WC.com

Sunday, July 20, 2025

Indeed Sancho Panza

There is a fine proverb noting "the proof of the pudding is in the eating," Don Quixote, Miguel de Cervantes Saavedra (1605). So many years ago, this was the expression used by the loyal Sancho Panza to explain, essentially, that it is the actual experience that provides the proof of an outcome. 


Conjecture aside, it is difficult to argue with. Of course, the point can be made that any outcome is anecdotal, transient, or unreliable. An outcome is not always definitive proof of more than the incident that is directly involved. This is why researchers and scientists repeat tests, attempt alternative inquiries, and argue about cause and effect. One result does not usually make an irrefutable conclusion. 

Thus, there is perhaps some support for the perceived threat of artificial intelligence in the recent Your Brain on ChatGPT, a 206-page study published in June 2025. The outcomes and predicates are both fascinating. 

In a fit of "confirmation bias," I might remind you that I have been concerned about disuse atrophy for some time. See Disuse Atrophy (December 2024). That post mentions Idiocracy (20th Century 2006), as do some other posts, such as Sharing a Drink Called Loneliness (May 2023), Are You Innumerate (July 2018), and We are Regressing (March 2025). It seems axiomatic, forgive the expression, "use it or lose it."

Well, a group has now "explore(d) the neural and behavioral consequences of LLM-assisted essay writing." With some grouping and differentiation, the tested individuals were asked to write three essays. Each group was allowed varied resources:
  1. LLM (access to a Large Language Model - Artificial Intelligence, "AI")
  2. (access to) Search Engine
  3. Brain-only (no tools)
After the three "sessions," in the fourth round, the groups remained static, but some of their access to tools was altered:
"LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM)." 
The participants' "cognitive load" was measured with electroencephalography (EEG). In addition to this measure of brain engagement, the resulting fourth essays were analyzed by Natural Language Processing (a type of AI), human instructors, and "an AI Judge."

The results were troubling. The
"EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity."
The efficacy of the human brain was diminished, or showed decreased engagement, most significantly in the people who had been afforded the greatest help, the LLM, in the first three sessions. The results were noted in "reduced connectivity," "under-engagement," "memory recall," and engagement of the "occipito-parietal and prefrontal areas."

In a nutshell, those who exercised their brains in the first three sessions were more likely to display mental ability and agility in the fourth round. The conclusion: "While LLMs offer immediate convenience, our findings highlight potential cognitive costs."

But wait, perhaps there is more.

When studied over time, the "LLM users consistently underperformed at neural, linguistic, and behavioral levels." This, the researchers conclude, "raise(s) concerns about the long-term educational implications of LLM reliance and underscores the need for deeper inquiry."

This is of particular interest because evidence is mounting that a generation has abandoned thinking and studying. They are using AI, without inhibition or regret. An "engineering student at UCLA" reportedly "pulled out his laptop and proudly displayed how he used ChatGPT to complete his final project." There was no apparent reticence or fear of repercussion.

Afterward, the student denied that his use of the AI paradigm was cheating. He explained that he had deadlines, competing priorities, and therefore "used ChatGPT to finish strong." Will that matter? Some suggest this student may "struggle to find a job after graduation," and there is mention of doubt regarding "how much he actually learned while earning his degree."

There are implications and questions. Who wants their doctor to be reliant only on what the internet says (search engine) or the conclusions of AI? In that vein, anyone can Google their symptoms and likely find a page with an answer that is comprehensible to any lay person (someone who is not a doctor). Other than getting a prescription, what is the benefit of seeing the doctor if the internet is all they know?

This is likely a more difficult question if the doctor is ignoring the analysis necessary for comprehending the Google result(s) and instead just lapping up the spoon-fed AI-LLM conclusions gleaned from the vast array of internet data, including the collective wit and wisdom of Wikipedia. See Are I Diminishing? Am You? (May 2025).

Would you want a doctor who used AI instead of studying? An engineer, an accountant, a lawyer? There were various comments noted regarding the UCLA gentleman and his achievement of graduating with the help of AI. Would you agree that he is as prepared to help you solve problems as any non-AI-engaging engineer? Some would argue he is demonstrably more prepared due to his AI savvy. Others will disagree. 

There will be more studies. One study does not often answer all perspectives, concerns, or questions. That said ...

In the end, it is possible that we will be lulled into reliance and eventually become obsolete ourselves. I cannot even remember anyone's telephone number anymore; it is all in my phone. As comedian Kathleen Madigan once described it, "My brain is that phone." That is not because we were told to dump that information. We were given convenience, we forewent using our brains for that task, and we lost the ability (I do remember my phone number from 6th grade, but not those I have had since; that is odd). 

What else will we lose, and how fast? Well, Sancho Panza, that is indeed a worthy question of proof. Where is the pudding?

Thursday, July 17, 2025

Crowd Wisdom

This blog has focused on the challenge presented by the evaluation of large data sets and questions that evade scientific analysis. Adapting to the conclusion that science alone cannot answer some inquiries, the RAND Corporation pioneered the Delphi Method of consensus building. See Consensus in the Absence of Proof (January 2021).

There have also been those who criticize group dynamics. George Carlin (1937-2008) is perhaps the most memorable with "Never underestimate the power of stupid people in large groups." A similar sentiment is expressed by Agent K: "A person is smart. People are dumb, panicky, dangerous animals and you know it." Men In Black (Sony 1997).

Nonetheless, there are benefits from group analysis, collaborative or not. The Delphi example is one, but is dependent on the participants possessing expertise that is brought to bear on the challenge. What of the common man?

In the 19th century, a polymath named Francis Galton stumbled on a statistical phenomenon in estimation, now called the Wisdom of Crowds. The legend of this method holds that Galton witnessed a contest in which people strove to guess the weight of a cow. None of them was correct, but he noted that the average of their individual guesses was surprisingly close to the bovine's weight.
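Galton's method is, at bottom, simple averaging. A minimal sketch in Python, using invented guesses purely for illustration (Galton himself reportedly also examined the middle value, the median):

```python
from statistics import mean, median

# Hypothetical weight guesses (lbs) from a crowd; values invented for illustration.
guesses = [1050, 1180, 1260, 1090, 1310, 1150, 1230, 1120]

crowd_estimate = mean(guesses)    # the crowd's aggregate: a simple average
middle_guess = median(guesses)    # a robust alternative aggregate
print(f"Crowd average: {crowd_estimate:.0f} lbs; median: {middle_guess:.0f} lbs")
```

No single guess need be right; the aggregate can still land close to the true weight, provided the guesses are independent and their errors roughly cancel.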

The method has gained repute, and therefore use, in estimating populations. The concept came to my attention while studying the amazing volume of theft and vandalism in a lovely city, Amsterdam. The place is famous for canals, dope, and Anne Frank. A municipality might gain fame by various paths.

Some might primarily associate Amsterdam with canals. The town's fame in that regard has been reinforced by various movies. It is among the most famed canal cities, along with Venice, Italy. Few realize that neither has the "most" canals, a superlative reserved for unassuming Cape Coral, Florida. I have visited each of these three, and there are arguments for each. But if you can visit only one, Amsterdam is a good choice.

It is a city of canals, but also of bicycles. Getting around town is largely a pedal endeavor. The bicycle is so popular that they are believed to outnumber people there: "Amsterdam has 780,559 inhabitants, who together have an estimated 881,000 bikes." Many of those end up in the canals each year.

The bikes that go swimming are not estimated. The city counts those as they are perennially dredged from the famous waterways. They call the task "Bicycle fishing," and the city claims that "Every year we fish up between 12,000 and 15,000 bicycles."

In this, two methods of measurement are illustrated. First, the "fished" cycles. Those are quantified based on results. The canals are dredged, cycles and more are recovered, and the cycles are counted. The number is neither estimate nor guess. This is a valid method of quantification.

The second is the 881,000 volume of velocipedes. That is a great many bikes. They are not licensed as is common with automobiles and trucks. If they were, the registration process would yield a reasonably accurate quantity (a few might not be registered due to minimal use, as occurs with cars).

Thus, for this quantification, Amsterdam is said to have turned to the Wisdom of the Crowd. The description provided by Amsterdam's Statistics Bureau, however, also suggests departure from the pure crowd, noting their conclusion is "the average derived from the guesses of a selection of experts." That suggests some deviation toward a Delphi model and away from a pure "crowd."

Some might argue with the validity of the crowd. Galton's conclusions regarding the famed bovine were, after all, subject to empirical confirmation. The fair's organizers knew the cow's actual weight. The guesses of the crowd there could be compared to an objective, measurable, known outcome, and their accuracy measured. 

Others might forgive this absence of objectivity. They might note the accuracy of the "guesses" perhaps implicates neither benefit nor harm. In the end, what relevance is there to knowing how many cycles exist in Amsterdam (other than perhaps predicting the need for racks to which they can be locked)? 

That argument would hinge on relevance, a topic famously, if perhaps apocryphally, attributed to Einstein:
"Not everything that counts can be counted, and not everything that can be counted counts."
The Quote Investigator seems less than convinced of the provenance, but the point remains regarding what "counts." Presumably, no one would expend resources unless something "counts." On that broad assumption or conclusion, it is logical that the accuracy of the count would therefore be important also. 

To what end the crowd, Delphi, or guessing?  Intriguing indeed.  





Tuesday, July 15, 2025

Physician Shortage

There is ample discussion of both the current physician shortage and the predictions of future shortfall, according to the Association of American Medical Colleges (AAMC). The most imperative element of workers' compensation is medical care. The indemnity is critical, but is dependent on the medical care and provider opinions. Comp needs doctors, period, hard stop. 

The Florida Supreme Court has seemingly concluded that attorney fees are the most imperative element, with medical and indemnity somehow secondary, but that is difficult to rationalize. It held that "a reasonable attorney's fee has always been the linchpin." Castellanos v. Next Door Company, 192 So. 3d 431, 448-449 (Fla. 2016). That conclusion has drawn some criticism. No injured worker has ever gasped, "Call an ambulance, get me to a lawyer." 

Shortage of physicians is thus worthy of our attention. Some may find comfort in a more empirical representation of the shortage predictions. Currently, there are many physicians in the U.S. The AAMC says there are:
"1,010,892 active physicians of which 851,282 were direct patient care physicians, corresponding to 302 and 254 physicians per 100,000 population, respectively."
Those figures are difficult to duplicate using the current U.S. population, 342,065,749. Using that figure, the "1,010,892 active physicians" and "851,282 ... direct patient care" this morning equate to 296 and 249 per 100,000, respectively. Some will say I'm quibbling; others will note that the population increases each moment, both generally and of physicians. These are all moving targets. 
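The per-100,000 rates can be checked in a few lines. A quick sketch using the AAMC counts and the population figure quoted above:

```python
# Physicians per 100,000 population, using the figures quoted in the text.
population = 342_065_749
active = 1_010_892
direct_care = 851_282

active_per_100k = active / population * 100_000
direct_per_100k = direct_care / population * 100_000
print(f"{active_per_100k:.0f} active and {direct_per_100k:.0f} direct care per 100,000")
```

The result depends entirely on which population snapshot is used, which is why the AAMC's published rates and a same-day recalculation can differ.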

There is not geographic parity today, with "states in the northeast" exhibiting higher concentrations of both direct care physicians and the broader "active" total. The shortage discussion does not exclude the northeast, but the concerns are more acute in rural spaces.

How many doctors does America "need"? That seems a more flexible analysis, subject to various perspectives and inputs.

The AAMC postulates that we face a "shortage" of 86,000 physicians by 2036. In roughly a decade, they predict, this deficit is probable. Their reporting does not estimate how many doctors will be practicing in 2036, only that there will be this shortage. The AAMC prediction sounds dire, but other organizations predict a shortage nearly double the AAMC figure.

Two postulates might anchor such a conclusion. First, that there will be only 924,892 (1,010,892 - 86,000) physicians practicing by that time (an actual decrease in the physician population). Second, that current production and replacement will be sufficient to maintain the 1,010,892, but that this will be overrun by population growth (a percentage loss per 100,000 of population).

The population in 2036 is predicted to be 364,731,659. To maintain today's volumes, "302 and 254" or "296 and 249" respectively, would require a net gain by 2036 of about 66,000 physicians, to an overall physician population of about 1,077,000. This is an increase of about 7% (66,000/1,010,892).

That does not seem insurmountable. With population growth of 6.6% ((364,731,659 - 342,065,749)/342,065,749), matching a similar growth rate in physicians should not be challenging. Furthermore, with technology evolving in ways that both empower and force-multiply physicians, such increases in physician population may prove not only unnecessary but disadvantageous to the medical market generally (macro) and to physicians (micro).
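The growth comparison above can be sketched the same way, using the population figures from the text:

```python
# Population growth 2025 -> 2036, and the physician growth needed to keep pace.
pop_now, pop_2036 = 342_065_749, 364_731_659
physicians_now = 1_010_892

pop_growth = (pop_2036 - pop_now) / pop_now            # roughly 6.6%
physicians_needed = physicians_now * (1 + pop_growth)  # roughly 1,077,000
print(f"Population growth: {pop_growth:.1%}")
print(f"Physicians needed in 2036: {physicians_needed:,.0f}")
```

This assumes the goal is simply holding the current per-capita ratio constant; arguments about the "right" ratio are a separate question.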

In the same vein as technology are the para-medical professions that have been aligning for years. Physician Assistants have flourished in the 21st century. In the last decade alone, "The number of board-certified physician assistants increased by 76.1 percent," according to the National Commission on Certification of Physician Assistants (NCCPA). That growth far outpaces population growth.

Similarly, the American Association of Nurse Practitioners notes that their profession has grown exponentially since it began 60 years ago. That association has reported annual growth rates exceeding 8%. That is an annual rate, compared to the 6.6% total population increase predicted over the next decade.

Despite these increases in technology, Assistants, and Nurse Practitioners, American medical schools continue to graduate significant numbers of physicians annually. The "match" is a process through which medical school graduates enter residency programs. See Bid Day (April 2025). The program in 2025 was the "Biggest Match Day ever." In 2025:
"43,237 total positions (were) offered—up 4.2% over 2024. There were 1,734 more certified positions offered this year compared with last year, 231 more certified programs, and 877 more positions in primary care."
The volume of residency positions increased by 4.2% in one year, again compared to the 6.6% total population increase predicted over the coming decade. This is notable from the perspective of percentage change.

However, these numbers also bear consideration of career arc and longevity. If this trust-dependent process introduces 43,237 new physicians annually, then the 1,010,892 total physicians in the marketplace are replaced every 23.4 years (1,010,892 / 43,237). Stated differently, the current output should replenish the current physician population easily within a 30-year career arc.

In fact, that same production level should easily replace the population-adjusted need, 1,077,000, every 25 years. This is likewise comfortably within the 30-year career arc.
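The replacement arithmetic in the last two paragraphs is simple division; a sketch with the figures from the text:

```python
# Years to replace the physician workforce at the current residency output.
annual_new = 43_237       # new physicians entering annually via the match
current = 1_010_892       # current total physicians
adjusted = 1_077_000      # population-adjusted 2036 need, from the text

print(f"Current workforce replaced every {current / annual_new:.1f} years")
print(f"Adjusted workforce replaced every {adjusted / annual_new:.1f} years")
```

Both figures fall comfortably inside a 30-year career, which is the crux of the question the shortage projections must answer.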

Despite this, there is persistent discussion of "shortage," and at significant numbers. The AAMC projected shortage is of concern for the market (macro), certain specialties such as primary care (micro), and more dire impacts for the rural environment subset (micro).

Thus, despite the math above, the AAMC recently applauded a legislative measure to create 14,000 more residency positions, born not of market demand, but supported (at least in part) by Medicare. These new positions would phase in over seven years.

For the sake of argument, that might incrementally increase the annual residency figures over the phase-in period.

The proposal is to increase residency opportunities by 32% over seven years (14,000/43,237). The result would seemingly be to increase the volume of physicians in the U.S. Over the next dozen years, this would presumably add 644,844 physicians to the marketplace. That is 64%, almost two-thirds, of the current U.S. physician population. Can the market really sustain that volume of supply?
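One way to reproduce the 644,844 figure is to assume the 14,000 new positions phase in evenly, 2,000 per year over the seven years (an assumed schedule; the measure's actual phasing may differ), and then sum twelve years of output:

```python
# Cumulative new physicians over 12 years, with 14,000 extra residency
# positions phased in at 2,000 per year over 7 years (assumed schedule).
base = 43_237  # current annual residency positions

total = sum(base + min(year, 7) * 2_000 for year in range(1, 13))
print(f"Total new physicians over 12 years: {total:,}")
```

The sum counts gross entrants only; it ignores retirements, departures, and residents who leave to practice abroad, all of which the post raises below.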

Will those doctors enter the rural markets, serve the underserved, and solve the purported crisis driver - primary care? Or will these added positions train more specialists, for greater concentration in the same regions that enjoy supply advantages today?

Or is this whole analysis ignoring something? Is the unregulated monopoly that controls physician residency currently really producing 43,237 new U.S. physicians annually? Or are there portions of that total, and the proposed increase, that enjoy the experience, expertise, and training here and then relocate elsewhere in the world to practice medicine?

The Associated Press recently noted that:
"Hospitals in the U.S. are without essential staff because international doctors who were set to start their medical training this week were delayed by ... travel and visa restrictions."
This suggests that some of those coveted residency positions in U.S. hospitals provided opportunities to visitors from other nations. The article is clear that the volume is not known. Would those individuals be likely to remain in the U.S. following residency and add to the nation's physician population? If not, how many of the 43,237 do stay and practice here? How many in primary care, in rural America?

These points all raise important issues. Is there a shortage in gross terms (macro), or are there shortages in certain specialties (micro) or geographies (micro)? Is the path forward one of continued monopolization of residency participation and opportunity, or would a system with more free-market responsiveness more readily impact supply and demand?

The largest organization of physicians, the AMA, agrees that there is indeed a shortage. It has issued an "all hands on deck" call to arms regarding the present shortage and our current path. The actuality of micro and macro impacts and challenges deserves credible, calculated, and empirical study.

This may include analysis of whether and why there are leaks in the system - early retirements, departures to other markets, etc. Is the market healthy in both recruitment and retention? It may also include why and how services are compensated, and what adjustments might enhance physician attraction and retention in the challenged specialties and locations.

In the end, the questions remain numerous and complex. The challenge will be to quantify those questions, prioritize responses, and facilitate effective and efficient markets for the delivery of medical care, recruitment and retention of practitioners, and appropriate access to care in both local and regional perspectives.

The questions are varied, complex, and vexing. That is no reason not to answer them. As for workers' compensation, physicians are needed here as much as or more than elsewhere. While attorney fees may be "the linchpin," many injured workers will need to see a physician instead. 

Sunday, July 13, 2025

Better look that up

I recently ran into Horace Middlemier*, Dean of a prestigious educational institution. The discussion was free-flowing and touched on multiple topics. But, as is normal this century, we turned to artificial intelligence (AI). The dean noted a recent example of a local lawyer being sanctioned for filing false information with a court.

Those were once newsworthy but have perhaps become commonplace or mundane. For a link to a database of the growing list of poor-performing lawyers, see Prosecuted for lying? (June 2025) and Another one rides the bus (May 2025). A complete list of my AI posts is on my website.

The dean expressed exasperation regarding the speed at which writing has declined in the last 24 months. Few are writing, and fewer still are writing well. The idea of a term paper in class has reportedly become anathema among some educators. They cannot deal with the various challenges that AI presents.

I have since had the chance to discuss these concerns with various educators. The spectrum was clarified for me rapidly. I have been an educator for many years, and am approaching 60 semester classes delivered. Through that time, I have leaned toward tests, quizzes, and homework rather than papers. When I did venture into papers, there were issues of plagiarism and collaboration that were challenging.

What do the instructors face?

First, there is the grading. Tests can be processed through a grading machine in many instances. Even if the test is an essay test, the length of the answers can be limited. Term papers of 15 or 20 pages may simply equate to several evenings of difficult grading.

Subjectivity is a second point. Tests with definitive correct answers are easier to grade than essays and short answers. The subjectivity factor can make grading short answers and essays difficult, and there is the same concern with term papers.

There has always been the nagging concern about plagiarism. This is a suspicion that may be driven by papers that are notably articulate, inspired, and insightful. That is ironic, but it is what I am told. There may be some tendency to look at the quality of one with a comparative eye to another(s). 

That plagiarism concern might be addressed with sophisticated software, and I have heard professors say they merely paste portions of the paper into a Google search—too often there is some poetic license that is readily apparent.

But in the era of AI, there is also detector software. The detectors are no more perfect than the AI that writes the papers. Imagine AI hunting for AI; it is Skynet versus Skynet. See Arms Race (May 2024). I saw that one coming. There are challenges. If an AI detector concludes that some percentage of a paper is non-human contribution, how accurate is that prediction or conclusion? Can the instructor even count on a positive being a real positive?

Some have concluded that any AI detector is "an ethical minefield." That article points out the risk of false positives. There are citations to claims from various providers as to the infrequency of false positives. Those sound promising, with only 1%, perhaps. But what if you are the one who is accused of being in the 1%? What if you are the professor striving to defend your grading conclusions and defend such a tool?

I know, I know, the old men in the balcony are grumbling already, "what does this have to do with workers' compensation?"

Well, there is no difference between grading term papers and assessing written arguments, briefs, or memoranda of law. And that is what judges spend a great many hours doing (the prevalence of poor spelling, missing punctuation, and questionable grammar is similar in both settings). There should be some relief for judges in not having to assign a grade, but not necessarily.

There are abundant examples of lawyers citing fake, hallucinated authorities. They are usually noted by opposing counsel, and then trouble ensues, arguments are made, and orders entered. 

Judges should be able to count on four things:
  1. Honesty in fact of the lawyers in any proceeding
  2. Lawyers carefully checking their own authorities and verifying they are real (actual statutes or cases) and accurate (they say what you claim they say).
  3. Opposing counsel checking their opponent's citations and pointing out errors (in number 2, i.e., "that case does not exist," or "that statute does not say that").
  4. That there will be errors, shortcomings, and interpretations (we are only human).
 ** Appellate courts should be able to count on all these and that trial judges are carefully checking what they rely on, from whatever source. 
That fourth one is critical. Because errors will happen, the trial judge has to retrieve the cited authorities, read them, and form their own conclusions (subjectivity, see above). The judge is ultimately responsible. The judge better look that up. The judge must study what the parties bring, do their own research, and strive to get the outcome right. Day in and day out, the judge must be studying, reading, and verifying.

J.D. Supra reported last week that an appellate court in Georgia had to vacate a trial judge's order. The trial judge cited fabricated cases that were (apparently) hallucinated by an AI and cited in pleadings (at a minimum, they were not real and could have been created without an AI). The opposing counsel apparently did not bother to verify or contest them at trial, and the trial court relied on the falsehood. See Shahid v. Esaam, 2025 Ga. App. LEXIS 299 *; 2025 LX 214277. 

The appellate court reportedly sanctioned the lawyer who originated the hallucinations. In a twist some might find ironic, that lawyer apparently was not awarded fees. The court noted, "Appellee's Brief further adds insult to injury by requesting 'Attorney's Fees on Appeal' and supports this 'request' with one of the new hallucinated cases." For some reason, Hamlet (Billy Shakespeare, 1599) and some petard come to mind. 

Thus ends the fallacy of reliance; see numbers 2 and 3, above. The judge cannot count on the lawyers to check their own work and avoid hallucination. The judge cannot count on the opposing party(ies) to check their opponents. Those halcyon days, it seems, are gone. 

The age of "IDK" and "IDC" has perhaps come indeed. See Ignorance and Ambivalence (July 2025). Unfortunately, this example reveals that perhaps the appellate court cannot rely upon the trial judge either.


There has been talk of AI detectors for legal practitioners and judges (remember the "false positive" mentioned above?). There has been talk of making lawyers certify whether their pleading was prepared using AI. Would a certification cause lawyers to go back and verify, to look it up? Would we require judges to similarly certify? 

There is the reported practice of clients declining to pay attorneys for the seemingly mundane task of legal research ("you should know the law"). Lawyers tell me they do not check the opponent's authority because "the client won't pay for that either." In a world of litigation, with cases being decided daily, how could anyone "know the law" with certainty, thoroughness, and confidence?

Where does the fault lie? Where is the "holy grail" solution? Every lawyer and judge should be looking up the case law and statutes. 

Back to Dean Middlemier, who led this post. The most poignant observation that educator made was their reaction to the various damning news stories of lackluster attorneys (some now judges). The dean said their first reaction is to check the education of each malefactor in such stories. The dean's singular desire is to verify that the particular lawyer (or judge) did not graduate from the dean's school.

I am not faulting Dean Middlemier. I get it. How embarrassing if your graduate is the one hallucinating, or relying on hallucinations. Imagine if it is your partner, associate, or fellow judge.

But the concerns of AI in the legal practice are deeper. "IDK" and "IDC" are seemingly becoming commonplace. The very future of the profession, the legal system, and ultimately society lies prostrate before us on the road, wounded, perhaps gravely. How shall the community respond? What aid? What remediation?

Should we just drive around it and pretend not to see? Should we try to help it up? Is there anyone we could call to its aid?

The Shahid decision identifies only the husband's counsel by name, Diana Lynch. Nonetheless, the Court of Appeals website says the trial judge was Hon. Yolanda C. Parker-Smith (who has apparently been on the bench less than 6 months), and counsel for the appellant was Mr. Vic Brown Hill (who may or may not have been involved at the trial level). 

Note to Dean Middlemier, it appears none of these attended your school. But that does not mean another dean or two, law firm owner, or a client elsewhere might not be SMH right about now. 


Ed. Note - Horace Middlemier is not a real person but a figment of the author's imagination and experience; a literary tool or foil. Any resemblance to a real person is strictly coincidental and unintended.

Thursday, July 10, 2025

Links and Questions

There has been frequent news recently regarding how we take care of our bodies. Several interesting points come to mind. One recent revelation regards "ultra-processed foods," which include many of the things that we all enjoy. These are faulted because of their salt, preservatives, artificial sweeteners, and more. 

We are told that consumption of these can readily predispose us to anxiety or depression and a raft of other challenges. However, there are questions in this evidence regarding causation versus mere correlation. Essentially, the study concludes that these medical complaints are more prevalent in people who consume these foods. 

But is there evidence that potato chips cause anxiety as opposed to anxiety causing potato chip consumption? This may be a chicken/egg debate that is difficult to differentiate. They do not call it "comfort food" for nothing. That Webster definition is fairly innocuous, but the Cambridge definition suggests we mean food that is sweet or otherwise attractive for its nutritional deficits. 

So, the question here is whether there is a causal link between such consumption and health concerns, or merely an observed correlation.

Another recent revelation regards the artificial sweetener aspartame and a conclusion from the Houston Medical Center suggesting a link between aspartame and autism. Many have struggled mightily over the last 30 years to grasp and comprehend the vast expansion of autism and other "spectrum" diagnoses in this country.

No sooner had I read that report than I came across the discussion of whether artificial sweeteners may be related to memory issues we suffer in the aging process. That is, indeed, intriguing, as we witness the prevalence of dementia, Alzheimer’s, and a variety of more specific subcategories. 

A portion of the impacts may be due to recognition rather than diagnostics. As a kid, I knew a great many seasoned citizens who exhibited forgetfulness and worse. There was less inclination to seek diagnosis or labeling in those days, or at least, I perceived it less. Those folks instead talked merely about "getting old." Are we merely labeling better today?

How does all of this fit within the parameters of our lives and our inclinations towards self-care? We are, eventually, individually responsible for the things that we put in our bodies, and the impact that those carry. Certainly, there is also potential for environmental exposure. 

But the research and discussion of ingestion is certainly a topic in the ongoing opioid crisis. See A Vaccine Against Being High (January 2023). Those who intentionally take fentanyl may have perceptions of invincibility or may be sadly misinformed. They nonetheless make a choice of ingestion. Is that different from consuming comfort foods filled with ingredients we don't recognize and often cannot pronounce?

For me, it is the decision to consume ridiculous quantities of pepperoni. In doing so, am I taking unmitigated chances? The same might be asked about my affinity for a particular Zero beverage—am I taking unintended and untoward chances with my health? 

We all know the preferred path. Every doctor we have ever seen has suggested and supported the "periphery" grocery recommendation. Others refer to this as "shopping the perimeter."

If you’ve missed it, it’s quite simple. Selecting foods from the outer perimeter of the grocery store tends toward better choices (fresh vegetables, fresh fruits, raw meats, and fresh dairy). Some are more forceful in recommending this than others, but they all say "stay out of the aisles of cookies, chips, and candy." Spoilsports. 

I raised this in a conversation at a conference, only to be confronted by a gentleman who asked if I realized that the pharmacy was likewise usually on the periphery, which brought laughter. Another jumped in and suggested that the bakery and its chocolate chip cookies are likewise often located there. Touché, I say, touché. There are some flaws, perhaps, in the "periphery" theory. 

Thus, there are perhaps few absolutes. Nonetheless, we are likely impacting our well-being with what we put into ourselves. While we may view our decisions in this regard as free-will choices, it is likely that we are influenced by society, finances, marketing, and more. Few of the good choices are marketed as enthusiastically; when did you last see an advertisement for apples? 

I don't see many advertisements for fresh fruit, and even the old "got milk" ads have disappeared. But I see many ads for convenient, packaged, and prepared foods with those ingredient lists I cannot pronounce. 

The end result is reasonably simple. We each decide what to ingest, and we live with the effects. The periphery is a good guide, but it is no absolute sanctuary. We face choices and owe it to ourselves to make sound ones. 

Tuesday, July 8, 2025

Judge Douglas Brown

Former Florida Judge of Compensation Claims C. Douglas Brown (1934-2025) passed over the July 4th weekend, 2025. 

Judge Brown was well-known in the workers' compensation and Panama City communities. He was born in McComb, Mississippi, north of Baton Rouge. He earned his Bachelor’s from the University of Southern Mississippi (1956). 

After college, he married and moved to Miami, Florida, where he worked as an insurance adjuster. Several years and three children later, he earned his Juris Doctor from the University of Florida (1966). He began his legal career in Panama City, Florida, and was only the 25th attorney in town at that time.

In addition to practicing law, he began investing in real estate throughout Bay County. He gained national attention after winning a case in Panama City against Dow Chemical and Shell Oil. After that, he was asked to join a larger case against them in San Francisco, where he lived for six months and where he once again prevailed.

He was appointed Judge of Compensation Claims by Governor Lawton Chiles in 1991, was reappointed by Governor Jeb Bush, and continued until his retirement in 2001. He was the first to serve as JCC in what was initially District A-Central, as that area was carved from the long-standing A-West (Judge DeMarko) and A-East (Judge Fontaine). 

Judge Roesch was appointed to succeed him, and she served 2000-2016. She was succeeded by Judge Walker, 2016-2020, and then Judge John Moneyham, who served in Panama City until that office was closed in 2022. Judge Brown thus served Panama City for about a third of the time that District existed.

Outside of the OJCC, Judge Brown was a highly accomplished pianist. He was a country gentleman, an intriguing storyteller, and a friend to so many. He will be missed across the panhandle. Godspeed, Judge Brown.

Sunday, July 6, 2025

It is not the End of the World (yet)

What do John Lennon and Miley Cyrus have in common? Not much. But they both espoused some advice on perceiving the end, and paths to resilience. Lennon was the more proactively positive, with
"Everything will be okay in the end. If it's not okay, it's not the end,"
Thus, if there remains doubt, adversity, or angst, there remains work to be done. That is a positive message. Miley is a little more denialist with the refrain:
"Let's pretend it's not the end of the world, ... Let's pretend, it's not the end, end, end." End of the World (Miley Cyrus, Columbia, 2025)
How do we know what the end looks like? How are we to know we have reached it? Will we realize it in the moment, or will we distractedly wander past but then recognize somehow in retrospect? There is a growing consensus that the world of work is going to change dramatically. How rapidly this occurs may remain a matter of discussion.

In a Fortune article, Vinod Khosla recently predicted that Artificial Intelligence (AI) will "automate() 80% of high-value jobs by 2030." That is one of the more accelerated predictions that has made the news; note the lack of a definition for "high-value." That is a mere five years hence. In the construct of a career arc, five years is a mere moment. In that moment, the vast majority of "high-value" work could evaporate.

That paragraph brought Kansas to mind, the band not the state. In Dust in the Wind, they noted 
"Only for a moment and the moment's gone
All my dreams
Pass before my eyes with curiosity
Dust in the Wind" (Kirshner 1977).
That brings yet another perspective to the "end" perhaps.

The 2030 mark is not Mr. Khosla's prediction of the end, but certainly the beginning of it (props to Winston Churchill for that inspiration, though not a direct quote). Mr. Khosla contends that AI will be integrated into the majority of jobs in those five years: "almost every job is being reinvented, every material thing is being reinvented differently with AI as a driver.”

Some will read that prediction with a glass-half-empty perspective (focus on loss). They will see immediate destruction and perhaps chaos and personal loss. Others may read it with a glass-half-full perspective and see AI as a tool that will be used by the vast majority of workers, enhancing, enabling, and empowering, but not (yet) replacing them.

Mr. Khosla then drops the other shoe, predicting that, in 15 years, 
"by 2040, 'the need to work will go away. People will work on things because they want to, not because they need to pay their mortgage.'"
The choice of whether to work is conveyed there in a permissive tone that suggests an empowered individual with choices. The broader economic impacts of that vision are likely understated, perhaps grossly so. The breadth of analysis that topic deserves will require a subsequent post. 

For today, merely note that this is not the first suggestion of universal socialism in the technology revolution. See Universal Income - A Reality Coming? (November 2016); Strong Back Days are History (February 2017); Let them Eat Brioche? (September 2018); Universal Income Again (March 2019); And now, here's something we hope you'll really like (March 2022); and Long Term Solutions (June 2022). Yes, I have really been focused on these changes for the last decade. A list of prior AI and robotics posts is here.

Barchart recently reported on similar predictions of "Dario Amodei of Anthropic." He predicts that
"up to 50% of entry-level white-collar jobs could disappear due to AI within the next five years."
That may or may not be more dire than Khosla's view of AI impacting jobs, and perhaps they are saying the same thing from different perspectives (the integration into some positions leading to the elimination of some volume of others - the efficiency of one diminishing demand for others). Does this "white-collar" correlate to "high-value?" That seems doubtful. 

The CEO of Ford also recently predicted AI "will halve the number of white-collar jobs in the U.S." That is a bit more severe than the "50% of entry-level white-collar jobs" of Mr. Amodei. Thus, all these predictions seem to be subtly distinct. Nonetheless, all seem to coalesce around job loss in the white-collar field.

As a corollary, it is notable that NBC News noted in 2024 a trend of Generation Z toward trades and trade school rather than college. This signals that the younger generations recognized the writing on the wall months ago and are shifting focus. That is likely rational given the demand for skilled trade workers. However, the 50% reduction in white-collar jobs does not mean talented management hopefuls lack a path there as well. 

The same Barchart article notes that Sam Altman, whose OpenAI created ChatGPT, does not agree completely with the Amodei predictions. He sees a more measured impact of AI, but concedes that "change in the labor market is inevitable during any technological revolution." And a revolution is most certainly where we are.

Some might argue that the world of jobs is already dead. They would say we already distractedly wandered past that moment, and are now finally recognizing it in retrospect. That might explain our shock, awe, and confusion. That would not be disrespectful of our powers of observation, but realization that in the day-to-day we often have little time to reflect on the big-picture challenges of our lives.

Mr. Khosla's discouragement is not for workers alone. That is a micro point. He also points to the demise of market giants like Toys "R" Us (1957-2018) and Sears (1892-2018), and notes their failure to evolve and adapt to the digital world. Those standbys of yesterday's commerce are quickly fading memories. Many of today's consumers (Generation Z, 1997-2012) never made a purchase at either of those giants of yesteryear. I have spoken to some who do not even recognize those iconic names.

Mr. Khosla contends that the coming revolution will similarly overrun and destroy much of the "Fortune 500 companies." He sees this as a product of new companies, innovation, and competition for which today's economic establishment is not prepared and with which it will not be nimble enough to compete. He foresees this in health care, robotics, energy creation, and more.

Mr. Khosla contends that "Entrepreneurs (will) invent the future they want.” He sees these small, agile entities as bursting onto the marketplace with innovation and disruption. He contends that entrepreneurs innovate, while "experts are terrible at predicting the future; they extrapolate the past." The existing paradigms, the "experts," will miss the boat in his opinion and be overrun by the new and innovative.

There are some worthy primary points in these predictions.

First, they are opinions. Those are free for everyone, and each of us has our own. They are impacted by inherent predispositions - our beliefs, nature, and nurture impact what we believe and why. The quoted experts are undoubtedly innovators, but that does not mean they are prescient or infallible.

Second, change is here. Whether you choose to accept that or leverage it is your decision. Nonetheless, change is coming whether you want it to or not. In this, I have previously compared resisting AI to the Grinch trying to stop Christmas. Such resistance is fallacious and naive - at best. Change is coming.

Third, the change will begin with tech providing assistance. New tech always makes work easier. This was true for farmers/tractors, clerks/computers, managers/software, and more. Each iteration of revolution has brought multiplier strength to existing workers, and then has eliminated jobs. I watched it with robotics at General Motors in the 1970s, listening in disbelief to the critics of that age. 

Fourth, you have choices to make. You may eschew the tech, avoid the tools, and stay the course. That is a choice. As you do, you will find yourself increasingly at a comparative disadvantage to those who chose to learn, adapt, and leverage. You can absolutely survive without engaging and implementing AI, but for how long is a valid question. You will become obsolete, but you may choose when. 

Fifth, the impacts will occur around you in both micro and macro effect. Your life will change. Period. If you think you can limp into your Golden Years without adjusting to AI, you are wrong. Perhaps in your job, in the micro, you could get away with it. But, AI is going to change your world in a macro sense. What you buy, how you buy, how you consume or not, is going to change as are many of the names you do business with.

Sixth, of universal impact and interest, the effects of AI will not democratize in an egalitarian and free-love path. There will be competition, winners, and losers. These will include nations, coalitions, companies, investors, individuals, and more. Wealth will be made and lost in this transition. Some losses will be unavoidable; more will come through oversight and distraction. 

I have repeatedly suggested that your time for study, adaptation, and growth is short. Despite that, each day is a new chance. The world is changing before your eyes, and with incredible speed. The implications are immediate, pervasive, and pernicious. Are you making changes?

Remember, Rush intoned some years ago:
"If you choose not to decide, you still have made a choice" Freewill (Mercury 1980). 
Choice, in a moment. You decide - "You must choose, but choose wisely," Indiana Jones and the Last Crusade (Paramount 1989). There is still work to do, so the end is not here. Despite your angst and trepidation, "Let's pretend it's not the end of the world."


Thursday, July 3, 2025

Ignorance and Ambivalence

As I dictated this, it occurred to me what a brilliant invention it would be for someone to build a computer program that could actually recognize the words I’m using. Brilliant! The tribulations of dictating are well known and often funny. 

There was once a funny commercial that poked fun at voice recognition. Another is currently running in which the voice assistant and a GPS application confuse "the mall" for "Nepal." It is good that we can remain good-natured when tech so often delivers less than we are promised. Technology is neither perfect nor approaching it. 

That fallibility is perhaps what makes it most like us? 

Speaking of imperfect, there’s a frequent joke about the current state of legal practice. I wish it were fanciful rather than descriptive. The hook is: "What is the difference between ignorance and apathy?" You let that sit for a moment, and then the punchline resonates: "I don’t know and I don’t care." For the next gens, that is "IDK and IDC."

I have been privileged to work with some of the most brilliant minds in the country. I have had chances for interaction, discussion, and debate beyond the dreams of most. Unfortunately, there are periodic encounters with some who should instead swim in the shallow end of the pool. I am not saying they are not as smart as the best, but it is fair to say the quality of their work is not equivalent to that of the best. 

Nonetheless, I’ll get emails about that swipe regarding the shallow end, and perhaps I deserve it. But keep this in mind: I’m not making light of those who have diminished capacity. After all, I am discussing lawyers. These are individuals who possess the intellectual prowess to excel in high school, conquer college, and achieve success over the challenges of law school. 

Lawyers have to possess a significant degree of intellect, dedication, and persistence to reach bar membership. The education path is challenging, even today, when we hear of law schools that do not require writing, administer multiple choice tests, and even have classes where grades are not measured with either.  

Thus, I am forced to the conclusion that these are not ignorant or intellectually challenged individuals filing nonsensical, error-ridden documents. And therefore, I wonder why they cannot make a point. Why can't they cite a rule, statute, or decision? Why can't they acknowledge the suggestions of their own spell check?

In short, the quality of much that is filed recently is beyond disappointing. The volume of errors is astounding. The nature of the errors is disappointing. A lawyer who cannot distinguish "there," "their," and "they're." An attorney who cannot finish a . The many mispelings. 

The successful lawyer is representing the interests of their client. They are striving to resolve disputes in a manner that benefits the client. They have a plan, and there is purpose in each action they undertake. The successful lawyer is purposeful, methodical, articulate, and focused. They know how to use spell check. 

Step one with any lawyer action (motion, claim, defense) should be to show the judge that they can do something. If the issue is digging a hole, the lawyer first owes it to the judge to prove that they can dig. What authority does the judge have to dig (this is a great place to cite a statute, rule, or appellate decision)? Every claim, defense, or motion should begin with a citation that assures the judge this place is somewhere they are permitted to dig.

Having demonstrated that they can dig here, the lawyer should next tell the judge why they should dig here. That one can does not mean one should. There are many things any human can do, but are they appropriate? Does the timing, location, or audience matter? Of course. The advocate must demonstrate why their sought action is appropriate here, in this case. 

The final step is about the moment. Why now? Even if what you advocate is an appropriate step, and this is the right place, is this the right time? What says so? Certainly, this is a question upon which a rule, statute, or prior decision might be informative or even illuminating. 

The purpose of a claim is to obtain something. The purpose of a defense is to avoid providing something. The purpose of a motion is to gain judicial involvement in your dispute. The purpose of a response is to resist or restrain that involvement. This is very basic stuff. 

Every claim, defense, motion, or response should address these points: Can the judge do this? Why should the judge do this? Why is this the right time to do this?

Unfortunately, many advocates instead focus on only one thing: "what I want." Like Morgan Wallen and Tate McRae (What I Want, 2025): "That's what I want, that's what I want." You can repeat that chorus ad nauseam, more than those two sing it, but it still just describes desire, not persuasiveness. Saying what you want is easy. 

The lawyers who just "want," express only that, a visceral or emotional desire. They see something and they desire it. They apply no intellect to the why or how, and simply seek the "it." Their arguments and foundations are vacuous and empty. As a result, they are unpersuasive. 

There is nothing wrong, per se, with “I want." Every human has wants. But just wanting is not enough. So what? I want one million dollars, please send it. 

The workers' compensation practice is small. Very few are invested in the litigation of workers' compensation claims. Among them are intellectual giants, imaginative icons, and outstanding advocates. That group, however, is small and seemingly shrinking. 

Judges across the country convey to me their amused and confused silence when seeming members of the "fail army" prognosticate and participate in the "I want." Their pontifications, musings, and wanderings are not effective. They are disappointing and disaffecting. 

The members of the fail army apparently do not see themselves in that light. Their self-perception is of exceptionalism and infinite skill. They achieve accidental success and proclaim their superiority. They fail to see that they prevailed not through exceptional skill or prowess or because they are extraordinary. They succeed because their poor performance was nonetheless some measure above the even less effective effort(s) of their opponent.

The fact is that winning any contest does not mean you are the best athlete in that sport. It means merely that you are better than your opponent in that particular game, that day. If you beat me in a foot race, you cannot claim on that sole basis to be "fast," just faster than me.

There is lamentation. Judges see poor performance, incomprehensible pleadings, and unsupported arguments, claims, and defenses. They struggle with the merits of your filings and the absence of citations (rules, statutes, cases). They do their own research and struggle with the potential that your bare argument might somehow have merit - unarticulated and vague, but merit.

They issue orders you do not like. Not because your arguments lack merit, but because you chose not to be articulate, thorough, persuasive, and professional. 

Do you not know, or simply not care? Are you a professional or a candidate for the "fail army?" Is your reputation of focus, attention, and detail or of confusion, indecision, and indifference?

Be more:
  1. Can the judge do as you wish (jurisdiction)?
  2. Should the judge do so (why here)?
  3. Is this the right time (why now)?
The answers are reasonably simple. Take the time to know. Do your research, read rules, statutes, and decisional law. Ignorance is no path to success. 

Care about the outcome. If the claim, defense, or motion is worth filing, it is worth doing it right. 

The difference between ignorance and ambivalence? Does it really matter whether you failed because you don't know or because you don't care? What matters is that you failed because of either. 

Tuesday, July 1, 2025

Evolving Morality

There is an intriguing debate in this world regarding the sanctity of life. In the vast array of rights, we see persistent weighting and balancing between the various people and rights that strive to coexist on this planet. There are many different people, perspectives, and legalities. 

The British Broadcasting Corporation, BBC, recently reported on a California man who concluded that he had reached his end. Under a reasonably recent state law, he was attended by a modern-day Dr. Kevorkian and committed suicide with the support and acceptance of his family. The story was covered by the BBC because Britain is now considering legalizing assisted suicide. 

One cannot fault the focus of the story, a man named Wayne Hawkins. He was 80 years old when he drank "a drug-laced cup of juice and drifted off to an eternal slumber." Wayne was happily married for fifty years and the father of two. He spent his life in architecture and his spare time camping, hiking, and raising a family. 

He was diagnosed with a terminal heart condition and handed a death sentence. The physicians prognosticated that he would have less than 6 months to live. In the meantime, he would suffer pain and symptoms from other conditions, "including prostate cancer, liver failure, and sepsis." From any perspective, this was a reasonably bleak outlook.  

Wayne departed this life in compliance with California law. He planned for it, personally obtaining the cocktail of drugs in advance. The story notes that if the plan passes in Great Britain, the attending physician will be responsible for bringing the drugs to the patient. Semantics. Wayne consumed the poison with his own hand, surrounded by family. 

Wayne's was a choice. It impacted others. Not directly, he did not kill anyone else, but it affected others. This included family, friends, and those tasked with assisting him. To accomplish his death, he had to convince "the attending physician" and then "a second doctor." Those doctors may never reflect on their involvement. And yet. 

Most doctors in America recite the Hippocratic Oath. It remains a thing after centuries. Notably, it is not today as it began, having been "rewritten often to suit the values of different cultures." It is frequently referred to, almost reverently, as providing that doctors should "first do no harm," but those words are not included. 

It does say the physician will "benefit my patients," and "will do no harm or injustice to them." There appears some subjectivity to both benefit and harm. It is in those perspective disparities that discussion and debate persist.  

Death might be viewed as harm. There is a perspective for the terminally ill that the resulting relief of death might instead be a benefit. The Oath also commits "I will not give a lethal drug to anyone if I am asked." The California law requires the patient to do the poisoning. The Oath continues "nor will I advise such a plan," and "will not give a woman a pessary to cause an abortion." 

However, the "modern version" of the Oath is a little less clear on the "lethal drug," and instead focuses on the patient, "a sick human being," and the impacts of illness, both personal and familial. It stresses "that there is art to medicine as well as science." And, notably, the physician must "tread with care in matters of life and death," understand "awesome  responsibility," and "not play at God."

It is fair to say the modern Oath is less definitive than the original.  

There is a significant dignity and power in making such a choice. I have watched many suffer, endure, and eventually pass. I have known some who refused nourishment, knowing that food was sustaining them, but who recognized their choice was a departure from their pain and suffering. They did that without the assistance of any physician. There have been instances when others engaged more actively. Refusal of nourishment does not seem to raise the emotions like assisted suicide does. 

It must always be difficult watching those you love suffer. Whose call should it be regarding their end? What criteria would be engaged? How is the interest of society balanced with that of the individual? That is a persistent friction in various examples beyond this topic. One interesting perspective on this was voiced by a Canadian physician about such decisions by those with emotional challenges; it is actually titled First do no Harm.

The self-controlled end idea reportedly began in Switzerland in 1942, according to Reuters. Their process sounds quite simple, and is subject to the singular constraint that "the motive is not selfish." It is difficult to imagine a standard that is more undefined and vague. It is noteworthy, however, that the debate is not recent.

There are opponents of such free choice in the present British debate. They argue that when suicide is condoned in any form or setting, the practice will increase in broader populations. The limited requirement being legislated there is that death is expected within six months. The opponents fear this will slide to one year, then two, and eventually, no requirement. This is perhaps a reference to Belgium's law, which is characterized as "the world's most liberal law on physician-assisted suicide." 

In Belgium, the patient must meet only two criteria. They must be in "constant and unbearable suffering - either mental or physical - and their condition is incurable." Once these are met, the patient must put in writing "I want euthanasia," and sign it. Thus, there are fewer hurdles in Belgium, but the patient is not allowed to self-administer. In Belgium, the doctor has to do the killing - with a pain narcotic followed by a barbiturate. That is an intriguing distinction, Hippocratically and without the Oath's modernity.   

Canada also makes euthanasia available to "patients with psychiatric illness who find their conditions unbearable." The Psychiatric Times reports that this is available in "Belgium, the Netherlands, and Luxembourg." There is no requirement in Canada for "a terminal condition." And if there were, some contend that is not so strenuous because "life is a terminal illness." 

According to Reuters, 10 of the United States allow assisted suicide, as do Australia, France, Germany, and Spain (as well as the other countries mentioned above). There are various definitional and process differences, but this is an apparent trend in industrialized nations. The right of self-determination is being given some deference in its relationship with the societal interest in preventing death. 

The potential for a "slippery slope" is apparent. The potential that someone may make a less-than-informed, yet utterly permanent, decision is readily apparent. The potential for people who suffer psychological stress to make spontaneous, permanent decisions in reaction to their present moment is also apparent. 

But, is legality any real measure? If someone elects suicide in this manner, there is control and precision. When one lacks this legal avenue, is there nonetheless the chance for electing suicide but in a manner that is more dangerous for others, more painful, and more subject to error or failure? 

The Telegraph reports that doctors "helped more than 30,000 people die" in 2023. Will Britain join the cohort? The World Health Organization (WHO) says "every year 727,000 people take their own life" and more try unsuccessfully. It says suicide is the "leading cause of death among 15-29 year-olds globally." That volume, alone, is sobering. 

The WHO characterizes this as a "global phenomenon," a "serious public health problem," and one that might be addressed with a "comprehensive multisectoral suicide prevention strategy." The National Institutes of Health posture is similar. And thus, there is advocacy for both restraining and enabling. 

The British continue to debate, with a focus on England and Wales, with a "separate assisted dying bill" under consideration in Scotland. The proposals strive to define, delimit, and describe. The implications seem challenging from any perspective, and the debate there will be interesting. Will its outcome be a harbinger for further evolution or debate elsewhere?

The issues will continue to challenge the involvement of medical professionals, morals, legality, and society generally. It is a topic worthy of attention and discussion. 
Note: If you’re in emotional distress, there are options available to help you. You can contact the 988 Lifeline at any time to connect with a skilled, caring counselor and get support. Confidential support is available 24/7 for everyone in the United States and its territories. For more on this hotline see September is Awareness Month (September 2022).