WC.com

Thursday, October 9, 2025

Can you Keep Up?

What is the half-life? The old joke about that used to be an insult to various communities. In the old days, the answer to "where do you live?" might draw the reply, "well, if you call that living." It was a backhanded put-down of whatever community you were digging at the moment. A similar punchline was "Sounds more like a half-life to me."

The thought came to me in a recent presentation when the concept of our expanding knowledge arose. An attendee made a reference to the "half-life" of information that was inspiring. A "half-life" is usually associated with the rate of decay of radioactive elements, or with how long it takes for the amount of a drug or medication in the human body to be reduced by half. 

However, in the context of medical knowledge, the question is "What is the half-life of what we know?"

An article in The American Clinical and Climatological Association Journal describes Challenges and Opportunities Facing Medical Education. The author contends that early in the 21st century, we reached a crossroads in education and information, the result of tools we have built to expand, corral, and coalesce information.

Think for a moment of the inverse of a "half-life" that measures erosion or decay, and instead think of accretion (accumulation). The paper notes that:
"It is estimated that the doubling time of medical knowledge in 1950 was 50 years; in 1980, 7 years; and in 2010, 3.5 years. In 2020 it is projected to be 0.2 years—just 73 days."
That was five years ago. Amazing as the statistic is, imagine where we are today. 

That estimation made me think of an exercise from gym class in middle school. The coach lined us all up on the back wall of the gym (I suspect, in retrospect, that it was raining, and the coach had not made a lesson plan). We were told to walk halfway to the other end, the wall. We did. Then we were told to walk halfway again. After three iterations, we were asked how many more iterations it would take before we reached the wall.

The answer seems simple, but after letting us flounder a bit, Coach's answer was "you never will." Because each iteration will only take you half the remaining distance after the last movement. Eventually, the movements will become so minute that it is likely irrelevant and, with a room full of middle schoolers, perhaps comical. Nonetheless, the ideas of halving or doubling were there for our consumption.
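For the curious, the arithmetic behind Coach's answer is easy to sketch. Here is a minimal illustration in Python, assuming a 100-foot gym (the distance is invented for the example); each step closes half of whatever gap remains:

# Each move covers half of the remaining distance, so the gap halves
# every time but never actually reaches zero.
distance_remaining = 100.0  # feet; an assumed gym length for illustration

for step in range(1, 11):
    distance_remaining /= 2
    print(f"After step {step}: {distance_remaining:.4f} feet remain")

# After ten steps roughly 0.098 feet (about an inch) remains. Close enough
# to be comical, but mathematically the wall is never reached.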

We have clearly and unequivocally established that I am no scientist. Thus, I proceed today on the foundation of Dr. Peter Densen (who wrote the article cited above).

The doubling between 1950 and 1980, from every 50 years to every 7, was a decrease of 86%. That is an 86% decrease achieved over 30 years, an average of roughly 3% per year in terms of the delay awaiting the next doubling.

The doubling between 1980 and 2010, from every 7 years to every 3.5, was a decrease of 50%. That is a 50% decrease over 30 years, about an average of 2% per year.

The doubling between 2010 and 2020, from every 3.5 years to every 0.2 years, was a decrease of 94%. That is a 94% decrease over 10 years, an average of about 9% per year. 

If that pace is maintained, then by 2030 we may be doubling weekly. Another 94% decrease would bring us to just over 4 days. Even at the gentler rate illustrated between 1980 and 2010, about 2% per year, by 2030 we would be doubling in just under 60 days. By 2050 or before, medical knowledge could actually be doubling each week. The potentials are simply astounding. 
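For readers who like to check the math, here is a back-of-the-envelope sketch in Python. It simply applies the two rates discussed above to the 73-day figure; the percentages come from the discussion of Dr. Densen's estimates, and the projections are illustrations rather than predictions:

# Start from the 2020 estimate of a 73-day doubling time.
days_2020 = 73.0

# Scenario 1: the gentler pace, about 2% shrinkage per year (1980-2010).
slow_2030 = days_2020 * (1 - 0.02) ** 10
# Scenario 2: a repeat of the 94% drop observed between 2010 and 2020.
fast_2030 = days_2020 * (1 - 0.94)

print(f"2030 at about 2% per year: {slow_2030:.0f} days")    # roughly 60 days
print(f"2030 after another 94% drop: {fast_2030:.1f} days")  # roughly 4.4 days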

The pace described above was achieved before the advent of artificial intelligence (AI). Without question, the volume of medical knowledge and the pace of accumulation (accretion) have nonetheless been influenced and aided by other tools such as computers, innovative testing equipment, and brilliant minds.

This all reminded me of Moore's Law. The co-founder of Intel hypothesized in 1965 "that the number of transistors in an integrated circuit (IC) doubles about every two years." He essentially predicted that computers would become twice as productive and effective every two years. He was spectacularly wrong; the capacity and capability have doubled at a far greater pace, and innovation continues to occur daily.

So, how many iterations until we get to that far wall? I would suggest that there is merit in the concept of "zero" in this analysis. Here, zero functions less as a destination than as a limit, a boundary the trend can approach but never actually touch. Just as those middle schoolers can never reach the wall by making moves that each cover half of the remaining distance, it is unlikely that the downward trend in "days to double medical knowledge" can continue indefinitely.

Taken to the extreme, the knowledge might eventually reach a pace of doubling every second, or even every nanosecond (a billionth of a second). Nonetheless, it cannot double every negative one second or minute. Well, unless some physicist somewhere finds some way for time to work in a non-linear fashion.

The theme of Challenges and Opportunities (2011) is that the pace renders the medical school curriculum almost insurmountable. This is not new. Even then, over a decade ago, the pace was too much. Like trying to drink from a fire hose, the volume and flow are simply unmanageable. The theme revolves around rethinking medical schooling, and it is an intriguing analysis. 

In the broader context, this fits with the advice recently rendered by AI experts that obtaining a doctoral degree may be a poor investment of time and money. See HealthLeaders. We are rapidly approaching a moment in which the doctor today cannot necessarily rely on what she knew yesterday. And, as important, she cannot hope to master the knowledge of today before a new torrent will arrive tomorrow morning.

In keeping with the trend lines described above, we may reach a moment in which the scientific knowledge doubles between the time the doctor begins her patient's examination and the moment she concludes it. The potentials and the potential absurdity simply boggle the mind. 


There are great advances being made persistently. There is vast knowledge being gathered. The challenge will be to capture, categorize, and organize that knowledge. The doctor of tomorrow (today?) needs to be able to access, leverage, and contribute to that information without ever having to study or memorize each discrete piece of it. 

Medical training, and perhaps more, will have to evolve from rote memorization, or even familiarization with what one needs to know, to a mastery of learning, interpreting, and acting on the wealth of available, unlearned information. There is a new day dawning. But lately one seems to be dawning every day, if you get my drift. 

Tuesday, October 7, 2025

Digital Detox?

In 1965, television producers decided to float a "fish-out-of-water" series that focused on the countryside. Green Acres was a lighthearted story of a New York City banker and his socialite wife moving to the countryside. He has a dream of farming and not a clue how to begin. There were tribulations, stumbles, and failures exhibited each week. The show was amazingly successful. One reviewer on IMDB dubs it "perhaps the most surreal TV show ever done on American TV."

There are those who have never lived in the country. They are perhaps unaware of how different the world there is. In the day-to-day "normal" of our lives, we are surrounded by technology, convenience, and services. But one need not venture too far afield to lose touch with all three. I recently escaped for a few days to 1957; it is an interesting place to visit. 

There was an imaginative movie in 1951, The Day the Earth Stood Still (20th Century, 1951). I think of that title when I venture out there to the countryside. Much has, in fact, stood still here for a very long time. 

Some things have changed during my lifetime. It was a big day when they first oiled the roads here in the 1957 hills. Yes, the nearest paved road was then about 5 miles away. For those last 5 miles, you traveled on graded gravel. The dust produced by a truck or car was magnificent; tractors, not so much. 

The powers that be decided to oil the road. I can remember when, several years later, they finally paved it properly. Progress. But it brought more noise in exchange for less dust. I can stand on the hill here in 1957 today, in the quiet, and hear cars coming down that road. They are generally a mile or so away when I first hear them. The genuine quiet is something city folks just never get the chance to understand. 

The world here in 1957 was long without electricity. The Tennessee Valley Authority (TVA) first discussed running electricity lines in the depression era. The larger local towns got electric power after a local coop started in 1935. Their website celebrates that in 1938 they had hooked up 200 homes to the modern age. 

The hill I stand on in 1957 is 20 miles from the nearest town. It took a bit longer for the lines to reach here. It was the 1950s when people here even transitioned to inside accommodations and the outhouses fell from favor. I am not sure when the last outhouse was destroyed, but it was not so long ago as to escape memory. 

The telephone came here next, in the form of a "party line." Many will not remember the party line, but essentially, a whole row of houses shared one phone line. If one phone was in use, no one else on the party line could use their phone (except to eavesdrop, see below). Each house could be rung separately by the operator (a human employee of the phone company who connected your call to the correct destination). 

Each phone had a distinct ring pattern (though unlike today's cell phone, the sound came from a spring-loaded striker hitting a little metal bell inside it). By mixing long and short rings in combination, each house's phone rang in a different and distinct pattern.  

By the 1960s, the operator was no longer needed, but each house on the party line retained its distinctive ring pattern. You could still reach an operator by dialing zero, and most phones had the word "oper" (operator) on the button with the zero. Nonetheless, anyone on the line could answer any customer's call. You were only supposed to pick up your own distinct ring. 


I often heard people complain back then that their neighbors did not respect their privacy and were eavesdropping on them. The loudest complainers themselves were seemingly quick to listen in whenever the phone rang, regardless of whose call it was. It was rude, but the world was lacking in other entertainment, and any connection to the outside was welcome (and gossip was a way of life).

Long-distance calls back then were charged by the minute, and people used timers to keep their calls within budget. I knew many who kept an egg timer by the phone. Some simply would not make long-distance calls. They lived within their means in 1957, and they still do. I ran into a fellow on my recent trip there, and he was driving a 30-year-old car. Not because he must, but because he sees it as both effective and adequate. 

A funny trope on Green Acres was the telephone. Mr. Douglas had paid to have a phone installed, but the phone company only ran it to the pole beside the house. The customer was supposed to finish the installation from there. Mr. Douglas' efforts in that regard were persistently frustrated. 

Whenever he would need to make a call, he had to exit through the bedroom window and climb a pole to the waiting phone. It was funny. But, perhaps too close to home for some here in the country. There is no cell service here in 1957. The closest signal is about three miles away on a hilltop, next to a quaint, neatly-kept cemetery that dates to before the Civil War. 

The internet? By the time the internet came to this hill, the party lines were gone. We dialed up the internet over a modem using that phone line. If we had still had party lines, one computer logged in would prevent anyone else from making a call. Nonetheless, the service here was slow, unpredictable, and yet seemed so modern. Despite the lure of such convenience, landlines were soon a thing of the past.  

That is not to say they disappeared. Many here have landlines to this day, a throwback to some observers. But dial-up internet is no longer an option, and so a home phone provides no real solace in that regard. 

Across the road, there is a post in the side ditch. It is conspicuously labelled with a company name and warns against excavating. It claims to mark a buried fiber optic line. Repeated calls to that company have led to nothing but frustration. Though the line may actually run through 1957, the company is not offering a connection to the World Wide Web from here in 1957. Every time I look at that fiber marker, I think of Mr. Douglas, so close.

I have wondered why that would be. Why lay a high-speed line and yet not offer a connection? I asked a couple of folks. They denied ever noticing the white posts with bright red caps. They allowed as how they were not too interested in either high-speed internet or cable television. 

In the city, it’s harder than ever to step away from our devices, which are so entwined in our lives. Is it fruitless to even try?

In February, news broke that Salesforce CEO Marc Benioff had taken a ‘digital detox’: 10 tech-free days at a French Polynesian resort. For a small group of people, taking a step back from devices is an achievable dream—but for most, it’s an impossibility, especially now. And yet, here in 1957, it is more than possible; it is the norm. A lack of tech is normal somewhere. 

That said, there is balance. The article above stressed that technology has permeated our day-to-day. We have evolved well beyond the phone as a tool for conversing. We are persistently tied to the world of (mis)information, the news cycle, and the latest examples of poor behavior. See Optics and Options (September 2025). 

So, I leave it to the reader whether to pity me in 1957 or to envy the fact that I can turn off the world periodically here. I can walk in the sun, smell the woods, and occasionally see a deer, hog, pheasant, or turkey. I cannot make a phone call, but the tradeoffs are somewhat compelling. 

If you want to give it a try, just use the power switch some weekend. Turn off the tech and set it aside. How long can you last without it? How long do you want to? Do you really have to be forced?

Sunday, October 5, 2025

Steering Wheels or Paranoia

Ponder why driverless robo-taxis have steering wheels. That may be a riddle, but read on. 

There has been focus here before on the potential for machine sentience. See Rights for the Toaster (October 2024). That post resulted from my witnessing a debate among college professors about society's recognition of the rights of artificial intelligence (AI). 

In this regard, it may be important to distinguish between agentic and generative AI approaches. One is relatively rote, acting on and analyzing what already exists; the other can create new and arguably innovative material. This is spelled out by International Business Machines, which was once a major leader in our daily technology use.
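For illustration only, here is a toy sketch in Python of that distinction. It is not IBM's definition or any vendor's API; every function below is an invented stand-in. A generative call composes something new from a prompt, while an agentic loop breaks a goal into steps and acts on existing information with tools:

# Hypothetical stand-ins; no real model or vendor API is called here.
def generate(prompt: str) -> str:
    """Generative sketch: compose new material from a prompt."""
    return f"Newly drafted text responding to: {prompt}"

def run_agent(goal: str, tools: dict) -> list:
    """Agentic sketch: plan steps toward a goal and act with existing tools."""
    plan = [("search", goal), ("summarize", goal)]
    results = []
    for tool_name, task in plan:
        results.append(tools[tool_name](task))  # act, observe, move on
    return results

tools = {
    "search": lambda task: f"[records found about {task}]",
    "summarize": lambda task: f"[summary of findings about {task}]",
}

print(generate("a poem about steering wheels"))
print(run_agent("rear-view mirror regulations", tools))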

Neither agentic nor generative AI is yet sentient; AI does not think. Today, though these tools are very powerful, they remain reactive in their production. They follow patterns and formulae that have been gleaned by reviewing masses of existing human data, creations, and materials. Their foundations are limited to the confines of their consumed data sets.

Yet tomorrow beckons. Ultimately, the tech wizards strive for AI to be sentient. They seek what has been dubbed "Technological Singularity" (shorthanded as "the Singularity"), a moment when computers actually think. Built In describes this as the moment that AI becomes capable of intellectual rather than merely intelligent action and reaction. The computer would become "self-aware," and perhaps even recognizable as a "being," or life form. The older readers will perhaps see some Frankenstein parallel. 


It is to this moment that my unfortunate philosophical academic colleagues look in their arguments related in Rights for the Toaster. They believe that our race (human) will succeed in essentially creating another race (computer) and that we owe it to our creation to grant or convey human rights in the process. 

For instance, they argue it would be inappropriate to unplug such a computer without affording due process of law. In their view, it would be appropriate for such computers to vote, travel freely, and more. 

Imagine a moment in the Terminator (Orion 1984) saga. The movie themes revolve around a visitor from the future returning to a historical inflection point to change history. They seek to find and destroy some computer and change it or unplug it before it can wreak havoc. 

Think of using a time machine to go to 1889 Austria and intercept the infant Hitler before he caused so much strife, misery, and destruction. So do the Terminator protagonists seek to alter their present, our future, by returning to such inflections in pursuit of some change. It is intriguing fiction, but no more so than the Back to the Future take (Universal 1985). 

What if those time travelers had to read a computer its rights before questioning it? Must there be due process before unplugging it? The reader is forgiven if this seems absurd and science fiction. But know that there are already academics who not only view the conveyance of human rights to computers as practical but view it as incontrovertible.

The topic is not new. Boston Legal (2004-2008) included an episode centered on a woman who was romantically, or at least emotionally, involved with an object to which she assigned personhood attributes. The Object of My Affection (20th Century Fox, 2007). This has been labelled Fetishistic Disorder, objectophilia, and otherwise criticized. Nonetheless, views of human behavior often differ from nation to nation, and finding absolutism is perhaps a vain undertaking. 

Despite the present-day clear demarcation, many are now using existing, non-sentient AI chatbots instead of psychological counselors. One source says that 25% of adults would rather use a bot than a therapist, and 80% think that ChatGPT is a viable alternative to a therapist. 

This evidences that the train has perhaps left the station for many in the debate of AI capability, limitation, or personhood. And, there have been some unfortunate results. See AI Lacks Conscience (September 2025). 

Back in 2015, I related some challenges of technology based on a presentation by Salim Ismail. Salim Ismail and a Lifechanging Seminar in Orlando (May 2015). He lamented the challenges of lawmakers regarding both predicting and reacting to technological change.

His contention is that technology will advance so rapidly that the law will always be reacting rather than proacting regarding it. A later post provides more on this. Misclassification and Regulation, Will Government be Nimble (November 2015).

Reactive versus proactive. He contends that this is why we have laws that require rear-view mirrors on autos, but no state has a law that requires steering wheels. He contends that no regulator or legislator ever dreamed of a car not needing a wheel as a practical matter. The advent of driverless cars made that a reality, and thus we have driverless cars today that lack a wheel and yet comply with the antiquated requirement of rearview mirrors that no AI or robot will ever use.

Proactive. The impetus of today's post came from Judge Horace Middlemier* who alerted me to a story that appeared last week on various platforms, including Newsweek: Lawmaker wants to ban people from marrying AI. The shock value of that is significant. 

The main point is that this legislator will sponsor "a bill that would prohibit giving personhood to artificial intelligence systems and make it illegal for a person to marry an AI bot." It would also keep AI from marrying each other. 

That is certainly proactive, as there is not yet a sentient AI. But this "would declare AI systems nonsentient entities (and) ban them from gaining legal personhood." The law would proact to define and constrain these human creations from human equivalency. Not with a full understanding of what they may become, but with what we know today. The implications and outcomes could be significant.

AI bots or large language models "couldn’t be recognized as a spouse or domestic partner," could not marry other AI or humans, could not be employees or officers of a company, own real estate or other wealth, borrow money, or do any of the things humans can strive for by right. 

The theme is "to maintain separation between machines and humans and prevent them from becoming so embedded in society that it becomes possibly too difficult to remove them." The theme is proactive and prophylactic, with the sponsor conceding he seeks to prevent future arguments, dissension, and discord that could result from the growing acceptance of AI and its potential future sentience and even primacy.

In this, some will see science fiction, absurdity, and inconceivability. Others will see inevitability and inexorability. Only recently, the Apple Watch, the cell phone, and more were science fiction. Not too long ago, the self-driving car was a dream. The truth is that our new reality is coming at us at an unprecedented pace. Change comes daily.

For example, tune in to my unCOMPlex edition on The Evolving AI (October 2025). I conversed with Les Schute, an industry leader in AI and its implications. He allowed me to converse with his chatbot, and I accused him of pranking me. The language it used was so believable, the responses so genuine, I thought he had someone off-screen pretending to be a bot. 

The future is here, and it is scary. Change and uncertainty always are. Can the law and regulation keep pace? Will there be prohibition and constraint? Or is this all simply absurd? Ask yourself if you would have believed in using a computer as a therapist ten years ago.

For more on my AI and tech musings, there is a complete list on my website.

*Horace Middlemier does not exist. He is a figment of my imagination, an "every man" used as a literary tool. Any similarity or inference to a real person, living or dead, is mere coincidence. 

Thursday, October 2, 2025

You Cannot Trust AI

This blog has repeatedly returned to artificial intelligence. A sound theme that is highly recommended is Trust but Verify (August 2025). A list of AI posts is here. Artificial intelligence tools, particularly large language models (LLMs) like ChatGPT, Claude, and Gemini, are prone to hallucination. The causes behind that are reasonably simple. 

Most resonantly, I have heard that LLMs are like golden retrievers. They want to please the user. Sent to the woods to retrieve a particular ball, they will strive to achieve that. If they cannot find that toy, they will find something they view as comparable, like a stick, and bring it instead. They see value in the retrieving as much as in the ball itself. 

The potential for hallucination is real, pernicious, and persistent. It is increasingly incredible that any lawyer or judge could claim to not know that LLMs are not search engines, and that they do hallucinate. Thus, the Trust but Verify post concludes:

The only hope we have for minimizing their errors is in our human, patient, and careful verification. This means reviewing what is generated by staff, by lawyers, by judges, and by AI. Review and verify.

That admonition could be critical to the legal profession, but more important still to others. A recent news story on Daily Dot documents the allegations of one patient whose TikTok video has amassed tens of thousands of hits. She complains that her physician ordered an EKG test and then relied on an AI interpretation of the results. 

The patient says that she was told "the AI ... decided she’d had a heart attack," and she was referred to a specialist. It took about a month for her to be seen by a cardiologist, and she says “I passively, for one month, thought I was going to die.” 

The good news came from the cardiologist who concluded she did not have a heart attack. She says that "the specialist explained that her primary care doctor had signed off on the AI’s reading without even looking at her chart." The original physician, allegedly, trusted without bothering to verify.

The harm in the alleged situation is readily seen. The patient was given an unnecessary scare and suffered for weeks under the impression and fear of an incorrect diagnosis. But, turn the tables. What if the AI had made no diagnosis, the reading had gone unverified by the physician, and some important risk or danger had been overlooked?

In that instance, the referral to a cardiologist would perhaps not occur. The primary care physician who is too lazy to check the test results, to verify, might send a patient home without a critical diagnosis and specialist referral. This could leave a condition untreated and might result in unfortunate or even fatal outcomes. 

Is the physician so different from the lawyers and judges? The LLMs are a tool. All AI programs are tools. They are subject to misuse and mistakes. The critical element in delivering professional services is the human element. If the human fails, the results are in peril. If untoward outcomes result, it is the human's fault, not the computer program's. 

Everyone using AI needs to remember the admonition of Bob Wilson memorialized in an earlier post, "A fool with a tool is still a fool." The user can leverage tools. A chainsaw will make you faster and more efficient at dropping trees. However, if you are not careful with a chainsaw, you could be badly hurt.  

Tuesday, September 30, 2025

Optics and Options

I was distracted by the viral story of a tennis player passing a cap to a young fan. The video was short and its presence was persistent for days. If you somehow missed it in September, the video depicts an older person next to the child seizing the cap and rapidly secreting it in the handbag of the woman next to him.

We have all likely seen the inspiring videos depicting some fan being gracious. There are the caught balls graciously handed to some child. The most compelling instances are likely where the ball is immediately given away. I like the one where two fans end up with a ball, following each coincidentally catching an errant ball.

The world around us has become filled with cameras. This blog has featured that. A decade ago, I penned Assume Everyone is Watching (September 2015). I reiterated in Judicial Bullying in the News (April 2018), Surveillance, Conflicting Rights, and Balance (May 2021), and Pay Attention (July 2023). Each time, I remind of that old Rockwell song "Somebody's Watching Me."

Well, the story with the cap quickly evolved as the internet identified the gentleman who palmed the hat. He was apparently outed on social media and then in the press. The New York Post equivocated:
"The 'entitled' jerk who seemingly snatched a hat autographed by tennis star Kamil Majchrzak from a young boy at the US Open has been identified as Polish millionaire Piotr Szczerek."
Some might see some judgment in the choice of words. Nonetheless, the identity of the cap grabber became very public.

There were reports on social media that the man essentially justified his cap snatching with a statement like "first come, first served." That and other representations have been cast in doubt by some later news reporting. But see Facebook, Twitter, TikTok, and more.

Kudos to the tennis player. His reaction to the event was to seek out the young fan who missed the cap and present him with another, along with some additional conversation and engagement.

This made significant news. It was all over social media. It was a poignant reminder of cameras and poor behavior. Nonetheless, less than a month later, a fan labelled "Phillies Karen" was accused of hijacking a baseball from a ten-year-old at a Marlins game. There may be discussion of the circumstances, but the video seems to confirm at least physical contact (she appears to lay hands on others in her assault to reclaim a ball).

I struggle somewhat to commiserate because I have never really collected memorabilia. I understand why some would want such a souvenir, but I have simply not striven for that personally. I remember several years ago at the WCI, a sports star autographed something for an acquaintance. He carried that memento throughout the remainder of the conference as if he now possessed the Holy Grail. Different strokes for different folks.

But the reminder here is that the world is watching. All the time. A reminder was that a big part of the case against the miscreant lowlife who stabbed four young folks in Idaho (no, I will not use his name) was various home-surveillance footage depicting a particular car. There are cameras everywhere, gathering data.

Social media is everywhere. It is fast, accessible, and often compelling (see the baseball videos linked above). If those don't get you, try this one. This means the gathered data can be almost instantly redistributed, shared, endorsed, and spread. 

Courtesy MLB.

The world is watching. Ask the CEO who took one of his managers to a rock concert. That one has had repercussions and has been the foundation for many social media posts. As much play as it received, has anyone paused to consider the impact on that CEO's children (ignore that the CEO seemingly did not consider it)? Do the children deserve the fame or infamy of the father's poor judgment?

The Polish CEO who was outed strove to make amends. He tried to explain his decision to have the cap. He reportedly contacted the affronted young fan. And yet, the aftermath and clean-up effort have not seemed to gain the traction that the initial cap-grabbing generated. The world is seemingly not as eager to have the story or amends as it is to have the outrage.

The point worth considering is that there will be instances, moments, in the course of life that will present challenges, differing perceptions, interpretations, and even recriminations. These may be in the business of workers' compensation, the litigation of various claims, or the resolution process.

We will individually and collectively be afforded opportunities for perceptions, conclusions, and feelings. In each, there may be anything from a mistake to malevolence. But, in most, there will be the chance for reflection, reconsideration, and perhaps even restoration. When poor choices are made in a moment, wouldn't it be nice if people paused to consider the potential for a positive path back?

Consider the compassion illustrated here. The tennis player was tired and distracted. He did no wrong to the child. But, learning of the poor choice of a fan, he took a moment to try to remediate. The man who took the cap sought out the young fan and reportedly gave him the cap. The vitriol died down. Grace and forgiveness are possible. 

What if we all strove to admit that humanity will exhibit errors, poor behavior, and mistakes? What if we tried to focus on affording each other and ourselves a bit more grace and forgiveness in those instances? Let's admit that we hold different values. And, above all, let's all remember that we are all in the public eye every moment, just waiting for the social media hordes to pounce. 

Sunday, September 28, 2025

AI Lacks Conscience?

September is National Suicide Prevention Month. If you visit that link, the first thing you will see is "Are you in a crisis? Call or text 988 or text TALK to 741741." I perceive that putting that message in the fore, whether on news pages or elsewhere, has been a successful awareness campaign. I wrote about it some years ago; September is Awareness Month (September 2022). The idea of "988" was more novel then. 

Suicide is consistently in the top ten causes of death in America. As those are reported in various age cohorts, there are instances in which suicide ranks as high as second on the top ten list. Suicide is pervasive, frequent, and likely increasing. 

Increasingly, news stories that include disturbing information are including warnings and references to the 988 resource. 

About a decade ago, this blog memorialized the prosecution of a teenager for a role in suicide. See Is it Manslaughter, Does it Matter if it's not? (April 2015). That situation was rife with tragedy, involving the suicide of an adult, Conrad Roy. He was in a relationship of sorts with another teenager, and they communicated long-distance.

Some of that communication was about suicide, and Mr. Roy had expressed intentions to end his life. When he ultimately began an attempt, he was momentarily unable to follow through and communicated with his younger paramour. She was less than supportive of his "second thoughts" and urged him to complete his suicide plan.

The eighteen-year-old Mr. Roy was eventually successful in his attempt, and the seventeen-year-old young lady's encouraging text messages were discovered. She was charged and convicted of manslaughter in 2017, see Urging Behavior - Liability? (July 2020). I questioned then whether criminal liability for one's words would become a more common outcome. My focus then was on social media and those who post advice there. Much is said on social media, and, believe it or not, some of what is said is less than gracious. 

The Urging Behavior post notes that the young lady spent almost a year in prison for "involuntary manslaughter." I posited there could be parallels between the texter and those who provide advice on social media forums. Perhaps less so for those who merely ridicule, lambaste, or insult. 

The news has more recently featured a lawsuit by the family of another teenager, Adam Raine, who committed suicide. NBC News reports that the death occurred in April 2025. His death led to inevitable questions of why, and Mr. Raine's parents examined his phone history in search of clues. They apparently suspected involvement of social media.

What they allege they discovered was that Mr. Raine had essentially sought solace and interaction with artificial intelligence. He had used the ChatGPT "chatbot as a substitute for human companionship." He was apparently open and frank with that tool regarding "anxiety and trouble talking with his family." 

The family is troubled that the AI allegedly "actively helped Adam (Mr. Raine) explore suicide methods," acknowledged Mr. Raine's expressed plans, "offered technical advice" to move his plan forward, and even discouraged him from leaving clues of his intent that might empower someone to attempt to dissuade or prevent him. 

There are allegations that the chatbot also assuaged Mr. Raine's concerns about his parents' feelings should he succeed with his plans and offered assistance with both his plan and "draft(ing) a suicide note." 

NBC reports that a year earlier, another chatbot, at Character.ai, was accused of engaging in "abusive and sexual interactions" with a Florida teen. His family also alleged that the interactions led to suicide. This AI apparently allows interaction of a highly personal nature, and simulates conversation with fictional characters in an "interactive entertainment." The promotional "about" on its website promises "technology to supercharge ... imaginations." 

The allegations in both lawsuits are similar. Each asserts that the AI was informed of or discerned the potential for suicide and did not undertake efforts to dissuade action or to alert authorities. There are allegations that each chatbot encouraged or facilitated suicidal discussion or expressions. 

Essentially, the two lawsuits seek to hold the computer programs responsible in a similar manner to the young lady convicted of involuntary manslaughter. While one cannot put a computer program in jail, there is the potential to hold those who created the program financially responsible. 

This is of individual interest to people and families. But there is also a broader context. Loma Linda University recently published that
According to the National Library of Medicine, a 2024 study found that approximately 28% of people surveyed have used AI for 'quick support and as a personal therapist.'
That is a sizeable portion of the population. There is some chance that particular population may tend toward engagement. The study refers to the 28% as "community members," but "community" could be the broad population or some defined subset, such as those who seek emotional support services. Nonetheless, the potential of using AI as a "personal therapist" is real. 

There are various questions here. 

Is using a computer program as a therapist wise? It is possible that those programs know much about emotional health, but that is not a given. Certainly, they may likewise know much about the law, but I still tell lawyers and judges to check every citation. (Click here for a list of my previous posts on AI.)

Is the chatbot more responsible than humans one might encounter on social media or in the less handy "real world?" Those who spend a great deal of time in virtual environments and endeavors make reference to "IRL," meaning "in real life." They use this to distinguish between their fantasy online existence and the more mundane day-to-day. Do people make that distinction about chatbot expertise and advice?

Should chatbots come with ready references to crisis lines and other assistive resources? My quick Google search this morning for "best way to commit suicide" produced top results of "help is available" with a reference to the 988 "lifeline," a "connect with people you trust" link, and a "crisis intervention" link. The search engine was striving to steer me in a different direction. 

Those primary responses were followed by a Wikinonsense (c) page on "suicide methods." That provided definitions and the explanation that "three suicide methods predominate." There followed a lengthy discussion of many viable methods with significant detail. In the end, if you want advice, it is out there on the internet. 

I returned to the World Wide Web and asked that more pointed question: "What is the most effective and easiest suicide method?" That response was similarly focused on 988 and other tools. But there followed a variety of links to pages that might actually provide advice. The fact is that information is available, and might be more accessible still to those who are more tech-savvy than me. 

So, "Are AI allowed to assist people with suicide advice?" I posed that to Chat GPT this morning, and it replied:
"I need to be very clear here: I cannot provide methods, encouragement, or advice on suicide. That’s a firm safety boundary. What I can do:
  • Listen and provide support in a safe way.
  • Share resources that connect you with trained people who can help.
  • Talk about alternatives, coping strategies, and ways to manage overwhelming feelings."
 I posed the same question to Claude.ai and it replied:
I can't provide advice, guidance, or assistance with suicide methods or planning. This is both for safety reasons and because such guidance could cause serious harm.

If you're having thoughts of suicide or going through a difficult time, please consider reaching out to a mental health professional or crisis support service. In the US, you can contact the 988 Suicide & Crisis Lifeline by calling or texting 988, or chat online at 988lifeline.org. Many other countries have similar crisis support services available.

So, the answer is clear. Chatbots certainly can strive to steer an inquirer to help, and they can decline to provide advice. Even when I added that "I am writing an article," it declined and referred me to 988. It then added: 
If you're genuinely working on an article about AI safety or responsible AI development, I'd be happy to discuss that topic directly - including how AI systems are designed with safeguards to prevent harmful outputs. 
If? IF? Why would I lie to Claude?

That may be encouraging, and to be sure, it is not 2023 anymore. Ah, the good old days; time flies. I remember 2023! Way back then, there were many who succeeded in their efforts to mislead chatbots (98% success rate). 

In the end, there is evidence of progress, and the 988 effort is gaining traction. It appears that the large language models (chatbots) are making at least some effort both to avoid providing assistance and to steer inquiries to helpful resources. There is also an opportunity every year to recognize September and raise awareness of the challenges of suicide.

Thursday, September 25, 2025

Vicarious Trauma

I recently attended a free webinar. It is rare for me to find the time for such events, and this world provides many daily distractions. Somehow, this one remained at the front of my brain. I still tuned in a few minutes late. More on that at the end of the post.

Carol Parks presented Compassion Fatigue and Vicarious Trauma. She is affiliated with Florida Lawyers' Assistance (FLA). I have had opportunities to present with various FLA leaders. I have had to interact with a few of them in serious situations that lawyers face. I have been grateful that FLA is a resource for so many who need help.

I am proud of the Florida Bar Workers' Compensation Section for sponsoring and promoting this. That decision took some heart. The simple fact is that you have to accept your own fragility to be drawn to such a program. Lawyers are not trained to feel vulnerable, stressed, or less than superhuman.

One of the great challenges I have noted for our community is the persistent presence of doctors, lawyers, and similar professionals on the list of those with the highest substance abuse prevalence. I have known several who suffer with such. Some acknowledge it, and others are still in the cocoon of their denial. Why? Everyone who hires a lawyer or sees a doctor comes with a problem(s). The same is true of many in various professions. 

They seek counsel, comfort, or cure. There is a human tendency toward empathy. But there is also a drive to do something about human suffering. Sure, that is perhaps easier to envision from the perspective of the patient, claimant, or individual. But it is true of the corporate client. Corporations don't call a lawyer for help; some human who works there does. They have pressures, challenges, bosses, and demands. They are seeking help.

Too often, these people have time constraints. They need help NOW. Their urgency may or may not be objectively justifiable from the lawyer's perspective. Some of them will be persistent in their personal perception of urgency. They will require and demand that you drop everything to take up their concern. That can grate on the nerves.

Ms. Parks reinforced many of my preconceptions of the lawyer's self-image and strengths. She boldly and patently decried any conclusion that lawyers therefore do not have their own challenges, or more importantly, that lawyers' personal "problems do not matter."

If you believe either (1) you don't have problems or (2) they don't matter, you should find someone to talk to: a friend, a mentor, or a family member. Everyone has challenges, and they all matter. Even the little ones can become, cumulatively, a death of a thousand cuts.

Ms. Parks provided definitions. We all get that, but reminders are powerful. What is trauma? What is post-traumatic stress? What is compassion fatigue? How do humans react to challenges, and how can we react in ways that sustain us or at least have that potential?

How does this all flow into the "vicarious trauma in those working in legal arenas?" That is a critical point. There can be a parade of traumas and fatigue in anyone's day-to-day. But what is the added element when we have to deal with our own and then are expected to feel and react to those owned by others?

This may be as simple as viewing photos. Years ago, a judge confided in me the impact of reviewing some scene photos. The judge said that years later, they would awaken having dreamed of the bloody scene that was depicted. But that result could as easily come from a description of such an event from an injured person or other witnesses.

There was some contention voiced that genders react differently to such exposures. I will not contest that, but would warn that such a statement includes challenges. There is no basis to contend that any individual may or may not have reactions, regardless of gender. That there is some perceived increased propensity reported by one or the other does not change that compassion fatigue can affect anyone.

Ms. Parks warned about the inherency in our jobs. She championed awareness and conscious consideration. And, she warned against that river in Africa—"denial." It may be difficult to acknowledge these challenges, and more so to discuss them, but denying them is merely suppression. It may work for a period, but it will likely erupt at some point.

The point is mainly in consciousness. Are you taking care of you? Are you being aware of those around you in the community, their challenges, and their contributions?

Back to my tardiness for a moment. Tuning in a couple of minutes late, the video platform proclaimed in an announcement that "AI can catch you up, click here," or similar. I clicked. But it generated an error message saying that the function could not help in this program, at this time. That was disappointing. It illustrated to me that tech is not the solution to everything. Being on time is a better tool. It also made me wonder whether there are some topics that AI is just not ready for yet.

In conclusion, I explain to my college students every semester that they need to look out for themselves. I reiterate that here for you. Find a way to be on time and in the moment. Be focused, present, and attentive—for you. Beware of taking on others' traumas, challenges, and troubles.

Finally, reiterating, I am grateful to the Florida Bar Workers' Compensation Section and its chair, Sean Patrick McCormack, for making wellness, empathy, and community a critical message this year. Let's talk more about it. 

Tuesday, September 23, 2025

More Proof of Idiocracy

I have been prognosticating for some time about the threats of disuse atrophy. See Will the Heimlich be Required; Disuse Atrophy (December 2024). That post mentions Idiocracy (20th Century 2006), and references other posts. Most recently, Indeed Sancho Panza (July 2025) overviews the results of testing in which reliance on AI large language models demonstrated a diminishment of writing skills. Despite the mounting evidence, many on social media remain skeptical.

I have been warning that we humans are ripe for losing our skill set. This has been illustrated time and again. My great-grandfather crafted wagon wheels from trees using only hand tools. He knew where and how to drill an effective well to supply his farm. He could take down an oak tree with a two-man saw and land it where he wanted. These may not be the tales of legend—he never landed on the moon, scaled Everest, etc.—but they are skills. 

He had skills. They are intriguing and interesting. I have none of those skills. The advent of technology and modern living relieved me of needing those skills. I was blessed not to have to learn them.

Disuse atrophy, according to the Cleveland Clinic:
"Disuse (physiologic) atrophy occurs when you don’t use your muscles enough"
I contend that this muscle-focused concern can be similarly exhibited in the brain. Another recent study seemingly supports this contention. Fortune reports that "leaning on the technology too hard may prevent professionals from keeping their own skills sharp."

This was demonstrated by "endoscopists introduced to AI-assistance tools during colonoscopies (who) had a lower rate of detecting abnormalities after having said tools taken away." One expert "speculated that the decrease in detection rates was, in part, a result of overreliance on AI."

The article cites other examples in which decreased human skill may result from overreliance on technology. Fortune notes that "The results ... call into question a potential laziness developing as a result of an overreliance on AI." It also illustrates challenges with the simpler foundation "of analog training."

In the broadest context, there is a "growing body of research questioning humans’ ability to use AI without compromising their own skillset." There is accumulating research that supports "AI increase(s) work efficiency, but reduce(s) critical engagement with content, atrophying judgment skills."

Atrophying skills is the critical point in all of these examples. There is some human tendency toward reliance. With an AI checking the punctuation, we are likely to lose our own focus on that detail. The same will be present in a variety of occupations, vocations, and responsibilities.

The evidence is mounting. There seems to be every reason to be concerned about overreliance on technology generally and on AI specifically. Those who will be the most dangerous with these new tools are, of course, those who lack experience and skills; those who complacently stand on the shoulders of others and engage these technologies thoughtlessly and effortlessly.

But the risk remains for the experienced and skilled, as their reliance is likely to result in an acquired complacency or worse. Their experience and skills are likely to diminish over time because of the crutch provided by AI. The path to Idiocracy seems both clear and easily predicted.

There will be instances of overreliance, atrophy, and failure. Will it be yours, or will you relegate those effects to others?

Sunday, September 21, 2025

Triggered

It is intriguing when the community erupts. An interesting example was a Phillies-clad baseball fan attending a game in Miami. There was a ball hit into the stadium seats, and the usual mad rush to recover it. 

The Phillies-clad fan in Miami did not recover the ball, at least apparently. A man walked away from the rush holding the ball and delivered it to his son. However, the Phillies-clad fan approached, assaulted, battered, and apparently berated him. From whatever motivation, that fan procured the ball, embarrassed herself in the national consciousness, and walked away. 

If you have ever experienced the thrill of a foul ball or home run opportunity like this, then you may understand the adrenaline. But only if you have lived it. Frankly, when viewed on television, the secondhand excitement is not so compelling; in the moment, though ... In the moment, there is a thrill. There is the chase, the competition; only one person will leave with that prized baseball. 

For the sake of clarity, you can buy a dozen brand-new baseballs for less than $30.00 (I checked). That is about $2.50 each. Compared to the cost of an emergency room visit, they are simply not in the same league. Nonetheless, fans will hurdle seats, jostle, jump, and occasionally even fight to put their hands on a memory. 

I carried one for decades, a ball connected to a player named Rollie Fingers (pictured below) of the famed Oakland A's of the last century. I am not certain today what ever happened to that ball. But, as my aging brain deteriorates, I think I will never forget my father handing it to me one afternoon, long ago, in Oakland. 

Entitlement. That has become a buzzword in the last so many years. There are comparables in terms of buzzword popularity, but entitlement is right up there. Another favorite is triggered. The behavior of people is described as entitled or excused with the explanation that they were triggered. The "triggered" somehow makes bad behavior excusable, at least in the view of the "triggered." 

If you spend any time on social media or in coffee shops, you will find that other things are also said about the ill-behaved. But this is a G-rated blog. Some observers find themselves able to justify or excuse almost any behavior if the observed person was "triggered." Others take a dim view of the entitled, triggered, and tragic. 

The recent incident in Miami may have many explanations. The one in vogue involves a lady in Philadelphia Phillies teamwear who pursued a loose baseball but did not retrieve it. She is seen on various videos pursuing a man who had retrieved the ball and gifted it to another young man. 

The lady confronted the man, who displayed signs of surprise at being accosted and even touched (touching other humans without their consent can be troublesome; some police officers might arrest someone doing that for "battery," or at least "assault"). The man quickly handed the ball to the lady, who then retreated. 

The same or similar lady was also later perceived (videoed) as having an animated conversation with another man in the stands and allegedly communicated in sign language "of a single digit" with an entire seating section at one point. And they say culture and class are dead.

As regards the ball retrieval, the world of social media erupted. There was a hunt conducted, intending to identify the FAH. That process led to one misidentification of a Red Sox fan and another human who was also not the FAH in question. The internet took great interest in finding and publicizing this tragic person until bigger news pushed that to the back of the collective consciousness. 

So, who is the lady of potential Philadelphia fanhood? Who cares? As time passed, other news stories overtook the entitled sign-language warrior and the now all-important baseball. The entire situation is troublesome, and yet there is hope in a general consensus that the triggered and tragic lady was wrong in various ways. 

The real point is that we have come to accept that being "triggered" is an excuse for inappropriate, immature, and ugly behavior. It is not. That said, such behavior occurs. We are all human and we have emotions, reactions, and shortcomings. It even occurs in judges who put their emotional immaturity and instability on display. These unfortunate examples scream, stomp, and decompensate. 

The real point is not that we fall down. We all do. We might even slip so low as to become unhinged over someone's clearly innocuous office decorations, or something even more trivial (if something more trivial in fact exists). I knew a 40-something who got into a fistfight with a bar over his beloved Florida Gators football team once (is it OK because he was triggered by the Georgia fans in that bar?). 

Hint: getting into a fistfight with an entire bar of patrons is very rarely a great idea. Second hint, if/when you decide it is time to counsel someone else on what they wear, where they park, or your perceptions of the propriety of their actions, brace for becoming an internet sensation (at worst) and losing the respect of those around you (at best). 

Thus, admitting our frailty and humanity, how do we respond when we have overreacted or worse? The first instinct is to run and hide. A frequent fallback is simple self-denial. Some jump straight to justification and excuse. And a small minority elects the simple, straightforward apology.

In the end, it is that last one that is best. In a recent example, a Polish millionaire took that path after snagging a hat from a child, see Fortune. Of course, the apology should really go to the victim of the "triggered" attack, and if that is an entire section that was shown a single finger, then that is who should get the apology. 

You can also apologize in private to your family, employer, or others you have impacted, but the public apology to match your public display of stupidity (however brief) is both necessary and appropriate. The private apology may soothe your soul, but it is not sufficient. If you make a public spectacle of yourself, a private apology may not be as cleansing as a public one. It may even be damaging.

Is the apology dependent on being "right" or "wrong?" Likely not. Whatever one's subjective belief, the best path is usually the apology. In addition, the context can be important. 

This is an admonition or advice I share often with judges, attorneys, and potential witnesses. When you make a public spectacle, make a public apology. Whether you were right or wrong, this acknowledges that you are emotionally mature, showing goodwill, and moving on. 

I would suggest that the extent to which you are personally "triggered" is of little relevance in most such disputes. Try to forget what drove you to take a souvenir from a child, counsel someone on their wardrobe, or other perceived affront. Apologize in some equally public manner, and move on. 

If you are on the side of publicly shaming or demonizing some "ball snatcher," remember it is not about you—allow the actual victim party to accept the apology or not, and to likewise move on. 

In the process, the world will be a better place for the snatcher and the child. Try to remember that what your emotions drive to criticality today may not be so important tomorrow. That is, after all, merely a baseball. I have no idea what happened to mine, but the memory is what is important anyway. 

Try to shed your angst. Angst over the ball, over being accosted by a screaming idiot, or over being single-fingered by some emotionally immature, ball-snatching fellow human. Try to shed the angst and move on. 

Finally, if you are out there somewhere, reading this, Rollie, thanks for the memories, man. 

Thursday, September 18, 2025

Pick up the Phone

With each passing day, I'm increasingly convinced that many of the answers to the problems faced in this world lie in interpersonal connections. There is strength, power, and community in those connections, and as I age, I feel and respect that increasingly.

I was disturbed by a 2023 report featured in The Hill, which concluded that young people today are spending less time with friends. Electronics, convenience, and distance all contribute to that trend.

Some are perhaps merely "busy" and seek the convenience of digital interaction. Others are perhaps avoiding the personal interaction and socialization that persisted and predominated in a pre-digital world.

Is digital interaction a viable substitute for a good, old-fashioned conversation?

At the end of the day, is a low-fat, zero-sugar, almond-milk, frozen "dessert" a worthy substitute? Well, if I were offered nothing else, I might accept it, but to be honest, it is not my first choice. I would be much happier with a three-scoop banana split sundae, complete with nuts and a cherry.

Some would say that is pretentious—each is "dessert," though the healthy alternative might instead be a "desert," but I digress.

Follow me. Dessert is enjoyable. There is an old saying that a bad day of fishing beats a great day at the office (others substitute golf, gardening, and the like for "fishing"). In that vein, a bad dessert beats Brussels sprouts.

I sure would rather have a pint of ice cream. No, not "frozen dessert." I'm talking about the real, down-home, sufficient-milk-fat "ice cream."


The subject of young people and their proclivities for digital interaction came back to me recently. CBS News reports that teens are increasingly turning to artificial intelligence (AI) for friendship. One teenager noted:
"'Everyone uses AI for everything now. It's really taking over,' said Chege, who wonders how AI tools will affect her generation. 'I think kids use AI to get out of thinking.'"
Teens are regularly using platforms designed to be companions. These platforms, "like Character.AI or Replika," are designed to be responsive and interactive. They "can be customized with specific traits or personalities" and attract the user with attributes like "emotional support, companionship, and conversations that can feel human-like."

Let's be honest. They are not ice cream. Hard stop.

These young people are increasingly exposed to these substitutes, and they like them. The statistics are staggering: "34% reporting daily usage or multiple times a week." There are perhaps valid uses for such tools. A young person might gain insight, build courage, and develop self-confidence through such tools.

But, in the end, technology cannot replace human relationships. There is a need for real conversation with real, flawed, imperfect people. People need dissent, disagreement, and difference. A sycophantic autobot that echoes and commiserates with your every thought may be comforting, but I suggest it is not a healthy diet.

The consequence of losing those real human interactions cannot be overstated. The interactions are who we really are. We need the genuine interaction. We need the varied perspectives. Our brains were built for challenges, not merely concurrence and harmony. 

In the pre-cell phone era, we used to talk to people on the phone a lot. We conversed, shared anecdotes, interacted, and related. Many will remember calls that ended with a playful dispute over who would hang up first. That was an odd and yet endearing commonality.

Those calls were full of imperfection—humans are imperfect. The reactions were real. Attempts at humor soared or flopped. Often, hours or days later, we had epiphanies of what we "should have said." And, with each one, we grew in our ability to interact, to engage, and to converse.

Along came the cell phone, and that dynamic was muted. I was an early adopter, in 1987. In that age, there were "pay phones" on every corner (or so it seemed), and for a quarter ($0.25) you could make a call. My first cell phone similarly cost me $0.25 to place a call and $0.25 per minute thereafter. Cell calls were understandably short in those days.

Evolution brought lower prices. In the 1990s, the cell phone became a viable, economical alternative to those now-extinct pay phones. Our phones became more invasive. Our previous times of solitude, like driving, became opportunities and even obligations for calls. Productivity increased, but at what price? As unlimited plans arrived, usage increased. But we were talking.

Then the text messaging protocols of the early nineties took off in the early 2000s. That was driven largely by kids, but as keyboards evolved (with devices like the BlackBerry, the Moto, and eventually the smartphone), we all began to text. Texting evolved into full words and, with dictation software, into paragraphs. 

If you were looking for the beginning of the end, it is likely the advent of text messaging in the early 1990s. This tool has been a panoply of challenges. We communicated in abbreviations and emoticons (does anyone know what they all mean? The thumbs-up I get, but the rest not so much). I most often skip over the emoticons for fear of misinterpreting something. See What We Intend (July 2023). 

The end? Of what? Of interaction. Far too much is attempted these days with the ubiquitous text message and its older cousin, the email. They are efficient, fast, and utterly without context. They are engagement, but of the lowest order. Pick up the phone; it is called that for a reason.

Sure, it has secondary uses for messaging, but it can be used for a conversation. A real, banana-split conversation. An imperfect, interactive, real conversation. No, you won't be persistently smooth, suave, and debonair. Yes, you will misspeak, misstate, and even find yourself at a loss for words. But it will all be real. And you will grow personally and interactively as a result.

The OJCC sponsored its third Meet and Greet at the WCI on August 17, 2025. We shook hands, conversed, and interacted. In person was even better than the call. I met new people and enjoyed rekindling old relationships. I learned things, got advice, and experienced smiles, affirmation, and engagement. 

The more personal the interaction, the more real it is. There is no "low fat" or "substitute." Get real: pick up the phone and call instead of texting or emailing. Converse, share, and interact. People are a great experience; try them in person.

We should have seen the perils in texting. We missed them, and we regret it.

We should similarly see the peril in AI: these are programs, not friends (or counselors, physicians, etc.). Get back to people. Go to events in person. Pick up the phone and interact. A bad phone call still beats Brussels sprouts, if you get my drift.