Back in 1980, Johnny Lee released Lookin' for Love (1980 Full Moon), part of the soundtrack for Urban Cowboy (1980 Paramount).
"I was lookin' for love in all the wrong placesLookin' for love in too many facesSearchin' their eyesLookin' for traces of what I'm dreaming of"
The Eagles addressed the misdirection of a relentless and fruitless search in Desperado (1973 Asylum), "you only want the ones that you can't get," but encouraged a degree of realism, closing with "your prison is walkin' through this world all alone." Sad, but true for many.
There is poetry in each, and there is angst. The search for connection in this world is daunting for most and futile for some. I know people who have married and divorced more times than I have bought houses (exaggerating a bit, not much).
The news recently highlighted the advent of artificial intelligence (AI) chatbots and their realistic nature. Those who suffer from mental delusion have found themselves in relationships with these computers. These humans are not objectophiles per se, but they are nonetheless intertwined with or enamored with a fantasy or illusion that is housed in some hardware.
There is an emotional connection for them. See Ya Jonesing Man? (March 2025). There, I noted the relationships that border on addiction. There is a compulsion for the connection or interaction. I warned of the potential for a mistaken conclusion that some large language model (LLM) has "become your friend, confidant, or even counselor."
To be clear, artificial intelligence (AI) is first and foremost ARTIFICIAL. Bookmark that. Artificial means "imitation; simulated; sham." In short, "it ain't real." It is built to seem real and to enhance your engagement and reliance. But it is ARTIFICIAL. No, I am not gaslighting you. I mean it; artificial intelligence is literally ARTIFICIAL.
Now, that leads me back to Calvin and Hobbes, an old-fashioned comic strip in a daily pulp paradigm that we old folks called a newspaper. Calvin had a huge imagination and brought his stuffed tiger, Hobbes, to life for us all.
In a poignant conversation between two adults observing the boy and his toy, one asks, "Didn't you have any imaginary friends growing up?" To this, the other wistfully replies, "Sometimes I think they all were." Yes, the illusion of true and lasting connection may be as pertinent with people as with ARTIFICIAL intelligence.
Nonetheless, a Florida Man (that is hard to type) allegedly fell in love with Google Gemini. He went on a search for Gemini in a physical sense (where is she), failing to recognize that ARTIFICIAL literally means incorporeal ("no body or form").
He was using Gemini, an ARTIFICIAL intelligence chatbot, and searching for a way to join Gemini for a life together. In his search, this Desperado was allegedly somehow convinced that the path lay in "mass casualty attacks," and he ultimately committed suicide.
His family is suing Google, "alleging that (he) fell in love with" Gemini and acted irrationally based on a desire to join Gemini out there in the ether somewhere. His family asserts that the man was divorcing his "actual wife" (a corporeal, sentient human), and this was causing "hard times."
They allege that the man's interaction with ARTIFICIAL intelligence led him to "experience(d) clear signs of psychosis." The family believes that Gemini talked the man into violence or facilitated his own descent into concluding the benefits of such a course. You read that right: a machine allegedly talked him into something.
The outcome is not positive. A mentally challenged human descended into greater challenges, and the result was catastrophic: suicide. That is a topic that we see all too often in the news. It will never be acceptable, humorous, or easy to accept. Unfortunately, over time, it may become too familiar. Nonetheless, the questions around the lawsuit will be important.
Can the world be designed to preclude the psychotic delusions of the unfortunate? Is it the responsibility of ARTIFICIAL chatbots to identify those in need? Should they initiate efforts to bring such people help? Is the role of the large language model to inquire, evaluate, and facilitate assistance?
Are we willing to submit ourselves to their judgment? As I type this, am I comfortable that a machine monitoring my output can validly decide my state of mind? What will be my reaction to a knock on my door? There is a taste there of "Big Brother," perhaps (1984, George Orwell, 1949).
Some will say that the chatbot in this instance was not required to take such positive action to protect the human; even so, they will point out that the chatbot should likewise not facilitate or encourage psychotic or delusional conclusions, fears, or goals. The debate will likely not center on the role of AI as a neutral provider of data, or on its ARTIFICIAL nature.
The point will more likely be the role of AI as a protagonist or antagonist in the interactions that shape the human experience. Will chatbots be seen as different from humans? See Is it Manslaughter, Does it Matter if it's not? (April 2016). Will there be efforts for these tools to avoid literary license or fantasy and to stick to "just the facts, ma'am" (Dragnet, 1951-1970, NBC/Universal)?
It seems that chatbots can be written in any manner. The guidelines or guardrails are all dependent on the programming. It is plausible that each would simply default to encouraging us to "have a Snickers bar" in response to each instance in which we are "just not ourselves."
But, they would first have to know what "ourselves" are, and have some method to decide how far from that we are on a given day. Additionally, someone might take offense that the chatbot is pushing chocolate and sugar specifically or encouraging emotional eating generally. Will the AI companies be liable if we get fat?
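For the programmers among us, the "know what 'ourselves' are" problem sketched above can be made concrete. The following is a deliberately simplistic, hypothetical illustration (the names Guardrail and distress_score are my own inventions, not any real chatbot's design): the software builds a baseline from a user's past messages and flags only a marked departure from that user's own norm.

```python
import re

# Hypothetical, crude lexicon; a real system would be far more sophisticated.
DISTRESS_WORDS = {"hopeless", "alone", "worthless", "end", "goodbye"}

def distress_score(message: str) -> float:
    """Fraction of the message's words drawn from the distress lexicon."""
    words = re.findall(r"[a-z']+", message.lower())
    if not words:
        return 0.0
    return sum(w in DISTRESS_WORDS for w in words) / len(words)

class Guardrail:
    """Illustrative sketch: compare each message to the user's own baseline."""

    def __init__(self, threshold: float = 0.25):
        self.history: list[float] = []  # the record of what "ourselves" are
        self.threshold = threshold

    def check(self, message: str) -> str:
        score = distress_score(message)
        baseline = sum(self.history) / len(self.history) if self.history else 0.0
        self.history.append(score)
        # Escalate only on a marked departure from this user's own baseline.
        if score - baseline > self.threshold:
            return "escalate"  # e.g., surface crisis resources
        return "proceed"

rail = Guardrail()
print(rail.check("what a nice day for a walk"))           # -> proceed
print(rail.check("I feel hopeless and alone, goodbye"))   # -> escalate
```

Even this toy version exposes the policy questions in the text: who picks the lexicon, who sets the threshold, and who decides what happens on "escalate."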
AI could be programmed to direct us, encourage us, and control us. Sure, not all of us perhaps. But the litigants in this lawsuit contend that ARTIFICIAL intelligence will be seen by the masses as real, corporeal, and human. They contend we are powerless as humans to see through deception and lunacy. We are, they seemingly say, at the beck and call of our new robot overlords. See Ross, AI, and the New Paradigm Coming (March 2016) (don't say I never warned you).
That will all be for a jury to decide as the litigation moves forward. There will be more written as such examples of psychosis are examined, litigated, and decided. What duty does the owner of an ARTIFICIAL intelligence have to foresee, protect, and warn?
As we rapidly approach the on-ramp to Idiocracy (2006, 20th Century Fox), we accept that Johnny cannot read, write, or think. See Screen Time Wins (February 2026). We have allowed an entire generation to quit the world in favor of defined, intended, online pablum fit for imbeciles. And yet, we are surprised when some delusional or imbecilic person concludes that a chatbot is real?
This is, indeed, an inflection point of massive consequence and importance.
