Ponder why driverless robo-taxis still have steering wheels. That may seem a riddle, but read on.
There has been focus here before on the potential for machine sentience. See Rights for the Toaster (October 2024). That post resulted from my witnessing a debate among college professors about society's recognition of the rights of artificial intelligence (AI).
In this regard, it may be important to distinguish between agentic and generative AI approaches. One is relatively rote, analyzing and acting upon what already exists; the other can create new and arguably innovative material. The distinction is spelled out by International Business Machines (IBM), which was once a major leader in our daily technology use.
Neither agentic nor generative AI is yet sentient—AI does not think. Though these tools are very powerful today, they remain reactive in their production. They follow patterns and formulae gleaned from reviewing masses of existing human data, creations, and materials. Their foundations are limited to the confines of the data sets they have consumed.
Yet tomorrow beckons. Ultimately, the tech wizards strive for AI to be sentient. They seek what has been dubbed "Technological Singularity" (shorthanded as "the Singularity"), a moment when computers actually think. Built In describes this as the moment that AI becomes capable of intellectual rather than merely intelligent action and reaction. The computer would become "self-aware," and perhaps even recognizable as a "being," or life form. Older readers will perhaps see some Frankenstein parallel.
It is to this moment that my unfortunate philosophical academic colleagues look in their arguments related in Rights for the Toaster. They believe that our race (human) will succeed in essentially creating another race (computer) and that we owe it to our creation to grant or convey human rights in the process.
For instance, they argue it would be inappropriate to unplug such a computer without affording it due process of law. They argue such computers should be able to vote, travel freely, and more.
Imagine a moment in the Terminator (Orion 1984) saga. Its themes revolve around visitors from the future returning to a historical inflection point to change history. They seek to find some computer and alter or unplug it before it can wreak havoc.
Think of using a time machine to go to 1889 Austria and intercept the infant Hitler before he caused so much strife, misery, and destruction. So do the Terminator protagonists seek to alter their present, our future, by returning to such inflections in pursuit of some change. It is intriguing fiction, but no more so than the Back to the Future take (Universal 1985).
What if those time travelers had to read a computer its rights before questioning it? Must there be due process before unplugging it? The reader is forgiven if this seems absurd, mere science fiction. But know that there are already academics who not only view the conveyance of human rights to computers as practical; they view it as incontrovertible.
The topic is not new. Boston Legal (2004-2008) included an episode centered on a woman who was romantically, or at least emotionally, involved with an object to which she assigned personhood attributes. The Object of My Affection (20th Century Fox, 2007). This has been labelled Fetishistic Disorder or objectophilia, and otherwise criticized. Nonetheless, views of human behavior often differ from nation to nation, and finding absolutism is perhaps a vain undertaking.
Despite that clear present-day demarcation, many now use existing, non-sentient AI chatbots instead of psychological counselors. One source says that 25% of adults would rather use a bot than a therapist, and 80% think that ChatGPT is a viable alternative to a therapist.
This evidences that the train has perhaps left the station for many in the debate of AI capability, limitation, or personhood. And, there have been some unfortunate results. See AI Lacks Conscience (September 2025).
Back in 2015, I related some challenges of technology based on a presentation by Salim Ismail. Salim Ismail and a Lifechanging Seminar in Orlando (May 2015). He lamented the challenges of lawmakers regarding both predicting and reacting to technological change.
His contention is that technology will advance so rapidly that the law will always be reacting rather than proacting regarding it. A later post provides more on this. Misclassification and Regulation, Will Government be Nimble (November 2015).
Reactive versus proactive. He contends this is why we have laws that require rearview mirrors on autos, but no state has a law that requires steering wheels: no regulator or legislator ever dreamed of a car not needing a wheel as a practical matter. The advent of driverless cars made that a reality, and thus we have driverless cars today that lack a wheel yet comply with the antiquated requirement of rearview mirrors that no AI or robot will ever use.
Proactive. The impetus of today's post came from Judge Horace Middlemier* who alerted me to a story that appeared last week on various platforms, including Newsweek: Lawmaker wants to ban people from marrying AI. The shock value of that is significant.
The main point is that this legislator will sponsor "a bill that would prohibit giving personhood to artificial intelligence systems and make it illegal for a person to marry an AI bot." It would also keep AI from marrying each other.
That is certainly proactive, as there is not yet a sentient AI. But this "would declare AI systems nonsentient entities (and) ban them from gaining legal personhood." The law would act proactively to define and constrain these human creations from human equivalency—not with a full understanding of what they may become, but with what we know today. The implications and outcomes could be significant.
AI bots or large language models "couldn’t be recognized as a spouse or domestic partner," could not marry other AI or humans, could not be employees or officers of a company, own real estate or other wealth, borrow money, or do any of the things humans can strive for by right.
The theme is "to maintain separation between machines and humans and prevent them from becoming so embedded in society that it becomes possibly too difficult to remove them." The theme is proactive and prophylactic, with the sponsor conceding he seeks to prevent future arguments, dissension, and discord that could result from the growing acceptance of AI and its potential future sentience and even primacy.
In this, some will see science fiction, absurdity, and inconceivability. Others will see inevitability and inexorability. Too recently, the Apple Watch, cell phone, and more were science fiction. Not too long ago, the self-driving car was a dream. The truth is that our new reality is coming at us at an unprecedented pace. Change comes daily.
For example, tune in to my unCOMPlex edition on The Evolving AI (October 2025). I conversed with Les Schute, an industry leader in AI and its implications. He allowed me to converse with his chatbot, and I accused him of pranking me. The language it used was so believable, the responses so genuine, I thought he had someone off-screen pretending to be a bot.
The future is here, and it is scary. Change and uncertainty always are. Can the law and regulation keep pace? Will there be prohibition and constraint? Or is this all simply absurd? Ask yourself if you would have believed in using a computer as a therapist ten years ago.
For more on my AI and tech musings, there is a complete list on my website.
*Horace Middlemier does not exist. He is a figment of my imagination, an "everyman," used as a literary tool. Any similarity or inference to a real person, living or dead, is mere coincidence.