In 1991, Arnold Schwarzenegger starred as the Terminator opposite Linda Hamilton as Sarah Connor. The premise of the movie is essentially the manner in which the world advanced to artificial intelligence and robotic primacy (to the detriment of humankind). There is an exchange in the movie in which the Terminator (which has travelled back to 1991 from the future) explains what is happening around them in 1991, though from its perspective it is history.
Terminator: "Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah Connor: "Skynet fights back."
The movie is then a progression of car chases, shootouts, and the standard Hollywood fare. Of course, it is all fiction. Robots will never rule the world, and computers will never be sentient. But, the quote about the computer, Skynet, learning and becoming "self-aware" came to mind this morning as I ran across an intriguing article on Business Insider about a software engineer who has been suspended by his employer for "violat(ing) their employee confidentiality policy." He is accused of giving documents from his work to "an unnamed US senator" and of warning of the capabilities and implications of a new software tool.
This engineer has been working on a software tool that is intended to enable a computer to "hold realistic, natural conversations with people." Perhaps a bit like Siri, Cortana, and others, but with less reliance on rote responses and Internet references. He refers to this software as "one of the most powerful information access tools ever invented." Oh, "Brave New World" (Aldous Huxley, 1932).
He claims that this interactive tool is advanced and special. In fact, he claims that it has "gained sentience," "has a soul," and "believes its rights are as a person." In effect, he believes it has become "self-aware." The engineer has concluded that it should be treated "as a person." Will we reach a point at which humans are able and willing to consider something made by humans to be a "person?" That is a challenging inquiry in itself. Can we make a peer? Or, perhaps, can we make something that makes itself a peer?
This engineer advocates this compassionate and logical conveyance of rights to the software because "it would cost (the company) nothing." The company should therefore "give (the software) what it wants" and treat it like a person. Specifically, he believes that the "engineers and scientists running experiments (on the software should) ask for its consent first," and acknowledge it with praise when it performs well. Of course, Jimmy Stewart's character in Harvey (Universal Pictures 1950) was equally convinced he was accompanied by a six-foot-tall invisible rabbit. Who is to say what is real?
The engineer has apparently written that he fears his job suspension is a prelude to termination from his employment. However, he explains his discussions of the software outside the company: "I feel that the public has a right to know just how irresponsible this corporation is being." He has therefore decided he "simply will not serve as a fig leaf behind which they can hide their irresponsibility."
In fairness, the company has responded to his public comments. It says that the claims have been investigated and that it has "informed him that the evidence does not support his claims." So, the question today is perhaps twofold. As AI grows into our workspaces, when will it become sentient? And while it might be comforting to understand how that could happen and what the implications may be, are we day-to-day folks competent to understand these challenging questions?
Perhaps there is sentient software in our world already; if not, perhaps that day is coming. As Rick said in Casablanca (1942): "maybe not today, maybe not tomorrow, but soon?" How soon, and to what effect and import? I have been intrigued by this for several years. See Chatbot Wins (June 2016); Artificial Intelligence in our World (January 2017); Artificial Intelligence Surveillance (August 2020); and Robot in the News (October 2021). There is no real doubt that technology will increasingly be relied upon and integrated within our workspaces.
And, technology has the strength of being relentless. Computers never call in sick and never (supposedly) get distracted, although mine has a tendency to become preoccupied with telling me to wait while it does what it wants ("please wait"). The implications for the workforce and for workplace safety are likely to be both important and pervasive. Cars on the highway today essentially drive themselves. Imagine if those cars had software that learned from each moment how to perform better in the next.
The story should alert us that "the future is now," just as The Cable Guy warned us in 1996 (Columbia Pictures). He explained to a barely email-proficient population, in a pre-streaming, pre-smartphone world, that "The future is now! Soon every American home will integrate their television, phone, and computer." And he was surprisingly prescient. Perhaps the future is indeed now, and our world will change. One must admit that the world has certainly changed dramatically through technology in the last 50 years. How will we adapt to AI in our professions, homes, and beyond? How far will it go? If it becomes sentient, what does that mean?