Sunday, November 10, 2024

X-Files or Poltergeist?

The "Godfather of AI" recently gave an interview to 60 Minutes that is available on YouTube. Dr. Geoffrey Hinton is a British computer scientist, and he is convinced that we will soon have "things more intelligent than us." He believes the machines can think, have experiences, and can make decisions based on those experiences. They currently lack self-awareness, but he is convinced they will become self-aware.

Humanity, in his opinion, does not understand what is here today in neural networks and certainly does not understand what is coming. The AI Chatbots have about a trillion connections in their neural network compared to the hundred trillion that you have in your human brain. Despite this size differential, the Chatbots are already more capable than we are in terms of recall and citation. 

They find and provide information more efficiently than we can. Consider that. Your network is believed to be 100 times the size of the Chatbot's, yet the Chatbot is more effective and efficient. It has a consistency in performance that is perhaps born of enhanced capability, or perhaps it is more efficient because it has less territory to cover. It finds the car keys quicker because it has a one-room apartment to search and you have a 100-room mansion.

Additionally, the Chatbot is far more consistent in putting the keys back in the same place each time they are used. As we age, we tend to find such tricks (we have "the place" where we put things). It is an aging thing because in our youth we don't need such tricks. It is as we age that the volume of things to remember starts to clutter up that 100-room expanse, and it is in that mess that our keys, or the name of that song from the 1960s, might get lost. 

Software for AI, the Chatbot, is layered, with each layer handling a portion of an analysis or problem. When an outcome is deemed successful or appropriate at any level, that decision is "transmitted down through all the levels." Successful outcomes "become stronger and unsuccessful become weaker." The machine "learns" through this trial and error. And it has a better chance of retention of those paths because it is repetitious. It is also singularly focused.
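That "stronger/weaker" idea can be sketched in a few lines of Python. This is a toy illustration of a single connection weight being nudged by trial and error, not how real multi-layer networks are actually trained (they adjust billions of weights at once via backpropagation); the inputs and threshold here are invented for the example.

```python
# Toy sketch of "successful outcomes become stronger and unsuccessful become
# weaker": one connection weight is strengthened slightly on a correct guess
# and corrected on a wrong one. (Illustrative only.)

weight = 0.0  # strength of a single connection

def guess(x):
    """Fire (1) if the weighted input crosses a threshold, else stay quiet (0)."""
    return 1 if x * weight > 0.5 else 0

# The lesson to learn: inputs of 1.5 and 2.0 should fire; 0.2 and 0.4 should not.
trials = [(0.2, 0), (1.5, 1), (0.4, 0), (2.0, 1)] * 50

for x, target in trials:
    error = guess(x) - target
    if error == 0:
        weight += 0.001            # success: strengthen the path slightly
    else:
        weight -= 0.1 * error * x  # failure: adjust toward the target

print(guess(0.2), guess(1.5))  # after the repetition: 0 1
```

The repetition is the point: no single trial teaches the machine anything, but hundreds of small strengthenings and weakenings leave it reliably telling the two cases apart.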

In this, some will immediately remember Joshua striving to obtain the launch codes in WarGames (MGM 1983). The WOPR (the central computer tasked with managing the nation's defenses) seeks the ability to launch a preemptive thermonuclear strike. The implications are ominous. Only through the efforts of our teenage protagonist was the world saved in that outing.

The WarGames protagonist, David (Matthew Broderick), does not beat the machine by superior intellect. He beats the machine through its own conclusion. As the machine begins playing tic-tac-toe, it finds that most matches end in a draw. Winning that game is premised on an opponent's inattention or misdirection; between careful players, the result is almost always a draw. Becoming frustrated with that outcome, the WOPR switches to scenario potentials for nuclear strike and, with ever-increasing speed, learns that every potential scenario is similarly futile. The machine concludes the "only winning move is not to play."

Back to the 60 Minutes interview: Dr. Hinton is the winner of the Turing Award, often referred to as "the Nobel Prize of computing." Alan Turing was a British mathematician who became involved in multiple fields. He is said by Britannica to have contributed significantly to
"mathematics, cryptanalysis, logic, philosophy, and mathematical biology and also to the new areas later named computer science, cognitive science, artificial intelligence, and artificial life."
In 1950, long before the personal computer, the internet, or smartphones, Turing proposed the Turing Test. Britannica describes this as a foundation for distinguishing rote repetition from intellect per se. Anyone can memorize and regurgitate a litany of facts or figures. Ask anyone who was educated in the last 200 years. We all memorized and regurgitated much, from multiplication tables to historical dates, and beyond. In fact, memorization has long preempted investment in a better path: learning to think.

The Turing Test is significantly subjective:
"a remote human interrogator, within a fixed time frame, must distinguish between a computer and a human subject based on their replies to various questions posed by the interrogator."
Thus, there is the challenge of who the interrogator is, and how tuned that person is to the task. And, there is the challenge that this test relies only on the formulation of increasingly human-like responses, replete with the commonality of humanity: error and mistake. Whether a machine passes the Turing test may be as complex as whether someone in your life is actually your friend. 

Dr. Hinton, having alerted us to the complexity of us (100 trillion connections), then admits that we do not "really understand" how neural networks function. That is true whether the network is in our brain (100 trillion) or in a computer (1 trillion). That functionality remains a subject of debate and investigation. Some will immediately question how humans created a neural network, AI, without knowing how networks work.

Dr. Hinton explains that instead of designing such a human-copy network per se, the creators of AI wrote an algorithm. A "learning algorithm." The algorithm created its own neural networks and is using them. They are an evolutionary effect of their own trial and error. Thus, there is some potential for their networks to function as ours do, but also the probability (not mere coincidence) that they will evolve and function differently than we do.

This is worrisome. That they will likely be different will make our comprehension more challenging (think how two languages would be harder to study or use than one). Our conclusions about the how and why of one may or may not translate into a better foundation of knowledge about the other. This is perhaps illustrated in the ridiculous hate texts in recent news. There are errors in grammar and syntax. Perhaps the original text was written in another language and did not translate well? 

The electronic versions, the bots, will be evolutionary. These systems will "learn from all the novels ever written." They will understand and process knowledge on a far more rapid and thorough level than we can hope to. Therefore, Dr. Hinton contends that these machines will be both manipulative and convincing. They will "know a lot of stuff and know how to use it." They will influence and persuade us. As we have already learned from social media, that may be for either good or bad. 

A small bit of research into the foundations of AI can peel back a great deal of the onion. In The X-Files (20th Century, 1993-2018) one of the main characters reminds another "the truth is out there, but so are lies." Many have quoted that line over the years, and some have adapted (co-opted) it instead. One adaptation is "the truth may be out there, but the lies are inside your head."

The truth is that humans have created algorithms that function as logic loops. They emulate our own neural networks but do not yet replicate them. These programs test a potential solution to a problem and gain statistical knowledge, advantage, and predictability. Even Dr. Hinton concedes that Generative AI is merely striving to predict the most likely "next" word when it writes something. It is doing so with statistical knowledge. In this, it lacks soul, sensitivity, and nuance. It lacks humanity but makes up for that with speed and ease of use.
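That "most likely next word" idea can be shown with plain statistics. The following is a toy bigram counter over a tiny invented corpus, not how a real Chatbot works (those use neural networks trained on vast text), but the predictive principle is the same: count what tends to follow what, then pick the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy sketch of predicting the most likely "next" word from statistics alone.
corpus = (
    "the truth is out there but so are lies "
    "the truth may be out there"
).split()

# Count how often each word follows each other word (bigram counts).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))   # 'truth' -- it followed 'the' every time
print(predict_next("out"))   # 'there'
```

The prediction carries no understanding of truth or lies; it is purely a tally of what came before, which is exactly the point about statistical knowledge standing in for soul and nuance.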

Nonetheless, the truth is that this is no different from what humans are doing when we string together a group of words. We do so with the belief that those are completely intellectual decisions. But the fact is likely that we have tried various phrases, combinations, and even words that did not work as we intended. We have likely learned not to use various words or combinations based on prior feedback.

When Steve Martin asked his teacher if he could "mambo . . . to the banana patch" (A Wild and Crazy Guy, 1978), he got no permission to leave the classroom. He undoubtedly did not use that phrase the next time he needed the restroom. He learned, eliminated, and refined.

We have learned, over time, to put thought into context. We have come to understand how to turn a phrase, where, and with what emphasis. Certainly, some of you are better at this than others. Some have a talent for it and others build and gain through struggle. Why should algorithms be any different?

For now, even great brains like Dr. Hinton concede that we do not know how neural networks function. What we do know is that with human conception, and extremely efficient evolution, computers today are doing what our last generation saw as science fiction. They are establishing and refining neural networks.

They may be malevolent or benevolent. They may be efficient or inefficient. They may be easy to understand or beyond our comprehension. In this moment, the great brains are both impressed and concerned. And what we know for sure about these new tools, replacements, or overlords was best said by Carol Anne Freeling (Heather O'Rourke): "They're here" (Poltergeist, Warner Bros. 1982).


Prior posts on AI and Robotics
Attorneys Obsolete (December 2014)
Chatbot Wins (June 2016)
Nero May be Fiddling (April 2017)
The Coming Automation (November 2017)
Tech is Changing Work (November 2018)
Hallucinating Technology (January 2019)
Robot in the News (October 2021)
Safety is Coming (March 2022)
Metadata and Makeup (May 2022)
Long Term Solutions (June 2022)
Intelligence (November 2022)
You're Only Human (May 2023)
AI and the Latest (June 2023)
Mamma Always Said (June 2023)
AI Incognito (December 2023)
The Grinch (January 2024)
AI in Your Hand (April 2024)
AI and DAN (July 2024)
AI is a Tool (October 2024)
Rights for the Toaster (October 2024)
Everybody Wake Up! (October 2024)
First What is it? (November 2024)
X-Files or Poltergeist? (November 2024)