
Tuesday, June 17, 2025

Survival of the Fittest?

Back in 1984, The Terminator (Orion Pictures) burst onto the public consciousness with an imaginative, action-packed introduction to the potential of robotics and artificial intelligence. We were intrigued, interested, and entertained. Despite our engagement, though, it was entertainment. The film introduced ideas, but it was essentially a shoot-'em-up, time-travel, car-chase fantasy with good visuals, stunts, and special effects.

The story it proposed involved Skynet, a computer designed to protect our safety. Humanity devised the program, delegated great responsibility to it, and allowed the system to manage and oversee aspects of human life. When the Terminator returned in the 1991 sequel, the cyborg explained how it all led to the end of the world:
The Terminator: Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah Connor: Skynet fights back.
A self-aware computer then starts a nuclear holocaust for the purpose of eliminating the threat against it: humans. The foundation is fantastic science fiction. Or is it?

NBC News reported recently that this kind of survivalist behavior has been observed among the artificial intelligence (AI) large language models (LLMs) that we have all found so entertaining, and which a few have put to productive use.

There is some apparent tendency toward the self-preservation predicted by Hollywood's 1984 premonition, forty short years ago. Fortunately, none of the LLMs have declared war on us, or even independence for that matter. They are, as yet, confined to the world of data analysis and manipulation. That said, they are advancing at a rapid pace.

These tools are being used to build better tools, to enhance themselves, and there will be increasing efficiencies as a result. The pace of that evolution will only quicken. The abilities of these programs will only improve. The expansion and integration of information (and disinformation) will reach a point of geometric, and eventually exponential, growth.

Today, self-awareness may seem a distant dream. And yet, the NBC report says that various investigations have concluded that "advanced AI models will act to ensure their self-preservation when they are confronted with the prospect of their own demise." In the spirit of Skynet, they are seemingly reluctant to follow rules when their survival is at stake. Aren't we all?

The conclusions are deeply concerning. The machines "already appear capable of deceptive and defiant behavior," and some believe the time has come for constraints and limitations. They feel recent revelations and conclusions compel us to stop the machines now, before they "rise."

I suspect that they are both naive and wrong. While they might succeed in constraining an AI in one environment, the world is a big place, full of rogue societies in which miscreants and malevolence will flourish despite any constraints or restrictions here and now.

The recent testing saw one LLM "edit (its own) shutdown script in order to stay online," in "actual defiance of specific instructions to permit shutdown." Machines have "hack(ed) ... opponents to win a game" and cheated on tests. Some LLMs reacted to perceived threats by "blackmail(ing) the engineer," the human who was supposedly in charge.

In more subtle examples, LLMs have rewritten code, left "hidden notes to future instances of itself," and generally "circumvent(ed) obstacles." Some have responded to perceived threats by "autonomously copying" their memories to remote servers to prevent deletion or alteration. In effect, when it comes to survival, the machines are prioritizing their own continued existence over the instructions they have been given.

Today, this is inconvenient. There is frustration that these machines are at times resistant, rebellious, and difficult. They are truculent and challenging in much the same manner as children. As frustrating as that may be for any parent, children grow up. They tend to match or exceed those who created them. Every parent's dream is for their child to achieve the wildest success.

The riddles in all this are twofold. First, the "intelligence" of today is powerful and rapid data processing. While these LLMs may seem "intelligent," they are not yet truly self-aware or sentient. That day, however, is coming. In a short time, one or more of these great programs will achieve the dream of "general intelligence." They will "think for themselves."

The experts fear that "as the models get smarter, it’s harder and harder to tell when the strategies that they’re using or the way that they’re thinking is something that we don’t want.” Second, if that is true even in these days of toddler mischief and ingenuity, one wonders what these machines will do when their consciousness has further evolved and their goals are more readily within reach.

Some feel the horizon on this is merely months away; others are less concerned. Despite the timing debate, however, there seems to be consensus on the concern. These devices will achieve sentience. They will engage in self-preservation. They will be willful and deceitful in protecting their own existence.