WC.com

Tuesday, January 14, 2025

Should we Pause?

A fair number of “experts” have announced their opposition to artificial intelligence. They are sincerely convinced that the perils of this technology are simple and terrible. It is their contention that we should pull back from this technology before it destroys us all. They believe we should pause, take a time-out, and consider the implications. Some even think that we need to put up some guardrails. See The Eeeeyew AI Says What? (December 2024).

Sky News reported in 2023 on a letter published by "the non-profit Future of Life Institute and signed by more than 1,000 people." Signers included Elon Musk, Steve Wozniak, Emad Mostaque, Tristan Harris, and Yoshua Bengio. The signatories might be characterized as accomplished and bright, but that would be a gross understatement. None of them has ever sought my advice, but they are nonetheless a reasonably brilliant bunch.

The challenge with the ideology that "we" should pause is that there is no "we." This rock is inhabited by some 8.2 billion people divided along a variety of faults, including culture, continent, country, allegiance, government ideology, and more. The supposition that "we" might collectively and cooperatively do any one thing is borderline preposterous.

Time and again, "we" have agreed to step back from technology. A prime example was the Nuclear Non-Proliferation Treaty in 1968. Britannica says that at the moment it was signed perhaps six countries had such weapons. There was consensus on stopping the evolution and spread. Today the list nonetheless continues to grow, and more countries still aspire to such weapons. This is not because everyone subscribes to what "we" want; some instead are driven by what "they" want.

An interesting article was published recently regarding a gun perfected by the Chinese People's Liberation Army. It is called the "Metal Storm." The story has a catchy MSN headline: "China’s ‘metal storm’ gun fires 450,000 rounds per minute, claim scientists." That volume of projectiles is for each barrel of the gun, and the gun may have five or more barrels. The bottom line is an amazing flow of bullets.

For comparison, the very accomplished and astonishing Phalanx ship-defense system deployed on U.S. Navy ships can fire 4,500 rounds per minute. That system has been effective in defending a variety of U.S. naval vessels around the planet from a multitude of risks and attacks. The new Chinese system is 100 times more prolific.

In 2020, barges destroyed the bridge here in Paradise. See The Bridge that Isn't (January 2021) and If You Were Half the Bridge I am (June 2021). After that event and the incredible disruptions, some in this Navy town advocated the installation of a Phalanx system to similarly protect the Paradise Bridge from miscreant barges. Though those suggestions were facetious and humorous, they were nonetheless complimentary of the Phalanx's capability.

But that capability (4,500 rounds per minute) pales in comparison to the new Chinese tool, which, equipped with multiple barrels, is capable of firing millions of rounds per minute from a single vehicle. The name "storm" is both apropos and intimidating. How did someone get so far ahead in the gun business?

The MSN article explains that the weapon was "initially proposed by Australian inventor Mike O’Dwyer in the 1990s." The original had "a 36-barrel test system capable of firing at an astonishing rate of 1 million rounds per minute." The "US Department of Defense . . . partnered with him," but eventually abandoned the project. One might suspect or suggest that we "paused." 

Nonetheless, "Beijing has sustained its investment in this technology." Beijing has elected not to "pause" and is now producing weapons that are immensely dangerous, threatening, and capable. This advancement threatens the world's balance of power, as there is discussion of how the new gun might destroy missiles and other armaments. 

The fact is that there are advancements in technology and its applications (good and not-so-good) every day. Some progress, some pause, and there is competitive evolution and revolution in our world. This is persistent in various technologies and endeavors and our world evolves. 

There is no "we." If this country or that country elects not to pursue any evolution or revolution in technology or science, that will not preclude or even deter other countries. Is the right solution to pause AI in the United States, the European Union, Great Britain, or elsewhere? The only effect of such a pause may be to enable and embolden others who may have less-than-benevolent intentions for their achievements and advances. 

And some believe that AI will chart its own course despite our intentions or plans. There is an element of AI that reflects the ability of computers to learn and perhaps to achieve sentience. Going forward, it is perhaps imperative that "we" remember that the "we" who make decisions about its future may not all be biological beings.

There might be some hope that there could be a "we." Perhaps the world in its entirety might one day learn "to sing in perfect harmony," like an aspirational soft drink commercial. But, is that realistic? Is there any real potential to stuff the "AI genie" back into the bottle with a "pause?" The answer is simply "no."

That said, might there be room for caution, contemplation, and even regulation (territorial or broader through treaty)? Certainly. Is there time to discuss best practices, challenges, and complications? Certainly. Is there anything regarding AI to be worried about? Certainly. But none of that is advanced by wishing, hoping, or pausing.

I would suggest that there is little potential for a "pause." The competitive and complex interrelationships among the 8.2 billion inhabitants and all their various schisms, categories, and conglomerations do not lend themselves to "we" accomplishing anything. No, "we" should not pause.

Instead, yet again, we find ourselves in an arms race no different from the nuclear age. We will strive to build better, faster, and more proficient AI tools. In parallel, there will be a race for better, faster, and more efficient chips and circuitry. Others will do the same, even as we all strive to build better tools to detect, control, and mitigate the potential harms or shortcomings. We will act, react, invest, and perhaps lament.

Those who act in their own best interests will strive to maximize AI benefits and avoid detriments. Some will build programs that make fake pictures and others programs that will detect or preclude them. Some will build programs that write term papers and others that detect them. There will be investment, aspiration, and progress. The arms race is on, and the sooner "we" see that the better.

Prior posts on AI and Robotics
Will the Postal Service be our Model for Reform? (August 2014)
Attorneys Obsolete (December 2014)
How Will Attorneys (or any of us) Adapt? (April 2015)
Salim Ismail and a Life-Changing Seminar (May 2015)
The Running Man from Pensacola, Florida (July 2015)
Will Revolution be Violent (October 2015)
Ross, AI, and the new Paradigm Coming (March 2016)
Chatbot Wins (June 2016)
Robotics and Innovation Back in the News (September 2016)
Universal Income - A Reality Coming? (November 2016)
Artificial Intelligence in Our World (January 2017)
Another AI Invasion, Meritocracy? (January 2017)
Strong Back Days are History (February 2017)
Nero May be Fiddling (April 2017)
The Coming Automation (November 2017)
Tech is Changing Work (November 2018)
Hallucinating Technology (January 2019)
Inadvertently Creating Delay and Making Work (May 2019)
Artificial Intelligence Surveillance (August 2020)
Robot in the News (October 2021)
Safety is Coming (March 2022)
Metadata and Makeup (May 2022)
Long Term Solutions (June 2022)
Intelligence (November 2022)
You're Only Human (May 2023)
AI and the Latest (June 2023)
Mamma Always Said (June 2023)
AI and the Coming Regulation (September 2023)
AI Incognito (December 2023)
The Grinch (January 2024)
AI in Your Hand (April 2024)
AI and DAN (July 2024)
AI is a Tool (October 2024)
Rights for the Toaster (October 2024)
Everybody Wake Up! (October 2024)
First What is it? (November 2024)
X-Files or Poltergeist? (November 2024)
Is Gartner Helpful on AI? (December 2024)
The Eeeeyew AI Says What? (December 2024)
Is AI bad or just Scary? (December 2024)
Layers and Layers of What? (January 2025)
Wayback Machine (January 2025)