Artificial intelligence is slowly insinuating itself into our daily lives. Some notice; others appear oblivious, or at least uninterested. One key ingredient in accepting change is recognizing its presence in our day-to-day routines. Yet even as we begin to recognize it around us, few of us appreciate the full involvement or impact of these tools in our lives.
In 2023, Pew Research began studying Americans' reactions to "artificial intelligence (AI) in daily life." It concluded that Americans had begun to readily identify examples like "customer service chatbots and product recommendations." Despite that, only a minority (30%) could "correctly identify all six uses of AI asked about in the survey."
The results of that study, only two years ago, memorialized a significant level of misinformation or misunderstanding of AI. There were significant differences across demographic groups and educational strata. Nonetheless, the demonstrated level of awareness and engagement is sobering.
Many times, as I have written about AI and robotics, I have enjoyed the skepticism, jeers, and dismissiveness of my peers. Many have explained their intention to retire from the workforce before these innovations appear or at least before they predominate. Those who think they can outrun the oncoming technology wave are entertaining and to some degree comforting.
An old joke about two hikers and a bear has entertained us to the point that it has become folklore. "I don't have to outrun the bear" has become shorthand that carries meaning without the joke even being told. The fact is that humans have competed with each other for thousands of years. We are competitive by nature, by necessity, and by desire.
You do not have to outrun the AI; you merely need to outrun the person next to you. There will be opportunity, vocation, and reward in the future. AI will not take all the jobs. But there will be two types of workers and professionals in five years: those who use AI, and those who wish they had read this post and listened.
AI is all around us. The "generative" tools are creating content that used to be made by humans. They are writing articles, plagiarizing content, and more. We will see evidence of them in our daily lives, yet it will be their failures that stick with us. I read a Buzzfeed story on Yahoo last week that included pictures. The following are screenshots that may make you think. For the curious, the man in these photos used to be the Vice President of the United States. He was, and perhaps is, somewhat famous.
The photos are not noteworthy, but the writing and editing are. Note that the person is not identified. Possibly, a human working for a news organization could not identify the former Vice President and consciously decided to label a photo "person in a suit holding a microphone" or merely apologized to their boss with "I'm sorry, I can't identify or describe people in images." That is an explanation, but not a probable one.
More probably, the world of journalism has forgone the human writer and editor and elected instead to have a human put some data into a Large Language Model (LLM) and ask for its assistance. The LLM has generated an article describing its best guesses about what such an article would express or imply. That has then been published under a byline that misrepresents that it was written by a human somewhere.
I pause for effect.
The reader, of course, says "but surely the human proof-read." Or, "but surely it was written by a human and merely refined by technology." I protest, and "Don't call me Shirley." (Airplane, 1980, Paramount).
The evidence here is compelling. Either a human who wrote this piece could not identify the former Vice President and consciously elected not to research ("Hey, Carl, do you recognize this guy?"), or no human ever wrote or edited this article. We must accept either that a brain-dead human wrote this in blissful ignorance, or that it was written by an LLM and appropriated by a brain-dead human who did not even notice that the LLM could not identify the "person ... holding a microphone."
Back to my eschatological predictions above - yes, "it's the end of the world as we know it" (R.E.M., Sound Emporium, 1987), and to be clear, "I feel fine." R.E.M. makes a fine point here, perhaps too fine for the masses. Notice it is not the "end of the world," but merely the end of the world "as we know it." This is not armageddon, apocalypse, doomsday, or "the end of days."
This is not "Old Testament... wrath of God ... fire and brimstone ... rivers and seas boiling ... forty years of darkness, earthquakes, volcanoes ... the dead rising from the grave ... human sacrifice, dogs and cats living together ... mass hysteria." (Ghostbusters, Columbia, 1984).
No, this is an age of enlightenment. The LLM is not here to get you; it is here to help you. It is a tool. Merely a tool. See A Fool With a Tool (January 2024). You cannot blame the tool for your failures and shortcomings. Similarly, before you get too far down that road, you cannot claim its effects or benefits as your own brilliance (see Who's Harry Crumb?, Columbia, 1989).
Thus, you find yourself at a crossroads. You may choose to proceed, knowing that AI, robots, and other challenges lie in your path. Or, you may turn now to retirement, or some similar obscure corner (the world will always have places free of AI, just as they are currently largely free of I). Turn if you will and avoid the path ahead. As they say, "You must choose, but choose wisely." (Indiana Jones and the Last Crusade, Paramount, 1989).
The world will always need people capable of using tools, recognizing failure (recognizing the former VP), and steering, evaluating, and managing. Be a part of the path ahead, or pull to the side and allow us to pass. The choice is yours. Stay sitting indecisively in the road and someone is liable to hit you sooner rather than later. Get with the moment; make a decision.
There is a list of prior AI and robotic articles on my website, www.dwlangham.com.