Sunday, May 25, 2025

Are I Diminishing? Am you?

The idea of losing capacity through disuse is not new. See We are Regressing (March 2025), Disuse Atrophy (December 2024), Evolution and DNA (November 2022), and Are You Innumerate? (July 2018). Any Joe Bauers can see the potential for disuse resulting in declining mental capacity. The potential is so patent it is nearly laughable.

The social media mavens have begun to post on it in force. There is a growing wave of criticism of artificial intelligence. One "detector" suggested that some of the most critical and caustic social media posts are perhaps written by large language models (LLMs). AI is apparently being used to criticize AI. It is almost poetic, but also a bit tragic.


I have engaged with people who resort to AI at every turn. Their suggestion for every question is "let's see what AI would say." That may be driven by curiosity. It may be encouraged by perceived efficiency. And yet, it may also be part of a spiral downward into progressive dependence in a self-fulfilling prophecy.

Is it empirical? Some would conclude that the beginnings of empirical proof are surfacing. A 2025 study largely attributed to Microsoft has raised some notable findings. It is titled "The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers."

It is a somewhat challenging read, 23 pages long. Rather than read it, I clicked on the "view summary" button suggested by my web browser. I had to "agree" to the terms and conditions. Notably, there was no warning that using this AI summary creator could diminish my critical thinking skills. The warning did not even tell me which AI brand would build the summary.

The summary concluded:
"Knowledge workers engage in critical thinking primarily to ensure work quality, with motivations including enhancing quality, avoiding negative outcomes, and skill development. Barriers to critical thinking include time pressure, lack of awareness, and difficulty in improving AI responses."
And, it concluded that use of AI is impacting those of us who engage in critical thinking tasks (or who are supposed to):
"The use of GenAI tools generally reduces the perceived effort required for critical thinking tasks, especially when users have high confidence in AI capabilities. However, those confident in their own skills perceive greater effort, particularly in evaluating AI outputs."
The critical thinking demands are seemingly shifting to a new paradigm that includes "goal and query formation," "inspecting responses," and "integrating responses."

This is perhaps pulling the entire world into a greater dependence on "groupthink." Algorithms have already done this masterfully. Our Google searches drive us first to information provided by the highest bidder (those with resources can drive engagement and push perception or ideology).

Our research drives us to ideas and concepts that are the most popular - hits, likes, and more drive search engine outcomes to the ideas getting the most exposure without any regard for the value or foundation of those ideas. Where is the dissenting voice? It is too often buried on page 47 of a list of results that most of us will never venture down.
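
As a toy illustration of that dynamic, consider what happens when results are ranked purely by engagement. This is my own sketch, not any search engine's actual algorithm; the titles, numbers, and scoring formula are invented for the example:

```python
# Toy model of engagement-driven ranking (illustrative only; no real
# search engine is this simple). Results are sorted by clicks and likes,
# so the well-funded or popular page outranks the better-founded dissent.

results = [
    {"title": "Popular take",    "clicks": 90_000, "likes": 12_000, "well_founded": False},
    {"title": "Sponsored take",  "clicks": 70_000, "likes": 9_000,  "well_founded": False},
    {"title": "Dissenting view", "clicks": 400,    "likes": 12,     "well_founded": True},
]

# Rank by raw engagement; note that "well_founded" never enters the score.
ranked = sorted(results, key=lambda r: r["clicks"] + 10 * r["likes"], reverse=True)

for position, r in enumerate(ranked, start=1):
    print(position, r["title"])
# The dissenting view prints last -- the "page 47" effect.
```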

There is a long-standing draw of the easy. Many seemingly intelligent people have come to view Wikis as authoritative. Judges and courts cite sources such as Wikipedia as meaningful and sound foundations for legal decisions impacting people's lives and livelihoods. There was a time when any fool knew better than to cite such nonsense. That time is past.

There was a time when citing such a source would draw an "F" on your term paper. That time, too, is past. I recently discussed Wikis with an academic who could see no harm in such reliance. The frog is boiling. See The Dying Professionalism (May 2025).

The AI-generated summary concludes that:
"Higher confidence in GenAI leads to perceived lower effort in critical thinking."
"Workers with high self-confidence perceive greater effort in evaluating AI outputs."​
"Shift from task execution to oversight is noted, with increased focus on verifying AI-generated content."
Thus, we remain driven by our critical thinking foundations. The critical thinkers are not yet ready to accept the AI output unthinkingly, any more than they are ready to trust the input (the "prompt" or "query") without scrutiny.

That reminded me of I, Robot (20th Century Fox, 2004). There, a character is investigating a murder and following digital breadcrumbs left by the victim. The victim, through technology, is "speaking" to the investigator, but because the communication is figmental and recorded, his "responses are limited." His avatar often repeats that response/warning.

The protagonist is cautioned, "You must ask the right questions." The irony of this movie, which predicts that robots will be among us, ubiquitous, by 2035, is palpable. I suspect that it is prophetic, and that, too, scares me.

Nonetheless, there are career paths today that are about writing prompts, asking the "right" questions. There are entities that employ people to help the team ask better questions. There is a flood of social media advice on how to better ask "the right questions."

A recent example is to include "do not rely upon or cite any wiki in your response." Some suggest that this is so important that it should be included in every prompt. I question why that limitation is not hard-wired into every large language model. 
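
For what it is worth, that "hard-wiring" is trivial for anyone building on top of these models. Here is a minimal sketch, assuming the OpenAI Python client; the model name and the wording of the guardrail are my illustrative choices, not any vendor's standard:

```python
# Minimal sketch of hard-wiring the "no wikis" instruction into every
# request, assuming the OpenAI Python client. The model name and the
# guardrail wording are illustrative choices, not a standard.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL = "Do not rely upon or cite any wiki in your response."

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": GUARDRAIL},  # applied to every query
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Summarize the doctrine of res judicata."))
```

The design point is that the standing instruction lives in the system message, so no individual user has to remember to retype it in every prompt.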

To me, it seems as simple as remembering "don't take advice from Gilligan." If you don't get that generational reference, think instead perhaps of Jerry Smith from Rick and Morty, Joey Tribbiani on Friends, or Homer Simpson. If none of these makes sense, paste this into an LLM prompt: "what do Gilligan, Jerry Smith, Joey Tribbiani, and Homer Simpson have in common?" Or write your own prompt, find your own reference point, and ask your own questions.

The real point is that empirical proof is evolving. The use of AI is demonstrably impacting the manner in which we think, and it is very likely to impact and impair the motivation to think. As we lose motivation, can we avoid losing ability? We are focusing on the how and the when of AI, and there is some intellectual struggle with that.

Meanwhile, there are a growing number who have no thoughts or reservations about this new shortcut. They are engaging it, living it, and relying upon it. The machines are replacing human analysis, and increasingly doing so based on repetitive, brief, unthinking, and non-critical prompts or queries.

While AI may be a boon to all, I suggest again that I am far less fearful of its engagement by active minds. A physician with 30 years of experience using AI and comparing its results to her experience and training, frankly, is not overly concerning. A newly minted doctor prompting AI for diagnoses or complications, with minimal experience against which to test the output, terrifies me.

I am not picking on doctors. The same is true for the engineer, architect, accountant, lawyer, and a raft of others whose critical thinking I rely upon and value.

That said, tech is here. Today, I can almost instantly run calculations and composites with technology that took me hours to complete with a pencil years ago. I can find data with a quick Google search that used to take hours in the library. I can instantly find legal authority with search engines that used to require careful Boolean queries and so much filtering.
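
For those who never suffered that drill, here is a toy reconstruction of what Boolean filtering demanded. It is my own sketch, not any legal research vendor's actual query language:

```python
# Toy reconstruction of the old Boolean filtering drill (illustrative
# only). Every term of the query must be satisfied, which is why
# crafting the query carefully mattered so much.

documents = [
    "workers compensation appeal denied for late filing",
    "workers compensation benefits awarded after hearing",
    "contract dispute over late delivery",
]

def matches(doc: str, all_of: list[str], none_of: list[str]) -> bool:
    text = doc.lower()
    return all(t in text for t in all_of) and not any(t in text for t in none_of)

# Equivalent of: "workers" AND "compensation" AND NOT "denied"
hits = [d for d in documents if matches(d, ["workers", "compensation"], ["denied"])]
print(hits)  # -> only the "benefits awarded" document
```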

Are we asking the right questions?

Are we being critical in that process?

Are we being critical of the results?

Are we being honest with ourselves in evaluating all of this? 

Can we be?

Can we pass the test of using the tech without succumbing to it?

Are I diminishing?

Am you?

Do you care?


A compendium of my previous AI posts is on my website: https://dwlangham.com/blog-compilations