There is a fine proverb, "the proof of the pudding is in the eating," from Don Quixote by Miguel de Cervantes Saavedra (1605). So many years ago, this was the expression used by the loyal Sancho Panza to explain, essentially, that it is the actual experience that provides the proof of an outcome.
Conjecture aside, the sentiment is difficult to argue with. Of course, the point can be made that any outcome is anecdotal, transient, or unreliable. An outcome is not always definitive proof of more than the incident directly involved. This is why researchers and scientists repeat tests, attempt alternative inquiries, and argue about cause and effect. One result does not usually make an irrefutable conclusion.
Thus, there is perhaps some support for concerns about the threat of artificial intelligence in the recent Your Brain on ChatGPT, a 206-page study published in June 2025. The outcomes and the premises are both fascinating.
In a fit of "confirmation bias," I might remind you that I have been concerned about disuse atrophy for some time. See Disuse Atrophy (December 2024). That post mentions Idiocracy (20th Century 2006), as do some other posts, such as Sharing a Drink Called Loneliness (May 2023), Are You Innumerate (July 2018), and We are Regressing (March 2025). It seems axiomatic, forgive the expression, "use it or lose it."
Well, a group has now "explore(d) the neural and behavioral consequences of LLM-assisted essay writing." The participants were divided into groups and asked to write three essays, with each group allowed different resources:
- LLM (access to a Large Language Model, a form of Artificial Intelligence, "AI")
- Search Engine (access to a search engine)
- Brain-only (no tools)
After the three "sessions," in a fourth round, the groups remained intact, but the tool access of some was altered:
"LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM)."
The participants' "cognitive load" was measured with electroencephalography (EEG). In addition to this measure of brain engagement, the resulting fourth essays were analyzed by Natural Language Processing (a type of AI), human instructors, and "an AI Judge."
The results were troubling:
"EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity."
Brain engagement was diminished most significantly in the people who had been afforded the greatest help, the LLM, in the first three sessions. The results were noted in "reduced connectivity," "under-engagement," weaker "memory recall," and lessened engagement of the "occipito-parietal and prefrontal areas."
In a nutshell, those who exercised their brains in the first three sessions were more likely to display mental ability and agility in the fourth round. The conclusion: "While LLMs offer immediate convenience, our findings highlight potential cognitive costs."
But wait, perhaps there is more.
When studied over time, the "LLM users consistently underperformed at neural, linguistic, and behavioral levels." This, the researchers conclude, "raise(s) concerns about the long-term educational implications of LLM reliance and underscores the need for deeper inquiry."
This is of particular interest because evidence is mounting that a generation has abandoned thinking and studying. They are using AI, without inhibition or regret. An "engineering student at UCLA" reportedly "pulled out his laptop and proudly displayed how he used ChatGPT to complete his final project." There was no apparent reticence or fear of repercussion.
Afterward, the student denied that his use of the AI paradigm was cheating. He explained that he had deadlines and competing priorities, and therefore "used ChatGPT to finish strong." Will that matter? Some suggest this student may "struggle to find a job after graduation," and there is mention of doubt regarding "how much he actually learned while earning his degree."
There are implications and questions. Who wants their doctor to rely only on what the internet says (search engine) or the conclusions of AI? In that vein, anyone can Google their symptoms and likely find a page with an answer comprehensible to any lay person (someone who is not a doctor). Other than getting a prescription, what is the benefit of seeing the doctor if the internet is all they know?
This is likely a more difficult question if the doctor skips the analysis necessary to comprehend the Google result(s) and instead simply laps up the spoon-fed AI-LLM conclusions gleaned from the vast array of internet data, including the collective wit and wisdom of Wikipedia. See Are I Diminishing? Am You? (May 2025).
Would you want a doctor who used AI instead of studying? An engineer, an accountant, a lawyer? There were various comments noted regarding the UCLA gentleman and his achievement of graduating with the help of AI. Would you agree that he is as prepared to help you solve problems as any non-AI-engaging engineer? Some would argue he is demonstrably more prepared due to his AI savvy. Others will disagree.
There will be more studies. One study does not often answer all perspectives, concerns, or questions. That said ...
In the end, it is possible that we will be lulled into reliance and eventually become obsolete ourselves. I cannot even remember anyone's telephone number anymore; it is all in my phone. As comedian Kathleen Madigan once described it, "My brain is that phone." That is not because we were told to dump that information. We were given convenience, we forwent using our brains for that task, and we lost the ability (I do remember my phone number from 6th grade, but not those I have had since; that is odd).
What else will we lose, and how fast? Well, Sancho Panza, that is indeed a worthy question of proof. Where is the pudding?