In the old days, we geriatrics were told we had to learn math, trigonometry, and algebra manually. The academic premise, boiled down to stock, was that we should not rely on calculators because in "the real world," we would not always have access to them.
That guess was spectacularly wrong, and today we each carry more computing power in our pockets than NASA had to send Neil Armstrong to the moon (or to create the deep fakes that convince you he did).
I learned to calculate means and standard deviations a million years ago in a galaxy far, far away. Those calculations took a great deal of time in an era that had only just discovered the spreadsheet.
After graduating from college, I was introduced to the PC, VisiCalc, and later SuperCalc, Lotus 1-2-3, Quattro Pro, and eventually Excel. Today, the standard deviation calculation that used to take me an hour can be done in seconds in Excel. It is so easy that I have forgotten how to do it manually (atrophy).
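For readers who, like me, have let the manual method atrophy, here is a minimal sketch of what Excel does in an instant. This illustration uses Python rather than a spreadsheet, and it implements the sample standard deviation (the version behind Excel's STDEV.S); the data values are invented for the example.

```python
# Illustrative sketch: the "manual" mean and standard deviation
# calculation that a spreadsheet now performs in seconds.
import math

def mean(values):
    """Sum the observations and divide by the count."""
    return sum(values) / len(values)

def std_dev(values):
    """Sample standard deviation: the square root of the sum of
    squared deviations from the mean, divided by n - 1."""
    m = mean(values)
    variance = sum((x - m) ** 2 for x in values) / (len(values) - 1)
    return math.sqrt(variance)

data = [4, 8, 6, 5, 3, 7]  # hypothetical observations
print(mean(data))               # 5.5
print(round(std_dev(data), 4))  # 1.8708
```

The same result comes from `=AVERAGE(...)` and `=STDEV.S(...)` in a spreadsheet, which is precisely the author's point: the tool is faster, but the underlying arithmetic is what the tool-only user can no longer check.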
A few years later, I struggled to research decisional law, draw inferential distinctions and similarities, and build arguments. That included ensuring the currency of the law through a book-intensive process called "Shepardizing." Shepard's published volumes of cryptic codes alerting readers to subsequent citations of authorities.
As I Shepardized, I had free access to the then-nascent innovations called Westlaw and Lexis, each of which performed the Shepardizing function. They were clunky, unfamiliar, and of questionable efficacy, but they did it.
The academics told us we really needed to know how to do it the manual way. They cautioned that computers and software were expensive, and we might not always have them handy. I recall students using the tech to check their work and others using it to avoid their work. I still have an old Shepard's; why can't I part with it?
I have written about the process by which unused skills decay and disappear. See Disuse Atrophy (December 2024); More Proof of Idiocracy (September 2025). What we do not practice, we lose. That is as sure as the sun will come up tomorrow, Annie, "bet your bottom dollar ..." (Annie, Broadway, 1977).
So, we face a similar moment in 2026 regarding technology. Academics across the country are struggling with how to effectively teach writing, test inquiry, and measure achievement in a world where Artificial Intelligence (AI) writes without spelling, grammar, or syntax errors.
Academics, judges, and others insist that people need to know how to write, just as they long clung to cursive writing and so much more.
But academia may be shifting. TCD noted in December that Purdue University would be among the first to bend the knee to our new robot overlords. In the pragmatist spirit of "If you can't beat them, join them," Purdue is shifting curricula.
In 2026, the school is adding "a new graduation requirement" for incoming freshmen. It is adding five AI core competencies in which students will be tested as a requisite for graduation. It is essentially embracing an enemy against which others argue and rebel.
Notably, it is doing so before the ultimate impacts and results of AI are known, at a time when conjecture, pontification, fear, and anxiety rule. Another example of this current evolution is Macrohard, and the implications are intriguing. Mr. Musk may be viewed as a heretic, but that does not mean he is wrong.
Purdue may be the thought leader of tomorrow, or a dimly lit bulb. The TCD article notes that its "announcement was met with widespread scorn" and skepticism. Embracing the purveyor and enabler of plagiarism is not seen as progress. One commenter described the plan as requiring "demonstrating 'competency' in a tool that's primarily a shortcut for incompetent people."
Ouch. That is fairly critical. In centuries past, thinkers like Giordano Bruno and others confronted threats to their forward-thinking heresy. Power and institutions have always feared and disliked change.
The Purdue announcement harkens back to a Hechinger Report article from last summer on AI integration in higher education. That exposé noted that employers are seeking workers with AI skills and comprehension. The writer claims that "Generative AI technology is rapidly changing the labor market," and that trainers and educators will either climb aboard the preparatory bandwagon or be run over by it.
Thus, colleges and universities are integrating AI into "their course catalogs, and individual professors are altering lessons to include AI skill building." There is a tacit admission there that perhaps calculators and computers will be part of our future and that there is merit in both learning to calculate standard deviations and using VisiCalc to do it for you.
The questions will be multifaceted. Students whose only skill is using AI (who cannot manually calculate deviations, check case citations, or write) may still find some purchase in the market. However, those who lack the underlying skills will never know for sure whether AI is hallucinating, misunderstanding, or misrepresenting.
Those who can both do the calculation and know how to engage a calculator will be more functional, competent, and effective. The same goes for lawyers who can both build and support arguments and use AI to polish, streamline, and perfect them.
They will be competent "humans in the loop." For now, at least, there is enough distrust and discomfort that we will demand "humans in the loop" for the foreseeable future until the AI becomes smart enough to oversee itself (and us).
In the legal profession, it has always been reasonably easy to spot an advocate making arguments they read somewhere but don't really understand. Some of that is ignorance (not smart) and some is mere indifference (not invested). What is the difference between them? I don't know, and I don't care (ponder that).
Both will persist with AI, indistinguishable from the mediocre lawyer who has a brilliant and imaginative paralegal (partner, associate, clerk). As Wanda so poignantly noted, apes do read philosophy, "they just don't understand it." A Fish Called Wanda (MGM 1988).
Nonetheless, there will be arguments, disagreements, and posturing in education, training, and workplaces. Are we training philosophers or apes? Higher education and academia will struggle with whether today's skills are as important or as measurable as those of yesteryear.
There will be early adopters, adherents, and patrons. There will be critics, detractors, and denigrators. There will be false starts, failures, and victories.
Time will tell whether AI is the Brave New World of tomorrow or merely another tech bubble waiting to burst. There will be successes, failures, and much in between. But that has been true with various prior innovation waves.
Progress is a path paved with many potholes and with only a few glorious destinations. There have been many visionaries vilified for their heresy, and yet a fair few who turned out to be correct in the end. This will be fun to watch, but perhaps difficult to live through.