The medical term disuse atrophy describes what happens to a muscle that stops being loaded. The fibers shrink; the strength declines; recovery from disuse takes longer than the disuse itself did. The phenomenon is real and generic: tissues that are not used become less able to be used.
The encyclopedia’s claim — Gesnot’s, derived from a literature now spanning two decades — is that the same is broadly true of mental skills. Skills that are not exercised attenuate. Skills that are systematically offloaded to AI, day after day, are by definition not exercised. The expected outcome is cognitive atrophy: a measurable decline in the user’s ability to perform, on their own, the operation they have been delegating.
This is a strong claim, and it deserves to be examined carefully. Three questions are worth holding apart.
Question one: which skills are at risk?
Not all skills atrophy at the same rate. Some are remarkably durable. Most adults can still do mental arithmetic, despite forty years of universally available calculators. Spelling has survived autocorrect, with regional exceptions. Map-reading is more endangered, though only slightly: people who learned to navigate with paper maps mostly retain the skill if they used it for a few years before GPS arrived.
The skills the empirical literature suggests are most at risk from generative AI are different in character. They are higher-order, less practiced in isolation, and more easily masked by the AI’s fluent output. The short list:
- Independent drafting — producing a first attempt at an idea before consulting an oracle.
- Critical evaluation — judging whether a piece of plausible prose is actually correct.
- Source verification — running down where a claim came from before repeating it.
- Sustained attention — staying with a hard problem long enough for insight to surface, rather than asking the model and moving on.
These are precisely the skills the next article — Critical Thinking in the Age of LLMs (B.10) — develops in detail.
Question two: what does the evidence look like?
The clearest contemporary signal comes from the 2024 Microsoft and Carnegie Mellon study of professional knowledge workers using generative AI in their day-to-day work. The headline finding: higher self-reported confidence in the AI’s capabilities tracks with a measurable decline in independent critical thinking, and with what the authors describe as an “atrophy” of analytical skill.1 Workers who deferred more produced fewer creative responses, evaluated AI outputs less rigorously, and reported lower engagement with the underlying problems.
That study is a single paper. Several others, with smaller samples and different methodologies, find converging signals — students who routinely use LLMs score lower on standardized critical-thinking measures, and the gap widens with usage frequency. The literature is young; the effect sizes vary; the direction of the signal is consistent enough to take seriously.
Question three: is atrophy reversible?
This is the question that determines how much of the literature’s alarm is warranted. The medical analogy gives some hope: muscles do recover from disuse, given time and load. The cognitive analogy is messier. Some skills — fluency in a second language, for example — recover poorly after long disuse. Others recover quickly with renewed practice.
There is no good answer yet for the AI case, because the period of universal generative-AI use is too short. What is clear is that the structural features of AI use encourage continued offloading rather than periodic exercise. There is no obvious moment, in a typical workflow, when the tool says “and now you should do it without me, to keep your skills up.” The opposite signal — the tool is faster, the tool is more polished, the tool is right there — is constant.
What follows
The point of naming AI-induced cognitive atrophy is not to argue against AI use. It is to argue that the question of which faculties one wants to keep exercising is now a question every regular AI user faces, whether or not they notice it. The point of B.10 — and the synthesis in Reading the Whole Argument (F.40) — is that the cumulative effect of a population not asking that question is very different from the cumulative effect of one that does.
Tools shape the people who use them. The shape is mostly invisible during the shaping. The encyclopedia’s job is to make the shaping visible while the user still has time to choose what to keep.
Footnotes
- Microsoft / Carnegie Mellon, 2024. The strongest empirical signal to date. ↩