Pencil and paper are cognitive offloading. So is a calculator, a calendar, a knot tied in a handkerchief. The act is older than writing: humans have always pushed parts of their thinking out into the world so that the world can hold what the mind would otherwise have to. What changes with artificial intelligence is the kind of thinking we offload, the fluency of the receiving partner, and the frequency with which we do it.

A working definition

Cognitive offloading is the deliberate or habitual delegation of a mental operation to something outside the head. The operation could be storage (writing down what you do not want to forget), arithmetic (a calculator), navigation (GPS), or — newly — prose drafting, code synthesis, judgment under uncertainty (a large language model). The receiving partner can be a tool, a written artifact, another person, or now a machine that produces fluent text on demand.

Risko and Gilbert, in their 2016 review, treat offloading as one of cognition’s basic adaptive strategies — not a failure mode but a feature.1 Working memory is small; the world is large; we are sensible to use the world.

The Google effect, and what it taught us

In 2011, Sparrow and colleagues showed that when participants knew information would be saved, they remembered the location of the information rather than the information itself.2 The phrase coined for this — the “Google effect” — has slipped into ordinary language. It names a real shift: the internet changed not whether people remember, but what they remember. The unit of memory moved from fact to address.

The Google effect is a special case of transactive memory — Daniel Wegner’s 1987 idea that couples and small groups distribute remembering across members, with each person knowing who knows what. Couples who have been together for decades exhibit a famous variant: she remembers names; he remembers dates. The Google effect made the internet a transactive partner. Generative AI makes a fluent one.

What changes with a fluent partner

Pencil and paper offload only what you write down. A search engine offloads retrieval but leaves you to evaluate results. A large language model offloads something different: the production of inference, prose, and judgment in a form that looks like it has already been evaluated.

Two consequences follow. First, the cost of offloading drops to near zero — there is no transcription, no query crafting, no scanning of results. Friction was a brake; the brake is gone. Second, the evaluative step — checking what the partner returned — now requires more effort than the offloaded operation itself, because evaluating a paragraph of plausible prose is harder than evaluating a list of search results. The economics invert.
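One way to see the inversion is as a toy decision rule. The sketch below is an illustration, not a model from the literature: the function, its costs, and the weight w are all invented for this example. The weight stands in for how much of the verification effort the user actually accounts for when deciding whether to delegate.

```python
# Toy decision rule for offloading. All names and numbers are invented
# for illustration; w is the fraction of the verification cost the user
# actually accounts for (w = 1.0 would be full, honest accounting).
def delegates(cost_self: float, cost_delegate: float,
              cost_verify: float, w: float = 0.2) -> bool:
    return cost_self > cost_delegate + w * cost_verify

# Search era: delegation cheap, verification cheap.
print(delegates(cost_self=5, cost_delegate=1, cost_verify=2))     # True

# Fluent-partner era: delegation near-free, verification now dominant.
print(delegates(cost_self=5, cost_delegate=0.1, cost_verify=20))  # True
# With full accounting (w = 1.0), the same delegation no longer pays:
print(delegates(5, 0.1, 20, w=1.0))                               # False
```

The point is not the particular numbers but the shape: the rule that made offloading retrieval rational keeps recommending delegation once verification is discounted, which is exactly the inversion described above.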

This is the structural reason why AI-era offloading is not just more of the same. With pencil and paper you offload to keep your mind free for the harder problem. With a fluent partner you can offload the harder problem.

“Use it or lose it”

Risko and Gilbert noted that offloading “can also lead to a decrease in cognitive engagement and skill development” if it becomes excessive.1 The phrase has the shape of a folk saying for a reason — the underlying mechanism is generic. Skills that are not exercised attenuate. The medical literature on disuse atrophy in musculature is not a metaphor for what happens to neglected mental skills; it is the same kind of phenomenon at a different layer of the body.

The empirical question is whether the mental skills displaced by AI are durable enough to survive partial disuse. Some certainly are: arithmetic survived calculators; spelling has limped along with autocorrect. Others may not. The faculty of forming a position before consulting an oracle — of producing a draft argument from one’s own materials before asking what the model thinks — is a skill, and like any skill it can be lost through non-practice.

The trust loop

The more a user trusts an AI, the less she checks its output, and the more she delegates next time.1 This is a loop, and loops compound. Recent classroom studies report the predicted shape: students who rate an AI partner as reliable invest less in source verification, which over time degrades the very evaluative skills that would let them detect when the partner is wrong.1

A small, ugly word for the equilibrium this loop tends toward is cognitive dependency — a state in which the user can no longer perform the offloaded operation without the partner, and can no longer detect when the partner is in error. The condition is not new in human history. What is new is its scope.
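The loop can be made concrete with a small simulation. Everything in the sketch below is an illustrative assumption, not an estimate from any study: the update rules, the error rate, and the gain and decay parameters were chosen only to exhibit the compounding dynamic. Trust rises with each apparently successful delegation, checking falls as trust rises, and the evaluative skill decays with disuse.

```python
# A minimal simulation of the trust loop as a discrete dynamical system.
# All parameters and update rules are illustrative assumptions.
def trust_loop(rounds=60, error_rate=0.05,
               trust_gain=0.08, trust_loss=0.4,
               practice_gain=0.02, disuse_decay=0.03):
    trust, skill = 0.5, 1.0          # moderate trust, full evaluative skill
    for _ in range(rounds):
        checking = 1.0 - trust       # the more trust, the less checking
        # Checking exercises the evaluative skill; unchecked delegation
        # lets it decay.
        skill += practice_gain * checking - disuse_decay * trust
        skill = max(0.0, min(1.0, skill))
        # An error is caught only if the user both checks and still has
        # the skill to recognize it.
        caught = error_rate * checking * skill
        missed = error_rate - caught
        # Caught errors damage trust; apparent successes, including
        # missed errors, raise it.
        trust += trust_gain * (1.0 - error_rate + missed) - trust_loss * caught
        trust = max(0.0, min(1.0, trust))
    return trust, skill

print(trust_loop())  # tends to (1.0, 0.0): full trust, no evaluative skill
```

Under these assumptions the system settles at full trust and zero skill, the equilibrium the text names cognitive dependency; the simulation adds nothing to the argument except a demonstration that the loop, once closed, compounds.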

Designing for offloading you can return from

The pessimist’s reading of all of this is that AI tools should be designed with “friction” — small obstacles that force the user to engage. The optimist’s reading is that the tools should be designed for legibility — the partner shows its work, the user can step back into the loop at any moment, and the evaluative step is supported rather than abandoned. The two readings are not opposed. Both ask the same question: which faculties do we want to keep using, and what would it cost to keep using them?

That question, asked seriously, is the hard one. Cognitive offloading is not a moral failure or a technical inevitability. It is a design decision, made many times a day, mostly without noticing. The job of an encyclopedia entry is to make the decision visible.

Footnotes

  1. Risko & Gilbert, 2016. See also the synthesis in §2.3.2 of Gesnot, The Impact of AI on Human Thought, arXiv:2508.16628 (2025). 2 3 4

  2. Sparrow, B., Liu, J., & Wegner, D. M. (2011). “Google effects on memory: Cognitive consequences of having information at our fingertips.” Science, 333(6043), 776–778. The “Google effect” study.