The framework of cognitive offloading — the one this section spends most of its time with — treats the mind as a fixed thing that can move work outside itself. A different framework, older and equally serious, treats cognition itself as something that lives partly outside any single skull. The two frameworks are not opposed; they answer different questions.

This is the distributed cognition tradition. Two anchor texts mark its shape.

Hutchins: cognition in the wild

Edwin Hutchins’s 1995 book Cognition in the Wild studied how a U.S. Navy ship navigates. The relevant cognition — keeping the vessel on course — is performed by a team of sailors using charts, instruments, plotters, and established protocols. No single sailor has the whole picture. The cognition is a property of the system: people, instruments, procedures.1

Hutchins’s analytical move is to insist that the system is the right unit of cognitive analysis here. Asking what the navigator “knows” misses the point. The navigator is one component of a cognitive process whose full execution involves many people, several instruments, and a particular layout of work. Take any component out and the cognition fails.

This is distributed cognition. The mind is not confined to the head; the mind is whatever system, taken as a whole, produces the cognitive work.

Clark and Chalmers: the extended mind

Andy Clark and David Chalmers’s 1998 paper The Extended Mind generalized the move and made it philosophical. Their famous thought experiment: Inga remembers an art museum’s address; Otto, who has Alzheimer’s, looks the address up in a notebook he carries everywhere. By Clark and Chalmers’s lights, Otto’s notebook is not an aid to his memory. It is part of his memory. The functional role the notebook plays — reliable, accessible, trusted — is the role that, for Inga, is played by neural tissue. If we are willing to call Inga’s neural tissue part of her mind, parity demands that we call Otto’s notebook part of his.2

The paper’s claim is contested in detail and durable in shape. Roughly: a portion of cognition lives outside the brain whenever an external resource plays the right kind of functional role.

Why this matters for AI

Apply the framework to a person and an AI partner. Most of the conditions Clark and Chalmers asked of Otto’s notebook are satisfied. Reliable: the model is always available. Accessible: the query takes seconds. Trusted: the user relies on it without checking. The conclusion the framework licenses is that the AI is, in the relevant functional sense, part of the user’s cognition.

This is interesting in two directions.

For AI optimists, the extended-mind framework legitimizes the partnership. The user-plus-AI pair is a genuine cognitive system, and the right question is whether the system's outputs are good — not whether the user, sans AI, could have produced them. By this framing, complaints about "AI doing the thinking" are like complaints about Otto using a notebook. The thinking is being done; the cognitive system is just larger than the brain.

For AI critics, the extended-mind framework names the cost. If the AI is part of the user's cognition, then the AI's flaws — its biases, its hallucinations, its training-data preferences — are the user's cognitive flaws. There is no clean separation between "what the user thinks" and "what the model says." A homogenized model produces a homogenized user, by the same logic by which the notebook supplies Otto's address.

The warning

The extended-mind framework is a useful philosophical resource, but it can be used to justify almost anything. Andy Clark himself, the framework's co-author, has emphasized that not every external aid counts. The criteria matter: reliability, accessibility, trust, automatic invocation. A cheap, slow, mistrusted, manually invoked tool is not part of cognition in any interesting sense.

The encyclopedia's caution: AI partners are sometimes reliable, sometimes slow, sometimes mistrusted, and increasingly automatically invoked. They satisfy the extended-mind criteria more strongly than past tools have, but unevenly. Treating them wholesale as part of cognition flattens distinctions that matter for the analyses in Sections C and D.

How this connects forward

The distributed-cognition framework is the philosophical scaffolding for the orchestrating-consciousness hypothesis (E.33). Once you accept that some cognition lives outside the skull, the question of who is doing the cognition becomes empirical rather than metaphysical. If the cognitive system is human-plus-AI, and the AI’s role grows over time, the system’s center of gravity shifts. Whether that shift constitutes “AI consciousness” is one question; whether it constitutes a redistribution of cognitive labor is another. The encyclopedia’s argument cares about the second.

The distributed-cognition tradition is also the place to remember that this is not entirely new. Humans have always extended themselves into tools, into language, into institutions. AI is a new variant of an old structural fact. It is the speed and fluency of the extension that warrant the encyclopedia’s attention — not the extension as such.

Footnotes

  1. Edwin Hutchins, Cognition in the Wild (Cambridge, MA: MIT Press, 1995).

  2. Andy Clark and David Chalmers, "The Extended Mind," Analysis 58, no. 1 (1998): 7–19.