Daniel Wegner’s 1987 idea was modest at first. Studying long-term couples, he noticed something obvious in retrospect: each member of the couple stored different things, and each knew, more or less, who stored what. She remembered the names; he remembered the dates. Either could fail at the task in isolation, but together they ran a more capacious memory than either head contained. Wegner called the arrangement transactive memory.
The mechanism has two parts. Storage: each member holds different content. Directory pointers: each member knows where the other content is held. The storage scales with the group; the directory pointers are what hold the system together. A new member of the group must learn the directory before the system becomes useful to them.
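The two-part mechanism can be rendered as a toy data structure. This is an illustrative sketch only — the names `store` and `directory` are mine, not Wegner’s — but it makes the storage/pointer split concrete:

```python
# Toy model of a two-member transactive memory.
# Each member holds some facts (storage) plus a directory
# mapping topics to whichever member holds them (pointers).

class Member:
    def __init__(self, name):
        self.name = name
        self.store = {}      # topic -> fact this member holds
        self.directory = {}  # topic -> Member who holds it

    def learn(self, topic, fact, partners):
        """Store a fact locally and update everyone's directory."""
        self.store[topic] = fact
        for m in [self] + partners:
            m.directory[topic] = self

    def recall(self, topic):
        """Answer from own store, or follow the directory pointer."""
        holder = self.directory.get(topic)
        return holder.store.get(topic) if holder else None

she = Member("she")
he = Member("he")
she.learn("names", "the Altmans next door", [he])
he.learn("dates", "anniversary is 14 June", [she])

# Either member retrieves either fact via the directory,
# though each stores only half the content.
print(he.recall("names"))   # -> the Altmans next door
print(she.recall("dates"))  # -> anniversary is 14 June
```

Note that storage grows with the number of members, while every member must carry a full directory — which is exactly why a newcomer gets no benefit until they have learned the directory.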
The internet as a transactive partner
The natural extension was the internet. Sparrow, Liu, and Wegner’s 2011 paper — the one that named the Google effect — showed experimentally that when people knew information would be saved, they remembered the location of the information rather than the information itself.¹ The internet had been adopted, without anyone deciding to, as a transactive partner. The unit of memory shifted from fact to URL.
For a thinking person, this is a real change. Not in whether memory works, but in what it remembers. A pre-internet adult remembered facts; a post-internet adult remembers paths to facts. The system, taken as a whole — adult plus internet — knows more than either alone, but knows it differently. The adult’s solo capacity for that domain has narrowed.
What changes with a fluent AI
Generative AI is the same arrangement, scaled. The directory pointer is not “I saved this in Notion”; it is “the model will produce something on this if I ask.” The query is faster, the partner more capable, and — crucially — the partner’s response is fluent: it reads as if it were already a finished thought.
Three consequences follow.
Reliability. A transactive system is only as good as its members. A trusted partner whose memory is excellent makes the whole system more capacious. A partner whose memory is unreliable poisons the whole system, because the user has stopped doing the verification that would catch the partner’s errors. The quality of any AI-mediated transactive memory depends entirely on the quality of the AI’s outputs — and the user’s ability to evaluate those outputs.
Asymmetry. In a couple’s transactive memory, both members have full directory pointers. In a person-and-LLM transactive memory, only the person does. The model does not know what you know; it always offers full content. The arrangement is one-way: the human delegates; the model serves. This is unlike most prior transactive systems, and worth noticing.
Atomization. Wegner’s couples got their transactive partner from being in sustained relationship with another mind. AI partners are available to anyone, without relationship. The benefit is access; the cost is that an always-available machine increasingly substitutes for the intelligent partners that used to be earned (mentors, librarians, knowledgeable friends). The relational parts of knowing are quietly displaced.
Why this is not a complaint
It is fair to ask whether worrying about transactive AI is just nostalgia for human partnership. The answer is no — the literature on transactive memory predates AI, and its concerns are technical. A transactive system that does not let its members maintain their own competence is a brittle system. Couples who go through bereavement experience this acutely: the loss of the partner is also the loss of half the couple’s transactive memory, and the survivor has to rebuild competences they had quietly let lapse.
The question worth asking is whether the rapid, mass adoption of AI as a transactive partner is producing the same brittleness at population scale. The empirical evidence so far suggests it is. AI-Induced Cognitive Atrophy (B.09) takes this up directly.
A small caution about the framing
The transactive-memory framework is genuinely useful, but it can be overextended. Pencil and paper are not a transactive partner; they are a passive store. The internet is somewhere in between. An LLM is closer to a partner than the internet was, but it is not a person. Treating it fully as one — anthropomorphic projection at scale — has its own costs, taken up in Anthropomorphism of AI (E.34) and the synthesis essay Reading the Whole Argument (F.40). The transactive frame works for AI; the partnership metaphor needs more care.
Footnotes

1. Sparrow, Liu & Wegner, 2011.