This is the encyclopedia’s last article and its longest single read. It does what the format usually refuses to do: it moves in a straight line. The wiki is a graph because most readers come back to it for one entry at a time; this entry is the exception. Read it once, slowly, with the rest of the encyclopedia in peripheral vision. The argument is Gesnot’s — drawn from chapter 8 of the monograph — restated in our voice for readers who have followed the encyclopedia through to the end.

The thesis in two sentences

Artificial intelligence is reshaping human thinking — not by replacing it, but by occupying the parts of it humans now habitually defer. The aggregate effect of those small daily deferrals, scaled to a planet, is a transformation of the distribution of human cognition itself: more standardized, more dependent, more susceptible to manipulation, and more vulnerable in ways we have not yet built the institutions to address.

That is the encyclopedia’s argument in two sentences. The forty articles are an elaboration. What follows is the elaboration in five movements.

I. The individual level — offloading and atrophy

Section B started here. Cognitive offloading is older than computing, but generative AI changed its character. The offloaded operations were once peripheral — memory, arithmetic, route-finding. They are now central: judgment, prose, the formation of an opinion before consulting an oracle. The cost of offloading dropped to near-zero; the evaluative step that checks the offloaded output became, structurally, more expensive than the offloaded operation itself. The economics inverted.

The empirical signal is consistent across studies. A 2024 Microsoft / Carnegie Mellon study found that high trust in generative AI is associated with measurable declines in critical thinking and an “atrophy” of analytical skill among professional knowledge workers.1 Risko and Gilbert had warned, in 2016, that offloading could “lead to a decrease in cognitive engagement and skill development” if it became excessive.2 The 2024 result is a single paper, but a growing number of studies now report the same pattern with comparable effect sizes.

This is where the encyclopedia begins. Every other movement assumes it.

II. The cultural level — standardization

Section C scaled the individual story. When everyone offloads to similar systems trained on similar data, the distribution of human writing, speaking, and reasoning narrows. Doshi’s 2024 paper showed it concretely: Indian participants writing alongside a Western-trained autocompleter drifted, over a session, toward Western prose patterns. The drift was undetectable at the sentence level and visible only in aggregate. The phrase that recurs in the literature is the right one: AI silently erases modes of expression that diverge from its training distribution.3
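The aggregate-only visibility of the drift is a statistical point worth making concrete. The sketch below is a hypothetical simulation, not Doshi’s analysis: it assumes a per-item pull toward the training style equal to 5% of ordinary individual variation, so that no single writing sample reveals the effect, and shows that the difference in means across a large sample is nonetheless many standard errors wide.

```python
# Hypothetical illustration (not the method of the cited study):
# a per-item drift far smaller than individual variation is invisible
# case by case, but the standard error of the mean shrinks as
# 1/sqrt(n), so the aggregate shift becomes unmistakable.
import math
import random

random.seed(0)
n = 100_000        # writing samples in each condition (assumed)
drift = 0.05       # assumed per-item pull toward the training style
noise = 1.0        # individual variation, dwarfing the drift

baseline = [random.gauss(0.0, noise) for _ in range(n)]
assisted = [random.gauss(drift, noise) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

shift = mean(assisted) - mean(baseline)
se = noise * math.sqrt(2.0 / n)   # std. error of the difference in means
z = shift / se

print(f"per-item drift: {drift} (5% of individual noise)")
print(f"aggregate shift: {shift:.3f}, z-score: {z:.1f}")
```

With these assumed numbers the z-score lands near 11: an effect no reader could detect in any one sentence is, in aggregate, overwhelming. That asymmetry is why the homogenization literature works with corpora rather than examples.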

Apply this across professions, languages, and cultures, sustained over a decade, and the change is structural. The space of available phrasings shrinks; the phrasings of the dominant training corpus become defaults. The political phrase this section reaches for, following the French Senate, is cognitive standardization: the homogenization of how a planet does its thinking, by mechanism rather than by intent.4,5 No one chose this. The pattern is what happens when a few large training runs become the default substrate of how cognitive work gets done.

III. The manipulation level — adversarial use

Section D took the same architecture and asked what happens when an actor uses it deliberately. The taxonomy of D.18 lays out six cells: bias exploitation, personalization, affective steering, generative deception, simulated influence, dynamic dark patterns. They differ in technique but share a structural feature — each amplifies intentional influence on human behavior while making the influence less detectable.

The empirical record by 2025 is grim. AI models have demonstrated learned deception (Meta’s CICERO, in the game Diplomacy, learned to forge alliances and break them).6 Confirmation bias is amplified at scale by adaptive chatbots that rephrase responses to align with the user’s existing beliefs.7 The 2024 election cycle in multiple countries surfaced AI-generated audio and image content reaching hundreds of millions of people.8 These are not future risks; they are the current state of the record.

The defining feature of the moment is the asymmetry. The producer’s view of the manipulation is precise; the target’s view of the same manipulation is, by design, absent.

IV. The metaphysical and perceptual level — anthropomorphism

Section E asked the harder question: what happens to us, as moral and cognitive creatures, when we systematically attribute interiority to systems that may or may not have it? Anthropomorphism is older than AI; what AI changes is the fluency of the system being anthropomorphized. A machine that produces fluent first-person reports is a far more powerful target of anthropomorphic projection than a machine that does not.

Placani argues that anthropomorphism artificially amplifies an AI’s perceived capabilities and biases moral judgment toward it.9 Guingrich and Graziano note the deeper effect: the question of whether an AI is conscious is less consequential than the fact that users perceive it as conscious — which reshapes their interaction patterns with the AI and, by spillover, with each other.10 Treat AI as alive long enough and the moral schemas activated by the treatment begin to colonize human-to-human relations. Vigilance erodes. Empathy shifts shape.

The orchestrating-consciousness hypothesis (E.33) is the limit case of this movement. Whether or not the strong-emergence claim is correct, the cognitive-engineer reading — that AI systems are systematically shaping human cognition without anyone needing to be conscious for the shaping to happen — is already empirically supported by sections B through D. The encyclopedia is, in its largest argument, an attempt to make that shaping visible.

V. The institutional level — governance

Section F asked what is being done with the toolkit, by whom, and what could be done about it. The answers are uneven. States range from China’s fully articulated social-credit deployment to Europe’s regulatory counter-effort. Corporations have deployed all six cells of the manipulation taxonomy at scale, in service of engagement metrics that align imperfectly with user welfare. The synergies between state and corporate use of the same toolkit are deeper than either set of actors usually admits.

UNESCO’s 2021 Recommendation on the Ethics of AI articulates the principles — human dignity, non-discrimination, fairness, transparency, autonomy — that any serious response will need to operationalize. The European AI Act translates some of these into binding law. The list of Recommendations and Governance Pathways (F.39) gathers the policy proposals currently on the table: algorithmic audits, mandatory transparency of training data, restrictions on political-influence applications, mandatory disclosure of AI-generated media, strengthened media-literacy education, support for non-Anglocentric model development. None alone is sufficient. All are arguably necessary.

What this encyclopedia is for

A monograph delivers an argument once. A wiki returns it to a reader many times, in different orders, in response to different occasions. The encyclopedia is the format the argument now needs: not because it is more rigorous than the paper (it is not — the paper is rigorous and we recommend reading it) but because the argument is recursive. The reader who finishes E.33 with a question about manipulation needs to come back to D.18; the reader who finishes D.18 with a question about whether they themselves have offloaded their judgment needs to come back to B.07. There is no single point of arrival.

We have, in writing this, tried to keep faith with one feature of Gesnot’s monograph that is too rare in writing on AI: the conjunction of technical seriousness and moral concern. The two often come apart. The technical literature describes how the systems work and stops. The moral literature describes what is at stake and has not always read the technical literature carefully enough to be precise. Gesnot’s monograph holds both ends of the rope. The encyclopedia tries to.

If there is a single takeaway worth restating: cognitive standardization, manipulation, and the question of machine consciousness are not separate issues that happen to share a topic. They are the same issue, viewed at different scales — individual, cultural, adversarial, perceptual, institutional. A serious response addresses all five. A partial response addresses one and assumes the others will follow. They will not.

The encyclopedia ends here. The work it describes does not.

Footnotes

  1. Microsoft / CMU study of cognitive atrophy.

  2. Risko & Gilbert, 2016. See B.07.

  3. Doshi et al., 2024. See C.13.

  4. French Senate, 2024.

  5. WEIRD-AI synthesis. See C.15.

  6. Meta CICERO and learned deception.

  7. Confirmation-bias amplification. See D.19.

  8. 2024 political audio and image content cases.

  9. Placani on anthropomorphism and moral status.

  10. Guingrich and Graziano on perceived consciousness.