The encyclopedia takes the period 2022–2025 as its central reference window. This article explains why. The short version: the combination of fluency, ubiquity, and integration that defines contemporary generative AI is new, and its effects on human cognition are correspondingly new. Earlier “AI moments” — and there have been several since the 1950s — did not have this combination.

The longer version follows.

Earlier AI waves

The technical history of AI has roughly three earlier waves of public attention.

The 1950s–1970s. Symbolic AI’s early successes — chess-playing programs, theorem provers, the first chatbots (ELIZA, 1966). Public fascination, academic optimism, popular fears about thinking machines. The optimism faded when the techniques scaled poorly. The cognitive effect on the public was mostly cultural — AI became a science-fiction genre — rather than directly behavioral.

The 1980s–1990s. Expert systems and the first wave of neural networks. Specialized AI deployed in medicine, finance, and manufacturing. Public visibility lower than the 1950s wave, practical impact higher. The effect on individual cognition: minimal. Most people interacted with AI through intermediaries, not directly.

The 2010s. Deep learning’s rise. Image recognition, speech recognition, machine translation. The first widespread consumer-facing AI tools — Siri (2011), Alexa (2014), Google Translate’s switch to neural machine translation (2016). The cognitive effect: significant for translation and search; modest elsewhere. Most users still treated AI as a specific-task tool.

None of these waves produced what 2022–2025 produced. The mechanism is worth understanding.

What is structurally new

Three features in combination, none of them present in earlier waves.

Fluency across domains. Pre-LLM AI was good at narrow tasks — chess, translation, image classification. LLMs, from 2022 onward, produce human-equivalent (and in many cases better-than-median) outputs across a very wide range of cognitive tasks: writing, analysis, summarization, explanation, brainstorming, code generation. The user does not need to choose which tool to use; the same tool handles most cognitive tasks they might bring to it.

Ubiquity and accessibility. ChatGPT’s public launch in November 2022 brought generative AI to millions of users in weeks.1 By 2025, generative AI is built into operating systems, search engines, productivity software, classroom platforms, and consumer apps. Using it does not require a decision; it is the default in many workflows.

Conversational interface. Earlier AI required users to learn the tool — specific commands, search syntax, structured input. LLMs accept natural language. The cognitive cost of using AI dropped to roughly zero. This is the difference between a tool that requires a small adoption decision and a tool that is invoked by default the moment a question forms.

The combination is what’s new. Each feature alone existed before (calculators were ubiquitous; Google was conversational in a limited sense; domain-specific AI was fluent within its domain). All three together — fluent, ubiquitous, and trivial to invoke — constitute the 2022–2025 condition.

Why this matters cognitively

The cognitive consequence: AI moves from a tool one decides to use to a default cognitive partner. The decision points where one might consider the costs (atrophy, dependency, standardization) get crossed without deliberate choice. Use becomes automatic; effects compound silently.

This is the structural reason the encyclopedia treats 2022–2025 as a break rather than a continuation. A society in which AI is one of many specific-task tools handles AI’s cognitive effects differently than a society in which AI is the way thinking gets done by default. Pre-2022 discourse was about the first kind of society. Post-2022 discourse is catching up to the second.

The “ChatGPT moment”

A note on framing. The phrase “ChatGPT moment” — sometimes used to mean “the public’s first encounter with generative AI” — is a useful but limited shorthand. It captures the launch event but misses the structural shift. A more careful framing: ChatGPT was the consumer surface of a deeper trend (the rise of capable, general-purpose foundation models) that would have produced similar effects through a different surface if ChatGPT had not been first. The branding matters less than the underlying capability distribution.

The encyclopedia tries to keep this distinction. Where it discusses “ChatGPT,” it usually means the specific OpenAI product. Where it discusses “generative AI” or “LLMs,” it means the broader category whose effects are the encyclopedia’s subject.

Where this connects forward

The next article (A.05) takes up the central tension of the period: AI as augmentation versus AI as decline. The tension is, in a sense, the encyclopedia’s whole subject. Section B’s articles develop the cognitive-offloading framework that is the working analytical lens. Sections C, D, E, and F follow from there.

The 2025 moment is, on the encyclopedia’s argument, the first period in which it is worth writing this kind of book. Earlier, the cognitive effects of AI were too narrow or too rare to warrant systematic treatment. By the late 2020s, those effects may have stabilized into a steady state that warrants a different kind of treatment. This work is sized to the present.

Footnotes

  1. OpenAI ChatGPT public launch, 30 November 2022.