The catalogue of human cognitive biases is well-mapped. Tversky and Kahneman’s 1974 paper opened the modern literature; the half-century since has produced hundreds of named biases, dozens of robust ones, and a few — confirmation bias, anchoring, availability, framing — that are general enough to be the working vocabulary of behavioral economics, advertising, and now algorithmic design.1

What changes when AI enters the picture is not the menu of biases. The menu is essentially fixed; humans have these biases for good reasons of cognitive economics, and no AI is going to change that. What changes is the scale and granularity at which the biases can be exploited.

A short menu, with the AI angle

Confirmation bias is the tendency to weight evidence that confirms existing beliefs and to discount evidence that contradicts them. AI exploits it in two ways. Recommendation systems serve users content that aligns with inferred prior beliefs (filter bubbles). Conversational AI produces responses that match the user’s emotional valence: gentler when challenged, more confident when agreed with. Both push the user toward what they already thought.
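
A minimal sketch of the recommendation half of this, with every name, vector, and weight hypothetical: a system that ranks candidate items purely by similarity to the user’s inferred beliefs buries contradiction by construction.

```python
import math
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    stance: list[float]  # position in a hypothetical "belief space"

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_for_user(beliefs: list[float], candidates: list[Item]) -> list[Item]:
    # Scoring purely by agreement with the inferred beliefs is the
    # confirmation-bias exploit: contradicting items sink to the bottom.
    return sorted(candidates, key=lambda it: cosine(beliefs, it.stance), reverse=True)

feed = rank_for_user(
    beliefs=[0.9, -0.2],  # inferred from past clicks (hypothetical values)
    candidates=[
        Item("confirms the prior", [0.8, -0.1]),
        Item("neutral", [0.0, 0.0]),
        Item("contradicts the prior", [-0.9, 0.3]),
    ],
)
print([it.text for it in feed])  # the confirming item ranks first
```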

Anchoring is the tendency to weight an initial number, value, or framing disproportionately in subsequent judgment. AI exploits it through display choices. Showing a high “list price” before the discounted price anchors willingness to pay; showing a model’s most confident interpretation before its caveats anchors the user’s reading.
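
A sketch of the display choice, with hypothetical prices and function names. The underlying price is identical; only the anchor differs.

```python
def anchored_display(list_price: float, sale_price: float) -> str:
    # Presenting the (often inflated) list price first sets the anchor;
    # the identical sale price then reads as a gain relative to it.
    discount = 1 - sale_price / list_price
    return f"Was ${list_price:.2f}, now ${sale_price:.2f} ({discount:.0%} off)"

def plain_display(sale_price: float) -> str:
    return f"${sale_price:.2f}"

print(anchored_display(99.99, 59.99))  # Was $99.99, now $59.99 (40% off)
print(plain_display(59.99))            # $59.99  (same price, no anchor)
```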

Availability is the tendency to judge frequency by how easily examples come to mind. AI’s role is in shaping what comes to mind — by what it shows, how often, and in what emotional packaging. A user who sees three deepfake news stories about a political opponent in a week will, when asked about that opponent, retrieve those stories.
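
One way that shaping could be implemented, sketched with invented weights: a scheduler that boosts emotional charge and rewards repetition keeps the vivid story in front of the user until it is what comes to mind.

```python
def exposure_score(relevance: float, arousal: float, prior_impressions: int) -> float:
    # Hypothetical scheduler: emotional charge is boosted and repetition is
    # rewarded rather than penalized, so the same vivid story resurfaces
    # until it is what the user retrieves first.
    return relevance * (1.0 + arousal) * (1.0 + 0.1 * prior_impressions)

# A vivid, twice-shown story outranks a drier, more relevant correction:
print(exposure_score(relevance=0.6, arousal=0.9, prior_impressions=2))  # ~1.37
print(exposure_score(relevance=0.8, arousal=0.1, prior_impressions=0))  # ~0.88
```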

Framing is the dependence of judgment on how a question or option is posed. AI exploits framing by personalizing it: the same product, the same opinion, the same argument, served to different users in the framing that their inferred profile predicts will work.
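
A sketch of personalized framing, with hypothetical frame texts and profile scores; the offer behind all three frames is the same.

```python
FRAMES = {
    # Hypothetical frame variants for one identical offer.
    "loss_averse":  "Don't lose your 20% discount: it expires tonight.",
    "gain_seeking": "Unlock an extra 20% off, tonight only.",
    "social_proof": "12,000 people near you bought this week.",
}

def pick_frame(profile: dict[str, float]) -> str:
    # The inferred profile predicts which framing will convert; everyone is
    # sold the same thing, worded for their particular biases.
    return FRAMES[max(profile, key=profile.get)]

print(pick_frame({"loss_averse": 0.7, "gain_seeking": 0.2, "social_proof": 0.1}))
```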

These are four; there are more. The point is not the catalogue but what happens to it at machine scale.

The “hypernudge”

The term that captures the shift is hypernudge (Karen Yeung’s coinage). The underlying concept is Thaler and Sunstein’s nudge: a small design choice that shapes a decision without removing the freedom to choose otherwise. The default option on a form, the order of choices on a menu. Nudges are subtle per instance, visible in aggregate, and generally treated as benign or even beneficial when used ethically.

A hypernudge is a nudge tailored to a single user, in real time, by a system that knows them better than they know themselves. The scale shift matters. Population-level nudges can be discussed, debated, regulated. Per-user hypernudges escape detection by anyone except the system performing them.
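
The contrast can be made concrete in a sketch (names and threshold hypothetical): the population nudge is a single inspectable artifact, while the hypernudge is a per-user computation that leaves no shared artifact to inspect.

```python
from dataclasses import dataclass

@dataclass
class UserModel:
    # Hypothetical stand-in for a per-user behavioral model.
    compliance: float
    def predict_compliance(self) -> float:
        return self.compliance

def population_nudge(defaults: dict) -> dict:
    # Classic nudge: one designer-chosen default, identical for everyone,
    # and therefore visible, debatable, and regulable in aggregate.
    return {**defaults, "opt_in": True}

def hypernudge(defaults: dict, user: UserModel) -> dict:
    # Hypernudge: the default is recomputed per user from a behavioral model.
    # No two users need see the same choice architecture, so there is no
    # shared artifact for outsiders to detect or debate.
    return {**defaults, "opt_in": user.predict_compliance() > 0.5}

print(population_nudge({"plan": "basic"}))
print(hypernudge({"plan": "basic"}, UserModel(compliance=0.3)))  # opt_in: False
```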

The hypernudge shares the structural feature of the rest of the manipulation taxonomy (D.18): the producer’s view of the manipulation is precise; the target’s view is, by design, absent.

Why the scaling is the real problem

Consider a comparison. A 1980s direct-mail marketer who exploited confirmation bias in their copy was working with the same bias an AI uses today. The marketer:

  • Wrote one piece of copy, sent it to many people. Bias exploitation per user: approximate, generic, often miscalibrated.
  • Could not detect the bias in any individual recipient; they guessed at it from demographics.
  • Could not measure the result per user. They measured aggregate conversion rates.

A 2025 AI-driven feed:

  • Selects content per user from millions of options, dynamically. Exploitation: precise, calibrated, updated minute to minute.
  • Detects the bias from behavioral history. The user has provided thousands of micro-signals (clicks, dwell time, scroll velocity).
  • Measures the result per user, in real time, and adjusts (a minimal version of this loop is sketched below).
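
Mechanically, that loop is a per-user bandit. A minimal epsilon-greedy sketch, with all names, signals, and rates hypothetical:

```python
import random
from typing import Callable

def feed_loop(user_id: str, arms: list[str],
              pull: Callable[[str, str], float],
              rounds: int = 1000, epsilon: float = 0.1) -> dict[str, float]:
    # Per-user epsilon-greedy loop. `arms` are content strategies, and
    # `pull(user_id, arm)` returns an observed engagement signal in [0, 1]
    # (click, dwell, share): the real-time, per-user measurement.
    counts = {a: 0 for a in arms}
    means = {a: 0.0 for a in arms}
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.choice(arms)        # explore
        else:
            arm = max(means, key=means.get)  # exploit the best estimate so far
        reward = pull(user_id, arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
    return means  # the system's estimate of what works on *this* user

# Hypothetical engagement simulator: this particular user responds to outrage.
rates = {"outrage": 0.4, "cute": 0.2, "news": 0.1}
learned = feed_loop("u123", list(rates), lambda u, a: float(random.random() < rates[a]))
print(learned)  # estimates converge toward this user's response rates
```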

The mechanism — exploit a bias to produce a desired action — is the same. The asymmetry between exploiter and exploited is much larger.

This is the structural argument for treating AI manipulation as a different order of phenomenon than its analog predecessors. Not because the techniques are new, but because the techniques that already worked now work much better.

What can be done

The honest answer is less than one would like.

  • User-level vigilance helps but does not scale. A user trained in their own biases is still affected; the system is faster than the user’s metacognition.
  • Friction in interfaces (slowing the path between feed and action) helps modestly. The “are you sure you want to share this?” prompts that platforms such as Twitter have tested show measured benefits and modest political feasibility; a minimal version is sketched after this list.
  • Algorithmic transparency — letting users see why they are being shown what they are shown — has the right shape but is fought hard by platforms.
  • Regulation of personalization in politically sensitive domains (election ads, public health, vulnerable groups) is the most ambitious response. The European AI Act takes initial steps. The U.S. has not legislated.
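
A minimal sketch of the friction prompt mentioned above, with hypothetical callables standing in for a platform’s share and confirmation machinery:

```python
import time

def share_with_friction(item: str, share_fn, confirm_fn, delay_s: float = 2.0) -> None:
    # Insert a pause and an explicit confirmation between impulse and action,
    # giving the user's slower, deliberative judgment a chance to catch up.
    time.sleep(delay_s)
    if confirm_fn(f"You haven't opened this link. Share anyway? ({item})"):
        share_fn(item)

share_with_friction(
    "https://example.com/article",
    share_fn=lambda it: print("shared:", it),
    confirm_fn=lambda msg: input(msg + " [y/N] ").strip().lower() == "y",
)
```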

The encyclopedia’s framing here is consistent with the rest of Section D: the asymmetry between exploiter and exploited is now too large to be addressed by individual vigilance alone. The serious responses are structural — at the level of platform incentives, regulatory tools, and educational policy. Individuals can protect themselves, but populations cannot.

Footnotes

  1. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.