The encyclopedia’s first six sections describe what AI does to thinking. This one describes who is using AI to do things to thinking. It begins with states and proceeds, in F.36, to corporations. The two are not always aligned and not always at odds; the most consequential effects often emerge from their overlap.

The limit case: social credit

The most fully realized example of state-deployed AI for behavioral control is China’s social-credit system. The architecture is familiar in pieces — every modern state collects financial, administrative, and movement data — but the integration is unprecedented. Algorithms continuously aggregate transactions, administrative records, social-media activity, and geolocation into a behavioral score.1 Compliance with norms (paying on time, traffic obedience, declared community participation) raises the score; minor offenses, late payments, or politically suspect interactions lower it.
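
The mechanism can be sketched abstractly. The toy below is illustrative only: the real system's signals, weights, and thresholds are not public, and every name and number here is hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch only. The actual system's features, weights, and
# thresholds are not public; everything below is hypothetical.

@dataclass
class CitizenRecord:
    on_time_payments: int    # bills settled by the due date
    late_payments: int
    traffic_violations: int
    flagged_posts: int       # social-media items marked "suspect"
    community_hours: float   # declared community participation

def behavioral_score(r: CitizenRecord, base: float = 1000.0) -> float:
    """Aggregate heterogeneous signals into a single compliance score."""
    score = base
    score += 5.0 * r.on_time_payments    # norm compliance raises the score
    score += 2.0 * r.community_hours
    score -= 25.0 * r.late_payments      # minor offenses lower it
    score -= 40.0 * r.traffic_violations
    score -= 60.0 * r.flagged_posts     # politically suspect interactions
    return score

def sanctions(score: float) -> list[str]:
    """Consequences attach to thresholds as bureaucratic facts."""
    out = []
    if score < 900:
        out.append("excluded from priority school placement")
    if score < 800:
        out.append("high-speed rail and air travel restricted")
    return out
```

The shape, not the numbers, is the point: heterogeneous signals aggregated continuously into one number, with consequences attached to thresholds rather than to any human judgment.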

The score has consequences. Travel restrictions, denial of certain government services, exclusion from particular schools or jobs. The mechanism is not pure surveillance; it is automated reward and sanction. The state does not need to catch you to punish you. The system catches you, scores you, and the consequences follow as bureaucratic facts.

Wright’s analysis of this architecture observes that algorithmic surveillance allows authorities to “monitor, analyze, and control the population more intimately than ever before.”1 The word “intimately” matters. Pre-digital surveillance scaled badly — there was always too much population for too few agents. AI surveillance scales perfectly, and so changes its character: it is no longer episodic but continuous, no longer reactive but anticipatory.

The behavioral effect is the deepest part. Citizens know the system is watching; they adapt accordingly. Many of the regime’s preferred behaviors are produced not by enforcement but by anticipated enforcement. The mechanism is the panopticon’s, scaled and automated.

The democratic version

The temptation, reading the social-credit example, is to treat it as a problem of authoritarian governance — present in China, absent here. The temptation is wrong. Democracies are deploying AI at the population level in ways that overlap significantly with the social-credit architecture, even where the political inflection differs. A short tour:

Adaptive infrastructure. Cities use AI to manage road traffic — adaptive traffic lights, dynamic congestion pricing, reroute suggestions — to enforce traffic law and reduce pollution. Travel habits change in response. The intervention is benign in intent and barely discussed politically; it is also a clear example of population-scale behavioral modification by algorithm.

Predictive enforcement. Tax authorities and benefits administrators use predictive analytics to flag fraud risk, then send personalized “nudges” — automated reminders, targeted messages, flagged interviews. The Netherlands’ 2013–2019 toeslagenaffaire (childcare-benefits scandal), in which an algorithmic risk model flagged thousands of mostly immigrant families for benefit fraud they had not committed, is the cautionary tale. The model was opaque; the appeals process was inadequate; the harm took years to surface. (A toy sketch of this flagging mechanism follows the tour.)

Public-health steering. During COVID-19, governments worldwide deployed contact-tracing apps, vaccine-uptake models, and AI-driven public messaging. Some of this was straightforwardly useful. Some involved population-scale behavioral targeting on the same architecture as commercial advertising, with public-health goals replacing commercial ones.

State-actor disinformation. A more pointed register. Government-aligned actors have used deepfakes for political ends — the 2022 fabricated video of President Zelensky urging Ukrainian troops to surrender is the canonical example.2 The Zelensky deepfake was unconvincing and detected within hours; the next one will not be.
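
The flagging mechanism behind failures of the toeslagenaffaire type can be sketched in the abstract. The toy below is not a reconstruction of the Dutch model, whose internals were never fully public; the feature names, weights, and threshold are invented for illustration.

```python
# Hypothetical sketch of the failure mode in predictive enforcement,
# not any deployed model. Features, weights, and threshold are invented.

def fraud_risk(application: dict) -> float:
    """Toy risk score: opaque weights, no explanation attached to output."""
    weights = {
        "income_volatility": 0.3,
        "prior_corrections": 0.4,
        "dual_nationality":  0.5,  # a proxy feature like this is where
                                   # discriminatory flagging enters
    }
    return sum(w * application.get(f, 0.0) for f, w in weights.items())

def flag_for_review(applications: list[dict],
                    threshold: float = 0.6) -> list[dict]:
    # The flagged family sees only the consequence (benefits clawed back),
    # never the score or the features that produced it.
    return [a for a in applications if fraud_risk(a) > threshold]
```

Everything that made the scandal a scandal is visible in miniature: the weights are opaque, a proxy feature carries the discrimination, and the affected family encounters only the consequence, never the reasoning.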

Two scenarios

Gesnot’s §7.4 sketches two prospective scenarios for the next decade of state AI use. They are not predictions; they are framings worth holding next to each other.

Scenario A — channeled. Strict legislation (the EU AI Act and its successors) classifies high-risk influence systems and constrains their deployment in political contexts. Mandatory transparency for public-sector algorithms; a right to explanation; banned categories of manipulative practice. States use AI to improve services while limiting its intrusion into the private and political spheres. Democratic balance is preserved by proactive governance.

Scenario B — digital authoritarianism. AI integration deepens in the absence of binding constraint. The administrative state acquires the capabilities of the social-credit architecture without the explicit ideology — a “soft” version that is no less effective. Individual freedom becomes conditioned, in practice, on legibility to algorithms. The political question of which behaviors are acceptable is decided in code, downstream of the political process that nominally governs it.

The two scenarios are not exclusive. A democracy can move toward A in some domains and B in others, depending on which pieces of the toolkit get scrutinized and which slip past unnoticed. The European Union’s regulatory effort represents one of the few coordinated attempts at A; whether it succeeds is among the most consequential open questions of the period the encyclopedia covers.

The framing this section takes

States have always shaped behavior. They have always done so with the best tools available. AI is the next tool. What is new is the asymmetry: a state’s AI toolkit is far more capable than its citizens’ toolkit for noticing it, which is the same structural asymmetry the manipulation taxonomy (D.18) names at the individual level, now scaled to the population.

This is one of the places the encyclopedia’s six sections meet. Cognitive standardization (C) and manipulation (D) describe what AI does to thinking; power and governance (F) describe who is doing it. The same techniques recur because the tool is the same. The defenses recur because they have to.

Footnotes

  1. Wright, 2018. The phrase “monitor, analyze, and control … more intimately than ever before.”

  2. The 2022 Zelensky deepfake; subsequent state-actor disinformation cases.