The previous articles in Section D treated specific techniques — biases, personalization, disinformation, deepfakes, simulated influence, dynamic dark patterns. This one steps back. The question: when several or all of those techniques become routine, what happens to the shared reality that public discourse, democratic politics, and ordinary social life all depend on?
The phrase “shared reality” is doing a lot of work here, so it deserves a careful definition.
What “shared reality” means
A shared reality exists when most members of a society can, on most ordinary questions of fact, reach the same answer when they investigate. Shared reality is not agreement on values, opinions, or predictions; it is agreement on what happened. The president gave a speech yesterday. The factory closed last month. The economic numbers are these.
Shared reality has never been complete. Different groups have always disagreed about contested facts; different cultures have always read the same events differently. But for most of modern history, the substrate of evidence — written records, photographs, audio recordings, video — has been credible enough that disagreements were typically about interpretation, not about whether the evidence was genuine.
This is what AI-era manipulation threatens.
The threat is structural
The threat is not that any single fabrication will deceive a critical mass. Most fabrications, taken alone, can be detected, refuted, dismissed. The threat is to the substrate itself: the working assumption that most evidence is genuine, that fabrication is the exception, and that we can usually tell the two apart.
Three shifts, working together:
- Cheap fabrication. Anyone can produce convincing audiovisual content about anything (D.22).
- Targeted distribution. Anyone can deliver that content to specific populations at scale (D.20, D.21).
- Plausible denial. Anyone implicated by genuine evidence can plausibly claim it was fabricated (the liar’s dividend).
The three combine into a different kind of information environment. Not one in which more falsehoods circulate — falsehoods have always circulated — but one in which the public’s confidence in the substrate is actively eroded. Real evidence becomes harder to credit. Fake evidence becomes harder to dismiss. The cognitive cost of every information transaction rises.
“Reality apathy”
The longer-term concern, articulated in different forms across the literature: reality apathy. Faced with sustained difficulty in distinguishing genuine from fabricated, citizens may simply stop trying. The psychological cost of constant verification exceeds the perceived benefit of being right; the rational adaptation is to disengage.
Reality apathy looks like cynicism but is structurally different. The cynic believes everything is propaganda, which is still a belief about the world; the reality-apathetic gives up on the question altogether. The first is a stable equilibrium; the second is a political vacuum, because whoever asserts a reality most confidently meets no resistance, and authoritarian governance is well-positioned to fill it.
The encyclopedia’s worry, drawn from §4.5.3 and §8 of Gesnot, is that sustained AI-mediated manipulation produces reality apathy in populations that previously had higher epistemic standards. Once produced, reality apathy is hard to reverse. The institutions that would normally rebuild shared reality — journalism, courts, science — are themselves attacked by the same techniques.
The Frankfurt point
Harry Frankfurt’s essay On Bullshit, published as a short book in 2005, drew a useful distinction that matters here.¹ A liar knows the truth and says the opposite. A bullshitter does not care about the truth at all. Liars are constrained by the truth: they have to know what it is in order to deny it. Bullshitters are constrained by nothing.
Generative AI is, in Frankfurt’s sense, a bullshit machine. It does not know what is true; it produces plausible text whose relationship to truth is incidental. When this kind of output saturates the information environment, it does not make the environment more wrong; it makes the distinction between true and false harder to maintain. Truth and falsehood have not gone away. The capacity to tell them apart has been crowded out.
This is a stronger claim than “AI produces falsehoods” and a weaker one than “AI ends shared reality.” The right level of worry lies between them.
What can be done
The responses to the individual forms of manipulation — detection, provenance, verification literacy — apply here; a minimal sketch of the provenance idea follows below. But the deeper response is institutional: building, defending, and funding the epistemic infrastructure that produces shared reality in the first place. Journalism, libraries, courts, archives, scientific publishing, public broadcasters. None is a tech company; all are now on the front line of the AI-era information environment.
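As an illustration of what provenance means in practice, here is a minimal sketch in Python of signing and verifying a piece of content with an Ed25519 key, using the `cryptography` library. It shows the principle behind content-credential schemes such as C2PA, not their actual format: the `sign_content` and `verify_content` helpers and the key handling are illustrative assumptions, and a real deployment would involve certificate chains, embedded manifests, and hardened key storage.

```python
# Minimal sketch of cryptographic content provenance: a publisher signs the
# exact bytes of a piece of media, and anyone holding the publisher's public
# key can later check that those bytes are unaltered. Illustrative only; real
# schemes (e.g. C2PA) embed signed manifests and certificate chains instead.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Return a detached signature over the content bytes."""
    return private_key.sign(content)


def verify_content(
    public_key: Ed25519PublicKey, content: bytes, signature: bytes
) -> bool:
    """Check that the signature matches the content under this public key."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    publisher_key = Ed25519PrivateKey.generate()  # held by the publisher
    public_key = publisher_key.public_key()       # distributed to verifiers

    original = b"The factory closed last month."
    signature = sign_content(publisher_key, original)

    print(verify_content(public_key, original, signature))             # True
    print(verify_content(public_key, b"It never closed.", signature))  # False
```

The design point is that the signature binds to the exact bytes: any edit, however small, fails verification. That shifts the public question from “does this look fake?” to “who vouches for it?”, which is a question institutions can answer even when eyes and ears cannot.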
The policy implications are significant. Reading the Whole Argument (F.40) takes them up at the level of synthesis. The encyclopedia’s stance here is modest: the question of shared reality is now a public-policy question, not just a media-ethics question. The tools that would protect it are mostly funded poorly. The tools that would erode it are mostly funded well. That asymmetry is itself part of the problem.
Footnotes
1. Frankfurt, Harry G. On Bullshit. Princeton University Press, 2005.