The word deepfake is a portmanteau of “deep learning” and “fake.” It covers a family of techniques (face-swap video, voice cloning, fully synthetic imagery) by which audiovisual material can be fabricated at quality sufficient to deceive. By 2024, the technical bar for a convincing fake had dropped to consumer hardware and a few hours of training data. The trajectory since suggests minutes of data and a cell phone by the time this article is published.
Two specific incidents serve as anchor cases.
Two incidents
The 2022 Zelensky video. In the early weeks of the Russia–Ukraine war, a fabricated video appeared on hacked Ukrainian media outlets showing President Zelensky urging Ukrainian forces to lay down arms. The fake was unconvincing (poorly lip-synced, off-pitch, low-resolution) and was identified within hours. It is the canonical example because of what it attempted, and because it failed only on production quality. The capability gap between that 2022 fake and a 2025 production attempt is enormous.
The 2024 New Hampshire Biden robocall. In the days before the Democratic primary, voters received a robocall in a voice indistinguishable from President Biden’s, urging them to skip the primary. The voice was AI-generated; the operation behind it was promptly investigated. As of this encyclopedia’s publication, the call is the most-cited contemporary American example of generative-AI political manipulation reaching mass distribution.
The two incidents share a structural feature: each was, in practice, bound to be detected, whether by activists, by investigators, or by the public. The attacks that succeed will be designed not to be detected. We will not have canonical examples of those, by definition, until we do.
What changes when fakes get cheap
Three shifts, each consequential.
Burden of proof on real content. When fabrication is cheap, the default skeptical move is to doubt audiovisual material. This is sometimes appropriate and frequently corrosive. Genuine footage of crimes, human-rights violations, or political misconduct now faces routine challenges of “but is it AI?” — challenges raised by the parties whose interests are served by the doubt.
The liar’s dividend. Chesney and Citron’s term, introduced in their 2019 paper, captures the second-order effect.[1] When everyone knows fakes exist, public figures caught in genuine recordings can plausibly claim the recordings are fakes, and the claim is now harder to refute. The existence of the technology benefits liars even when no actual fake is produced.
Trust costs across the board. Every individual interaction with audiovisual content now carries a small new cognitive cost — the implicit question of authenticity. Sustained at population scale, this is a real tax on the information environment. Trust in all media — including genuine reporting — declines as a side effect of the technology’s existence.
These three are not symmetric. The first is mostly local; the third is diffuse. The second is the most politically dangerous.
Detection and provenance
Two parallel responses to deepfakes have emerged.
Detection. Train classifiers to distinguish AI-generated content from genuine. The detectors work for current-generation fakes, fail for next-generation fakes, and live in a permanent arms race with generators. Detection is not a long-run solution but is useful for the steady-state volume of cheap, undisguised generation.
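To make the approach concrete, here is a minimal sketch of one detector of this kind: a binary classifier over radially averaged power spectra, a feature family in which upsampling artifacts from some generators have historically surfaced. Everything below is a stand-in; the “fake” frames are synthetic noise with an injected periodic artifact, and a real detector would be trained on labeled corpora of genuine and generated media.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def spectral_features(image: np.ndarray) -> np.ndarray:
    """Radially averaged log power spectrum of a grayscale image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    radius = np.hypot(x - w // 2, y - h // 2).astype(int)
    # Mean power at each integer radius from the spectrum's center.
    totals = np.bincount(radius.ravel(), weights=spectrum.ravel())
    counts = np.bincount(radius.ravel())
    return np.log1p(totals / counts)[: min(h, w) // 2]

rng = np.random.default_rng(0)

def fake_frame() -> np.ndarray:
    # Stand-in "generated" frame: noise plus a faint periodic artifact,
    # loosely mimicking the regular patterns some upsampling layers leave.
    return rng.normal(size=(64, 64)) + 0.3 * np.sin(np.arange(64) * np.pi / 2)

X = np.array(
    [spectral_features(rng.normal(size=(64, 64))) for _ in range(200)]  # "real"
    + [spectral_features(fake_frame()) for _ in range(200)]             # "fake"
)
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The arms-race dynamic is visible even in the toy: a generator tweaked to suppress that spectral peak defeats this detector outright, which is why detection degrades against each new generation.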
Provenance. The Content Authenticity Initiative and the C2PA standard represent the more ambitious response. The idea: build cryptographic provenance into media at the device level, so that genuine content can be proven genuine rather than fakes being detected as fakes. This shifts the burden from negative proof to positive proof. The standards are technically real; adoption is partial.
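The core move can be sketched in a few lines. What follows is not the C2PA manifest format, which embeds structured, signed assertions inside the media file itself; it only illustrates the positive-proof logic under stated assumptions: a hypothetical device key signs a claim bound to a hash of the captured bytes, and any later edit to those bytes breaks verification. The sketch uses the Python cryptography package; the key, the field names, and the file contents are all invented for illustration.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stands in for a key provisioned in a camera at manufacture and
# attested by the vendor.
device_key = Ed25519PrivateKey.generate()

# Stands in for the captured media bytes.
media_bytes = b"raw sensor output"

# The claim binds a hash of the bytes to capture metadata. These field
# names are hypothetical, not C2PA assertion names.
claim = json.dumps(
    {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "device": "example-camera",
    },
    sort_keys=True,
).encode()
signature = device_key.sign(claim)

# Verification: check the signature against the device's public key,
# then re-hash the file. Any edit to the bytes breaks the chain.
public_key = device_key.public_key()
try:
    public_key.verify(signature, claim)
    assert hashlib.sha256(media_bytes).hexdigest() == json.loads(claim)["sha256"]
    print("provenance claim verified")
except (InvalidSignature, AssertionError):
    print("provenance claim broken: content cannot be proven genuine")
```

The shift in burden shows up in the failure mode: a detector that misses says nothing, while a missing or broken provenance claim is itself a signal. That only holds once capture devices, editing tools, and platforms all carry the chain, which is the partial-adoption problem the standard currently faces.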
The honest assessment: provenance is a long-run answer; detection is the short-run patch. The transition between them will be ugly.
What can be done politically
The harder responses are political, not technical.
- Criminalize specific abuse cases — non-consensual intimate imagery, electoral fraud, market-moving impersonation. Most jurisdictions are partway there; enforcement is uneven.
- Mandate disclosure. Require generated content in political advertising to be labeled. Several jurisdictions have moved on this; others have not.
- Strengthen verified-identity systems for news organizations and public figures. The cost is centralization; the benefit is a higher floor on verifiable communication.
- Educate. Public awareness of deepfake capabilities is shockingly uneven; many older voters are not aware that consumer-grade voice cloning is real. This gap is closeable with targeted media literacy.
What this connects to
Deepfakes are one cell in the manipulation taxonomy (D.18). They interact strongly with disinformation at scale (D.21) — generative AI text and generative AI media are the two halves of a complete fabrication apparatus. Manipulation of Perceived Reality (D.23) takes up the deeper question this article gestures at: what happens to shared reality when no piece of audiovisual evidence is automatically credible? States: Surveillance and Social Control (F.35) discusses state-actor uses of deepfakes specifically. The encyclopedia’s recurrent point applies here too: the problem is not the technology in isolation; it is the fit between the technology and the institutions that were designed for a world without it.
Footnotes
[1] Chesney & Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review (2019). Source of the term “liar’s dividend.”