This article gathers, in one place, the policy responses the encyclopedia has surfaced across its sections. The catalogue is necessarily condensed; each item could be a paper of its own. The framing follows Gesnot’s §8.5 and §8.6.
The recommendations cluster into four families. None alone is sufficient. A serious response is all four.
I. Education and media literacy
The least controversial and most chronically under-funded.
- Critical-AI literacy in schools. Teach what LLMs are, how they produce outputs, what their characteristic failure modes are. Should begin at primary school and continue through higher education.
- Source verification skills. The mechanics of evaluating a piece of content for credibility — finding originals, checking dates, reverse-image searching, cross-referencing claims. Teachable, currently poorly distributed.
- Metacognitive training. Teach the structure of common biases, the mechanisms by which AI exploits them, and the practical habits that preserve independent reasoning.
- Multilingual literacy. Multilingual users are structurally less vulnerable to cognitive standardization. Public support for multilingual education is part of cognitive-diversity policy.
- Public-service journalism. A literate public requires the reliable information that journalism, libraries, and public broadcasters produce. Funding these institutions is part of media literacy at the societal level.
What this can do: build a citizenry that can engage critically with AI-mediated information environments. What it cannot do: solve structural problems at the platform or model level.
II. Technical responses
Where the engineering can address the problem directly.
- Provenance standards (C2PA and successors) for verifiable media origin. Adoption is partial; pushing it through is realistic.
- Watermarking of AI-generated content. Useful for steady-state volume; fragile against determined attack. Should be required by regulation, with technical standards updated regularly.
- Mechanistic interpretability of frontier AI. The research program is real; substantial public funding would accelerate it. The labs do some of this; independent academic and government efforts could do more.
- Diverse training corpora and locally-developed models. Funding for non-English, non-WEIRD AI development. Capability gaps with frontier models matter less than cultural fit; the work is meaningful.
- Transparent uncertainty. Tools that surface what they don’t know rather than producing confident defaults. A design principle, not a regulatory matter, but one that responsible labs can adopt voluntarily.
What this can do: raise the floor of AI deployment quality and provide the technical substrate for other responses. What it cannot do: substitute for political choices about how AI is used.
III. Regulatory responses
Where the policy literature has the most to say and the politics is hardest.
- Risk-based AI regulation. The European AI Act framework — graduated obligations by risk category — is the leading model. Adoption in other jurisdictions is uneven; the U.S. has not legislated comparably.
- Mandatory disclosure of AI involvement in political advertising, in public-sector decisions, in synthesized media. Several jurisdictions are partway there.
- Algorithmic auditing as a profession and a regulatory requirement. Modeled on financial auditing; built around independent third-party assessment of high-impact systems.
- Restrictions on manipulative practices. Specific bans — on social scoring, on certain manipulative designs, on biometric surveillance in defined contexts. The European AI Act includes some of these; the list could be extended.
- Data protection. Strong frameworks (GDPR-style) that limit what data can be collected and what can be done with it. The American framework lags; the European framework is real but unevenly enforced.
- Platform responsibility frameworks. Updates to liability regimes (Section 230 in the U.S., the Digital Services Act in the EU) that reflect platforms’ actual role in shaping content rather than treating them as neutral conduits.
What this can do: create an environment in which augmentation-side AI use is favored over decline-side use. What it cannot do: replace the political will needed to enact and enforce any of it.
IV. Institutional reform
The least-developed family of responses, and possibly the most important.
- Public AI infrastructure. Public-interest models — funded by governments or coalitions, designed for cognitive-diversity and educational goals, made available freely. Mistral’s French effort, the BLOOM multilingual model, and several public-broadcasting AI projects are beginnings.
- Public-interest research. Sustained funding for academic AI safety, AI ethics, and AI-effects research independent of industry. The current ratio of industry-funded to publicly-funded AI research is badly skewed.
- Strengthen democratic institutions. Courts, electoral commissions, independent press, statistical agencies. The institutions whose business is producing shared reality are the ones the AI environment most directly stresses; they need investment.
- International coordination. AI is a transnational technology; its governance cannot be effective at the level of any single state. UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence is a starting point.[1] OECD frameworks, G7 statements, and bilateral agreements between the EU, the U.S., and other major jurisdictions can build on it. The work is slow, partial, and necessary.
How to read this catalogue
Three notes.
No single recommendation is sufficient. The realistic stance is that all four families need to advance together. A regulatory regime without educated users is theatre; education without regulation is inadequate against structural pressure; technical responses without institutional support do not scale.
Many recommendations are contested. The encyclopedia presents the list; the political work of building coalitions to enact specific items is outside its scope. The recommendations are not consensus recommendations. They are a synthesis of what the careful literature has said is worth pursuing.
The work is open. The encyclopedia is a snapshot of 2025. Some of these recommendations will be implemented in the next decade; some will not; some will be replaced by more specific or more ambitious versions. The encyclopedia’s value is in articulating the shape of the response seriously enough that the specifics can be debated on substance.
A closing observation
The recommendations in this catalogue are not exotic. Most are extensions of well-established institutions and practices. The strangeness of the AI moment is not that it requires invented responses; it is that it requires the combination of responses we already know how to make, applied at the speed and scale the technology operates.
This is the encyclopedia’s policy claim, condensed: we have the intellectual resources to govern AI in ways that preserve cognitive diversity, intellectual autonomy, and democratic public life. We are not yet using them at the rate the technology demands. Closing the gap is the substantive political project of the next decade.
The encyclopedia ends with the synthesis essay Reading the Whole Argument (F.40), which gathers the threads of the whole work and asks what follows.
Footnotes
1. UNESCO, Recommendation on the Ethics of Artificial Intelligence (2021).