The standardization section ends here, with the question that organizes most of what follows in Section F: what can be done? The honest answer is that the literature has converged on three lines of response, that each line is real, and that none alone is sufficient. The realistic policy is all three at once.
Education
The most-discussed response: build awareness of cognitive standardization into education from an early age. The targets:
- Recognize the model’s preferences. Teach students that LLMs have a default register, default examples, default cultural assumptions. Show them how to read for those defaults the way one reads for an author’s voice.
- Practice independent first drafts. Build into pedagogy the habit of writing-before-asking. The exercise is artificial; the skill it preserves is not.
- Support multilingual literacy. A student fluent in more than one language is, structurally, less standardizable. The cognitive-diversity argument is also a multilingualism argument.
- Strengthen source verification. Teach the actual mechanics of source-checking. The model produces fluent prose that mimics well-sourced writing without being it; the difference is detectable but only if students are taught to detect it.
What education cannot do: solve the structural problem at the platform level. Educated users in a homogenized environment are still in a homogenized environment.
Technology
The technical responses cluster around three goals.
Diverse training corpora. Models trained on broader, less Anglocentric data. This requires sustained funding for non-English-language data collection, particularly in low-resource languages. It also requires political support for protecting locally-produced text from the centripetal pressure of the WEIRD-trained frontier (see C.15).
Locally-developed models. Models built by local teams for local users. The capability gap with frontier models matters less than the cultural fit; a slightly less capable model that gets local nuance right may serve a community better than a more capable one that does not.
Surfacing model preferences. Tools that show the user what kind of output the model defaults to before they accept it. The model can flag “this is the most common register” and offer alternatives. Most product surfaces do not currently do this; they could.
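A minimal sketch of what such a surface could look like, assuming a hypothetical `generate(prompt, register)` completion call. Everything here is illustrative: the register list, the wrapper, and the function names are assumptions, not any product's actual API.

```python
from dataclasses import dataclass

# Illustrative register list; a real tool would derive these from the
# model's own output distribution rather than hard-coding them.
REGISTERS = ["standard", "regional", "informal", "technical"]

@dataclass
class Suggestion:
    register: str
    text: str
    is_default: bool  # True for the model's most common register

def generate(prompt: str, register: str) -> str:
    """Placeholder for a real completion call; a product would pass the
    register as a system instruction or a decoding constraint."""
    return f"[{register} rendering of: {prompt}]"

def suggest_with_alternatives(prompt: str) -> list[Suggestion]:
    """Return the model's default output first, flagged as the most
    common register, followed by labelled alternatives."""
    return [
        Suggestion(r, generate(prompt, r), is_default=(r == REGISTERS[0]))
        for r in REGISTERS
    ]

if __name__ == "__main__":
    for s in suggest_with_alternatives("Explain photosynthesis"):
        tag = " (model default)" if s.is_default else ""
        print(f"{s.register}{tag}: {s.text}")
```

The design point is the labelling: the default appears as one option among several, not as the answer.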
What technology cannot do: change the distribution of who uses what tool. The technical responses succeed only if there is demand for them, and demand is shaped by economic and cultural forces outside any single tool’s control.
Policy
The regulatory responses cluster around transparency and accountability.
- Disclosure of training-data composition. Users have no right to know what their AI was trained on; legal frameworks are starting to treat this as a missing right.
- Algorithmic auditing. Independent audits of model bias on population-relevant tasks, modeled on financial auditing. Several proposals; few implementations.
- Mandatory interoperability. Regulations that prevent any single model from becoming the only practical choice for a use case, by requiring portability of context, prompts, and outputs (a minimal export sketch follows this list).
- Public funding for non-WEIRD AI. Governmental support for models that the market would not produce on its own. The European AI Act contains seeds of this; it is more developed in policy talk than in budgets.
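On the interoperability item: a minimal sketch of what "portability of context, prompts, and outputs" could mean in practice, as a vendor-neutral export record any conforming tool could emit and re-import. The schema, field names, and version string are assumptions for illustration, not a published standard.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Exchange:
    prompt: str    # what the user asked
    output: str    # what the model produced
    model_id: str  # which model produced it, e.g. "vendor-a/model-x"

@dataclass
class PortableSession:
    schema_version: str  # illustrative version string
    context: str         # system or context material supplied by the user
    exchanges: list[Exchange] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize to plain JSON that another conforming tool can ingest."""
        return json.dumps(asdict(self), ensure_ascii=False, indent=2)

if __name__ == "__main__":
    session = PortableSession(
        schema_version="0.1",
        context="You are a study assistant for secondary-school biology.",
        exchanges=[
            Exchange(
                prompt="Explain photosynthesis",
                output="Plants convert light into chemical energy...",
                model_id="vendor-a/model-x",
            )
        ],
    )
    print(session.to_json())
```

The regulatory point is not this particular schema but that some such schema be mandatory, so that switching tools does not mean abandoning accumulated context.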
The UNESCO 2021 Recommendation on the Ethics of Artificial Intelligence articulates the principles — diversity, inclusion, fairness, transparency — that any of these regulatory moves can claim as foundation.[1] The work of translating principles into binding requirements is just beginning.
What policy cannot do: substitute for educated users and well-designed tools. Regulation alone produces compliance without comprehension.
The shape of a serious response
The encyclopedia’s claim, drawn from Gesnot’s §3.5 and §8.5, is that none of these three responses works alone. Education without policy produces aware users in a captured environment. Policy without technology produces compliance theatre. Technology without education produces tools no one knows how to use well.
The shape of a serious response is to do all three, in coordination, over time horizons longer than any single budget cycle or political term. That coordination is also harder to maintain than any single response, which is part of why each is regularly proposed alone, by writers who hope theirs will be sufficient.
It will not be. The cumulative effect of cognitive standardization will only be slowed by cumulative responses. The encyclopedia’s argument is for the patience to build all three at once.
What follows
The standardization section closes here. Section D — Manipulation — picks up the same algorithmic substrate from a different angle: not what AI does to thinking when no one is trying, but what AI does to thinking when someone is trying. The substrate is the same; the question is who is using it for what.
Footnotes
1. UNESCO 2021 Recommendation on the Ethics of Artificial Intelligence.