The word polarization attracts a lot of confused argument. The encyclopedia’s working definition: an increase in the distance between the most extreme positions held in a population, and a thinning of the middle ground between them. Polarization is not the same as disagreement; populations have always disagreed. Polarization is what happens when the modal opinions of the camps move further apart and the willingness to engage across camps drops.

The empirical question for this article: how much of the polarization observed in democracies over the last fifteen years is produced or amplified by AI-driven personalization?

What the literature says

Three findings, established robustly enough to build on.

Recommendation algorithms reward strong reactions. Engagement is maximized by content that produces anger, fear, or affirmation — emotions strong enough to drive a click, a share, a return visit. Mild content produces less engagement; the algorithm learns this and surfaces less of it. Strong-reaction content is, on average, more polarizing than mild content. This is documented across the major platforms.
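The dynamic can be sketched as a toy simulation (all numbers and field names here are invented for illustration, not any platform’s real parameters): a ranker that merely learns from observed engagement ends up surfacing the highest-arousal content.

```python
import random

random.seed(0)

# Toy feed: each item has an "arousal" score in [0, 1] measuring how
# strong a reaction (anger, fear, affirmation) it tends to provoke.
items = [{"arousal": random.random(), "shows": 0, "clicks": 0}
         for _ in range(100)]

def engages(item):
    # Assumed user behavior: engagement probability rises with arousal.
    return random.random() < 0.05 + 0.6 * item["arousal"]

# "Training": show items at random and record observed engagement.
for _ in range(20000):
    item = random.choice(items)
    item["shows"] += 1
    item["clicks"] += engages(item)

# The recommender then ranks by observed engagement rate, which
# systematically surfaces high-arousal content at the top of the feed.
feed = sorted(items, key=lambda it: it["clicks"] / max(it["shows"], 1),
              reverse=True)

top = sum(it["arousal"] for it in feed[:10]) / 10
overall = sum(it["arousal"] for it in items) / len(items)
print(round(top, 2), round(overall, 2))
```

Nothing in the loop refers to arousal directly; the skew emerges solely from optimizing on clicks, which is the point of the finding.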

Microtargeting amplifies the effect. A user identified as right-leaning gets right-leaning strong-reaction content; a user identified as left-leaning gets the opposite. Each user’s stream pushes them harder toward their identified pole. Over time, the modal user in each camp shifts toward more polarized positions.
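A minimal sketch of that feedback loop, with made-up step sizes: the recommender reads the user’s current lean, serves matching content, and each exposure pushes the lean a little further toward its pole.

```python
import random

random.seed(1)

def exposure_step(position, nudge=0.05, noise=0.01):
    # The recommender infers the user's lean from the sign of `position`
    # and serves strong-reaction content from that side; each exposure
    # nudges the position toward that pole, saturating at +/-1.
    # `nudge` and `noise` are illustrative, not empirical estimates.
    pole = 1.0 if position >= 0 else -1.0
    position += nudge * pole * (1 - abs(position)) + random.gauss(0, noise)
    return max(-1.0, min(1.0, position))

right, left = 0.1, -0.1          # two mildly-leaning users
for _ in range(100):
    right = exposure_step(right)
    left = exposure_step(left)

print(round(right, 2), round(left, 2))  # each drifts toward its own pole
```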

The effect is asymmetric across users. Most users are mildly affected. A minority — the most engaged, most politically active, most online — are strongly affected. Polarization in the literature is a tail phenomenon: the extremes get more extreme; the middle drifts a little.
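The tail structure can be illustrated with an assumed heavy-tailed exposure distribution (the Pareto shape and scale below are invented, chosen only to show the qualitative pattern):

```python
import random

random.seed(2)

# Illustrative assumption: per-user exposure to targeted political content
# is heavily skewed (a Pareto-like tail of very online users), and attitude
# shift is proportional to exposure.
shifts = sorted(0.001 * random.paretovariate(1.5) for _ in range(10000))

median_shift = shifts[len(shifts) // 2]
p99_shift = shifts[int(len(shifts) * 0.99)]
print(round(median_shift, 4), round(p99_shift, 4))
# The median user barely moves; the top percentile moves an order of
# magnitude more: the extremes get more extreme, the middle drifts a little.
```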

These findings are not contested. What is contested is how much of observed real-world polarization traces back to algorithmic causes rather than to other factors (economic stress, partisan media, demographic sorting).

Bail’s correction

Chris Bail’s 2021 book Breaking the Social Media Prism makes a useful corrective point.1 His finding: exposure to opposing views on social media — the kind a personalization algorithm could deliver if it chose to — does not depolarize users. It often polarizes them further. Users encountering disagreement on a hostile platform respond by digging in, not by reconsidering.

This complicates the obvious policy response (“show users more diverse content”). Diversity of content is necessary but not sufficient. The context in which the diversity is encountered matters. A polite face-to-face conversation with someone of different politics depolarizes; a hostile thread reply does the opposite. The algorithm cannot easily substitute the first for the second.

Why “the algorithm did it” is too simple

Two cautions about treating polarization as primarily an AI problem.

Selection effects are large. Users self-curate. They follow what matches their priors; they unfollow what doesn’t. Most of the filter is the user’s own; the algorithm assists. A pre-AI user with the same psychological profile would also have polarized — slower, with different content, but in the same direction.

Demographic and economic forces matter. Geographic sorting (people moving to areas where they fit politically), media-environment fragmentation (cable, podcasts, partisan press), and economic stress all contribute to polarization. The literature attributes a meaningful fraction to these factors, not all of it to algorithms.

Both cautions are right and both can be overstated. The honest summary: algorithms are a contributing cause of polarization, more than nothing, less than everything, with effects concentrated in the political tails. They are also a contributing cause that is easier to address through policy than the demographic and economic ones, which is why the policy literature focuses on them disproportionately.

What can be done

Three policy directions, none alone sufficient.

Algorithmic transparency. Mandate that platforms reveal what content their recommenders prioritize and why. The EU’s Digital Services Act moves in this direction. Implementation has been slow.

Algorithmic choice. Let users select among recommender algorithms — chronological, friend-prioritized, user-controlled — rather than being locked into engagement-maximization. The technical feasibility is high; the business incentive is low.
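The technical shape of algorithmic choice is simple: a pluggable-ranker interface. The sketch below is hypothetical (field names, ranker names, and the `build_feed` function are all invented), but it shows why feasibility is high — the feed-construction step just dispatches on a stored user preference.

```python
# A sketch of "algorithmic choice" as a pluggable-ranker interface.
# All names and fields are hypothetical, not any platform's actual API.

def chronological(posts, user):
    return sorted(posts, key=lambda p: p["ts"], reverse=True)

def friends_first(posts, user):
    return sorted(posts, key=lambda p: (p["author"] in user["friends"], p["ts"]),
                  reverse=True)

def engagement_max(posts, user):
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

RANKERS = {
    "chronological": chronological,
    "friends": friends_first,
    "engagement": engagement_max,
}

def build_feed(posts, user):
    # The user's stored preference selects the ranker; engagement
    # maximization becomes one option among several, not a locked-in default.
    return RANKERS[user["ranker"]](posts, user)

posts = [
    {"ts": 1, "author": "a", "predicted_engagement": 0.9},
    {"ts": 2, "author": "b", "predicted_engagement": 0.1},
]
user = {"friends": {"b"}, "ranker": "chronological"}
print([p["ts"] for p in build_feed(posts, user)])  # → [2, 1]
```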

Friction in viral propagation. The most viral content is the most polarizing. Slowing virality — through forwarding limits, share-prompts, share-counters — has measurable depolarizing effects. WhatsApp’s forwarding limits are an existence proof.
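Why a forwarding cap works can be seen in a toy branching-process model (the contact counts, forwarding probability, and limits below are illustrative, not WhatsApp’s actual numbers): the cap lowers the effective reproduction number of a message, and cascades that would explode instead fizzle.

```python
import random

random.seed(3)

def cascade_size(forward_limit, p_forward=0.3, contacts=10, cap=2000):
    # Toy branching process for a forwarded message: each recipient has
    # `contacts` chats but may forward to at most `forward_limit` of them,
    # forwarding to each with probability `p_forward`.
    frontier, total = 1, 1
    while frontier and total < cap:
        reached = 0
        for _ in range(frontier):
            fan_out = min(contacts, forward_limit)
            reached += sum(random.random() < p_forward for _ in range(fan_out))
        frontier = reached
        total += reached
    return total

def mean_cascade(limit, trials=30):
    return sum(cascade_size(limit) for _ in range(trials)) / trials

unlimited = mean_cascade(20)  # reproduction number ~3: cascades explode
capped = mean_cascade(2)      # reproduction number ~0.6: cascades fizzle
print(round(unlimited), round(capped))
```

The cap does not censor any individual message; it only changes the growth rate, which is why it is a friction rather than a content rule.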

The encyclopedia’s framing, after Section D’s larger argument: polarization is one of the cleaner cases where AI’s structural features — engagement optimization, microtargeting, real-time adaptation — produce harmful collective outcomes from individually rational design choices. The structural features are addressable. Whether they get addressed depends on politics that is itself shaped by the very polarization in question.

This is the loop the section asks readers to notice.

Footnotes

  1. Chris Bail, Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing (Princeton University Press, 2021).