The previous two articles treated states and corporations separately. They are rarely separate in practice. Most of the politically consequential AI use of the period this encyclopedia covers happens at the interface between the two, and the interface itself is undertheorized in public discourse.

This article sketches three classes of state-corporate AI synergy and notes the policy questions each raises.

I. Data flow

The cleanest case. Corporations collect vast quantities of data on individual behavior; states have legal mechanisms (subpoenas, national-security letters, mandatory disclosure regimes, voluntary partnerships) to access that data when it suits them.

In the United States, the basic shape: corporations build the surveillance infrastructure for commercial reasons; the state accesses it under defined legal conditions, often expanded by post-2001 statutes and rulings. Snowden’s 2013 disclosures documented the depth of the relationship for telecommunications and online services. Subsequent disclosures (vendor relationships at Customs and Border Protection, ICE contracts with location-data brokers, FBI face-recognition arrangements) have continued to surface.

In China, the relationship is more direct. Major platforms are subject to data-access requirements that effectively merge corporate surveillance with state surveillance. The social-credit system (F.35) runs on this integration.

The European model attempts a middle path: substantial commercial data collection, with regulatory limits on state access (GDPR, the ePrivacy Directive). The model is real; its implementation is uneven; the political durability of the limits is contested.

II. Influence operations

A second class of synergy: state actors use commercial AI infrastructure to conduct influence operations. The mechanism is generally:

  1. Acquire access to a commercial platform’s user-targeting tools.
  2. Use those tools to deliver tailored content to populations of interest.
  3. Use commercial-grade AI to generate that content at scale (D.21).

The Cambridge Analytica revelations of 2018, concerning the firm’s work on the 2016 US campaign, are the canonical case, though later analyses have moderated some early claims about the operation’s effectiveness. The mechanism is real; the per-piece effect on individual voters is modest; the cumulative effect on the political ecosystem is significant. The state-corporate boundary in such cases runs between who designs the campaign and who provides the targeting infrastructure. Both sides contribute; both are typically present.

III. Regulatory capture

A third class of synergy: corporations work to shape the regulatory environment that governs them. AI policy is no exception. Major frontier labs maintain a substantial presence in Washington and Brussels; they participate in regulatory drafting; they offer technical expertise that regulators lack and have come to depend on.

The result is regulation that is technically informed but also industry-aligned. The European AI Act, despite its public-interest framing, was substantially shaped by industry input; the same is true of American regulatory proposals. None of this is unique to AI (every regulated industry shapes its regulators), but the speed of AI development and the depth of the technical asymmetry between regulators and labs make the dynamic especially consequential.

The encyclopedia takes no position on whether any specific instance of regulatory engagement constitutes “capture” in the pejorative sense. It notes that the structural conditions for regulatory capture are present, that the policy outputs reflect those conditions, and that the public-interest framing of AI policy depends on a vigilance that is unevenly distributed.

Why “synergy” is the right word

A reasonable objection to this article: many of the cases described are not “synergies” in the ordinary sense; they are conflicts, accommodations, or mutual exploitations. Why use a positively coded word for arrangements that often have negative effects on third parties?

The encyclopedia’s answer: the dynamic is systemic, not adversarial. States and corporations are not, in the AI domain, fighting; they are mutually building. Each gets something from the arrangement; each accepts costs the other imposes. The system the two together produce — more capable than either alone — is the political reality of the period.

Calling this dynamic “synergy” rather than “complicity” is a stylistic choice. Both terms apply. The first is more analytically neutral.

What this connects to

The synergies discussed here are most consequential when they produce outcomes neither party would have chosen alone. Three examples:

  • Surveillance breadth that exceeds either purely state or purely commercial appetites: the state would not build all of this on its own; the corporation, for commercial reasons, collects more than the state would ever ask for; the combined apparatus exceeds what either alone would choose.
  • Behavioral targeting precision in political campaigns that exceeds what state-only or party-only operations could mount.
  • Information-environment shaping — the deepfake and disinformation ecosystems of D.21–D.23 — that depends on both commercial AI infrastructure and state-aligned actors using it.

These are not theoretical. The empirical record of 2018–2025 documents each.

What can be done

Three policy directions, none easy.

Structural separation. Limit, by law, the data-sharing channels between commercial and state surveillance. The European model attempts this; the American model largely does not.

Auditable boundaries. Where state-corporate cooperation is necessary (public-health emergencies, criminal investigations), require auditable records of the cooperation. The records protect against scope creep.
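
What an auditable record might look like can be made concrete with a small sketch. The fragment below is purely illustrative: the field names and the hash-chain design are assumptions of this article, not features of any existing statute or deployed system. Each cooperation record commits cryptographically to the one before it, so an auditor can later detect deleted or altered entries; that tamper-evidence is the property that guards against quiet scope creep.

    # Hypothetical sketch: an append-only, hash-chained cooperation log.
    # Every name and field here is illustrative, not drawn from any real
    # statute or deployed system.
    import hashlib
    import json
    import time

    def append_entry(log, agency, company, legal_basis, data_scope):
        """Append a cooperation record that commits to its predecessor."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        entry = {
            "timestamp": time.time(),
            "agency": agency,            # requesting state body
            "company": company,          # cooperating platform
            "legal_basis": legal_basis,  # statute or order invoked
            "data_scope": data_scope,    # what was actually shared
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        log.append(entry)
        return entry

    def verify(log):
        """Recompute the chain; any deletion or edit breaks it."""
        prev = "0" * 64
        for e in log:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

A regulator holding only the latest hash of such a chain could audit the full log after the fact. Whether any jurisdiction would mandate something of this shape is an open policy question, not a prediction.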

Political accountability. The synergies are sustained by political choices; they are vulnerable to political reversal. Public attention, investigative journalism, and electoral pressure can change the incentive structure facing both states and corporations.

The encyclopedia’s stance, consistent with Section F: the synergies between state and corporate AI use are themselves a substantial part of what makes the contemporary AI environment hard to govern. Treating the two as separate problems — as much policy commentary still does — misses the actual structure of the politics.

The next article (F.38) takes up the prospective scenarios that follow from these dynamics; F.39 catalogs the response options.