Who Determines the Future of AI?

An article in the journal Science warns that the US government is talking about “deregulating” AI, but rather than reducing government interference and encouraging competition, it is merely shifting state power into less transparent forms.


The starting point was Trump’s announcement of a “one-rule” executive order intended to prevent individual states from enacting their own AI laws. According to Trump, companies would be crippled if they had to navigate 50 different legal frameworks: “AI will be destroyed in its infancy!” The message: less bureaucracy, more innovation.

An Invisible Hand

The analysis in Science paints a contrasting picture: governance does not disappear just because formal rules are abolished. Power is exercised through other channels—often earlier, more quietly, and more effectively. These include equity investments, subsidies, trade restrictions, visa regulations, research funding, and the redirection of administrative priorities. The result: hyperregulation by other means.

Regulation is traditionally associated with authorities, public hearings, and written rules. This is visible, comprehensible, and open to discussion. But governance can be exercised much earlier and more discreetly through forecasts, political signals, institutions, financial flows, and strategic decisions. The Trump administration does just this, relying heavily on executive discretionary decisions rather than public procedures.

New Instruments of Control

The use of state equity holdings is particularly striking. In 2025, the US Department of Commerce acquired just under ten percent of the chip manufacturer Intel. Billions more flowed into companies in the chip, raw materials, and energy sectors. Such investments give the state direct influence over decisions on plant locations, technology, and supply chains, in some cases even with veto rights. This is regulation, but without procedure, consultation, or judicial oversight.

What is remarkable for a conservative US government is the aggressive encroachment on the jurisdiction of individual states. The one-rule strategy prevents regional experiments and deprives local publics of a say. A patchwork of different rules might indeed be a hindrance, but Trump’s solution bypasses Congress as far as possible. Instead, it is executive in character, concentrating power and reducing democratic accountability.

Two other factors shape the future of AI without being considered regulation: visas determine who is allowed to work in laboratories and start-ups, and research funding determines which questions can be asked in the first place. Research into social or environmental consequences is being cut back, while technical development is prioritized.

There is also a more fundamental and international dimension: the US is trying to establish the idea worldwide that government oversight is an obstacle to innovation. However, this does not promote deregulation, but rather a framework in which competitiveness takes precedence over accountability and the common good.

Three Paths to AI Regulation

The analysis in Science also points to a constant: countries always pursue industrial policy. What matters is whether they do so transparently and deliberatively, or discreetly and through executive action. Early decisions are seldom reversible: choices about standards, research, and infrastructure create path dependencies that make later corrections difficult.

Europe is pursuing a different approach: instead of exercising executive power, the EU relies more on law, procedure, and supervision. Its instruments are the AI Act, the Digital Services Act, and the Digital Markets Act. While the US sets standards through markets and instruments of power, Europe regulates through legislation and institutions.1 China, on the other hand, operates with a state-capitalist model that works through speed of implementation and centralization—a third form of creating power in AI.2

A geopolitical triangle is emerging. The US primarily regulates AI through capital, understood as entrepreneurial intelligence and initiative that requires freedom. Europe regulates AI through law—an equalizing force that focuses on procedures and equal opportunity. China integrates AI deeply into industry, logistics, and services, as a sphere of value creation that aims for efficiency, scaling, and material implementation. These three spheres stand in opposition to each other: capital, law, and value creation—each with its own idea of AI and what it is used for.

Who Writes the Rules of the Game?

Each of the three systems captures an important aspect, but only one: the US understands the role of capital, China that of value creation, and Europe that of law. Capital alone creates speed but not legitimacy. Value creation produces efficiency but not freedom. Law fosters order but not scale. That is why the European model has a special feature: it can constitutionally mediate between capital and value creation without making either principle absolute.

The question remains: will the future AI order be shaped by capital, law, or value creation? But perhaps the essential question is not which of these systems will dominate, but how they are constitutionally related to each other. And does Europe really have the capacity to mediate between these contradictory logics, or have the rules of the game long since been written elsewhere?


Translation Laura Liska
Photo Taylor Vick, Unsplash

Footnotes

  1. The EU adopts an administrative- and competition-law approach that has been established in the internal market since the 1990s. Technological innovation is driven not by government capital allocation but by regulatory structures. The AI Act focuses on fundamental rights, risk classes, and oversight, and is primarily aimed at market players and platforms.
  2. China primarily integrates AI into industrial policy, logistics, infrastructure, healthcare, and energy systems. Regulation is achieved through implementation speed, government prioritization, and central coordination; legitimacy arises from performance and efficiency rather than from law or public opinion.