Why AI Markets Are Forming a New World Order
The defining fault line in the AI market doesn’t run between better and worse models. It runs between systems trusted with autonomy and systems that aren’t. This divide is now empirically visible, and at Davos, it was the real, unspoken issue.
Anthropic’s Claude Code reached an annualized revenue run rate of roughly one billion dollars within months. The number matters less than the source. This isn’t a consumer product. It’s agentic use. Claude Code writes code autonomously, modifies repositories, runs tests, and iterates on errors. The usage is expensive, energy-intensive, and deliberately not optimized for reach. Autonomy before distribution.
OpenAI sits at a different strategic point. Despite roughly 900 million users and an estimated 20 billion dollars in revenue, the company projects cumulative negative free cash flow of over 110 billion dollars through 2028. In parallel, OpenAI is searching for new monetization paths. The introduction of advertising into ChatGPT is the most visible signal.
This is not a minor detail. Advertising means a system no longer acts exclusively on behalf of its user. It carries third-party interests.
Demis Hassabis (Source) criticized this move openly at Davos. Google has no plans to put ads in Gemini, he said, because an agent acting on behalf of a user must not carry outside interests. Dario Amodei (Source) put it more sharply: research must take priority over product pressure. Trust cannot be retrofitted.
The market is already making its selection. What matters is not reach, but delegability.
Agents Shift Where Decisions Happen
This split follows no ideology. It follows a technical consequence. Chatbots answer questions. Agents act. They access file systems, write code, alter processes, and trigger real economic outcomes. Every additional layer of autonomy shifts ownership of consequences.
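To make that distinction concrete, here is a deliberately toy sketch. It reflects no vendor’s API; the function names and the error-repair rule are invented purely for illustration. The chatbot returns text and leaves the action to the user. The agent executes, reads its own failure, patches, and retries: the consequences live inside the loop.

```python
# Toy illustration of the chatbot/agent distinction. All names and the
# "fix a NameError" repair rule are hypothetical, invented for this sketch.

def chatbot(question: str) -> str:
    # A chatbot returns text; acting on it remains the user's job.
    return f"Suggestion for {question!r}: define the missing variable."

def agent(buggy_code: str, max_steps: int = 5) -> str:
    # An agent acts on its own: it runs code, reads the error, edits,
    # and iterates. No human approves the intermediate steps.
    code = buggy_code
    for _ in range(max_steps):
        try:
            exec(compile(code, "<patch>", "exec"), {})
            return code  # execution succeeded; the "tests" pass
        except NameError as err:
            missing = str(err).split("'")[1]   # e.g. "name 'x' is not defined"
            code = f"{missing} = 0\n" + code   # apply a crude patch, retry
    return code  # gave up after max_steps

if __name__ == "__main__":
    print(chatbot("NameError: name 'x' is not defined"))
    print(agent("print(x + 1)"))  # the agent repairs and runs this itself
```

Even in this miniature form, the structural point holds: the chatbot’s output is advice, while the agent’s output is an already-executed change of state.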
In traditional software markets, trust was pre-established. Contracts, liability, and institutional safeguards defined the boundaries within which systems operated. In agentic systems, that model breaks. Decisions happen in milliseconds, without human approval, with irreversible effects.
No law decides in real time. No court intervenes during execution. When systems act, trust must reside inside the system itself.
Davos 2026: A Room Without a Language for Trust
This shift was palpable at Davos. The World Economic Forum felt less like a future-oriented summit and more like a situation room. CEOs weren’t talking about visions. They were talking about operational consequences. Which decisions can still be delegated? Where does ownership end? What can no longer be undone?
The gap was striking. Regulation dominated the language. Trust barely came up. Yet it was implicitly clear that regulation hits its limits where systems act faster than institutions can respond.
Mark Carney (Source) spoke of the end of a comfortable fiction. He meant trade agreements. But the fiction runs deeper: the assumption that institutions remain the primary site of order, while decisions are already being made elsewhere.
The Deeper Break: From Extraction to Learning
The context of this shift extends beyond AI. Between 2010 and 2014, three fundamental inputs of human development each crossed a threshold: energy, intelligence, and biology. Solar and battery costs began falling exponentially. AlexNet showed in 2012 that neural networks scale. CRISPR made genomes editable that same year. All three shifted from extraction to learning curves. Resources are no longer primarily found and distributed – they are built and improved.
This changes the logic of power. It increasingly emerges not from possession but from the ability to orchestrate learning systems. The state-based trust model – slow, pre-established, institutional – no longer fits systems that improve through use.
Alignment Becomes a Market Condition
In this environment, alignment is not an ethical add-on. It’s an economic prerequisite. Agents create value only when granted access, execution authority, and process control. The more autonomy, the greater the productivity gain – and the higher the stakes.
The market responds not with norms but with selection. Systems that aren’t trusted don’t get deployed. Systems that prove reliable scale. It’s notable that the most autonomous agent systems currently available come from the lab that has positioned itself most explicitly around safety: Anthropic.
Conversely, systems with weaker alignment face pressure. xAI has repeatedly drawn scrutiny: deepfake incidents, regulatory attention, enterprise-side hesitation. The market logic is clear. Alignment builds trust. Trust enables autonomy. Autonomy creates value.
The market is beginning to function as a coordination mechanism for safety.
Trust Is Produced After the Action
This shifts where trust is generated. In the industrial order, trust was created before the action, through laws, institutions, and enforcement. In agentic markets, trust is created after the action, through repeated, reliable system performance under real conditions.
A system that is entrusted with autonomy and delivers consistently generates trust through practice. That trust scales. It becomes reproducible. And it begins to functionally displace institutional trust, not abruptly, but incrementally.
The market becomes a trust generator.
Three Trust Architectures
Against this backdrop, the dominant models of order can be reread.
In Switzerland, trust is produced through institutional stability. Rule of law, property protection, and predictability create an extremely robust foundation. This architecture is ideal for wealth preservation and long-term positioning, but too slow to turn autonomous systems themselves into anchors of order.
In the United States, trust is produced through market feedback. Usage, iteration, and scale decide. This architecture is ideal for agentic systems, but volatile and institutionally thin.
In China, trust is replaced by control. Full system integration enables rapid implementation but generates compliance, not trust. This architecture is efficient but limited in its exportability.
The new order will emerge where these models overlap – and where none of them holds on its own.
Ownership Under Autonomy
For owners, the core question shifts fundamentally. It’s no longer how systems can be controlled. It’s which system you trust enough to give up control to.
Ownership under agentic conditions no longer means oversight. It means making irreversible decisions about what gets delegated. These decisions are structural. They cannot be tested iteratively without producing real consequences. Granting a system autonomy changes the architecture of your company, regardless of whether that system succeeds or fails.
Responsibility moves forward in time. It no longer lies in intervention but in permission. Not in correcting but in choosing. Ownership shows not in monitoring systems but in determining, upfront, which systems are allowed to act and which are not.
This creates a new form of irreversibility. Traditional business decisions could be adjusted or reversed. Agentic delegation works differently. It changes decision-making structures, stakeholder expectations, and liability frameworks simultaneously. Responsibility doesn’t arise when something goes wrong. It arises in the moment of the architectural decision.
Ownership becomes quieter but harder. Less visible in daily operations, but harder to escape. Owners who don’t make these decisions consciously make them implicitly – by not deciding, by adopting external standards, by deploying outside systems without their own structural logic.
In this world, trust is not a feeling and not cultural capital. It’s an economic commitment made upfront. Owners invest trust long before returns become visible – and bear the consequences when that trust proves misplaced.
Technology can be adopted. Autonomy can be granted. But ownership of consequences remains indivisible.
The Real Question After Davos
The decisive question is not how AI should be regulated or slowed down. It’s where trust emerges when systems act faster than governments can decide.
The new world order is not being written in treaties. It’s being written where AI markets scale trust. And that is precisely why trust, not control, will be the scarcest resource of the coming decade.