American AI was financed on a particular bet. The bet was that frontier models would be the next great monopoly business — winner-take-all, capex-justified-by-monopoly, the kind of structurally protected market that supports trillion-dollar valuations and the capital flows necessary to build them. Two and a half years into the cycle, the assumption is breaking. Not slowly. Not at the edges. Visibly, in the public benchmarks, the open-source repos, the Hugging Face download counts, and the inference price sheets.
The break is straightforward to describe. Open-weight models — most of them released by Chinese labs, served through a stack of mostly Western open-source infrastructure — are commoditizing the capability that the moat was supposed to protect. Capability that a U.S. closed lab could charge enterprise rates for in 2024 is now available, downloadable, deployable on rented hardware, at single-digit cents on the dollar in 2026. The gap between the open frontier and the closed frontier is six to twelve months. It is closing, not widening.
The collision between those two facts — that American capital paid for a moat, and that the technology no longer provides one — is the most important force in the AI industry today. Everything else, including the policy direction the U.S. government will take in the next eighteen months, is downstream of how that collision resolves.
The Capital Thesis
To understand what is at stake, follow the money. U.S. frontier labs and their hyperscaler partners have committed somewhere on the order of a trillion dollars to AI capex over the next four years — data centers, GPU clusters, power infrastructure, fiber, the entire physical stack that frontier inference requires. Those commitments are not made on the assumption of SaaS-grade margins. SaaS-grade margins do not service that kind of capital base. The commitments were made on the assumption that frontier capability would behave, at scale, like a regulated monopoly: high fixed costs, high marginal margins, durable rents, very few competitors.
The valuations of the labs themselves reflect the same assumption. OpenAI, Anthropic, and the model arms of Google and Meta trade — privately, or via parent — at multiples that only resolve if frontier capability eventually commands monopoly-grade pricing. Strip out the monopoly assumption and the math does not work. The data centers are still there. The compute bills are still there. The investors who funded the build do not have a ready exit on a commodity-margin business.
That is the structural pressure. Frontier AI was financed as a moat. The financial commitments are durable and large. The technology that was supposed to provide the moat is failing to provide it. Capital, faced with that gap, does not quietly accept lower returns. Capital reaches for the moat through other means. That reach is what the next phase of U.S. AI policy will be about.
The Commons
The open-weight ecosystem did not arrive in stages. It arrived in a wave. In late 2024, a Chinese lab named DeepSeek released a model whose training cost was reported at roughly $5.6 million in compute, against an estimated $500 million to $1 billion for the U.S. closed-frontier equivalent it was benchmarked against. The performance gap on most general benchmarks ran six to twelve months. The gap on inference cost ran ten to thirty times in the open model's favor. The model shipped under a permissive license, downloadable, modifiable, deployable on a single eight-GPU node by anyone with the storage and the patience to read the README.
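The cost gap is easier to feel as arithmetic. A minimal sketch using the figures above; the training costs are reported estimates, the inference ratio is a range rather than a measurement, and the per-token prices and volumes below are hypothetical illustrations:

```python
# Reported/estimated figures from the text above; all are approximations.
closed_training_cost = 500e6   # low end of the $500M-$1B closed-frontier estimate
open_training_cost = 5.6e6     # DeepSeek's reported compute cost

training_ratio = closed_training_cost / open_training_cost
print(f"Training cost ratio: ~{training_ratio:.0f}x")  # ~89x, at the low end

# Inference: a 10x-30x cost advantage compounds quickly at volume.
closed_cost_per_mtok = 10.00                     # hypothetical $/million tokens, closed API
open_cost_per_mtok = closed_cost_per_mtok / 10   # conservative end of the 10-30x range

monthly_tokens_m = 500                           # hypothetical enterprise volume (millions)
monthly_savings = monthly_tokens_m * (closed_cost_per_mtok - open_cost_per_mtok)
print(f"Monthly savings at that volume: ${monthly_savings:,.0f}")
```

Even at the conservative end of both ranges, the per-token delta is what makes the six-to-twelve-month capability gap commercially irrelevant for most workloads.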
That release was the leading edge, not the totality. By mid-2025, the open-weight frontier from the Chinese ecosystem — DeepSeek, Qwen, Kimi, GLM, MiniMax — had compounded into a competitive baseline. Llama, Mistral, and a dozen smaller community projects filled in the rest. The closed labs in the U.S. continued to win the very top of the capability curve. Below that top, the curve was being closed in from underneath at a pace that made the gap a six-to-twelve-month problem rather than a generational one.
What sits underneath the model release is the open ecosystem that delivers it. vLLM serves the weights at production-grade throughput. llama.cpp runs them on a developer's laptop. Ollama wraps the experience for the non-technical user. LangChain and LlamaIndex provide the orchestration layer that, two years ago, only existed inside OpenAI's product organization. None of these tools are owned by the closed labs. Most of them are American or Anglosphere open-source projects. The infrastructure is geographically and economically agnostic. The weights are not.
The Defection Problem
Last week's essay laid out an argument: that frontier AI is sold at a structural loss because users are providing the training data, and that when the apprenticeship ends, prices reprice upward sharply. There was an unstated premise in that argument. The premise was that when the prices rise, the user has nowhere to go.
That premise no longer holds. A consumer rationing a $250-per-month subscription at the moment of repricing has the option, today, of running an open-weight equivalent at fifteen dollars in cloud compute or zero dollars on a sufficiently equipped local machine. The defection cost is a weekend of integration work and a haircut on capability that, for most workloads, the user does not notice. For an enterprise the haircut is even smaller and the savings are larger.
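The "weekend of integration work" is often less than that, because the common open-serving stacks (vLLM, Ollama) expose OpenAI-compatible HTTP endpoints. A hedged sketch of what defection actually touches; the endpoint URLs and model names below are illustrative placeholders, not recommendations:

```python
# Illustrative only: URLs and model names are placeholders. The point is
# the shape of the change, not these specific values.
PROVIDERS = {
    # Closed frontier API: the default everyone integrated in 2023-2024.
    "closed": {"base_url": "https://api.example-lab.com/v1", "model": "frontier-large"},
    # Local vLLM or Ollama serving open weights on the same wire protocol.
    "local":  {"base_url": "http://localhost:8000/v1",       "model": "open-weight-70b"},
}

def client_config(provider: str) -> dict:
    """Return connection settings for a provider.

    Because open-weight servers speak the same OpenAI-style wire protocol,
    defection is a configuration change, not a rewrite.
    """
    return PROVIDERS[provider]

# Application code is identical either way; only the config differs.
closed = client_config("closed")
local = client_config("local")
assert closed.keys() == local.keys()
```

The lock-in the repricing strategy assumed would normally live in this integration layer. The open ecosystem deliberately standardized it away.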
That is a strategic problem for the closed labs, but it is a structural problem for U.S. capital. The original deal — subsidize, train, reprice — assumed lock-in at the moment of repricing. Lock-in does not exist if the next-best option is free. And if lock-in does not exist, the post-apprenticeship pricing the entire capital structure depends on does not exist either.
The valuations require a moat. The technology no longer provides one. Capital will reach for one anyway.
What Capitalism Does When Scarcity Disappears
There is a recurring move in industries where technology fails to provide the natural moat the financial structure assumed. The move is to manufacture scarcity through means other than the technology itself. American capitalism, despite its mythology, is unusually good at this. It has done it in pharmaceuticals, where patents and FDA exclusivity create monopolies the molecule alone could not. It has done it in finance, where regulatory complexity creates barriers to entry the underlying business of lending does not. It has done it in telecom, where spectrum allocation and right-of-way agreements substitute for technological superiority that competitive carriers would otherwise force.
The pattern is reliable enough to be predictable. When a technology produces something that wants to be a commodity, capital does not gracefully accept commodity returns. It reaches for three tools, in roughly this order. First, regulatory enclosure — using the policy apparatus to manufacture exclusion the market does not provide. Second, vertical integration — moving up or down the stack to capture margins the immediate product can no longer command. Third, bundled distribution — leveraging adjacent monopolies (cloud, ad networks, app stores, payment rails) to gate access to the commodity layer beneath.
All three of these tools are now being rehearsed in the U.S. AI sector. They are being rehearsed because the technology is producing a commodity, and the capital structure cannot survive a commodity. They will be deployed because the financial commitments are too large to walk away from. They will be deployed regardless of what is best for the user, because that is not what capital is selecting for at this stage of the cycle.
Three Predictions for the U.S. Direction
What that looks like in practice is a set of moves over the next eighteen to thirty-six months, mostly without legislation, mostly through the slow accumulation of advisories, procurement guidelines, and corporate practice. Three are likely enough to bet on.
1. Regulatory enclosure dressed as security.
The first move is the cheapest one. Chinese-origin open-weight models will be reframed as supply-chain risks — language already worn smooth by years of Huawei, ZTE, and DJI debate. The model card itself will be described as a vector for embedded behavior, the inference deployment as a potential exfiltration channel, the training data as suspect. None of those concerns are entirely without foundation. None of them are the actual reason for the policy. The actual reason is that the open-weight models are commoditizing capability the closed labs have already booked into their valuations.
The advisories will harden into procurement restrictions for federal agencies, then for federal contractors, then for critical infrastructure. Major U.S. cloud providers, watching the regulatory weather, will quietly delist Chinese-origin model endpoints from their managed services. The framing will not, at first, target individual developers running Qwen or DeepSeek weights on their own machines. But the institutional path of least resistance — for any cloud, any enterprise, any compliance officer — will be to treat Chinese-origin weights as the path that loses you contracts. That is enclosure achieved without a single new statute.
2. The labs become the operators.
The second move is the one the labs are already making, quietly and without much commentary. If selling the model produces commodity returns, the lab moves up the stack and sells the work the model does. The frontier capability runs internally; the customer-facing product is the output of that capability — legal research, software, drug discovery, financial analysis, whatever vertical the lab can structure into a service. The lab captures the operator's margin instead of the tool vendor's, and there is no tool to sell at any price.
From the capital structure's perspective, this is the cleanest path. From the user's perspective, it is the worst one. The lab is no longer trying to make the model accessible; it is trying to make the model inaccessible to the user's competitors, which includes the user. Vertical integration substitutes a margin the lab can defend (the operator's) for one it cannot (the tool vendor's). It is a rational move under capital pressure. It is also a structural retreat from the open ecosystem the original mission rhetoric described.
3. The market splits.
The third move is what happens to the rest of the world. U.S. domestic users — consumers, indie developers, mid-market companies — get the closed-frontier pricing the capital structure requires, with limited legal access to the open alternatives that would otherwise compete with it. The rest of the world routes around U.S. rails. European, Indian, Singaporean, and Latin American developers build on whichever combination of open and hosted endpoints sits in the cleanest jurisdiction. The U.S. closed-frontier business retains its margin in its protected market and loses share in every other market on Earth, on a multi-decade arc that mirrors the auto industry exactly.
The arithmetic is not subtle. The U.S. is roughly four percent of the world's population and perhaps fifteen percent of its consumer-facing technology market. Building a capital structure that requires the U.S. domestic market to absorb monopoly-grade rents, while accepting that the other eighty-five percent will route around the wall, is a strategy that produces excellent five-year balance sheets and disastrous twenty-year competitive positions. It is, nonetheless, the strategy. It is the one the capital flow already implies.
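The five-year-versus-twenty-year tension can be made concrete with deliberately crude numbers. Nothing below is a forecast; every margin and growth rate is a made-up illustration of how a high-margin 15% slice loses to a compounding 85%:

```python
# Every number here is hypothetical; the sketch shows the shape of the
# tradeoff described above, not an estimate of it.
protected_profit = 0.15 * 0.60   # 15% of the market at monopoly-grade margin
open_profit = 0.85 * 0.05        # 85% of the market at commodity margin

crossover_year = None
for year in range(1, 21):
    protected_profit *= 1.03     # protected incumbent: high margin, slow growth
    open_profit *= 1.12          # open ecosystem: thin margin, fast compounding
    if crossover_year is None and open_profit > protected_profit:
        crossover_year = year

print(f"Open profit pool overtakes the walled one in year {crossover_year}")
# With these made-up rates, the protected strategy wins every year through
# year 8 (excellent five-year balance sheets) and loses every year after.
```

The specific crossover year depends entirely on the invented inputs; the direction does not, so long as the open side compounds faster.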
The Auto Mirror
There is a clean historical analogue. In 1980, U.S. domestic automakers controlled roughly 80% of the U.S. light-vehicle market. By 2024 that share was below 40%, and the global share was lower still. The arc of decline does not correlate with the absence of policy support. It correlates almost perfectly with the presence of it. Voluntary export restraints in the 1980s, repeated bailouts, and most recently a 100% tariff designed to keep BYD out of North America — none of those interventions reversed the trend. They lengthened it. The wall produced exactly what walls produce: protected margins, protected complacency, and a foreign competitor that compounded its advantage in every other market while the U.S. consumer paid more for less at home.
The same mechanism applies to AI. A walled domestic market lets the closed labs sustain the pricing the capital structure assumes. The protected balance sheets produce a generation of product that does not need to compete on cost. The open ecosystem outside the U.S. continues to compound. The gap between the protected industry and the global standard widens — in the wrong direction. By the time the wall is reconsidered, the protected industry no longer has a competitive product to bring outside of it.
The wall protects the producer. It does not protect the product. Twenty years on, the producer cannot compete without the wall, because the wall is what stopped them from learning to.
Who Pays
As with every protectionist regime, the cost lands on parties without lobbyists. Four cohorts come out behind.
- U.S. consumers and small developers — pay closed-frontier pricing for capability the rest of the world buys at commodity rates, with limited legal recourse to the open alternatives.
- U.S. independent developers and startups — either eat the closed-API premium, take architectural risk on a politically vulnerable open-weight stack, or relocate workloads to offshore endpoints. None of those options is free.
- U.S. closed-frontier labs themselves, on a long enough horizon — engineering and pricing discipline come only from competition. The protected producer eventually loses the ability to compete in the markets outside the wall.
- U.S. influence over the global AI ecosystem — every developer who routes around the wall does so on infrastructure outside U.S. control, and brings the relationships with them.
The beneficiaries are narrow and known. U.S. closed-frontier labs gain a margin window measured in years rather than decades. U.S. cloud providers extract some rent from compliance complexity. The capital that funded the build gets to mark its commitments at something other than zero. The political class earns a security narrative that polls well in election cycles. None of the beneficiaries are the median user. None of them are the median developer. None of them are the long-term competitive position of the country itself.
What To Do About It
The defensive move and the offensive move are the same move. There is a window in which the open commons remains accessible, and that window is open today. Three positions make sense while it remains open.
- Build on the commons. Run open weights now, on infrastructure you control, for the workloads that pay for themselves today. The closed-frontier APIs remain useful for the very top of the capability curve, but the architecture should treat them as substitutable, not foundational.
- Architect for jurisdictional flexibility. The same compliance pressure that will eventually push Chinese open weights out of U.S. clouds will push U.S. workloads into European, Indian, and Singaporean endpoints. That is not a contingency; it is an architectural concern. Plan for it now, while the migration is voluntary.
- Treat the policy clock as part of the stack. The window between freely deployable open-weight models and open-weight models restricted to compliant entities under a guidance document is shorter than the design cycle of most production systems. Anything mission-critical built on the assumption of permanent open access to current-generation Chinese weights is standing on a trapdoor.
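The three positions above reduce to one architectural habit: model endpoints are swappable configuration with an explicit fallback order, not hardcoded dependencies. A minimal sketch; every endpoint name, URL, and jurisdiction label here is a hypothetical placeholder:

```python
# Hypothetical endpoint registry, in preference order. Names, URLs, and
# jurisdiction labels are placeholders, not recommendations. The design
# point is that every entry is substitutable and the selection logic
# lives in one place.
ENDPOINTS = [
    {"name": "self-hosted-open",   "jurisdiction": "self", "url": "http://10.0.0.5:8000/v1"},
    {"name": "closed-frontier",    "jurisdiction": "us",   "url": "https://api.example.com/v1"},
    {"name": "offshore-open-host", "jurisdiction": "eu",   "url": "https://eu.example.net/v1"},
]

def usable_endpoints(blocked_jurisdictions: set) -> list:
    """Filter the registry by current policy constraints, preserving
    preference order. When a jurisdiction becomes non-compliant, the
    application falls through to the next entry instead of breaking."""
    return [e for e in ENDPOINTS if e["jurisdiction"] not in blocked_jurisdictions]

# Today: nothing blocked, self-hosted open weights are the first choice.
assert usable_endpoints(set())[0]["name"] == "self-hosted-open"
# A future guidance document blocks a jurisdiction: the app degrades, not dies.
assert [e["name"] for e in usable_endpoints({"us"})] == [
    "self-hosted-open", "offshore-open-host"
]
```

The registry pattern is the cheapest available hedge: it costs a few lines now, and it is the difference between a compliance event being a config change and being a rewrite.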
The Closing Frame
American capitalism is unusually good at allocation and unusually poor at abundance. When a technology produces commodity capability, the U.S. capital structure does not gracefully reorganize around the new economics. It reaches for the policy levers that can manufacture the scarcity the technology has stopped providing. This is not a moral failing. It is a structural consequence of how the system finances itself. The same dynamic that made it possible to fund a trillion dollars of AI infrastructure on the back of a monopoly thesis now requires the monopoly to be defended by means other than the underlying technology.
The collision between that financial logic and the open-weight commons is the central force in the U.S. AI industry over the next decade. The capital structure will fight to manufacture scarcity. The commons will continue to compound. The user — domestic and global — sits in between. The choice the country makes about how heavily to wall the domestic market against the commons will determine whether U.S. AI looks like the U.S. internet sector in 2005 — open, exporting, dominant — or like the U.S. auto industry in 2025 — protected, exporting nothing, durably uncompetitive.
That is the actual question. Not whether open weights threaten frontier labs, because they obviously do. Not whether the labs and their capital partners will reach for protection, because they obviously will. The question is whether the country that hosts that fight chooses to subsidize the moat or the commons. So far, the choice is going one way.
The moat or the commons. American capital prefers the first. American consumers, developers, and long-term competitiveness need the second. The next decade resolves which preference the policy follows.