On May 14, 2026, Anthropic published a policy essay titled “2028: Two scenarios for global AI leadership.” The essay is not about the capability of a specific Claude model. It is about a larger question: by 2028, which political and industrial system might hold global leadership in AI?
It is important to be clear from the start: this is a policy essay with an explicit point of view. Anthropic’s core argument is that the United States and its allies should preserve and expand their lead in frontier AI, especially by defending their compute advantage, closing export-control loopholes, restricting model distillation attacks, and promoting the global deployment of the American AI stack. The following is a structured summary of the article’s main arguments, not an unconditional endorsement of every claim.
The Core Argument
Anthropic frames the AI competition of the next few years mainly as a competition between the United States and China. It argues that advanced AI is not just a commercial product, but a general-purpose technology that could reshape national security, military capability, cyber offense and defense, research speed, and social governance.
The article’s most important claims are:
- Frontier AI competition is, to a large extent, a competition for compute.
- The United States and its allies currently have advantages in advanced chips, semiconductor equipment, cloud infrastructure, and capital.
- If the US does not close loopholes in export controls and model access, Chinese AI labs could approach or even catch up with US frontier models by 2028.
Anthropic therefore presents 2028 as a fork in the road: one scenario where democracies maintain a commanding lead, and another where US and Chinese AI capabilities are close enough to create a more dangerous neck-and-neck race.
Why Anthropic Emphasizes Compute
The original essay repeatedly emphasizes compute: the advanced chips and computing resources needed to train and deploy frontier models.
Anthropic’s logic is that data, talent, and algorithms all matter, but without enough compute, frontier models cannot keep iterating. As AI is increasingly used to accelerate AI R&D itself, compute advantage compounds: more compute enables more experiments, more experiments lead to better algorithms, and better models help build the next generation of models.
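The compounding claim can be made concrete with a toy growth model. Every number below is an illustrative assumption, not a figure from the essay: the sketch only shows the shape of the argument, that a fixed initial compute advantage translates into a capability gap that widens over time.

```python
# Toy model of the compounding claim (all numbers are illustrative
# assumptions, not figures from the essay). Extra compute speeds up R&D,
# which raises capability, while compute itself also keeps growing.
def project(compute, growth=1.5, rd_boost=0.1, years=4):
    """Return the modeled capability level at the end of each year."""
    capability, path = 1.0, []
    for _ in range(years):
        capability *= 1.0 + rd_boost * compute  # more compute -> faster R&D
        compute *= growth                       # infrastructure keeps scaling
        path.append(capability)
    return path

lead_path = project(compute=2.0)  # hypothetical lab starting with 2x compute
lag_path = project(compute=1.0)
gaps = [a / b for a, b in zip(lead_path, lag_path)]
# The modeled capability gap widens every year rather than staying constant.
```

Under these toy parameters the capability ratio between the two labs grows each year, which is the feedback loop Anthropic describes: more compute enables more experiments, which yield better models, which accelerate the next generation.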
That is why the article places export controls so high on the policy agenda. Anthropic argues that US restrictions on advanced AI chips and semiconductor manufacturing equipment flowing to China have already constrained China’s frontier AI development. It also cites external analyses suggesting that the advanced-compute gap may continue widening.
In short, Anthropic is not only asking “who has smarter researchers.” It is asking who can keep accessing the compute infrastructure needed to train and serve the strongest models.
The Loopholes Anthropic Worries About
The essay argues that current export controls have been effective but insufficient. It highlights two main loopholes.
The first is compute access. This includes smuggling advanced chips, remotely using restricted chips through overseas data centers, and incomplete controls around semiconductor manufacturing equipment. The essay notes that US export controls mainly regulate chip sales, but do not fully cover remote access to restricted chips in foreign data centers.
The second is model access, described as distillation attacks. In this context, “distillation attacks” do not refer to ordinary academic distillation, but to using large numbers of accounts to bypass access controls, systematically harvest outputs from US frontier models, and train or enhance competing models from those outputs. Anthropic describes this as systematic extraction of US model capabilities.
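For readers unfamiliar with the term, ordinary knowledge distillation, which the essay distinguishes from "distillation attacks," trains a smaller model to imitate a larger model's outputs. A minimal sketch with synthetic data and tiny logistic models (nothing here is taken from the essay itself):

```python
import numpy as np

# Minimal sketch of ordinary knowledge distillation: a "student" model is
# fit to a "teacher" model's output probabilities, never to ground-truth
# labels. All data is synthetic and both models are tiny logistic models.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # inputs sent to the teacher
teacher_w = rng.normal(size=4)                     # the teacher's hidden weights
teacher_p = 1.0 / (1.0 + np.exp(-X @ teacher_w))   # harvested teacher outputs

# Fit the student to the teacher's soft outputs via gradient descent
# on cross-entropy loss.
student_w = np.zeros(4)
for _ in range(500):
    student_p = 1.0 / (1.0 + np.exp(-X @ student_w))
    grad = X.T @ (student_p - teacher_p) / len(X)
    student_w -= 0.5 * grad

# After training, the student closely imitates the teacher on these inputs.
```

A "distillation attack" in the essay's sense uses the same underlying mechanics, but the teacher outputs are harvested at scale from a commercial frontier model, typically through many accounts and in violation of the provider's access controls.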
In Anthropic’s view, these two loopholes weaken export controls: even if Chinese companies cannot legally buy enough advanced chips, they may still maintain near-frontier capability through overseas compute and model distillation.
Two 2028 Scenarios
Anthropic uses two hypothetical scenarios to show how today’s policy choices could shape the future.
Scenario One: The US and Allies Extend Their Lead
In the first scenario, the US and its allies preserve their compute advantage. Export-control loopholes are closed, chip smuggling and foreign data-center access are restricted more effectively, and defenses and penalties against model distillation become stronger.
In this world, US frontier models are 12 to 24 months ahead. This lead is not just about benchmark scores; it affects critical sectors such as cybersecurity, finance, healthcare, and life sciences. Anthropic argues that such a lead would give democracies time to set AI rules, safety norms, and global deployment standards.
It also argues that if the American AI stack becomes core global economic infrastructure, it will further attract allies, markets, and talent, creating a self-reinforcing cycle.
Scenario Two: China’s AI Ecosystem Is Near the Frontier
In the second scenario, the US does not continue tightening loopholes, or it loosens restrictions on Chinese companies’ access to advanced compute. Chinese AI labs stay near the frontier through overseas compute, chip access, distillation attacks, and rapid domestic deployment.
In this world, Chinese models may be slightly weaker than US models, but faster domestic adoption, lower cost, more flexible on-premise deployment, and infrastructure exports into certain markets give them real influence.
Anthropic worries that this neck-and-neck state could intensify risks in military use, cyber operations, and domestic governance. It could also pressure both American and Chinese AI companies to release models faster, weakening safety evaluations and governance efforts.
Four Fronts of Competition
Anthropic does not treat AI competition as only a model capability race. It lists four fronts:
- Intelligence: who develops the most capable models.
- Domestic adoption: who integrates AI faster across commercial and public sectors.
- Global distribution: whose AI stack becomes the infrastructure of the global economy.
- Resilience: who maintains political and social stability through the economic transition.
Anthropic treats intelligence as the most important front, because frontier model capability drives the others. But the essay also notes that intelligence alone is not enough: if one side deploys slightly weaker models faster into the economy, military, government, and overseas markets, it may offset part of the capability gap.
The implication is worth underlining: future AI competition is not simply about who has the largest models or the highest benchmark scores. It is a combined contest across models, chips, cloud, applications, regulation, and international markets.
Anthropic’s Policy Recommendations
The article closes with three policy directions.
First, close compute loopholes. This includes combating chip smuggling, restricting access to export-controlled chips through overseas data centers, and strengthening controls and enforcement budgets around semiconductor manufacturing equipment.
Second, defend model innovation. This includes restricting model access, deterring distillation attacks, and enabling threat-intelligence sharing between US AI labs and the government.
Third, promote the export of American AI. In other words, make hardware, models, cloud services, and applications developed by the US and its allies the trusted global AI infrastructure, reducing the chance that China’s AI ecosystem expands through low cost and local deployment advantages.
All three recommendations serve the same goal: help the US and its allies establish a more durable frontier AI lead before 2028.
How to Read This Essay
The essay matters not because it reveals new model-architecture details, but because Anthropic states its view of AI geopolitics so directly.
It represents an increasingly common policy narrative among Silicon Valley AI companies: frontier AI is not just product competition, but national capability competition. Model capability, chip supply chains, cloud infrastructure, export controls, and safety governance must be considered together.
But readers should keep distinctions clear:
- The argument that the US should maintain a lead is Anthropic’s policy position.
- Claims about China’s AI capability, export-control effectiveness, and the scale of distillation attacks mix facts, external citations, and Anthropic’s interpretation.
- The two 2028 scenarios are thought experiments, not predictions.
In other words, the essay is best read as a document explaining how Anthropic understands AI competition, not as a neutral global AI industry report.
Summary
Anthropic’s “2028: Two scenarios for global AI leadership” presents 2028 as a key decision point. If the US and its allies defend compute, restrict distillation attacks, and promote their AI stack globally, Anthropic believes they may secure a 12-to-24-month lead in frontier capability. If they do not act, China’s AI ecosystem could move close to the frontier and gain influence through domestic adoption and low-cost global deployment.
The signal is clear: Anthropic is placing frontier AI, safety governance, chip export controls, and geopolitics into one framework. Future AI competition may be less like a contest among model companies and more like a competition among compute, supply chains, national policy, and global infrastructure.