On May 6, 2026, Anthropic announced higher usage limits for Claude Code and the Claude API, along with a new compute partnership with SpaceX. For everyday users, the most direct change is more usable capacity for Claude Code. For developers and enterprises, the larger point is that Claude’s inference capacity is still expanding.
The announcement has two parts:
- Higher limits for Claude Code and the Claude API.
- New compute capacity from SpaceX data centers.
What changed for Claude Code limits
Anthropic says the following three changes took effect on the day of the announcement:
- Claude Code’s five-hour rate limit doubled for Pro, Max, Team, and seat-based Enterprise plans.
- Peak-hour limit reductions for Pro and Max Claude Code accounts were removed.
- Claude Opus API rate limits were significantly increased.
In practical terms, if you often use Claude Code for long coding sessions, repository analysis, refactoring, debugging, or agent workflows, this change may reduce the number of times a task stops before it is finished.
That does not mean unlimited usage. Claude Code is still affected by subscription plan, usage pattern, model, task length, context size, and platform policy. But Anthropic has clearly expanded the usable headroom compared with the previous limits.
Why compute affects the Claude Code experience
Tools like Claude Code consume more resources than ordinary chat. A single coding task can involve:
- Reading many files.
- Long-context analysis.
- Multiple tool calls.
- Generating, editing, and checking code.
- Repeatedly running tests or explaining errors.
- Using Opus for difficult reasoning.
Behind those actions are not only tokens, but also inference capacity, concurrency, and scheduling resources. Users see limits, queues, or slower peak-hour behavior; the platform sees pressure between compute supply and demand.
So it is meaningful that Anthropic put the limit increases and the compute partnership in the same announcement. It is saying that improving Claude Code is not just a change to plan settings; it also depends on adding backend inference capacity.
What the SpaceX partnership adds
Anthropic says it has signed an agreement with SpaceX to use the full compute capacity of SpaceX’s Colossus 1 data center. The announced capacity is over 300 megawatts, corresponding to more than 220,000 NVIDIA GPUs, and will be made available to Anthropic within a month.
Anthropic expects this added compute to directly increase the capacity available to Claude Pro and Claude Max subscribers.
Anthropic also says it is interested in future work with SpaceX on orbital AI compute. That is more of a long-term direction, not the same thing as the Claude Code limit increase users can feel immediately.
Anthropic’s compute footprint is getting larger
SpaceX is only one part of Anthropic’s recent compute expansion. The company also lists other partnerships:
- Up to 5GW with Amazon, including nearly 1GW of new capacity planned to come online by the end of 2026.
- 5GW with Google and Broadcom, expected to come online starting in 2027.
- A strategic partnership with Microsoft and NVIDIA, including $30 billion of Azure capacity.
- A $50 billion U.S. AI infrastructure investment with Fluidstack.
Anthropic also notes that Claude training and inference will use multiple types of AI hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs.
The trend is clear: competition among leading model companies is not only about model names, benchmarks, and product features. It is also about power, data centers, GPUs, TPUs, networking, and global deployment capacity.
Practical impact for Claude Code users
For developers, the most important change is the doubled five-hour Claude Code limit. It affects scenarios such as:
- Reading large repositories.
- Multi-file refactoring.
- Bug investigation and test fixing.
- Code migration and dependency upgrades.
- Long-running agentic coding tasks.
- Multiple people using Claude Code in Team or Enterprise plans.
A common Claude Code problem has been reaching the limit while a task is still in progress. Higher limits make it easier for an agent to complete a full task instead of stopping halfway.
For Pro and Max users, removing peak-hour limit reductions is also important. It means the experience should be more stable during busy periods, without the disruption of limits being temporarily tightened.
What it means for API users
The announcement also says Claude Opus API rate limits have increased significantly. For teams using Opus for difficult tasks, that usually means:
- Higher concurrency.
- Fewer 429 rate-limit errors.
- Easier support for batch workloads.
- Better fit for long-context, complex reasoning, and agent workflows.
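Fewer 429s does not remove the need to handle them. A common pattern is exponential backoff with jitter around the API call. Below is a minimal sketch of that pattern; `RateLimitError` and `flaky_request` are illustrative stand-ins for a real client call, not part of any Anthropic SDK:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 rate-limit response."""


def call_with_backoff(request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `request` with exponential backoff plus jitter on rate limits."""
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # Give up after the final attempt.
            # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
            sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))


# Simulated request that is rate-limited twice, then succeeds.
calls = {"n": 0}

def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return "ok"

result = call_with_backoff(flaky_request, sleep=lambda s: None)
```

Passing `sleep` as a parameter keeps the retry logic testable without real delays; in production you would leave it as `time.sleep` and wrap your actual API client call.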
Actual limits still vary by account, organization, model, and plan. Before production deployment, teams should still check their Anthropic Console, rate limit documentation, and error logs.
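One practical way to check headroom is to read the rate-limit headers returned on API responses. The header names below follow the `anthropic-ratelimit-*` convention in Anthropic's rate-limit documentation; the dictionary values are illustrative, standing in for headers captured from a real response:

```python
# Hypothetical snapshot of response headers; values are illustrative.
headers = {
    "anthropic-ratelimit-requests-limit": "4000",
    "anthropic-ratelimit-requests-remaining": "3995",
    "anthropic-ratelimit-tokens-limit": "400000",
    "anthropic-ratelimit-tokens-remaining": "180000",
}


def tokens_headroom(headers):
    """Fraction of the token budget still available in the current window."""
    limit = int(headers["anthropic-ratelimit-tokens-limit"])
    remaining = int(headers["anthropic-ratelimit-tokens-remaining"])
    return remaining / limit


headroom = tokens_headroom(headers)
```

Logging this fraction alongside 429 counts gives a team an early signal that a workload is approaching its limit, rather than discovering it from failed requests.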
Enterprise and regional deployment matter more
Anthropic also notes that regulated industries such as finance, healthcare, and government increasingly need regional infrastructure to satisfy compliance and data residency requirements. Part of its capacity expansion will therefore be outside the United States, especially for inference capacity in Asia and Europe.
This matters for enterprise customers. Once large model applications enter core business workflows, the questions are not only whether the model is good enough. They also include:
- Whether data stays in the required region.
- Whether industry compliance requirements are met.
- Whether peak-hour capacity is stable.
- Whether team-level and organization-level concurrency are supported.
- Whether audit, permission, and security controls are available.
From that perspective, compute expansion is not just performance news. It can shape enterprise procurement and deployment decisions.
Summary
Anthropic’s message is direct: Claude Code and Claude API usage constraints are being relaxed because new compute capacity is coming online.
For everyday Claude Code users, the most important points are the doubled five-hour limit and the removal of peak-hour reductions for Pro and Max. For API and enterprise users, the main points are higher Opus rate limits and Anthropic’s longer-term compute partnerships with SpaceX, Amazon, Google, Microsoft, NVIDIA, and Fluidstack.
AI tools are increasingly infrastructure services. Model quality matters, but stable capacity, regional compliance, limit policy, and cost control also shape the user experience.