wezebo
Article · May 7, 2026 · 4 min read

Anthropic Taps SpaceX Compute to Raise Claude Limits

Anthropic says a SpaceX data center deal will add more than 300 megawatts of compute and immediately raise Claude Code and Claude API limits.


Anthropic has a new answer to one of Claude's most visible bottlenecks: buy more compute, fast.

The company said on May 6 that it has signed a compute partnership with SpaceX and will use all available capacity at SpaceX's Colossus 1 data center. Anthropic says the deal gives it more than 300 megawatts of new capacity, backed by more than 220,000 Nvidia GPUs, within the month.

That capacity is not being framed as a distant infrastructure story. Anthropic is tying it directly to product limits: Claude Code five-hour rate limits are being doubled for Pro, Max, Team, and seat-based Enterprise plans; peak-hour reductions are being removed for Pro and Max users; and Claude Opus API limits are being raised.

The immediate user impact

For developers, the practical change is simple: fewer hard stops in Claude Code and more room to run larger sessions. That matters because coding agents are increasingly used in long loops, not quick one-shot prompts. Hitting a cap in the middle of debugging or refactoring can break the flow.

For API customers, higher Opus limits make Anthropic more viable for heavier production workloads. The company did not spell out every new limit in the announcement, but the direction is clear: Anthropic wants Claude to handle more sustained demand without asking customers to route around capacity constraints.
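Until those limits land, the standard defensive pattern for teams hitting capacity ceilings is retry with exponential backoff and jitter. A minimal sketch of that pattern follows; note that `RateLimitError` and the `send` callable are stand-ins for illustration, not names from any Anthropic SDK:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 (rate limited) response from an API."""


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter: the ceiling doubles each attempt
    (base * 2**attempt, capped at `cap`), and the actual delay is drawn
    uniformly below it to avoid synchronized retry storms."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))


def call_with_retries(send, max_attempts: int = 5, base: float = 1.0):
    """Call `send()` (one API request), retrying on RateLimitError with
    jittered backoff; re-raise once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return send()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt, base=base))
```

Higher limits reduce how often this code path fires, but the pattern stays useful for any provider that can return 429s under load.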

This also shows how quickly AI product quality has become tied to infrastructure access. A model can be capable, but if customers cannot call it often enough, the experience feels worse than the benchmark chart suggests.

Why SpaceX is a notable supplier

SpaceX is not a neutral name in the AI race. Elon Musk owns xAI, has spent years criticizing OpenAI, and is still fighting OpenAI in court. A SpaceX-Anthropic compute deal sits in the middle of that messy competitive landscape.

Al Jazeera, citing Reuters reporting, noted that the facility is in Memphis, Tennessee, and that Musk said he was comfortable leasing compute to Anthropic after meeting with its leaders. The report also said the announcement came around Anthropic's developer day, where the company discussed agent-related product work.

The more important business point is that frontier AI companies are no longer depending on a single cloud relationship. Anthropic's post points to capacity across AWS Trainium, Google TPUs, and Nvidia GPUs, spread over partnerships with Amazon, Google/Broadcom, Microsoft/Nvidia, Fluidstack, and now SpaceX. That is a hedge against supply shortages, pricing pressure, and strategic dependence on any single infrastructure partner.

Compute is becoming the feature

AI companies usually talk about models, agents, and safety systems. But capacity is now part of the product roadmap. If Anthropic can reliably offer higher limits than competitors, it can make Claude feel better to the teams that use it all day.

That could matter most for Claude Code. Developers are unusually sensitive to interruptions, and coding agents burn tokens quickly because they read files, reason through tool output, and iterate over changes. A higher cap is not glamorous, but it may be one of the most meaningful upgrades for paying users.

There is also a signal for enterprise buyers. Anthropic says international compute expansion is part of meeting compliance and data residency needs for regulated industries. The infrastructure race is not just about raw model training. It is also about where inference runs, who controls the supply chain, and whether customers can get predictable service at scale.

The orbital compute line is still speculative

Anthropic also said it has expressed interest in working with SpaceX on multiple gigawatts of orbital AI compute capacity. That is worth noting, but it should not be treated as a shipping plan. Space-based data centers face obvious technical, energy, networking, and maintenance problems.

For now, the grounded story is big enough: Anthropic is adding a large block of near-term GPU capacity and immediately turning some of it into higher Claude limits. In the current AI market, that may be as important as a model release.