Anthropic Doubles Claude Code Limits via SpaceX Deal
Nate Herk | AI Automation
the gist
Anthropic's SpaceX partnership adds 300 MW of compute capacity (over 220k Nvidia GPUs), doubling Claude Code's 5-hour session limits, ending peak-hour throttling for Pro and Max, and boosting API rate limits (e.g., Opus output from 8k to 80k tokens/min).
The Breakthrough
Anthropic partnered with SpaceX for 300 megawatts of compute capacity including over 220,000 Nvidia GPUs, which doubled Claude Code's 5-hour rate limits across Pro, Max, and Team plans, removed peak-hours throttling for Pro and Max accounts, and substantially increased API rate limits for Claude 3 Opus models.
What Actually Worked
- Claude Code's 5-hour session limits doubled to 10 hours for all Pro, Max, and Team plans, effective immediately.
- Peak-hours limit reduction removed for Pro and Max Claude Code users; previously, sessions were throttled faster during weekday mornings.
- API rate limits for Claude 3 Opus increased significantly across tiers: lowest tiers saw up to 16x on input and 10x on output; tier 1 now supports 500,000 input tokens per minute (roughly 370 pages of context) and 80,000 output tokens per minute, up from 30,000 input and 8,000 output.
- Builders should retest workflows previously abandoned due to rate limits, such as Opus agents shelved months ago, since the new limits may now support them.
- Shift from Haiku/Sonnet delegation to more Opus usage in workflows, while maintaining context management; 1 million token context window now viable in production API calls.
- Claude Code supports production infrastructure beyond prototypes; multi-agent workflows viable with sub-agents each handling 50k tokens.
Before / After
Claude Code sessions doubled from 5 hours to 10 hours across plans. Peak-hours throttling eliminated for Pro/Max. Claude 3 Opus API (tier 1): input tokens per minute from 30,000 (roughly 20-22 pages) to 500,000 (roughly 370 pages), about 16x; output tokens per minute from 8,000 to 80,000, a 10x increase.
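The multipliers follow directly from the quoted figures; a quick arithmetic check (the tokens-per-page figure is an assumption inferred from the "370 pages" claim, roughly 1,350 tokens per page):

```python
# Tier-1 Opus limits quoted above (tokens per minute).
old_input, new_input = 30_000, 500_000
old_output, new_output = 8_000, 80_000

print(f"input multiplier:  {new_input / old_input:.1f}x")    # → 16.7x
print(f"output multiplier: {new_output / old_output:.1f}x")  # → 10.0x

# "Roughly 370 pages" implies about 1,350 tokens per page:
print(f"tokens per page:   {new_input / 370:.0f}")           # → 1351
```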
Context
Anthropic faced frequent outages and rate limits as demand outpaced compute capacity amid feature releases such as Opus and new plan tiers. The SpaceX deal, alongside existing partnerships with Amazon, Google, Broadcom, Microsoft, Nvidia, and Fluid Stack, addresses this by scaling infrastructure rapidly; a Goldman Sachs/Blackstone JV announcement preceded it. This enables enterprise-scale usage, international expansion, and longer-term plans such as orbital AI compute to bypass terrestrial limits on power, water, and cooling. Builders gain flexibility for production agents, routines, and knowledge work without rapid session exhaustion.
Notable Quotes
- "They're going to be able to double Claude Code's 5-hour rate limits double whether you're on pro max or team your 5-hour limit is going to be doubled."
- "Per minute you used to only be able to send 30k input tokens at a time or you'd be rate limited and that has been upgraded by like 16% on the output side it used to be 8,000 a minute and now it's 80,000 a minute."
- "Anthropic and SpaceX have expressed interest in developing multiple gigawatts of orbital AI compute capacity."
Content References
- Event: Code with Claude conference (San Francisco, London, Tokyo), mentioned.