I guess gigawatts is how we roughly measure computing capacity at the datacenter scale? Also saw something similar here:
> Costs and pricing are expressed per “token”, but the published data immediately seems to admit that this is a bad choice of unit because it costs a lot more to output a token than input one. It seems to me that the actual marginal quantity being produced and consumed is “processing power”, which is apparently measured in gigawatt hours these days. In any case, I think more than anything this vindicates my original decision not to get too precise. [...]

https://backofmind.substack.com/p/new-new-rules-for-the-new-...
I think for the same model wall time is probably a more intuitive metric; at the end of the day what you’re doing is renting GPU time slices.
Large outputs dominate compute time, so they are more expensive.
IMO input and output token counts are actually still a bad metric, since they linearise non-linear cost increases (the marginal cost of a token grows with the length of the context it attends over). I suspect we'll see another change in the future where they bucket pricing by context length: XL output contexts may be 20x more expensive instead of 10x.
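To make the bucketing point concrete, here's a toy model; a sketch, not how any provider actually bills. It assumes decoding token i costs proportionally to the i tokens it attends over, so total compute grows roughly quadratically with length while a flat per-token price grows only linearly.

```python
# Toy model: flat per-token pricing vs. length-dependent compute cost.
# Assumption: decoding token i costs proportionally to the i tokens it
# attends over; real serving costs (KV cache, batching) are more complex.

def compute_cost(context_tokens: int, output_tokens: int) -> int:
    """Sum of per-token attention costs; grows ~quadratically with length."""
    return sum(range(context_tokens, context_tokens + output_tokens))

def billed_tokens(output_tokens: int) -> int:
    """Flat per-token billing charges the same regardless of position."""
    return output_tokens

for ctx in (1_000, 10_000, 100_000):
    print(f"context={ctx:>7,}  compute={compute_cost(ctx, 1_000):>13,}  "
          f"billed={billed_tokens(1_000):,}")
# The same 1,000 output tokens cost ~67x more compute at 100k context than
# at 1k, but the bill is identical -- hence the pressure to bucket by length.
```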
As a customer, it's nice that I can quantize and count the units of cost in an understandable way.
For Anthropic, as a business bleeding money, it's probably nice to have value-based pricing for tokens, so innovation (like computational efficiency improvements) can result in some extra margin. If they priced by the more direct computation cost, they could never financially benefit from any improved efficiency, including faster hardware!
> I think for the same model wall time is probably a more intuitive metric; at the end of the day what you’re doing is renting GPU time slices
This is a bit too much of a simplification.
The LLM provider batches multiple customer requests into one GPU/TPU pass over the weights, with minimal latency increase.
The LLM provider may in fact be renting GPUs by the second, but the end user isn't. We the end users are essentially timesharing a pool of GPUs without any dedicated "1 vGPU" style resource allocation. In such a setting, charging by "GPU tick" sounds valid, and the various categories of token costs are an approximation of cost+margin.
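A minimal sketch of the economics being described, with made-up numbers: if one forward pass over the weights is mostly memory-bandwidth-bound, its cost is roughly fixed whether it decodes one sequence or a whole batch, so each user's share shrinks with batch size.

```python
# Sketch of batched decoding economics (illustrative numbers only).
# Assumption: one pass over the weights is memory-bound, so its cost is
# roughly fixed until the batch saturates compute.

PASS_COST = 1.0  # hypothetical cost of one decode step over the weights

def cost_per_user_token(batch_size: int) -> float:
    # Every request in the batch gets one token out of the same pass,
    # so the (mostly fixed) pass cost is split across users.
    return PASS_COST / batch_size

for b in (1, 8, 64, 256):
    print(f"batch={b:>3}  cost/user/token = {cost_per_user_token(b):.4f}")
# Wall-clock time per token barely changes for each user, but their share
# of GPU time collapses -- which is why billing raw wall time would overcharge.
```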
Gigawatts seems more like a statement of the power supply and heat dissipation of the actual facility.
I’m assuming that if you have more efficient chips, you can cram more of them in to make use of the spare power capacity?
Trying to measure the actual compute is a moving target, since you’d be upgrading things over time, whereas the power aspects are probably more fixed by fire code, building size, and utilities.
Measuring data centers in watts is like measuring cars in horsepower. Power isn't a direct measure of performance, but of the primary constraint on performance. When in doubt, choose the thermodynamic perspective.
I mean, a single nuclear reactor delivers around 1 GW, so if a single datacenter consumes multiples of that, it gives a reasonably accurate idea of the scale.
It's not really a stable measure of compute, but it's a good indication of burn rate, since energy cost is something we closely track in economies and it actually dominates a lot of the cost of operating data centers, at least short term. Over time we'll get more tokens per energy unit and fewer dollars of hardware needed per energy unit. Tokens are currently too abstract for a lot of people; they have no concept of the relationship between tokens per time unit and cost. Long term, there's going to be a big shift from op-ex to cap-ex for energy usage as we shift from burning methane and coal to using renewables with storage.
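Rough arithmetic on what a gigawatt means as a burn rate; the electricity price below is an assumption for illustration, and real industrial rates and utilization vary.

```python
# Back-of-envelope: annual electricity bill for 1 GW of draw.
# USD_PER_KWH is an assumed industrial rate, not a quoted figure.

GIGAWATTS = 1.0
HOURS_PER_YEAR = 8_760
USD_PER_KWH = 0.05  # assumption

kwh = GIGAWATTS * 1e6 * HOURS_PER_YEAR  # 1 GW = 1,000,000 kW
cost = kwh * USD_PER_KWH
print(f"~${cost / 1e6:,.0f}M/year per gigawatt, electricity alone")
# ~$438M/year, before hardware depreciation, cooling overhead, or staff --
# which is why tokens-per-joule improvements flow straight to margin.
```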
That these data centers can turn electricity + a little bit of fairly simple software directly into consumer and business value is pretty much the whole story.
Compare what you need to add to AWS EC2 to get the same result, above and beyond the electricity.
That's a convenient story, but most consumers' and businesses' use of AI is light enough that they could easily run local models on their existing silicon. Resorting to proprietary AI running in the datacenter would only add a tiny fraction of incremental value over that, and at a significant cost.
Sure but where the puck is going is long-running reasoning agents where local models are (for the moment) significantly constrained relative to a Claude Opus 4.6.
All of big tech (except Google obviously) is pushing hard for Claude Code internally. I’m talking “you all have unlimited tokens and we’re going to have a leaderboard of who used the most” kind of push.
"we’re going to have a leaderboard of who used the most"
Yeah I've seen stuff like that and it's a bit bewildering for me. Feels a bit like AWS is new and we're competing to see who can deploy the most EC2 instances.
It’s the crudeness of available management methods at play. Quite exposing for the profession, really (remember lines of code as a measure of productivity?).
Their disclosed run rate was $14bn around the time of those filings, IIRC. They started showing meaningful revenue around the start of 2025, so if you just linearly extrapolate up, that would give you ~$7bn-ish actual revenue over that period. The more the growth is weighted towards the last few months, the lower that number goes.
So I don't think those numbers are really in tension at all.
If your revenue doubles every month, then in the first month where you make $2.5B, your total lifetime revenue has been $5B ($2.5B this month, $1.25B the month before, etc.: a simple geometric series). But your current revenue run rate for the next year will be $2.5B x 12 = $30B.
They're not quite growing that fast, but there's nothing inherently inconsistent between these claims... as long as the growth curve is crazy.
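The arithmetic above checks out as a geometric series; a quick sketch with hypothetical growth rates, not Anthropic's actual curve:

```python
# Lifetime revenue vs. annualised run rate under steady monthly growth.
# growth = month-over-month multiplier; 2.0 means doubling every month.

def lifetime_and_run_rate(latest_month: float, growth: float) -> tuple[float, float]:
    # Summing backwards in time: latest * (1 + 1/g + 1/g^2 + ...)
    # = latest * g / (g - 1), valid for g > 1.
    lifetime = latest_month * growth / (growth - 1)
    run_rate = latest_month * 12
    return lifetime, run_rate

for g in (2.0, 1.5, 1.2):
    life, rr = lifetime_and_run_rate(2.5e9, g)
    print(f"growth={g}: lifetime ~ ${life/1e9:.1f}B, run rate = ${rr/1e9:.0f}B")
# At 2.0x monthly: $5B lifetime vs a $30B run rate, exactly the numbers above.
# Slower growth closes the gap, which is where the "in tension" intuition comes from.
```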
1) It's in their interest to distort numbers and frame things in ways that make them look good - e.g. using 'run rate'.
2) The numbers are not audited, and we have no idea how they are recognising revenue - this can affect the true compounding rate of revenue growth.
The numbers are certainly audited by their investors. Anthropic is no stranger to PR talk, but investors know what to look for in the books. They aren't stupid, contrary to how they're viewed on HN.
There is more investment money available than Anthropic needs. They can pick and choose.
I do, and I do trust the numbers. I doubt Anthropic is committing fraud, given that they already don't have enough compute to serve demand. What's the point of lying to the public and investors, and risking jail?
Interesting to see Anthropic investing in compute infrastructure. The bottleneck I keep hitting is not raw compute but where that compute lives: EU customers increasingly need guarantees their data stays in-region. More sovereign compute options in Europe would unlock a lot of enterprise AI adoption.
Interesting timing, given the quantum computing timeline pressure from this week's cryptography discussions. $30B run rate and gigawatts of TPU capacity, and meanwhile the most interesting AI work I've seen lately runs on a phone in Termux with no cloud dependency at all. Both things are true simultaneously.
How does a compute shortage relative to demand actually manifest? Obviously they never close sign-ups, so the only option is longer queues? But if demand grows like crazy, then queues should get longer, yet my Claude Pro plan seems snappy, with only occasional retries due to 429s.
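For what it's worth, those occasional 429s are the visible edge of the queueing: clients are expected to back off and retry, which smooths demand spikes without closing sign-ups. A generic sketch; the endpoint, payload, and header handling are placeholders, not any specific provider's API:

```python
# Generic 429 retry loop with exponential backoff and jitter.
# URL and payload are illustrative placeholders.

import random
import time

import requests  # third-party: pip install requests

def post_with_backoff(url: str, payload: dict, max_retries: int = 5):
    for attempt in range(max_retries):
        resp = requests.post(url, json=payload, timeout=60)
        if resp.status_code != 429:
            return resp
        # Prefer the server's Retry-After hint; otherwise back off 1s, 2s, 4s...
        delay = float(resp.headers.get("retry-after", 2 ** attempt))
        time.sleep(delay + random.random())  # jitter avoids thundering herds
    raise RuntimeError("still rate-limited after all retries")
```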
There's no limit to the algorithms. People don't understand yet. They can learn the whole universe with a big enough compute cluster. We built a generalizable learning machine.
> the actual marginal quantity being produced and consumed is “processing power”, which is apparently measured in gigawatt hours these days

Is it priced that way, though? I assume next-gen TPUs will be more efficient?
> but the published data immediately seems to admit that this is a bad choice of unit because it costs a lot more to output a token than input one
And that's silly, because API pricing is more expensive for output than input tokens: 5x so for Anthropic [1], and 6x so for OpenAI [2]!
[1] https://platform.claude.com/docs/en/about-claude/pricing
[2] https://openai.com/api/pricing
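To see what the asymmetry means for a bill, a small sketch using placeholder prices in the 5x ratio cited for Anthropic [1]; check the linked pages for current numbers.

```python
# Illustrative request costing with asymmetric token prices.
# $3/$15 per million tokens are placeholder numbers in the cited 5x ratio.

INPUT_PER_MTOK = 3.0    # assumed $/1M input tokens
OUTPUT_PER_MTOK = 15.0  # assumed $/1M output tokens (5x input)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PER_MTOK
            + output_tokens * OUTPUT_PER_MTOK) / 1e6

# Long context, short answer: input-dominated.
print(f"${request_cost(100_000, 500):.4f}")   # ~$0.31
# Short prompt, long generation: output-dominated.
print(f"${request_cost(5_000, 20_000):.4f}")  # ~$0.32
```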
> Measuring data centers in watts is like measuring cars in horsepower.

The equivalent for cars would be pricing by how much gas you burned, not by horsepower.
I often get 10x more cost-effective processing on my local hardware.
Still reaching for frontier models for coding, but I find the hosted models on OpenRouter good enough for simple work.
Feels like we are jumping to warp on flops. My cores are throttled and the fiber is lit.
Feels like the lede is buried here!
> There is more investment money available than Anthropic needs. They can pick and choose.
Hahaha.
Mate, nobody cares about that or trusts it. Everyone is waiting in anticipation for the S-1 filing.
https://news.ycombinator.com/item?id=47637597