The beginning of scarcity in AI (tomtunguz.com)

by gmays 227 comments 197 points


[−] keiferski 28d ago
We just had a realization during a demo call the other day:

The companies that are entirely AI-dependent may need to raise prices dramatically as AI prices go up. Not being dependent on LLMs for your fundamental product’s value will be a major advantage, at least in pricing.

[−] andersmurphy 28d ago
Yup. And regardless of price, they need to spend more and more as the project collapses under the inevitable incidental complexity of generating 30k lines of code a day.

It's similar to how, if you know what you're doing, you can manage a simple VPS and scale a lot more cost-effectively than with something like Vercel.

In a saturated market, margins are everything. You can't necessarily afford to be giving all your margins to Anthropic and Vercel.

[−] prox 28d ago
I also can’t wait for the time when few know how to code. Just like how many folks can’t tell HTML from CSS now that the home-brewed website has gone away.

There might always be LLMs, but the dependence is an interesting topic.

[−] Cthulhu_ 28d ago
Look no further, to be honest; look at older-generation programming languages like COBOL and how sought-after good developers for that language are.

But I'm also afraid / certain that LLMs are able to figure out legacy code (as long as enough fits in their context window), so it's tenuous at best.

Also, funny you mentioned HTML / CSS, because for a while (...in the 90's / 2000's) it looked like nobody needed to actually learn those because of tools like Dreamweaver / FrontPage.

[−] raw_anon_1111 27d ago
The whole “you can make a lot of money programming in COBOL” is one of those myths that needs to die.

Even the briefest of Google searches shows they make around the same as any other enterprise dev, if not slightly less.

[−] Max-q 27d ago
The issue with COBOL code is that it’s hidden. It’s mostly internal systems so little code available for training. HTML, TypeScript, JavaScript, C, etc, are readily available, billions of code lines.
[−] prox 27d ago
Well, on the second paragraph, I have no illusions; they’ll figure out more as they are trained. I am thinking more of the custodians (which is what coders are turning into).

Say you are a good coder now, but you are becoming a custodian: checking the LLM’s work will slowly erode your skills. Maybe with a good memory or an amazing skillset it will take some time, but if you don’t use it, you lose it.

[−] ipaddr 27d ago
COBOL developers are sought after but still paid less than a grad doing CRUD. Is that the future?
[−] solenoid0937 27d ago
How are COBOL developers "sought after"? That's an oft-repeated but woefully incorrect meme.

FAANG new grads make more. If the COBOL devs had upskilled throughout their careers, they'd be Senior Staff/Principal+ and making 5-10x more than they do today.

[−] BobbyTables2 27d ago
A time when few know how to code?

I think that was about 10 years ago…

[−] zozbot234 28d ago

> The companies that are entirely AI-dependent may need to raise prices dramatically as AI prices go up.

It's not that clear. Sure, hardware prices are going up due to the extremely tight supply, but AI models are also improving quickly to the point where a cheap mid-level model today does what the frontier model did a year ago. For the very largest models, I think the latter effect dominates quite easily.

[−] lelanthran 28d ago

>> The companies that are entirely AI-dependent may need to raise prices dramatically as AI prices go up.

> It's not that clear. Sure, hardware prices are going up due to the extremely tight supply, but AI models are also improving quickly to the point where a cheap mid-level model today does what the frontier model did a year ago.

I agree; I got some real coding value out of Qwen for $10/month (unlimited tokens); a nice harness (and some tight coding practices) narrows the distance between SOTA and six-month-old second-tier models.

If I can get 80% of the way to Anthropic's or OpenAI's SOTA models for $10/month with unlimited tokens, guess what I am going to do...

[−] satvikpendem 28d ago
GitHub Copilot is already $10 and I don't even use up the requests every month; it's the most bang-for-buck LLM service I've used.
[−] chewz 27d ago
Until May
[−] kwakubiney 27d ago
What’s happening in May?
[−] chewz 27d ago
GitHub Copilot switches all users from per-prompt to per-token billing.
[−] bcjdjsndon 28d ago
There's only so far engineers can optimise the underlying transformer technique, which is, and always has been, doing all the heavy lifting in the recent AI boom. It's going to take another genius to move this forward. We might see improvements here and there, but I don't think the magnitudes of the data and VRAM requirements will change significantly.
[−] zozbot234 28d ago
State space models are already being combined with transformers to form new hybrid models. The state-space part of the architecture is weaker at retrieving information from context (it can't find a needle in the haystack as the context gets longer; the details effectively get compressed away, since everything has to fit in a fixed-size state), but computationally it's quite strong: O(N), not O(N^2).
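For intuition, here's a back-of-the-envelope sketch of that scaling difference. The numbers are hypothetical unit counts, not any particular model's FLOPs; only the growth rates matter:

```python
# Rough per-sequence compute for a context of length n, in abstract
# "operations". Constant factors are ignored; only growth rates matter.

def attention_ops(n: int) -> int:
    # Self-attention compares every token with every other token: O(n^2).
    return n * n

def ssm_ops(n: int) -> int:
    # A state-space scan does one fixed-size state update per token: O(n).
    return n

# Doubling the context doubles the SSM cost but quadruples attention's.
for n in (1_024, 4_096, 16_384):
    print(f"n={n:>6}  attention={attention_ops(n):>12,}  ssm={ssm_ops(n):>7,}")
```

This is also why the hybrid designs the parent describes are attractive: the O(n) part carries most of the context cheaply, with attention layers kept around for precise retrieval.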
[−] aerhardt 27d ago
I’ve read and heard from Semi Analysis and other best-in-class analysts that the amount of software optimizations possible up and down the stack is staggering…

How do you explain that, capabilities being equal, the cost per token is going down dramatically?

[−] bcjdjsndon 26d ago
Optimizations, like I said. They'll never hack away the massive memory requirements, however, or the pre-training... Imagine the memory requirements without the pre-training step... This is just part and parcel of the transformer architecture.
[−] bcjdjsndon 26d ago
And a lot of these improvements are really just classic automation, or chaining together yet more transformer architectures to fix issues the transformer architecture creates in the first place (hallucinations, limited context).
[−] abarth23 26d ago
Exactly this. To actually visualize the sheer scale of the VRAM wall we are hitting, I recently built an LLM VRAM estimator (bytecalculators.com/llm-vram-calculator).

If you play around with the math, you quickly realize that even if we heavily quantize models down to INT4 to save memory, simply scaling the context window (which everyone wants now) immediately eats back whatever VRAM we just saved. The underlying math is extremely unforgiving without fundamentally changing the architecture.
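A minimal sketch of that math (not the linked calculator's code; the model shape below is a hypothetical 70B-class dense model with grouped-query attention, and the KV-cache formula is the usual 2 × layers × KV heads × head dim × context × bytes-per-value):

```python
GB = 1024 ** 3

def weights_gb(params: float, bytes_per_param: float) -> float:
    # Memory for model weights: parameter count times storage width.
    return params * bytes_per_param / GB

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_val: int = 2) -> float:
    # KV cache: 2 tensors (K and V) per layer, per KV head, per position.
    return 2 * layers * kv_heads * head_dim * context * bytes_per_val / GB

# Hypothetical 70B-class model: 80 layers, 8 KV heads, head dim 128.
fp16 = weights_gb(70e9, 2)                   # ~130 GB at 16-bit
int4 = weights_gb(70e9, 0.5)                 # ~33 GB quantized to INT4...
kv_128k = kv_cache_gb(80, 8, 128, 128_000)   # ...but a 128k context alone
                                             # needs ~39 GB of KV cache
print(f"fp16 weights {fp16:.0f} GB, int4 weights {int4:.0f} GB, "
      f"128k KV cache {kv_128k:.0f} GB")
```

Under these assumptions, the KV cache for a 128k-token context is already bigger than the entire INT4-quantized weight footprint, which is the "eats back whatever we just saved" effect the parent describes.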

[−] CodingJeebus 28d ago
You also have to look at how exposed your vendors are to cost increases as well.

Your company may have the resources to effectively shift to cheaper models without service degradation, but your AI tooling vendors might not. If you pay for 5 different AI-driven tools, that's 5 different ways your upstream costs may increase that you'll need to pass on to customers as well.

[−] chewz 28d ago
We have been processing the same data for the last two years.

Inference prices dropped by about 90 percent in that time (a combination of cheaper models, implicit caching, service levels, different providers, and other optimizations).

Quality went up. Quantity of results went up. Speed went up.

The service level we provide to our clients went up massively and justified better deals. Headcount went down.

What's not to like?

[−] oeitho 28d ago
The decline of independent thought, for one. As people become reliant on LLMs to do their thinking for them and solve every problem they stumble upon, they become a shell of their former selves.

Sadly, this is already happening.

[−] WarmWash 28d ago
We'll need to do faux mental work like how we do faux labor work.
[−] chewz 27d ago
There is no decline. Human assets were always too expensive to process some additional information; we are simply processing a lot more low-signal data.

Actually, some of our analysts are empowered by the tools at their disposal. Their jobs are safe and necessary. Others were let go.

Clients are happy to get a fuller picture of their universe, which drives more informed decisions. Everybody wins.

[−] oeitho 27d ago
You are free to believe what you want, but what you describe does not match what I’ve seen from society as a whole. I’m just going to leave this here: https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...
[−] suttontom 26d ago
Are you being satirical?
[−] bluecheese452 27d ago
The headcount that went down probably isn’t too thrilled about it.
[−] chewz 27d ago
Yes, probably. But the others gained skills and tools that made their jobs secure.
[−] accrual 28d ago

> Not being dependent on LLMs for your fundamental product’s value

I think, more specifically, not being dependent on someone else's LLM hardware. IMO, running OSS models on dedicated hardware could still be plenty viable for many businesses, granted it'll be some time before future OSS models reach the performance of today's SOTA models.

[−] michaelbuckbee 28d ago
What's weird, though, is the bifurcation in pricing in the market: aka, if your app can function on a non-frontier-level AI, you can use last year's model at a fraction of the cost.
[−] Cthulhu_ 28d ago
That'll be (part of) the big market correction, but also, speaking broadly: as investor money dries up and said investors want to see results, many new businesses or products will realise they're not financially viable.

On a small scale that's a tragedy, but there are plenty of analysts predicting an economic crash and recession, because there are trillions invested in this technology.

[−] muppetman 28d ago
No shit. People are just figuring this out now?

This is the “Building my entire livelihood on Facebook, oh no what?” all over again.

Oh no, sorry, I forgot: your laptop's LLM can draw a potato, let me invest in you.

[−] anonyfox 28d ago
In fact, I am betting on the opposite. Frontier models are not getting THAT much better anymore, at least for common business needs, but the OSS models keep closing the gap. If the trajectories hold, there will probably be a near-future moment where the big providers' prices suddenly drop sharply, once the first viable local models can consistently take over normal tasks on reasonable hardware. Right now the frontier providers are probably rushing for as much money as they possibly can before LLMs become a true commodity for the 80% of use cases outside the deep expert areas where they will keep an edge as specialist juggernauts (i.e. a premium cybersecurity model).

So it's all a house of cards now, and the moment the bubble bursts is when local open inference has closed the gap. It looks like Chinese and smaller players are already going hard in this direction.

[−] michaelje 28d ago
Absolutely. Pricing exposure is the quiet story under all the waves of AI hype. Build for convenience → subsidise for dependence → meter for margin is a well-worn playbook, and AI-dependent companies are about to find out what phase three feels like.

Hyperscalers are spending a fortune so we think AI = API, but renting intelligence is a business model, not a technical inevitability.

Shameless link to my post on this: https://mjeggleton.com/blog/AIs-mainframe-moment

[−] finaard 28d ago
How is that surprising? For over a year now, we've been designing any LLM-related tooling so that we can either drop it or switch to a self-hosted model once throwing money at hardware would pay for itself quickly.

It's just another instance of cloud dependency, and people should've learned something from that over the last two decades.

[−] strife25 28d ago
Marginal costs matter in this world.
[−] onion2k 28d ago
> The companies that are entirely AI-dependent may need to raise prices dramatically as AI prices go up

Or they'll price the true cost in from the start, and make massive profits until the VC subsidies end... I know which one I'd do.

[−] bjornroberg 28d ago
I wonder if they won't, because the real mechanism is that AI-wrapper pricing power is weak (switching costs are near zero), while state-of-the-art models make it difficult to lower prices due to higher costs.
[−] lowsong 25d ago
Any company that has become dependent on AI will struggle to survive from here on. By the time many teams realise it, it'll be too late.
[−] thih9 28d ago
Also: AI dependence could be explicit AI API usage by the product itself, but also anything else, like AI-assisted coding, AI used by humans in other surrounding workflows, etc.
[−] sevenzero 28d ago
This was as clear as day when the first LLM-based businesses popped up. How did you realize this only now?
[−] sidewndr46 28d ago
Not really; the next move is to establish standards groups requiring the use of AI in product development, a mix of industry and governmental mandates. What you are viewing as COGS instead becomes a barrier to entry.
[−] dmazin 28d ago
Constraints can lead to innovation. Just two things that I think will get dramatically better now that companies have incentive to focus on them:

* harness design

* small models (both local and not)

I think there is still tremendous low-hanging fruit in both areas.