This take confuses the value of a project at inception with its value at maturity. Vibe-coded projects are at the beginning of their life. When Slack was at a comparable stage, it similarly didn't have hundreds of engineers running it. So the question facing vibe coding is not whether it can substitute for a mature tech product. The question is whether vibe coding can substitute for genuine engineering expertise at the very beginning of a budding, immature project.
In the long run there is no alternative to actually reading your codebase and understanding what is going on. You can leave the nitty-gritty details to the LLM, but you have to be in the driver's seat and know how the parts of your codebase work together. Be the architect and leave the plumbing to the LLM, but don't try to make a plumber an architect.
How many projects get to the point of 50 or 100 people online at the same time but then fail due to technical issues before they reach 50k? I would say very few. 99% of the time the problem is that they never reach those 100 simultaneous users in the first place, for non-technical reasons like not being a product that people really want. If you've got 50k people wanting to use your product, it's a success even if it has technical problems and crashes all the time.
Are these chat apps built with one giant monolithic architecture? It seems like you could spin up isolated copies per organization, and your scaling needs would be a lot lower and simpler. Then run everything in k8s with oversubscription to deal with the wasted compute overhead.
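The per-organization idea amounts to stamping out one small, isolated stack per tenant from a template rather than scaling a single shared deployment. A minimal Python sketch of that, building Kubernetes-style Deployment manifests as plain dicts (the image name, resource numbers, and the `acme`/`globex` org IDs are illustrative assumptions, not from any real system):

```python
def tenant_deployment(org: str, replicas: int = 1) -> dict:
    """Build a Kubernetes-style Deployment manifest for one org's
    isolated copy of the chat app (names and image are hypothetical)."""
    name = f"chat-{org}"
    labels = {"tenant": org}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": "chat",
                        "image": "example.com/chat-app:latest",
                        # Requests set well below limits so the scheduler can
                        # oversubscribe CPU across mostly idle tenants.
                        "resources": {
                            "requests": {"cpu": "100m", "memory": "256Mi"},
                            "limits": {"cpu": "1", "memory": "1Gi"},
                        },
                    }],
                },
            },
        },
    }

# One small, independent stack per organization:
manifests = [tenant_deployment(org) for org in ("acme", "globex")]
```

Each tenant then scales (or fails) independently, at the cost of more moving parts to operate; the oversubscription only pays off if most tenants are idle most of the time.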
I wonder if vibe-coded dev-ops will follow the path blazed by virtual machine managers vs. bare metal servers. If the bare metal server crashes, you have to go out and, like a rancher's calf, nurse it back to health. If the VM crashes, you take it out into the pasture and shoot it (and spin up another VM).
In the vibe coded world, if a bug is found (or a relied-upon API is deprecated, or a dependency is found to suffer a security vulnerability, or a vendor changes, etc.), do we simply kill the codebase and vibecode up a fresh one de novo from the same prompts as the original, adding only knowledge of the recent failure mode?
> In the vibe coded world, if a bug is found [...] do we simply kill the codebase and vibecode up a fresh one de novo from the same prompts as the original, adding only knowledge of the recent failure mode?
That sounds like a horrible plan: LLMs are non-deterministic (practically speaking; I know they can be run with temperature=0 locally, but that's not really relevant to how anyone is writing code with them now).
Feeding the same spec back in with some changes to deal with the one bug you discovered, then regenerating all the code, is likely to create a system with new bugs (unrelated to the one you fixed by amending the spec) that didn't exist the last time around.
Are you wondering if in the future AI will take a spec in natural language and convert it into thousands or millions of lines of code every time a bug is surfaced?
This reminded me of the shift from gambling with cash and a bookie connected to the mafia, to DraftKings/FanDuel, to prediction markets. In the end the house always wins.
This is true, but people also seem to think it means we're going to get more worthwhile software, and that is never really the case. Look at how commercially available game engines made publishing A and AA games more accessible: the expectation was a flood of amazing indie games, but what we got was a flood of slop, cash grabs, and asset flips. Now the same thing is happening again, in the game industry and in the software industry alike.
https://x.com/paoloanzn/status/2032388364025118757 (https://xcancel.com/paoloanzn/status/2032388364025118757)
just my 2 cents
You'll be playing whack-a-mole forever.