If you started a company two years ago, many assumptions are no longer true (steveblank.com)

by tie-in 151 comments 207 points

[−] andai 31d ago
So previously the bottleneck was production. I'd wager now the bottleneck is willingness to test your hypotheses. The willingness to experience failure as soon as possible. To test and iterate.

As technology brings the cost of everything else to 0, psychological costs will predominate.

Reality testing is ultimately unavoidable, of course, but I'd guess most people still lean away from that rather than into it. (Our whole culture is set up that way, and most of us get like two decades of Pavlovian conditioning in that direction.)

Edit: Expanded here: https://nekolucifer.substack.com/p/willingness-to-fail-is-no...

[−] nateburke 31d ago
Add on the compounding effect that "QA" or "test" in someone's job description was viewed as a synonym for "less highly compensated" over the past few decades, and you have an entire generation of mid-career devs with poorly adapted instincts about what is valuable in the process of shipping working product.

The bottleneck was never coding...

[−] petervandijck 31d ago
I keep seeing people say that, but the bottleneck really was, to a large degree, writing the code.
[−] DontchaKnowit 29d ago
Has never been true in my career.
[−] canarias_mate 31d ago
[flagged]
[−] bob1029 31d ago
The ability to endure some degree of suffering seems essential for building high quality products. Getting in front of the customer as often as possible and proving things end to end is very painful. But it provides the most feedback and gets you aligned quickly.

If you want an example of the polar opposite, the TDD idea seems to be a good fit. Unit tests are a perfect little universe that you can always control. All side effects and scary possibilities can be handwaved away under mocks. The psychological power of having control over everything is what draws so many toward the idea. A deterministic guarantee that the little circles will turn green when you press play every time is painless.

Failing tests are the most informative and you can only develop those by meaningful interaction with the customer's requirements. If you aren't constantly fighting a wave of red in your testing suite, it's likely you are too isolated from reality.
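The "perfect little universe" above is easy to sketch. A minimal, illustrative Python example (the `fetch_exchange_rate` function and rate values are hypothetical, not from any real service): the mock dictates the answer, so the check goes green on every run regardless of what the real world does.

```python
from unittest import mock

def fetch_exchange_rate(client, currency):
    # The scary side effect: in production this is a network call
    # to an external rate service that can time out or change shape.
    return client.get_rate(currency)

# A perfect little universe: the mock returns whatever we tell it to,
# so this passes deterministically even if the real service changed
# its API yesterday or returns malformed payloads today.
fake_client = mock.Mock()
fake_client.get_rate.return_value = 1.08

assert fetch_exchange_rate(fake_client, "EUR") == 1.08  # always green
fake_client.get_rate.assert_called_once_with("EUR")
```

Nothing here exercises the network, the real payload shape, or the customer's actual requirement; that feedback only arrives from the painful end-to-end proving the comment argues for.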

[−] garrickvanburen 31d ago
I agree, and a willingness to experience failure as soon as possible has always been a competitive advantage.

If anything, LLM chatbots & synthetic users will make the majority of founders ever more comfortable not testing reality.

[−] karolist 32d ago
The post reads like it was written by someone who read too much about AI rather than someone who tried to build a startup with the AI they advocate so much. I'm still bound by system design, UX, pricing and feature decisions; if not by the speed of code output, then certainly by review time. Yes, iterating is faster, but we're nowhere near agentic AI loops spitting out working products. Technically it's possible, but then you've just spent that time planning and writing the spec up front, which you'd otherwise interleave with dev time. If the product is a simple CRUD database skin, then yeah, I think the chances of success are lower, but that's not the type of startup the post seems to be writing about.
[−] ryandrake 31d ago
It's gotten to the point where, when I see yet another Thought Leadership article about software development, I search the page for the word "will". If I see unqualified predictions of the future (AI will change this and Agents will do that and developers will need to do thus), I think I can safely ignore the article. Who has the hubris to make such strong and unwavering statements about a future nobody can see?
[−] marcus_holmes 31d ago
A local "thought leader" wrote a ridiculous piece about software leadership in a post-LLM world, when clearly they'd never actually used an LLM to build anything, or actually led a team using LLMs. Lots of hand-waving but obviously no real experience.

As gp says, there's a big difference between theory and practice here, and a lot of the things we needed when we weren't using LLMs are still needed when we are, but it takes a bit of actual practice to work this out. It's still not at the stage where an Ideas Guy can make a real working product without someone on the team actually knowing how to develop software.

At least in my experience, so far. But the world is changing fast.

[−] bombcar 31d ago
The last "Thought Leader" worth listening to was put to death by Athens.

Some that came after might be worthy of the title, but those who claim it for themselves aren't.

[−] antonvs 31d ago

> Who has the hubris to make such strong and unwavering statements about a future nobody can see?

LinkedIn influencers

[−] skeeter2020 31d ago
How come all the talking heads telling us AI has made whatever we've learned or built obsolete never point the lens back at themselves? I personally think knowledge from entrepreneurs who learned their lessons in previous decades is valuable, but if what they're spouting is true it doesn't make any sense to listen to them, i.e. these thought leaders are desperate to tell you of their own irrelevance. Same thing for every CTO who tells me AI replaces developers. That's a stretch, but if we could do that, don't you think it would be trivial to replace your job?
[−] throwanem 31d ago
Someone is still needed to decide what the AI should do, and to harness and manage it. A CTO can do that with existing skills, and the resulting organization should converge quickly on near-zero need or want for human SWEs.

So goes the thinking, anyway. It's why my couple decades of experience and I still occasionally get to hear from rando cold recruiters desperate to sell someone a "pivot to AI," probably thinking they can lowball me by holding my mortgage over my head in order to screw three times the work out of me that they'd pay for.

I was in this business too long.

[−] hackingonempty 31d ago
Have you ever sweat bullets at 3 a.m.? While Claude spins in circles unable to fix production without breaking five other things?

You will!

[−] coffeefirst 32d ago
Yeah… also, it’s just weird. Interfaces are important, they contain information and affordances, everything should not become a chatbot.
[−] pizzly 31d ago
Interfaces will definitely evolve. Visual graphs and representations are useful, but speaking to an agent will become mainstream since it's faster than typing. Also, the ability for agents to code on the fly will open up different interfaces. For instance, you could say "show me the impact of our marketing campaign x over this time period" and out come graphs that were coded on the spot. Drawing might even make a comeback: when designing a website you just cross out things you don't want, draw boxes where you want things, and talk at the same time, saying what you want in each box. Then some people are using virtual reality. Not everything will become a chatbot, but it's definitely going to evolve, with chatting being an integral part of the user interface.
[−] skydhash 31d ago
You're describing some Hollywood version of SF, not the real world. Speaking is not faster than pressing a key or turning a knob (try operating a CAD tool or a DAW without keyboard and mouse). And for most reports/infographics, you mostly need to design a few dashboards and almost never change them, because those are your core metrics that you need to monitor. And the ability to sketch even a simple wireframe relies on a lot of knowledge that most people don't want to burden themselves with.
[−] Daishiman 31d ago
In the real world a verbal description to start off a design from a template is _very much_ a competitive advantage.

I dabble in music production and having a DAW to help me guide some parts of the process would be extremely useful to get me out of certain creative ruts.

[−] leoedin 31d ago
Yeah, it's crazy to think an opaque chatbot will be preferable to a well designed UI for most users. People don't like badly designed UIs, but I'm pretty sure most people under 40 prefer a well designed UI to a customer service agent. We call customer service because the website doesn't do what we want, not because we don't want to use the website.
[−] DrewADesign 31d ago
I usually find chatbot interfaces completely infuriating. I know the UX paradigm of click-reduction went the way of the dodo (and rightfully so, because it was based on bullshit research), but I think it's funny that completely removing the user's agency and visibility into any process, and turning 3-click processes into 200-keystroke processes, is the hot shit right now.
[−] hn_throwaway_99 31d ago
I'm glad this is the top comment. I'm ambivalent about a bunch of writing I've seen from Steve Blank - some of his stuff I've loved and some I thought was awful.

But this I just thought was vacuous. I agree with what you wrote, but more to the point, I didn't find any real advice about how a startup should actually change that passed my sniff test. I left the tech startup world about 2 years ago myself, and I'm glad I did, because I just think there are way fewer differentiable opportunities now. That is, even if I accept what Blank says is true, what are all these 2+ year old startups supposed to do - just create some model wrapper/RAG chatbot product like the million other startups out there?

Even in defense, like the article says, there are now a bajillion drone companies, and it looks like a race to the bottom. The most successful plan at this point just looks like the grifter plan, e.g. getting the current president to tweet out your stock ticker.

I'm honestly curious what folks think are good startup business plans these days. Even startups that looked they were "knock it out of the park" successes like Cursor and Lovable just seem like they have no moat to me - I see very few startups (particularly in the "We're AI for X!" that got a ton of funding in the past two years) with defensible positions.

[−] jnovek 32d ago
Are you familiar with Steve Blank? What you’re describing really isn’t his MO at all.
[−] alex_c 31d ago
I have a lot of respect for Steve Blank, but my heuristic by now is to ignore any breathless posts that state “teams are doing X with AI, if you are not doing the same you’re behind”.

The much more useful posts are “my team and I are doing X with AI”. Of course, the challenge there is that the ones who are truly getting a competitive edge through AI are usually going to be too busy building to blog about it.

[−] detourdog 31d ago
I really enjoyed reading his articles a while back. He wrote an article about Silicon Valley's roots in microchip development. I can't remember the details, but I reached out to point out how important Autonetics was to chip development, and that it was based in Los Angeles. From my perspective, what made Silicon Valley significant was its connection to Wall Street. I wanted to engage on the idea that venture capital might be the real product of Silicon Valley.

He could have ignored the email or engaged on the topic I introduced. Instead he sent me a wikilink to Autonetics. I was left with the feeling that he had no real interest in the topic he wrote about. It was really no big deal. He is a busy guy and doesn't need to engage with strangers. I never read anything by him again because I was left with the feeling he is just phoning these posts in.

[−] stickfigure 32d ago
I'm not, but this is not a great introduction. It's handwavy and makes the assumption that AI dev tools are much farther along than they are. I have seen this a lot lately; the farther up the management chain and farther away from putting hands on code, the more confident people seem to be in the power of AI tools.

For big complex real-world problems, and big complex real-world codebases, the AIs are helpful but not yet earth-shattering. And that helpfulness seems to have plateaued as of late.

I am extremely skeptical of posts like this.

[−] scyzoryk_xyz 31d ago
I will take a lot more hand-waving from the 70-something-year-old Stanford professor who co-created the far-up-the-chain management paradigms that run a good chunk of the economy. That context kinda changes things, but what do I know.
[−] skeeter2020 31d ago
Based on his own arguments a 70-something Stanford prof has no more knowledge, experience or credibility than someone who started 18 months ago.

These guys don't get to have it both ways.

[−] majormajor 31d ago
That thing he created says you should take your assumptions out into the real world and validate them, ya?

So hand-waving about how easy it is to have an MVP in days, without actual experience doing that, seems ironic.

Now, maybe he's saying this based on companies he's funded who've had great success with what he's saying. But it's curious that the only concrete example of a company mentioned is one that's six years old and not operating like that. And in fact, many of the ways he thinks that company went wrong seem completely unrelated to AI?

> Chris is now starting to raise his first large fundraising round. In looking at his investor deck I realized that while he’s been heads down, the world has changed around him – by a lot. The software moat he built with his 5-year investment in autonomy development is looking less unique every day. Autonomous drones and ground vehicles in Ukraine have spawned 10s, if not 100s, of companies with larger, better funded development teams working on the same problem.

> While Chris has been fighting for adoption for this niche market (one that is ripe for disruption, but the incumbents still control), the market for autonomy in an adjacent market – defense – has boomed. In the last five years VC Investment in defense startups has gone from zero to $20 billion/year. His product would be perfect for contested logistics and medical evacuation. But he had literally no clue these opportunities in the defense market had occurred.

> While there’s still a business to be had (Chris’s team has done amazing system integration with an existing airborne platform that makes his solution different from most), – it’s not the business he started.

"Being heads down without paying enough attention to the market for 6 (!!) years" doesn't seem like an AI-caused issue.

Meanwhile, the core suggestion doesn't seem to fix that, it seems almost completely perpendicular.

> You can now test multiple versions of the same business at once (or simultaneously be testing different businesses). While you can be simultaneously testing five pricing models, ten messages or twenty UX flows, the “user interface” may no longer be a screen at all. Testing might be to find prompt(s) to AI Agent(s) deliver needed outcomes.

Ok, but this person didn't even seem to be paying enough attention to the market for the one version he already had?

And while this claim about parallel development being a huge unlock is the most interesting thing, it also sounds a bit glib. Getting your foot in the door is the hardest thing early on, and now you're trying to run six versions of your company at once? Each time you get a foot in the door sales-wise, are you trying to make them use all 6 versions, or are you only gonna get feedback on 1? Would you want to pay money to be a beta tester of 6 different products simultaneously, with reason to believe that 5 of them will probably evaporate overnight?

[−] antonvs 31d ago
So you’re saying he’s majorly complicit in the ultracapitalist dystopia the US has turned into?
[−] gottorf 31d ago

> the ultracapitalist dystopia the US has turned into

Seriously, where do ideas like this come from? An "ultracapitalist" country that has about as much redistributive social spending as other developed economies[0]? A "dystopia" that millions of people from all over the world clamor to get into every year?

[0]: https://www.piie.com/publications/policy-briefs/2016/true-le...

[−] all2 31d ago
This is disingenuous at best.

The man is an economist, not a crony operating at the federal level (one does not imply the other, and I know nothing of the man's background).

[−] stickfigure 31d ago
So in other words... he doesn't actually use the tools he's firmly convinced will automate the building of software.

I don't agree with the parent; I think capitalism is doing a lot of great things for us and will continue to, even with AI. But man I'm tired of these hot takes from people with limited practical experience.

[−] FreakLegion 31d ago
Steve Blank the startup whisperer and Steven Blank the economist are two very different people.
[−] PaulHoule 31d ago
I think the AI backlash is strong enough that "AI-Free" might be a powerful marketing tool, whether that is fair or not.
[−] dist-epoch 31d ago
You are assuming a linear future while we are in an exponential.

One year ago models could barely write a working function.

[−] kdazzle 32d ago

> the bottleneck is no longer engineering, it’s ____

90% of blog articles created in the last two years are probably dead on arrival

[−] kristianc 32d ago
Well of course it is, most startups are dead on arrival.

The big pinch of salt I throw in with advice like this though is that startup failure rate hasn't dramatically shifted despite two decades of lean startup methodology, accelerators, and an entire cottage industry of startup advice. It's never the fault of the framework, mind you.

[−] ubauba 31d ago
I think Blank's story is about a guy who missed a $20B market shift in defense VC, not a guy who failed to adopt AI.

He's not saying "add AI to your product" or "use AI or die" but more that AI has shifted institutional assumptions about tech stacks, defensibility and fundability. The bottleneck moved up the stack from engineering to judgment, insight and design.

Chris lost because he was heads down building while $20B in defense VC was flowing into his exact problem space and he didn't build the boat to capture that wave.

[−] hyperpape 32d ago

> Founders who started pre-2025 typically have built a technical stack optimized for a world where software development was bespoke and expensive.

Of all the things that AI has changed, tech stacks aren't one of them. The bots will gladly write Typescript, Java, Python, Rust, what have you. They could not give less of a shit.

[−] tyleo 32d ago
I almost feel like this is always true. Every startup is dead on arrival. Most of them fail.

You’ve always needed to constantly learn and innovate to launch a successful business.

[−] givemeethekeys 32d ago
"You better be doing something in AI" applies to Startups - businesses that are expected to spend every penny as quickly as possible, to meet the metrics needed to raise the next (bigger) round of funding.

It is also not the same as, "If you want to be a profitable company...". For that you need to somehow make more money than you are spending.

[−] notarobot123 32d ago
Building SaaS businesses has become a whole lot less capital intensive. Solo founders can go much further than they've been able to previously. New startups probably don't need funding anymore.

VC for conventional SaaS is dead.

[−] psychoslave 31d ago
If you started something at some point, many assumptions are no longer true, and might possibly never have been true at any moment.

That said, if you believe the universe exists, chances are nonzero that you are correct. But solipsism might actually be right.

In case of doubt, remember that your memory might be mere illusions.

[−] bamazizi 32d ago
This article resonated with me, especially since I've noticed the fundraising hoopla in my circle has dramatically dropped. Either investors are tightening their belts, or founder-investor fit has crossed into the realm of disillusionment.
[−] mvkel 31d ago
Interestingly, this was always true, it's just made more obvious when it takes less time to bring a product to a potential customer.

Launching a product was never the finish line; it was always the start line. But technical founders could trick themselves into thinking that building a product was building a business.

The same "Lean Startup" rules apply. Build something and get it in front of real people who will pay you for the thing. If they won't, back to the drawing board.

The only real "shift" I've seen is that most startups don't actually need VC at all. That's a great thing.

[−] mbesto 31d ago

> The bottleneck is no longer engineering. It’s moving up the stack to judgment, customer insight for desired outcomes and distribution.

My posit is this: engineering never was the bottleneck, or at least hasn't been for 10 years now. Frameworks and best practices are pretty well known at this point. AI is simply exposing this reality to engineers' faces.

Proof point: at most publicly traded SaaS-first businesses, S&M spend equals their R&D spend, if not dwarfs it. You're going to see this get even more lopsided going forward.

[−] fred_is_fred 32d ago
Chris has been so focused for 5 years that he had no clue about anything else going on in the industry?
[−] dangus 31d ago
I think this is a lot of fluff supported by a weak anecdote.

Chris' company's assumptions are no longer true, but that doesn't apply to everyone's startup. This is mostly a Chris problem.

No, not every product can just be a chat window like in the silly little screenshot.

If the author actually wrote software they'd realize that, no, AI isn't speeding up development by any more than a modest amount. It's great that we have it and it's removing tedium but it has replaced zero engineers at my company or at any other company of anyone else I know.

And no, your company laying off some people isn't because of AI, your startup idea not getting funding is not because of AI, it's because we've been in a regular old recession which is now a developing oil crisis. Interest rates aren't 0% so nobody wants to lend money to infinite startups.

[−] amoorthy 31d ago
"a pricing model based on seats, a product roadmap built around features rather than outcomes [is outdated]"

I disagree with this.

On pricing, I get that agents and tokens can scale in a way that's unrelated to # of users. But for much SaaS software, AI remains helpful to a human, and the human remains the receiver of value. Seat-based pricing is easy to understand, and you can always layer in token/agent cost thresholds.

On features vs. outcomes, the latter is hard to define and measure in many industries. In marketing SaaS, which I know well, you can't often tell what outcome to expect. You have to try a lot of ideas and some will hit. No way a SaaS vendor can guarantee that.

[−] wavemode 31d ago

> https://i0.wp.com/steveblank.com/wp-content/uploads/2026/03/...

The author didn't spend more than, maybe, 30 seconds thinking this through? Information I could've gotten in 3 seconds by opening a screen and looking at a line item, I now have to extract by writing a paragraph to an AI agent (and cross my fingers that nothing I said was ambiguous or misunderstood). And that's supposedly an upgrade?

[−] 0xAntonioo 32d ago
I read it less as “every startup now needs an AI feature” and more as “your assumptions expire faster than they used to.” That part feels true even if the examples are a bit overstated imo...
[−] jumploops 32d ago

> How can we throw away years of work?

This trap has killed many startups, well before AI.

Now that code is cheaper to write, hopefully it becomes less of a problem?

In either case, founders should never fall in love with their solutions.

[−] apparent 32d ago
Seems like the headline should have been "is now dead on arrival". As currently written, it fails to convey the temporal aspect that is the focus of the blog post.

It also fails to convey that he's actually only talking about startups that were created 2+ years ago, rather than the many AI startups founded in the last 2 years.

[−] elzbardico 31d ago

> "And if your competitor’s product does the task automatically while yours still waits for a human click, you no longer have a competitive product. The next generation of applications won’t just put information on a screen, they’ll act just like an employee."

Oh the joy of unbridled optimism!

[−] Escafati 31d ago
One assumption that's also changed: small teams no longer need dedicated QA. Tools like Autonoma (open source, getautonoma.com) use AI agents to generate and run E2E tests in real browsers, and tests self-maintain as your app evolves. One of the things making the solo-founder path more viable.
[−] Animats 31d ago
Note the mention of "systems of record" being unsuitable for the present level of AI. The real question is whether the costs of AI mistakes and hallucinations can be dumped on some external party who can't impose costs on you. If not, there's a problem.
[−] touristtam 30d ago
This blog is borderline unreadable - is there a technical reason this is the case? Or the author isn't able/willing to update his wordpress website?
[−] syngrog66 31d ago
depends on assumptions. that's the load-bearing element. 99.99% of what was true then is still true. it's mostly on the fractal churning edges where hyped change happens. things flip-flop too:

2021-2024: good time in US for EV startup

2025: terrible time in US for EV startup

2026 March/April: AWESOME time in world for EV startup

focus on fundamentals, not flakey ephemerals

2020: wise to have smart elite software engineers on your team

2021: ditto

2022: ditto

2023: ditto

2024: ditto (is this when ChatGPT launched? don't care. snore)

2025: ditto (what are YC/HN/VC hyping now? snore)

2026: ditto

2027+: ditto, likely

[−] aristofun 31d ago
It’s sad but at the same time reassuring to see more and more of your childhood “heroes” revealing their ignorance and lack of wisdom.
[−] jart 31d ago
If it was stretched before it's definitely snapped now. The only place I see convincing opportunity is resources and commodities.
[−] jopsen 32d ago
Not being AI focused might mean fewer competitors.
[−] MrSkelter 30d ago
Anyone who writes that VC investment in defense was zero five years ago is neither informed nor credible.
[−] hapless 31d ago
the best/worst part of this is that it is obviously not written by steve blank. it does not look, read, or track like a steve blank blog piece

he obviously generated large parts of this with claude or chatgpt or whatever.

i don't know what that means for the rest of us, but boy it's a big ole spike of signal

[−] kayson 32d ago

> Now, before you build a physical prototype, you can simulate more design variants, create digital twins, and stress-test assumptions earlier and much cheaper than before.

Hahahahahahahaha no you can't. The rise of LLMs has done little to nothing in this area because it's very much compute-limited. Digital-twins and other ML-based strategies predate ChatGPT by a long shot. There are definitely places in hardware design where LLMs and agentic workflows will help, but that's largely because the existing tooling is utter garbage, and now the industry has a fire under its ass to make things automatable so they can build their own agents.

[−] thekuanysh 32d ago
I dig MPO as a term and that's exactly what I think of implementing agentic systems.
[−] VladVladikoff 31d ago
Why not both? When the chat interface fails, you still need a UI to fix things.
[−] rvz 32d ago
It used to be 90% of startups would unfortunately fail.

Now with AI, it is likely going to be 98%.

[−] Noaidi 31d ago
It is insane to me that anyone could look at the US Economy right now and think this was a good time to start a business. Between the war, AI, a pending economic recession (look at bond prices), all I see is failure. Maybe a funeral home, but that is all I can think of.
[−] Schlagbohrer 31d ago
" In the last five years VC Investment in defense startups has gone from zero to $20 billion/year. " Really? No investment in weapons startups 5 years ago? Not even by the CIA-backed VC firms or other SoCal weapons manufacturer networks?
[−] mannyv 31d ago
'Design for automation.'
[−] ajaystream 31d ago
[dead]
[−] danish00111 31d ago
[dead]
[−] laughing_abder 31d ago
[flagged]
[−] operatingthetan 32d ago
I find the story of a startup founder who entirely missed the developments of the last two years and did absolutely nothing with AI difficult to believe. If that actually happened it's the exception, not the rule. Most startup founders are way more in-tune with AI developments. This makes it sound like Chris (the mentioned founder) is behind marketing people who use LLM bots to post slop on LinkedIn.

In that case, yes their startup is most certainly DOA.

[−] amazingamazing 32d ago
Not if you’re launching a startup based in the real world. Tell me how AI will make a laundromat business DOA?

If your business is selling services at 40% margin that are entirely digitally based, then maybe you’ll need to cut some margin, sure.