Disagree with the overall argument. Human effort is still a moat. I've spent the past couple of months creating a codebase that is almost entirely AI-generated. At this pace I've gotten way further than I would have otherwise, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.
There's some truth in there that judgement is as important as ever, though I'm not sure I'd call it taste. I'm finding that you have to have an extremely clear product vision, along with extremely clear language to describe that product, for AI to be used effectively. Know your terms, know how you want your features split up into modules, and know what you want the interfaces of those modules to be.
Without the above, you run into the same issue devs would run into before AI - the codebase becomes an incoherent mess, and even AI can't untangle it because the confusion gets embedded into its own context.
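To make that concrete, here is a minimal sketch of the kind of interface contract I mean (the module and type names are hypothetical, purely for illustration):

    // billing.ts - the public surface of a "billing" module, decided by a human
    // before asking the model for any implementation.
    export interface Invoice {
      id: string;
      customerId: string;
      lineItems: { description: string; amountCents: number }[];
      issuedAt: Date;
    }

    export interface BillingService {
      // Creates a draft invoice; must not send email or charge a card.
      createDraft(customerId: string, lineItems: Invoice["lineItems"]): Promise<Invoice>;
      // Finalizes and queues the invoice for delivery; idempotent per invoice id.
      finalize(invoiceId: string): Promise<Invoice>;
    }

Once the vocabulary and module boundaries are pinned down like this, the generated code tends to stay coherent across sessions instead of drifting.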
I feel like you're pretty strongly agreeing that taste is important: "I'm finding that you have to have an extremely clear product vision..."
A clear product vision that you're building the right thing in the right way -- this involves a lot of taste to get right. Good PMs have this. Good engineers have this. Visionary leaders have this....
The execution of using AI to generate the code and other artifacts is a matter of skill. But without the taste that you're building the right thing, with the right features, in a revolutionary way that will be delightful to use....
I've looked at three non-engineer vibe-coded businesses in the past month, and can tell that without taste, they're building a pretty mediocre product at best. The founders don't see it yet. And like the article says, they're just setting themselves up for mediocrity. I think any really good PM would be able to improve all these apps I looked at almost immediately.
> Disagree with the overall argument.
It's leaning in a good direction, but the author clearly lacks the language and understanding to articulate the actual problem, or a solution. They simply don't know what they don't know.
> Human effort is still a moat.
Also slightly off the mark. If I sat any one of you down with all the equipment and supplies to make a pair of pants, the majority of you (by a massive margin) would produce a terrible pair of pants.
That's not due to a lack of effort, but rather a lack of skill.
> judgement is as important as ever,
Not just important: critical. And it is a product of skill and experience.
Usability (a word too rarely used), cost, and utility are the things people want in a product. Reliability is a requirement: to quote The Social Network, "we don't crash". And if you want to keep pace, maintainability.
> issue devs would run into before AI - the codebase becomes an incoherent mess
The big ball of mud (https://www.laputan.org/mud/) is 27 years old, and still applies. But all codebases have a tendency to acquire cruft (from edge cases) that lacks good inline explanations and durable artifacts. Find me an old codebase and I bet you we can find a comment referencing a bug number in a system that no longer exists.
As an industry, we might need to be honest that we also need to be better librarians and archivists.
That said, the article deserves credit: it is at least trying to start the conversations we should be having and are not.
> One of the most useful things about AI is also one of the most humbling: it reveals how clear your own judgment actually is. If your critique stays vague, your taste is still underdeveloped. If your critique becomes precise, your judgment is stronger than the model output. You can then use the model well instead of being led by it.
Something I find that teams get wrong with agentic coding: they start by reverse engineering docs from an existing codebase.
This is a mistake.
Instead, the right train of thought is to ask "what would perfect code look like?" and then meticulously describe to the LLM what "perfect" means, so that it shapes every line that gets generated.
This exercise is hard for some folks to grasp because they've never thought much about what well-constructed code or architectures look like; they have no "taste" and thus no ability to precisely dictate the framework for "perfect" (yes, there is some subjectivity that reflects taste).
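As a made-up illustration of what "meticulously describe perfect" can look like in practice, the rules you hand the model are about shape, not features (these particular rules are just an example, not a prescription):

    // CONVENTIONS.ts - the target shape every generated module must match.
    // 1. Pure core: business logic takes plain data in, returns plain data out.
    // 2. Effects at the edges: I/O lives only in thin adapter functions.
    // 3. Errors are values: no exceptions cross a module boundary.
    export type Result<T, E> =
      | { ok: true; value: T }
      | { ok: false; error: E };

    // Example of the required shape: a pure, easily testable function.
    export function computeDiscount(totalCents: number, loyaltyYears: number): Result<number, string> {
      if (totalCents < 0) return { ok: false, error: "negative total" };
      return { ok: true, value: loyaltyYears >= 3 ? Math.round(totalCents * 0.1) : 0 };
    }

Whether or not these rules match your idea of "perfect" is beside the point; the point is that they exist, precisely, before the first line gets generated.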
I think there is a parallel to what happened to the watch market with the quartz crisis. The same way quartz led to the decline of Swiss movements, LLMs are going to have a huge effect on the developer market. I hypothesize that in the future there will be a micro-segment which cares about quality, taste, exclusivity, etc., the same way the luxury watchmakers found a niche. My perspective is that this "taste" or "quality" will not be a moat. Instead, it will be a niche where only a small segment cares about it.
If you're properly bitter-lesson-pilled then why wouldn't better models continue to develop and improve taste and discernment when it comes to design, development, and just better thinking overall?
Speed and distribution aren't a long-run moat because they are something AI can cannibalize in a platform. Eventually they will coexist on your distribution base and offer it at a lower cost than you. It's a moat if it holds up before you exit at a high valuation... which a lot are set up to do.
Taste: that's interesting. There is an argument there. It's hard to keep in the long run and requires a lot of reinvestment in new talent.
Proprietary data: Yes, very much so.
Tradecraft: Your new shiny system will still have to adhere to the methods of old, clunky, real-world systems. Example: evidence for court. Methods for investigations. This is going to be industry-specific, but you'd be surprised how many there are. This is long-term.
Those who have the moat should focus on short bursts of meaningful changes, as they will rely heavily on gaining trust in established systems. In those places it's more about trusting what's going on than doing it faster and better, so you want trust + faster and/or better.
Ah, the classic "we'll ship production to China and just do design and marketing in the US, because we have taste on what to build, and China doesn't". That worked really well...
IMO, taste has always been one of the strongest moats because we struggle to define what good taste even is. We know it when we see it, but other than pointing to examples, we can’t really describe it in general terms. I still remember a line from Paul Graham’s Hackers and Painters where he was describing the difficulty of hiring software engineers. He says he was talking with a colleague after an interview and remarked (I’m paraphrasing), “I know he can write code. But does he have taste?” Taste is something we all want our colleagues and products and services to have, but defining it is really difficult. And yes, I fully agree with the writer that it’s more important than ever in this age of AI where generation is cheap.
I agree with the author and I think this is turning everyone into an investor. How I view (financial) investing as a career is that it is less manual and more taste oriented. You put your stake in the things you feel will work out and taste here just means the judgement required to make good calls. A person with good taste would have a better idea of capital allocation.
What AI is doing is making all of us investors instead of doers. "Doing" is no longer something praiseworthy - what will become praiseworthy is how your taste has turned out in hindsight.
I'm seeing this at work. More or less everyone can do tasks well. But what's harder now is the more subtle task of taking bets and seeing them play out over a few months or years.
> What you notice
> What you reject
> How precisely you can explain what feels wrong
I think it's just as important, if not more, to be able to explain what is right and what you accept. Having well-defined acceptance criteria also fits into existing project management frameworks, and these criteria are generally based on asking users. The article mentions, "You do not get a spreadsheet that tells you which sentence will make a customer care, which feature is worth a month of engineering time, or which design crosses the line from polished to forgettable." And this is why you talk to your customers.
I'm not sure this is only true of LLMs - most corporate sites are very similar. Professional designers learn very quickly not to be too quirky or unique.
I think this is symptomatic of humans - our comfort zone is the "7 out of 10" morass of similarity and blandness. We are herd animals. LLMs are just reflecting this.
And I don't think our tendency to herd will allow us to select the quirky outliers even if that's the only distinguishing characteristic of non-LLM output.
I think "taste" is definitely an overused meme at this point, its like tech twitter discovered this word in 2024 and never stopped using it (same with "agency", "high leverage", etc).
Having read the article, I think I see the author's argument (*). I think "taste" here in an engineering context basically just comes down to an innate feeling of what engineering or product directions are right or wrong. I think this is different from the type of "taste" most people here are talking about, though I'm sure product "taste" specifically is somewhat correlated with your overall "taste." Engineering "taste" seems more correlated with experience building systems and/or strong intuitions about the fundamentals. I think this is a little different from the totally subjective, "vibes based taste" that you might think of in the context of design or art.
Now where I disagree is that
1. "taste" is a defensible moat
2. "taste" is "ai-proof" to some extent
"Taste" is only defensible to the extent that knowing what to do and cutting off the _right_ cruft is essential to moving faster. Moving faster and out executing is the real "moat" there. And obviously any cognitive task, including something as nebulous as "taste," can in theory be done by a sufficiently good AI. Clarity of thought when communicating with AI is, imo, not "taste."
Talking specifically about engineering - the article talks about product constraints and tradeoffs. I'd argue that these are actually _data_ problems, and once you solve those, tradeoffs and solving for constraints go from being a judgement call to being a "correct" solution. That is to say, if you provide more information to your AI about your business context, the less judgement _you_ as the implementer need to give. This thinking is in line with what other people here have already said (real moats are data, distribution, execution speed).
I think there's something a bit more interesting to say about the user empathy part, since it could be difficult for LLMs to truly put themselves in users' shoes when designing some interactive surfaces. But I'm sure that can be "solved" too, or at least done with far less human labor than it takes today.
In general though, tech people are some of the least tasteful people, so it's always funny to see posts like this.
I use AI for code, and we review that code and write tests ourselves first, which the AI cannot touch. For writing we hardly ever use it, unless we know the requester is incompetent and will never read it anyway; then it is a waste of time to write anything ourselves, but they expect something substantial and nice-looking to tick a few boxes. It is great for that. A large bank with 40 layers of management, all equally incompetent, asked for an "all-encompassing technical document vault"; one of them sent an "expectation document" which contained so much garbage as to show they did not even know what they were asking for, but thousands of pages was the expectation. So sure, Claude will write that in an hour, and NotebookLM will add 100 slide decks for juiciness. At first sight it looks amazing; it's probably mostly accurate as well, but who knows; they will never ever read it; no one will. We got the 20m+ project (with many opportunities to grow much larger). Before, that was only in reach of the huge consultants (where everyone in those management levels probably worked before), who we used to lose against. Slop has its purpose.
This cope is insane. Even simple projects generated by Claude are riddled with bugs. And there’s no way in hell it could generate a larger scoped project without a lot of manual human intervention. But yea, TODO apps and trivial calculators are effectively “solved”. Same with leetcode. I guess that’s probably the limit of many people’s imagination these days.
I get really skeptical when someone says "AI does most of the work just by asking simple prompts."
Don't get me wrong, I use AI too for daily tasks and my job as a programmer, and it definitely helps me get the job done faster (sometimes it's almost instant).
But it still requires a lot of effort for complex or unconventional tasks.
When I hear "AI does most of my job", I think of DOGE employees who use AI to identify "waste of money".
All they do is ask the AI very lazy prompts like "list DEI projects" against a list of government-sponsored projects with brief descriptions.
They don't even define what DEI means.
And they just cut all the projects the AI flagged. [1]
I'm sure their "productivity" is very high.
With a single prompt they can "complete a job" that would otherwise require days, weeks, or months of investigation.
I also think the results have a strong tendency to flag a project as DEI, because "is this DEI" is a question often asked by racists and misogynists on right-wing websites, and often the answers are "Yes", which likely creates a strong bias.
[1] https://www.9news.com/article/news/politics/doge-chatgpt-dei...
> AI and LLMs have changed one thing very quickly: competent output is now cheap.
If you're working on something not truly novel, sure.
If you're using LLMs to assist in e.g. Mathematics work on as-yet-unproven problems, then this is hardly the case.
Hell, if we just stick to the software domain: Gemini3-DeepThink, GPT-5.4pro, and Opus 4.6 perform pretty "meh" writing CUDA C++ code for Hopper & Blackwell.
And I'm not talking about poorly-spec'd problems. I'm talking about mapping straightforward mathematics in annotated WolframLanguage files to WGMMA with TMA.
Taste is cheap. Taste (or at least a rudimentary version of it) is something you start with at the beginning of your career. Taste is the thing that tells you "this is fucking cool", or "I don't know why, but this just looks right". LLMs are not going to replicate that, because they aren't human and taste isn't something you can manufacture. Now - MAKING something that "looks right" is hard, and because LLMs are churning out the middle, the middle is moving somewhere else. Just like rich people during the summer.
Followed by an entire AI-generated fluff piece https://www.pangram.com/history/347cd632-809c-4775-b457-d9bc...
Flagged
evergreen.
There have always been ways to "flatten the middle" - by outsourcing, by using pre-packaged goods, with industrialization...
So yeah, we've always loved handcrafted, exquisite things; there's never been a "moat" in the middle.
It doesn't mean you can't make a good living without a moat though
Taste may be kind of important because it helps toward the truly important thing, which is skin-in-the-game.
But also, with the right skin-in-the-game, you don't even need "taste." You just need real life consequences, which we don't do enough in tech.
Discussion: https://news.ycombinator.com/item?id=47089907
Just google "taste is the new moat"
Doesn't deserve to be on the front page.
> Good Taste the Only Real Moat Left
YC startups are doomed
- Just think about scientific research. Lots of data analysis results are not cheap to get.
- Even vibe coding is difficult: you need to think very hard about what you want.
What is cheaper now are some of the building blocks. We just have a new definition of building blocks. But putting the blocks together is still hard.
When the team presented their design for Apple's DVD-burning software, Steve Jobs stopped them, drew a square on the whiteboard and said "anything the user drags into this square gets written to the DVD" - that is taste!
If one disagrees with that statement, there is nothing of value to extract from this article.
Words are cheap, bullet points are cheap.
> AI and LLMs have changed one thing very quickly: competent output is now cheap.
Already wrong.
> That is why so much AI-generated work feels familiar:
This was already a complaint people had before AI. Like when logos and landing pages all used to look the same. Or coffee shops all looking the same.
https://youtu.be/jg1WUOxY6Cg?si=0ajVvgKnyuSz0e2Y
You already see it on Facebook with all the AI-generated meme sharing... taste is being eroded there.
Distribution, Data (Proprietary) and Iteration Speed.
Very successful companies have all three: Stripe, Meta, Google, Amazon.
there's a reason people prefer aged products - they gain a character of their own, similar to art or works produced under constraints.
> A practical loop for training taste