Disagree with the overall argument. Human effort is still a moat. I've been spending the past couple of months creating a codebase that is almost entirely AI-generated. I've gotten way further than I would have otherwise at this pace, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.
There's some truth in there that judgement is as important as ever, though I'm not sure I'd call it taste. I'm finding that you have to have an extremely clear product vision, along with an extremely clear language used to describe that product, for AI to be used effectively. Know your terms, know how you want your features to be split up into modules, know what you want the interfaces of those modules to be.
Without the above, you run into the same issue devs would run into before AI - the codebase becomes an incoherent mess, and even AI can't untangle it because the confusion gets embedded into its own context.
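To make "know your interfaces" concrete, here is roughly the level of specificity I mean before letting the AI write any implementation. A minimal sketch; the feature, names, and module split are hypothetical, purely for illustration:

    // Vocabulary and module boundaries pinned down by a human first.
    // "Ingestion" owns fetching raw sources; "Indexing" owns search entries.
    // The AI fills in implementations behind these interfaces; it does not
    // get to invent the shapes.

    export interface RawDocument {
      sourceId: string;
      body: string;
      fetchedAt: Date;
    }

    export interface IndexEntry {
      term: string;
      sourceId: string;
      offset: number; // character offset of `term` within `body`
    }

    export interface Ingestor {
      fetch(sourceId: string): Promise<RawDocument>;
    }

    export interface Indexer {
      index(doc: RawDocument): Promise<IndexEntry[]>;
    }

The point is that the nouns are fixed before any generation happens, so every later prompt can use them unambiguously.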
I feel like you're pretty strongly agreeing that taste is important: "I'm finding that you have to have an extremely clear product vision..."
A clear product vision, that you're building the right thing in the right way -- this involves a lot of taste to get right. Good PMs have this. Good engineers have this. Visionary leaders have this....
The execution of using AI to generate the code and other artifacts is a matter of skill. But without the taste to know that you're building the right thing, with the right features, in a revolutionary way that will be delightful to use....
I've looked at three non-engineer vibe-coded businesses in the past month, and can tell that without taste, they're building a pretty mediocre product at best. The founders don't see it yet. And like the article says, they're just setting themselves up for mediocrity. I think any really good PM would be able to improve all these apps I looked at almost immediately.
The way I understood it, the original article is saying the _only_ remaining differentiator is taste and the comment you replied to is saying "wrong, there are also other things, such as effort".
I don't necessarily interpret the comment you replied to as saying that "taste is not important", which seems like what you are replying to, just that it's not the only remaining thing.
I agree that taste gets you far. And I agree with all the examples of good taste that you brought up.
But even with impeccable taste, you still need to learn, try things, have ideas, change your mind, etc. Putting all of that in the bucket of "taste" is stretching it.
However, having good taste when putting in the effort gets you further than effort alone. In fact, effort alone gets you nowhere, and taste alone gets you nowhere. Once you marry the two, you get somewhere.
Aren't you just making their point stronger? Effort is what is being replaced here: with some taste and a pile of AI (formerly effort), you can go to the moon.
In other words, it requires a tremendous amount of effort to fully communicate your tastes to the AI. Not everybody wants to expend the time or mental effort doing this! (Once we have more direct brain/computer interfaces, this effort will go down, but I expect it will not be eliminated fully)
This is the second time in two days I've seen a subthread here with folks seemingly debating whether or not defining and communicating requirements counts as work if the target of those requirements is an LLM system.
I'm confused as to why this is even a question. We used to call this "systems analysis" and it was like... a whole-ass career. LLMs seem to be remarkably capable of using the output, but they're not even close to the first software systems sold as being able to take requirements and turn them into working code (for various definitions of "requirements" and "working").
I'm also skeptical that direct brain interfaces would make this any less work; I don't think "typing" or "English" are the major barriers here, any more than "drafting" is the major barrier to folks designing their own cars and houses... Any fool thinks they know what they need!
Not really. The effort required to produce the same result has declined, but it has been on the decline for many decades already. That is nothing new. Of course, in the real world, nobody wants the same result over and over, so expectations will always expand to consume all of your available effort.
If there is some future where your effort has been replaced, it won't be AI that we're talking about.
Effort is still (and probably will always be) the hardest thing to replace.
Any time someone says AI can do this, and do that, and blah blah. I say ok, take the AI and go do that.. the barrier to entry is so low you should be able to do whatever you want. And they say, oh, no, I don't want to do that (or can't, or whatever). But it should be able to be done.. And I just nod, and sip my drink, and ...
.. and I'd like to point out these are seasoned professionals that I've seen put effort into other things in their careers, who have the capacity to literally do whatever it is they want to do, especially now.. and they choose not to do so, at least not without someone guaranteeing them a paycheck or telling them they have to do it to survive.
> I've looked at three non-engineer vibe-coded businesses in the past month, and can tell that without taste, they're building a pretty mediocre product at best.
Are you doing this altruistically for friends - or as a consultant?
Both a) to help a friend out and b) to help non-technical founders I've met at some Meetups/AI events to launch their product. My short-term goal is to put together a checklist/cheatsheet of all the technical things someone needs to do to launch a business, because it's not just having a webapp running on Vercel with Supabase. And if they do have an app, whether it's a complete mess or not.
I think the solo-founder hype is overplayed unless the person has the right skills, has even worked at a tech company, and knows what they're getting into. Alerting and monitoring, for example, is one of like 30 things they should be aware of.
> Disagree with the overall argument.
It's leaning in a good direction, but the author clearly lacks the language and understanding to articulate the actual problem, or a solution. They simply don't know what they don't know.
> Human effort is still a moat.
Also slightly off the mark. If I sat any of you down with all the equipment and supplies to make a pair of pants, the majority (by a massive margin) would produce a terrible pair of pants.
That's not due to lack of effort, but rather lack of skill.
> judgement is as important as ever,
Not just important: critical. And it is a product of skill and experience.
Usability (a word too often unused), cost, and utility are all things that people want in a product. Reliability is a requirement: to quote The Social Network, "we don't crash". And if you want to keep pace, maintainability.
> issue devs would run into before AI - the codebase becomes an incoherent mess
The big ball of mud (https://www.laputan.org/mud/) is 27 years old and still applies. But all code bases have a tendency to acquire cruft (from edge cases) that doesn't have good in-line explanations and lacks durable artifacts. Find me an old code base and I bet you we can find a comment referencing a bug number in a system that no longer exists.
As an industry, we might need to be honest that we also need to be better librarians and archivists.
That having been said, the article should get credit, it is at least trying to start to have the conversations that we should be having and are not.
This is an underrated comment. You could have the best product out there, but AI has not only lowered the effort for competitors, it has also flooded the traditional ways to get your product known, from outbound sales to content marketing. Sometimes it makes you question whether there are customers anymore.
You make a really salient point about having a clear vision and using clear language. Patrick Zgambo says that working with AI is spellcasting; you just need to know the magic words. The more I work with AI tools, the more I agree.
Now, figuring out those words? That's the hard part.
Jensen Huang said he commands thousands of AGIs but still feels pretty useful.
Founders and CEOs are still needed to set direction, bring unique vision to life, and build relationships for long-term partnerships -- as long as humans still control the economy, that is.
> Without the above, you run into the same issue devs would run into before AI - the codebase becomes an incoherent mess, and even AI can't untangle it because the confusion gets embedded into its own context.
We have a term for this and it is called "Comprehension Debt" [0] [1].
[0] https://arxiv.org/abs/2512.08942
[1] https://medium.com/@addyosmani/comprehension-debt-the-hidden...
I'm not sure I agree the term applies. Comprehension debt, as I understand it, is just the dependency trap mentioned in that arxiv paper you linked: the AI might have written something coherent or not, but you as a human evaluator have little means to judge it, because you've relied on it too much and the scope of the code has exceeded the feasibility of reading it manually.
When I talk about an incoherent mess, I'm talking about something different. I mean that as the codebase grows and matures, subtle details and assumptions naturally shift. But the AI isn't always cleaning up the code that expressed those prior assumptions. These issues compound to the point that the AI itself gets very confused. This is especially dangerous for teams of developers touching the same codebase.
I can't share too much detail here, but some personal experience I ran into recently: we had feature ABC in our platform. Eventually another developer came in, disagreed with the implementation, and combined some aspects of it into a new feature XYZ. Both were AI generated. What _should_ have happened is that feature ABC was deleted from the code or refactored into XYZ. But it wasn't, so now the codebase has two nearly identical modules ABC and XYZ. If you ask Claude to edit the feature, you've got a 50/50 shot on which one it chooses to target, even though feature ABC is now dead, unreachable code.
You might say that resolving the above issue is easy, but these inconsistencies become quite numerous and unsustainable in a codebase if you lean on AI too much, or aren't careful. This is why I say that having a super clear vision up front is important, because it reduces this kind of directional churn.
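For what it's worth, the dead-ABC problem is at least mechanically detectable. A rough sketch in TypeScript, assuming a conventional src/ layout and relative imports; it is regex-based, so treat hits as leads to review, not proof:

    // List .ts files under src/ that no other file imports.
    // Entry points will show up too, so the output needs human judgement.
    import { readdirSync, readFileSync } from "node:fs";
    import { join, dirname, resolve } from "node:path";

    function walk(dir: string): string[] {
      return readdirSync(dir, { withFileTypes: true })
        .flatMap((e) => (e.isDirectory() ? walk(join(dir, e.name)) : [join(dir, e.name)]))
        .filter((f) => f.endsWith(".ts"));
    }

    const files = walk("src");
    const imported = new Set<string>();

    for (const file of files) {
      const source = readFileSync(file, "utf8");
      // Naive: only catches relative `from "./x"` imports, no extension games.
      for (const m of source.matchAll(/from\s+["'](\.[^"']+)["']/g)) {
        imported.add(resolve(dirname(file), m[1]) + ".ts");
      }
    }

    for (const file of files) {
      if (!imported.has(resolve(file))) console.log("never imported:", file);
    }

Run something like this over the repo every so often and the ABC/XYZ twins tend to surface on their own.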
> This is why I say that having a super clear vision up front is important, because it reduces this kind of directional churn.
I'm on my 6th or 7th draft of a project. I've been picking away at this thing since the end of January; I keep restarting because the core abstractions get clearer and clearer as I go. AI has been great in this discovery process because it speeds up iteration so much. I know it's starting to drift into a mess when I no longer have a clear grasp of the work it's doing. To me, this indicates that some mental model I had and communicated was not sufficiently precise.
> I've gotten way further than I would have otherwise at this pace, but it was still a lot of effort, and I still wasted time going down rabbit holes on features that didn't work out.
By the time I'm done learning about the structure of the code that AI wrote, and reviewing it for correctness and completeness, it seems to be as much effort as if I had just written it myself. And I fear that will continue to be the reality until AIs can be trusted.
Well, that is not how anyone is doing agentic coding, though. That sounds like just a worse version of traditional coding. Most people are building test suites to verify correctness and not caring about the code.
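To be concrete about that workflow, a tiny sketch using node:test; the slugify module is a made-up example. The human writes and owns this file, and the agent is only allowed to edit the implementation it exercises:

    // slugify.test.ts -- human-owned; the agent may edit slugify.ts, never this file.
    import { test } from "node:test";
    import assert from "node:assert/strict";
    import { slugify } from "./slugify"; // hypothetical module under test

    test("lowercases and hyphenates", () => {
      assert.equal(slugify("Hello World"), "hello-world");
    });

    test("collapses runs of unsafe characters", () => {
      assert.equal(slugify("a/b?c"), "a-b-c");
    });

    test("trims leading and trailing separators", () => {
      assert.equal(slugify("  spaced out  "), "spaced-out");
    });

The agent iterates until `node --test` passes; you review the spec, not the diff.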
I think you're missing the point. Effort is a moat now because centaurs (human+AI) still beat AIs, but that gap gets smaller every year (and will presumably be closed).
The goal is to replicate human labor, and they're closing that gap. Once they do (maybe decades, but probably will happen), then only that "special something" will remain. Taste, vision... We shall all become Rick Rubins.
Today: Ask AI to "do the thing", manual review because don't trust the AI
Tomorrow: Ask AI to "do the thing"
I'm just getting started on my AI journey. It didn't take long before I upgraded from the $17 a month Claude plan to the $100 a month plan, and I can see myself picking the $200 a month plan soon. This is for hobby projects.
At the moment I'm reviewing most of the code for what I'm working on, and I have tests and review those too. But, seeing how good it is (sometimes), I can imagine a future where the AI itself has both the tech chops and the taste, and I can just say "Make me an app to edit photos" and it will spit out a user-friendly clone of Photoshop with good UX.
We already kind of see this with music - it's able to spit out "Bangers". How long until it can spit out hit rom-coms, crime shows, recipes, apps? I don't think the answer is "never". I think more likely the answer is in N years where N is probably a single digit.
> ... for AI to be used effectively.
I'm continually fascinated by the huge differences in individual ability to produce successful results with AI. I always assumed that one of the benefits of AI was "anyone can do this". Then I realized a lot of people I interact with don't really understand the problem they're trying to solve all that well, and have some irrational belief that they can get AI to brute force their way to a solution.
For me I don't even use the more powerful models (just Sonnet 4.6) and have yet to have a project not come out fairly successful in a short period of time. This includes graded live coding examples for interviews, so there is at least some objective measurement that these are functional.
Strangely I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.
> One of the most useful things about AI is also one of the most humbling: it reveals how clear your own judgment actually is. If your critique stays vague, your taste is still underdeveloped. If your critique becomes precise, your judgment is stronger than the model output. You can then use the model well instead of being led by it.
Something I find that teams get wrong with agentic coding: they start by reverse engineering docs from an existing codebase.
This is a mistake.
Instead, the right train of thought is: "what would perfect code look like?" and then meticulously describe to the LLM what "perfect" is to shape every line that gets generated.
This exercise is hard for some folks to grasp because they've never thought much about what well-constructed code or architecture looks like; they have no "taste" and thus no ability to precisely dictate the framework for "perfect" (yes, there is some subjectivity that reflects taste).
I think there is a parallel to what happened to the watch market in the quartz crisis. The same way quartz led to the decline of Swiss mechanical movements, LLMs are going to have a huge effect on the developer market. I hypothesize that in the future there will be a micro-segment that cares about quality, taste, exclusivity, etc., the same way the luxury watchmakers found a niche. My perspective is that this "taste" or "quality" will not be a moat. Instead, it will be a niche where only a small segment cares about it.
If you're properly bitter-lesson-pilled then why wouldn't better models continue to develop and improve taste and discernment when it comes to design, development, and just better thinking overall?
Speed and distribution aren't a long-run moat because they are something AI can cannibalize in a platform. Eventually they will coexist on your distribution base and offer it at a lower cost than you. It's a moat if it holds up before you exit at a high valuation... which a lot are set up to do.
Taste: that's interesting. There is an argument there. It's hard to keep in the long run and requires a lot of reinvestment in new talent.
Proprietary data: Yes, very much so.
Tradecraft: Your new shiny system will still have to adhere to the methods of old, clunky real-world systems. For example, evidence for court, or methods for investigations. This is going to be industry specific, but you'd be surprised how many there are. This is long-term.
Those who have the moat should focus on short bursts of meaningful changes, as they will rely heavily on gaining trust in established systems. In those places it's more about trusting what's going on than doing it faster and better, so you want trust + faster and/or better.
Ah, the classic "we'll ship production to China and just do design and marketing in US, because we have taste on what to build, and China doesn't". That worked really well...
IMO, taste has always been one of the strongest moats because we struggle to define what good taste even is. We know it when we see it, but other than pointing to examples, we can’t really describe it in general terms. I still remember a line from Paul Graham’s Hackers and Painters where he was describing the difficulty of hiring software engineers. He says he was talking with a colleague after an interview and remarked (I’m paraphrasing), “I know he can write code. But does he have taste?” Taste is something we all want our colleagues and products and services to have, but defining it is really difficult. And yes, I fully agree with the writer that it’s important more than ever in this age of AI where generation is cheap.
I agree with the author, and I think this is turning everyone into an investor. How I view (financial) investing as a career is that it is less manual and more taste-oriented. You put your stake in the things you feel will work out, and taste here just means the judgement required to make good calls. A person with good taste would have a better idea of capital allocation.
What AI is doing is making all of us investors instead of doers. "Doing" is no longer something praiseworthy - what will become praiseworthy is how your taste has turned out in hindsight.
I'm seeing this at work. More or less everyone can do tasks well. But what's harder now is the more subtle task of taking bets and seeing it work over a few months or years.
> What you notice
> What you reject
> How precisely you can explain what feels wrong
I think it's just as important, if not more, to be able to explain what is right and what you accept. Having well-defined acceptance criteria also fits into existing project management frameworks. These criteria are generally based on asking users. The article mentions, "You do not get a spreadsheet that tells you which sentence will make a customer care, which feature is worth a month of engineering time, or which design crosses the line from polished to forgettable." And this is why you talk to your customers.
I'm not sure this is only true of LLMs - most corporate sites are very similar. Professional designers learn very quickly not to be too quirky or unique.
I think this is symptomatic of humans - our comfort zone is the "7 out of 10" morass of similarity and blandness. We are herd animals. LLMs are just reflecting this.
And I don't think our tendency to herd will allow us to select the quirky outliers even if that's the only distinguishing characteristic of non-LLM output.
I think "taste" is definitely an overused meme at this point, its like tech twitter discovered this word in 2024 and never stopped using it (same with "agency", "high leverage", etc).
Having read the article, I think I see the author's argument (*). I think "taste" here in an engineering context basically just comes down to an innate feeling of what engineering or product directions are right or wrong. I think this is different from the type of "taste" most people here are talking about, though I'm sure product "taste" specifically is somewhat correlated with your overall "taste." Engineering "taste" seems more correlated with experience building systems and/or strong intuitions about the fundamentals. I think this is a little different from the totally subjective, "vibes based taste" that you might think of in the context of design or art.
Now, where I disagree is that:
1. "taste" is a defensible moat
2. "taste" is "ai-proof" to some extent
"Taste" is only defensible to the extent that knowing what to do and cutting off the _right_ cruft is essential to moving faster. Moving faster and out executing is the real "moat" there. And obviously any cognitive task, including something as nebulous as "taste," can in theory be done by a sufficiently good AI. Clarity of thought when communicating with AI is, imo, not "taste."
Talking specifically about engineering - the article talks about product constraints and tradeoffs. I'd argue that these are actually _data_ problems, and once you solve those, tradeoffs and solving for constraints go from being a judgement call to being a "correct" solution. That is to say, if you provide more information to your AI about your business context, the less judgement _you_ as the implementer need to give. This thinking is in line with what other people here have already said (real moats are data, distribution, execution speed).
I think there's something a bit more interesting to say about the user empathy part, since it could be difficult for LLMs to truly put themselves in users' shoes when designing some interactive surfaces. But I'm sure that can be "solved" too, or at least, it can be done with far less human labor than it takes today.
In general though, tech people are some of the least tasteful people, so it's always funny to see posts like this.
I use AI for code, and we review that code and write tests ourselves first, which the AI cannot touch. For writing we hardly ever do, unless we know the requester of something is incompetent and will never read it anyway; then it is a waste of time to do anything, but they expect something substantial and nice-looking to tick a few boxes. It is great for that.
A large bank with 40 layers of management, all equally incompetent, asked for an 'all-encompassing technical document vault'; one of them sent an 'expectation document' which contained so much garbage as to show they did not even know what they were asking for, but 1000s of pages was the expectation. So sure, Claude will write that in an hour, and NotebookLM will add 100 slide decks for juiciness. At first sight it looks amazing; it's probably mostly accurate as well, but who knows; they will never ever read it; no one will.
We got the 20m+ project (with many opportunities to grow much larger). Before, that was only in reach of the huge consultants (where everyone in those management levels probably worked before), whom we used to lose against. Slop has its purpose.
This cope is insane. Even simple projects generated by Claude are riddled with bugs. And there’s no way in hell it could generate a larger scoped project without a lot of manual human intervention. But yea, TODO apps and trivial calculators are effectively “solved”. Same with leetcode. I guess that’s probably the limit of many people’s imagination these days.
I get really skeptical when someone says "AI does most of the work just by asking simple prompts."
Don't get me wrong, I use AI too for daily tasks and my job as a programmer, and it definitely helps me get the job done faster (sometimes it's almost instant). But it still requires a lot of effort for complex or unconventional tasks.
When I hear "AI does most of my job", I think of DOGE employees who use AI to identify "waste of money". All they do is ask AI with very lazy prompts like "list DEI projects", given a list of government-sponsored projects with simple descriptions. They don't even define what DEI means. And they just cut all projects the AI flagged. I'm sure their "productivity" is very high. They can "complete the job" that would require days, weeks, or months of investigation with a single prompt. [1]
I also think the results have a strong tendency to flag a project as DEI, because "is this DEI" is a question often asked by racists and misogynists on right-wing websites, and often the answers are "Yes", and that likely causes a strong bias.
[1] https://www.9news.com/article/news/politics/doge-chatgpt-dei...
> AI and LLMs have changed one thing very quickly: competent output is now cheap.
If you're working on something not truly novel, sure.
If you're using LLMs to assist in e.g. Mathematics work on as-yet-unproven problems, then this is hardly the case.
Hell, if we just stick to the software domain: Gemini3-DeepThink, GPT-5.4pro, and Opus 4.6 perform pretty "meh" writing CUDA C++ code for Hopper & Blackwell.
And I'm not talking about poorly-spec'd problems. I'm talking about mapping straightforward mathematics in annotated WolframLanguage files to WGMMA with TMA.
> A practical loop for training taste
Taste is cheap. Taste (or a rudimentary version of it at least) is something you start with at the beginning of your career. Taste is the thing that tells you "this is fucking cool", or "I don't know why but this just looks right". LLMs are not going to replicate that because they're not human, and taste isn't something you can manufacture. Now, MAKING something that "looks right" is hard, and because LLMs are churning out the middle, the middle is moving somewhere else. Just like rich people during the summer.
Customers can’t find you
> Now, figuring out those words? That's the hard part.
To be clear, this is the hard part for comp sci majors who can't parse other disciplines. Language isn't a black box for everyone.
Until 2045, when they ship RubinGPT
Today: Ask AI to "do the thing", manual review because don't trust the AI
Tomorrow: Ask AI to "do the thing"
I'm just getting started on my AI journey. It didn't take long before I upgraded from the $17 a month claude plan to the $100 a month plan and I can see myself picking the $200 a month plan soon. This is for hobby projects.
At the moment I'm reviewing most of the code for what I'm working on, and I have tests and review those too. But, seeing how good it is (sometimes), I can imagine a future where the AI itself has both the tech chops and the taste and I can just say "Maybe me an app to edit photos" and it will spit out a user friendly clone of photoshop with good UX.
We already kind of see this with music - it's able to spit out "Bangers". How long until it can spit out hit rom-coms, crime shows, recipes, apps? I don't think the answer is "never". I think more likely the answer is in N years where N is probably a single digit.
> ... for AI to be used effectively.
I'm continually fascinated by the huge differences in individual ability to produce successful results with AI. I always assumed that one of the benefits of AI was "anyone can do this". Then I realized a lot of people I interact with don't really understand the problem they're trying to solve all that well, and have some irrational belief that they can get AI to brute force their way to a solution.
For me I don't even use the more powerful models (just Sonnet 4.6) and have yet to have a project not come out fairly successful in a short period of time. This includes graded live coding examples for interviews, so there is at least some objective measurement that these are functional.
Strangely I find traditional software engineers, especially experienced ones, are generally the worst at achieving success. They often treat working with an agent too much like software engineering and end up building bad software rather than useful solutions to the core problem.
This is a mistake.
Instead, the right train of thought is: "what would perfect code look like?" and then meticulously describe to the LLM what "perfect" is to shape every line that gets generated.
This exercise is hard for some folks to grasp because they've never thought much about what well-constructed code or architectures looks like; they have no "taste" and thus no ability to precisely dictate the framework for "perfect" (yes, there is some subjectivity that reflects taste).
Followed by an entire AI-generated fluff piece: https://www.pangram.com/history/347cd632-809c-4775-b457-d9bc...
Flagged
(edit: typos)
evergreen.
There have always been ways to "flatten the middle": by outsourcing, by using pre-packaged goods, with industrialization...
So yeah, we've always loved handcrafted, exquisite things; there's never been a "moat" in the middle.
It doesn't mean you can't make a good living without a moat, though.
Taste may be kind of important because it helps toward the truly important thing, which is skin-in-the-game.
But also, with the right skin-in-the-game, you don't even need "taste." You just need real life consequences, which we don't do enough in tech.
Discussion: https://news.ycombinator.com/item?id=47089907
Just google "taste is the new moat"
Doesn't deserve to be on the front page.
> Good Taste the Only Real Moat Left
> YC startups are doomed
- Just think about scientific research. Lots of data analysis results are not cheap to get.
- Even vibe coding is difficult: you need to think very hard about what you want.
What is cheaper now are some building blocks. We just have a new definition of building blocks. But putting the blocks together is still hard.
Steve Jobs stopped them, drew a square on the whiteboard and said “anything the user drags into this square gets written to the DVD” - that is taste!
If one disagrees with that statement, there is nothing of value to extract from this article.
Words are cheap; bullet points are cheap.
> AI and LLMs have changed one thing very quickly: competent output is now cheap.
Already wrong.
> That is why so much AI-generated work feels familiar:
This was already a complaint people had before AI. Like when logos and landing pages all used to look the same. Or coffee shops all looking the same.
https://youtu.be/jg1WUOxY6Cg?si=0ajVvgKnyuSz0e2Y
You already see it on Facebook with all the AI-generated meme sharing... taste is being eroded there.
Distribution, Data (Proprietary) and Iteration Speed.
Very successful companies have all three: Stripe, Meta, Google, Amazon.
There's a reason people prefer aged products - they gain a character of their own, similar to art or works produced under constraints.