I find most developers fall into one of two camps:
1. You treat your code as a means to an end to make a product for a user.
2. You treat the code itself as your craft, with the product being a vector for your craft.
The people who typically have the most negative things to say about AI fall into camp #2: AI is automating a large part of what they considered their art, while enabling people in camp #1 to iterate on their products faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good your code is.
The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
With that said, I do have respect for people in the latter camp. But they're generally best suited to projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries other devs depend on, etc.).
I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
> No one has ever made a purchasing decision based on how good your code is.
absolutely false.
> The general public does not care about anything other than the capabilities and limitations of your product.
also false.
People may not know that the reason they like your product is that the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports and fix problems quickly).
The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
> You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
Exactly, thank you for putting it like that.
So far it’s been my observation that it’s only the people who think like the OP who put the situation in the terms they did. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, it’s just different, none of them is better”, it lends legitimacy to their position.
For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.
I agree with OP's distinction. However, just because you see software as a means to an end doesn't mean you think quality and craft are unimportant. You can see the "craft"-oriented folks as being obsessed with the form of their software. A "craft"-oriented engineer might rewrite a perfectly functioning piece of software to make it what they perceive to be "easier to reason about". I consider most software rewrites to be borderline malpractice.
I think the kind of surface-level rewrites that people rag on are pretty rare, at least in my experience. Realistically, code that's impossible to understand, underdocumented, and lacking in proper abstractions is also deficient code. Even if you've decided the code is "good enough", you will likely hit a bug or feature request that is hindered by the code's poor structure and poor understanding of it.
It's totally fine to say "the code works, that area is stable, let's not mess with that code". I make those kinds of tradeoffs on a near daily basis. But let's be real, "perfectly functioning code" is an ill defined, moving target. What looks like perfectly functioning code to a sibling team or a PM, could be a massive liability to someone who actually knows the code.
But then again I'm writing OS and performance critical code. A 1 in 1 million bug is easier to ignore in a throwaway log viewer website.
The most productive teams I've seen (eg at Jane Street) rewrite things all the time, and still move faster than any "normal" teams I've seen. I remember when I interned there over a decade ago, they were already on like the seventh? version of their incremental computing framework, and were building a new build system. But they were also incredibly effective at getting things done both on a per-engineer basis and in terms of making money.
With bad code it’s often almost impossible to improve the functionality or correctness or performance of the code, without first rewriting parts of it.
That example doesn't work well. All regulations come at the cost of freedom, and every freedom comes at the cost of regulation. While it isn't a strict binary (either 100% freedom or 100% regulation), enacting regulations does interfere with freedom. So this isn't just framing; it demonstrates a relationship between the two concepts, which may become relevant later in the discussion, if it hasn't already.
Regulation can cause freedom to be balanced differently between parties. For example, regulating smartphone manufacturers can result in more freedom for users. It’s not true that regulation necessarily reduces freedom overall (to the extent that that can even be graded). Just like rights, freedoms aren’t absolute, and one’s freedom often impinges on someone else’s freedom.
The increase of freedom of the users is an indirect side effect, intentional or not, which, as you put it, can happen, or not. But a direct effect, which is guaranteed to happen, is the loss of freedom of the manufacturer. Whether that's a good thing, that's another topic.
The increase of freedom of the slaves is an indirect side effect. But a direct effect, which is guaranteed to happen is the loss of freedom of the slaveholder.
That's certainly a perspective, especially given how slavery is often regulated into law.
But I digress: there's plenty of discussion to be had about the ethics and morality of freedom/regulation, but my point is that there is, in fact, a dichotomy between the two, and it isn't just framing. Which, in a sense, you just corroborated.
> People may not know that the reason they like your product is that the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports and fix problems quickly).
I would classify all of those as "capabilities and limitations of your product".
I read OP's "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc.), and in that sense I agree no customer who's just using the product actually cares about that.
Another definition of "good code" is probably "code that meets the requirements without unexpected behavior", and in that sense of course end users care about good code. But you could give me two black boxes that act the same externally, one written as a single line with single-character variables and the other written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
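To make the black-box point concrete, here's a minimal sketch (the task, names, and values are hypothetical, invented purely for illustration): two implementations that behave identically from the outside, one crammed and cryptic, one readable.

```python
# Hypothetical task: sum the squares of the even numbers in a list.

# Version A: crammed onto one line with single-character names.
f = lambda l: sum(x * x for x in l if x % 2 == 0)

# Version B: written to be readable and maintainable.
def sum_of_even_squares(numbers):
    """Return the sum of the squares of the even numbers in `numbers`."""
    return sum(n * n for n in numbers if n % 2 == 0)

# Externally, the two black boxes behave identically.
assert f([1, 2, 3, 4]) == sum_of_even_squares([1, 2, 3, 4]) == 20
```

A user of either "box" sees the same behavior; only a maintainer sees the difference.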
> But you could give me two black boxes that act the same externally, one written as a single line with single-character variables and the other written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
The reality of software products is that they are in nearly all cases developed and maintained over time, though--and whenever that's the case, the black-box metaphor fails. It's an idealization that only works for single moments in time, and yet software development typically extends through the entire period during which a product has users.
> I read OP's "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc.)
The above is also why these properties you've mentioned shouldn't be considered merely aesthetic: the software's likelihood of having tractable bugs, of keeping performance concerns manageable, and of adapting quickly to the demands of its users and the changing ecosystem it's embedded in are all affected by abstraction selection, code organization, and documentation.
But those aesthetics stem from that need for fewer bugs, performance, maintainability. Identifying/defining code smell comes from experience of what does and doesn’t work.
> I wouldn't care so long as I wasn't expected to maintain it.
But, if you’re the one putting out that software, of course you will have to maintain it! When your users come back with a bug or a “this flow is too slow,” you will have to wade into the innards (at least until AI can do that without mistakes).
But the thing is that someone has to maintain it. And while beautiful code is not the same as correct code, the former goes a long way toward achieving the latter and keeping it that way.
And most users are not consuming your code. They’re consuming some compiled, transpiled, or minified version of it. But they do have expectations and it’s easier to amend the product if the source code is maintainable.
> obviously created by people who care deeply about the quality of the product they produce
This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
> There is no substitute for high quality work.
You're right because there really isn't a consistent definition of what "high quality" software work looks like.
> This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.
> You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product.
What about caring and being depressed because quality comes from systems rather than (just) individuals?
Now imagine how much they would make if their software was good.
Google, Facebook, Apple clearly care deeply about the quality of their code. They have to because bugs, bad performance, outages, vulnerabilities have very direct and immediate costs for them. I know Amazon and Microsoft have their critics but I bet they are also better than we give them credit for.
There are factors besides software quality that affect their success. But running bad software certainly isn’t going to help.
> You're right because there really isn't a consistent definition of what "high quality" software work looks like.
And if you can deterministically define "high quality software" with linters, analysers, etc., then an AI agent can also create high quality software within those limits.
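As a toy illustration of what one such deterministic check might look like (the metric and the threshold here are arbitrary assumptions, not an established standard), you can mechanically flag overly long functions using only the standard library:

```python
import ast

MAX_FUNCTION_LINES = 30  # arbitrary threshold, chosen purely for illustration

def long_functions(source: str) -> list[str]:
    """Return the names of functions in `source` exceeding the line-count threshold."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                offenders.append(node.name)
    return offenders

sample = "def tiny():\n    return 1\n"
print(long_functions(sample))  # [] -- nothing flagged
```

The catch, of course, is that any check simple enough to be deterministic captures only a narrow slice of what people mean by "quality"--an agent can satisfy every linter and still produce a mess at the design level.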
Garbage software that is slow as a dog has been winning. While we’ve been obsessing over our craft and arguing about what makes software beautiful, slow crappy software has taken over the world.
Quality of code is just not that important of a concept anymore for the average web developer building some saas tool. React code was always crap anyways. Unless you are building critical systems like software that powers a plane or medical equipment, then code quality just doesn’t really matter so much in the age of AI. That may be a hard pill to swallow for some.
Thank you for putting out a clear message. I completely agree.
> parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
and of course, this isn't even the worst. A lot worse can happen, such as data loss and corruption. Things that can directly affect people's lives in real life.
As a developer, these things are constantly on my mind, and I believe this is the case for people who do care about the quality.
As has been said elsewhere many times, AI producing code is not the same as say, a compiler producing machine code. It is not such a well-defined strong abstraction, hence why code quality still is highly relevant.
It is also easily forgotten that code is as much a social construct as a technical artifact (e.g., things that have to be done in certain ways due to real-life constraints that you wouldn't face in an ideal world).
Sometimes I feel very powerless though.
It feels as if some of us are talking past each other, even if we seemingly are using the same words like "quality".
Or in a way, that is what makes this more futile-- that we are using the same words and hence seemingly talking about the same thing, when we are referring to completely different things.
It is difficult to have a conversation about a problem when some of us don't even see it as a problem to begin with-- until it reaches a point when it starts affecting their lives as well, whether it be directly or indirectly.
But that takes time.
> The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
Exactly. A lot of devs are optimizing for whether the feature is going to take a day or an hour, but not contemplating that it's going to be out in the wild for 10 years either way. Maybe do it well once.
Meanwhile, the complexity of the average piece of software is drastically increasing. ... The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle.
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
At this point I can't help but conclude that to a certain class of writers, "economic incentives" are a form of benevolent god who will surely, inevitably make things better in the directions they care about. Those who believe this also treat the overwhelming and constant evidence to the contrary as just a temporary anomaly.
Whatever the hell economics was supposed to do, right now it seems to be causing every industry to produce worse products, lay off more people, and concentrate wealth in an aristocratic subset of the population, and this has been going on for the better part of my entire lifetime. If we're to reverse this trend, we need to stop pretending that economics is a natural force and remember that it is a complex system made of policy decisions that can in fact be the wrong ones
Why build each new airplane with the care and precision of a Rolls-Royce? In the early 1970s, Kelly Johnson and I [Ben Rich] had dinner in Los Angeles with the great Soviet aerodynamicist Alexander Tupolev, designer of their Backfire bomber. 'You Americans build airplanes like a Rolex watch,' he told us. 'Knock it off the night table and it stops ticking. We build airplanes like a cheap alarm clock. But knock it off the table and still it wakes you up.'...The Soviets, he explained, built brute-force machines that could withstand awful weather and primitive landing fields. Everything was ruthlessly sacrificed to cut costs, including pilot safety.
We don't need to be ruthless to save costs, but why build the luxury model when the Chevy would do just as well? Build it right the first time, but don't build it to last forever. - Ben Rich in Skunk Works
The authors updated their title so I've updated it here too. Previous title was "Good code will still win" - but it was leading to too much superficial discussion based entirely on the phrase "good code" in the title. It's amazing how titles do that!
(Confession: "good code will still win" was my suggestion- IIRC they originally had "Is AI slop the future?". You win some you lose some.)
People are not emotionally ready to accept that certain layers of abstraction don’t need as much care and effort if they can be automated.
We are at the point where a single class can be dirty, but the API of the classes should be clean. There's no point reviewing the internals of a class anymore; I'm more or less sure that they will work as intended.
The next step is the microservice itself: the API of that microservice should be clean, but the internals may be however they end up. We're maybe 10% of the way there.
Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
The economic angle is not as clear cut as the authors seem to think.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebase.
> economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
I don't fully agree with this optimistic view. Unfortunately, for now, coding agents produce code that, unless further optimized at a human's request, often has more complexity than necessary.
It's true that this requires more computational effort for the agents themselves to debug or modify it, but it's also true that the computational cost is negligible compared to the benefit of having features working quickly.
In other words: agents quickly generate hyper-complex and unoptimized code. And the speed of delivery provides more immediate benefits than the costs resulting from bad code.
On the other hand, it's also true that the "careful eye" of an experienced developer can optimize and improve the output in a few simple iterations.
So overall (and unfortunately), bad code, if it immediately works, can win against (or alongside) good code.
These economic incentives for good code would also have applied to code before LLMs, no? And we have had plenty of shit code that stayed shit for a long time. I find this idea that economic incentives will necessarily drive the outcomes you desire to be akin to a religious belief for some people.
What I would like to add is that coding in the flow state is underestimated. When your brain just clicks with every change and variable it's just different AND more efficient than doing with AI.
I've been talking about 'complexity' for years but business people just didn't get it. Now software development has become almost entirely about complexity management so now I'm hoping that they will finally understand what software engineering actually is. I hope they will finally start valuing engineers who can reduce complexity.
I was always into software architecture and dreamed of being a software architect, but by the time I finished university, the position was on the way out.
> I want to argue that AI models will write good code because of economic incentives.
The economic incentives on the internet by and large favor the production of slop. A significant proportion of the text-based web was content-farmed even before LLMs - and with the advent of LLMs, you now have slop-results for almost every search query imaginable, including some incredibly niche topics. We've seen the same trend with video: even before gen AI, online video consumption devolved toward carefully-engineered, staged short-form bait (TikTok, YT Shorts, etc). In the same vein, the bulk of the world's email traffic is phishing and spam.
None of this removed the incentive to produce high-quality websites, authentic and in-depth videos, and so on. But in practice, it made such content rare and made it harder for high-quality products to thrive. So yeah, I'm pretty sure that good software will survive in the LLM era. But I'm also absolutely certain that most app stores will be overrun by slop, most games on Steam will be slop, etc.
I wish it were true, but it sounds like copium. I bet garment makers or artisan woodworkers said the same when big cheap retail chains came along. I bet they said "people value quality", etc., but in the end, outside of a group of people with principles, everyone else floods their home with H&M and crap from Temu.
So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
Maybe not necessarily, but it'll be difficult to avoid. We're in a period where people are constantly creating and constantly changing software. Such rapid change really precludes the possibility of excellence. Very few people want to say "let's not add features; it would conflict with our ability to maintain quality." It's not that no one does this, but it's in the minority.
All the change and shuffle feels like an inevitable consequence of so much communication and competition between companies, and cultures and such. Gone are the days where a software product can remain stagnant. Someone else will build something that does a bit more, or if nothing else, does something new, and it will take people's attention away.
Everyone is stuck trying to keep up with trends, even if those trends don't make any sense.
People stop caring whether something is good or not when there's so much of everything now. Why spend time evaluating when you can just move on to the next thing? That's not really an AI problem, but AI has definitely made it worse.
> Markets will not reward slop in coding, in the long-term.
Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long-term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and complete collapse of long-range planning echoing at many levels in society--particularly at the highest levels of governments. So I kind of don't want to hear about markets are going to reward.
But what exactly is "good code" (presumably the opposite of slop)?
I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.
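A tiny sketch of the "one job, done well" idea (the functions and names here are hypothetical, made up purely for illustration): each piece has a single, clearly-bounded responsibility and doesn't take on jobs it shouldn't.

```python
# One job: parsing. This function splits and trims; it does nothing else.
def parse_csv_row(line: str) -> list[str]:
    """Split a simple comma-separated line into trimmed fields."""
    return [field.strip() for field in line.split(",")]

# A separate, single job: validation. Kept apart so each can change independently.
def non_empty_fields(fields: list[str]) -> bool:
    """Return True if every field is non-empty."""
    return all(fields)

row = parse_csv_row(" a , b ,c ")
assert row == ["a", "b", "c"]
assert non_empty_fields(row)
```

If the parsing rules change, only the parser changes; if the validation policy changes, only the validator changes. That's the "anticipates change well" property in miniature.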
I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.
The current iteration of models doesn't write clean code by itself, but future ones will. The problem, in my view, is extremely similar to agentic/vibe coding: instead of optimizing for results, you can optimize for clean code. The demand is there; clean code will lead to fewer bugs, faster-running code, and fewer tokens used (thus less cost) when understanding the code from a fresh session. It makes sense that the first generation of vibe coding focused on results first and not clean code. Am I missing something?
The camps framing misses something. After 15 years writing code, I've found the people who ship the best products understand both sides. You need to care about craft enough to know when AI output is garbage, and you need to care about shipping enough to not gold-plate things that don't matter.
The slop problem isn't AI; it's people who can't tell the difference between good and bad output because they never developed the craft in the first place. AI just makes that gap more visible.
I don’t know if this prediction is wrong, it might well be right, but the basis for this prediction is “market forces” without (a) an analysis showing the advantage for sets of market participants or (b) fundamental scientific reasoning why the code will improve to that degree. Without those two things it’s just wishful thinking.
It's striking to me that while the article argues that LLMs and agentic development tools will trend toward higher and higher quality, the "pro-AI" comments in this discussion and others tend to announce a "quality doesn't matter" camp.
We've been here before. Outsourcing of coding was really big for a while, until the reality of that situation caught up with those who practiced it - if you were saving a bundle on outsourcing your coding work, you were only saving money _now._ Down the line, you'd have to pay extra for someone competent to re-implement the work with an eye to quality.
(Sure, there were good outsourcing shops, but you didn't tend to save too much with them, since they knew they were good and charged appropriately.)
"Slop" ai-generated code is the same tradeoff as cheap outsourcing shops. You move quicker and cheaper now, but there will come a day when code quality will dip low enough that it will be difficult enough to make new changes that a refocus on quality becomes not just worthwhile, but financially required as well.
(And you may argue that you're using ai-generated code, but are maintaining a high code quality, and so for you this day will never come and you might be right! But you're the "good outsourcing shop", and you're not "saving" nearly as much time or money as those just sloppin' it up these days, so you're not really the issue, I'd argue.)
I'm optimistic that AI will actually increase the proportion of good code in the future.
1. IME AI tends to produce good code "in the small." That is, within a function or a file, I've encountered very little sloppy code from AI. Design and architecture is (still) where it quickly tends to go off the rails and needs a heavy hand. However, the bulk of the actual code will tend to be higher quality.
2. Code is now very cheap. And more tests actually results in better results from AI. There is now very little excuse to avoid extensive refactoring to do things "the right way." Especially since there will be a strong incentive to have clean code, because as TFA indicates...
3. Complex, messy code will directly increase token costs. Not just in grokking the codebase, but in the tokens wasted on failed attempts rooted in over-complicated code. Finally, tech debt has a concrete $$$ amount. What can get measured can get fixed, and nothing is easier to measure (or convince execs about!) than $$$.
Right now tokens are extremely cheap because they're heavily subsidized, but when token costs inevitably start ramping up, slop will automatically become less economically viable.
"AI will write good code because it is economically advantageous to do so. Per our definition of good code, good code is easy to understand and modify from the reduced complexity."
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make this so will succeed. LLMs might never become proficient at producing "good" code for the same reasons that LLMs perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
i.e., no matter what, the answer is always AI. If it isn't good now, it will be soon... AI. Don't forget to take your soma pills if anything isn't perfect.
None of this is true. Pretty much all JavaScript code is slop because the language is god awful. It’s so bad that we spent the last 20 years trying to code around the severe limitations of the language itself. Despite everyone knowing that JS sucks, no one has been able to displace it. Slop wins. Typically because of first mover advantage.
Getting AI tools to produce better code is not that hard, if you know how to do things right yourself. Basically, all you need to do is ask. Ask it to follow SOLID principles. Ask it to stick to guard rails. Ask it to eliminate code duplication, write tests, harden the code base, etc. It will do it. Most people just don't know what to ask for, or that they should be asking at all, or how to ask. A lot of people getting messy results need to look in the mirror.
You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
> You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
Exactly, thank you for putting it like that.
So far it’s been my observation that it’s only the people who think like the OP who put the situation in the terms they did. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, it’s just different, none of them is better”, it lends legitimacy to their position.
For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.
It's totally fine to say "the code works, that area is stable, let's not mess with that code". I make those kinds of tradeoffs on a near-daily basis. But let's be real, "perfectly functioning code" is an ill-defined, moving target. What looks like perfectly functioning code to a sibling team or a PM could be a massive liability to someone who actually knows the code.
But then again I'm writing OS and performance critical code. A 1 in 1 million bug is easier to ignore in a throwaway log viewer website.
But I digress, there's a plentiful discussion to be had about the ethics and morality of freedom/regulations, but my point is how there is, in fact, a dichotomy between both and it isn't just framing. Which, in a sense, you just corroborated.
> People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports, and fix problems quickly).
I would classify all of those as "capabilities and limitations of your product"
I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.), and in that sense I agree no customer who's just using the product actually cares about that.
Another definition of "good code" is probably "code that meets the requirements without unexpected behavior", and in that sense of course end users care about good code. But you could give me two black boxes that act the same externally, one written as a single line with single-character variables and so on, and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
> But you could give me two black boxes that act the same externally, one written as a single line with single-character variables and so on, and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
The reality of software products is that they are in nearly all cases developed and maintained over time, though, and whenever that's the case the black box metaphor fails. It's an idealization that only works for single moments in time, and yet software development typically extends through the entire period during which a product has users.
> I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.)
The above is also why the properties you've mentioned shouldn't be considered merely aesthetic: whether the software's bugs stay tractable, whether its performance concerns stay manageable, and whether it can adapt quickly to the demands of its users and the changing ecosystem it's embedded in are all affected by abstraction selection, code organization, and documentation.
> I wouldn't care so long as I wasn't expected to maintain it.
But, if you’re the one putting out that software, of course you will have to maintain it! When your users come back with a bug or a “this flow is too slow,” you will have to wade into the innards (at least until AI can do that without mistakes).
If it’s software that will never be modified, sure it doesn’t matter.
And most users are not consuming your code. They’re consuming some compiled, transpiled, or minified version of it. But they do have expectations and it’s easier to amend the product if the source code is maintainable.
> obviously created by people the care deeply about the quality of the product they produce
This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
> There is no substitute for high quality work.
You're right because there really isn't a consistent definition of what "high quality" software work looks like.
> This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.
> You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product.
What about caring and being depressed because quality comes from systems rather than (just) individuals?
Google, Facebook, Apple clearly care deeply about the quality of their code. They have to because bugs, bad performance, outages, vulnerabilities have very direct and immediate costs for them. I know Amazon and Microsoft have their critics but I bet they are also better than we give them credit for.
There are factors besides software quality that affect their success. But running bad software certainly isn’t going to help.
> You're right because there really isn't a consistent definition of what "high quality" software work looks like.
And if you can deterministically define "high quality software" with linters, analysers etc - then an AI Agent can also create high quality software within those limits.
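To make that concrete: any property you can check mechanically can become a gate the agent has to satisfy. A minimal stdlib-only sketch of such a check (the `long_functions` name and the 40-line threshold are illustrative assumptions, not from any real linter or from this thread):

```python
# Toy deterministic quality gate: flag functions longer than a line limit.
# Real linters (ruff, pylint, etc.) offer far richer rule sets; this only
# shows that a "quality" rule can be checked mechanically.
import ast

def long_functions(source, max_lines=40):
    """Return (name, line_count) for every function exceeding max_lines."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1  # needs Python 3.8+
            if length > max_lines:
                offenders.append((node.name, length))
    return offenders

sample = "def f():\n" + "    x = 1\n" * 50 + "\ndef g():\n    pass\n"
print(long_functions(sample))  # flags f (51 lines); g passes
```

An agent told to keep a gate like this green has a deterministic target, which is exactly the comment's point.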
Quality of code is just not that important of a concept anymore for the average web developer building some saas tool. React code was always crap anyways. Unless you are building critical systems like software that powers a plane or medical equipment, then code quality just doesn’t really matter so much in the age of AI. That may be a hard pill to swallow for some.
> parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
and of course, this isn't even the worst. A lot worse can happen, such as data loss and corruption. Things that can directly affect people's lives in real life.
As a developer, these things are constantly on my mind, and I believe this is the case for people who do care about the quality.
As has been said elsewhere many times, AI producing code is not the same as say, a compiler producing machine code. It is not such a well-defined strong abstraction, hence why code quality still is highly relevant.
It is also easily forgotten that code is as much a social construct as a technical one (e.g. things that have to be done in certain ways due to real-life constraints that wouldn't exist in an ideal world).
Sometimes I feel very powerless though. It feels as if some of us are talking past each other, even if we seemingly are using the same words like "quality". Or in a way, that is what makes this more futile-- that we are using the same words and hence seemingly talking about the same thing, when we are referring to completely different things.
It is difficult to have a conversation about a problem when some of us don't even see it as a problem to begin with-- until it reaches a point when it starts affecting their lives as well, whether it be directly or indirectly. But that takes time.
Time will tell.
> The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
Exactly. A lot of devs are optimizing for whether the feature is going to take a day or an hour, but not contemplating that it's going to be out in the wild for 10 years either way. Maybe do it well once.
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
Whatever the hell economics was supposed to do, right now it seems to be causing every industry to produce worse products, lay off more people, and concentrate wealth in an aristocratic subset of the population, and this has been going on for the better part of my entire lifetime. If we're to reverse this trend, we need to stop pretending that economics is a natural force and remember that it is a complex system made of policy decisions that can in fact be the wrong ones
(Confession: "good code will still win" was my suggestion- IIRC they originally had "Is AI slop the future?". You win some you lose some.)
We are at the point where a single class can be dirty, but the API of the class should be clean. There's no point reviewing the internals of a class anymore; I'm more or less sure they will work as intended.
The next step is the microservice itself: the API of the microservice should be clean, but the internals can be whatever. We are maybe 10% of the way there.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
Did the best processor win? No, x86 is trash.
Did the best computer language win? No (not that you could pick a best anyway).
The same is true pretty much everywhere else outside computers, with rare exceptions.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebases.
> economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
I don't fully agree with this optimistic view. Unfortunately, for now, coding agents produce code that, unless further optimized at a "human" request, often carries more complexity than necessary.
It's true that this requires more computational effort for the agents themselves to debug or modify it, but it's also true that the computational cost is negligible compared to the benefit of having features working quickly.
In other words: agents quickly generate hyper-complex and unoptimized code. And the speed of delivery provides more immediate benefits than the costs resulting from bad code.
On the other hand, it's also true that the "careful eye" of an experienced developer can optimize and improve the output in a few simple iterations.
So overall (and unfortunately), "bad code", if it works immediately, can win against (or alongside) good code.
> - Simple and easy to understand
> - Easy to modify
In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:
> Simple is robust
I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.
Agentic tools will happily build anything you want, the key is knowing what you want!
People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under conditions.
I was always into software architecture and dreamed of becoming a software architect, but by the time I completed university the position was on its way out.
> I want to argue that AI models will write good code because of economic incentives.
The economic incentives on the internet by and large favor the production of slop. A significant proportion of the text-based web was content-farmed even before LLMs - and with the advent of LLMs, you now have slop-results for almost every search query imaginable, including some incredibly niche topics. We've seen the same trend with video: even before gen AI, online video consumption devolved toward carefully-engineered, staged short-form bait (TikTok, YT Shorts, etc). In the same vein, the bulk of the world's email traffic is phishing and spam.
None of this removed the incentive to produce high-quality websites, authentic and in-depth videos, and so on. But in practice, it made such content rare and made it harder for high-quality products to thrive. So yeah, I'm pretty sure that good software will survive in the LLM era. But I'm also absolutely certain that most app stores will be overrun by slop, most games on Steam will be slop, etc.
The pattern was always: ship fast, fix/document later, but when "later" comes "don't touch what is working".
To date, nothing has changed, and I bet it won't change in the future either.
So yeah, good code might win among a small group of principled people, but the majority will not care. More importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
All the change and shuffle feels like an inevitable consequence of so much communication and competition between companies, and cultures and such. Gone are the days where a software product can remain stagnant. Someone else will build something that does a bit more, or if nothing else, does something new, and it will take people's attention away.
Everyone is stuck trying to keep up with trends, even if those trends don't make any sense.
> Markets will not reward slop in coding, in the long-term.
Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and a complete collapse of long-range planning echoing at many levels of society, particularly at the highest levels of government. So I kind of don't want to hear about what markets are going to reward.
But what exactly is "good code" (presumably the opposite of slop)?
I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.
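The "one job" idea is easiest to see in miniature. A hypothetical sketch (all names invented, nothing here is from TFA) of three jobs kept separate, so a change to the storage, the record syntax, or the business rule touches exactly one function:

```python
def load_lines(path):
    """Job 1: I/O only. Changes when the storage location or format changes."""
    with open(path) as f:
        return f.read().splitlines()

def parse_record(line):
    """Job 2: parsing only. Changes when the record syntax changes."""
    name, _, score = line.partition(",")
    return name.strip(), int(score)

def top_scorers(records, threshold):
    """Job 3: the business rule only. Changes when the rule changes."""
    return [name for name, score in records if score >= threshold]

# Fusing all three into one function would couple file handling, syntax and
# the rule, so any single change forces re-reading and re-testing the lot.
```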
I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.
The slop problem isn't AI, it's people who can't tell the difference between good and bad output because they never developed the craft in the first place. AI just makes that gap more visible.
(Sure, there were good outsourcing shops, but you didn't tend to save too much with them, since they knew they were good and charged appropriately.)
"Slop" ai-generated code is the same tradeoff as cheap outsourcing shops. You move quicker and cheaper now, but there will come a day when code quality will dip low enough that it will be difficult enough to make new changes that a refocus on quality becomes not just worthwhile, but financially required as well.
(And you may argue that you're using ai-generated code, but are maintaining a high code quality, and so for you this day will never come and you might be right! But you're the "good outsourcing shop", and you're not "saving" nearly as much time or money as those just sloppin' it up these days, so you're not really the issue, I'd argue.)
1. IME AI tends to produce good code "in the small." That is, within a function or a file, I've encountered very little sloppy code from AI. Design and architecture is (still) where it quickly tends to go off the rails and needs a heavy hand. However, the bulk of the actual code will tend to be higher quality.
2. Code is now very cheap. And more tests actually results in better results from AI. There is now very little excuse to avoid extensive refactoring to do things "the right way." Especially since there will be a strong incentive to have clean code, because as TFA indicates...
3. Complex, messy code will directly increase token costs. Not just in grokking the codebase, but in the tokens wasted on failed attempts rooted in over-complicated code. Finally, tech debt has a concrete $$$ amount. What can get measured can get fixed, and nothing is easier to measure (or convince execs about!) than $$$.
Right now tokens are extremely cheap because they're heavily subsidized, but when token costs inevitably start ramping up, slop will automatically become less economically viable.
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make this so will succeed. LLM's might never become proficient at producing "good" code for the same reasons that LLM's perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
And property testing is going to be an important way to validate.
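For readers who haven't used it: property testing asserts invariants over many generated inputs rather than hand-picked examples. A framework-free sketch of the idea (the dedupe function is a made-up example; libraries like Hypothesis add smarter input generation and failure shrinking):

```python
import random

def dedupe_preserving_order(items):
    """Example function under test: drop duplicates, keep first occurrences."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_properties(xs):
    result = dedupe_preserving_order(xs)
    assert len(result) == len(set(result))    # property: no duplicates remain
    assert set(result) == set(xs)             # property: no elements lost/added
    assert result == list(dict.fromkeys(xs))  # property: first-seen order kept

random.seed(0)
for _ in range(500):                          # 500 random inputs, not 3 hand-picked ones
    n = random.randrange(30)
    check_properties([random.randrange(-5, 5) for _ in range(n)])
```

The payoff is that the properties document intent ("no elements lost") in a way an example-based test never quite does.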
A certain big PaaS I won't name here has had lots of clusterfucks in the last 3 months. The CEO is extremely bought into AI and "code not mattering anymore". He's also constantly talking about the meteoric growth because Claude and other AI providers are using railway as default suggestions.
The toll has come to collect and now a lot of real production users are looking at alternatives.
The reality is the market is rewarding slop and "velocity now". There will come a time where it will reward quality again.
I'm using AI for coding just like everybody else, more or less exclusively for the past few months. It's sometimes frustrating to get things done the right way, but mostly I get the job done. I've been coding since the nineties, so I know how to do things right and what doing it wrong looks like. If I catch my AI coding tools doing it wrong, I tell them to fix it and then adjust skills and guard rails to prevent them going off the rails.
AI tools actually seem to self correct when used in a nice code base. If there are tests, they'll just write more tests without needing to be prompted. If there is documentation, that gets updated along with the code. When you start with a vibe coded mess it can escalate quickly unless you make it clean up the mess. Sometimes the tests it adds are a bit meh and you have to tell it off by "add some tests for the non happy path cases, make sure to cover all possible exceptions, etc.". You can actually ask for a critical code review and then tell it "fix all of that". Sometimes it's as simple as that.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.
We are, thanks to LLMs, now able to codify human skills, and while it's not clear how fast this is going, I no longer believe that my skills are unique.
A small hobby application cost me 11 dollars over the weekend and took me 3 hours to 'build', while I would probably have needed 2-3 days for it.
And we are still limited by resources and normal human progress. Claude teams are still experimental, and things like gastown or orchestrator architectures are not that established and consume quite a lot of tokens.
We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).
I don't think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. The business doesn't. Even fellow co-workers don't care about, or don't understand, good code.
Even a blatant case like the GTA 5 Online startup code went unnoticed for ages (there was an accidentally quadratic step in loading a config file that made startup take forever, until someone outside Rockstar found it and Rockstar fixed it).
We also have plenty of code where it doesn't matter as long as it works: offline apps, scripts, research scripts, etc.
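For reference, the GTA Online case was an accidentally quadratic loader: the parser effectively re-scanned the remaining input for every token. A hypothetical Python sketch of that failure mode (the original was C, reportedly sscanf/strlen behavior; none of this is Rockstar's actual code):

```python
import time

def parse_quadratic(text):
    """Re-slices the whole remaining input per token: O(n^2) total work."""
    items, rest = [], text
    while rest:
        token, _, rest = rest.partition(",")  # copies the entire tail each time
        items.append(token)
    return items

def parse_linear(text):
    """Single pass over the input: O(n)."""
    return text.split(",")

data = ",".join(str(i) for i in range(20_000))
t0 = time.perf_counter(); parse_quadratic(data); quad = time.perf_counter() - t0
t0 = time.perf_counter(); parse_linear(data); lin = time.perf_counter() - t0
print(f"quadratic: {quad:.3f}s  linear: {lin:.3f}s")
```

Both produce the same list here, which is exactly why nobody noticed the slow one until the config file grew.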
Microslop is the future.