> LLMs are pretty good at picking up the style in your repo. So keeping it clean and organized already helps.
At least in my experience, they are good at imitating a "visually" similar style, but they'll introduce a lot of hidden coupling that is easy to miss, since they don't understand the concepts they're imitating.
They think "Clean Code" means splitting code into tiny functions rather than writing cohesive functions. The Uncle Bob style of "Clean Code" is horrifying.
They're also very trigger-happy about adding methods to interfaces (or contracts) that leak implementation details, or that exist only for testing, which means they end up testing implementation rather than behavior.
1 - Surprising success when an agent can build on top of established patterns & abstractions.
2 - A deep hole of "make it work" when an LLM digs a hole it can't get out of, and fails to anticipate edge cases or discover hidden behavior.
The same things that make it easier for humans to contribute code make it easier for LLMs to contribute code.
- Adhere to rules in "Code Complete" by Steve McConnell.
- Adhere to rules in "The Art of Readable Code" by Dustin Boswell & Trevor Foucher.
- Adhere to rules in "Bugs in Writing: A Guide to Debugging Your Prose" by Lyn Dupre.
- Adhere to rules in "The Elements of Style, Fourth Edition" by William Strunk Jr. & E. B. White
Mentioning The Elements of Style and Bugs in Writing, for example, has certainly helped our review LLM make some great suggestions on English documentation PRs in the past.
I'm guessing a lot of similar debates were had in the 1970s when we first started compiling C to assembly, and I wonder if the outcome will be the same.
(BTW: I was not around then, so if I'm guessing wrong here please correct me!)
Over time, compilers have gotten better, and we're now at the point where we trust them enough that we don't need to review the assembly or machine code for cleanliness, optimization, etc. In fact, we've even moved at least one abstraction layer up.
Are there mission-critical inner loops in systems these days that DO need hand-written C or assembly? Sure. Does that matter for 99% of software projects? Negative.
I'm extrapolating that AI-generated code will follow the same path.
Here's the thing about clean code. Is it really good? Or is it just something that people get familiar with, and familiarity is actually all that matters?
You can't really run the experiment, because to do it you would have to isolate a bunch of software engineers and carefully measure them as they go through parallel test careers. I mean, I guess you could measure it, but it's expensive, time-consuming, and likely to have massive experimental issues.
Although now you can sort of run the experiment with an LLM. Clean code vs unclean code. Let's redefine clean code to mean this other thing. Rerun everything from a blank state and then give it identical inputs. Evaluate on tokens used, time spent, propensity for unit tests to fail, and rework.
The history of science and technology is people coming up with simple but wrong, untestable theories, which topple over once someone invents a thingamajig that allows tests to be run.
I have a somewhat more practical approach here (write-up to come at some point): the most important thing is to rethink how you instruct the agents, and not to rely only on your existing codebase, because: 1) it may contain legacy practices, 2) it is a reflection of many hands, and 3) results become very random depending on which files the agent picks up.
Instead, you should approach it as if instructing the agent to write "perfect" code (whatever that means in the context of your patterns and practices, language, etc.).
How should exceptions be handled? How should parameters be named? How should telemetry and logging be added? How should new modules be added? What are the exact steps?
Do not let the agent randomly pick from your existing codebase unless it is already highly consistent; tell it exactly what "perfect" looks like.
I’ll go against the prevailing wisdom and bet that clean code does not matter anymore.
No more than the exact order in which items are placed in main memory matters now. That used to be a pretty significant consideration in software engineering until the early 1990s; it is almost completely irrelevant now that we have ‘unlimited’ memory.
Similarly, generating code, refactoring, and implementing large changes are now easy enough that you can just rewrite stuff later. If you are not happy about how something is designed, a two-sentence prompt fixes it in a million-line codebase in thirty minutes.
It is an interesting possibility that must be considered. Only time will tell. However, I disagree.
I think complex systems will still turn into a big ball of mud, and AI agents will get just as bogged down as humans when dealing with it. And even though rebuilding from scratch is cheaper than ever, it can't possibly be done cheaply while also preserving the millions of specific behaviors that users have come to rely on.
Maybe if you pushed spec-driven development to the absolute extreme, but I don't think pushing it that far is easy or cheap. Just as the effort to go from 90% unit test coverage to 100% is hard and possibly not worth it, I expect a similar barrier around extreme spec-driven development.
Clarification: I'm advocating clean code in the generic sense, not Uncle Bob's definition.
But if you are saying that a human can instruct AI agents to refactor to prevent the big ball of mud, then you are saying that clean code *is* important.
You haven't worked on or serviced any engineering systems, I can tell.
There are fundamental truths about complex systems that go beyond "coding". The same patterns can be seen in nature, where engineering principles and "prevailing wisdom" are truer than ever.
I suggest you take some time to study the systems that power critical infrastructure. You'll see and read about the grizzled veterans who keep them alive, and how they are even more religious about clean engineering principles, and how "prevailing wisdom" is very much needed and always will be.
That said, there are a lot of spaces where not following wisdom works temporarily. But at scale, it crashes and crumbles. Web apps are a good example of this.
> You haven't worked on or serviced any engineering systems, I can tell.
I have worked on compilers and databases the entire world runs on, and the code quality (even before AI) is absolutely garbage.
Real systems built by hundreds of engineers over twenty years do not have clean code.
The LLM is forced to eat its own output. If the output is garbage, its inputs will be garbage in future passes. How the code is structured changes how the LLM implements new features.
Why would “messy” code be garbage? Also, LLMs do a great job even today at assessing what code is trying to do and/or asking you for more context. I think the article is well balanced, though: it’s probably worth it for the next few months to try to help the agent out a bit with code quality and high-level guidance on coding practices. But as the OP says, this is clearly temporary.
The definitions of what is messy or clean will change with LLMs…
But there will always be a spectrum of structures that are better for the LLM to code with, and coding with less optimal patterns will have negative feedback effects as the loop goes on.
I agree with you, but you can dedicate tokens to fixing the bad code that agents produce today. I don’t disagree with anything you’re saying. I think the practical implication is that instead of pain and Jira, we’ll just have dedicated audit-and-refactor token budgets.
I'm dealing with a situation right now where a critical mass of "messy" code means that nobody, human or LLM, can understand what it is trying to do or how a straightforward user-specified update should be applied to the underlying domain objects. Multiple proposed semantics have failed so far.
On the plus side, AI is pretty good at creating (often excessive) tests around a given codebase in order to (re)implement the utility using different backends or structures. The one thing to look out for is that the agent does NOT try to change a failing test where the test is valid but the code isn't.
Nanoseconds matter.
Clean code tends to equal simple code, which tends to equal fast code.
The order of items in memory does matter, as does cache locality. 32 KB fits in L1 cache.
If of course you're talking about web apps, then that's just always been the Wild West.
> Clean code tends to equal simple code, which tends to equal fast code.
Wat? Approximately every algorithm in CS101 has a clean and simple N^2 version, a long menu of complex N*log(N) versions, and an absolute zoo of special cases grafted onto one of the complex versions if you want the fastest code. This pattern generalizes out of the classroom to every corner of industry, but with less clean names and examples. The universal truth is that speed and simplicity are very quick to become opposing priorities. It happens in nanoseconds, one might say.
Cache-aware optimization in particular tends to create unholy code abominations, it's a strange example to pick for clean=simple=fast wishcasting.
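To make that trade-off concrete, here is a minimal Kotlin sketch (illustrative, not from any codebase discussed here): the O(n^2) version is short and obviously correct, while the O(n log n) version already needs recursion, buffers, and index bookkeeping, before any cache- or branch-aware tuning.

    // Clean and simple O(n^2) insertion sort: a few lines, obviously correct.
    fun insertionSort(a: IntArray) {
        for (i in 1 until a.size) {
            var j = i
            while (j > 0 && a[j - 1] > a[j]) {
                val t = a[j]; a[j] = a[j - 1]; a[j - 1] = t
                j--
            }
        }
    }

    // Faster O(n log n) merge sort: recursion, extra buffers, index bookkeeping.
    // Already several times the code, and real libraries graft special cases on top.
    fun mergeSort(a: IntArray): IntArray {
        if (a.size <= 1) return a
        val mid = a.size / 2
        val left = mergeSort(a.copyOfRange(0, mid))
        val right = mergeSort(a.copyOfRange(mid, a.size))
        val out = IntArray(a.size)
        var i = 0; var j = 0; var k = 0
        while (i < left.size && j < right.size) {
            out[k++] = if (left[i] <= right[j]) left[i++] else right[j++]
        }
        while (i < left.size) out[k++] = left[i++]
        while (j < right.size) out[k++] = right[j++]
        return out
    }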
I'm not sure you are considering the patterns actually used in "Clean Code" architectures... which in practice create a lot of, admittedly consistent, levels of interface abstraction. This is not what I would consider simple/KISS, or particularly easy to maintain through time and feature bloat.
I tend to prefer feature-oriented structures as an alternative, which I do find simpler and easy enough to refactor over time as complexity is required and not before.
https://www.computerenhance.com/p/clean-code-horrible-perfor...
Nanoseconds matter in some minuscule number of high-frequency and algorithmic trading use cases. They do not matter in the majority of finance applications. No consumer finance use case cares about nanoseconds. The vast majority of money is moved via ACH, which clears via fixed-width text files shared over SFTP, processed once a day. Nanoseconds do not matter here.
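For readers who haven't seen this kind of format: a minimal Kotlin sketch of parsing one fixed-width record. The 94-character record length matches NACHA-style files, but the field names and offsets below are invented for illustration; real code takes them from the published spec.

    // Hedged sketch: parse one line of a fixed-width batch file.
    // Offsets are illustrative, NOT the real NACHA layout.
    data class EntryRecord(
        val recordType: Char,
        val routingNumber: String,
        val accountNumber: String,
        val amountCents: Long,
    )

    fun parseEntry(line: String): EntryRecord {
        require(line.length == 94) { "expected a 94-character record, got ${line.length}" }
        return EntryRecord(
            recordType = line[0],
            routingNumber = line.substring(3, 12).trim(),
            accountNumber = line.substring(12, 29).trim(),
            amountCents = line.substring(29, 39).trim().toLong(),
        )
    }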
Humans are quite capable of bankrupting financial companies with coding issues. Knight Capital Group introduced a bug into their high-frequency trading system during a deployment; 45 minutes later, they were effectively bankrupt.
I actively use AI to refactor a poorly structured two-million-line Java codebase. A two-sentence prompt does not work. At all.
I think the OP is right; the problem is context. If you have a nicely modularized codebase where the LLM can neatly process one module at a time, you're in good shape. But two million lines of spaghetti requires too much context. The AI companies may advertise million-token windows, but response quality drops off long before you hit the end.
You still need discipline. Personally I think the biggest gains in my company will not come from smarter AIs, but from getting the codebase modularized enough that LLMs can comfortably digest it. AI is helping in that effort but it's still mostly human driven - and not for lack of trying.
You might be pleasantly surprised if you haven’t yet.
It's useful, but not "give it a two-sentence prompt" useful.
This fact alone insinuates that the idea of having unlimited memory or unlimited CPU clocks is just wrong.
[0]: And TypeScript, technically. But I'd consider TypeScript a fork of JavaScript rather than a new language.
I started a side project that was supposed to be 100% vibe coded (because I have a similar view to yours). I'm using Go and Bubble Tea for a TUI interface. I wanted mouse interaction, etc. It turns out it defaulted to Bubble Tea 1.0 (instead of 2.0). The mouse clicks were all landing between 1 and 3 lines below where the actual buttons were. I kept telling it that the math must be wrong, and then telling it to use Bubble objects to avoid all this crazy math.
I am now hand-coding the UI because the vibe-coded method does not work.
I then looked at the db-agent I was designing. I explicitly told it to create SQL using the LLM, and it does. But the ACTUAL SQL that it persists to the project comes from a separate SQL generator that it wrote by hand. The LLM-generated SQL that gets displayed on screen looks perfect; then, when it comes down to committing it to the database, it runs an alternative DDL generator with lots of hard-coded CREATE TABLE syntax, etc. It's actually a beautiful DDL generator, for something written in, like, 2015, but I ONLY wanted the LLM to do it.
I started screaming at the agent. I think when they do take over I might be high up on their hit list.
Just anecdata. I still think in a year or two, we'll be right about clean code not mattering, but 2026 might not be that year.
I think clean architecture matters a lot, even more so than before. I get that you can just rewrite stuff, but that comes with inherent risk, even in the age of agents.
Supporting production applications with low MTTR is what matters a lot to me. If you are relying entirely on your agent to identify and fix a production defect, I'd argue you are out at sea in a very scary place (comprehension debt and all that). It is in these cases that architecture and organization matter, so you can trace the calls and see what's broken. I get that the code is largely a black box as fewer and fewer people review the details, but you still have to review the architecture and design, and that's not going away. To me, things like SRP, SOLID, and DRY are more important than ever.
Our company makes extensive use of architectural linters -- Konsist for Android and Harmonize for Swift. At this point we have hundreds of architectural lint rules that the AI will violate, read, and correct. We also describe our architecture in a variety of skills files. I can't imagine relying solely on markdown files to keep consistency throughout our codebase, the AI still makes too many mistakes or shortcuts.
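For anyone unfamiliar with such architectural linters, a representative Konsist rule looks roughly like the following Kotlin test (the package and naming convention here are illustrative, not the commenter's actual rules):

    import com.lemonappdev.konsist.api.Konsist
    import com.lemonappdev.konsist.api.ext.list.withNameEndingWith
    import com.lemonappdev.konsist.api.verify.assertTrue
    import org.junit.jupiter.api.Test

    class ArchitectureTest {
        @Test
        fun `classes ending in UseCase must live in a domain package`() {
            // Scan the whole project and assert the convention holds everywhere.
            Konsist.scopeFromProject()
                .classes()
                .withNameEndingWith("UseCase")
                .assertTrue { it.resideInPackage("..domain..") }
        }
    }

An agent that breaks the convention gets a failing test with a readable message, which is exactly the violate-read-correct loop described above.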
In my experience, one reason for unnecessarily complex solutions during vibe coding is the incremental work pattern. Most users don't spend much time designing the solution, but instead jump quickly to implementation and then iterate. When doing that, the models seem prone to applying more short-sighted patches to existing code instead of doing a larger refactor that would simplify it all.
Other than spending more time on design, I also usually ask the agent to spawn a few subagents to review an implementation from different perspectives (readability, simplicity, maintainability, modularity, etc.), then aggregate and analyze their proposals and prioritize. It's not a silver bullet, and many times there are no objectively right answers, but it works surprisingly well.
IME the best approach is keeping prompts as narrow and constrained as possible. It's better to structure your own methods, etc., in a DD, and have implementation details fleshed out under supervision, rather than letting the agent create 30 new methods with unwanted/unused/unprompted arguments. Once it starts thinking for more than a minute, I'm already a bit worried, and having it suggest or commit more than 100 lines at a time almost always ends up with more than what I asked for. I'm sure folks are using it for much larger tasks than what I'm referring to, but this is my experience with small projects.
This article is more about how to get LLMs to adhere to existing definitions. I was hoping this would explore some re-definitions of "clean code".
DRY is a principle that comes up frequently. But is repetition really that bad when LLMs can trivially edit all instances of a pattern and keep them in sync? LLMs, by contrast, cannot understand a leaky abstraction - the typical result when you hastily apply DRY. So "clean code" in an era of LLMs might mean more explicit and repetitive, less abstract.
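A small Kotlin sketch of that trade-off (names invented for illustration). The repetitive version is trivial for an LLM to grep and edit in sync; the hastily DRYed version leaks each caller's rules through configuration flags:

    // Explicit and repetitive: two near-identical validators, easy to keep in sync.
    fun isValidEmail(s: String): Boolean =
        s.isNotBlank() && s.contains('@') && s.length <= 254

    fun isValidUsername(s: String): Boolean =
        s.isNotBlank() && s.all { it.isLetterOrDigit() } && s.length <= 32

    // Hasty DRY: one "generic" validator whose flags leak every caller's rules.
    // This is the leaky abstraction that neither humans nor LLMs reason about well.
    fun isValidField(
        s: String,
        requireAt: Boolean = false,
        alphanumericOnly: Boolean = false,
        maxLength: Int,
    ): Boolean =
        s.isNotBlank() &&
            (!requireAt || s.contains('@')) &&
            (!alphanumericOnly || s.all { it.isLetterOrDigit() }) &&
            s.length <= maxLength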
Ever since AI coding became a thing, Clean Code advocates have been trying to get LLMs to conform. I was hoping this submission would declare "Success!" and show how he did it, but sadly it's devoid of anything actionable.
I'm not a fan of Clean Code[1], but the one tip I can give is: don't instruct the LLM to write code in the style of Clean Code by Robert Martin. Itemize all the things you view as clean code, and put that in CLAUDE.md or wherever. You'll have better luck that way.
[1] I'm also not as anti-Uncle-Bob as some are.
4. Iterate on an AGENTS.md (or any other similar file you can reuse) that you keep updating every time the agent does something wrong. Don't make an LLM write this file; write it in your own words. Iterate on it whenever the agent does something wrong, then retry the same prompt to verify it actually steers the agent correctly. Eventually you'll build up a relatively concise file with your personal "coding guidelines" that the agent can stick to with relative ease.
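A hypothetical excerpt of such a file, with wording invented purely for illustration:
- Never add a public method that only tests call; test through the existing API.
- Wrap third-party exceptions at module boundaries; never let them leak across layers.
- Prefer extending an existing module over creating a new one; if a new module is unavoidable, register it in the build file and add a failing test before writing code.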
The last two weeks with Claude have been a nightmare for code quality; it outright ignores standards (in CLAUDE.md). Just yesterday I was reviewing a PR from a coworker where it had undone some compliant code, and then proceeded to struggle with exactly what the standards were designed to address.
I threw in the towel last night and switched to codex, which has actually been following instructions.
In my experience, as long as you set up a decent set of agent definitions and a good skillset, and work in an already pretty clean codebase with established standards, the code quality an agent outputs is actually really good.
Couple that with a self-correcting loop (design -> code -> PR review -> QA review in Playwright MCP -> back to code, etc.), orchestrated by a swarm coordinator agent, and the quality increases even further.
It's important to remember humans have shipped slop too, and code that isn't clean.
When the training spans code with varying styles, it is going to take effort to get this technology performing in a standardized way, especially when what's possible changes every 3 months.
- Adhere to rules in "Code Complete" by Steve McConnell.
- Adhere to rules in "The Art of Readable Code" by Dustin Boswell & Trevor Foucher.
- Adhere to rules in "Bugs in Writing: A Guide to Debugging Your Prose" by Lyn Dupre.
- Adhere to rules in "The Elements of Style, Fourth Edition" by William Strunk Jr. & E. B. White
e.g., mentioning Elements of Style and Bugs in Writing certainly has helped our review LLM to make some great suggestions about English documentation PRs in the past.
(BTW: I was not around then, so if I'm guessing wrong here please correct me!)
Over time compilers have gotten better and we're now at the point where we trust them enough that we don't need to review the Assembly or machine code for cleanliness, optimization, etc. And in fact we've even moved at least one abstraction layer up.
Are there mission-critical inner loops in systems these days that DO need hand-written C or Assembly? Sure. Does that matter for 99% of software projects? Negative.
I'm extrapolating that AI-generated code will follow the same path.
You can't really run the experiment because to do it you have to isolate a bunch of software engineers and carefully measure them as they go through parallel test careers. I mean I guess you could measure it but it's expensive and time consuming and likely to have massive experimental issues.
Although now you can sort of run the experiment with an LLM. Clean code vs unclean code. Let's redefine clean code to mean this other thing. Rerun everything from a blank state and then give it identical inputs. Evaluate on tokens used, time spent, propensity for unit tests to fail, and rework.
The history of science and technology is people coming up with simple but wrong untestable theories which topple over once someone invents a thingamajig that allows tests to be run.
Instead, you should approach it as if instructing the agent to write "perfect" code (whatever that means in the context of your patterns and practices, language, etc.).
How should exceptions be handled? How should parameters be named? How should telemetry and logging be added? How should new modules to be added? What are the exact steps?
Do not let the agent randomly pick from your existing codebase unless it is already highly consistent; tell it exactly what "perfect" looks like.
No more than the exact order of items being placed in main memory matters now. This used to be a pretty significant consideration in software engineering until the early 1990s. This is almost completely irrelevant when we have ‘unlimited’ memory.
Similarly generating code, refactoring, implementing large changes are easy to a point now that you can just rewrite stuff later. If you are not happy about how something is designed, a two sentence prompt fixes it in a million line codebase in thirty minutes.
I think complex systems will still turn into a big ball of mud and AI agents will get just as bogged down as humans when dealing with it. And even though re-build from scratch is cheaper than ever, it can't possibly be done cheaply while also remembering the millions+ of specific characteristics that users will have come to rely on.
Maybe if you pushed spec-driven development to the absolute extreme, but i don't think pushing it that far is easy/cheap. Just as the effort to go from 90% unit test coverage to 100% is hard and possibly not worth it, I expect a similar barrier around extreme spec-driven.
Clarification: I'm advocating clean code in the generic sense, not Uncle Bob's definition.
There are fundamental truths about complex systems that go beyond "coding". Patterns can be experienced in nature where engineering principals and "prevailing wisdom" are truer than ever.
I suggest you take some time to study systems that are powering critical infrastructure. You'll see and read about grizzled veterans that keep them alive. And how they are even more religious about clean engineering principals and how "prevailing wisdom" is very much needed and will always be needed.
That said there are a lot of spaces where not following wisdom works temporarily. But at scale, it crashes and crumbles. Web-apps are a good example of this.