Every layer of review makes you 10x slower (apenwarr.ca)

by greyface- 315 comments 577 points

[−] onion2k 60d ago
But you can’t just not review things!

Actually you can. If you shift the reviews far to the left, call them code design sessions instead, raise problems in dailies, and pair program through the gnarly bits, then 90% of what people think a review should find goes away. The expectation that you'll discover bugs and architecture and design problems doesn't exist if you've already agreed with the team what you're going to build. The remaining 10% of things like var naming, whitespace, and patterns can be checked with a linter instead of a person. If you can get the team to that level you can stop doing code reviews.

You also need to build a team that you can trust to write the code you agreed you'd write, but if your reviews are there to check someone has done their job well enough then you have bigger problems.

[−] alkonaut 60d ago
This falls for the famous "hours of planning can save minutes of coding". Architecture can't (all) be planned out on a whiteboard; it's the response to difficulties you only realize as you try to implement.

If you can agree on what to build and how to build it, and it then turns out that actually is a working plan - then you are better than me. That hasn't happened in my 20 years of software development. Most of what's planned falls down within the first few hours of implementation.

Iterative architecture meetings will be necessary. But that falls into the pit of weekly meetings.

[−] ventana 60d ago
That's actually one thing that always prevented me from following the standard pathway of "write a design document first, get it approved, then execute" during my years at Google.

I cannot write a realistic, non-hand-wavy design document without having a proof of concept working, because even if I try, I will need to convince myself that this part and this part and that part will work, and the only way to do that is to write actual code. Then you pretty much have the code ready, so why bother writing a design doc?

Some of my best (in terms of perf consequences) design documents were either completely trivial from the code complexity point of view, so that I did not actually need to write the code to see the system working, or were written after I already had a quick and dirty implementation working.

[−] eyelidlessness 60d ago
It’s a muscle you can exercise, and doing so helps you learn what to focus on so it’ll be successful. IME a very successful approach is to focus on interfaces, especially at critical boundaries (critical for your use case first, then critical for your existing design/architecture).

Doing this often settles the design direction in a stable way early on. More than that, it often reveals a lot of the harder questions you’ll need to answer: domain constraints and usage expectations.

Putting this kind of work upfront can save an enormous amount of time and energy by precluding implementation work on the wrong things, and ruling out problematic approaches for both the problem at hand as well as a project’s longer term goals.

[−] 2OEH8eoCRo0 60d ago
I've worked waterfall (defense) and while I hated it at the time I'd rather go back to it. Today we move much faster but often build the wrong thing or rewrite and refactor things multiple times. In waterfall we move glacially but what we would build sticks. Also, with so much up front planning the code practically writes itself. I'm not convinced there's any real velocity gains in agile when factoring in all the fiddling, rewrites, and refactoring.

> Most of what's planned falls down within the first few hours of implementation.

Not my experience at all. We know what computers are capable of.

[−] sodapopcan 60d ago
Pair programming 100% of the time also works. It's unfortunately widely unpopular, but it works.
[−] dotancohen 60d ago

> Most of what's planned falls down within the first few hours of implementation.

Planning is priceless. But plans are worthless.
[−] apexalpha 60d ago
This might be true for tech companies, but the tech department I am in at a large government could absolutely architect away >95% of the 'problems' we are fixing at the end of the SDLC.
[−] AIorNot 60d ago
“Everyone has a plan until they get punched in the mouth" - Mike Tyson
[−] gunsle 60d ago
Agreed completely. I’ve worked at a couple of places that want to design-session everything to death and then meticulously convert that design by committee into story requirements, to the point that no actual engineering is even needed from the engineer. On top of that, the usual problem occurs: it turns out there actually were a lot of unknowns, and now the 2-4 hours you spent with 5-10 other people meticulously crafting the story and execution plan have been completely wasted as the requirements shift, and the design with them. It infuriates me to no end that others within the org don’t see how frequently we redo these meticulously written stories and what a waste of time that is.
[−] teeray 60d ago
Code reviews are a volunteer’s dilemma. Nobody is showered with accolades for putting “reviewed a bunch of PRs” on their performance review, compared with “shipped a bunch of features.” The two go hand in hand, but rewards follow marks of authorship, despite how much reviewers influence what actually lands in production.

Consequently, people tend to become invested in reviewing work only once it’s blocking their work. Usually, that’s work that they need to do in the future that depends on your changes. However, that can also be work they’re doing concurrently that now has a bunch of merge conflicts because your change landed first. The latter reviewers, unfortunately, won’t have an opinion until it’s too late.

Fortunately, code is fairly malleable. These “reviewers” can submit their own changes. If your process has a bias towards merging sooner, you may merge suboptimal changes. However, it will converge on a better solution more quickly than if your changes live in a vacuum for months on a feature branch passing through the gauntlet of a Byzantine review and CI process.

[−] thot_experiment 60d ago
Valve is one of the only companies that appears to understand this, as well as that individual productivity is almost always limited by communication bandwidth, and communication burden grows superlinearly as nodes in the tree/mesh grow linearly. [quadratic when fully connected, or some derated exponent since it doesn't need to be fully connected]
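To make that scaling concrete, here's a quick sketch for the fully connected case (the function name is mine, not from the thread): each new person adds a channel to everyone already there, so channels grow as n(n−1)/2 while headcount grows linearly.

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a fully connected team of n people."""
    return n * (n - 1) // 2

# Headcount grows linearly; coordination channels grow quadratically.
for n in (2, 5, 10, 50):
    print(n, channels(n))
```

Doubling the team from 5 to 10 people more than quadruples the channels (10 to 45), which is the sense in which hiring linearly buys communication burden superlinearly.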
[−] lelanthran 61d ago
I wonder where the reviewer worked where PRs are addressed in 5 hours. IME it's measured in units of days, not hours.

I agree with him anyway: if every dev felt comfortable hitting a stop button to fix a bug then reviewing might not be needed.

The reality is that any individual dev will get dinged for not meeting a release objective.

[−] yason 60d ago
One thing that often gets dismissed is the value/effort ratio of reviews.

A review must be useful, and the time spent on reviewing, re-editing, and re-reviewing must improve the quality enough to warrant it. Even long and strict reviews are worth it if they actually produce near-bugless code.

In reality, that's rarely the case. Too often, reviewing goes down the rabbit hole of various minutiae, and the time spent reaching a compromise between what the programmer wants to ship and what the reviewer can agree to pass is not worth the effort. The time would be better spent on something else if the process doesn't yield substantial quality. Iterating a review over and over to hone it into one interpretation of perfection will only bump the change into the next 10x bracket in the wall-clock timeline mentioned in this article.

In the adage of "first make it work, then make it correct, and then make it fast", a review only needs to require that the change reaches the first step - in other words, to prevent breaking something, or the development going in an obviously wrong direction straight from the start. If the change works, maybe with caveats but still works, then all is generally fine enough that the change can be improved in follow-up commits. For this, the review doesn't need to go into thorough detail: a few comments to point the change in the right direction are often enough. That kind of review is a very efficient use of time.

Overall, in most cases a review should be a very short part of the development process. Most of the time should be spent programming and not in review churn. A review serves as a quick check-point that things are still going the right way but it shouldn't dictate the exact path that should be used in order to get there.

[−] swiftcoder 60d ago

> The job of a code reviewer isn't to review code. It's to figure out how to obsolete their code review comment, that whole class of comment, in all future cases, until you don't need their reviews at all anymore

Amen brother

[−] alkonaut 60d ago
I think this makes an assumption early on which is that things are serialized, when usually they are not.

If I complete a bugfix every 30 minutes, and submit them all for review, then I really don't care whether the review completes 5 hours later. By that time I have fixed 10 more bugs!

Sure, getting review feedback 5 hours later will force me to context switch back to 10 bugs ago and try to remember what that was about, and that might mean spending a few more minutes than necessary. But that time was going to be spent _anyway_ on that bug, even if the review had happened instantly.

The key to keeping speed up in slow async communication is just working on N things at the same time.
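A back-of-the-envelope sketch of that pipelining argument, using the comment's numbers (30 minutes per fix, 5-hour review latency) plus an assumed 6-minute context-switch cost when feedback comes back:

```python
def serialized_minutes(bugs: int, fix: int, review: int) -> int:
    """Wait for each review to finish before starting the next fix."""
    return bugs * (fix + review)

def pipelined_minutes(bugs: int, fix: int, review: int, switch: int) -> int:
    """Keep fixing new bugs while earlier reviews are in flight; pay a small
    context-switch cost when each review's feedback arrives."""
    coding = bugs * (fix + switch)
    # The last fix goes up when coding ends; its review lands `review` later.
    return coding + review

# 10 bugs, 30 min each, 5-hour review latency, 6 min to switch back per review.
print(serialized_minutes(10, 30, 300))    # 3300 minutes (55 hours)
print(pipelined_minutes(10, 30, 300, 6))  # 660 minutes (11 hours)
```

Under these assumptions the review latency is paid once rather than ten times: the developer eats ten small context switches instead of ten five-hour stalls.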

[−] kkl 60d ago

> The job of a code reviewer isn't to review code. It's to figure out how to obsolete their code review comment, that whole class of comment, in all future cases, until you don't need their reviews at all anymore.

Making entire classes of issues effectively impossible is definitely the ideal outcome. But, this feels much more complicated when you consider that trust doesn't always extend beyond the company's wall and you cannot always ignore that fact because the negative outcomes can be external to the company.

What if I, a trusted engineer, run npm update at the wrong time and malware makes its way into production and user data is stolen? A mistake to learn from, for sure, but a post-mortem is too late for those users.

I'm certainly not advocating for relying on human checks everywhere, but reasoning about where you crank the trust knob can get very complicated or costly. Occasionally a trustworthy human reviewer can be part of a very reasonable control.

[−] dominicrose 60d ago
Managers are expected to say we should be productive, yet they're responsible for the framework that slows everyone down, and it's quite clear they're perfectly fine with that framework. I'm not saying it's good or bad, because it's complicated.
[−] trigvi 60d ago
Excellent article. Based on personal experience, if you build cutting edge stuff then you need great engineers and reviewers.

But for anything else, you just need an individual (not a team) who's okay (not great) at multiple things (architecting, coding, communicating, keeping costs down, testing their stuff). Let them build and operate something from start to finish without review. Judge it by how well their product works.

[−] SmashngKbds 60d ago
This was well written and it hit hard. It brought clarity to an uneasy collection of gut feels, anecdotal evidence, and fundamental beliefs like nothing else. What I'm feeling might even be categorized as an epiphany.

I've always liked Tailscale as a product and now I might be a fan of their CEO too. Who knew?

I'll be sharing this post widely. Avery - if you're on here, thanks for writing this!

[−] gwern 59d ago
The 10x claim reminds me of various scaling laws in anthropology/sociology, especially the much-debated Haire cube-root scaling law (eg https://gwern.net/doc/sociology/1959-haire.pdf https://gwern.net/doc/economics/1983-stephan.pdf ). I wonder if one could give a derivation from queuing theory where you have an argument along the lines of 'in indefinitely many layers of queues of queues, the total delay is minimized at 90% load per queue which yields a 10x scale-free slowdown'?
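For a single queue, the textbook M/M/1 result already has that shape: mean time in system is 1/(μ−λ), so the slowdown relative to an unloaded server is 1/(1−ρ), which is exactly 10x at 90% utilization. A sketch of that standard formula (not a derivation of the layered, scale-free case the parent speculates about):

```python
def mm1_slowdown(rho: float) -> float:
    """M/M/1 queue: mean time in system divided by the unloaded service
    time equals 1 / (1 - rho), where rho is utilization (offered load)."""
    assert 0 <= rho < 1, "queue is unstable at rho >= 1"
    return 1 / (1 - rho)

# At 90% load, every job takes roughly 10x as long as on an idle server.
for rho in (0.5, 0.9, 0.99):
    print(rho, mm1_slowdown(rho))
```

Whether stacking indefinitely many such queues preserves a single 10x factor, rather than compounding it, is the open part of the question.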
[−] TheChelsUK 60d ago
That’s because most teams are doing engineering wrong.

The handover to a peer for review is a falsehood. PRs were designed for open-source projects to gatekeep public contributors.

Teams should be doing trunk-based development, group/mob programming and one piece flow.

Speed is only one measure and AI is pushing this further to an extreme with the volume of change and more code.

The quality aspect is missing here.

Speed without quality is a fallacy and it will haunt us.

Don’t focus on speed alone, or on the need to always be busy picking up the next item - focus on quality and throughput, keeping work in progress to a minimum (1). Deliver meaningful, reasoned changes as a team, together.

[−] tptacek 61d ago
Neither before coding agents nor after has any PR taken me 5 hours to review. Is the delay here coordination/communication issues, the "Mythical Man-Month" stuff? I could buy that.
[−] pu_pe 60d ago
Nice piece, and rings true. I also think startups and smaller organizations will be able to capture better value out of AI because they simply don't have all those approval layers.
[−] abtinf 61d ago
I find this to be true for expensive approvals as well.

If I can approve something without review, it’s instant. If it requires only immediate manager, it takes a day. Second level takes at least ten days. Third level trivially takes at least a quarter (at least two if approaching the end of the fiscal year). And the largest proposals I’ve pushed through at large companies, going up through the CEO, take over a year.

[−] tomasz-tomczyk 58d ago
Yes, reviewing things makes them slower. Not reviewing them produces horrible results. There's a balance to be found, and it depends on the team/org.

I hope we shift engineers closer to users than ever before. Get them to understand users' needs and the actual product more - they'll write better plans and prompts. Review the plans.

Code review becomes less of a thing when the team's on the same page, so regularly align on what the goals are.

Accept post-merge code reviews. Things slip, normalise coming back and saying "actually, we should have done this differently". It's not a bad thing, you're learning!

[−] lukaslalinsky 60d ago
Reviewing things is fast and smooth if things are small. If you have all the involved parties stay in the loop, review happens in real time. Review is only problematic if you split the do and review steps.

The same applies to AI coding: you can choose to pair program with it, and then it's actually helpful, or you can have it generate 10k lines of code you have no way of reviewing. You just need people to understand that switching context kills productivity. If more things are happening at the same time and your memory is limited, the time spent on load/save makes it slower than just doing one thing at a time and staying in the loop.