Thoughts on slowing the fuck down (mariozechner.at)

by jdkoeck 485 comments 1130 points

[−] Towaway69 51d ago
What the article doesn't touch on is the vendor lock-in that is currently underway. Many corps are now moving to an AI-based development process that is reliant on the big AI providers.

Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, prices will start rising. After all, these loss-making AI companies will eventually need to recoup their investments.

Sure, it will perhaps be possible to swap out the underlying AI used to develop the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem - something OPEC has so successfully done for the oil market.

Another issue: once the codebase is agentic and prices shift enough that it becomes significantly cheaper to hire humans again, will those humans be able to understand the agentic codebase? Is this a one-way transition?

I'm sure the AI proponents will explain that technology only gets cheaper and better, so fundamentally it isn't an issue. Just like oil prices and the global economy: fundamentally, everything is getting better.

[−] _the_inflator 51d ago
I have similar concerns.

We will miss SaaS dearly. I think history is repeating itself, just like with DVDs and streaming - we simply bought the same movie twice.

AI increasingly feels the same. Half a year ago Claude Opus was Anthropic's most expensive model - boy, using Claude Opus 4.6 with the 500k context version is like paying a dollar per minute now. My once-decent budgets get hit not after weeks but after days (!) now.

And I am not even using agents or subagents, which would only multiply the costs - for what?

So what we arrive at, more and more, is the same as always: low, medium, and luxury tiers. A boring service with different quality and payment structures.

Proof: you cannot compensate with prompt engineering anymore. A month ago you could fix any model discrepancies by being more clever and elaborate with your prompts.

Not anymore. There is a hidden factor now that accounts for exactly that. It seems the reliance on skills and different tiers simply moves us away from prompt engineering, which is increasingly treated as jailbreaking rather than guidance.

Prompt engineering has lately become so mundane that I wonder what vendors were really doing when analyzing the usage data. It seems vendors tied certain inquiries to certain outcomes modeled by multistep prompting, which was internally reduced to certain trigger sentences - creating the illusion that your prompt produced the result when in fact it didn't.

All you did was ask for the same result thousands of users had asked for before, and the LLM took a statistical approach to deliver it.

[−] simonw 52d ago
Useful context here is that the author wrote Pi, which is the coding agent framework used by OpenClaw and is one of the most popular open source coding agent frameworks generally.
[−] SoftTalker 52d ago

> Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes

One thing about the old days of DOS and original MacOS: you couldn't get away with nearly as much of this. The whole computer would crash hard and need to be rebooted, all unsaved work lost. You also could not easily push out an update or patch --- stuff had to work out of the box.

Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.

Not that I want to go back to DOS, but WordPerfect 5.1 was pretty damn rock solid as I recall.

[−] andai 51d ago
It occurred to me on my walk today that a program is not the only output of programming.

The other, arguably far more important output, is the programmer.

The mental model that you, the programmer, build by writing the program.

And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.

(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)

The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.

See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)

https://www.youtube.com/watch?v=ZSRHeXYDLko

[−] xivzgrev 51d ago
There's currently a billboard up in San Francisco that basically says "use AI to reduce your saas costs".

And I'm thinking - has anyone actually done that for something meaningful?

Replacing salesforce as your crm or replacing Shopify as your e-commerce platform?

I get the hype, but AI doesn't remove accountability, it just moves it up. Oh, you can do with 1 person what 3 people used to do? Great, that 1 person is now accountable for 3 people's jobs. And people are naturally uncomfortable with that - you need to understand what's going on and be able to investigate and fix things. It's different from, say, weaving machines replacing jobs, because weaving machines were consistent: 1 person could confidently produce what x weavers could before. AI is not consistent, and that variability in output and quality introduces massive friction.

So as of now, in both software and people, there's a real limit to how much AI can replace, because the remaining people are still just as accountable.

[−] drzaiusx11 51d ago
The article touches on this but I think the key takeaway is that humans need to properly manage the _scope of work_ for their agentic teams in order to have any chance of a successful outcome.

Current-gen agents need to be given small, actionable units of work that can _easily_ be reviewed by a human. A code deliverable is easy to review if the scope of change is small and aligned with a specific feature or task, not sprawled across multiple concerns. The changes must be ONLY related to the task at hand. If a PR is generated that does two very different things, like fixing linting errors in preexisting code AND implementing feature X, you're doing it wrong. Or rather, you're simply gambling. I'd rather not leave it to chance that I'll miss something in a new 10,000-LOC PR. It's better that a 10,000-LOC PR never existed at all.

YOLOing out massive, sweeping changes with agents exceeds our own (human) "context windows", and as this article points out, we're then left with an inevitable "mess", the untangling of which will take an inordinate amount of time.

[−] youknownothing 51d ago
I understand your pain. We're just at peak hype; I think people will learn to backtrack and use the tool in a more sensible way. It always happens. I remember when MongoDB and other NoSQL databases came out: people went as far as to say that "SQL is dead" and refused to use a normal SQL database for anything, not even for the most obviously relational application. People would store everything as key-value pairs with no schema and do all the joins in the application layer. Fast forward 10 years and we're back to using SQL for most of our applications. NoSQL hasn't disappeared; it has just been reduced to the niche where it's useful.
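
To make the contrast concrete, here's a rough sketch (Python with the built-in sqlite3 module; the tables and numbers are invented for illustration) of the application-layer join we all wrote back then versus just letting the database do it:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
        INSERT INTO users  VALUES (1, 'ada'), (2, 'linus');
        INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25);
    """)

    # The "NoSQL era" way: fetch both collections, join in application code.
    users  = {uid: name for uid, name in db.execute("SELECT id, name FROM users")}
    totals = {}
    for _, user_id, total in db.execute("SELECT id, user_id, total FROM orders"):
        totals[users[user_id]] = totals.get(users[user_id], 0) + total

    # The relational way: one declarative query, the database does the join.
    rows = db.execute("""
        SELECT u.name, SUM(o.total)
        FROM users u JOIN orders o ON o.user_id = u.id
        GROUP BY u.name
    """).fetchall()

    assert totals == dict(rows)  # same answer, far less application code
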
[−] 0xbadcafebee 52d ago

> it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services

As somebody who has been running systems like these for two decades: the software has not changed. What's changed is that before, nobody trusted anything, so a human had to manually do everything. That slowed down the process, which made flaws happen less frequently. But it was all still crap. Just very slow-moving crap, with more manual testing and visual validation. Still plenty of failures, but it doesn't feel like it fails a lot if the failures are spaced far apart on the status page. The "uptime" is time-driven, not bugs-per-lines-of-code driven.

DevOps' purpose is to teach you that you can move quickly without breaking stuff, but it requires a particular way of working, one that emphasizes building trust. You can't just ship random stuff 100x faster and assume it will work. This is what the "move fast and break stuff" people learned the hard way years ago.

And breaking stuff isn't inherently bad - if you learn from your mistakes and make the system better afterward. The problem is, that's extra work that people don't want to do. If you don't have an adult in the room forcing people to improve, you get the disasters of the past month. An example: Google SREs give teams error budgets; the SREs are acting as the adult in the room, forcing the team to stop shipping and fix their quality issues.
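
To put numbers on it, the arithmetic behind an error budget fits in a few lines. A minimal Python sketch; the SLO figure is illustrative, not Google's actual target:

    # An SLO implies a finite allowance of downtime per month.
    slo = 0.999                      # 99.9% availability target (made up)
    minutes_per_month = 30 * 24 * 60

    budget = (1 - slo) * minutes_per_month
    print(f"monthly error budget: {budget:.1f} minutes")  # ~43.2

    # The SRE rule: once incidents have consumed the budget, feature
    # launches stop until reliability work earns it back.
    consumed = 50.0                  # minutes of downtime so far this month
    if consumed >= budget:
        print("budget exhausted: freeze launches, fix quality issues")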

One way to deal with this in DevOps/Lean/TPS is the Andon cord. Famously a cord introduced at Toyota that allows any assembly worker to stop the production line until a problem is identified and a fix worked on (not just the immediate defect, but the root cause). This is insane to most business people because nobody wants to stop everything to fix one problem, they want to quickly patch it up and keep working, or ignore it and fix it later. But as Ford/GM found out, that just leads to a mountain of backlogged problems that makes everything worse. Toyota discovered that if you take the long, painful time to fix it immediately, that has the opposite effect, creating more and more efficiency, better quality, fewer defects, and faster shipping. The difference is cultural.

This is real DevOps. If you want your AI work to be both high quality and fast, I recommend following its suggestions. Keep in mind, none of this is a technical issue; it's a business process issue.

[−] leonardoe 51d ago
Just yesterday I was discussing many of the ideas presented here with a coworker. I had just walked out of a workshop led by $BIGTECHCOMPANY where someone presented the following toy example:

A service goes down. He tells the agent to debug it and fix it. The agent pulls some logs from $CLOUDPROVIDER, inspects the logs, produces a fix and then automatically updates a shared document with the postmortem.

This got me thinking that it's very hard to internalize both the issue and the solution - updating your mental model of the system involved - because there is not enough friction to make you spend time dealing with the problem (coming up with hypotheses, modifying the code, writing the doc). I thought about my very human limitation of having to write things down on paper so that I can better recall them.

Then I recalled something I read years ago: "Cars have brakes so they can go fast."

Even assuming it is now feasible to produce thousands of lines of quality code, there is a limitation on how much a human can absorb and internalize about the changes introduced to a system. This is why we will need brakes -- so we can go faster.

[−] emmitska 51d ago
"Do me a SOLID, YAGNI, give me a DRY KISS" — that's been my coding philosophy for 20 years. So when I came back to building after a long detour, I couldn't stomach watching agents confidently generate 400 lines where 40 would do. What I found is that the discipline was the feature, not the obstacle. I ended up pair programming closely — not because I distrusted the agent, but because I couldn't let go of the architecture. The internet kept telling me to stop going into the weeds. Your article explained why that instinct was right. Everyone else is happy grinding in third the whole race. I went 1, 2, 3 — and because I didn't bury myself getting out of the driveway, I still get to shift into fourth.
[−] badlibrarian 52d ago
I suppose everyone on HN reaches a certain point with these kind of thought pieces and I just reached mine.

What are you building? Does the tool help or hurt?

People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.

After five or six cycles it does become a bit fatiguing. Use the tool sanely. Work at a pace where the reality of the mess you and your team are actually building does not exceed your understanding of what you are building, budgets allowing.

This seldom happens, even in solo hobby projects once you cost everything in.

It's not about agile or waterfall or "functional" or abstracting your dependencies via Podman or Docker or VMware or whatever that nix crap is. Or using an agent to catch the bugs in the agent that's talking to an LLM you have next to no control over that's deleting your production database while you sleep, then asking it to make illustrations for the postmortem blog post you have it write - the one you think elevates your status in the community but probably doesn't.

I'm not even sure building software is an engineering discipline at this point. Maybe it never was.

[−] magicmicah85 51d ago
This entire article is basically saying "What are we doing? What's going on?" and I could not agree more. My own experience with coding agents has been FOMO, because if I don't have fifteen Claude tabs running with OpenClaw, I'm not going to make it. I much prefer keeping myself in the loop and being active in the process over handing it off to a deus ex machina and seeing eventual results that may or may not be what I like.

I do like the tips on how to work with agents for delegation. Let it do boring things. The deterministic things where you know what the result should look like each time.

[−] BloondAndDoom 51d ago
This aligns with my observation from product design point as well.

Product design has a slightly different problem than engineering: because the speed of development is so high, we cannot dogfood and play with new product decisions and features. By the time I've realized we made a stupid design choice and it doesn't really work in the real world, we've already built 4 features on top of it. Everyone makes bad product decisions, but it used to be easy and natural to back out of them.

It’s all about how we utilize these things, if we focus on sheer speed it just doesn’t work. You need own architecture and product decisions. You need to use and test your products with humans (and automate those as regression testing). You need to able to hold all of the product or architecture in your mind and help agents to make the right decisions with all the best practice you’ve learned.

[−] gurachek 51d ago
The compounding booboos bit is the key insight here. Humans are a bottleneck and that bottleneck is actually load-bearing. You feel the pain of bad decisions slowly enough to course correct.

I've been building the same AI product for months - a coaching loop that persists across sessions. Every few weeks someone ships a "competitor" in a weekend. Feature list looks similar. The difference is everything that breaks when a real user comes back for session 3 or 4. Context drifts, scores stop calibrating, plans don't adapt. None of that shows up in a demo. You only find it after sitting in the same codebase for weeks, running real sessions, getting confused by your own data. That's the friction the post is talking about and I don't think you can skip it.

[−] convexly 51d ago
I started writing down any technical decisions I needed to make before implementing them, usually just a sentence or two on what I'm choosing and why. I looked back after 6 months and the pattern was embarrassing: I spent days agonizing over choices that turned out to be totally reversible and made quick decisions on things that actually mattered.
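
In case anyone wants to steal the habit, the whole journal fits in a few lines of Python. A minimal sketch; the field names are just my choice, and the "reversible" flag is the one worth filling in honestly:

    import datetime, json

    # One entry per technical choice, appended to a local log file.
    def log_decision(path, choice, why, reversible):
        entry = {
            "date": datetime.date.today().isoformat(),
            "choice": choice,
            "why": why,
            "reversible": reversible,   # cheap to undo later?
        }
        with open(path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    log_decision("decisions.jsonl",
                 choice="SQLite over Postgres for v1",
                 why="single-user app, no ops budget",
                 reversible=True)
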
[−] rglover 52d ago
Nature will handle this in time. Just expect to see a "Bear Stearns moment" in the software world if this spirals completely out of control (and companies don't take a hint from recent outages).
[−] bigstrat2003 51d ago
I really don't get the author's conclusion here. I agree with his premises: organizations using LLMs to churn out software are turning out terrible quality software. But the conclusion from that shouldn't be "slow down", it should be "this tool isn't currently fit for use, don't use it". It feels like the author starts from the premise of "I want to use AI" and is trying to figure out how to make that work, rather than "I want to make good software" and trying to figure out how to do that.
[−] boxerbk 51d ago
The cognitive surrender study from UPenn highlights the risks of agents producing all of the code - eventually you give up verifying the result. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646

There’s going to be a bottleneck on what is verified because over time we will realize how much tail risk we are creating by simply surrendering our own agency to the agents - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6298838

[−] aerhardt 51d ago
I'm capturing videos of all the bugs I am seeing as of late. The folder is filling fast. I'll write a compilation post but I'm thinking a techno remix video could be fitting too.

If there are any common apps that are unhinged, please do share your experiences. LinkedIn was never great quality, but now it's off the charts. Also catching some on Spotify.

[−] jaffee 52d ago

> You installed Beads, completely oblivious to the fact that it's basically uninstallable malware.

Did I miss something? I haven't used it in a minute, but why is the author claiming that it's "uninstallable malware"?

[−] bluGill 52d ago
I only have so long on earth (I have no idea how long), and I need things to be faster for me. Sometimes that means taking extra time now so problems don't come back to me later.
[−] gmuslera 52d ago
This assumes that only (AI/agentic) stupidity comes into play, with no malice in sight. But if things go wrong because you didn't notice the stupidity, malice will pass through too. And there is a big profit opportunity, and a broad vulnerable market, for malice. It's not just correctness or uptime that's at stake, but bigger risks of vulnerabilities or other maliciously injected content.
[−] gedy 52d ago
It's not even the complexity. You have to realize: many managers and business types think it's just fine to have code no one understands, because AI will handle it.

I don't agree, but the bigger issue to me is that many, even most, companies don't know what they want or think about what the purpose is. So whereas in the past, devs coding something provided some throttle or sanity check, now we just throw shit over the wall even faster.

I'm seeing some LinkedIn lunatics brag about "my idea to production in an hour" and all I can think is: that is probably a terrible feature. No one I've worked with is so good or visionary that that speed even matters.

[−] Vektorceraptor 51d ago
Nice to read a fellow countryman on HN :) "Dere!" I have disabled my coding agent by default. I first try to think, plan, and code something myself, and only when I get stuck or the code gets repetitive do I tell it to do the stuff. But I get what you are saying, and I agree ... I am clearly pro-human in this debate, and the bloated trash everywhere is annoying. I have come to the conclusion that if you find docs on something and they're plain HTML, they will probably be of high quality. If you find docs with a flashy, dynamic, effect-laden and unnecessary 100 MB JS booboo, then you know what you are about to read ...
[−] ketzo 52d ago
I think the core idea here is a good one.

But in many agent-skeptical pieces, I keep seeing this specific sentiment that “agent-written code is not production-ready,” and that just feels… wrong!

It’s just completely insane to me to look at the output of Claude code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.”

Yes, there are still issues, and yes, keeping mental context of your codebase’s architecture is critical, but I’m sorry, it just feels borderline archaic to pretend we’re gonna live in a world where these agents have to have a human poring over every single line they commit.

[−] jbs789 51d ago

> You realize you can no longer trust the codebase.

This cuts to the problem and is excellent framing. A rogue employee can achieve the same, but probably less quickly, and we've designed systems to help catch them early.

[−] Anamon 50d ago

> Because the simple act of having to write the thing or seeing it being built up step by step introduces friction that allows you to better understand what you want to build [...]

I would go further and remove that second option. If the code is important, LLM support or not, write it yourself.

At least for me, there is a clear qualitative difference in thinking between typing the code and watching it being typed, even if I follow along with every line.

If I type it, my brain is constantly questioning whether what I'm doing is correct. What are the edge cases here? Is this introducing a vulnerability? Am I getting the right data from the right place?

By watching an agent or someone else code, the mindset is different. I'm checking someone else's work under the implicit assumption that they have some idea of what they're doing and I'm just reviewing mostly for superficial stuff. I can force myself to ask those other questions, but it takes conscious effort and isn't sustainable over long sessions.

I play around with agentic coding, but I'm always shocked at how much worse the result is compared to working in a separate chat and typing (not pasting!) the suggestions. In the direct comparison, it's easy to see how agentic code turns so incredibly shit so ridiculously fast.

[−] gchamonlive 52d ago
I think before we can even entertain the thought of slowing the fuck down, we need to seriously consider divorcing productivity. Or at least asking for a break, so you can go for a walk in the park, meet some friends, and reflect on how you are approaching development.

I think this is very good take on AI adoption: https://mitchellh.com/writing/my-ai-adoption-journey. I've had tremendous success with roughly following the ideas there.

> The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation.

That's partially true. I've also had instances where I could have very well done a simple change by myself, but by running it through an agent first I became aware of complexities I wasn't considering and I gained documentation updates for free.

Oh, and the best part: if in three months I'm asked to compile a list of things I did, I can just look at my session history, cross-reference it with the development history in my repositories, and paint a very good picture of what I've achieved. I can even reconstruct the decision process behind designing the solution.

It's always a win to run things through an agent.

[−] 6510 51d ago
I keep returning to this thought: Assuming our abstraction architecture is missing something fundamental, what is it?

My gut says something simple is missing that makes all of the difference.

One thought I had was that our problem lives between all the things taking something in and spitting something out. Perhaps 90% of the work of writing a "function" should be to formally register it as taking in data types foo 1.54.32 and bar 4.5.2 and returning baz 42.0. The registry would then tell you all the things you can make from baz 42.0 and the other data you have. A comment(?) above the function has a checksum that prevents anyone from changing it.
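
Here's roughly what that could look like as a toy Python sketch. All the names and version strings are invented, and the checksum here only detects edits rather than preventing them:

    import hashlib, inspect

    REGISTRY = {}   # (sorted input types) -> list of (output type, function)

    def register(takes, returns):
        def wrap(fn):
            # Fingerprint the source so the registry notices silent edits.
            src = inspect.getsource(fn)
            fn.checksum = hashlib.sha256(src.encode()).hexdigest()
            REGISTRY.setdefault(tuple(sorted(takes)), []).append((returns, fn))
            return fn
        return wrap

    @register(takes=["foo@1.54.32", "bar@4.5.2"], returns="baz@42.0")
    def combine(foo, bar):
        return foo + bar

    # "Given the data I have, what can I make?"
    def producible(have):
        return [out for out, _ in REGISTRY.get(tuple(sorted(have)), [])]

    print(producible(["foo@1.54.32", "bar@4.5.2"]))   # ['baz@42.0']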

But perhaps the solution is something entirely different. Maybe we just need a good set of opcodes and have abstractions represent small groups of instructions that can be combined into larger groups until you have decent higher-level languages, with the only difference being that one can read what the abstraction actually does. The compiler can figure lots of things out, but it won't do architecture.

[−] yrashk 51d ago
I've been working on some parts of this problem, specifically capturing and retaining other semantically useful layers of the systems we build as we build and maintain them.

By introducing progressive semantically enriching layers (starting with prose, reasoning and terminology and going all the way into specifying interaction surfaces), we can reduce the dark matter between spec and code, make code more disposable – if your semantics live in the spec layer rather than the implementation, you can throw away and regenerate the implementation without losing understanding – and, critically, give LLMs a way to navigate a graph of knowledge instead of gobbling up walls of text.

https://clayers.com -- https://github.com/CognitiveLayers/clayers

[−] ramon156 51d ago
Now that pop media is finally letting go a bit of the "AI is the new X!" topic, I'm starting to notice a few more high-quality posts seeping through. This is one of them.

I really want to read people's perspectives on LLMs; it was just impossible to find quality when everyone wanted to give their opinion. This is worst on LinkedIn, where mentioning AI gives you free "brownie points" (I have yet to figure out what managers gained from this). I don't care what you use it for, unless you have a new perspective I can ponder.

Regardless, nothing is black and white, and most things are a shade of grey. LLMs have been a net positive for me, making the call to action for working on something a lot simpler. Although I do end up refactoring my day away (which I am fine with; I quite enjoy putting the dots on the i's).

[−] trinsic2 51d ago

> And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.

This is a great point.

I have been avoiding LLMs for a while now, but realized I might want to try working on a small PDF-book-to-Markdown conversion project[0]. I like Claude Code because it's command-line based. I'm realizing you really need to architect with very precise language to avoid mistakes.

I didn't try to have one prompt do everything at once. I prompted Claude Code to do the conversion process section by section of the document. That seemed to reduce the mistakes the agent would make.

[0]: https://www.scottrlarson.com/publications/publication-my-fir...
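
For anyone wanting to try the same approach, the loop is conceptually tiny. A Python sketch; run_agent is a placeholder for however you invoke your agent, not a real API:

    # One small, reviewable prompt per section instead of one giant
    # "convert the whole book" prompt.
    def run_agent(prompt: str) -> str:
        raise NotImplementedError("wire this up to your coding agent")

    def convert_book(sections: list[str]) -> list[str]:
        converted = []
        for i, text in enumerate(sections, 1):
            prompt = (
                f"Convert section {i} of this PDF extract to Markdown. "
                "Preserve headings, lists and footnotes exactly. "
                "Do not summarize or reorder.\n\n" + text
            )
            converted.append(run_agent(prompt))  # review before moving on
        return converted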

[−] Zachzhao 51d ago

> Coding agents are sirens, luring you in with their speed of code generation and jagged intelligence, often completing a simple task with high quality at breakneck velocity. Things start falling apart when you think: "Oh golly, this thing is great. Computer, do my work!".

But the rough edges are temporary. Coding agents are becoming superhuman along certain dimensions; the progress is staggering. As Andrej Karpathy put it, anything measurable or legible can be optimized by AI. The gaps will close fast.

The harder question is HCI. How do you expose this kind of intelligence in interfaces that actually align with human values? That's the design problem worth obsessing over.

[−] cobbzilla 51d ago
articles like these make me think that coding with AI is a little bit like writing Perl code: if you know what you’re doing, you can do brilliant things very quickly, but if you don’t, you can make spaghetti very quickly.
[−] shevy-java 52d ago

> While all of this is anecdotal, it sure feels like software has become a brittle mess

That may be the case for software that AI leaks into, but not every software developer uses or depends on AI. So not all software has become more brittle.

Personally I try to avoid any contact with software developers using AI. This may not be possible, but I don't want to waste my own time "interacting" with people who aren't really the ones writing code anymore.

[−] casey2 51d ago
People always talk about velocity and speed, debating slowing down versus speeding up. But the wider tech industry hasn't solved any real problems in decades; even in mobile, things are pretty much the same. We are well into the optimization stage.

AI is the only growth industry of the last decade, and it's the only thing people talk about. We've been without growth for so long that people are scared of it now.

[−] aswegs8 51d ago
I love the use of the term clanker. There is just no one there that can be offended by this.
[−] Hackbraten 51d ago

> There were precursors like Aider and early Cursor, but they were more assistant than agent.

I use Aider on my private computers and Copilot at work. Both feel equally powerful when configured with a decent frontier model. Are they really generations apart? What am I missing?

[−] voidUpdate 51d ago
I feel like people are getting too comfortable saying "clanker". It's a word that was literally conceived as a slur against a group, but I guess people feel OK using it because it's not aimed at humans?
[−] riazrizvi 51d ago
This is what I call content based on 'garbage', because garbage is a random collection of people's stuff. You can try to make sense of, and comment on, a society through its garbage dump, but it's pretty superficial. It doesn't tell you a lot about any real person's motivations, so it's not a great basis for commenting on real people. The OP's comments are on the collection of things they happen to come across through news and social media. Sure, it looks like a lot is happening, but look at any one person's or business's approach and it will make a lot more sense. Yes, I realize people are producing content that appeals to the 'garbage' mindset, but it's obviously theater. A system that writes 10,000 lines of code for you a week is headline theater.
[−] atemerev 51d ago
I expected this to be yet another anti-AI rant, but the guy is actually right. You should guide the agents, and this is a full-time job where you have to think hard.
[−] impulser_ 51d ago
I think this post should be directed at every TypeScript developer.

I think a lot of this is just TypeScript developers. I bet if you removed them from the equation, most of the problems he's writing about would go away. TypeScript developers didn't even understand what React was doing without an agent; now they are one-shot prompting features, web apps, CLIs, and desktop apps and spitting them out to the world.

The prime example of this is literally Anthropic. They are pumping out features, apps, and CLIs, and EVERY single one of them is released broken.

[−] ontouchstart 52d ago
I am "playing" with both pi and Claude (in docker containers) with local llama.cpp and as an exercise, I asked both the same question and the results are in this gist:

https://gist.github.com/ontouchstart/d43591213e0d3087369298f...

(Note: pi was written by the author of the post.)

Now it is time to read them carefully without AI.

[−] ramesh31 51d ago
Why is it that every single one of these think pieces feel terminally 3 months behind on the times?
[−] markus_zhang 52d ago
If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent - you are literally helping remove as many jobs as possible, from your colleagues and from yourselves, not in the long term but in the short term.

Integration is the key to the agents. Individual usage doesn't help AI much because it is confined to the domain of that individual.

[−] anishgupta 51d ago
Building happens because of the dopamine from the coding agents more than because of the problem getting solved. The GitHub contribution graph is rigged, because a higher number of commits doesn't make you a better engineer. We needed this blog, ty.
[−] adamtaylor_13 51d ago
Once again I appeal: who is shipping code they don't understand? Those who do so are creating the problem, not the coding agent.

I use agents all day, every single day. But I also push back, understand what was written, and ensure I read and understand everything I ship.

Does it slow me down? Uh, yup. You bet.

Yes, this article literally advocates for slowing the fuck down, but it also makes the coding agents out to be the problem - and they're not.