I have come to the conclusion that many people are going to live through this AI period pretty much like the five stages of grief: denial that it can work, anger at the new robber barons, bargaining that yeah it kinda works but not really well enough, depression and a catastrophic world view, and finally acceptance of the new normality.
What's the 'new normality' in the fifth stage? Do you think you'll start to believe it actually works 100%? Or that you won't change your assessment that it works only sometimes, but maybe pulling the lever on the slot machine repeatedly is better/more efficient than doing it yourself?
No, this is still "bargaining/negotiating" phase thinking. After this comes depression, when you see that for your use cases the code quality and security audits are actually very good.
People will accept it as a way to build good software.
Many are still in denial that you can do work that is as good as before, quicker, using coding agents. A lot of people think there has to be some catch, but there really doesn’t have to be. If you continue to put effort in, reviewing results, caring about testing and architecture, working to understand your codebase, then you can do better work. You can think through more edge cases, run more experiments, and iterate faster to a better end result.
I'm kind of excited about that, though. What I've come to realize is that automated testing, linting, and good review tools are more important than ever, so we'll probably see some good developments in these areas. This helps both humans and AIs, so it's a win-win. I hope.
> it's looking like assessment and evaluation are massive bottlenecks.
So I think LLMs have moved the effort that used to be spent on the fun part (coding) into the boring part (assessment and evaluation), which is also now a lot bigger.
You could build (code, if you really want) tools to ease the review. Of course we already have many tools for this, but with LLMs you can use their stochastic behavior to discover unexpected problems (something a deterministic solution never can). The author also touches on this when discussing the security review (something I rarely did in the past but do now, and it has really improved the security posture of my systems).
You can also set up way more elaborate verification systems. Don't just do a static analysis of the code, but actually deploy it and let the LLM hammer at it with all kinds of creative paths. Then let it debug why it's broken. It's relentless at debugging: I've found issues in external tools I normally would've let go (maybe created an issue for) that I can now debug, and even propose a fix for, without much effort on my side.
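To make that concrete, here's a minimal sketch of such a loop in Python. It's just an illustration of the idea: ask_agent() is a stand-in for whatever agent CLI or API you actually use, and the make target and test paths are made up.

    import subprocess

    def ask_agent(prompt: str) -> str:
        # Placeholder: shell out to whatever coding agent you actually use.
        raise NotImplementedError

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True)

    for attempt in range(5):
        deploy = run(["make", "deploy-staging"])    # assumed deploy target
        probe = run(["pytest", "tests/e2e", "-x"])  # or any smoke/fuzz suite
        if deploy.returncode == 0 and probe.returncode == 0:
            break                                   # verified: deploy and probes pass
        # Feed the raw failure output back to the agent and let it iterate.
        ask_agent("The staging deploy or e2e probe failed. Diagnose and fix:\n"
                  + deploy.stderr + probe.stdout)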
So yeah, I agree that the boring part has become the more important part right now (speccing well and letting it build what you want is pretty much solved), but let's then automate that. Because if anything, that's what I love about this job: I get to automate work, so that my users (often myself) can be lazy and focus on stuff that's more valuable/enjoyable/satisfying.
When writing banal code, you can just ask it to write unit tests for certain conditions and it'll do a pretty good job. The cutting-edge tools will automatically run and iterate on the unit tests when they don't pass. You can even ask the agent to set up TDD.
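For example, the sort of edge-case tests you might ask it to write and then iterate on until they pass (pytest here; parse_price() and the pricing module are hypothetical):

    import pytest
    from pricing import parse_price  # hypothetical module under test

    def test_parses_plain_amount():
        assert parse_price("19.99") == 1999  # result in cents

    def test_strips_currency_symbol():
        assert parse_price("$19.99") == 1999

    def test_rejects_negative_amounts():
        with pytest.raises(ValueError):
            parse_price("-5.00")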
Cars removed the fun part (raising and riding horses) and automatic transmissions removed the fun part (manual shifting), but for most people it's just a way to get from point A to B.
I'm not sure, but I think it boils down to accepting that some things we were attached to are no longer important or normal (not just software building).
But specifically to your examples, the latter: I think the "brute force the program" approach will be more common than doing things manually in many cases (not all! I'm still a believer in people!).
Edit: Well, I wrote a bad blog post on this some time ago, I might as well share it. I think accepting means engaging with the change rather than ignoring it.
https://riffraff.info/2026/03/my-2c-on-the-ai-genai-llm-bubb...
It doesn't have to work 100% of the time to be ubiquitous! This is just the strangest point of view. People don't work 100% of the time either, and they wrote all the code we had until a couple of years ago. How did we deal with that? Many different kinds of checks and mitigations. And sometimes we get bugs in prod and we fix them.
The new normal will be: everything will get worse and far more unstable (both in terms of UI/UX and reliability), and many of us will lose our jobs. Also, the next generation of programmers will have a shallower understanding of the tools they use.
AI doesn't need to outrun the bear; it only needs to outrun you.
Once the tools outperform humans at the tasks to which they were applied (and they will), you don't need to be involved at all, except to give direction and final acceptance. The tools will write, and verify, the code at each step.
> Once the tools outperform humans at the tasks to which they were applied (and they will)
I don't get why some people are so convinced that this is inevitable. It's possible, yes, but it very well might be the case that models cannot be stopped from randomly doing stupid things, cannot be made more trustworthy, cannot be made more verifiable, and will have to be relegated to the role of brainstorming aids.
I think they meant that people insisting total genAI takeover of coding is inevitable are likely people who stand to profit greatly by everyone giving up and using the unmind machines for everything.
The original post is an example of how. Every programmer is slowly discovering, for their own use cases, that the agent can actually do it. This happens to an individual when they give it a shot without reservation.
Large scale AI datacenters require a very expensive physical supply chain that includes cheap land, water, and electricity, political leverage, human architects and builders to build datacenters, and massive capital investments. Yes, AI will outperform humans, but at some point it may become cheaper to hire a human programmer.
> I don't get why some people are so convinced that this is inevitable.
Someone once said that it is hard to make a man understand things if their profit depends on them not understanding it...
We don't have to accept things.
I hear you, but let me point out that Ned Ludd didn't stop the industrial revolution.
I think in the foreseeable future we have open models running on commonly available hardware, and that is not a change that can be stopped (and arguably it's the commons getting back their own value). What we can do is fight for proper taxation, for compensatory fees, for regulation that limits plagiarism, for regulation of the most extreme externalities.
But it makes no sense, to me, to fight the technology tout court.
My existence is defined not by what I adopted but by what I sabotaged or refused to deal with. 30 years in, I haven't made a mistake, and I don't think I am making one here. The positive bets I made have been spot on as well. I think I have a handle on what works for society and humanity, at least.
When I say AI, I mean specifically LLMs. There isn't a single future position where all the risks are suitably managed, there is a return on investment, and there is not a net loss to society. Faith, hope, lies, fraud, and inflated expectations don't cut it, and that is what the whole shebang is built on. On top of that, we are entering a time of serious geopolitical instability. Creating more dependencies on large amounts of capital and regional control is totally unacceptable and puts us all at risk.
My integrity is worth more than sucking this teat.
Or is it limited to refusal to use LLMs, which is a strategy, but more like becoming a hobbyist programmer then?
“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”
— George Bernard Shaw
The antidote to runaway hype is for someone to push back, not to just relent and accept your fate. Who cares about affording to. We need more people with ideals stronger than the desire to make a lot of money.
> yeah it kinda works but not really well enough
I mean, at some point it was true.
I remember that around 2023, when I first encountered colleagues trying to use ChatGPT for coding, I thought "by the time you are done with your back-and-forth to correct all the errors, I would have already written this code manually".
That was true then, but not anymore.
Less than six months ago, I would say about 50% of HN was at the denial phase, saying it's just a next-token predictor and that it doesn't actually understand code.
To all of you I can only say: you were utterly wrong, and I hope you realize how unreliable your judgements are. Remember, I'm saying this to roughly 50% of HN, an internet community that's supposedly more rational and intelligent than other places on the internet. For this community to be so wrong about something so obvious... that's saying something.
> I hated writing software this way. Forget the output for a moment; the process was excruciating. Most of my time was spent reading proposed code changes and pressing the 1 key to accept the changes, which I almost always did. [...]
That's why they hated it. Approving every change is the most frustrating way of using these tools.
I genuinely think that one of the biggest differences between people who enjoy coding agents and people who hate them is whether or not they run in YOLO mode (aka dangerously-skip-permissions). YOLO mode feels like a whole different product.
I get the desire not to do that because you want to verify everything they do, but you can still do that by reviewing the code later on without the pain of step-by-step approvals.
The door is really opening for programmers who like getting stuff made, and really closing for those who like making stuff at a low level.
No need to get out the chisel to carve those intricate designs in your chair back. We can just get that made by pressing "1". Sorry, those of you who took pride in chiseling.
I'm definitely in the latter group. I can and do use AI to build things, but it's pretty dull for me.
I've spent hours and hours putting together a TUI window system by hand recently (on my own time) that Claude could have made in minutes. I rewrote it a number of times, learning new things each time. There's a dual goal there: learn things and make a thing.
Times change, certainly. Glad to be in semi-retirement where I still get to hand carve software.
I recently spoke to a very junior developer (he's still in school) about his hobby projects.
He doesn't have our baggage. He doesn't feel the anxiety the purists feel.
He just pipes all errors right back into his task flow. He does periodic refactoring. He tests everything and also refactors the tests. He does automated penetration testing.
There are great tools for everything he does and they are improving at breakneck speeds.
He creates stuff that is levels above what I ever made, and I spent years building mine.
I accepted months ago: adapt or die.
The author has arrived at resentful acceptance of the model's power (e.g. "negative externalities", "condemn those who choose").
But the next step for many is championing acceptance, e.g. "that the same kind of success is available outside the world of highly structured language". It actually is visible when you engage with people. I'm going through this transition myself.
They really shouldn't have read all the changes individually. What you gotta do is set up your version control properly, so these changes are separated from the known-good code, and then review the whole set of changes in an IDE that highlights them, like a proto-PR. That's far, far less taxing, since you get the whole picture.
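Roughly like this (git plus a placeholder agent call; the branch name and the run_agent_unattended() helper are made up):

    import subprocess

    def sh(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    def run_agent_unattended(task: str) -> None:
        # Placeholder: invoke your agent of choice in unattended mode here.
        raise NotImplementedError

    sh("git", "switch", "-c", "agent/feature-x")   # isolate the agent's work on a branch
    run_agent_unattended("implement feature X")
    sh("git", "diff", "main...agent/feature-x")    # then review the whole change set at once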
Giving partial credit to Rust, the language, for shipping production code because you "hate" the experience of agent-driven development so much is an amazing move. I didn't think we could push things forward so fast. I guess Rust is just that powerful.
> I have no reason to expect this technology can succeed at the same level in law, medicine, or any other highly human, highly subjective occupation.
I mean, if anything, I would expect it to help bring structure to medicine, which is an often sloppy profession killing somewhere between tens of thousands and hundreds of thousands of people a year through mistakes and out of date practices.
Medicine is currently very subjective. As a scientific field in the realm of the physical sciences, it shouldn't be.
These takes are growing increasingly tiresome, I have to admit. They are pretty much all just tacit admissions of some kind of skill issue with this new class of tool, but presented with a sheen of moral outrage. I don’t think anyone’s buying it anymore. Figure it out.
I'm still at the bargaining phase, personally.