Reports of code's death are greatly exaggerated (stevekrouse.com)

by stevekrouse 447 comments 611 points

[−] lateforwork 55d ago
Chris Lattner, inventor of the Swift programming language, recently took a look at a compiler written entirely by Claude. Lattner found nothing innovative in the AI-generated code [1]. And this is why humans will be needed to advance the state of the art.

AI tends to accept conventional wisdom. Because of this, it struggles with genuine critical thinking and cannot independently advance the state of the art.

AI systems are trained on vast bodies of human work and generate answers near the center of existing thought. A human might occasionally step back and question conventional wisdom, but AI systems do not do this on their own. They align with consensus rather than challenge it. As a result, they cannot independently push knowledge forward. Humans can innovate with help from AI, but AI still requires human direction.

You can prod AI systems to think critically, but they tend to revert to the mean. When a conversation moves away from consensus thinking, you can feel the system pulling back toward the safe middle.

As Apple’s “Think Different” campaign in the late 90s put it: the people crazy enough to think they can change the world are the ones who do—the misfits, the rebels, the troublemakers, the round pegs in square holes, the ones who see things differently. AI is none of that. AI is a conformist. That is its strength, and that is its weakness.

[1] https://www.modular.com/blog/the-claude-c-compiler-what-it-r...

[−] elgertam 54d ago
You know where LLMs boost me the most? When I need to integrate a bunch of systems together, each with its own set of documentation. Instead of spending hours wiring two or three systems into mine with the proper OAuth scopes or SAML and so on, an LLM can get me working integrations in a short time. None of that is ever going to be innovative; it's purely an exercise in perseverance as an engineer to read through the docs and make guesses about the gaps. LLMs are just better at that.
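
The glue itself is never deep. As a sketch of the kind of thing I mean (every URL, id, and scope below is a placeholder, not any particular provider's values), an OAuth 2.0 client-credentials token fetch is basically:

```python
# Hypothetical endpoint/credentials; the shape of the flow is the point.
import requests

def get_token(token_url: str, client_id: str, client_secret: str, scope: str) -> str:
    resp = requests.post(
        token_url,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Every later call just carries the bearer token:
# requests.get(api_url, headers={"Authorization": f"Bearer {token}"})
```

Knowing which of the dozen documented knobs actually matter is the tedious part, and LLMs have effectively already read all the docs.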

I spend the rest of my time talking through my thoughts with AI, kind of like the proverbial rubber duck used for debugging, except it gives pretty thoughtful responses. In those cases, I'm writing less code but trying to capture the invariants and expected failure modes, and to find leaky abstractions before they happen. Then I can write the code, or give the AI good instructions about what I want to see, and it makes it happen.

I'm honestly not sure how a non-practitioner could have these kinds of conversations beyond a certain level of complexity.

[−] abcde666777 54d ago
It seems to be inevitable that with any new technology we go through a phase of super duper excitement about the possibilities, where we try to use it to the extreme, and through that process start to absorb what it actually is and isn't capable of.

The hype cycle's distasteful, of course, but I've accepted that this is how humans figure out what things are. Like a child with a new toy, we have to abuse it before we learn how to use it properly.

I think many of us sense and have sensed that the promises made of agentic programming smell too good to be true, owing to our own experiences as programmers and engineers. But experts in a domain are always the minority, so we have to understand that everyone else is going to have to reach the same intuition the hard way.

[−] pacman128 55d ago
In a chatbot-coding world, how do we ever progress to new technologies? The AI has been trained on numerous people's previous work. If there is no prior art for, say, a new language or framework, the AI models will struggle. How will the vast amounts of new training data they require ever be generated if there is not a critical mass of developers?

[−] evanmoran 54d ago
I’m writing a new type of CRDT that supports move/reorder/remove ops within a tree structure without tombstones. Claude Code is great at writing some of the code, but it keeps adding tombstones back to my remove ops because “research requires tombstones for correctness”.

This is true for the usual approach, but the whole reason I’m writing this CRDT is to avoid tombstones! Anyway, long story short, I did eventually convince Claude I was right, but to do it I basically had to write a structural proof showing clear ordering and forward progression in all cases. And even then, compaction tends to reset it. There are a lot of subtleties these systems don’t quite have yet.
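
For anyone curious about the shape of the idea, here is a rough sketch (a generic illustration, not my actual design): every op carries a totally ordered id, replicas replay ops in that order, so a remove can really delete the node, and a concurrent move of an already-removed node is simply dropped rather than resolved through tombstones. Cycle prevention and compaction, the genuinely hard parts, are elided.

```python
# Toy tombstone-free tree ops; convergence comes from deterministic replay.
from dataclasses import dataclass

@dataclass(order=True, frozen=True)
class OpId:
    lamport: int   # logical clock giving a total order
    replica: str   # tiebreaker between replicas

@dataclass(frozen=True)
class MoveOp:
    op_id: OpId
    node: str
    new_parent: str
    index: int     # target position among siblings

@dataclass(frozen=True)
class RemoveOp:
    op_id: OpId
    node: str      # physically deleted; no tombstone is kept

def apply_ops(children: dict[str, list[str]], ops: list[MoveOp | RemoveOp]) -> None:
    """Every replica sorts by OpId, so all replicas converge to the same tree."""
    def detach(node: str) -> None:
        for siblings in children.values():
            if node in siblings:
                siblings.remove(node)

    for op in sorted(ops, key=lambda o: o.op_id):
        if isinstance(op, RemoveOp):
            detach(op.node)
            children.pop(op.node, None)  # also drop its own child list
        else:
            # A move of a node that no longer exists is dropped, not tombstoned.
            if not any(op.node in s for s in children.values()):
                continue
            detach(op.node)
            children.setdefault(op.new_parent, []).insert(op.index, op.node)
```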

[−] GavinAnderegg 54d ago

> AI is getting better/faster/cheaper at incredible rates, but regardless of when, unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence. We call that point AGI.

I still don’t think this is certain. It’s telling that code generation is one of the few things these systems do extremely well. Translating between English and French isn’t that much different from translating between English and Python. These are both tasks where the most likely next token has a good shot of being correct. I’m still not sold that we should assume LLM-based tech will generalize well beyond that. Maybe some new tech will come along to augment or replace LLMs and get us there, who knows. Just because the line is going up quickly at the moment doesn’t mean it always will.

[−] idopmstuff 55d ago
I don't know that people are saying code is dead (or at least the ones who have even a vague understanding of AI's role) - more that humans are moving up a level of abstraction in their inputs. Rather than writing code, they can write specs in English and have AI write the code, much in the same way that humans moved from writing assembly to writing higher-level code.

But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.
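
A toy illustration (mine, not anything from the article): the few lines below are shorter and stricter than the English needed to pin down the same behavior, which would be something like "for each order line, multiply price by quantity, skip cancelled lines, sum the results, and round the total to two decimal places, rounding halves away from zero".

```python
from decimal import Decimal, ROUND_HALF_UP

def order_total(lines: list[tuple[Decimal, int, bool]]) -> Decimal:
    # Each line is (price, quantity, cancelled).
    total = sum(
        (price * qty for price, qty, cancelled in lines if not cancelled),
        start=Decimal("0"),
    )
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```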

I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.

But much like with photo editing, there'll be a lot of cases where you just don't need a high enough level of specificity to use a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. Code's probably not the best, but that just doesn't matter for my case.

(There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)

[−] picafrost 55d ago
So much of society's intellectual talent has been allocated toward software. Many of our smartest are working on ad-tech, surveillance, or squeezing as much attention out of our neighbors as possible.

Maybe the current allocation of technical talent is a market failure and disruption to coding could be a forcing function for reallocation.

[−] smokedetector1 54d ago

> unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence

I find this flippancy about the greatest mystery in the universe extremely arrogant and incurious and wish it wouldn't be so prevalent.

[−] deadbabe 55d ago
My problem is that while I know “code” isn’t going away, everyone seems to believe it is, and that’s influencing how we work.

I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.

[−] rvz 55d ago
From "code" to "no-code" to "vibe coding" and back to "code".

What you are seeing here is that many attempted to take shortcuts to building production-grade, maintainable software with AI, and are now realizing they built their software on terrible architecture, only to throw it away and rewrite it, with no one truly understanding the code or able to explain it.

We have a term for that already and it is called "comprehension debt". [0]

With the rise of over-reliance on agents, you will see "engineers" who are unable to explain technical decisions and who admit to having zero knowledge of what the agent has done.

This is exactly what is happening to engineers at AWS, with Kiro causing outages [1] and engineers now being required to manually review AI changes [2] (which slows them down even with AI).

[0] https://addyosmani.com/blog/comprehension-debt/

[1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...

[2] https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77...

[−] gedy 55d ago
When I started my professional life in the 90s, we used Visual J++ (Java), and I remember all this damn code it generated to do UIs...

I remember being aghast at all the incomprehensible code and "do not modify" comments - and also at some of the devs who were like "isn't this great?".

I remember bailing out asap to another company where we wrote Java Swing, and I was so happy we could write UIs directly, with a lot less code to understand. I'm feeling the same vibe these days with the "isn't it great?". Not really!

[−] Waterluvian 54d ago
I feel pretty strongly about a set of somewhat at-odds thoughts:

- in a non-hobby setting, code is a liability

- I want to solve problems, not write code

- I love writing code as a hobby.

- being paid to do my hobby professionally is amazing.

- I love the idea of the Star Trek Ship’s Computer. To just ask for things and for it to do the work. It sometimes feels like we’re very close.

[−] ihodes 54d ago
I agree that a programming language can be a better (denser, more precise) encapsulator of intent than natural language. But the converse is more often true: natural language is a denser and more precise encapsulator of intent than a programming language.

I think there's some irony in Russell's quote being used this way. My intent will often be less clear to a reader once encoded in a language bound inextricably to a machine's execution context.

Good abstraction meaningfully whittles away at this mismatch, and DSLs in powerful languages (like ML-family and lisp-family languages) have often mirrored natural(ish) language. Observe that programming languages themselves have natural language specifications that are meaningfully more dense than their implementations, and often govern multiple implementations.

Code isn't just code. Some code encapsulates intent in a meaningfully information and meaning-dense way: that code is indeed poetry, and perhaps the best representation of intent available. Some code, like nearly every line of the code that backs your server vs client time example, is an implementation detail. The Electric Clojure version is a far better encapsulation of intent (https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...). A natural language version, executed in the context of a program with an existing client server architecture, is likely best: "show a live updated version of the servers' unix epoch timestamp and the client's, and below that show the skew between them."
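
For contrast, even a minimal plain-Python version of that sentence (with a hypothetical /epoch endpoint standing in for the server side) is mostly transport detail rather than intent:

```python
import time
import urllib.request

def show_skew(url: str = "https://example.com/epoch") -> None:
    # Fetch the server's unix timestamp (plain-text body, by assumption).
    server = float(urllib.request.urlopen(url, timeout=5).read().decode())
    client = time.time()
    print(f"server: {server:.0f}  client: {client:.0f}  skew: {client - server:+.2f}s")
```

And that isn't even live-updating yet; the plumbing only grows from here.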

Given that we started with Russell, we could end with Wittgenstein's "Is it even always an advantage to replace an indistinct picture by a sharp one? Isn't the indistinct one often exactly what we need?"

[−] bluGill 55d ago
A week ago there was an article about Donald Knuth asking an AI to prove something then unproven, and it found the proof. I suppose it is possible that the great Knuth didn't know how to find this existing truth - but there is a reason we all doubted it (including me when I mentioned it there).

I have never written a C compiler, yet I would bet money that if you paid me to write one (it would take a few years at least), it wouldn't have any innovations, as the space is already well covered. Where mine differed from other compilers, it would more likely be because I did something stupid that someone who knows how to write a compiler wouldn't.

[−] irchans 54d ago
This morning a person posted a question to the Reddit group r/Mathematica (https://www.reddit.com/r/Mathematica/comments/1s1fin2/can_ho...).

I asked GPT to write code to address their question, and the code was quite acceptable, drawing the circle and finding the correct intersection point. It would have taken me about 40 minutes to write the code, so I would not have done it myself.

Currently, GPT is great for writing short programs. The results often have a bug or two that is easy to fix, but it's still much faster to have GPT write the code. This works fine for projects of less than 100 lines of code where you just want something that works.
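
For a flavor of the kind of task (the actual Reddit question is behind the link, so this is a generic stand-in): intersecting a unit circle with a line reduces to a single quadratic, and GPT reliably produces something like:

```python
import numpy as np
import matplotlib.pyplot as plt

# Circle x^2 + y^2 = 1 and line y = x - 0.5:
# substituting gives 2x^2 - x - 0.75 = 0.
xs = np.roots([2.0, -1.0, -0.75])
pts = [(x, x - 0.5) for x in xs]

theta = np.linspace(0, 2 * np.pi, 200)
plt.plot(np.cos(theta), np.sin(theta))   # the circle
plt.plot([-1.5, 1.5], [-2.0, 1.0])       # the line y = x - 0.5
plt.scatter(*zip(*pts), zorder=3)        # intersection points
plt.gca().set_aspect("equal")
plt.show()
```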

[−] randcraw 55d ago
Krouse points to a great article by Simon Willison, who proposes that the killer role for vibe coding (hopefully) will be to make code better, not just faster.

By generating prototypes based on different design models, each end product can be assessed against specific criteria like code readability, reliability, or fault tolerance, and then quickly and repeatedly revised to serve those ends better. No longer would the victory dance of vibe coding be simply "It ran!" or "Look how quickly I built it!".

[−] erichocean 55d ago

> If you know of any other snippet of code that can master all that complexity as beautifully, I'd love to see it.

Electric Clojure: https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...

[−] flitzofolov 55d ago
r0ml's third law states that: “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”

I believe the same pattern is inevitable for these higher-level abstractions and interfaces for generating computer instructions. The language used must ultimately conform to a rigid syntax and produce a deterministic result, a.k.a. "code".

Source: https://www.youtube.com/watch?v=h5fmhYc4U-Y

[−] tantalor 54d ago

> Nobody is out there claiming that ChatGPT is putting the great novelists or journalists out of jobs. We all know that's nonsense.

Of course they are talking about that!!

[−] jama211 54d ago
The biggest point everyone keeps missing is that a single code review takes your vibe-coded code from “terrifyingly dangerous” to “better than most people’s code” in one step.

We’re at a point where LLMs write great code, way better than my average coworkers used to, anyway. Of course, skipping expert review of that code would be as silly as not reviewing a coworker’s code; there might be security vulnerabilities in there, hardcoded API keys, etc. But once it’s been professionally reviewed, it’s just as safe as any code written by a human, and probably of a higher quality than most people write.

On HN there’s an argument I keep seeing go back and forth, where one side says “vibe coding is the worst thing ever” and the other says “AI is the second coming of Christ and we don’t need programmers”. I think the reason we have what appear to be such opposing views is that those views are actually really close to one another, and proper review is all that separates one from the other.

If you’re already an expert and you don’t vibe code most things and then carefully test and review after, you’re wasting the benefits of these machines. If you’re not an expert then you shouldn’t be employed in the first place, as the main thing people are employed for is responsibility, not output.

This has always been the way in everything. A foreperson gets paid more than a worker on a building site not because they build more than the worker, but because they’re responsible for more than the worker. This is the real reason why programmer jobs won’t go away in my opinion.

[−] koromak 54d ago
I desperately want this to be true, but at least in my sector, it isn't. You still need talented and knowledgeable programmers, but they don't do very much programming. It's all code review, infrastructure, devops.

At least for a small business, users are catching on that they can build a dirty app that gets them what they specifically want, instead of relying on some paid software to give everyone a little bit of what they want. Partially this suggests I'm just in the wrong sector, but it is absolutely happening.

I don't think this matters to Google or Amazon; they can't be replaced. But small businesses are a different story.

And the result of all this? We need to heavily rely on AI, so that we can outpace individual users in delivering what they want. I hate it, I didn't give the order, but I do see the writing on the wall. This workflow is miserable, it sucks the fun out of the job, but unfortunately it really is faster. And small businesses rely on the income coming in next year, not in 5 years.

As a side note, I also think users are becoming extremely used to having a chatbot do everything for them. Every site is going to have one, and apps that don't will fall behind.

I'd like to be on a different multiverse timeline honestly

[−] cat-turner 54d ago
I think what people don't realize is that the rent and the mortgage aren't paid through art. They're paid through boring, important work that is mostly uncreative and requires precision. A lot of people don't really care about doing anything innovative; they just want to do something to get money to sustain their life. The same goes for businesses.

What a lot of people don't realize about software is that it is one of the few industries that offered a means to greatly improve your standard of living without requiring a formal degree.

AI just one-shotted that kind of work. There will always be a place for humans to do creative things, but there won't be a place for average people to make a living.

Example - look at animated movies. In the past, studios hired hundreds of people to draw the movie. Now it's nearly all automated with software.

The need for human artistic ability in commercial work is nearly gone, left only for nice-to-have products.

In 5 years we will see the same for software. It will be much faster than what happened to art because software is already in nearly every aspect of life.

[−] 01100011 54d ago
I don't expect AI to replace me anytime soon, but...

AI is already letting me care less about the languages I use and focus more on the algorithms. AI helps me write tests. AI suggests improvements and catches bugs before compiling. AI writes helper scripts/tools for me. All of these things are good enough for me to accept paying a few hundred dollars every month, although I don't have to, because my employer already does that for me.

6 months ago I was arguing that AI wasn't very good and code was more precise than english for specifying solutions. The first part is not true anymore for many things I care about. The second is still true but for many things I care about it doesn't matter.

I'm getting tired of articles that try to tell me what to think about AI. "AI is great and will replace all programmers!"... "AI sucks and will ruin your brain and codebase!"... both of these are tired and meaningless arguments.

[−] eichin 54d ago
Got flashbacks to 1999 from some of those charts. I had a pair of design charts (partly for arguments, partly for onboarding) that were 17 nodes each, with a lot of lines. (A coworker snuck in some extra nodes and an arrow labeled "troops move through Austria", and it was a while before anyone other than me noticed - yeah, that kind of chart.) The lesson is not about design complexity - the design was pretty tight for what it did, even if you go back and read the patents - it's about using abstraction to manage explanation complexity: you can break up the presentation more sanely than the code-on-disk actually is, you just have to stop and think about it (and have a bit more empathy for the people you're presenting to than, well, anyone in 1999 actually had :-)

[−] rglover 55d ago
It's only dead to those who are ignorant of what it takes to build and run real systems that don't tip over all the time (or leak data, embroil you in extortion, etc.). That will piss some people off, but it's worth considering if you don't want to perma-railroad yourself long-term. Many seem to be so blinded by the glitz, glamour, and dollar signs that they don't realize they're actively destroying their future prospects/reputation by getting all emo about a non-deterministic printer.

Valuable? Yep. World changing? Absolutely. The domain of people who haven't the slightest clue what they're doing? Not unless you enjoy lighting money on fire.

[−] z3t4 54d ago
Programming is an abstraction over the machine code that describes what the computer should do. You could in theory program in prose, meaning the description of the program compiles into an app.

[−] cratermoon 55d ago
Yet again we can pull out Edsger W. Dijkstra's 1978 article, "On the foolishness of "natural language programming""

"In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. this would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt."

[−] OutOfHere 53d ago
It's a horrific article that starts off wrong by equating specification with code. In reality, the value of a specification comes from substantial abstraction over details its author doesn't care about. The goodness of a spec lies not just in what is defined but also in what is left out. Code, on the other hand, leaves nothing out, unless you get into compiler-level optimizations. The two are not the same.

[−] standarditem 54d ago
I've enjoyed using Claude to essentially build my own APIs at whatever level of complexity I'm comfortable with at the time. I can use lower-level APIs for graphics (for example), and Claude can abstract the boilerplate into my own personal API. Then when performance gets to be an issue, I can dig into the abstractions Claude handled for me and start to pick apart the slow-downs.
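
For example, the personal API ends up looking something like this toy sketch (all names made up, standing in for whatever low-level layer is underneath):

```python
# Boilerplate lives in one wrapper I can crack open later to profile.
class Canvas:
    def __init__(self, width: int, height: int) -> None:
        self.width, self.height = width, height
        self.ops: list[tuple] = []  # deferred draw calls

    def rect(self, x: int, y: int, w: int, h: int, color: str = "#000") -> None:
        self.ops.append(("rect", x, y, w, h, color))

    def flush(self) -> None:
        # The low-level calls (buffers, state, etc.) would go here; when
        # performance becomes an issue, this is the one place to dig into.
        for op in self.ops:
            print("draw:", op)  # placeholder for the real backend
        self.ops.clear()

canvas = Canvas(640, 480)
canvas.rect(10, 10, 100, 50, "#f00")
canvas.flush()
```
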
[−] jmull 53d ago

> unless you believe in magic, it's only a matter of time until we reach the point at which machine intelligence is indistinguishable from human intelligence.

I'm sure it will be possible, but it may well be very expensive. If it is, why would anyone spend the resources?

AI evolution will certainly follow the money, which is not necessarily the same as the path to AGI.

[−] sathish316 54d ago
Some of the good quotes or analogies in this article:

1 - “It seems like 99% of society has agreed that code is dead. …It's the same as thinking storytelling is dead at the invention of the printing press. No you dummies, code is just getting started. AI is going to be such a boon for coding.“

2 - Another one comparing writing and coding, and explaining how Code is both a means and an end to manage complexity:

“we're confused because we (incorrectly) think that code is only for the software it produces. It's only partly about that. The code itself is also a centrally important artifact… I think this is a lot clearer if you make an analogy to writing. Isn't it fucking telling that nobody is talking about "vibe writing"?”

[−] _pdp_ 55d ago
Remember Deep Thought, the greatest computer ever built that spent 7.5 million years computing the Answer to the Ultimate Question of Life, the Universe, and Everything? The answer was 42, perfectly correct, utterly useless because nobody understood the question they were asking.

That's what happens when you hand everything to a machine without understanding the problem yourself.

AI can give you correct answers all day long, but if you don't understand what you're building, you'll end up just like the people of Magrathea, staring at 42 and wondering what to do with it.

True understanding is indistinguishable from doing.

[−] cratermoon 55d ago
I can't tell if the author's "when we get AGI" is sarcasm or genuine.

[−] pier25 54d ago

> AI is getting better/faster/cheaper at incredible rates

Maybe, but all technologies have limits.

It's irrational to believe any single technology can be improved forever.

[−] ssowonny 54d ago
The moment your vibe-coded bot hits edge cases in message threading, you need someone who actually understands the abstraction layer.

[−] soumyaskartha 55d ago
Every few years something is going to kill code, and here we are. The job changes; it does not disappear.

[−] _the_inflator 54d ago
The people who deliver funeral addresses for "coders" or developers tend to miss the point. If devs are disposable now, then the only question is: who is next?

I know of quite a lot of business people who are kind of frolicking over the idea that the former behemoth got humbled so massively. From eating the world to unemployed in no time.

This delusion is itself telling, and it perpetuates the clinging to a sinking ship - which is still the elephant in the room.

If something as complex or even complicated as app development including SDLCs etc. could simply be prompted now, then AI will eat anything less complicated alive.

So people had better start considering the implications and ramifications of their statements. Either we are all doomed, part of a Darwinian system that will weed out the unnecessary parts, or we acknowledge that a traditional profession such as app development is fundamentally changing.

This is something that has happened before and constantly does. Otherwise we would not use DSLs or Java.

But the fundamentals still work and therefore you need abstractions.

[−] ljlolel 54d ago
Code will be replaced by EnglishScript running on ClaudeVM https://jperla.com/blog/the-future-is-claudevm

[−] bryanrasmussen 54d ago
Should note that Mark Twain died 13 years after he announced that reports of his death were an exaggeration.

We may expect code to be killed off in AI's troublesome teen years.

[−] gignico 54d ago
I don’t know if someone has said it already, but when Steve Jobs used this famous quote (“reports of my death are greatly exaggerated”), he died maybe just a couple of years later.

Hope this does not happen to code :)

[−] ggamezar 54d ago
I remember moving to C++ from Python when I was a junior. After getting deeper into C++, I started questioning whether Python programmers are really programmers, or what we now call vibe coders. With a bit of experience, I realised that Python just operates on a different layer of abstraction and lets you do more, much faster, in skilful hands. On the other hand, an uneducated person will just generate what we now call slop. For some reason, this parallel resonates with the current state of affairs.

[−] peter_retief 54d ago
The cartoon told me everything...