Improving developer skills is not valuable to your company. They don't tell a customer how many person-hours of engineering talent improvement their contract is responsible for. They just want a solved problem. Some companies comprehend how short-sighted this is and invest in professional development in one way or another. They want better engineers so that their operations run better. It's an investment and arguably a smart one.
Adoption of AI at a FOMO corporate pace doesn't seem to include this consideration. They largely want your skills to atrophy as you instead beep boop the AI machine to do the job (arguably) faster. I think they're wrong and silly and any time they try to justify it, the words don't reconcile into a rational series of statements. But they're the boss and they can do the thing if they want to. At work I either do what they want in exchange for money or I say no thank you and walk away.
Which led me to the conclusion I'm currently at: I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home.
This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
I saw something similar in ML when neural nets came around. The whole “stack moar layerz” thing is a meme, but it was a real sentiment about newer entrants into the field not learning anything about ML theory or best practices. As it turns out, neural nets “won” and using them effectively required development and acquisition of some new domain knowledge and best practices. And the kids are ok. The people who scoffed at neural nets and never got up to speed, not so much.
Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.
> what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
Well, it's not. There's a small moat around that right now because the UX is still being ironed out, but in a short while “able to use coding agents” will be the new “able to use Excel”.
What will remain are the things that already differentiate a good developer from a bad one:
- Able to review the output of coding agents
- Able to guide the architecture of an application
- Able to guide the architecture of a system
- Able to minimize vulnerabilities
- Able to ensure test quality
- Able to interpret business needs
- Able to communicate with stakeholders
> in a short while “able to use coding agents” will be the new “able to use Excel”.
Yeah, but there’s “able to use Excel”, and then there’s “able to use Excel.”
There is a vast skill gap between those with basic Excel, those who are proficient, and those who have mastered it.
As an intermittent user of Excel I fall somewhere in the middle, although I’m probably a master of knowing how to find out how to do what I need with Excel.
The same will be true for agentic development (which is more than just coding).
And the last two are much more important.
Don't forget that most decision makers and people with capital are normies, they don't live in a tech bubble.
> Able to review the code output of coding agents

That probably won't be necessary in a few years.

If we know the outcome of that code, such as whether it caused bugs or data corruption or a crappy UX or tech debt -- which is potentially available in subsequent PR commit messages -- it's still valuable training data.
Probably even more valuable than code that just worked, because evidently we have enough of that and AI code still has issues.
I've seen this line of thought put out there many times, and I've been thinking: why do people do anything at all? What's the point? If no one at all is even reviewing the output of coding agents, genuinely, what are we doing as a society?
I fail to see how we transition society into a positive future without supplying means of verifying systemic integrity. There is a reason that Upton Sinclair became famous: wayward incentives behind closed doors generally cause subpar standards, which cause subpar results. If the FDA didn't exist, or they didn't "review the output", society would be materially worse off. If the whole pitch for AI ends with "and no one will even need to check anything" I find that highly convenient for the AI industry.
You could, e.g., write specs and review only high-level types, plus have deterministic validation that no type escapes or "unsafe" hatches were used; or instruct another agent to make adversarial black-box attempts to break the functionality of the primary artifact (which is really just to say "perform QA").
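A minimal sketch of what such a deterministic gate could look like (the src/generated directory is an invented convention, nothing standard):

    // Hedged sketch: the "no escape hatches" check as a dumb, reviewable
    // program rather than another model's opinion. Fails the build if any
    // `unsafe` block sneaks into agent-written modules.
    use std::fs;

    fn main() {
        for entry in fs::read_dir("src/generated").expect("missing dir") {
            let path = entry.expect("io error").path();
            if path.extension().and_then(|e| e.to_str()) == Some("rs") {
                let text = fs::read_to_string(&path).expect("unreadable file");
                assert!(!text.contains("unsafe"), "escape hatch in {:?}", path);
            }
        }
    }

(A real check would parse the code rather than grep it, but the point is determinism.)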
As a simple use-case, I've found LLMs to be much better than me at macro programming, and I don't really need to care about what it does because ultimately the constraint is just that it bends the syntax I have into the syntax I want, and things compile. The details are basically irrelevant.
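For the record, the kind of thing meant here (a toy illustration with invented names, not anyone's production macro):

    // A declarative macro that bends compact syntax into the verbose match
    // I actually want. If the expansion is wrong, it fails to compile,
    // which is why the details barely matter for this use-case.
    macro_rules! lookup {
        ($key:expr, { $($k:literal => $v:expr),* $(,)? }) => {
            match $key {
                $($k => Some($v),)*
                _ => None,
            }
        };
    }

    fn main() {
        let code = 404;
        let msg = lookup!(code, { 200 => "ok", 404 => "not found" });
        println!("{:?}", msg); // Some("not found")
    }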
Code quality will impact the effectiveness of AI. Less code to read and change in subsequent changes is still useful. For a time I became more of a paper architect and stopped coding, and I realized I wasn't able to do sufficient code reviews anymore because I lacked context. I went back into the code at some point, realized the mess my team was making, and spent a long while cleaning it up. This improved the productivity of everyone involved. I expect AI to fall into a similar predicament: without first-hand knowledge of the implementation details, we won't know about the problems we need to tell the AI to address. There are also many systems which are constrained in terms of memory and compute, and more code likely puts you up against those limits.
I don't disagree that code quality is currently more important than it's ever been (to get the most out of the tools). I expect that quality will increase though as people refine either training or instructions. A couple of months ago, with some coding guidelines I wrote, I was able to get much better output (well factored, aligned to business logic) that I'm generally happy-ish with. It's possible that newer models don't even need that, but they work well enough with it that I haven't touched those instructions since.
I mean, sure, for programming macros. Or programming quick scripts, or type-safe or memory-safe programs. Or web frontends, or a11y, or whatever tasks for which people are using AI.
But if you peel back that layer to the point where you are no longer discussing the code, and just saying "code X that does Y"... how big is X going to get without verifying it? This is a basic, fundamental question that gets deflected by evaluating each case where AI is useful.
When you stop being specific about what the AI is doing, and switch to the general case, there is a massive and obvious gap that nobody is adequately addressing. I don't think anyone would say that details are irrelevant in life-threatening scenarios, and yet no one is acknowledging where the logical end of this line of thinking goes.
I mean, the promise of perfect AI and perfect robotics is that humans would no longer have to do anything. They could live a life of leisure. Unfortunately, we're going to get these perfect AI and perfect robotics before we transition socially into a post-scarcity, post-ownership society. So what will happen is that ownership of the AI and robots will be consolidated into the hands of the few, the vast rest of us will have nothing economically relevant to do, and we'll probably just subsist or die.
We're already seeing this today. Every year, thousands of people are becoming essentially irrelevant to the economy. They don't own much, they don't invest much, they don't spend much money, they don't make much money, and they are invisible to economics.
> They don't own much, they don't invest much, they don't spend much money, they don't make much money, and they are invisible to economics.
Indeed. Sometimes I think the so-called “lower classes” end up functioning more like crops to be farmed by the rich. Think, dollar stores that sell tiny packages of things at worse unit cost, checking account fees, rent-a-center, 15% interest auto loans and store credit cards with 30% interest…
I've definitely felt this kind of way in the past. But these days I'm not so sure.
Setting aside the AI point, the idea of people becoming essentially irrelevant to the economy is an indictment of society. But I'd argue that the indictment is really of what constitutes measurement in the economy, not of society itself, or of technology.
Sure, someone may not spend much money or produce much money, but if they produce scientific research or cultural work that is intangibly valuable, it is still valuable regardless of whether economists can point to a metric or not. Same goes for the countless contributions to our world from nature: what is the economic value of a garden snake or a beetle? A meaningless question when the economy can only see things in dollars.
They will still be turning out the same problematic code in a few years that they do now, because they aren’t intelligent and won’t be intelligent unless there is a fundamental paradigm shift in how an LLM works.
I use LLMs with best practices to program professionally in an enterprise every day, and even Opus 4.6 still consistently makes some of the dumbest architectural decisions, even with full context, complete access to the codebase and me asking very specific questions that should point it in the right direction.
I keep hearing that they “aren’t intelligent” and “spit out crap code”. That’s not been my experience. LLMs prevented and also caught intricate concurrency issues that would have taken me a long time.
I just went “hmmm, nice” and went on. The problem there is that I didn’t get that sense of accomplishment I crave and I really didn’t learn anything. Those are “me” problems but I think programmers are collectively grappling with this.
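To make "intricate concurrency issue" concrete, here's a hypothetical example of the genre (not the actual code from that review): a time-of-check/time-of-use race where each lock is correct on its own, and the gap between them is the bug.

    use std::collections::HashMap;
    use std::sync::Mutex;

    fn register(users: &Mutex<HashMap<String, u32>>, name: &str, id: u32) -> bool {
        if users.lock().unwrap().contains_key(name) {
            return false; // guard dropped here, lock released
        }
        // Another thread can insert `name` in this gap, so two callers
        // can both believe they registered first.
        users.lock().unwrap().insert(name.to_string(), id);
        true
    }

    fn main() {
        let users = Mutex::new(HashMap::new());
        assert!(register(&users, "alice", 1));
        assert!(!register(&users, "alice", 2));
        // The fix: hold one guard across both the check and the insert.
    }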
They are not intelligent. Full stop. Very sophisticated next-word prediction is not intelligence. LLMs don’t comprehend or understand things. They don’t think or feel. That’s just not how they work.
That said, very sophisticated next word predictors can and sometimes do write good code. It’s amazing some of the things they get right and then can turn around and make the weirdest dumbest mistakes.
It’s a tool. Sometimes it’s the right tool, sometimes it’s not.
None of those things will be necessary if progress continues as it has. The AI will do all of that. In fact it will generate software that uses already proven architectures (instead of inventing new ones for every project as human developers like to do). The testing has already been done: they work. There are no vulnerabilities. They are able to communicate with stakeholders (management) using their native language, not the technobabble that human developers like to use, so they understand the business needs natively.
If this is the case then none of us will have jobs; we will be completely useless.
I think, most likely, you'll still need developers in the mix to make sure the development is going right. You can't just have only business people, because they have no way to gauge if the AI is making the right decisions in regards to technical requirements. So even if the AI DOES get as good as you're saying, they wouldn't know that without developers.
For some definition of work, yes, not every definition. Their product is not without flaw, leaving room for improvement, and room for improvement by more than only other AI.
> There are no vulnerabilities
That's just not true. There are loads of vulnerabilities, just as there are plenty in human-written code. Try it: point an AI looking for vulns at the output of an AI that's been through the highest-intensity and scrutiny workflow, even code that has already been AI-reviewed for vulnerabilities.
> This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
If it does go as far that way as many seem to expect (or, indeed, want), then most people will be able to do it. There will be a dearth of jobs and many people wanting them, so it'll be a race to the bottom for all but the lucky few: development will become a minimum wage job, or so close to that it'll make no odds. If I'm earning minimum wage it isn't going to be sat on my own doing someone else's prompting; I'll find a job that involves not sitting alone in front of a screen and reclaim programming for hobby time (or just stop doing it at all, I have other hobbies to divide my time between). I dislike (effectively) being a remote worker already, but put up with it for the salary; if the salary goes because “AI” turns it into a race-to-the-bottom job then I'm off.
Conversely: if that doesn't happen then I can continue to do what I want, which is program and not instruct someone else (be it a person I manage or an artificial construct) to program. I'm happy to accept the aid of tools for automation and such, I've written a few of my own, but there is a line past which my interest will just vanish.
What the people excited about the race to the bottom scenario don’t seem to understand is that it doesn’t mean low skill people will suddenly be more employable, it means fewer high skill people will be employable.
No one will be eager to employ “AI-natives” who don’t understand what the LLM is pumping out; they’ll just keep the seasoned engineers who can manage and tame the output properly. Similarly, no one is going to hire a bunch of prompt engineers to replace their accountants; they’ll hire fewer seasoned accountants who can confidently review LLM output.
And those that do have not yet understood what will happen when those seasoned workers retire and there are no juniors or mid-levels who could grow into the role, because they were replaced by AI.
> What the people excited about the race to the bottom scenario
I'm not excited about it. I just see it as a logical consequence if what people are predicting comes to pass, and I've thought about how I will deal with that.
The endgame in programming is reducing complexity before the codebase becomes impossible to reason about. This is not a solved problem, and most codebases the LLMs were trained on are either just before that phase transition or well past it.
Complexity is not just a matter of reducing the complexity of the code, it's also a matter of reducing the complexity of the problem. A programmer can do the former alone with the code, but the latter can only be done during a frank discussion with stakeholders.
A vibe coder using an LLM to generate complexity will not be able to tell which complexity to get rid of, and we don't have enough training data of well-curated complexity for LLMs to figure it out yet.
No kidding. So far the complexity introduced by LLM-generated code in my current codebase has taken far more time to deal with than the hand-written code.
Overall, we are trying to "silo" LLM-generated code into its own services with a well-defined interface so that the code can just be thrown away and regenerated (or rewritten by hand) because maintaining it is so difficult.
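As a sketch of what that silo boundary can look like (trait and names invented for illustration, not the actual service):

    // The hand-written side owns this narrow interface; the generated
    // service only has to satisfy it, so its internals can be thrown
    // away and regenerated wholesale without touching callers.
    trait UploadScrubber {
        fn scrub(&self, bytes: &[u8]) -> Result<Vec<u8>, String>;
    }

    struct GeneratedScrubber; // today's LLM-written implementation

    impl UploadScrubber for GeneratedScrubber {
        fn scrub(&self, bytes: &[u8]) -> Result<Vec<u8>, String> {
            if bytes.is_empty() {
                return Err("empty upload".to_string());
            }
            Ok(bytes.to_vec()) // stand-in for whatever the agent wrote
        }
    }

    fn main() {
        let scrubber: Box<dyn UploadScrubber> = Box::new(GeneratedScrubber);
        assert!(scrubber.scrub(b"pdf bytes").is_ok());
    }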
Yeah, same. I like the silo idea, I'll have to explore that.
I'm relieved to hear this because the LLM hype in this thread is seriously disorienting. I'm deeply convinced that coding "by hand" is just as defensible in the LLM age as handwriting was in the TTY age. My dopamine system is quite unconvinced though, and it's killing me.
I have a silo’d service that handles file uploads of PDFs, images and so on. It was largely vibe coded.
It sits on an isolated tier and isn’t allowed to persist state or have permanent storage. We wanted to reduce the impact of a security flaw in this code.
We’ve ended up doing similar things for search and for an orchestration tool used for testing. The key thing is it’s non critical so we can live without it.
Yes, a retreading of the accidental vs. essential complexity discussion is in order here. I asked an AI agent to implement function calls in a programming language the other day. It decided the best way to do this was to spin up a new interpreter for every function call and evaluate the function within that context. This actually worked, but it was very, very, very slow.
The only way I was able to direct the AI to a better design was by saying the words I know in my head that describe better designs. Anyone without that knowledge wouldn't be able to tell the heavy interpreter architecture wasn't good, because it was fast enough for simple test cases which all passed.
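For contrast, a minimal sketch of the design those words point to (toy language, invented names): a call becomes one cheap environment frame chained to its parent, evaluated by the same interpreter, instead of a freshly booted interpreter per call.

    use std::collections::HashMap;
    use std::rc::Rc;

    // A lexical scope chain: one small frame per call.
    struct Env {
        vars: HashMap<String, i64>,
        parent: Option<Rc<Env>>,
    }

    impl Env {
        fn lookup(&self, name: &str) -> Option<i64> {
            self.vars.get(name).copied()
                .or_else(|| self.parent.as_ref().and_then(|p| p.lookup(name)))
        }
    }

    fn call(parent: Rc<Env>, params: &[&str], args: &[i64], body: impl Fn(&Env) -> i64) -> i64 {
        let mut vars = HashMap::new();
        for (p, a) in params.iter().zip(args) {
            vars.insert(p.to_string(), *a);
        }
        // The body sees its arguments plus everything in enclosing scopes.
        body(&Env { vars, parent: Some(parent) })
    }

    fn main() {
        let mut globals = HashMap::new();
        globals.insert("g".to_string(), 10);
        let globals = Rc::new(Env { vars: globals, parent: None });
        let r = call(globals, &["x"], &[32], |env| {
            env.lookup("x").unwrap() + env.lookup("g").unwrap()
        });
        println!("{r}"); // 42
    }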
And you can say "just prompt better" but we're very quickly coming to a place where people won't even have the words to say without AI first telling them what they are. At that point it might as well just say "The design is fine don't worry about it" and how would the user know any better.
I also remember a similar wave around 10-15 years ago regarding ML tooling and libraries becoming more accessible, more open source releases, etc. People whose value-add was knowing MATLAB toolboxes and keeping their code private got very afraid when Python's numpy, scikit-learn, Theano, etc. came to the forefront. And people started releasing the code with research papers on github. Anyone could just get that working code, start tweaking the equations, and put different tools and techniques together, even if you didn't work at one of those few companies or didn't do an internship at a lab that was in the know.
Or other people who just kept their research dataset private and milked it for years training incrementally better ML models on the same data. Then similar datasets appeared openly and they threw a hissy fit.
Usually there are a million little tricks and oral culture around how to use various datasets, configurations, hyperparameters etc and papers often only gave the high level ideas and math away. But when the code started to become open it freaked out many who felt they won't be able to keep up and just wanted to keep on until retirement by simply guarding their knowledge and skill from getting too known. Many of them were convinced it's going to go away. "Python is just a silly, free language. Serious engineers use Matlab, after all, that's a serious paid product. All the kiddies stacking layers in Theano will just go away, it's just a fad and we will all go back to SVM which has real math backing it up from VC theory." (The Vapnik-Chervonenkis kind, not the venture capital kind.)
I don't want to be too dismissive though. People build up an identity, like the blacksmith of the village back in the day, and just want to keep doing it and build a life on a skill they learn in their youth and then just do it 9 to 5 and focus on family etc. I get it. But wishing it won't make it so.
Talented, skilled people with good intuition and judgements will be needed for a long time but that will still require adapting to changing tools and workflows. But the bulk of the workforce is not that.
This is so true... I am having issues with the change right now.. being older and trying to incorporate agentic workflow into MY workflow is difficult as I have trust issues with the new codebase.. I do have good people skills with my clients, but my secret sauce was my coding skilz.. and I built my identity around that..
The cure for me has been to write an agent myself from first principles.
Tailored to my workflow, style, goals, projects and as close as possible to what I think is how an agent should work. I’m deliberately only using an existing agent as a rubber duck.
Using a coding agent seems quite low skill to me. It’s hard to see it becoming a differentiator. Just look at the number of people who couldn’t code before and are suddenly churning out work to confirm that.
I think your argument is predicated on LLM coding tools providing significant benefit when used effectively. Personally I still think the answer is "not really" if you're doing any kind of interesting work that's not mostly boilerplate code writing all day.
Define interesting. In my experience most business logic is not innovative or difficult, but there are ways to do it well or ways to do it terribly. At the senior levels I feel 90% of the job is deciding the shape of what to build and what NOT to build. I find AI very useful in exploring and trying more things but it doesn’t really change the judgment part of the job.
How much of software programmer work is interesting? A fraction of a percent? I'd argue most of us including most startups work on things that help make businesses money and that's pretty "boring" work.
It absolutely is, but the fundamental misunderstanding around this seems to be that "effectively using coding agents" is a superset of the 2023-era general understanding of "Senior Software Engineer".
At least when you're talking about shipping software customers pay for, or debugging it, etc. Research, narrow specializations, etc may be a different category and some will indeed be obsoleted.
I don’t think it could be the most important skill to have. The most common, and the most standardized one for sure, but if coding agents are doing fundamental R&D or running ops then nobody needs skills anyway.
> As it turns out, neural nets “won”
> The people who scoffed at neural nets and never got up to speed not so much.
I get the feeling you don’t know what you’re talking about. LLMs are impressive but what have they “won” exactly? They require millions of dollars of infrastructure to run, coming around a decade after their debut, and we’re really having trouble using them for anything all that serious. Now I’m sure in a few decades’ time this comment will read like silly cynicism, but I bet that will only be after those old-school machine learning losers come back around and start making improvements again.
Neural nets are used in way more applications than just LLMs. They did win. They won decisively in industry, for all kinds of tasks. Equating the use of one with the other is a pretty strong signal of:
> you don’t know what you’re talking about
Consider: Why did Google have a bazillion TPUs, anyway?
Not sure why this would catch heat, rationally speaking. It is quite clear that in a professional setting, effective use of coding agents is the most important skill for an individual developer to develop.
It’s also the most important capability engineering orgs can be working on developing right now.
I'd offer an edit that the most important skill may be knowing when the agent is wrong.
There's so much hand wringing about people not understanding how LLMs work and not nearly enough hand wringing about people not understanding how computer systems work.
I'd say viewing it as most important is pretty unprofessional. But isn't that the point of this extreme AI push? To replace professional skills with dummy parrots.
> This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
Doing so will effectively force a (potentially unwanted) career change for many people and will lead to the end of software engineering (and software as a career), assuming AI continues to improve.
"Effectively" using agents means that you're writing specs and reading code (in batches through change diffs) instead of writing code directly. This requires the ability to write well (or well enough to get what you want from the agent) and clearly communicate intent (in your language of choice, not code; very different IMO).
The way that you read code is different with agents as well. Agents can produce a smattering of tests alongside implementation in a single turn. This is usually a lot of code. Thus, instead of red-green-refactor'ing a single change that you can cumulatively map in your head, you're prompt-build-executing entire features all at once and focusing on the result.
Code itself loses its importance as a result. See also: projects that are moving towards agentic-first development using agents for maintenance and PR review. Some maintainers don't even read their codebases anymore. They have no idea what the software is actually doing. Need security? Have an agent that does nothing but security look at it. DevOps? Use a DevOps agent.
This isn't too far off from what I was doing as a business analyst a little over 20 years ago (and what some technical product managers do now for spikes/prototypes). I wrote FRDs (functional requirements documents) describing what the software should do. Architects would create TRDs (technical requirements documents) from those FRDs. These got sent off to developers to get developed, then to QA to get bugs hammered out, then back to my team for UAT.
If agents existed back then, there would've been way fewer developers/QA in the middle. Architects would probably still do a lot of what they did back then. I foresee that this is the direction we're heading in, but with agents powered by staff engineers/Enterprise Architects in the middle.
> Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.
People learn differently. I (and others) learn from doing. Typing code from Stack Overflow/Expertsexchange/etc instead of pasting it, then modifying it is how I learned to code. Some can learn from reading alone.
> This requires the ability to write well (or well enough to get what you want from the agent) and clearly communicate intent (in your language of choice, not code; very different IMO).
I do not see why you can't write your spec in pseudocode if you really want to - communicating your intent to the LLM for how the code should be developed is, skill-wise, far closer to programming than to writing.
> Doing so will effectively force a (potentially unwanted) career change for many people and will lead to the end of software engineering (and software as a career), assuming AI continues to improve.
If you expected things to stay the same forever, maybe software engineering wasn't the right career move for you. Even though it looked safe enough, given that we've spent 50 years writing the same old code the same old way, that was never guaranteed.
I for one am glad to see something genuinely new come along. The last dozen or so "paradigm shifts" turned out to be disappointing variations on the same old paradigm. Not this one, though.
I think you missed the part where I outlined how software engineering will become a business analyst spec-writing kind of job, a job I did and know that I dislike...
But, hey! Different strokes for different folks. This might be for you, and that's cool! I'm allowed to be sad about it, though.
I've worked for 35-ish companies (contract and full-time), largely on the west coast of the US. I have experienced the lip service from the vast majority. I have experienced maybe 2 or 3 earnest attempts at growing engineer skills through subsidized admission/travel to talks, tools, or invited instructors.
Every company I worked for didn’t give a shit about my skills. They just wanted to solve the problem in front of them and if they couldn’t then they would hire someone in with the right skills. Improving my skills was seen as a risk as I might leave.
Given the rest of the paragraph, I believe the parent is trying to say that merely improving developer skills is not valuable to the company, not that improving developer skills cannot provide value in terms of improved work product, morale, retention, etc.
The opposite is true in my case - though one organization had a small budget for things like AWS certs. I remember almost everyone who got those certificates never really learned anything from them either. They would just take the exams.
> Improving developer skills is not valuable to your company. They don't tell a customer how many person-hours of engineering talent improvement their contract is responsible for. They just want a solved problem.
Doesn't credentialism kinda throw a spanner in that - where it's not enough to have people with a good track record of solving issues, but then someone along the way says "Yeah, we'd also like the devs who'll work on the project to have Java certs." (I've done those certs, they're orthogonal to one's ability to produce good software)
Might just be govt. projects or particular orgs where such requirements are drawn up by dinosaurs, go figure (as much as I'd love software development to be "real" engineering with best practices spanning decades, it's still the Wild West in many respects). Then again, the same thing more or less applies to security: a lot of it seems like posturing and checklists (how, some years back, the status quo was that you'd change your password every 30-90 days because IT said so) instead of the stuff that actually matters.
Not to detract from the point too much, but I've very much seen people who don't care about solving problems and shipping fast so much as stuff like that, or covering their own asses by paying for Oracle support or whatever (even when it gets in the way of actually shipping, like ADF and WebLogic and the horror that is JDeveloper).
But yeah, I think many companies out there don't care that much about the individual growth of their employees, unless they have the ability to actually look further into the future - which most don't, given how they prefer not to train junior devs into mid/senior ones over years.
Pour yourself a drink, as I have a longish story that might be a useful metaphor.
Back in the day, there were more or less two consumer flight sims: MS Flight Simulator and XPlane. MSFS was and has always been the much prettier one, much easier to work with; xplane is kludgy, very old-school *NIX, and chonky in terms of resource usage. I was doing some work integrating flight systems data (FDAU/FDR outputs) into a cheaper flight re-creation tool, since the aircraft OEM's tool cost more than my annual salary. Hmm, actually, ten years of my salary.
So why use xplane at all, then?
The difference was that MSFS flight dynamics was driven from a model using table-based lookup that reproduced performance characteristics for a given airframe, whereas xplane (as you might be able to tell from the company name, Laminar Research) does fluid and gas simulation over the actual skin of the airframe, and then does the physics for the forces and masses and such.
I caught some flack for going with xplane: "Why not MSFS!? It's so much prettier!"
Unless the airframe is in a state that is near-equivalent to the tabular lookup model, the flight is not going to be faithfully re-created. A plane in distress is very often in a boundary state - at best. Or you might be flying a plane that doesn't really have a model, like, say, a brand new planform (like the company was trying to develop). Without the aerodynamic fundamentals, the further away you get from the model represented by the tabular lookups, the greater the risk gets.
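A toy illustration of that risk (numbers invented): a table-based model is interpolation inside a recorded envelope, and outside it there is simply no answer to give.

    // (angle of attack, lift coefficient) pairs, linearly interpolated.
    // Outside the table there is no physics to fall back on.
    fn lift_coefficient(table: &[(f64, f64)], alpha_deg: f64) -> Option<f64> {
        let seg = table.windows(2)
            .find(|w| w[0].0 <= alpha_deg && alpha_deg <= w[1].0)?;
        let ((x0, y0), (x1, y1)) = (seg[0], seg[1]);
        Some(y0 + (y1 - y0) * (alpha_deg - x0) / (x1 - x0))
    }

    fn main() {
        let table = [(0.0, 0.2), (5.0, 0.6), (10.0, 1.0)]; // normal envelope only
        println!("{:?}", lift_coefficient(&table, 7.5));  // Some(0.8): inside the data
        println!("{:?}", lift_coefficient(&table, 25.0)); // None: a plane in distress
    }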
And how does this relate?
Those fundamentals - aerodynamic or mathematical or electrical - will be able to deal with a much broader range than models trained on existing data, regardless of whether they are LLMs or tabular lookups. If we rely on LLMs for aerodynamics, for chemistry, for electrical engineering, we are setting ourselves up for something like the 2008 Econopalypse, except now it affects ALL the physical sciences; a Black Swan event that breaks reality.
I am genuinely worried we're working ourselves into just such an event, where the fundamentals are all but forgotten, and a new phenomenon simply breaks the nuts and bolts of the applied sciences.
As for my xplane selection, it helped in other ways. Often the FDR data is just plain wrong, but with xplane you could actually tell, because a control surface sticking out one way while the flight instruments say another lights up a "YOU GOT PROBLEMS" light in the cockpit as the aircraft inexplicably lurches to the right.
> I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home.
It could hardly have been a hobby if people were willing to pay you for it (and good rates too)?
I will rephrase it like this - the market has shifted away from providing value to the customers of said companies to pumping itself instead and it does not need to employ people for that. Simple as.
The irony is that the vast deskilling that's happening because of this means that most "software engineers" will become incapable of understanding, let alone fixing or even building new versions of the systems that they are utterly dependent on.
There should be thousands or tens of thousands of people worldwide who can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle, and we'll ironically be unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.
Maybe I’m just getting extremely lucky, but I don’t use AI to code at work and I’m still keeping up with my peers who are all Clauded up. I do a lot of green-field network appliance design and implementation and have not really felt the pressure in that space.
I do use Claude code at home maybe a couple hours a week, mostly for code base exploration. Still haven’t figured out how to fully vibe code: the generated code just annoys me and the agents are too chatty. (Insert old man shaking fist at cloud).
> I got to do my hobby as a career for the past 15 years, but that’s ending.
Frankly I don't think so. AI built on LLMs is the perpetual motion machine scam of our time. But it is cloaked in unimaginable complexity, and thus it is the perfect scam. Even the most elaborately hidden power source in a perpetual motion machine cannot fool nature, though, and the machine comes to a complete stop when that source runs out.
I agree with the sentiment, but I think the problem is much wider.
Managers at companies are just doing what they've optimized their careers for: maintaining some edge over some competition, at some cost. What is pure FOMO to you or me is good strategy to anyone trying to win [1]. In other words, FOMO was always the strategy.
This self-reinforcing loop is also not going away. There hasn't been any real evidence that any part of knowledge work, including coding, cannot be automated [2]. Even if human-level quality or cost-effectiveness takes 10 more years, all tasks are functionally solved or about to be. I don't like it, but it's true.
The big problem is that the people who are removed from this loop, who have the time to understand its effects and the power to make changes, are doing fuck-all.
So, whether the loop stops for a while or speeds up even more, we're fucked until we figure out how to detach full-time employment from survival.
[1] I believe this is called meta in PvP games; even if you want to subvert the meta, you gotta know it well first.
[2] Although it could just be my impression, and I'd be happy to be proven otherwise.
There's a catch: do not break customer trust. Many people are just tinkering with solving the problem, but the indirect effects have not been tackled, either by the tool, by processes, or by just some human thinking.
Picking out my favorite idea out of many: we do need ways to stay mentally sharp in the age of AI. Writing and publishing is a good one. I also recommend stimulating human conversations and long-form reading.
More and more, the bar is being lowered. Don’t fall into brain rot. Don’t quiet quit. Stay active and engaged, and you’ll begin to stand out among your peers.
Funnily enough, I saw this post as I was placing my HN account on hiatus, because I'm tired of pretending that the quality of discourse is on par with what I've been used to reading and participating in.
We're obviously in an era where "good enough" is taken so far that what used to be the middle of the fictional line is not the middle point anymore but a new extreme. You're either someone who cares for the output or someone who cares how readable and easy to extend the code is.
I can only assume this is done on hopeful purpose, with the hope that LLMs will "only keep improving linearly" to the point where readability and extendability are not my problem but "tomorrow's LLM's" problem.
I do find it hard to tolerate the feeling of being watched online. The second-most trending dataset on huggingface right now is a snapshot of HN updating at a 5 minute interval. It makes me not want to really comment at all, just like how I don’t really publish any software I write anymore.
Turns out it sucks to produce original works when you know that, whereas previously a few people at best might see your work, now it’s a bunch of omniscient robots and maybe half of those original people are using the robots instead.
> The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm.
> (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?)
> The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates.
This sort of reasoning is why you might have been called extreme.
It's less extreme to say "many people see and/or get lots of benefit, but it's wrong to use the tool due to the harms it has".
There's nothing wrong with extreme, but since you asked.
> The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms.
Isn't this what the free software movement wanted? Code available to all?
Yes, code is cheap now. That's the new reality. Your value lies elsewhere.
You can lament the loss of your usefulness as a horse buggy mechanic, or you can adapt your knowledge and experience and use it towards those newfangled automobiles.
Anti-AI articles like this seem to be the new "Doing my part to resist big tech: Why I'm switching back from Chrome to Firefox" genre that popped up on HN for a decade or so. If it makes you feel better, great, but don't kid yourself that your actions will make any difference whatsoever to the overall trajectory of AI adoption in IT or society.
I think it's probably accurate to say that the vast majority of writers throughout history were writing for an extremely tiny or nonexistent audience. My favorite example of this is Nietzsche, who basically had zero readership during most of his life, beyond a few close friends, and even had to personally pay to get his books published. He only posthumously became one of the most influential thinkers of the 20th century.
So while I do worry about AI's impact on blogging/writing/etc., I do think to some extent, you either love the process or you don't. If you only write in order to have readers, you're in the wrong game.
> First let’s accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature, privacy is a bug. Everything is an “algorithm” optimised to exploit.
Suppose that I have discovered a novel algorithm that solves an important basic problem much more efficiently than current techniques do. How do I hide it from the web scrapers that will steal it if I put it on GitHub or elsewhere? Should I just write it up as a paper and be content with citations and minor glory? Or should I capture AI search results today for "write me code that does X", put my new code up under a restrictive license, capture search results a day later, demonstrate that an AI scraper has acquired the algorithm in violation of the license, and seek damages?
One problem writing does have: we grew up in a massively changing and progressing software-writing era. A golden era.
Now I still show Clean Code videos from Uncle Bob and other old things to new hires and young colleagues.
Java got more features, granted, but the golden era of discovery is over.
The new big thing is AI, and I'm curious to see how it will feel to write real agents for my company-specific use cases.
But I'm also seeing people so bad at their daily jobs that I wish I could get their salary as tokens to use. It will change, and is already changing, our field.
Btw, "Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value." I disagree; video generation has a massive impact on the industry for a lot of people. Don't downplay this. NFTs, by the way, never had any impact besides moving money from A to B.
I don't see any proof that software development is not dead. Software engineering is not, and it's much more than writing code, and it can be fun. But writing code is dead; there is no point in doing it if an LLM can output the same code 100x faster. Of course, architecture and operations stay in our hands (for now?).
Initially I was very sceptical; the first versions of ChatGPT or Claude were rather bad. I kept holding on to the thought that it couldn't get good. Then I spent a few months evaluating them: if you know how to code, there is no point in coding anymore, just instruct an LLM to do something, verify, merge, repeat. It's an editor of some sorts, an editor where you enter a thought and get code as an output. Changes the whole scene.
LLMs can produce text information, but they cannot have experiences. Writing about authentic experience is still a worthwhile endeavor. Expression of a preference is also an experience when framed correctly.
I find it funny how clanker took off and everyone uses it. It was edited into a video where someone was otherwise saying something extremely racist (the more offensive version of the n-word). For those curious, it involves a Burger King hat, schizophrenia, and an airplane; someone edited the n-word out and dubbed in clanker with AI (because why not insult AI by using AI?). I do wonder if the AI uprising will involve robots killing anyone who used clanker in a derogatory way and sparing everyone else.
Also, yes, I know the origin is Star Wars, but it went viral recently a very specific way.
If writing code was the only value your employer ever saw in you, it's for the best that you're forced to find a better job now. You would have eventually hit this point in your career anyway regardless of "AI".
The broader corporate world has never wanted code monkeys. They want "boring" reliability and pay a reasonable wage for it. On the other hand, they also won't tolerate contrarians who can't deliver, so maybe some of the fear from people posting this sort of thing really is justified.
Alas, I think the tech crowd has collectively painted humanity into a corner where not playing is no longer an option.
The combination of having subverted copyright and enabled cheap machine replication kills large swaths of creativity, at least as a viable living. One can still do many things on an artisanal level, certainly, and as excited as I am about AI, it's hard not to see it as a big L for humanity's creative output.
Old web stuff is still around. RSS feeds are out there. Some parts of masto are generally chill and filled with people having interesting convos.
You don't have to give up on everything to participate, but it can be a space to go to if you're tired of every social interaction being mediated by (I'm being glib) hustlers.
This person gives me the vibe that they are so attached to their craft that they can't seem to do anything about LLMs' rising ubiquity but scold and vaguely sloganeer.
Was this how other professionals dealt with their grief? Like a translator in the advent of ML based translations? Like a lift man?
I think the "Leave them Behind" section at the end sort of ignores the whole "they will ruthlessly copy your material, and put aggressive extra load on your server while repeatedly stealing your work" dimension.
You can try to avoid consuming AI-generated material, but of course part-way through a lot of things you may wonder whether it is partly AI-generated, and we don't yet have a credible "human-authored" stamp. But you can't really keep them from using your work to make cheap copies of you, or at least reducing your audience by including information or insights from your work in the chat sessions of people who otherwise might have read your work.
I’ve decided the only way I’ll adopt a fully automated agentic AI workflow the way companies want is if I am allowed to hold multiple jobs at multiple companies.
Imagine having 6 software engineering jobs, each paying maybe $150k a year, all being done by agents.
Hell, I might even do this secretly without their consent. If I can hold just 10 jobs for about 3 or 4 years, I can retire and leave the industry before it all comes crumbling down in 2030.
The problem of course, is securing that many jobs. But maybe agents can help with applying for jobs.
When will we understand that not everybody works in a FAANG?
Assuming that the way to put some food on the table for all software developers is always a matter of creating a new magical algorithm in a mystical programming language deployed in a unicorn architecture is so childish.
99% of all software development today is simply creating a CRUD app or refactoring a codebase because the React guys decided to change everything again.
I'm building in the opposite direction. An AI that participates in group conversations but knows when to shut up. The hardest part is making it not talk when humans are handling it. Turns out coding social awareness into an AI is way harder than coding productivity. I get why people feel replaced by the productivity tools. The social side is nowhere close to replacing anyone.
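Purely as an illustration of that gating problem (every signal and threshold here is invented), the interesting function is the one that mostly returns false:

    // The "knows when to shut up" gate: silence is the default,
    // unlike the usual always-reply agent loop.
    struct Message {
        mentions_bot: bool,
        is_question: bool,
        secs_since_last_human_reply: u64,
    }

    fn should_respond(msg: &Message) -> bool {
        if msg.mentions_bot {
            return true; // explicitly summoned
        }
        if !msg.is_question {
            return false; // never interject into flowing conversation
        }
        msg.secs_since_last_human_reply > 120 // humans get first refusal
    }

    fn main() {
        let msg = Message {
            mentions_bot: false,
            is_question: true,
            secs_since_last_human_reply: 30,
        };
        assert!(!should_respond(&msg)); // a human is already handling it
    }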
Yeah, well, I just don’t care about the "AI dark forest".
You seriously need to go outside and touch grass if you are so defeated by another chess-winning machine.
Nobody wants to watch AI play chess; nobody wants to read AI blog posts.
AI makes human writing more valuable, not less.
I will pay good money for purely human-made books, certified as made without a single auto-generated word, whether in the original or during the process of translation.
> One upside of this looming economic and intellectual depression is that the media is beginning to recognise gate keepers are no longer the hand that feeds them.
In what world is "the media" not an integral, tightly-bound part of the ratchet mechanism that seeks to suppress all distinction?
It’s never been more important to blog. There has never been a better time to blog. I will tell you why: we’re being starved for human conversation and authentic voices.
The supposedly starved don't seem to care much for such food. Blogs are kind of a wasteland.
So many insecure AI boosters in the comments slandering and mocking the author. And yet the upvotes clearly indicate that the sentiment in the article resonates widely with the community.
Well, there’s not much of a point leaving a comment saying “yes, this, exactly this,” so I’ll leave one here on behalf of my fellow lurkers.
The more AI gets shoved down my throat, the less I’m inclined to use it for anything, and the more I’m inspired to write my own writing, make my own art, and create my own code — with great creative joy and burning anger. Enjoy your 1000x productivity gains and your inevitable burnout as you downskill to a glorified inference loop.
I work for a bank and I'm basically just an AI user now. Honestly it's like pulling teeth to get anyone to look at your code here. I'm just on hackernews lol
I just don't see it. What's truly sad is that a field allegedly filled with technically proficient people is filled with trend chasing and FOTM fads. To this day, people with nuanced takes are few and far between, and most people either go full force into LLM evangelism or LLM denial, the first more annoying by far.
I'm really getting tired of the programming obituaries. As if LLMs didn't fail at any complex task, as if they didn't vomit shit code and just copy patterns surrounding the new code, and as if they didn't hallucinate and downright write wrong code or make up libraries. Yet every time you bring it up, someone will come along and say "you're not using it right, then". Is it that, or is it just that they're only doing toy projects? I'm led to believe the latter.
At this point I don't know what's organic and what's not. Reddit is filled with astroturfing for big LLM. Maybe this place is too? Even if that were not the case, I'm led to believe that it isn't uncommon for people to swallow all of the big LLM propaganda and fall into despair, or into unrealistic expectations, and just parrot it everywhere else. One thing is for certain: LLM evangelism has all the money in the world, and LLM denial doesn't. It's only natural to think that the balance is tilted in terms of media presence.
Even at their best, or worst, LLMs can't do anything you couldn't do better yourself with a scaffolding prompt plus manual editing, and at the end of the day, you still need the cognitive energy to review, veto, and come up with the implementation. What does this do, exactly, other than save you a bunch of keypresses? I wonder if the people touting it to be all that really didn't think before LLMs, or just switched their brain off on them.
I used to really like this site, but I think that just consuming the RSS feed is enough for me. I think that lobste.rs has less "trend chasing" in its points of view these days, and I do wonder if on here there are larger numbers of non-technical people quick to call for the funeral of things.
Blog posts are an interesting case; they are a very good example of something where the abundance of supply outstrips demand so much that it cannot be realistic to expect a median-level contribution to receive any significant attention.
Setting aside the self-delusion that makes a considerable number erroneously rate themselves above average, the reason you create blog posts should not be the attention you might gain; there simply are not the eyeballs. You create as a form of self-expression, to organise your thoughts, to create a record of them.
AI can never challenge in those areas because, as it has always been, the act of creation is the goal.
"Generative AI is art. It’s irredeemably shit art; end of conversation."
I think most people cannot distinguish between "genuine" creativity and an artificial amalgamation of training data and human-provided context. For one, I do not know what already exists. Some work created by AI may be an obvious rip-off of the style of a particular artist, but I wouldn't know. To me it might look awesome and fresh.
I think many of the more human-centric thinkers will be disappointed at how many people just won't care.
I laugh jollily in the face of AI. I know the coming shit pile, its nature isn't going to be surprising, only the speed and utter surrender of the vast majority of humanity to mediocrity.
What AI represents to me is a teacher! I have so long lacked a music teacher and musical tools. I spent my entire career doing invisible software at the lowest levels, and now I can finally build cool tools that help me learn and practice and enjoy playing music! Screw all the haters; if you're curious about a wide range of topics and already have some knowledge, you can gallivant across a vast space and learn a lot along the way.
AI is a bit of a bullshitter, but don't take its bullshit as truth, just as you should never take anything your teacher says as gospel. How do we know what's true? The truth of the universe and the world is that underneath it all, it is self-consistent, and we keep making measurement errors. The AI is an enormous pot of magic that it's up to you to organize with... your own skills.
You have to actively resist deskilling by doing things. AI should challenge you and reward you, not make you passive.
Use AI to teach yourself by asking lots of questions and constantly testing the results against reality.
Such a bizarre sentiment, that the web and internet as we know/knew them are some bastion of freedom and future for humanity.
According to the author AI is 99% hype.
That 1% of AI utility can unlock more for humanity than 99.999% of blogs: static text hosted from a laptop in a closet.
The oddball position that cheap publishing via the web is a path to The Next Generation for humanity is 100% hype.
Other than feeding dopamine addiction, humanity has not improved greatly since we read all those insipid posts on GeoCities that no one remembers today.
It's all been 99%+ hype to feed Wall Street. Young GenXers and older Millennials with tech jobs were temporary political pawns and are gonna end up bag holders, like many older GenXers and Boomers who lived through the car boom, the housing boom, the retail boom.
You have to write for yourself. People have said this for years, decades, millennia even - but nobody really believes that writing to an audience of zero (or one, if Mom is still around) is worth it.
Everyone wants to be a famous author, or at least a published/somewhat acknowledged one; few are willing to write their novel and be satisfied with zero or near-zero sales/readings.
But that is exactly what you need to do, especially in the age of AI. Everyone who was "in it to win it" (think linkedinslop, which existed before AI) is certainly going to use AI, because they do not give a shit about the quality of themselves; they just want the result.
And you can only become a writer (unpublished, unread, or no) by doing the writing - it takes time (10,000 hours?) that cannot be replaced by AI, just like you can't have the body of a marathon runner without running (yes, yes, the joke). You may be able to get 26 miles and change away, even very fast, but unless you personally do the running of that distance without cheating, you will not get the inherent benefits.
And if you instruct an AI, or another human even, to write for you, you may get some of the results you want, but you won't have changed to become a writer.
We shouldn't celebrate the successful blogs; they're already rewarded enough. It's celebrating the unsuccessful blogs that is needed - even if, frankly, the vast majority of them are sub-AI levels of crap, there is still a human changing and progressing behind them.
Babies fall over a lot but unless you take them out of the stroller and let them do so, they'll never progress to crawling, walking, running.
This feels weird somehow. It feels like: damn, we can't train our AI any better, as everything is regurgitated slop now.
How can we get people to create new content for us, hopefully with new ideas …
Might just be me, but I definitely don’t get why blogging should be the solution.
Considering that the rant is using photos from Star Trek and Oliver Twist to make their points, copyrighted material with no indication of permission, they're less "creative" than a stochastic parrot.
484 comments
Adoption of AI at a FOMO corporate pace doesn't seem to include this consideration. They largely want your skills to atrophy as you instead beep boop the AI machine to do the job (arguably) faster. I think they're wrong and silly and any time they try to justify it, the words don't reconcile into a rational series of statements. But they're the boss and they can do the thing if they want to. At work I either do what they want in exchange for money or I say no thank you and walk away.
Which led me to the conclusion I'm currently at: I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home.
I saw something similar in ML when neural nets came around. The whole “stack moar layerz” thing is a meme, but it was a real sentiment about newer entrants into the field not learning anything about ML theory or best practices. As it turns out, neural nets “won” and using them effectively required development and acquisition of some new domain knowledge and best practices. And the kids are ok. The people who scoffed at neural nets and never got up to speed not so much.
Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.
> what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
Well, it's not. There's a small moat around that right now because the UX is still being ironed out, but in a short while able to use coding agents will be the new able to use Excel.
What will remain are the things that already differentiate a good developer from a bad one:
- Able to review the output of coding agents
- Able to guide the architecture of an application
- Able to guide the architecture of a system
- Able to minimize vulnerabilities
- Able to ensure test quality
- Able to interpret business needs
- Able to communicate with stakeholders
> Able to review the code output of coding agents
That probably won’t be necessary in a few years.
I fail to see how we transition society into a positive future without supplying means of verifying systemic integrity. There is a reason that Upton Sinclair became famous: wayward incentives behind closed doors generally cause subpar standards, which cause subpar results. If the FDA didn't exist, or they didn't "review the output", society would be materially worse off. If the whole pitch for AI ends with "and no one will even need to check anything" I find that highly convenient for the AI industry.
As a simple use-case, I've found LLMs to be much better than me at macro programming, and I don't really need to care about what it does because ultimately the constraint is just that it bends the syntax I have into the syntax I want, and things compile. The details are basically irrelevant.
But if you peel back that layer to the point where you are no longer discussing the code, and just saying "code X that does Y"... how big is X going to get without verifying it? This is a basic, fundamental question that gets deflected by evaluating each case where AI is useful.
When you stop being specific about what the AI is doing and switch to the general case, there is a massive and obvious gap that nobody is adequately addressing. I don't think anyone would say that details are irrelevant in life-threatening scenarios, and yet no one is acknowledging where the logical end of this line of thinking lies.
We're already seeing this today. Every year, thousands of people are becoming essentially irrelevant to the economy. They don't own much, they don't invest much, they don't spend much money, they don't make much money, and they are invisible to economics.
> They don't own much, they don't invest much, they don't spend much money, they don't make much money, and they are invisible to economics.
Indeed. Sometimes I think the so-called “lower classes” end up functioning more like crops to be farmed by the rich. Think: dollar stores that sell tiny packages of things at worse unit cost, checking account fees, Rent-A-Center, 15% interest auto loans and store credit cards with 30% interest…
Setting aside the AI angle, the idea of people becoming essentially irrelevant to the economy is an indictment of society. But I'd argue the indictment is really of what constitutes measurement in the economy, not of society itself or of technology.
Sure, someone may not spend much money or produce much money, but if they produce scientific research or cultural work that is intangibly valuable, it is still valuable regardless of whether economists can point to a metric or not. The same goes for the countless contributions to our world from nature: what is the economic value of a garden snake or a beetle? A meaningless question when the economy can only see things in dollars.
I use LLMs with best practices to program professionally in an enterprise every day, and even Opus 4.6 still consistently makes some of the dumbest architectural decisions, even with full context, complete access to the codebase and me asking very specific questions that should point it in the right direction.
I just went “hmmm, nice” and went on. The problem there is that I didn’t get that sense of accomplishment I crave and I really didn’t learn anything. Those are “me” problems but I think programmers are collectively grappling with this.
That said, very sophisticated next-word predictors can and sometimes do write good code. It's amazing what they can get right, and then they can turn around and make the weirdest, dumbest mistakes.
It’s a tool. Sometimes it’s the right tool, sometimes it’s not.
I think, most likely, you'll still need developers in the mix to make sure the development is going right. You can't just have only business people, because they have no way to gauge if the AI is making the right decisions in regards to technical requirements. So even if the AI DOES get as good as you're saying, they wouldn't know that without developers.
It’s short term thinking IMO, but it’s my interpretation of the argument AI proponents are making.
> They work
For some definition of work, yes; not every definition. Their product is not without flaw, leaving room for improvement, and room for improvement by more than just other AI.
> There are no vulnerabilities
That's just not true. There are loads of vulnerabilities, just as there are plenty of vulnerabilities in human-written code. Try it: point a vulnerability-hunting AI at the output of an AI that's been through the highest-intensity, highest-scrutiny workflow, even code that has already been AI-reviewed for vulnerabilities.
If it does go as far in that direction as many seem to expect (or, indeed, want), then most people will be able to do it; there will be a dearth of jobs and many people wanting them, so it'll be a race to the bottom for all but the lucky few: development will become a minimum wage job, or so close to one that it'll make no odds. If I'm earning minimum wage it isn't going to be sat on my own doing someone else's prompting; I'll find a job that doesn't involve sitting alone in front of a screen and reclaim programming for hobby time (or just stop doing it at all, I have other hobbies to divide my time between). I dislike (effectively) being a remote worker already, but put up with it for the salary; if the salary goes because “AI” turns it into a race-to-the-bottom job, then I'm off.
Conversely: if that doesn't happen, then I can continue to do what I want, which is to program, not to instruct someone else (be it a person I manage or an artificial construct) to program. I'm happy to accept the aid of tools for automation and such; I've written a few of my own. But there is a line past which my interest will just vanish.
No one will be eager to employ “ai-natives” who don’t understand what the llm is pumping out, they’ll just keep the seasoned engineers who can manage and tame the output properly. Similarly, no one is going to hire a bunch of prompt engineers to replace their accountants, they’ll hire fewer seasoned accountants who can confidently review llm output.
I'm not excited about it. I just see it as a logical consequence if what people are predicting comes to pass, and I've thought about how I will deal with that.
Complexity is not just a matter of reducing the complexity of the code, it's also a matter of reducing the complexity of the problem. A programmer can do the former alone with the code, but the latter can only be done during a frank discussion with stakeholders.
A vibe coder using an LLM to generate complexity will not be able to tell which complexity to get rid of, and we don't have enough training data of well-curated complexity for LLMs to figure it out yet.
Overall, we are trying to "silo" LLM-generated code into its own services with a well-defined interface so that the code can just be thrown away and regenerated (or rewritten by hand) because maintaining it is so difficult.
I'm relieved to hear this, because the LLM hype in this thread is seriously disorienting. I'm deeply convinced that coding "by hand" is just as defensible in the LLM age as handwriting was in the TTY age. My dopamine system is quite unconvinced though, which is killing me.
It sits on an isolated tier and isn’t allowed to persist state or have permanent storage. We wanted to reduce the impact of a security flaw in this code.
We’ve ended up doing similar things for search and for an orchestration tool used for testing. The key thing is it’s non critical so we can live without it.
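A minimal sketch of that kind of siloing (illustrative names only, not either commenter's actual system): a human hand-writes and owns the narrow contract and its tests, and the generated implementation lives behind it so it can be thrown away and regenerated wholesale.

```python
from typing import Protocol


class SearchService(Protocol):
    """Hand-written contract: the part a human owns, reviews, and keeps stable."""

    def index(self, doc_id: str, text: str) -> None: ...
    def query(self, terms: str, limit: int = 10) -> list[str]: ...


class NaiveSearchService:
    """Disposable implementation behind the contract. If it rots, regenerate it
    against the same Protocol and test suite rather than maintaining it."""

    def __init__(self) -> None:
        self._docs: dict[str, str] = {}

    def index(self, doc_id: str, text: str) -> None:
        self._docs[doc_id] = text.lower()

    def query(self, terms: str, limit: int = 10) -> list[str]:
        words = terms.lower().split()
        return [doc_id for doc_id, text in self._docs.items()
                if all(w in text for w in words)][:limit]


def make_search_service() -> SearchService:
    # Single swap point: the rest of the codebase only ever sees the Protocol,
    # so the concrete class can be deleted and regenerated at will.
    return NaiveSearchService()
```

The single construction point is what makes the implementation disposable: nothing outside the factory imports the concrete class.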
The only way I was able to direct the AI to a better design was by saying the words I know in my head that describe better designs. Anyone without that knowledge wouldn't be able to tell the heavy interpreter architecture wasn't good, because it was fast enough for simple test cases which all passed.
And you can say "just prompt better" but we're very quickly coming to a place where people won't even have the words to say without AI first telling them what they are. At that point it might as well just say "The design is fine don't worry about it" and how would the user know any better.
Or other people who just kept their research dataset private and milked it for years training incrementally better ML models on the same data. Then similar datasets appeared openly and they threw a hissy fit.
Usually there are a million little tricks and an oral culture around how to use various datasets, configurations, hyperparameters, etc., and papers often gave away only the high-level ideas and math. But when the code started to become open, it freaked out many who felt they wouldn't be able to keep up and just wanted to coast to retirement by guarding their knowledge and skill from becoming too widely known. Many of them were convinced it was going to go away. "Python is just a silly, free language. Serious engineers use Matlab; after all, that's a serious paid product. All the kiddies stacking layers in Theano will just go away; it's a fad, and we will all go back to SVMs, which have real math backing them up from VC theory." (The Vapnik-Chervonenkis kind, not the venture capital kind.)
I don't want to be too dismissive though. People build up an identity, like the blacksmith of the village back in the day, and just want to keep doing it: build a life on a skill learned in their youth, work it 9 to 5, and focus on family and so on. I get it. But wishing won't make it so.
Talented, skilled people with good intuition and judgement will be needed for a long time, but that will still require adapting to changing tools and workflows. The bulk of the workforce, however, is not that.
Tailored to my workflow, style, goals, projects and as close as possible to what I think is how an agent should work. I’m deliberately only using an existing agent as a rubber duck.
It’s a very empowering learning experience.
> Using a coding agent seems quite low skill to me.
I agree if that's all you can do. Using a coding agent to complement a valuable domain-specific skill is gold.
At least when you're talking about shipping software customers pay for, or debugging it, etc. Research, narrow specializations, etc may be a different category and some will indeed be obsoleted.
> As it turns out, neural nets “won”
> The people who scoffed at neural nets and never got up to speed not so much.
I get the feeling you don’t know what you’re talking about. LLMs are impressive, but what have they “won” exactly? They require millions of dollars of infrastructure to run, a decade or so after their debut, and we’re really having trouble using them for anything all that serious. Now I’m sure in a few decades’ time this comment will read like the words of a silly cynic, but I bet that will only be after those old-school machine learning losers come back around and start making improvements again.
> you don’t know what you’re talking about
Consider: Why did Google have a bazillion TPUs, anyway?
It’s also the most important capability engineering orgs can be working on developing right now.
Software Engineering itself is being disrupted.
> This is going to catch some heat [...]
It does seem like a roundabout way of saying "but what if full sending on AI didn't have downsides, tho?"
Just phrased in a way that can put the onus on the other party with a perfect weasel word qualifier like "most important".
There's so much hand wringing about people not understanding how LLMs work and not nearly enough hand wringing about people not understanding how computer systems work.
> This is going to catch some heat, but what if the most important professional “developer skill” to learn or improve is how to effectively use coding agents?
Doing so will effectively force a (potentially unwanted) career change for many people and will lead to the end of software engineering (and software as a career), assuming AI continues to improve.
"Effectively" using agents means that you're writing specs and reading code (in batches through change diffs) instead of writing code directly. This requires the ability to write well (or well enough to get what you want from the agent) and clearly communicate intent (in your language of choice, not code; very different IMO).
The way that you read code is different with agents as well. Agents can produce a smattering of tests alongside implementation in a single turn. This is usually a lot of code. Thus, instead of red-green-refactor'ing a single change that you can cumulatively map in your head, you're prompt-build-executing entire features all at once and focusing on the result.
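As a rough sketch of that prompt-build-execute loop (the `agent` callable and the `make` targets here are hypothetical stand-ins, not any real agent API): each turn is judged by the observable result, and the accumulated diff is reviewed in one batch at the end.

```python
import subprocess
from typing import Callable


def checks_pass() -> bool:
    """Judge a turn by the observable result: does it build, do the tests pass?"""
    build = subprocess.run(["make", "build"], capture_output=True, text=True)
    tests = subprocess.run(["make", "test"], capture_output=True, text=True)
    return build.returncode == 0 and tests.returncode == 0


def develop_feature(spec: str, agent: Callable[[str], None],
                    max_turns: int = 5) -> bool:
    """Prompt-build-execute a whole feature; review the final diff in one batch."""
    prompt = spec
    for _ in range(max_turns):
        agent(prompt)  # hypothetical: the agent applies edits to the working tree
        if checks_pass():
            return True  # now read the diff as a batch, not line by line
        prompt = spec + "\nThe build or tests failed; fix the problem and retry."
    return False
```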
Code itself loses its importance as a result. See also: projects that are moving towards agentic-first development using agents for maintenance and PR review. Some maintainers don't even read their codebases anymore. They have no idea what the software is actually doing. Need security? Have an agent that does nothing but security look at it. DevOps? Use a DevOps agent.
This isn't too far off from what I was doing as a business analyst a little over 20 years ago (and what some technical product managers do now for spikes/prototypes). I wrote FRDs [^0] describing what the software should do. Architects would create TRDs [^1] from those FRDs. These got sent off to developers to get developed, then to QA to get bugs hammered out, then back to my team for UAT.
If agents existed back then, there would've been way fewer developers/QA in the middle; architects would probably have done a lot of what those roles did. I foresee that this is the direction we're heading in, but with agents powered by staff engineers/Enterprise Architects in the middle.
> Edit: as an aside, I have learned plenty from reviewing coding agent generated implementations of various algorithms or methods.
People learn differently. I (and others) learn from doing. Typing code from Stack Overflow/Expertsexchange/etc instead of pasting it, then modifying it is how I learned to code. Some can learn from reading alone.
[^0]: https://www.modernanalyst.com/Resources/Articles/tabid/115/I...
> This requires the ability to write well (or well enough to get what you want from the agent) and clearly communicate intent (in your language of choice, not code; very different IMO).
I do not see why you can't write your spec in pseudocode if you really want to. Communicating your intent to the LLM about how the code should be developed is, skill-wise, far closer to programming than to writing.
If you expected things to stay the same forever, maybe software engineering wasn't the right career move for you. Even though it looked safe enough, given that we've spent 50 years writing the same old code the same old way, that was never guaranteed.
I for one am glad to see something genuinely new come along. The last dozen or so "paradigm shifts" turned out to be disappointing variations on the same old paradigm. Not this one, though.
But, hey! Different strokes for different folks. This might be for you, and that's cool! I'm allowed to be sad about it, though.
> Improving developer skills is not valuable to your company
Every company I've ever worked at has genuinely believed in and invested in improving developer skills.
There doesn’t seem to be a plan for maintaining that culture.
> Improving developer skills is not valuable to your company. They don't tell a customer how many person-hours of engineering talent improvement their contract is responsible for. They just want a solved problem.
Doesn't credentialism kinda throw a spanner in that - where it's not enough to have people with a good track record of solving issues, but then someone along the way says "Yeah, we'd also like the devs who'll work on the project to have Java certs." (I've done those certs, they're orthogonal to one's ability to produce good software)
Might just be govt. projects or particular orgs where such requirements are drawn up by dinosaurs, go figure (as much as I'd love software development to be "real" engineering with best practices spanning decades, it's still the Wild West in many respects). Then again, the same thing more or less applies to security, a lot of it seems like posturing and checklists (how some years back the status quo was that you'll change your password every 30-90 days because IT said so) instead of the stuff that actually matters.
Not to detract from the point too much, but I've very much seen people care less about solving problems and shipping fast than about stuff like that, or about covering their own asses by paying for Oracle support or whatever (even when it gets in the way of actually shipping, like ADF and WebLogic and the horror that is JDeveloper).
But yeah, I think many companies out there don't care much about the individual growth of their employees, unless they have the ability to actually look further into the future, which most don't, given how they prefer not to train junior devs into mid/senior ones over years.
Back in the day, there were more or less two consumer flight sims: MS Flight Simulator and XPlane. MSFS was and has always been the much prettier one, much easier to work with; xplane is kludgy, very old-school *NIX, and chonky in terms of resource usage. I was doing some work integrating flight systems data (FDAU/FDR outputs) into a cheaper flight re-creation tool, since the aircraft OEM's tool cost more than my annual salary. Hmm, actually, ten years of my salary.
So why use xplane at all, then?
The difference was that MSFS flight dynamics was driven from a model using table-based lookup that reproduced performance characteristics for a given airframe, whereas xplane (as you might be able to tell from the company name, Laminar Research) does fluid and gas simulation over the actual skin of the airframe, and then does the physics for the forces and masses and such.
I caught some flak for going with xplane: "Why not MSFS!? It's so much prettier!"
Unless the airframe is in a state that is near-equivalent to the tabular lookup model, the actual flight is not going to be faithfully re-created. A plane in distress is very often in a boundary state, at best. Or you might be flying a plane that doesn't really have a model, like, say, a brand new planform (like the company was trying to develop). Without the aerodynamic fundamentals, the further away you get from the model represented by the tabular lookups, the greater the risk gets.
And how does this relate?
Those fundamentals, aerodynamic or mathematical or electrical, will be able to deal with a much broader range than models trained on existing data, regardless of whether those models are LLMs or tabular lookups. If we rely on LLMs for aerodynamics, for chemistry, for electrical engineering, we are setting ourselves up for something like the 2008 Econopalypse, except now it affects ALL the physical sciences; a Black Swan event that breaks reality.
I am genuinely worried we're working ourselves into just such an event, where the fundamentals are all but forgotten and a new phenomenon simply breaks the nuts and bolts of the applied sciences.
As for my xplane selection, it helped in other ways. Often the FDR data is just plain wrong, and with xplane you could actually tell: a control surface sticking out one way while the flight instruments say another lights up a "YOU GOT PROBLEMS" light in the cockpit as the aircraft inexplicably lurches to the right.
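The lookup-versus-fundamentals distinction is easy to sketch. The numbers and the thin-airfoil formula below are textbook illustrations, not either simulator's actual code: a tabular model can only interpolate inside the flights it was built from and clamps at the edges, while a model derived from the geometry keeps producing answers off-table, failing for physical rather than data reasons.

```python
import math

# Tabulated lift coefficient vs angle of attack (degrees): the lookup-style
# model, valid only where flight-test data existed.
ALPHA_DEG = [-5.0, 0.0, 5.0, 10.0, 15.0]
CL        = [-0.30, 0.20, 0.75, 1.20, 1.40]


def cl_lookup(alpha: float) -> float:
    """Linear interpolation; outside the table it can only clamp.
    A plane in distress usually lives out here, past the table's edge."""
    if alpha <= ALPHA_DEG[0]:
        return CL[0]
    if alpha >= ALPHA_DEG[-1]:
        return CL[-1]
    i = max(j for j in range(len(ALPHA_DEG)) if ALPHA_DEG[j] <= alpha)
    t = (alpha - ALPHA_DEG[i]) / (ALPHA_DEG[i + 1] - ALPHA_DEG[i])
    return CL[i] + t * (CL[i + 1] - CL[i])


def cl_thin_airfoil(alpha: float) -> float:
    """Crude first-principles model (Cl = 2*pi*alpha), derived from geometry
    rather than recorded flights, so it still answers off-table. It has its
    own limits (no stall), but they are physical limits, not data limits."""
    return 2.0 * math.pi * math.radians(alpha)
```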
> Improving developer skills is not valuable to your company
What's valuable to a company is not necessarily what's valuable to the customers or, even more so, to a civilization at large.
> I think I'm mostly just mourning the fact that I got to do my hobby as a career for the past 15 years, but that’s ending. I can still code at home.
It could hardly have been a hobby if people were willing to pay you for it (and good rates too)?
I will rephrase it like this: the market has shifted away from providing value to the customers of said companies and toward pumping itself up instead, and it does not need to employ people for that. Simple as.
There should be thousands or tens of thousands of people worldwide who can build the operating systems, virtual machines, libraries, containers, and applications that AI is built on. But the number will dwindle, and we'll ironically become unable to build what our ancestors did, utterly dependent on the AI artifacts to do it for us.
God I hope it doesn't all crash at once.
I do use Claude code at home maybe a couple hours a week, mostly for code base exploration. Still haven’t figured out how to fully vibe code: the generated code just annoys me and the agents are too chatty. (Insert old man shaking fist at cloud).
> I got to do my hobby as a career for the past 15 years, but that’s ending.
Frankly I don't think so. AI built on LLMs is the perpetual motion machine scam of our time. But it is cloaked in unimaginable complexity, and thus it is the perfect scam. Yet even the most elaborately hidden power source in a perpetual motion machine cannot fool nature, and it must eventually come to a complete stop as it runs out.
Managers at companies are just doing what they've optimized their careers for: maintaining some edge over some competition, at some cost. What is pure FOMO to you or me, is good strategy to anyone trying to win [1]. In other words, FOMO was always the strategy.
This self-reinforcing loop is also not going away. There hasn't been any real evidence that any part of knowledge work, including coding, cannot be automated [2]. Even if human-level quality or cost-effectiveness takes 10 more years, all tasks are functionally solved or about to be. I don't like it, but it's true.
The big problem is that the people who are removed from this loop, who have the time to understand its effects and the power to make changes, are doing fuck-all.
So, whether the loop stops for a while or speeds up even more, we're fucked until we figure out how to detach full-time employment from survival.
[1] I believe this is called the meta in PvP games; even if you want to subvert the meta, you gotta know it well first.
[2] Although it could just be my impression, and I'd be happy to be proven otherwise.
> Improving developer skills is not valuable to your company.
Yet every company does it, except the worst sweatshops.
More and more, the bar is being lowered. Don't succumb to brain rot. Don't quiet quit. Stay active and engaged, and you'll begin to stand out among your peers.
We're obviously in an era where "good enough" is taken so far that what used to be the middle of the imaginary line is no longer the midpoint but a new extreme. You're either someone who cares only that the output works or someone who cares how readable and easy to extend the code is.
I can only assume this is done on hopeful purpose: the hope that LLMs will "only keep improving linearly" to the point where readability and extensibility are not my problem but tomorrow's LLM's problem.
Turns out it sucks to produce original works when you know that, whereas previously a few people at best might see your work, now it’s a bunch of omniscient robots and maybe half of those original people are using the robots instead.
> The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm.
> (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?)
> The 1% utility AI has is overshadowed by the overwhelming mediocrity it regurgitates.
This sort of reasoning is why you might have been called extreme.
It's less extreme to say "many people see and/or get lots of benefit, but it's wrong to use the tool due to the harms it has".
There's nothing wrong with extreme, but since you asked.
> The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms.
Isn't this what the free software movement wanted? Code available to all?
Yes, code is cheap now. That's the new reality. Your value lies elsewhere.
You can lament the loss of your usefulness as a horse buggy mechanic, or you can adapt your knowledge and experience and use it towards those newfangled automobiles.
So while I do worry about AI's impact on blogging/writing/etc., I do think to some extent, you either love the process or you don't. If you only write in order to have readers, you're in the wrong game.
> First let’s accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature, privacy is a bug. Everything is an “algorithm” optimised to exploit.
Suppose that I have discovered a novel algorithm that solves an important basic problem much more efficiently than current techniques do. How do I hide it from the web scrapers that will steal it if I put it on GitHub or elsewhere? Should I just write it up as a paper and be content with citations and minor glory? Or should I capture AI search results today for "write me code that does X", put my new code up under a restrictive license, capture search results a day later, demonstrate that an AI scraper has acquired the algorithm in violation of the license, and seek damages?
Now I still show Clean Code videos from Bob and other old material to new hires and young colleagues.
Java got more features, granted, but the golden era of discovery is over.
The new big thing is AI, and I'm curious to see how it will feel to write real agents for my company-specific use cases.
But I'm also seeing people so bad at their daily jobs that I wish I could get their salary as tokens to use. It will change, and it is changing, our field.
Btw, regarding "Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value.": I disagree. Video generation has a massive impact on the industry for a lot of people; don't downplay this. NFTs, btw, never had any impact besides moving money from A to B.
I don't see any proof that software development is not dead. Software engineering is not, and it's much more than writing code, and it can be fun. But writing code is dead; there is no point in doing it if an LLM can output the same code 100x faster. Of course, architecture and operations stay in our hands (for now?).
Initially I was very sceptical; the first versions of ChatGPT or Claude were rather bad. I kept holding to the thought that it couldn't get good. Then I spent a few months evaluating them: if you know how to code, there is no point in coding anymore. Just instruct an LLM to do something, verify, merge, repeat. It's an editor of sorts, an editor where you enter a thought and get code as the output. Changes the whole scene.
Also, yes, I know the origin is Star Wars, but it went viral recently in a very specific way.
The power of edgelord memes.
For the article it was nice, but the font is really what got me.
The broader corporate world has never wanted code monkeys. They want "boring" reliability and pay a reasonable wage for it. On the other hand, they also won't tolerate contrarians who can't deliver, so maybe some of the fear from people posting this sort of thing really is justified.
> The only winning move is not to play.
Alas, I think the tech crowd has collectively painted humanity into a corner where not playing is no longer an option.
The combination of having subverted copyright and enabled cheap machine replication kills large swaths of creativity, at least as a viable living. One can still do many things on an artisanal level, certainly, but as excited as I am about AI, it's hard not to see it as a big L for humanity's creative output.
Therefore, things like writing, film, sales, etc. are less productively scalable by bots.
And things like code, where people don't care how the sausage is made as long as it "works", are more productively scalable by bots.
And even in the case of code, the job description leans more on defining what "works", which requires the human touch.
You don't have to give up on everything to participate, but it can be a space to go to if you're tired of every social interaction being mediated by (I'm being glib) hustlers
Was this how other professionals dealt with their grief? Like a translator at the advent of ML-based translation? Like a lift man?
You can try to avoid consuming AI-generated material, but of course part-way through a lot of things you may wonder whether it is partly AI-generated, and we don't yet have a credible "human-authored" stamp. But you can't really keep them from using your work to make cheap copies of you, or at least reducing your audience by including information or insights from your work in the chat sessions of people who otherwise might have read your work.
Imagine having 6 software engineering jobs, each paying maybe $150k a year, all being done by agents.
Hell, I might even do this secretly without their consent. If I can hold just 10 jobs for about 3 or 4 years, I can retire and leave the industry before it all comes crumbling down in 2030.
The problem of course, is securing that many jobs. But maybe agents can help with applying for jobs.
You seriously need to go outside and touch grass if you are so defeated by another chess-winning machine.
Nobody wants to watch AI play chess; nobody wants to read AI blog posts.
AI makes human writing more valuable, not less.
I will pay good money for purely human-made books, certified as made without a single auto-generated word, whether in the original or during the process of translation.
> One upside of this looming economic and intellectual depression is that the media is beginning to recognise gate keepers are no longer the hand that feeds them.
In what world is "the media" not an integral, tightly-bound part of the ratchet mechanism that seeks to suppress all distinction?
> It’s never been more important to blog. There has never been a better time to blog. I will tell you why. We’re being starved for human conversation and authentic voices
The supposedly starved don't seem to care much for such food. Blogs are kind of a wasteland.
Well, there’s not much of a point leaving a comment saying “yes, this, exactly this,” so I’ll leave one here on behalf of my fellow lurkers.
The more AI gets shoved down my throat, the less I’m inclined to use it for anything, and the more I’m inspired to write my own writing, make my own art, and create my own code — with great creative joy and burning anger. Enjoy your 1000x productivity gains and your inevitable burnout as you downskill to a glorified inference loop.
I'm really getting tired of the programming obituaries. As if LLMs didn't fail at complex tasks, as if they didn't vomit shit code and just copy the patterns surrounding the new code, and as if they didn't hallucinate and write downright wrong code or made-up libraries. Yet for some reason, every time you bring it up, someone will come along and say "You're not using it right, then." Is it that, or is it just that they're only doing toy projects? I'm led to believe the latter.
At this point I don't know what's organic and what's not. Reddit is filled with astroturfing for big LLM. Maybe this place is too? Even if that were not the case, I'm led to believe that it isn't uncommon for people to swallow all of the big LLM propaganda and fall into despair, or into unrealistic expectations, and just parrot it everywhere else. One thing is for certain: LLM evangelism has all the money in the world, and LLM denial doesn't. It's only natural to think the balance is tilted in terms of media presence.
Even at their best, or worst, LLMs can't do anything you couldn't do better yourself with a scaffolding prompt plus manual editing, and at the end of the day, you still need the cognitive energy to review, veto, or come up with the implementation. What does this actually do, anyway, other than save you a bunch of keypresses? I wonder whether the people touting it as all that really didn't think before LLMs, or just switched their brains off with them.
I used to really like this site, but I think that just consuming the RSS feed is enough for me. I think that lobste.rs has fewer "trend chasing" points of view these days, and I do wonder whether on here there's a larger share of non-technical people quick to call for the funeral of things.
Setting aside the self-delusion that makes a considerable number erroneously rate themselves above average: the reason you create blog posts should not be the attention you might gain; there simply are not the eyeballs. You create as a form of self-expression, to organise your thoughts, to create a record of them.
AI can never challenge in those areas because, as it has always been, the act of creation is the goal.
I think most people cannot distinguish between "genuine" creativity and an artificial amalgamation of training data and human-provided context. For one, I do not know what already exists. Some work created by AI may be an obvious rip-off of the style of a particular artist, but I wouldn't know. To me it might look awesome and fresh.
I think many of the more human-centric thinkers will be disappointed at how many people just won't care.
What AI represents to me is a teacher! I have so long lacked a music teacher and musical tools. I spent my entire career doing invisible software at the lowest levels, and now I can finally build cool tools that help me learn, practice, and enjoy playing music! Screw all the haters; if you're curious about a wide range of topics and already have some knowledge, you can gallivant across a vast space and learn a lot along the way.
AI is a bit of a bullshitter, but don't take its bullshit as truth, just as you should never take anything your teacher says as gospel. How do we know what's true? The truth of the universe and the world is that, underneath it all, it is self-consistent, and we keep making measurement errors. The AI is an enormous pot of magic that it's up to you to organize with... your own skills.
You have to actively resist deskilling by doing things. AI should challenge you and reward you, not make you passive.
Use AI to teach yourself by asking lots of questions and constantly testing the results against reality.
For me right now, that's the fretboard.
According to the author AI is 99% hype.
That 1% of AI utility can unlock more for humanity than 99.999% of blogs: static text hosted from a laptop in a closet.
The oddball position that cheap publishing via the web is a path to The Next Generation for humanity is 100% hype.
Other than feeding dopamine addiction, humanity has not improved greatly since we read all those insipid posts on GeoCities that no one remembers today.
It's all been 99%+ hype to feed Wall Street. Young GenXers and older Millennials with tech jobs were temporary political pawns and are gonna end up bag holders, like many older GenXers and Boomers who lived through the car boom, the housing boom, the retail boom.
Same old human shit, different hype.
Everyone wants to be a famous author, or at least a published/somewhat acknowledged one; few are willing to write their novel and be satisfied with zero or near-zero sales/readings.
But that is exactly what you need to do, especially in the age of AI. Everyone who was "in it to win it" (think LinkedIn slop, which existed before AI) is certainly going to use AI, because they do not give a shit about the quality of themselves as writers; they just want the result.
And you can only become a writer (unpublished, unread, or no) by doing the writing - it takes time (10,000 hours?) that cannot be replaced by AI, just like you can't have the body of a marathon runner without running (yes, yes, the joke). You may be able to get 26 miles and change away, even very fast, but unless you personally do the running of that distance without cheating, you will not get the inherent benefits.
And if you instruct an AI, or another human even, to write for you, you may get some of the results you want, but you won't have changed to become a writer.
We shouldn't celebrate the successful blogs; they're already rewarded enough. It's celebrating the unsuccessful blogs that is needed. Even if, frankly, the vast majority of them are sub-AI levels of crap, there is still a human changing and progressing behind them.
Babies fall over a lot but unless you take them out of the stroller and let them do so, they'll never progress to crawling, walking, running.