Write less code, be more responsible (blog.orhun.dev)

by orhunp_ 126 comments 169 points

[−] agentultra 31d ago
I’m working as a solo developer on a tiny video game. I’m writing it in C with raylib. No coding assistants, no agents, not even a language server.

I only work on it for a few hours during the week. And it’s progressing at a reasonable pace that I’m happy with. I got cross-compilation from Linux to Windows going early on in a couple of hours. Wasn’t that hard.
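
For anyone curious, a Linux-to-Windows cross-build along the lines the commenter describes typically amounts to swapping in the MinGW-w64 toolchain. A hedged sketch, not the commenter's actual setup: the file names, the `raylib-win` path, and the flag set are all assumptions, and raylib's Windows builds conventionally need `-lopengl32 -lgdi32 -lwinmm` at link time.

```make
# Hypothetical Makefile fragment: cross-compile a C + raylib game for
# Windows from Linux with mingw-w64. Paths are placeholders; raylib-win/
# is assumed to hold a Windows build of raylib (headers + libraylib.a).
CC      = x86_64-w64-mingw32-gcc
CFLAGS  = -Iraylib-win/include -O2
LDLIBS  = -Lraylib-win/lib -lraylib -lopengl32 -lgdi32 -lwinmm -static

game.exe: main.c
	$(CC) $(CFLAGS) main.c -o $@ $(LDLIBS)
```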

I’ve had to rework parts of the code as I’ve progressed. I’ve had to live with decisions I made early on. It’s code. It’s fine.

I don’t really understand the “more, better, faster” cachet, to be honest. Writing the code hasn’t been the bottleneck to developing software for a long time. It’s usually the thinking that takes most of the time, and if that goes away, well… I dunno, that’s weird. I will understand it even less.

Agree with writing less code though. The economics of throwing out 37k lines of code a week is… stupid in the extreme. If we got paid by the line, we could’ve optimized for this long before LLMs were invented. It’s not like more lines of code means more inventory to sell. It’s usually the opposite: more bugs to fix, more frustrated customers, higher churn of exhausted developers.

[−] 20k 31d ago

> I don’t really understand the “more, better, faster” cachet, to be honest. Writing the code hasn’t been the bottleneck to developing software for a long time. It’s usually the thinking that takes most of the time, and if that goes away, well… I dunno, that’s weird. I will understand it even less.

This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try to skip that step, which leads to an inevitable disaster.

[−] koolba 31d ago

> The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try to skip that step, which leads to an inevitable disaster.

It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Yes, you could look them up or maybe even memorize them. But there’s no way you can make wholesale changes to a layout faster than a machine.

It lowers the cost for experimentation. A whole series of “what if this was…” can be answered with an implementation in minutes. Not a whole afternoon on one idea that you feel a sunk cost to keep.

[−] lelanthran 31d ago

> It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

[−] groby_b 31d ago
That's a bold assertion without any proof.

It also means you're so helpless as a developer that you could never debug another person's code, because how would you recognize the errors when you haven't made them yourself?

[−] mrklol 31d ago
Imo a question is: do you still need to understand the codebase? What if that process changes and the language you’re reading is a natural one instead of code?
[−] lelanthran 31d ago

> What if that process changes and the language you’re reading is a natural one instead of code?

Okay, when that happens, then sure, you don't need to understand the codebase.

I have not seen any evidence that that is currently the case, so my observation that "Continue letting the LLM write your code for you, and soon you won't be able to spot errors in its output" is still applicable today.

When the situation changes, then we can ask if it is really that important to understand the code. Until that happens, you still need to understand the code.

[−] kjksf 31d ago
The same logic applies to your statement:

> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

Okay, when that happens, then sure, you'll have a problem.

I have not seen any evidence that that is currently the case i.e. I have no problems correcting LLM output when needed.

When the situation changes, then we can talk about pulling back on LLM usage.

And the crucial point is: me.

I'm not saying that everyone who uses an LLM to generate code won't fall into "not being able to correct LLM-generated code".

I now generate 90% of the code with an LLM and I see no issues so far. Just implementing features faster. Fixing bugs faster.

[−] lelanthran 30d ago

> The same logic applies to your statement:

>> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

> Okay, when that happens, then sure, you'll have a problem.

It's not exactly the same: how will you know that you are missing errors due to lack of knowledge?

> I now generate 90% of the code with an LLM and I see no issues so far.

Well, that's my point, innit? "I see no errors" is exactly the same outcome as "missing the errors that are generated".

[−] pdimitar 31d ago
You do have a point but as the sibling comment pointed out, the negative eventuality you are describing also has not happened for many devs.

I quite enjoy being much more of an architect than I could be during 90% of my career so far (24 years in total). I have coded my fingers and eyes out, and I spot idiocies in LLM output, from trivially easy ones to ones needing an hour of careful review.

So, I don't see the "soon" in your statement happening, ahem, anytime soon for me, and for many others.

[−] lelanthran 30d ago

> I have coded my fingers and eyes out, and I spot idiocies in LLM output, from trivially easy ones to ones needing an hour of careful review.

This is exactly the opposite of the sibling's experience: they report not seeing any issues in the generated code.

You report spotting idiocies, he reports seeing nothing, and you are both making the same argument :-/

[−] cassianoleal 31d ago
What happens when your LLM of choice goes into an infinite loop failing to solve a problem?

What happens when your LLM provider goes down during an incident?

What happens when you have an incident on a distributed system so complex that no LLM can maintain a good enough understanding of the system as a whole in a single session to spot the problem?

What happens when the LLM providers stop offering loss leader subscriptions?

[−] the_af 31d ago

> What if that process changes and the language you’re reading is a natural one instead of code?

Natural language is not a good way to specify computer systems. This is a lesson we seem doomed to forget again and again. It's the curse of our profession: nobody wants to learn anything if it gets in the way of the latest fad. There's already a historical problem in software engineering: the people asking for stuff use plain language, and converting that into a formal spec takes time and is error-prone. But it seems we are introducing a whole new layer of lossy interpretation to the whole mess, and we're doing this happily and open-eyed because fuck the lessons of software engineering.

I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.

[−] sublinear 31d ago
This is not correct. CSS is the style rules for all rendering situations of that HTML, not just your single requirement that it "looks about right" in your narrow set of test cases.

Nobody writing production CSS for a serious web page can avoid rewriting it. Nobody is memorizing anything. It's deeply intertwined with the requirements as they change. You will eventually be forced to review every line of it carefully as each new test is added or when the HTML is changed. No AI is doing that level of testing or has the training data to provide those answers.

It sounds like you're better off not using a web page at all if this bothers you. This isn't a deficiency of CSS. It's the main feature. It's designed to provide tools that can cover all cases.

If you only have one rendering case, you want an image. If you want to skip the code, you can just not write code. Create a mockup of images and hand it off to your web devs.

[−] slopinthebag 31d ago
Eh, I've written so much CSS and I hate it so much that I use AI to write it now, not because it's faster or better at it, but just so I don't need to do it.
[−] CivBase 31d ago

> But there’s no way you can make wholesale changes to a layout faster than a machine.

You lost me here. I can make changes very quickly once I understand both the problem and the solution I want to go with. Modifying text is quite easy. I spend very little time doing it as a developer.

[−] EagnaIonat 31d ago

> It lowers the cost for experimentation. A whole series of “what if this was…”

Anecdotal, but I've noticed that while this is true, it also adds the danger of not knowing when to stop.

Early on I would take forever trying to get something to match exactly what's in my head, which meant I would spend more time in one sitting than if I had built it by hand.

Now I try to time box with the mindset "good enough".

[−] cess11 31d ago
"This is what I've always found confusing as well about this push for AI."

I think it's a few things converging. One is that software developers have become more expensive for US corporations for several reasons, and blaming layoffs on a third party is, for some reason, more palatable to a lot of people.

Another is that a lot of decision makers are pretty mediocre thinkers and know very little about the people they rule over, so they actually believe that machines will be able to automate what software developers do rather than what these decision makers do.

Then there's the ever-present allure of the promise that middle managers will somehow wrestle control over software crafts from the nerds, i.e. what has underpinned low-code business solutions for ages and always, always comes with very expensive consultants, commonly software developers, on the side.

[−] rwmj 31d ago
Don't worry. In a few years we'll be like the COBOL programmers who still understand how things work: our brains won't have atrophied, and we'll make good money fixing the giant messes created by others.
[−] Applejinx 31d ago
Sounds awful. I'm not interested in fixing giant messes. I'll just be tinkering away making little things (at scale) where the scope is very constrained and the fixing isn't needed.

People can do their vibecoding to make weird rehackings of stuff I did, almost always to make it more mainstream, limited, and boring, and usually to some mainstream acclaim. And they can flame out, not my problem.

I'm not fixing anybody's giant mess. I'm doing the equivalent of simply refusing to give up COBOL. To stop me, people will have to EOL a huge amount of working useful stuff for no good reason and replace it with untrustworthy garbage.

I am aware this is exactly the plan on so many levels. Bring it. I don't think it's going to be popular, or rather: I think only at this historical moment can you get away with that and not immediately be called on it, as a charlatan.

When our grandest celebrity charlatans go in the bin, the time for vibecoding will truly be over.

[−] rvz 31d ago

> This is what I've always found confusing as well about this push for AI.

They want you to pay for their tokens at their casino and rack up a 5-6 figure bill.

[−] vbezhenar 31d ago
AI doesn't just type code for you. It can assist with almost every part of software development: design, bug hunting, code review, prototyping, testing.
[−] tonyedgecombe 31d ago
It can even create a giant ball of mud ten times faster than you can.
[−] NilMostChill 31d ago
A Luddite farm worker can assist in all those things, the question is, can it assist in a useful manner?
[−] kjksf 31d ago
Not only can it, but it does.

Just as I was reading this, Claude implemented drag & drop of images out of SumatraPDF.

I asked:

> implement dragging out images; if we initiate drag action and the element under cursor is an image, allow dragging out the image and dropping on other applications

then it didn't quite work, so I followed up:

> I'm testing it by trying to drop on a web application that accepts images dropped from the file system, but it doesn't work for that

Here's the result: https://github.com/sumatrapdfreader/sumatrapdf/commit/58d9a4...

It took me less than 15 mins, with testing.

Now you tell me:

1. Can a farm worker do that?

2. Can you improve this code in a meaningful way? If you were doing a code review, what would you ask to be changed?

3. How long would it take you to type this code?

Here's what I think: No. No. Much longer.

[−] svieira 31d ago
Why is it using a temp file? Is there really no more elegant way to pass around pointers to images than spilling to disk?
[−] 16bitvoid 31d ago
Of course there is, but slop generators be slopping
[−] kjksf 31d ago
What is it then, o wise person stingy with the information?
[−] 16bitvoid 31d ago
I admire you for what you've created wrt Sumatra. It's an excellent piece of software. But, as a matter of principle, I refuse to knowingly contribute to codebases using AI to generate code, including drive-by hints, suggestions, etc.

You, or rather Claude, are not the first to solve this problem and there are examples of better solutions out there. Since you're willing to let Claude regurgitate other people's work, feel free to look it up yourself or have Claude do it for you.

[−] Chaosvex 31d ago
The code is really bad, so I'd have a lot to say about it in a review. Couldn't do it in 15 minutes, though.
[−] NilMostChill 28d ago
1. I mean, yes? The average farm worker is probably capable of writing a sentence similar to the one you just did and sticking it in a prompt.

Unless you mean without LLM assistance, then no.

2. I've no idea; I haven't touched C++ in an age. If I got back up to speed, then possibly.

3. To learn how to program in C++ again, figure out best practices, and then write the code? A while, probably.

But then I'd have to do that anyway to be able to spot any problems in the code and know what to test,

because I'm for sure not putting code out there that I don't understand, especially when the code has been generated by a non-deterministic system prone to subtle hallucinations.

I'm not saying LLMs have no uses; they do some things fine. But inflating the capabilities of a tool because of hype isn't a viable mid- to long-term strategy.

LLMs are poor (but improving in some ways) at consistent multiple-boundary complexity.

My issue wasn't with the statement itself, just that it was very broad, hence my reply.

LLMs can potentially assist with all of those steps, if you use them for the things they are suited to and have a plan for maintaining quality and consistency. If that plan is just "let the LLMs review and test it for me", I'd consider it professional negligence given the current SOTA.

My point was that the assistance should be subject to an accurate cost/benefit analysis before implying it is worthwhile.

[−] the_overseer 28d ago
Just uninstalled Sumatra. Jesus that code is garbage.
[−] locknitpicker 31d ago

> This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - its understanding what's going on, and why you're doing it.

This is a very superficial and simplistic analysis of the whole domain. Programmers don't "type". They apply changes to the code. Pressing buttons on a keyboard is not the bottleneck. If it were, code completion and templating would have been revolutionary, world-changing developments in the field.

The difficult part is understanding what to do and how to do it, and why. It turns out LLMs can handle all these types of tasks. You are onboarding onto a new project? Hit an LLM assistant with /explain. You want to implement a feature that matches a specific requirement? You hit your LLM assistant with /plan followed by apply. You want to cover some code with tests? You hit your LLM assistant with /tests.

In the end you review the result, and do with it whatever you want. Some even feel confident enough to YOLO the output of the LLM.

So while you still try to navigate through files, others already have features out.

[−] prox 31d ago
It always seemed to me like it's lootbox behavior. Highly addictive because of the dopamine hit you get.
[−] brianwmunz 31d ago
Honestly, I think you can tell pretty quickly whether a company or person is approaching AI from the viewpoint of accelerating development and innovation or just looking to do the same amount of work with fewer people. The space has been flooded by mean-spirited people who love the idea of developers becoming obsolete, a viewpoint that isn't working out for a lot of companies right now; many are already scrambling to rehire. Approaching the situation practically, integrating AI as a tool and accelerator, is the much smarter way, and if done right it will pay for itself anyway.
[−] hnthrowaway0315 31d ago
“more, better, faster,”

I have heard these words, almost verbatim, from manager-yes-men coming from a FAANG background, and surprisingly concentrated in a certain demographic (if someone finds this offensive, I'll remove this part).

My CTO wants us to "deliver as fast as possible", and my VP wants us to "go much faster, and more ownership". "Better" or anything related to quality was definitely mentioned, too, but always in second place.

To this day, I consider these yes-men a major red flag, so I always try to probe for such information during interviews.

[−] YesBox 31d ago
I've been developing a moderately popular (for an indie) game for over 4 years at this point (full time). C++, SFML, SQLite. Same as you: no coding assistants, no agents, etc. I also don't use git. [1]

One of the largest speedups comes from how much of the codebase I can keep in my head. Because I started from an empty C++ file, the engine reflects how I reason and organize concepts (lossless compression). Thus most of the codebase is in my brain's RAM.

I don't see how LLM agents are going to improve my productivity in the long run. The less a person understands their code (organized logic), the more abstracted the conversation is going to become when directing an agent. The higher up the abstraction ladder you go, the less distinct your product becomes.

[1] And very, very rarely have I wished I had it, even for a moment. Not using git simplifies abstracted parts of development: no branches, no ballooning of conceptual tangents, etc. Focus on one thing at a time. Daily backups and a log of what I worked on each day suffice should I need to revisit/remember earlier changes. I've never been in a situation where a change I made over a week ago interfered with today's work.
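
For illustration, the snapshot-plus-log routine described in the footnote could look something like this sketch. The paths, the zip format, and the worklog layout are my assumptions, not the commenter's actual setup:

```python
# Hypothetical sketch of a git-less daily backup: archive the source
# tree into a dated zip and append a one-line note to a work log.
import datetime
import pathlib
import shutil

def daily_backup(src_dir: str, backup_root: str, note: str) -> str:
    """Archive src_dir into backup_root/YYYY-MM-DD.zip and log the note."""
    root = pathlib.Path(backup_root)
    root.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    # make_archive appends ".zip" and returns the archive's full path
    archive = shutil.make_archive(str(root / stamp), "zip", src_dir)
    with open(root / "worklog.txt", "a") as log:
        log.write(f"{stamp}  {note}\n")
    return archive
```

Run once a day (cron, or by hand at the end of a session), this gives exactly the affordances the footnote relies on: a dated snapshot to diff against and a log to jog the memory.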

[−] colechristensen 31d ago

>Writing the code hasn’t been the bottle neck to developing software for a long time.

Then we're doing different things.

I didn't like GitHub so I wrote my own. 60k lines of code later… yes, writing code was the bottleneck, and it has been eliminated. The bottleneck is now design, review, and quality assessments that can't be done trivially.

This isn't even the project I wanted to be doing; the tools available were holding me back, so I wrote my own. It also consumes a few hours a week.

If you think writing code isn't the bottleneck then you aren't thinking big enough. If you don't WANT to think big enough, that's fine, I also do things for the joy of doing them.

[−] pdimitar 31d ago
This:

> I don’t really understand the, “more, better, faster,” cachet to be honest

And this:

> I’m working as a single solo developer

...I believe explain it all here. You likely are not beholden to PMs, CEOs and the like. Of course you can go at your own pace. I am actually puzzled that you don't understand that aspect yourself.

> The economics of throwing out 37k lines of code a week is… stupid in the extreme

Again, bosses. CEOs have 14 calls a week with potential prospects and sometimes want demos, sometimes they sign quickly and want a prototype, and sometimes they arrange a collab with a friend or family. Then 3 weeks later the whole thing falls apart and you have to throw it away because it's getting in the way of delivering what actually still pays the bills.

I am not the CEO. I try making his visions come true. I don't get to make the calls on whether 37k of lines will be quickly churned out and then deleted some weeks later.

I think your comment is overly focused on the coding/programming aspect of things. We don't exist in a vacuum. May I ask how you make your living? That might shed extra light on your trouble understanding the inevitable churn when writing code for money.

---

All of this doesn't even mention the fact that I 100% agree that fewer lines of code == less trouble. Code is generally a liability; I believe every mature dev understands that. But often we are not given a choice, so we have to produce more code and periodically compress / re-architect it (while never making the mistake of asking to be given time to do so, because we never will be).

[−] quacker 30d ago
> Writing the code hasn’t been the bottleneck to developing software for a long time.

Code may not be the bottleneck, but writing it absolutely does consume time.

Especially with solo game dev, I can prototype ideas, try them out, and then refine or scrap them at a rate I could never do without AI. This type of experimentation is a perfect use-case for AI. It’s actually super fun, and if I pay attention and give the AI decent instructions, I don’t really lose out on code quality.

[−] h4kunamata 30d ago

> I don’t really understand the “more, better, faster” cachet, to be honest

Numbers, it is all about numbers.

I have worked in a company where one release a week would drive the managers insane; to them, we should have had one release every minute.

Things break all the time because, in order to deploy faster and faster, corners were cut, and now one of the most visited online pet stores in the country is down :)

[−] dpark 31d ago

> Writing the code hasn’t been the bottleneck to developing software for a long time. It’s usually the thinking that takes most of the time

Does your coding not involve thinking? And if not, why are you not delighted to have AI take that over? Writing unthinking boilerplate is tedious garbage work.

Today I wanted to address a bug I found on a product I work on. At the intersection of platform migration and backwards compatibility I found some customers getting neither. I used an LLM to research the code paths and ensure that my understanding of the break was correct and what the potential side effects of my proposed fix would be. AI saved me stepping through code for hours to understand the side effects. I asked it for a nice description of the flow and it gave it to me, including the pieces I didn’t really know because I’d never even touched that code before. I could have done this. Would it have been a better use of my time than moving on to the next thing? Probably not. Stepping through function calls in an IDE is not my idea of good “thinking” work. Tracing through glue to understand how a magical property gets injected is a great job for a machine.

[−] anitil 30d ago
I had a look at your github and blog but couldn't find the game, is it public? Or do I need to watch your streams to see it?
[−] ignoramous 31d ago

> Writing the code hasn’t been the bottleneck to developing software for a long time.

For whom? There's no lack of professional programmers who couldn't clear FizzBuzz now coding up company-sized systems using agents. This is all good as long as the agents can stick to the spec/req and code it all up with decent enough abstractions… as the professional approving it is in no position to clue them in on code organization or bugs or edge cases. I think we (as a society) are looking at something akin to a "reproducibility crisis" (software full of Heisenbugs) as such "vibe coded" systems get widely sold and deployed, 'cause the "pros" who excel at this are also good at… selling.

[−] osm3000 31d ago
I am a machine learning engineer. I've been in the domain almost 12 years now (different titles and roles).

In my current role (and by no means that is unique), I don't know how to write less code.

Here are the problems I am facing:

- DS generating a lot of code
- Managers who have therapy sessions with Gemini, in which their ideas get validated
- No governance on DS (you want this package? import it)
- No governance on infrastructure (I spent a couple of months upskilling in a pipeline technology we were using, reading documentation and creating examples until I became very good at it, just for the whole tech to be ditched)
- Libraries and tools with poor documentation, or that are too complex (GCP for example)

The cognitive overload is immense.

A few years ago, when I was doing my PhD, immersing myself in the PyTorch and SciPy stack had a huge return on investment. Now, I don't feel it.

So, how do I even write less code? Slowly, I am succumbing to the fact that my tools and methods are inappropriate. I am steadily shifting towards offloading this to Claude and its like.

Is it introducing risks? For sure. It's going to be a disaster at some point. But I don't know what else to do. Do I need a better abstraction? A different way to think about it? No clue.

[−] bob1029 31d ago

> Nowadays many people are pushing AI-assisted code, some of them in a responsible way, some of them not. So... what do we do?

You hold them accountable.

Once upon a time we used to fire people from their jobs for doing things poorly. Perhaps we could return to something approximating this model.

[−] gbro3n 31d ago
My current take is that AI is helping me experiment much faster. I can get less involved with the parts of an application that matter less and focus more (manually) on the parts that do. I agree with a lot of the sentiment here: even with the best intentions of reviewing every line of AI code, when it works well and I'm working fast on low-stakes functionality, that sometimes doesn't happen. This can be offset, however, by using AI efficiencies to maintain better test coverage than I would by hand (unit and e2e), having documentation updated with assistance, and having diagrams maintained to help me review. There are still some annoyances, when the AI struggles with seemingly simple issues, but I think we all have to admit that programming was difficult, and quality issues existed, before AI.
[−] voidUpdate 31d ago
I'm not entirely sure I can trust the opinions of someone on LLMs when their blog is sponsored by an AI company. Am I not simply seeing the opinions that the AI company is paying for?
[−] philipwhiuk 31d ago

> It's something ethical that I don't know the answer to. In my case, it was the guy's first ever open source project and he understandably went for the quickest way of creating an app. While I appreciate their contribution to open source, they should be responsible for the quality of what they put out there.

Pitching this is the exact opposite of the maintainer burden of expectation.

> Sometimes I discover a project that is truly wonderful but visibly vibe-coded. I start using it without any guarantee that the next release won't run rm -rf and wipe my system.

For me this is on you, not the developer.

[−] Witty0Gore 31d ago
I think that creators being responsible for what they ship generally applies across the board. That doesn't change because AI has its fingers in it.
[−] chillaranand 31d ago
For various internal tools & other projects, I started using config only tools and avoid code as much as possible.

https://avilpage.com/2026/03/config-first-tools.html

[−] globnomulous 30d ago
I'm quite surprised by the negativity of the comments in this thread, especially contrasted with the positivity and enthusiasm I see in other threads. I'm an AI pessimist. I don't like it. I have resisted it. You'll find plenty of Rage against the Machine comments in my account history on Hacker News. The AI optimists drive me up the wall.

And I can tell all of the nay-sayers in this thread, from first-hand experience, that the AI tools can be useful. When you use them well, they can save time. If you're writing just a dinky webapp for your "radio on the internet" startup, it can do a lot of grunt work. It's better auto completion, at a minimum.

Last week I was struggling with an annoying interlocking race-condition/stale-state bug. Fixing one issue kept reintroducing others that I'd just fixed. Skill issue, right? Right. And Claude 4.6 Opus diagnosed the problem and fixed it with just a little bit of coaxing.

Then I asked it to fix another issue and it wound up chasing its tail, as it tried to apply the same principle to unrelated code with unrelated problems.

Call these tools stochastic parrots. Call them autocorrect on steroids. Call them whatever you want. If you think they're worthless or have no use, you're living either in a fantasy land or in 2022, just after OpenAI released its first, hilariously stupid chatbot.

[−] travelalberta 31d ago
Code Complete came out in '93, and even then it acknowledged that most of the work around development wasn't actually programming but architecture, requirements, and design.

Sure you can let Claude have a field day and churn out whatever you want but the question is: a) Did you read the diffs and provide the necessary oversight to make sure it actually does what you want properly, b) Is the feature actually useful?

If you've worked on legacy systems, you know there's so much garbage floating around that the bar for code generally isn't that high as long as it seems to work. If you thoroughly read the code and documentation Claude produces and aren't blindly accepting every commit, there is not really a problem, as long as you are responsible and can put your stamp of approval on it. If you are pushing garbage through, it doesn't matter whether a junior dev, yourself, or Claude wrote it; the problem isn't the code but your CI/CD process.

I think the problem is expectations. I know some devs at 'AI-native' organizations that have Claude do a lot for them. Which is fine; for a lot of boilerplate or standard requests they can now ship 2X the code. The problem is the expectation is now that they ship 2X the code. I think if you leave timelines relatively the same as pre-AI, then having an agent generate, document, refactor, test, and evaluate code with you can lead to a better product.

[−] qudat 31d ago
A similar post with more emphasis on validating changes: https://bower.sh/thinking-slow-writing-fast
[−] Radle 31d ago
My repos for personal projects are split in two. One side contains code of better quality than I could write myself. The other side is throwaway vibe-coded shit that works somehow.
[−] 0xnadr 31d ago
This resonates. Smaller codebases are easier to audit, easier to maintain, and usually faster. The best code is the code you don't write.
[−] nour833 31d ago
Yeah, many newbies think that all AI-generated code is safe, while it can poison the next-gen AI by ending up as wrong training data.
[−] andai 31d ago
After experimenting with various approaches, I arrived at Power Coding (like Power Armor). This requires:

- small codebases (whole thing is injected into context)

- small, fast models (so it's realtime)

- a custom harness ('cause everything I tried sucks: it takes 10 seconds to load half my program into context instead of just doing it at startup lmao)

The result is interactive, realtime, doesn't break flow (no waiting for "AI compile", small models are very fast now), and most importantly: active, not passive.

I make many small changes. The changes are small, so small models can handle them. The changes are small, so my brain can handle them. I describe what I want, so I am driving. The mental model stays synced continuously.
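
To make the shape of such a harness concrete, here is a rough sketch in Python. This is my own guess at the described setup, not the commenter's actual code; the file extensions, the size cap, and the prompt layout are all assumptions:

```python
# Hypothetical "whole codebase in context" harness: at startup, inline
# every source file into one prompt prefix, so each small change request
# only appends an instruction on top of the already-loaded context.
import pathlib

MAX_BYTES = 64_000  # assumed "small codebase" budget for the model's context

def build_context(root: str, exts=(".c", ".h")) -> str:
    """Concatenate every matching file under root, tagged with its path."""
    parts = []
    total = 0
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            text = path.read_text()
            total += len(text)
            if total > MAX_BYTES:
                raise ValueError("codebase too large for whole-context mode")
            parts.append(f"// FILE: {path}\n{text}")
    return "\n\n".join(parts)

def make_prompt(context: str, instruction: str) -> str:
    """One small change request against the full, pre-loaded codebase."""
    return f"{context}\n\n// TASK: {instruction}\n"
```

Because the context is built once at startup and only the short `// TASK:` line changes per request, a small, fast model can respond in near real time, which is the "no waiting for AI compile" property the comment describes.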

Life is good.

[−] stratts 31d ago
It was always possible to write large amounts of crappy code if you were motivated or clueless enough (see https://github.com/radian-software/TerrariaClone). It's now just easier, and the consequences less severe, as the agent has code comprehension superpowers and will happily extend your mud ball of a codebase.

There are still consequences, however. Even with an agent, development slows, cost increases, bugs emerge at a higher rate, etc. It's still beneficial to focus on code quality instead of raw output. I don't think this is limited to writing it yourself, mind, but you need to actually have an understanding of what's being generated so you can critique and improve it.

Personally, I've found the accessibility aspect to be the most beneficial. I'm not always writing more code, but I can do much more of it on my phone, just prompting the agent, which has been so freeing. I don't feel this is talked about enough!

[−] shevy-java 31d ago

> So you are saying that the quality of the projects is going down?

The website seems, at the least, to be semi-generated via AI. But I think the statement that the quality of many projects went downwards is true.

I am not saying all projects became worse per se, but if you, say, search for some project these days, often you land on a GitHub page only, or primarily. How is the documentation there? Usually there is a README.md, and some projects have useful documentation. But in most cases that I found, open source projects really have incredibly poor documentation for the most part. Documentation is not code, so the code could be great, but I am increasingly noticing that even as the code gets better, the documentation just gets worse: rarely updated, if at all. Even when you file requests for specific improvements, often there is no response or change, probably because the author just lacks the time.

But I am also seeing that the code also gets worse. AI generated slop is often unreadable and unmaintainable. I have even recently seen AI spam slop used on mailing lists - look here:

https://lists.ffmpeg.org/archives/list/ffmpeg-devel@ffmpeg.o...

Michael Niedermayer does not seem to understand why AI slop is a problem; one comment reveals that. I don't read mailing lists myself really (I never was able to keep up with the traffic), but I would be pissed to no end if AI spam like that landed in my mailbox and wasted my time. Yet the people who send AI spam don't seem to understand why that is a problem. This is interesting: they suddenly think spam is ok if AI generated it. So the overall trend is that quality goes down more and more. Not in all projects, but in many of them.

[−] AlexSalikov 31d ago
Good framing. I’d add that “be responsible” extends well beyond code quality - it’s about product responsibility.

AI making code cheaper to produce doesn’t make the decisions around it any cheaper. What to build, for whom, and why — that’s still fully on you. It should free up more time for strategy, user understanding, and saying “no” to things that shouldn’t exist regardless of how easy they are to ship.

The maintainability concern Orhun raises is real, but I think the root cause isn’t AI — it’s ownership. If you don’t understand what was built, you can’t evolve it. It’s the same failure mode as a PM who doesn’t grasp the technical implementation — they end up proposing expensive features that fight the architecture instead of working with it. Eventually, someone has to pay for that disconnect, and it’s usually the team.
