Eight years of wanting, three months of building with AI (lalitm.com)

by brilee | 959 points | 301 comments

[−] Aurornis 40d ago
Refreshing to see an honest and balanced take on AI coding. This is what real AI-assisted coding looks like once you get past the initial wow factor of having the AI write code that executes and does what you asked.

This experience is familiar to every serious software engineer who has used AI code gen and then reviewed the output:

> But when I reviewed the codebase in detail in late January, the downside was obvious: the codebase was complete spaghetti. I didn’t understand large parts of the Python source extraction pipeline, functions were scattered in random files without a clear shape, and a few files had grown to several thousand lines. It was extremely fragile; it solved the immediate problem but it was never going to cope with my larger vision…

Some people never get to the part where they review the code. They go straight to their LinkedIn or blog and start writing (or having ChatGPT write) posts about how manual coding is dead and they’re done writing code by hand forever.

Some people review the code and declare it unusable garbage, then also go to their social media and post how AI coding is completely useless and they’re not going to use it for anything.

This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now: A realization that AI coding tools can be a large accelerator but you need to learn how to use them correctly in your workflow and you need to remain involved in the code. It’s not as clickbaity as the extreme takes that get posted all the time. It’s a little disappointing to read the part where they said hard work was still required. It is a realistic and balanced take on the state of AI coding, though.

[−] yojo 40d ago
+1

I’ve been driving Claude as my primary coding interface the last three months at my job. Other than a different domain, I feel like I could have written this exact article.

The project I’m on started as a vibe-coded prototype that quickly got promoted to a production service we sell.

I’ve had to build the mental model after the fact, while refactoring and ripping out large chunks of nonsense or dead code.

But the product wouldn’t exist without that quick and dirty prototype, and I can use Claude as a goddamned chainsaw to clean up.

On Friday, I finally added a type checker pre-commit hook and fixed the 90 existing errors (properly, no type ignores) in ~2 hours. I tried full-agentic first, and it failed miserably. Then I went through error by error with Claude, we tightened up some existing types, fixed some clunky abstractions, and got a nice, clean result.
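
For reference, a hook like the one described can be wired up with the pre-commit framework. A minimal sketch of `.pre-commit-config.yaml` (the mirrors-mypy repo is real; the `rev` pin and `--strict` choice are illustrative, not the commenter's actual config):

```yaml
repos:
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.14.1          # pin to whatever mypy release you actually use
    hooks:
      - id: mypy
        args: [--strict]  # fail the commit on any type error; no ignores
```

After `pre-commit install`, the hook runs mypy against the staged files on every commit, so new type errors can't land.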

AI-assisted coding is amazing, but IMO for production code there’s no substitute for human review and guidance.

[−] libraryofbabel 40d ago
Agree. This is such a good balanced article. The only things that still make the insights difficult to apply to professional software development are: this was greenfield work and it was a solo project. But that’s hardly the author’s fault. It would however be fantastic to see more articles like this about how to go all in on AI tools for brownfield projects involving more than one person.

One thing I will add: I actually don’t think it’s wrong to start out building a vibe coded spaghetti mess for a project like this… provided you see it as a prototype you’re going to learn from and then throw away. A throwaway prototype is immensely useful because it helps you figure out what you want to build in the first place, before you step down a level and focus on closely guiding the agent to actually build it.

The author’s mistake was that he thought the horrible prototype would evolve into the real thing. Of course it could not. But I suspect that the author’s final results when he did start afresh and build with closer attention to architecture were much better because he has learned more about the requirements for what he wanted to build from that first attempt.

[−] csallen 40d ago
I'll take the other side of this.

Professional software engineers like many of us have a big blind spot when it comes to AI coding, and that's a fixation on code quality.

It makes sense to focus on code quality. We're not wrong. After all, we've spent our entire careers in the code. Bad code quality slows us down and makes things slow/insecure/unreliable/etc for end users.

However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

There are two forces contributing to this: (1) more people coding smaller apps, and (2) improvements in coding models and agentic tools.

We are increasingly moving toward a world where people who aren't sophisticated programmers are "building" their own apps with a user base of just one person. In many cases, these apps are simple and effective and come without the bloat that larger software suites have subjected users to for years. The code is simple, and even when it's not, nobody will ever have to maintain it, so it doesn't matter. Some apps will be unreliable, some will get hacked, some will be slow and inefficient, and it won't matter. This trend will continue to grow.

At the same time, technology is improving, and the AI is increasingly good at designing and architecting software. We are in the very earliest months of AI actually being somewhat competent at this. It's unlikely that it will plateau and stop improving. And even when it finally does, if such a point comes, there will still be many years of improvements in tooling, as humanity's ability to make effective use of a technology always lags far behind the invention of the technology itself.

So I'm right there with you in being annoyed by all the hype and exaggerated claims. But the "truth" about AI-assisted coding is changing every year, every quarter, every month. It's only trending in one direction. And it isn't going to stop.

[−] hbarka 40d ago

> Some people never get to the part where they review the code. They go straight to their LinkedIn or blog and start writing (or having ChatGPT write) posts about how manual coding is dead and they’re done writing code by hand forever. Some people review the code and declare it unusable garbage, then also go to their social media and post how AI coding is completely useless and they’re not going to use it for anything. This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now.

What’s really happening is that you’re all of those people in the beginning. Those people are you as you go through the experience. You’re excited after seeing it do the impossible and in later instances you’re critical of the imperfections. It’s like the stages of grief, a sort of Kübler-Ross model for AI.

[−] atomicnumber3 40d ago
I'm deeply convinced there are two reasons we don't see real takes like this:

1) These people are quietly appreciating the 2-50% uplift you get from sanely using LLMs, instead of constantly posting sycophantic or doomer shit for clout and/or VC financing.

2) The real version of LLM coding is boring and unsexy. It either involves generating slop in one shot to POC, then restarting from scratch for the real thing (or doing extensive remediation costing far more than the initial vibe effort); or it involves generally doing the same thing we've been doing since the assembler was created, except now I don't need to remember off-hand how to rig up boilerplate for a table test harness in ${current_language}. Or, if I wrote a snippet with string ops and if statements and wish it were using regexes and named capture groups, it's now easy to mostly-accurately convert it to the other form instead of just sighing and moving on.

But that's boring nerd shit and LLMs didn't change who thinks boring nerd shit is boring or cool.

[−] te_chris 40d ago
Without wanting to sound rude: I think the mistake people make with AI prototypes is keeping the code at all.

The AIs are more than capable of producing a mountain of docs from which to rebuild, sanely. They’re really not that capable - without a lot of human pain - of making a shit codebase good.

[−] bmitc 39d ago

> This blog post shows the journey that anyone not in one of those two vocal minorities is going through right now: A realization that AI coding tools can be a large accelerator but you need to learn how to use them correctly in your workflow and you need to remain involved in the code. It’s not as clickbaity as the extreme takes that get posted all the time. It’s a little disappointing to read the part where they said hard work was still required. It is a realistic and balanced take on the state of AI coding, though.

I appreciate the balanced takes and also the notion that one can use these AI tools to build software with principled use.

However, what I am still failing to see is concrete evidence that this is all faster and cheaper than a human just learning and doing everything themselves, or with a small team. The cat is out of the bag, so to speak, but I think it's still correct to question these things. I am putting in a _lot_ of work to reach a principled status quo with these tools, and it is still quite unclear whether that's an actual improvement or just a side quest to wrangle tools that everyone else is abusing.

[−] zahlman 40d ago
I feel like recently HN has been seeing more takes like this one and at least slightly less of the extremist clickbaity stuff. Maybe it's a sign of maturity. (Or maybe it's just fatigue with the cycle of hyping the absolute-latest model?)
[−] sharperguy 40d ago
It's actually common for human-written projects to go through an initial R&D phase where the first prototypes turn into spaghetti code and require a full rewrite. I haven't been through this myself with LLMs, but I wonder to what extent they could analyse the codebase, propose and then implement a better architecture based on the initial version.
[−] nickstinemates 40d ago
There is a lot you can do to shape the end result to not have these faults. In the end, the engineering mind and rigor still needs to apply, so the hard work doesn't go away.

But, the errors that are described - no architecture adhesion, lack of comprehension, random files, etc. are a matter of not leveling up the sophistication of use further, not a gap in those tools.

As an example: very clearly laying out your architecture principles, guidance, how code should look on disk, theory on imports, etc., and then objectively analyzing any proposed change against those principles, converges toward something sane and understandable.

We've been calling it adversarial testing across a number of dimensions - architecture, security, accessibility, among other things. Every PR gets automatically reviewed and scored based on these perspectives. If an adversary doesn't OK the PR, it doesn't get merged.
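
A toy illustration of that kind of gate (the principles, module names, and diff strings below are hypothetical, not the commenter's actual system): each "adversary" inspects the added lines of a diff for violations of one stated principle, and the PR passes only if every adversary approves.

```python
# A toy "adversarial review" gate. Rules and names are hypothetical.
import re

def architecture_adversary(diff: str) -> bool:
    # Principle: UI/handler code must not reach into the DB layer directly.
    return not re.search(r"^\+.*from app\.db import", diff, re.MULTILINE)

def security_adversary(diff: str) -> bool:
    # Principle: no hardcoded secrets in added lines.
    return not re.search(r"^\+.*(api_key|password)\s*=\s*['\"]", diff, re.MULTILINE)

ADVERSARIES = [architecture_adversary, security_adversary]

def gate(diff: str) -> bool:
    """The PR merges only if every adversary OKs it."""
    return all(adversary(diff) for adversary in ADVERSARIES)

bad_diff = '+from app.db import session  # handler reaching into the DB layer'
good_diff = '+from app.services import user_service'
print(gate(bad_diff))   # False
print(gate(good_diff))  # True
```

In a real setup each adversary would presumably be an LLM reviewer with its own rubric rather than a regex, but the merge rule - all perspectives must approve - is the same shape.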

[−] dirtbag__dad 40d ago

> Tests created a similar false comfort. Having 500+ tests felt reassuring, and AI made it easy to generate more. But neither humans nor AI are creative enough to foresee every edge case you’ll hit in the future; there are several times in the vibe-coding phase where I’d come up with a test case and realise the design of some component was completely wrong and needed to be totally reworked. This was a significant contributor to my lack of trust and the decision to scrap everything and start from scratch.

This is my experience. Tests are perhaps the most challenging part of working with AI.

What’s especially awful is any refactor of existing shit code that does not have tests to begin with, and the feature is confusing or inappropriately and unknowingly used multiple places elsewhere.

AI will write test cases checking that the logic works at all (fine), but the behavior, especially what would be covered in an integration test, is just not covered at all.

I don’t have a great answer to this yet, especially because this has been most painful to me in a React app, where I don’t know testing best practices. But I’ve been eyeing up behavior driven development paired with spec driven development (AI) as a potential answer here.

Curious if anyone has an approach or framework for generating good tests

[−] rokob 40d ago

> architecture is what happens when all those local pieces interact, and you can’t get good global behaviour by stitching together locally correct components

This is a great article. I’ve been trying to see how layered AI use can bridge this gap but the current models do seem to be lacking in the ambiguous design phase. They are amazing at the local execution phase.

Part of me thinks this is a reflection of software engineering as a whole. Most people are bad at design. Everyone usually gets better with repetition and experience. However, as there is never a right answer just a spectrum of tradeoffs, it seems difficult for the current models to replicate that part of the human process.

[−] lubujackson 40d ago
Long term, I think the best value AI gives us is a powerful tool to gain understanding. I think we are going to see deep understanding become the output goal of LLMs soon. For example, the blocker on this project was the dense C code with 400 rules. Working with LLMs allowed the structure and understanding to be parsed and used to create the tool, but maybe an even more useful output would be full documentation of the rules and their interactions.

This could likely be extracted much easier now from the new code, but imagine API docs or a mapping of the logical ruleset with interwoven commentary - other devtools could be built easily, bug analysis could be done on the structure of rules independent of code, optimizations could be determined on an architectural level, etc.

LLMs need humans to know what to build. If generating code becomes easy, codifying a flexible context or understanding becomes the goal that amplifies what can be generated without effort.

[−] PaulHoule 40d ago
Note I believe this one because of the amount of elbow grease that went into it: 250 hours! Based on smaller projects I’ve done I’d say this post is a good model for what a significant AI-assisted systems programming project looks like.
[−] mossBenchwright 40d ago
This is a really good article but one of the paragraphs at the end rubs me the wrong way.

> In theory, you can try to preserve this context by keeping specs and docs up to date. But there’s a reason we didn’t do this before AI: capturing implicit design decisions exhaustively is incredibly expensive and time-consuming to write down. AI can help draft these docs, but because there’s no way to automatically verify that it accurately captured what matters, a human still has to manually audit the result. And that’s still time-consuming.

I agree that it's time consuming and we don't have a good solution yet, but my guess is that a huge part of the next 3 years of iteration in the craft of Software Engineering is going to be creating tools and practices to make this possible. Especially as AIs get better at the actual writing of the code, the key failure mode for agentic coding is going to be the intent gap between what you asked for and what you wanted.

[−] moshib 40d ago

> There’s an uncomfortable parallel between using AI coding tools and playing slot machines28. You send a prompt, wait, and either get something great or something useless. I found myself up late at night wanting to do “just one more prompt,” constantly trying AI just to see what would happen even when I knew it probably wouldn’t work. The sunk cost fallacy kicked in too: I’d keep at it even in tasks it was clearly ill-suited for, telling myself “maybe if I phrase it differently this time.”

Oof, this hit very close to home. My workplace recently got, as a special promotion, unlimited access to a coding agent with free access to all the frontier models, for a limited period of time. I find it extremely hard to end my workday when I get into the "one more prompt" mindset, easily clocking 12-hour workdays without noticing.

[−] jillesvangurp 40d ago
This is the hardest it's ever going to be. That's been my mode for the last year. A lot of what I did in the last month was complete science fiction as little as six months ago. The scope and quality of what is possible seems to leap ahead every few weeks.

I now have several projects going in languages that I've never used. I have a side project in Rust, and two Go projects. I have a few decades of experience with backend development in Java, Kotlin (the last ten years) and occasionally Python. And some limited experience with a few other languages. I know how to structure backend projects, what to look for, what needs testing, etc.

A lot of people would insist you need to review everything the AI generates. And that's very sensible. Except AI now generates code faster than I can review it. Our ability to review is now the bottleneck. And when stuff kind of works (evidenced by manual and automated testing), what's the right point to just say it's good enough? There are no easy answers here. But you do need to think about what an acceptable level of due diligence is. Vibe coding is basically the equivalent of blindly throwing something at the wall and seeing what sticks. Agentic engineering is on the opposite side of the spectrum.

I actually emphasize a lot of quality attributes in my prompts. The importance of good design, high cohesiveness, low coupling, SOLID principles, etc. Just asking for potential refactoring with an eye on that usually yields a few good opportunities. And then all you need to do is say "sounds good, lets do it". I get a little kick out of doing variations on silly prompts like that. "Make it so" is my favorite. Once you have a good plan, it doesn't really matter what you type.

I also ask critical questions about edge cases, testing the non happy path, hardening, concurrency, latency, throughput, etc. If you don't, AIs kind of default to taking short cuts, only focus on the happy path, or hallucinate that it's all fine, etc. But this doesn't necessarily require detailed reviews to find out. You can make the AI review code and produce detailed lists of everything that is wrong or could be improved. If there's something to be found, it will find it if you prompt it right.

There's an art to this. But I suspect that that too is going to be less work. A lot of this stuff boils down to evolving guardrails to do things right that otherwise go wrong. What if AIs start doing these things right by default? I think this is just going to get better and better.

[−] ang_cire 40d ago
It's a huge mistake to start building with Claude without mapping out a project in detail first, by hand. I built a pretty complex device orchestration server + agent recently, and before I set Claude to actually coding I had ~3000 lines of detailed design specs across 7 files that laid out how and what each part of the application would do.

I didn't have to review the code for understanding what Claude did, I reviewed it for verifying that it did what it had been told.

It's also nuts to me that he had to go back in later to build in tests and validation. The second there is an input able to be processed, you bet I have tests covering it. The second a UI is being rendered, I have Playwright taking screenshots (or gtksnapshot for my linux desktop tools).

I think people who are seeing issues at the integration phase of building complex apps are having that happen because they're not keeping the limited context in mind, and preempting those issues by telling their tools exactly how to bridge those gaps themselves.

[−] zer00eyz 40d ago
This article is describing a problem that is still two steps removed from where AI code becomes actually useful.

90 percent of the things users want either A) don't exist or B) are impossible to find, install and run without being deeply technical.

These things don't need to scale, and they don't need to be well designed. They are, for the most part, targeted, single-user, single-purpose artifacts. They are migration scripts between services; they are quick and dirty tools that make bad UIs and workflows less manual and more manageable.

These are the use cases I am seeing from people OUTSIDE the tech sphere adopt AI coding for. It is what "non techies" are using things like open claw for. I have people who in the past would have been told "No, I will not fix your computer" talk to me excitedly about running cron jobs.

Not everything needs to be Snap-on quality; the bulk of end users are going to be happy with Harbor Freight quality, because it is better than NO tools at all.

[−] cloche 40d ago
Really great to see a realistic experience sans hype about AI tools and how they can have an impact.

> But when I reviewed the codebase in detail in late January, the downside was obvious: the codebase was complete spaghetti...It was extremely fragile; it solved the immediate problem but it was never going to cope with my larger vision...I decided to throw away everything and start from scratch

This part was interesting to me as it lines up with Fred Brooks "throw one away" philosophy: "In most projects, the first system built is barely usable. Hence plan to throw one away; you will, anyhow."

As indicated by the experience, AI tools provide a much faster way of getting to that initial throw-away version. That's their bread and butter for where they shine.

Expecting AI tools to go directly to production quality is a fool's errand. This is the right way to use AI - get a quick implementation, see how it works and learn from it but then refactor and be opinionated about the design. It's similar to TDD's Red, Green, Refactor: write a failing test, get the test passing ASAP without worrying about code quality, refactor to make the code better and reliable.
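
The red-green-refactor loop mentioned here, in miniature (the `slugify` function and its spec are invented for illustration, not taken from the article):

```python
# Red-green-refactor in miniature.
import re

# Red: write a failing test first.
def test_slugify():
    assert slugify("Hello World!") == "hello-world"

# Green: the quickest thing that could pass, quality be damned:
#   def slugify(s): return "hello-world"
# Refactor: same observable behaviour, a real implementation.
def slugify(s: str) -> str:
    s = s.lower()
    s = re.sub(r"[^a-z0-9]+", "-", s)  # collapse non-alphanumerics to dashes
    return s.strip("-")

test_slugify()  # green again after the refactor
```

The analogy to AI tools: the vibe-coded prototype is the hardcoded "green" step - it passes, teaches you what you actually want, and is then replaced by the deliberate version.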

In time, after this hype cycle has died down, we'll come to realize that this is the best way to make use of AI tools over the long run.

> When I had energy, I could write precise, well-scoped prompts and be genuinely productive. But when I was tired, my prompts became vague, the output got worse

This part also echoes my experience - when I know well what I want, I'm able to write more specific specifications and guide along the AI output. When I'm not as clear, the output is worse and I need to spend a lot more time figuring it out or re-prompting.

[−] zellyn 40d ago
Does SQLite not have a Lemon-generated parser for its SQL?

When I ported pikchr (also from the SQLite project) to Go, I first ported Lemon, then the grammar, then the supporting code.

I always meant to do the same for its SQL parser, but the pikchr grammar is orders of magnitude simpler.

[−] smj-edison 40d ago
The description of working with AI tools really resonates with me. It's dangerous to work on my codebase when I'm tired, since I don't feel like doing it properly, so I play slots with Claude, and stay up later than I should. I usually come back later and realize the final code that gets generated is an absolute mess.

It is really good for getting up to speed with frameworks and techniques though, like they mentioned.

[−] bytefish 40d ago
This resonates with my experience.

I have several Open Source projects and wanted to refactor them for a decade. A week ago I sat down with Google Gemini and completely refactored three of my libraries. It has been an amazing experience.

What’s a game changer for me is the feedback loop. I can quickly validate or invalidate ideas, and land on an API I would enjoy using.

[−] pwr1 40d ago
This resonates. I had a project sitting in my head for years and finally built it in about 6 weeks recently. The AI part wasn't even the hard part, honestly; it was finally committing to actually shipping instead of overthinking the architecture. The tools just made it possible to move fast enough that I didn't lose momentum and abandon it like every other time.
[−] DareTheDev 40d ago
This is very close to my experience, and I agree with the conclusion. I would like to see more of this.
[−] darkstarsys 39d ago
This post is excellent, and accurately describes my experience writing pcons (pcons.org) as a side project. I was one of the original developers of SCons and have wanted to rebuild it better for more than a decade. All the same roadblocks Maganti describes kept me from starting it; Claude Opus 4.6 suddenly opened the door, and now it's live and people are starting to use it as a cmake or scons replacement. My experience over the last few months mirrors Maganti's in many ways: ease of refactoring, investigating many more design ideas, getting frustrated with blind alleys and its failure to understand the big picture, and ultimately getting a product I'm proud of.

Vision, taste and good judgment are going to be the key skills for software developers from now on.

[−] billylo 40d ago
Thank you. The learning aspect of reading how AI tackles something is rewarding.

It also reduces my hesitation to get started with something I don't know the answer well enough yet. Time 'wasted' on vibe-coding felt less painful than time 'wasted' on heads-down manual coding down a rabbit hole.

[−] myultidevhq 40d ago
The 8-year wait is the part that stands out. Usually the question is "why start now" not "why did it take 8 years". Curious if there was a specific moment where the tools crossed a threshold for you, or if it was more gradual.
[−] bvan 40d ago
This is a very insightful post. Thanks for taking the time to share your experience. AI is incredibly powerful, but it’s no free lunch.
[−] alexpotato 39d ago
Been using LLMs both at work (FinTech DevOps/SRE) and on side projects (big data, games, websites) and here has been my "arc"

- first used copy and paste in and out of Grok

- started using CLI tools e.g. Claude and OpenCode

- move up to using 3 and sometimes 4 agents at the same time

- considered going to the agents managing agents

- have settled on having LLMs build tools that are deterministic, usable by both humans and the agent, and faster (b/c there is less "back and forth")

Honestly, it feels a LOT like when Kubernetes came out. e.g. you stopped running containers on a box using Docker Compose plus scripts/configs etc. Instead gave a large part of the operation to an "agent" (in this case k8s) that managed all of the details you didn't need to care about anymore.

I've also realized that while the LLMs can crank out code at a very high rate, someone still needs to make sure everything is running, debug issues etc. You could set up agents to monitor what the agents do but then you still end up with someone needing to keep an eye on everything. If anything, you need MORE people b/c now you can just keep spinning up new components etc.

Also, was in a discussion with one of the best developers I've ever worked with. It came down to the following point:

"Programming is rapidly becoming a hobby. Software engineering is becoming more important than ever."

[−] eviks 40d ago

> spent weeks in the early days following AI down dead ends, exploring designs that felt productive in the moment but collapsed under scrutiny

> I paid for that with a total rewrite.

With so much waste and not a single example of the "brilliant at giving you the right answer to a specific technical question"

> The takeaway for me is simple: AI is an incredible force multiplier

Seems more like a feel multiplier, rather than force.

> 500 tests, many of which I felt I could reuse

Indeed, feeling is the only saving grace for a mountain of random unreviewed tests

[−] bigcat12345678 40d ago

> Unfortunately, unlike many other languages, SQLite has no formal specification describing how it should be parsed.

BorgCfg had exactly the same situation.

mpvl (borgcfg original author, author of https://cuelang.org/) and others had tried to refine bcl while bcl itself is underspecified.

Eventually, the team built a drop-in replacement of bcl and specced out the language almost entirely.

The biggest lesson for me was that engineering never has any shortcuts.

[−] simondotau 40d ago
This essay perfectly encapsulates my own experience. My biggest frustration is that the AI is astonishingly good at making awful slop which somehow works. It’s got no taste, no concern for elegance, no eagerness for the satisfyingly terse. My job has shifted from code writer to quality control officer.

Nowhere is this more obvious in my current projects than with CRUD interface building. It will go nuts building these elaborate labyrinths and I’m sitting there baffled, bemused, foolishly hoping that THIS time it would recognise that a single SQL query is all that’s needed. It knows how to write complex SQL if you insist, but it never wants to.

But even with those frustrations, damn it is a lot faster than writing it all myself.

[−] dcre 40d ago
"Knowing where you are on these axes at any given moment is, I think, the core skill of working with AI effectively."

I like this a lot. It suggests that AI use may sometimes incentivize people to get better at metacognition rather than worse. (It won't in cases where the output is good enough and you don't care.)

[−] tech_ken 39d ago

> When I was working on something I already understood deeply, AI was excellent. I could review its output instantly, catch mistakes before they landed and move at a pace I’d never have managed alone.

This precisely captures my experience with AI tools. When I understand the domain very deeply, AI feels like magic. I can tell it exactly how I want something implemented and it just appears in 30 seconds. When I don't understand something very well, however, I get easily misled by bogus design choices that I've delegated to the AI. It's so easy for me to spend 4 hours drafting some prototype in an almost dreamlike state of productive bliss, only for it to crash apart when I discover some fundamental bug in the thing I've vibecoded.

[−] The_Goonies1985 40d ago
The author mentions a C codebase. Is AI good at coding in C now? If so, which AI systems lead in this language?

Ideally: local; offline.

Or do I have to wrestle it for 250 hours before it coughs up the dough? Last time I tried, the AI systems struggled with some of the most basic C code.

It seemed fine with Python, but then my cat can do that.

[−] stepan_l 40d ago
I had the same experience, been working on my project for a few months and it started very easy and then I lost control of the code base. Had to rewrite a lot of things. The code AI writes does not look bad, but there is something wrong about it. It just does not feel right. You still need to steer it a lot. But I am very happy that I could write a quite complex project with almost no dependencies at all. Only used Electron. I don't even use npm. That is very promising how far you can get without relying on any libraries/frameworks. You can check it here https://github.com/AgentWFY/AgentWFY MIT license.
[−] sebastianconcpt 40d ago
In a not so far future, people will be amazed that these dense pieces of source code were written by hand and meant to be maintained by people. The same kind of amazement you feel when thinking about the internals of The Silver Swan or any other famous mechanical automaton.
[−] javierhonduco 40d ago
Great write-up. As a side note (not a Googler myself and this is 100% my opinion) Lalit’s team was hiring in London, UK. If you are interested in working in low level performance tools, this might be a very cool opportunity!
[−] FpUser 40d ago
I do not have anything resembling the problems described. Before I ask AI to create new code (except for super trivial things), I first split the application into smaller functional modules. I then design the structure of the code down to the main classes and methods and their interaction, and try to keep the scope small. Then the AI just fills out the actual code. I have no problem reviewing it. Sometimes I discover issues - like using arrays instead of maps, leading to performance problems - but that is easily spotted.
[−] nektro 39d ago

> I’ve long been puzzled that no one has invested in building a really good developer experience for it.

https://sqlitebrowser.org/

> Unfortunately, unlike many other languages,

what

> SQLite has no formal specification describing how it should be parsed.

https://sqlite.org/syntax.html

[−] throwaway47001 40d ago
I appreciate these kind of fact-based posts. Thank you for this.

Unfortunately, AI seems to be divisive. I hope we will find our way back eventually. I believe the lessons from this era will reverberate for a long time and all sides stand to learn something.

As for me, I can’t help but notice there is a distinct group of developers that does not get it. I know because they are my colleagues. They are good people and not unintelligent, but they are set in their ways. I can imagine management forcing them to use AI, which at the moment is not the case, because they are such laggards. Even I sometimes want to “confront” them about their entire day wasted on something even the free ChatGPT would have handled adequately in a minute or two. It’s sad to see actually.

We are not doing important things and we ourselves are not geniuses. We know that or at least I know that. I worry for the “regular” developer, the one that is of average intellect like me. Lacking some kind of (social) moat I fear many of us will not be able to ride this one out into retirement.

[−] moropex 39d ago
Had a similar experience recently. AI-generated code that worked, tests passing, but I couldn't explain how half of it worked. Starting over with a clear mental model and using AI as an accelerator instead of a replacement made all the difference.
[−] looshch 40d ago
completely off-topic, but i love the fact that this blog has the exact shade of black for the background as my site loosh.ch. Guess we both took it from some of the Google product’s night theme
[−] deterministic 36d ago
What an excellent read. Balanced and full of real insight. How rare that is nowadays.
[−] Srinathprasanna 38d ago
AI coding gives you mass. You still need to provide direction. Mass without direction is just a mess that runs.
[−] amai 39d ago
The takeaway from the article:

"AI is an incredible force multiplier for implementation, but it’s a dangerous substitute for design."