I am leaving the AI party after one drink (lara-aigmueller.at)

by speckx 130 comments 121 points


[−] legitster 49d ago

> I don’t want to feel this kind of “addiction.”

> I don’t want to depend on something doing the work I earn money with.

> I don’t want to give up my brain and become lazy and not think for myself anymore.

There are a lot of good reasons we should be skeptical of AI and not give up on essential skills. But sometimes I want to shake these people by the shoulders. Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?

The entire point of civilization and society is that we are all "addicted" to technology and progress. But the invention of the plow did not, in fact, make us lazier or stop us from using our brains. We just moved on to the next problems. Maybe the Amish have it right and we should just be happy with a certain level of technology. But none of us have "lost" the ability to go backwards if we really wanted to.

You can finally ask a computer to think and solve problems, and it will! People act like this is a brave new world, but this is literally what computers were supposed to be doing for us 50 years ago! If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."

[−] andai 49d ago
Reposting a comment (and one of the replies) since it's relevant:

---

It occurred to me on my walk today that a program is not the only output of programming. The other, arguably far more important output, is the programmer.

The mental model that you, the programmer, build by writing the program.

And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.

(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)

The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.

See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)

https://www.youtube.com/watch?v=ZSRHeXYDLko

---

Munksgaard 1 day ago:

Peter Naur had that realization back in 1985: https://pages.cs.wisc.edu/~remzi/Naur.pdf

[−] allenrb 49d ago
For what it’s worth (ie, absolutely nothing), I agree with her 100%. I didn’t get into this field in order to prompt an AI to take care of the details. I got into it because I love the details.

I’m a strong performer on a good team at a company many people would want to work at… and I know the clock is ticking. Sooner or later, I will be too slow.

I’m not going to claim that this is the wrong way to go. It’s obviously the future, and the future doesn’t care what allenrb does or does not want. I’m somewhat hopeful that power and cooling requirements will come down by multiple factors of 10 over time, reducing the environmental damage.

The fact is, I love what I’ve been able to do “the old way” and just don’t feel the urge to move on. So it goes.

[−] mstank 49d ago
While I applaud her and wish her well — writing like this reminds me of a couple of things.

First my aging father insisting on navigating using his unfortunately fading memory instead of Google maps. Some people just won’t pick up technology out of habit or spite, even if it hinders them.

Second, a quote I read here that I’ll paraphrase “you can be the best marathon runner in the world and still lose a race to a guy on a bike.” Know the race you’re racing. It often changes.

I think it’s valid and commendable to keep the old ways alive, but also potentially dangerous to not realize they’re old ways.

[−] firmretention 49d ago
I've found two personal use cases for LLM generated code:

(1) I have an idea for some app, but either I feel it won't be useful enough/save me enough time to justify developing it, or I simply don't feel the problem is interesting enough to be motivated by it. In that case, a vibe coded tool is perfect. It generally does one simple thing, and I don't care about long term maintenance, because it just needs to keep doing that thing.

(2) Adding a feature to an open source project. Again, it's a case of "I want this feature, but am not willing to spend the time needed to implement it." Even a relatively simple open source project can take a day or two just to get a basic understanding of the code and where I need to make the changes. Now I can often just get a functioning vibe-coded implementation within a few hours.

(2) leaves me with some unsettling feelings about how this will affect the future of open source software. Some of the features I've implemented this way may very well be useful to other users, but I can't in good conscience just dump a vibe coded pull request on a project and expect them to do the work of vetting it. And if I didn't have the energy to implement the change myself, I'm definitely not going to bother going through all the LLM generated code, cleaning it up to the standards of the project, etc. Whereas before, I had no choice but to write it myself, and getting the change ready for a PR was much less daunting since I already understood the problem space and the solution well.

So at least for myself, I can see a future where many of the apps I use are bespoke forks of popular applications. Extrapolate that to many, many people and an interesting landscape emerges.

[−] trinsic2 49d ago
Yeah, I think this article put a finger on what I was feeling after using Claude Code for the first time to convert a PDF to a Markdown document[0]. I think I will update my article with these thoughts. Thanks for touching on something I had been feeling. It also felt like I was cheating. I also used CC to update the version of my SSG, and that was fine because I did not want to spend my time dealing with that. But there are certain projects where I can see myself not feeling good about using the tool to help me.

[0]: https://www.scottrlarson.com/publications/publication-my-fir...

[−] Nevermark 49d ago
Some of us have been waiting our whole lives for a comprehensive DWIM command.

> DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user's request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious. [0]

— Teitelman and his Xerox PARC colleague Larry Masinter, in 1981

[0] https://en.wikipedia.org/wiki/DWIM

[−] haolez 49d ago
I don't have a stake in AI, but more and more I see the following patterns:

- people who give in to AI do so because the technical merits suddenly became too big to ignore (even seasoned developers who were previously against it)
- people who avoid AI center their arguments on principles and personal discomfort

Just from that, you can kind of see where this is going.

[−] basket_horse 49d ago
Why can’t people just acknowledge that AI is good at some things and bad at others? Why does every post say AI is either groundbreaking or terrible? Get a grip, people. It’s a tool.
[−] politelemon 49d ago

> When you know CSS well (and I would consider myself as someone who does), you quickly find weird and broken things in the generated code.

You have encountered https://en.wiktionary.org/wiki/Gell-Mann_Amnesia_effect

[−] PeterStuer 49d ago
I am the opposite. I do not understand how people do not get at least 5x more productive with GenAI.

Maybe it is because I do not do much front-end design. Maybe it is because I'm a bit more diligent than your average "viber", or maybe because it is easier for me to spot a suboptimal solution or to challenge it with edge cases from experience, etc.

But these people turning their backs, not on principle (because that I fully understand) but because of underperformance?

Maybe their expectations are way out there? Maybe (most likely) it is the application domain? Maybe plainly a skill issue?

But seeing how GenAI is plowing through fields, I would not turn my back on it even if it wasn't there (yet) in my domain.

[−] thepra 49d ago
That's true for new projects: I've found that as a bootstrap mechanism it is not ideal. It takes away your hard-earned preferences/know-how and leaves you barely engaged on the technical path. On the other hand, I already have quite big codebases, some legacy and inherited, with a vast amount of context to work with when figuring out fixes/bugs/improvements, and Claude Code excels 95% of the time at figuring out the existing flows and functionality. If a vague or precise issue is known to affect some files or user workflows, it can do a good job of pointing out what the issues are, with a high chance of being right.
[−] tjchear 49d ago
Here’s my experience: just yesterday I had a task that would have taken a backend engineer and a frontend engineer several days, so I set several Claude Code agents to work on it autonomously. With the time freed up, I didn’t just twiddle my thumbs: I read up on a topic that was making the rounds yesterday and gained a better understanding of it, something hard to do when you juggle both a job and raising a family. I could then apply what I learned to some other projects.

Just my two cents. No matter whether you use AI or not, I’m sure you’ll gain something.

[−] piloto_ciego 49d ago
There are always a lot of people afraid of change. People in my rough age bracket have lived in a time where change usually sucked for us, so we tend to look at this latest change and think, "oh my god, not again."

It's fear of being worthless in a society where previously being "smart" (because they could write code) made them special. The truth is, and it's a bummer, that none of us are special anymore. That's OK: get over it and go build cool things! Life is short.

[−] TheGRS 49d ago
I know I read several posts like these every day, but I've been thinking more and more that I should stick to the chat window and having AI guide me instead of doing the work for me. Great tool for showing me what to do when I don't know and offering me guidance, and rubber ducking of course, but I definitely lose the context and understanding when I just let it rip and write everything for me.
[−] cowlby 49d ago
Who else struggles with both sides of this? My engineer side values curiosity, brain power, and artisanship. My capitalist side says it's always the product, not the process. My formula is something like this: product = money, process = happiness, money != happiness, no money = unhappiness.

I think the optimal solution is min/maxing this thing: find the AI process that minimizes unhappiness and maximizes money.
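A toy sketch of that min/max in Python, just to make the formula concrete (every workflow name and score here is invented for illustration, and unhappiness is modeled as the shortfall from a 10-point happiness scale):

```python
# Hypothetical (money, happiness) scores for a few ways of working.
workflows = {
    "hand-craft everything": {"money": 2, "happiness": 9},
    "AI for boilerplate only": {"money": 6, "happiness": 7},
    "full vibe-coding": {"money": 8, "happiness": 3},
}

def score(w, unhappiness_weight=1.0):
    # Maximize money while penalizing unhappiness,
    # where unhappiness = (10 - happiness).
    return w["money"] - unhappiness_weight * (10 - w["happiness"])

best = max(workflows, key=lambda name: score(workflows[name]))
```

With these made-up numbers the middle-ground workflow wins; the interesting knob is `unhappiness_weight`, which encodes how much of the process you are willing to trade for the product.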

[−] itomato 49d ago
What did you first order at the bar? Did it burn? Did you become a teetotaler?

Latecomers lack the hundreds of iterations and the experience that comes with it. The senses haven't been trained.

There's a business here. Not one I want to be in, or one without major ethical drag, but a viable one. Fend off extinction, get fit.