A Eulogy for Vim (drewdevault.com)

by mtts 143 comments 138 points

[−] embedding-shape 52d ago

> I think it’s more important that we stop collectively pretending that we don’t understand how awful all of this is

Lord forbid people disagree with you. I know Drew's vibe is always "I'm right because I'm the only one with the correct opinions", but it does get tiring after a while.

Not to say that AI doesn't come with huge drawbacks, or that it's exactly worry-free, but why not change your frame of mind from "Why don't others understand how awful it is?!" to "People are seeing something I'm not, what am I missing?" so your article could actually contain something other than personal, emotional rants?

[−] MSFT_Edging 51d ago

> "People are seeing something I'm not, what am I missing?"

I've seen people celebrate horrors beyond my comprehension. Cheer the deaths of innocent people because it may inch some abstract national goal closer to a similarly abstract measurement. Insist that lives in one place are worth less than lives in another.

Should I ask "what am I missing"?

I don't think so. Sometimes you draw a line on moral or ethical grounds, and some of those lines should never be allowed to become fluid. It will always be wrong to bomb a school of children, just like (for Drew and me) it will always be wrong to rip the livelihood from under millions of people's feet for shareholder value. It will be wrong to ignore damaging consequences to the environment. It will be wrong to insist that a low-quality imitation should ever hold the same value as the original idea.

[−] nh23423fefe 51d ago
This unpersuasive moralizing demonstrates the blind spot GP is talking about. You invent a moral/ethical line because you can't find a good one.

using gpt is like bombing schools?

[−] tovej 51d ago
I think they may be referring to the story about how the US bombed an Iranian school on the first day of the war due to data sourced from an LLM.
[−] skeledrew 51d ago

> rip the livelihood from under millions of people's feet

I have never gotten this. How is livelihood being "ripped away"? There is enormous capability made available to anyone and everyone who wants to take hold of and do something with it. Just as it's on each individual to go through the process and pains of landing a job (or building a business, etc), it's also on each individual to keep up with changes that may affect their livelihood. If they want to keep it.

[−] MSFT_Edging 51d ago

> How is livelihood being "ripped away"?

Who does it benefit to automate away well-paid industries? For every well-paid industry that is mostly automated away, you remove one more path for financial mobility.

One less path available means more people doomed to the service economy serfdom. You can be incredibly intelligent, creative, personable, and driven, but bad luck can still doom you to the role of a serf.

It's incredibly naive to assume the pattern of the short history of industrialization will continue. More jobs may have been created in the past, but where are those plans for the future? Why is it imperative we accept the plans of people making money hand over fist, while also forced to endure the hardships of adapting?

Jeff Bezos won't have difficulties adapting, but the average citizen will lose their healthcare and get beaten by a cop for protesting their own social murder.

Pure automation and efficiency can't be the one true path if we want to maintain our current economic system. Capitalism needs waste and inefficiency. It has little room for charity when the shareholders are the end beneficiary.

[−] skeledrew 49d ago
It benefits humanity as a whole to have all industries, across the board, automated away. Right now that's primarily happening in the service economy, which essentially means either there are increasingly fewer "serfs", or they're moving up the ladder. This is just accelerating the process and pushing from the top.

In the end everyone will eventually be at the same earning-potential level (or whatever it's called), and then there will literally be no more "potential for earning" because human labor will have zero economic value (though it will always have aesthetic value). And by then the greatest collective decision the majority of mankind will ever have to make will already have been made: do away with this highly flawed and unsustainable economic system, or be wholly at the mercy and whims of those unreasonably trying to keep it in place. It's up to us whether the inevitably fully automated future is a dystopia or a utopia. There is no viable middle ground.

https://marshallbrain.com/manna

[−] cindyllm 51d ago
[dead]
[−] dpatterbee 52d ago
I think the point is that regardless of what benefits LLMs are bringing to the table, there are a list of downsides that Drew views as non-negotiables. It doesn't matter what other people are seeing, because he sees a fundamental issue underlying the entire premise.

It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.

[−] MisterTea 51d ago

> It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.

I feel that the people who are completely ignoring the harms are the ones who need and/or benefit from it and do whatever it takes to justify their use of it. The rest are people who understand the harms and minimize interaction followed by the blissfully ignorant.

I was just talking to a content creator who uses AI at work and uses social media platforms to display her personal projects. She talked about how she is fully aware of the harm social media platforms bring while acknowledging they empower her to present her work to the world without gatekeeping. AI allows her to power through boring office tasks, but she loathes its use in the art world and its replacing of people in general.

[−] bigbadfeline 51d ago

> It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.

I would insist that the deployments of a technology should be disconnected from the technology itself - I criticize AI too, and I get a lot of downvotes for it, but I try to separate the science of AI from its economics and politics.

The harms of AI and other technologies come from two sources: 1. capital-destroying market bubbles, and 2. deployments motivated and enabled by political and moral corruption.

Both of these are in turn enabled and sustained by legislation. That is, we have to talk politics, not technology and not AI. AI has great potential, both for improving human life and for making it a lot worse, and which way it goes depends entirely on politics.

If we fail to cleanly separate these issues and keep moralizing about technology, we will be chasing red herrings and bumping heads in the dark all the while the tech is being deployed against us.

[−] embedding-shape 52d ago

> there are a list of downsides that Drew views as non-negotiables

Which is all fine and dandy. But why play the "You simply don't understand it as well as I do" card rather than something more investigative and curious? It just fuels the whole "holier than thou" vibe Drew has seemingly been amplifying every day.

It's a disagreement of opinion, not some "I'm the only smart person who can realize this", which is why it kind of sours the entire piece.

[−] chromacity 51d ago

> Which is all fine and dandy. But why play the "You simply don't understand it as well as I do"

I'll say this from the perspective of a person who publishes content online: because people's revealed preference is for content written this way. You can spend weeks polishing thoughtful, original content that will get few clicks, or you can crank out throwaway op-eds about AI and get thousands of likes and upvotes from people who just wanted to hear their own beliefs explained back to them.

My stuff appeared on HN a couple of times over the years and the less effort I put into it, the better it fared. The temptation to change your writing style and to offer increasingly more provocative and shallow opinions is difficult to resist.

My point is probably this: if you want to see better stuff, I think you gotta stop engaging with articles like this. Patrol /newest and upvote cool in-depth stuff.

[−] lelanthran 51d ago

> But why play the "You simply don't understand it as well as I do" rather than something more investigative and curious?

That's not the tone of the article; he uses the word "pretending". That tells me that he thinks that people do understand, but they don't want to admit that they understand because that would reveal their values.

[−] dpatterbee 51d ago
In fairness, he pretty explicitly states that he thinks people do understand it, but are pretending not to in order to wash their hands of the consequences. I'm definitely not reading it in the same way you are.
[−] mikkupikku 51d ago
It's a difference in values, not understanding. I understand that AI burns tons of power, and I don't care. Drew understands it the same as I, but he does care. The difference is in what people value, and relative to what.
[−] sarchertech 52d ago
Well I think he’s taken a moral stance against AI, so it doesn’t matter to him if other people find it useful.
[−] embedding-shape 52d ago
Right, a difference of opinion, which is fine, OK, and even good. But why paint it as "Obviously the rest of you aren't smart enough to understand" instead of "Others disagree"? It seems really strange (although in-character).
[−] tovej 51d ago
Don't put things in quotes if they're not actual quotes. Especially not if you mischaracterize the article.
[−] monkaiju 50d ago
He never said that, you're just strawmanning the piece
[−] ChrisLTD 52d ago
Why should he say something he doesn't believe? We don't have to agree with him.
[−] embedding-shape 52d ago

> I think it’s more important that we stop collectively pretending that we don’t understand how awful all of this is

Would be very different from say:

> I'd like to understand people who don't see it the same way as me, that it's mostly awful and not good.

Or similar, rather than "I'm right, everyone else doesn't understand it properly". Very HN-esque, but oh so tiring after hundreds of articles in exactly the same vein from the same author.

[−] tovej 51d ago
Not everything needs to be looked at from "both sides".

Drew is correct, the impact of generative "AI" on society is overwhelmingly negative.

[−] grayhatter 51d ago

> Lord forbid if people disagree with you.

This is too shallow a take. Especially when your very next point objects to the default reference frame he uses, which you disagree with. Lord forbid Drew disagree about, I think, priorities and values?

> why not change your frame of mind from "Why don't others understand how awful it is?!" to "People are seeing something I'm not, what am I missing?"

It's the same question. I sympathize with both questions; I constantly feel both frustrated and broken by how few people care about quality and participating fairly. I try very hard to find the positive aspects "everyone" claims LLM codegen provides. I'm looking hard, and can't find them. It's painfully average, often worse when it gets lost. It doesn't and can not help me, only get in the way. What am I doing wrong? Why is everyone missing something I see as obvious? But again, both could easily be true from both frames you suggest. "Why can't people identify this as trash" could very easily be followed by "what am I missing from the equation?" and be the same thought/idea.

> so your article could actually contain something else than personal and emotions rants?

I mean, it's titled, A Eulogy for Vim. That seems to be what it says on the tin, no?

[−] lelanthran 51d ago

> why not change your frame of mind from "Why don't others understand how awful it is?!" to "People are seeing something I'm not, what am I missing?"

Ironically, you are not considering that he is seeing something that you are not, but you are not asking "What am I missing?"

See, that sword cuts both ways.

[−] herodoturtle 52d ago

> And at a moment when the climate demands immediate action to reduce our footprint on this planet, the AI boom is driving data centers to consume a full 1.5% of the world’s total energy production in order to eliminate the jobs of the poor and replace them with a robot that lies.

That sentence jumped out at me.

[−] CobrastanJorji 52d ago
It's a little wrong. It's probably going to replace middle-class jobs more than the jobs of the poor.
[−] soperj 52d ago
The middle class is poor now.
[−] nickvec 51d ago
Yep. The middle class is quickly disappearing.
[−] sam_lowry_ 52d ago
The poor tend to think of themselves as middle-class.
[−] metalliqaz 51d ago
so do the rich

note I used "rich" there, not "wealthy"

[−] linkregister 52d ago
For an environmentmaxxer, eliminating upper-middle class jobs is extremely effective, as this group consumes the lion's share of resources and bears the greatest impact on carbon emissions. Remember that the majority of industry is upstream of consumption.

Not endorsing this world view, just noting that the wealthiest 1% of people in the world (encompasses most US citizens) have an enormously outsized impact on climate.

[−] CobrastanJorji 51d ago
The "upper middle class" is not strictly defined, but they are pretty clearly the folks below the wealthiest 1%. You can't be in the middle without something on either side.

They certainly consume far more than the poor, on account of having resources, but they also consume far less than the wealthiest 1%.

[−] linkregister 51d ago
You are almost certainly in the wealthiest 1% globally [1]. Compared to developing nations' residents, we have an order of magnitude greater impact on the environment.

1. https://www.bbc.com/news/magazine-17512040

[−] Legend2440 51d ago

>Remember that the majority of industry is upstream of consumption.

People forget this. Oil companies may have dug up the oil, but they did so because we paid them to, so we could use the energy for good and useful things.

Climate change isn't 'evil billionaire companies are ruining the world', it's 'these things we did to improve our lives turn out to have side effects'.

[−] sam_lowry_ 51d ago
The discussion is about the current generation of LLMs. It's not yet clear whether the side effects outweigh the advantages.

OTOH, I can already argue with numbers at hand that Bitcoin made the world poorer and worse off.

[−] Legend2440 51d ago
This is backwards. If it weren't for 'eliminating jobs' we'd both be peasant farmers right now. Automation has improved the standard of living and raised wages for everyone, rich and poor alike.
[−] snackerblues 51d ago
Ok, Claude.
[−] roryrjb 52d ago
I think I will use vim-classic and possibly contribute to it. Not because of AI, but because I actually want to use Vim over, say, something like Neovim, and I actually like vimscript, which imo didn't need the development of vim9script to improve it.

Regarding why not Neovim, I think it's because a large section of the community want to create more complex TUI elements or replicate GUI interfaces and make it more like VS Code. I use Vim for the "vim way" not because it's in a terminal or it's not bloated like some other editors.

[−] omoikane 52d ago
The linked reference said:

> The maturity of Vim9 script's modern constructs is now being leveraged by advanced AI development tools. Contributor Yegappan Lakshmanan recently demonstrated the efficacy of these new features through two projects generated using GitHub Copilot

https://www.vim.org/vim-9.2-released.php#:~:text=The%20matur...

I am not sure I understand the author's concern. Is he saying that Vim 9.2 is problematic because the maturity of Vim9 script enables AI integration?

[−] cldellow 51d ago
Buried (IMO) in the post is:

> sadly even Vim now comes under scrutiny in that effort as both Vim and NeoVim are relying on LLMs to develop the software.

...where he links to a comment in a closed issue where someone accuses a contributor of using an LLM to generate patches: https://github.com/vim/vim/issues/18800#issuecomment-3568099...

The tl;dr: Drew thinks Vim development has been tainted by LLM contributions, and is thus morally unsuitable to be used, and he will therefore be forking it.

[−] taviso 51d ago
I'm experiencing something similar with another piece of software. ledger-cli is a boring, dependable accounting application.

The next release will be the first where the majority of commits will be made by AI, and it has definitely not gone smoothly.

After a dozen or so bug reports, it's mostly in a working state, but I worry the output is no longer reliable in subtle ways.

[−] Kwpolska 52d ago
A Vim contributor vibe-coded some toy plugins, and the reaction to that is forking Vim? Sounds like throwing out the baby with the bathwater.
[−] elcapitan 52d ago
One of the side effects of AI is definitely that a lot of people have way too much time on their hands, which they can now invest in pointless community drama.
[−] AlexandrB 52d ago

> I won’t speculate on how he would have felt about generative AI, but I can say that GenAI is something I care about. It causes a lot of problems for a lot of people. It drives rising energy prices in poor communities, disrupts wildlife and fresh water supplies, increases pollution, and stresses global supply chains.

This kind of stuff drives me crazy sometimes. There is little here that's unique to AI. These are the effects of any kind of industrial expansion. They're also the effects of population growth in general. This stuff is a problem iff AI is a scam or hugely oversold and these resources are being wasted. But that's a different argument and a less clear-cut one.

> It re-enforces the horrible, dangerous working conditions that miners in many African countries are enduring to supply rare metals like Cobalt for the billions of new chips that this boom demands.

This point also deserves special mention. Most green technologies (solar panels, electric cars) also require a bunch of cobalt. Again, the "badness" seems to depend on your a priori evaluation of what the cobalt is being used for, not on the cobalt mining itself.

I think there's also a pretty good chance that if a robot that could mine the same cobalt with no human intervention appeared tomorrow, many folks would complain about "hard working cobalt miners in Africa losing their livelihood to automation".

[−] jmclnx 52d ago
Interesting that he forked Vim 8.2.0148, but I am fine with that. I think I had to update ~/.vimrc to disable a new default in v9 that annoyed me. I actually forgot what it was :)

I will have to look into his fork because I too do not want to see any form of AI in vim.

I may also look to see what Elvis looks like these days. I really liked the GUI and colors Elvis defaulted to and I stuck with it for a while, but eventually I went to vim in the v5 days for reasons I forgot.

[−] nickandbro 51d ago
Without getting into some of the other things mentioned in the article,

I don't think Vim is going away. Even with all the AI code written, engineers navigate through Claude Code / Codex using Vim (ex: Vim mode in Claude Code).

I really like Vim so much that I've built a gamified way to learn it at https://vimgolf.ai that I am working on completing.

[−] anitil 51d ago
I couldn't quite follow the logic in this article, though some comments on here have clarified what it seems Drew means here. To be honest, this spiralling logic reminds me of my thought processes when I was at my most depressed
[−] skybrian 52d ago
This doesn't seem like a good cost-benefit analysis for AI. Not sure what that would look like, but it seems like making some attempt to quantify the benefits would help.
[−] sourcegrift 52d ago
Drew is a genius, but a toxic genius. I've been using vim for 23 years, and I'd rather not use vim at all than use his version if that's my only choice of vim.
[−] my_throwaway23 51d ago
I opened the comment section expecting to find a slew (a slop?) of LLM enthusiasts. I was not disappointed.

Whether you're a fanatic or not, of either side, LLM usage is driving energy and hardware prices up, it is an implicit driver of climate change, and it will replace jobs. I don't see what there's to argue.

Great article through and through.

[−] kgwxd 52d ago
If you're not using any software that might include code that originally came from an LLM, might as well give up on everything now. I'll give up the base if I ever have to remove built-in AI tools, but I don't foresee Vim dev getting that dumb anytime soon.
[−] arjie 52d ago
I love it. Some alternative pathways in code make finding good solutions more likely. I've always liked Neovim, so I'm going to stick with it. I use it in a pretty much vanilla mode. Just deoplete et al.
[−] dasil003 51d ago
This is a weird hill to die on. As much as I resonate with many of the concerns, I don't see refusing to use AI as something that will actually help any of those things. Forking a stable version of vim is something I guess, but I don't really see the sky falling with mainline vim or neovim.

Personally the leverage I have as a bit of a cranky graybeard myself is that I understand how software works and I can distinguish between good and bad uses of AI and think critically about how to influence things towards better software. Just declaring AI as unequivocally bad and evil will do nothing more than make me irrelevant. At some point being right is useless without some measure of also being effective.

[−] smitty1e 52d ago
The user experience is alive and well: https://www.spacemacs.org/
[−] beastman82 52d ago
Perhaps an argument against LLMs should acknowledge that their awesome power could be harnessed for good, if only nominally.
[−] mikkupikku 52d ago
I've long had great respect for Drew, since way back when he was sircmpwn writing cool calculator software. Great programmer, and an incredibly based individual. Stays true to himself even in the face of overwhelming pressure.

I completely disagree with his take on this; a vibe-coded battleship in vimscript is awesome and important, socially, because vibe coding makes computer programming accessible to the masses. I don't expect him to ever agree, but much respect nonetheless.

[−] anthk 52d ago
There's nvi2.
[−] love2read 51d ago
I really hate that the author tugged at the heartstrings of someone who is no longer alive to back their cause of hating AI.
[−] UweSchmidt 52d ago
It doesn't look like they put AI into vim like Microsoft into Notepad. Someone used an outside AI to code something with vimscript, what do you expect? I'll be worried if they mess with even the smallest bit of established muscle memory of any vim user, but a separate language (probably a dead end) and apparently some new diff options don't seem too terrible.
[−] felixagentai 52d ago
[flagged]
[−] exitb 52d ago
[dead]
[−] elif 52d ago
If emacs can live through Stallman's descent into absurd, unasked-for pedophilic defense positions, not limited to defense of Jeffrey Epstein himself, Vim can survive the simple passing of its creator.
[−] aaroninsf 52d ago
People sure hate change.
[−] ectospheno 52d ago
Plenty of people already submit AI code as their own change. I’d argue every open source program is already “tainted” in that way.
[−] blastonico 52d ago
I prefer VS Code with Claude Opus 4.6, it makes me more productive during my working hours so I can take more quality time with my family.

I know that my point of view is considered .+(cist|phobic) (based on the post). I'm sorry for that.