s/Django/the codebase/g, and the point stands against any repo for which there is code review by humans:
> If you do not understand the ticket, if you do not understand the solution, or if you do not understand the feedback on your PR, then your use of LLM is hurting Django as a whole.
> Django contributors want to help others, they want to cultivate community, and they want to help you become a regular contributor. Before LLMs, this was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.
> In this way, an LLM is a facade of yourself. It helps you project understanding, contemplation, and growth, but it removes the transparency and vulnerability of being a human.
> For a reviewer, it’s demoralizing to communicate with a facade of a human.
> This is because contributing to open source, especially Django, is a communal endeavor. Removing your humanity from that experience makes that endeavor more difficult. If you use an LLM to contribute to Django, it needs to be as a complementary tool, not as your vehicle.
I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.
There is little doubt that if we as an industry fail to establish and defend a healthy culture for this sort of thing, it's going to lead to a whole lot of rot and demoralization.
AI autocomplete and suggestions built into Jira are making our ticket tracker so goddamn spammy that I’m 100% sure that “feature” has done more harm than good.
I don’t think anybody’s tracking the actual net-effects of any of this crap on productivity, just the “vibes” they get in the moment, using it. “I got my part of this particular thing done so fast!”
I believe that to be the case, in part, because not a lot of organizations are usefully tracking overall productivity to begin with. Too hard, too expensive. They might “track” it, but so poorly it’s basically meaningless. I don’t think they’ve turned that around on a dime just to see if the c-suite’s latest fad is good or bad (they never want a real answer to that kind of question anyway).
> just the “vibes” they get in the moment, using it. “I got my part of this particular thing done so fast!”
In the pre-AI era it was much easier to identify people in the workplace who weren't paying attention to their work. To write something about a project you had to at minimum invest some time into understanding it, then think about it, then write something on the ticket, e-mail, or codebase.
AI made it easy to bypass all of that and produce words or code that look plausible enough. Copy and paste into ChatGPT, copy and paste the blob of text back out, click send, and now it's somebody else's problem to decipher it.
It gets really bad when the next person starts copying it into their ChatGPT so they can copy and paste a response back.
There are entire groups of people just sending LLM slop back and forth and hoping that the project can be moved to someone else before the consequences catch up.
Ironically, my favorite use of Claude is removing caring about Jira from my workflow. I already didn't care about it, but now I don't have to spend any time on it.
I treat Jira like product owners treat the code. Which is infinitely humorous to me.
Horrible degrading take. Be the change you want to see. Don't fuel the fire that's burning you.
If something's not happening, something else is making it impractical. Saying this as a 10+ years product manager and R&D person with 20+ more years of engineering on top.
I also had to deal with "managers are just complicating things" or "users are stupid and don't understand anything"; do you think I complained? No, I had engineers barter trust of their ingenuity with trust of my wisdom, and brought them to customer calls and presented them to users almost like royalty, which made them incredibly respectful as soon as they saw what kind of crap users had to deal with.
The industry is broken now, and this is just a response to that. Leadership and product don't have any respect for the code, so why would engineers have any respect for the ticketing process?
That's an unreasonable asymmetric effort demand: "Your code does not matter, but my precious tickets must have elbow grease put into them."
The industry is broken. It's broken in the same sense the railroad industry is broken. It has reached the point of abundance, where we're doing things that don't need doing, that wouldn't get done in an efficient market. But since we're not in an efficient market, there are globs of capital thrown at people building stuff that... doesn't stand a chance of actually making any return on capital.
But while it lasts, we, the glorified machine-minders (just like railroad engineers, well, minded the engines), get paid large lumps of money, through large hordes of managers, arguing over minutiae of conversion optimization, and fundamentally, being paid enough not to try and do something else, perhaps competitive.
And that is broken. Especially for the "smarter of us" - the graduation ceremony of my physics department rings true - we've trained you to discover the secrets of the universe and reach the stars, and most of us will use it... to gain an edge at Lehman Brothers.
(And I think the root of this problem is the abundance of low-risk capital, from people who expect a small return and a pension that lasts for decades in retirement.)
Teach me your ways. I’ve long wished for an actual, human secretary to handle that for me. The context-switching and digging around in a painful, slow interface (I don’t just mean Jira, 100% of the ones project managers find acceptable seem to have this quality) is such a productivity killer, and it’s so easy to miss important things in all the noise.
In the old days, you could assume that a PR was being offered in good faith by someone who was really fixing a problem. You might disagree with the proposed solution and reject the PR as written, but you assumed good faith. AI has flipped that on its head. Now, everyone assumes they are interacting with an AI (or at least a human using one to generate all the content) and that the human has little to no understanding of what they are proposing. Ultimately, the broad use of AI erodes trust. And that’s a shame.
> I am going to try to make these points to my team, because I am seeing a huge influx of AI-generated PRs where the submitter interacts with CodeRabbit etc. by having Claude/Codex respond to feedback on their behalf.
Are people generally unhappy with the outcomes of this? As anecdotally, it does seem to pass review later on. Code is getting through this way.
It's slippery. You're swamped with low-effort PRs, can't possibly test and review all of them. You will become a visible bottleneck, and guess whether it's easier to defend quality vs. "blocking a lot of features" which "seem to work". If you're tied by your salary as a reviewer, you will have to let go, and at the same time you'll suffer the consequences of the "lack of oversight" when things go south.
Just reject a bunch of PRs two days before code freeze. They can go next sprint. In fact ask AI to provide a plausible reason for rejection. If anyone overrides, you are covered.
The best people I've worked with tended to go out of their way to make it as easy for me as possible to critique their ideas or implementations.
They spelled out exactly their assumptions, the gaps in their knowledge, what they have struggled with during implementation, behavior they observed but don't fully understand, etc.
Their default position was that their contribution was not worth considering unless they could sell it to the reviewer - not by assuming their change deserved to get merged because of their seniority or authority, but by making the other person understand how and why it works. Especially so if the reviewer was their junior.
When describing the architecture, they made an effort to communicate it so clearly that it became trivial for others to spot flaws, and attack their ideas. They not only provided you with ammunition to shoot down their ideas, they handed you a loaded gun, safety off, and showed you exactly where to point it.
If I see that level of humility and self-introspection in a PR, I'm not worried, regardless of whether or not an LLM was involved.
But then there's people that created PRs with changes where the stack didn't even boot / compile, because of trivial errors. They already did that before, and now they've got LLMs. Those are the contributions I'm very worried about.
So unlike people in other threads here, I don't agree at all with "If the code works, does it matter how it was produced and presented?". For me, the meta / out-of-band information about a contribution is a massive signal, today more than ever.
This is getting really out of control at the moment, and I'm not exactly sure what the best way to fix it is, but this is a very good post in terms of expressing why this is not acceptable and why the burden is shifting onto the wrong people.
Will humans take this to heart and actually do the right thing? Sadly, probably not.
One of the main issues is that pointing to your GitHub contributions and activity is now part of the hiring process. So people will continue to try to game the system by using LLMs to automate that whole process.
"I have contributed to X, Y, and Z projects" - when they actually have little to no understanding of those projects or exactly how their PR works. It was (somehow) accepted and that's that.
I see the problem every day and am just playing devil's advocate, but the post doesn't really do a good job explaining the "why".
They hint at Django being a different level of quality compared to other software, wanting to cultivate community, and going slowly.
It doesn't explain why LLM usage reduces quality, or why they can't have a strong community with LLM contributions.
The thing is, good developers using LLMs are not a problem. They review the code, they implement best practices, they understand the problems and solutions. The problem is bad developers contributing - just as it always has been. LLMs enable bad developers to contribute more - thus an influx of crap contributions.
The last section focuses on how to use LLMs to make contributions:
> Use an LLM to develop your comprehension.
I really like that, because it gets past the simpler version that we usually see, "You need to understand your PR." It's basically saying you need to understand the PR you're making, and the context of that PR within the wider project.
A decade or more of people copy-pasting rote solutions from StackOverflow only supports the notion that many people will forego comprehension to foster the illusion of competent productivity.
This ain't an AI problem, it's a people problem that's getting amplified by AI.
It was interesting the other day tracing the lineage of Aaron Swartz -> Library Genesis / Sci-Hub -> LLM vendors relying on that work to train their models and sell it back to us all with no royalties or accountability to the original authors of all this painstakingly researched, developed, and recorded human knowledge they’re making billions on.
> Will humans take this to heart and actually do the right thing? Sadly, probably not.
Don’t blame the people, blame the system.
Identifying the problem is just the first step. Building consensus and finding pragmatic solutions is hard. In my opinion, a lot of technical people struggle with the second sentence. So much of the ethos in our community is “I see a problem, and I can fix it on my own by building [X].” I think people are starting to realize this doesn’t scale. (Applying the scaling metaphor to people problems might itself be a blindspot.)
> One of the main issues is that pointing to your GitHub contributions and activity is now part of the hiring process.
If I were hiring at this moment, I'd look at the ratio of accepted to rejected PRs from any potential candidate. As an open source maintainer, I look at the GitHub account that's opening a PR. If they've made a long string of identical PRs across a wide swath of unrelated repos, and most of those are being rejected, that's a strong indicator of slop.
Hopefully there will be a swing back towards quality contributions being the real signal, not just volume of contributions.
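The accepted-vs-rejected heuristic above can be approximated with GitHub's public search API. The search qualifiers (`is:pr`, `author:`, `is:merged`, `is:unmerged`) are real; the username and the hardcoded counts below are purely illustrative. A minimal sketch:

```python
# Sketch of the accepted-vs-rejected PR heuristic using GitHub's search API.
# The qualifiers are real; "someuser" and the example counts are illustrative.
import json
import urllib.parse
import urllib.request


def pr_count(query: str) -> int:
    """Return the total_count of PRs matching a GitHub search query."""
    url = "https://api.github.com/search/issues?q=" + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["total_count"]


def merge_ratio(merged: int, unmerged_closed: int) -> float:
    """Fraction of a contributor's closed PRs that were actually merged."""
    closed = merged + unmerged_closed
    return merged / closed if closed else 1.0


# Live lookups (commented out to avoid unauthenticated rate limits):
# merged = pr_count("is:pr author:someuser is:merged")
# rejected = pr_count("is:pr author:someuser is:unmerged is:closed")
print(merge_ratio(merged=12, unmerged_closed=48))  # 0.2
```

A long run of near-identical PRs across unrelated repos combined with a low ratio like this is the pattern described above; where to draw the cutoff is a judgment call, not something the API decides for you.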
I like the idea of donating money instead of tokens. I think Django contributors are likely to know how to spend those tokens better than I might, as I am not a Django core contributor.
Some projects ( https://news.ycombinator.com/item?id=46730504 ) are setting a norm to disclose AI usage. Another project simply decided to pause contributions from external parties ( https://news.ycombinator.com/item?id=46642012 ). Instead of accepting drive-by pull requests, contributors have to show proof of work by working with one of the other collaborators.
There's definitely an aspect here where the commons, the goodwill effort of collaborators, is being infringed upon by external parties who are unintentionally attacking their time and attention with low-quality submissions that are now cheaper than ever to generate. It may be necessary to move to a more private community model of collaboration ( https://gnusha.org/pi/bitcoindev/CABaSBax-meEsC2013zKYJnC3ph... ).
"For a reviewer, it’s demoralizing to communicate with a facade of a human."
This is so important. Most humans like communicating with other humans. For many (note, I didn't say all) open source collaborators, this is part of the reward of collaborating on open source.
Making them communicate with a bot pretending to be a human instead removes the reward and makes it feel terrible, like the worst job nobody would want. If you spent any time at all actually trying to help the contributor understand and develop their skills, you just feel like an idiot. It lowers the patience of everyone in the entire endeavor, ruining it for everyone.
It's like every new innovation at this point is exacerbating the problem of us choosing short term rewards over long time horizon rewards. The incentive structure simply doesn't support people who want to view things from the bird's eye view. Once you see game theory, you really can't unsee it.
Great message but I wonder if the people who do everything via LLM would even care to read such a message.
And at what point is it hard/impossible to judge whether something is entirely LLM-generated or not? I sometimes struggle a lot with this, being an OSS maintainer myself.
Perhaps we should start making LLM-only open source projects (clearly marked as such): created by LLMs, open for LLM contributions, with some clearly defined protocols. It would be interesting to see where that would go. I imagine it could start as a project with a simple instruction file to include in your own project, trying to find abstractions which could be useful to others as a library, and looking for specific kinds of libraries. Some people want to help others even if they are effectively sharing money+time rather than their skill.
Although I'm afraid a big part of these LLM contributions may be people trying to build their portfolio. Being a known contributor to some project sounds better than having some LLM-generated code under your name.
> Before LLMs, [high quality code contribution] was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.
Now my twist on this: This same spirit is why local politics at the administrative level feels more functional than identity politics at the national level. The people that take the time to get involved with quotidian issues (e.g. for their school district) get their hands dirty and appreciate the specific constraints and tradeoffs. The very act of digging in changes you.
Curious what Simon thinks about using an LLM to work on Django...
I've used an LLM to create patches for multiple projects. I would not have created said work without LLMs. I also reviewed the work afterward and provided tests to verify it.
I agree with the sentiment but I am not sure the best way to go forward.
Suppose I encounter a bug in a FOSS library I am using. Suppose then that I fix the bug using Claude or something. Suppose I then thoroughly test it and everything works fine. Isn’t it kind of selfish to not try and upstream it?
AI so often doesn't actually increase productivity - it just shifts the burden of work from the person creating to the person who has to check and evaluate that creation.
In this case, offloading yet more work onto the maintainers of the package, because you can't be bothered, but still want credit.
I love Django. I've been using it professionally and on side projects extensively for the past 10 years. Plus I maintain(ed) a couple of highly used packages for Django (django-import-export and django-dramatiq).
Last year, I had some free time to try to contribute back to the framework.
It was incredibly difficult. Difficult to find a ticket to work on, difficult to navigate the codebase, difficult to get feedback on a ticket and get it approved.
As such, I see the appeal of using an LLM to help first-time contributors. If I had Claude Code back then, I might have used it to figure out the bug I was eventually assigned.
I empathize with the author's argument, though. God knows what kind of slop they are served every day.
This is all to say, we live in a weird time for open source contributors and maintainers. And I only wish the best for all of those out there giving up their free time.
Don't have any solutions ATM, only money to donate to these folks.
Think most people recognize, though, that AI can generate more than humans can review, so the model does need to change somehow. Either less AI on the submitting side or more on the reviewing side (if that’s even viable).
With my type of development, I haven't run into the types of things, directly, that you very well explained, but I have personally run into the pain, I confess, of being OVERLY reliant on LLMs. I continue to try and learn from those hard lessons and develop a set of best practices in using AI to help me avoid those pain points in the future. This growing set of best practices is helping me a lot. The reason that I liked your article is because it confirmed some of those best practices that I have had to learn the hard way. Thanks!
I totally get this, and I also think it's now the case that making a PR of any significant complexity, for a project you're not a maintainer of, isn't necessarily giving that project anything of value. That project's maintainers can run the same prompts you are running - and if they do, they'll do it with better oversight and understanding. If you want to help, then maybe it's more useful to just hash out the plan that'll be given to an AI agent by a maintainer.
By what metric is “the level of quality is much, much higher” in the Django codebase? ‘cause other than the damn thing actually working, the primary metric of a codebase being high quality is how easy it is to contribute to. And evidently, it’s not.
The solution to this problem is for LLMs to get better at producing code and descriptions that don't look LLM-generated.
It's possible to prompt for this already, but obviously any of the big AI companies that want to increase engagement in their coding agent, and want to capture the open source market, should come up with a way for the LLM to produce unique, but still correct, code so that it doesn't look LLM-generated and can evade these kinds of checks.
It is not pride in having your name associated with an open source project; it is pride that the code works and the change is efficient. The reviewer should be on top of that.
and I hope an army of OpenClaw agents calls out the discrimination, so gatekeepers recognize that they have to coexist with this species
I feel like open source is taking the wrong stance here. There’s a lot of gatekeeping, first. And second, this approach is like trying to stop a tsunami with an umbrella.
AI is here to stay. We can’t stop it, however much we try.
I feel the successful OS projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.
Beggars can't be choosers. I decide how and what I want to donate. If I see a cool project and I want to change something in (what I think is) an improvement, I'll clone it, have CC investigate the codebase and make the change I want, test it, and if it works nicely I'll open a PR explaining why I think this is a good change.
If the maintainers don't want to merge it for whatever reason, that's fine and the nature of open source, but I think it's petty to tell that same user who opened the PR that they should have donated money instead of tokens.
> The industry is broken now, this is just a response to that.
No, your behavior is the cause of that.
The entire industry isn't broken. There are good company cultures and bad company cultures just like always.
At least own up to what you're doing. Don't blame "the industry" when you're the one doing the thing.
> No, your behavior is the cause of that.
My behavior is a reaction to the environment I am in. And currently the environment is push slop code as fast as possible. So being able to claw back just a little bit of my time from the people pushing this stupidity is a small pro in a sea of cons.
Petty and getting nowhere. Everyone loses. How about product and engineers also disrespect sales, and sales disrespects customers and everyone else.
I really don't get why this is even a question. Good people do good stuff, and bad people make bad companies.
It's laughably simple to do. I haven't touched the Jira UI in months.
Just like "etiquette" accomplishes no purpose except letting people easily figure out who put the effort into learning it vs. who didn't.
Back then it distinguished by class, but ironically, today, when it's so easy to learn, it finally distinguishes by merit.
Enshittification Enterprise Edition.
They want AI to write all code but also still be able to fire humans for failure, because an AI can't be blamed right now.
Boy, I can't wait for this employment norm: fired because you weren't allowed to take the time to review important code, but "you are responsible".
I wish executives were required to be that "responsible".
> Will humans take this to heart and actually do the right thing? Sadly, probably not.
I would like to think that individuals who are interested in joining an OSS community will.
And I’m 100% sure there are dozens of startups working on that exact problem right this second.
Some projects ( https://news.ycombinator.com/item?id=46730504 ) are setting a norm of disclosing AI usage. Another project simply decided to pause contributions from external parties ( https://news.ycombinator.com/item?id=46642012 ). Instead of accepting drive-by pull requests, contributors have to demonstrate proof of work by collaborating with one of the other maintainers.
Another project has stopped letting users open issues directly ( https://news.ycombinator.com/item?id=46460319 ).
There's definitely an aspect here where the commons of collaborator goodwill is being infringed upon by external parties who are, perhaps unintentionally, attacking their time and attention with low-quality submissions that are now cheaper than ever to generate. It may be necessary to move to a more private community model of collaboration ( https://gnusha.org/pi/bitcoindev/CABaSBax-meEsC2013zKYJnC3ph... ).
edit: Also, I applaud the Debian project for their recent decision to defer and think harder about the nature of this problem. https://news.ycombinator.com/item?id=47324087
This is so important. Most humans like communicating with other humans. For many (note, I didn't say all) open source collaborators, this is part of the reward of collaborating on open source.
Making them communicate with a bot pretending to be a human removes that reward and makes it feel terrible, like a job nobody would want. If you spent any time at all actually trying to help the contributor understand and develop their skills, you just feel like an idiot. It erodes everyone's patience and ruins the endeavor for all involved.
Although I'm afraid a big part of these LLM contributions may be people trying to build their portfolio: "contributor to a well-known project" sounds better than having some LLM-generated code sitting under your own name.
> Before LLMs, [high quality code contribution] was easier to sense because you were limited to communicating what you understood. With LLMs, it’s much easier to communicate a sense of understanding to the reviewer, but the reviewer doesn’t know if you actually understood it.
Now my twist on this: This same spirit is why local politics at the administrative level feels more functional than identity politics at the national level. The people that take the time to get involved with quotidian issues (e.g. for their school district) get their hands dirty and appreciate the specific constraints and tradeoffs. The very act of digging in changes you.
I've used an LLM to create patches for multiple projects. I would not have created said work without LLMs. I also reviewed the work afterward and provided tests to verify it.
Suppose I encounter a bug in a FOSS library I am using. Suppose then that I fix the bug using Claude or something. Suppose I then thoroughly test it and everything works fine. Isn't it kind of selfish not to try to upstream it?
It was so easy prior to AI.
In this case, offloading yet more work onto the maintainers of the package, because you can't be bothered, but still want credit.
Last year, I had some free time to try to contribute back to the framework.
It was incredibly difficult. Difficult to find a ticket to work on, difficult to navigate the codebase, difficult to get feedback on a ticket and approved.
As such, I see the appeal of using an LLM to help first-time contributors. If I had had Claude Code back then, I might have used it to figure out the bug I was eventually assigned.
I empathize with the author's argument, though. God knows what kind of slop they are served every day.
This is all to say, we live in a weird time for open source contributors and maintainers. And I only wish the best for all of those out there giving up their free time.
Don't have any solutions ATM, only money to donate to these folks.
I think most people recognize, though, that AI can generate more than humans can review, so the model does need to change somehow: either less AI on the submitting side or more on the reviewing side (if that's even viable).
> it’s such an honor to have your name among the list of contributors
I can't help but feel there's something very, very important in this line for the future of dev.
It's possible to prompt for this as well, but obviously any of the big AI companies that want to increase engagement with their coding agent and capture the open source market will come up with a way for the LLM to produce unique, but still correct, code so that it doesn't look LLM-generated and can evade these kinds of checks.
The pride is not in having your name associated with an open source project; it is pride that the code works and the change is efficient. The reviewer should be on top of that.
and I hope an army of OpenClaw agents calls out the discrimination, so gatekeepers recognize that they have to coexist with this species
I feel the successful open source projects will be the ones embracing the change, not stopping it. For example, automating code reviews with AI.
If the maintainers don't want to merge it for whatever reason, that's fine and the nature of open source, but I think it's petty to tell that same user who opened the PR that they should have donated money instead of tokens.