SDL bans AI-written commits (github.com)

by davikr 138 comments 132 points

[−] manoDev 29d ago
We’ll need “Organic software” seal of approval soon.
[−] charlie90 28d ago
That would be a negative signal for me personally. It shows the authors care more about process than results.
[−] sph 28d ago
Journey before destination. In both Eastern and Western philosophy, caring about the results and not the process is a recipe for unhappiness.

Not to be condescending, but everyone goes through this phase, then they grow up; it's literally what separates the amateur from the master.

[−] djhn 28d ago
Which approach are you saying is a phase people grow out of?
[−] em-bee 28d ago
the problem is that with AI code the results are either not verified by a human, or verifying them is more work than writing them from scratch.

i want all my software verified by a human; even an inexperienced human is more reliable than AI at this point. (this may change, but it hasn't yet)

[−] krapp 28d ago
The process is what creates the results.
[−] hackable_sand 28d ago
Process is the result
[−] vrighter 25d ago
most of today's problems in this field exist because upper management got swindled into thinking that the process doesn't matter, as long as something comes out the other end. It doesn't even need to work properly.

But this shitty state of software nowadays is mostly due to only caring about the result and not the process.

To be clear: this existed even before AI, and also led to the proliferation of electron and its ilk.

[−] registeredcorn 28d ago
Not really. The opposite is far, far more desirable in my eyes.

Example:

* Do I care if an LLM was used to determine the volume of my doorbell? Not particularly.

* Do I care if an LLM was used to generate code to unlock my front door remotely? Absolutely!

I need a warning label cautioning me of the risks associated with generated materials in the second case. In the first, I don't care in the slightest whether one is present, because the associated risks are inherently lesser.

Batteries, not chicken breasts.

[−] giancarlostoro 28d ago
We'll get right on it after we stop people from hacking computers forever.
[−] dim13 28d ago
Had same idea some time ago: https://imgur.com/a/11StYkd ;)
[−] a34729t 28d ago
Do we need a campaign for real humans? Because I wouldn't be opposed to that!
[−] whateveracct 28d ago
"is this library natty?"
[−] LocalH 28d ago
Can we implant an upgraded 10NES chip inside every human at birth so that they can handshake to prove that they're human? /s
[−] luxuryballs 28d ago
I don't use public repos very often, but I had toyed with the idea of creating a dedicated git user specifically for an agent to use, so its commits wouldn't be under my account. Is this not standard practice already? It seems obvious to me: people could then tell which parts of my public project were commits managed by an agent.
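For what it's worth, plain git already supports this: the author identity is just per-repo config, so an agent can commit under its own name and its work can be filtered out later. A minimal sketch, where the "repo-agent" name and email are hypothetical placeholders:

```shell
set -e
# Demo repo in a temp dir so nothing real is touched
repo=$(mktemp -d)
cd "$repo" && git init -q .

# Per-repo identity for the agent (does not touch your global config)
git config user.name  "repo-agent"
git config user.email "agent@example.invalid"

echo "agent change" > notes.txt
git add notes.txt
git commit -q -m "agent: add notes"

# Later, list only the agent's commits:
git log --author="repo-agent" --oneline
```

A step further would be having the agent append a `Co-authored-by:` trailer to each commit message, which forges like GitHub also surface in the UI.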
[−] throw5 29d ago
Why are these projects still on Github? Isn't it better to move away from Github than to go through all these shenanigans? This AI slop-spam nonsense isn't going to stop. Github is no longer the "social network" for software dev. It's just a vehicle to shove more and more Copilot stuff at you.

The userbase is also changing. There are vast numbers of new users on Github who have no desire to learn the architecture or culture of the project they are contributing to. They just spin up their favorite LLM and make a PR out of whatever slop comes out.

At this point, why not move to something like Codeberg? It's based in Europe and run by a non-profit. There's a good chance it won't suffer the same fate as a greedy, corporate-owned platform.

[−] raincole 29d ago

> It's based in Europe. It's run by a non-profit

The main SDL maintainer is paid by a US for-profit company, Valve. They don't necessarily share your EU = automatically good attitude.

But anyway, if Codeberg really takes off it'll be flooded with AI bots as well. All popular sites will.

[−] embedding-shape 29d ago

> But anyway, if Codeberg really takes off it'll be flooded with AI bots as well. All popular sites will.

History might prove me wrong on this one, but I really believe that the platforms pushing people to use LLMs as much as possible for everything (Microsoft-GitHub) will be more flooded by AI bots than the platforms focused on just hosting code (Codeberg).

[−] throw5 29d ago

> The main SDL maintainer is paid by a US for-profit company, Valve. They don't necessarily share your EU = automatically good attitude.

I'm not sure how one follows from the other. I am paid by a US for-profit company, but I still think the EU has done some things better. People's beliefs are not determined by the company they work for; it would be a very sad world if people couldn't think outside the bubble of their employers.

[−] kdhaskjdhadjk 29d ago
In an "existential war" type situation, people who don't wave the flag and shout the slogans of their "home" country, and who have known sympathies for other places (any at all), will automatically be suspect, and their names will end up in a database for later use.

You can be assured that the leanings of Valve are always going to be USA, USA, USA, for reasons that will be clear when you follow the chain of ownership to its source.

[−] hurricanepootis 29d ago
Pretty sure Gabe's been partying it up in New Zealand ever since he got stuck there because of Covid.
[−] anymouse123456 29d ago

> There are vast numbers of new users on Github who have no desire to learn the architecture or culture of the project they are contributing to.

The Eternal September eventually comes for us all.

[−] fuhsnn 29d ago
TinyCC's mob branch on repo.or.cz just got trolled with AI commits today. Nowhere is safe it seems.
[−] skybrian 28d ago
Since using AI costs money, some way of contributing AI patches only when asked might make sense here: let the project maintainers decide what's worth attempting to solve with AI.

Suppose there were a website that helped would-be contributors of AI assistance to match up with projects that want help?

[−] juped 29d ago
While this is a perfectly fine policy in the space of possible policies (it's probably what I'd pick, for what it's worth) the arguments being given for it leave a bad taste in my mouth.
[−] level09 28d ago
I would judge commits by what they do, not by who wrote them.
[−] sph 29d ago
Good move, and a good reminder of how much of an echo chamber Hacker News is on AI matters.

In here, and in big tech at large, it's touted as the unavoidable future: either you adapt or you die. LLMs are always a few months away from the (u|dys)topia of never having to write code again. Elsewhere, especially in fields where craft and artistry are valued (e.g. game development), AI is synonymous with cutting corners, poor quality, and, to put it simply, slop. Sure, we're now inundated by people with a Claude subscription and a dream of creating the next Minecraft, but no one is taking them seriously. They're not making the game forum front pages, that's for sure.

Personally, I have eased my existential worries a little by pivoting away from big tech, where the only metric is lines of code committed per day, and toward those fields where human craftsmanship is still king.

[−] spicyusername 29d ago
On the one hand open source projects are going to be overrun with AI code that no one reviewed.

On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.

So many processes are no longer sufficient for a world where thousands of lines of working code are easy to conjure out of thin air. Already-strained open source review processes are definitely among them.

I get wanting to blanket reject AI generated code, but the reality is that no one's going to be able to tell what's what in many cases. Something like a more thorough review process for onboarding trusted contributors, or some other method of cutting down on the volume of review, is probably going to be needed.

[−] SuperV1234 28d ago
Incredibly dumb and unenforceable policy. What matters is human review and correctness.

You're never going to be able to prove that a contributor didn't ask an LLM to help them make some changes, or review/optimize changes that were made.

Capable people who like to get stuff done will use LLMs, review their work carefully, and never disclose it. And you'll never be able to tell.

People who generated slop PRs won't even read your policy before submitting a slop PR.

[−] ratrace 29d ago
[dead]
[−] pelasaco 29d ago
What’s the point? People will just fork it and improve it with AI anyway. On the other hand, it would be an interesting experiment to watch how the original and the fork diverge over time, especially in terms of security discoveries and feature development.
[−] democracy 29d ago
tbh, if the change works and the code is OK, who cares what was used to build it, ChatGPT or a C++ code generator? If the code looks like crap, reject the PR. Why the drama?
[−] sscaryterry 29d ago
Stopping a flood with a tissue.
[−] ecopoesis 29d ago
What’s next? Are they going to forbid the use of IntelliSense? Maybe IDEs in general?

Why not just specify all contributions must be written with a steady hand and a strong magnet.

[−] ramon156 29d ago

> Given that the source of code generated by AI is unknown, we can't accept it under the Zlib license.

So what about SO code snippets? I'm not here to take a stance for AI, but this thread is leaning biased.

To address the elephant in the room: LLM-assisted PRs have a chance of being lower quality. People don't feel obligated to review generated code; when writing it manually, you are more inclined to review what you're submitting.

I don't get why these conversations always center on opinion rather than facts. I do share the concerns about the ethics, the fact that it's bound to get monopolized (unless GLM becomes SOTA soon), and the harm to the environment. But that's my opinion, and it shouldn't interfere with what others do. I don't scoff at people eating meat; I let them be.

The issue is real, the solution is not.

[−] reactordev 29d ago
People who can wield AI properly have no use for SDL at all. It’s a library for humans to figure out platform code. AI has no such limitations.