> people will finally understand that security bugs are bugs, and that the only sane way to stay safe is to periodically update, without focusing on "CVE-xxx"
Linux devs keep making that point, but I really don't understand why they expect the world to embrace that thinking. You don't need to care about the vast majority of software defects in Linux, save for the once-in-a-decade filesystem corruption bug. In fact, there is an incentive not to upgrade when things are working, because it takes effort to familiarize yourself with new features, decide what should be enabled and what should be disabled, etc. And while the Linux kernel takes compatibility seriously, most distros do not and introduce compatibility-breaking changes with regularity. Binary compatibility is non-existent. Source compatibility is a crapshoot.
In contrast, you absolutely need to care about security bugs that allow people to run code on your system. So of course people want to treat security bugs differently from everything else and prioritize them.
I think part of it is that, especially at the kernel level, it can be hard to really categorise bugs as security or not-security (it has happened in the past that an exploit used a bug that was not thought to be a security problem). There's good reason to want to avoid updates which add new features and such (because such changes can introduce more bugs), but Linux has LTS releases which contain only bug fixes (regardless of security impact) for exactly that situation, and in that case you can just stay up to date with very minimal risk of disruption.
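As a rough illustration of what "just stay current on the LTS series" can mean in practice, here is a minimal sketch that compares the running kernel against the latest point release of its longterm branch. It assumes kernel.org's releases.json feed and its "moniker"/"version" fields; treat the details as an approximation, not a recipe.

    import json
    import platform
    import urllib.request

    # kernel.org publishes a JSON feed of current releases; the exact field
    # names ("releases", "moniker", "version") are an assumption here.
    RELEASES_URL = "https://www.kernel.org/releases.json"

    def latest_longterm_versions():
        """Map each longterm (LTS) series, e.g. '6.6', to its newest point release."""
        with urllib.request.urlopen(RELEASES_URL) as resp:
            feed = json.load(resp)
        latest = {}
        for rel in feed.get("releases", []):
            if rel.get("moniker") == "longterm":
                version = rel.get("version", "")
                series = ".".join(version.split(".")[:2])
                latest[series] = version
        return latest

    def main():
        running = platform.release()                 # e.g. "6.6.30-generic"
        series = ".".join(running.split(".")[:2])
        newest = latest_longterm_versions().get(series)
        if newest is None:
            print(f"{running}: series {series} is not a current longterm branch")
        elif running.startswith(newest):             # crude comparison, good enough here
            print(f"{running}: up to date with {newest}")
        else:
            print(f"{running}: behind, latest {series} point release is {newest}")

    if __name__ == "__main__":
        main()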
And this is the best-case scenario. Because once updates become opt-out it simply becomes an attack vector of another type.
If the updated code is not open source, you are blindly trusting that the update itself isn't some other kind of remote code execution happening without you knowing it.
As blind as my belief that Asia exists, because I haven't personally navigated there. Hell, I've used electricity (using it right now), but I couldn't do the experiments you need to do to get myself to an 1850s level of understanding of how it works, much less our current level.
I trust that Linux has a process. I do not believe it is perfect. But it gives me a better assurance than downloading random packages from PyPi (though I believe that the most recent release of any random package on PyPi is still more likely safe than not--it's just a numbers game).
https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
I get what you are saying but as you said, if you are already under attack you can't trust your own computer, you just hope that you aren't downloading another exploit/bogus update. Real software I imagine is not so easy to pwn so completely but I don't know.
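To put a number on the "numbers game" point about PyPi above: one cheap hedge is to skip anything released in the last couple of weeks, since malicious uploads tend to get caught and yanked quickly. A sketch using PyPI's public JSON API; the 14-day window and the way releases are compared are arbitrary choices, not a real security control.

    import json
    import urllib.request
    from datetime import datetime, timedelta, timezone

    COOLDOWN = timedelta(days=14)   # arbitrary waiting period before trusting a release

    def cooled_down_version(package):
        """Newest release of `package` whose first upload is older than COOLDOWN."""
        url = f"https://pypi.org/pypi/{package}/json"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        cutoff = datetime.now(timezone.utc) - COOLDOWN
        old_enough = []
        for version, files in data["releases"].items():
            if not files:
                continue
            first_upload = min(
                datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
                for f in files
            )
            if first_upload <= cutoff:
                old_enough.append((first_upload, version))
        if not old_enough:
            return None
        # Picks by upload time rather than parsing version numbers; close enough
        # for a sketch.
        return max(old_enough)[1]

    if __name__ == "__main__":
        print(cooled_down_version("requests"))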
>it takes effort to familiarize yourself with new features, decide what should be enabled and what should be disabled, etc.
What features? I update my rolling release once a month and nothing changes for the last 10 ish years. Maybe pipewire/pulse thingy was annoying and bluetooth acted a bit. With docker on rpi I even upgrade the whole zoo of things by just rebooting.
exactly. it is something you genuinely never need to think about, except for once in a blue moon. or, more like once in a leap year. and completely unmeasured by the "we will update it when our [horrific] business processes say it's okay" crowd is the cumulative angst of shit being broken FOR NO REASON. and that is to say nothing of the security vulnerabilities and all the other reasons that exist for updating your software.
The slower you update, & the longer you try to maintain a "long-term support" branch, the harder updates get. Gradual changes with a rolling release system are much, much simpler than the massive step changes of a "stable" distro.
Or just use an off-brand RHEL I guess.
> Linux devs keep making that point, but I really don't understand why they expect the world to embrace that thinking. You don't need to care about the vast majority of software defects in Linux, save for the once-in-a-decade filesystem corruption bug.
The point is that all of those bugs are now trivial to exploit and so will be exploited
but this simply isn't true. everyone thinks "oh well my use cases will never hit any of those bugs", but then there is one person in your org who hits that particular bug and it drives them batty. it is a retro-justification for doing things the wrong way "For the Right Reason". like... no one would be like "NEVER change the oil in your car unless the light goes off". we're not talking about Micro$oft here, where you literally have to pray to your deity of choice every time you click the update button. we are talking about the Linux kernel. i do not even need a thumb to count on one hand the amount of times a kernel update has significantly impacted my life. whereas probably 50% of my Windows updates break at least one of my peripherals, and OS X isn't exactly much better these days.
Details are important, but my mental model has settled as: security bugs are being used the way politicians use "think of the children". It's treated as an auto-win button. To me there are other things that compete with them in priority (performance, functionality, friction, convenience, compatibility, etc.); it's one factor to weigh. In some cases, I am asking: "Why is this program or functionality an attack surface? Why can someone on the internet write to this system?"
Many times, there will be a system whose core purpose is to perform some numerical operations, display things in a UI, accept user input via buttons, etc., and I'm thinking "This has a [mandatory? automatic? people are telling me I have to do this or my life will be negatively affected in some important way?] security update? There's a vulnerability?" I think: someone really screwed up at a foundational requirements level.
> In some cases, I am asking: "Why is this program or functionality an attack surface? Why can someone on the internet write to this system?"
With the help of LLMs, every piece of software not in a vault has an attack surface. LLMs are quite good at finding different, non-obvious paths, and you can easily test their exploit candidates.
I suspect it's just an excuse for Linux's generally poor security track record.
1. That's bollocks. Obvious bullshit. All software doesn't have the same security track record. Do you also think sendmail and seL4 have an equally poor security track record?
2. Even if everything did have an equally poor security track record, why would that mean security bugs are no more significant than any other bug?
Honestly I'm dubious you've thought about this at all.
I didn't say "all software has the same security track record". seL4 has a much better track record than Sendmail by dint of not doing very much. I'm pretty comfortable with what people do and don't think about how much thinking I've done on this topic. Done much work with L4?
Without even wading into trying to rank projects by track record, it's worth noting that "Everything has a poor security track record" and "All software doesn't have the same security track record" are not contradictory statements.
As tptacek caught on to, I was joking, since OpenBSD's published claim is such a convenient comparison to the idea upthread that Linux specifically had a poor track record.
The last paragraph is interesting: "Overall I think we're going to see a much higher quality of software, ironically around the same level than before 2000 when the net became usable by everyone to download fixes. When the software had to be pressed to CDs or written to millions of floppies, it had to survive an amazing quantity of tests that are mostly neglected nowadays since updates are easy to distribute."
Was software made before 2000 better? And, if so, was it because of better testing or lower complexity?
>software that used to follow the "release-then-go-back-to-cave" model will have to change to start dealing with maintenance for real, or to just stop being proposed to the world as the ultimate-tool-for-this-and-that because every piece of software becomes a target.
Actually, some software is running the water-heater/heat-pump system in my basement. There is a small blue-lit screen; it keeps logs of consumed electricity/produced heat and can draw small histograms. Of course there is a smart option to make it internet-connected. That’s the kind of functionality I’m glad is disabled by default and not required for it to operate. If possible, I’ll never upgrade it. Release-then-go-back-to-the-cave definitely has its place in many actual physical products in the world.
I’ll deal with enough WTF software security in my daily job over the course of my career. Being spared the cognitive load of some appliance being turned into a brick, because the company that produced it or some script-kiddy-on-AI-steroids decided it was desirable, means more time to explore whatever else the cosmos allows.
>people will finally understand that security bugs are bugs, and that the only sane way to stay safe is to periodically update, without focusing on "CVE-xxx"
The problem is that the very same tools, I expect, are behind the supply chain attacks that seem to be particularly notorious recently. No matter where you turn, there's an edge to cut you on that one.
> I don't know how long this pace will last. I suspect that bugs are reported faster than they are written, so we could in fact be purging a long backlog
Hopefully these same tools will also help catch security bugs at the point they're written. Maybe one day we'll reach a point where the discovery of new, live vulnerabilities is extremely rare?
There's no way the AI is a priori understanding codebases with millions of LoC now. We've tried that already; it failed. What it is doing now is setting up its own extremely powerful test harnesses, gathering the information it needs, and testing efficiently.
Sure, its semantic search is already strong, but the real lesson we've learned from 2025 is that tooling is way more powerful.
That's cool! As someone who's dabbled in kernel dev for his job, I've always wanted to learn how kernel devs properly test stuff reliably, but it seemed hard. Like on real, varied hardware, and not just manual testing shit.
Honestly, AI has only helped me become a better SWE because no one else has the time or patience to teach me.
This is "the bomber will always get through" mentality for the modern era. You will invent air defences. You will write fewer bugs. You will leave code that doesn't have bugs alone, so it gains no more bugs. You will build software that finds bugs as easily as you think "enemies" find bugs, and you'll run it before you release your code.
What's the saying? Given many eyes, all bugs are shallow? Well, here are some more eyes.
I'd be very curious to know what class of vulnerability these tend to be (buffer overrun, use after free, misset execute permissions?), and whether, armed with that knowledge, a deterministic tool could reliably find or prevent all such vulnerabilities. Can linters find these? Perhaps fuzzing? If the code were written in a more modern language, is it still likely that these bugs would have happened?
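On the fuzzing part of that question: even a dumb generational fuzzer with no coverage feedback will shake out a lot of the memory-safety class of bug. A toy sketch, where the made-up parse_packet function stands in for real parsing code (real kernel fuzzing, e.g. syzkaller, is far more sophisticated, but the principle is the same):

    import random

    def parse_packet(data):
        """Hypothetical target: 1-byte length, payload, then a 1-byte checksum."""
        length = data[0]
        payload = data[1:1 + length]
        checksum = data[1 + length]          # off-by-one style bug: can read past the end
        if sum(payload) % 256 != checksum:
            raise ValueError("bad checksum")  # the error path we expect and handle
        return payload

    def mutate(seed):
        """Apply a few random byte-level mutations to a known-good input."""
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            op = random.choice(("flip", "insert", "delete"))
            if op == "flip" and data:
                data[random.randrange(len(data))] ^= 1 << random.randrange(8)
            elif op == "insert":
                data.insert(random.randrange(len(data) + 1), random.randrange(256))
            elif op == "delete" and data:
                del data[random.randrange(len(data))]
        return bytes(data)

    def fuzz(rounds=100_000):
        seed = bytes([4]) + b"abcd" + bytes([sum(b"abcd") % 256])  # valid packet
        failures = set()
        for _ in range(rounds):
            sample = mutate(seed)
            try:
                parse_packet(sample)
            except ValueError:
                pass                          # handled error, not a bug
            except Exception as exc:          # anything else counts as a "crash"
                failures.add((type(exc).__name__, sample))
        for name, sample in sorted(failures)[:10]:
            print(f"{name} on {sample!r}")
        print(f"{len(failures)} distinct crashing inputs out of {rounds} runs")

    if __name__ == "__main__":
        fuzz()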
> I suspect that bugs are reported faster than they are written, so we could in fact be purging a long backlog (and I hope so).
It's hard for me to imagine how this wouldn't be true. This isn't the "new normal", everyone is just running it into the ground and wringing every drop they can out of it right now.
It would be interesting to "backtest" how much higher the rate of vulnerability discovery would have been if all these new vulnerabilities were discovered in near real time as they were created, since that would be more predictive of the "new normal", in my opinion. I suspect it's not very significant: we're flushing a 20+ year backlog, and generally the rate at which vulnerabilities are created is lower today.
Seems supported by this as well: https://www.first.org/blog/20260211-vulnerability-forecast-2...
Interesting that it's been higher than forecast since 2023. Personally I'd expect that trend to continue given that LLMs both increase bugs written as well as bugs discovered.
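To make the backlog-flush intuition above concrete, here is a toy model with entirely invented numbers: a pool of latent bugs accumulated over ~20 years gets searched at a high rate while new bugs keep arriving more slowly, so the reports-per-year curve spikes and then settles near the creation rate, which would be the "new normal".

    def simulate(years=10,
                 backlog=4000.0,            # latent bugs at year 0 (invented)
                 created_per_year=150.0,    # new bugs written per year (invented)
                 find_fraction=0.45):       # share of the remaining pool found each year
        pool = backlog
        for year in range(1, years + 1):
            pool += created_per_year
            found = pool * find_fraction
            pool -= found
            print(f"year {year:2d}: reported {found:7.1f}, latent pool {pool:7.1f}")

    if __name__ == "__main__":
        simulate()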
Why don't we just PageRank GitHub contributors? Merged PRs approved by other quality contributors improve your rank. New PRs get tagged by a bot with the rank of the submitter. Add more scoring features (account age? employer?) as desired.
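A toy of what that could look like, with networkx doing the ranking; the sample approvals, the edge direction (approver passes rank to author), and the tagging bot are all assumptions for illustration, not an existing GitHub feature.

    import networkx as nx

    # (approver, author) pairs for merged, approved PRs -- hypothetical data.
    # A real system would pull these from the GitHub API.
    approvals = [
        ("alice", "bob"),
        ("alice", "carol"),
        ("bob", "carol"),
        ("carol", "dave"),
        ("dave", "carol"),
        ("mallory_sock1", "mallory"),   # sockpuppet approvals barely move the needle
        ("mallory_sock2", "mallory"),
    ]

    graph = nx.DiGraph()
    graph.add_edges_from(approvals)

    # PageRank: an approval passes a share of the approver's rank to the author,
    # so rank from well-regarded reviewers is worth more than rank from nobodies.
    scores = nx.pagerank(graph, alpha=0.85)
    for user, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{user:15s} {score:.3f}")

    # A bot could then tag new PRs with the submitter's score, so maintainers
    # triage high-rank submitters first and rate-limit unknown accounts.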
It's interesting to hear from people directly in the thick of it that these bug reports are apparently gaining value and are no longer just slop. Maybe there is hope for a world where AI helps create bug free software and doesn't just overload maintainers.
The slopocalypse is here, but I would propose that open source maintainers get free access to AI tools from these big companies, so at least they can aggregate the problems and have some level of automation in the process.
To me, this seems like something that would make sense for the whole dev community to push for.
I wish they wouldn’t call it “AI slop” before acknowledging that most of the bugs are correct.
Let’s bring a bit of nuance between mindless drivel (e.g. LinkedIn influencer posts, spammed issues that are just LLMs making mistakes) and using LLMs to find/build useful things.
"On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us."
Reports being written faster than bugs being created? Better quality software than before the 2000s?
Oh my sweet summer child.
This is some seriously delusional cope from someone who drank the entire jug of kool-aid.
I’d love to be proven wrong but the current trajectory is pretty plain as day from current outcomes. Everything is getting worse, and everyone is getting overwhelmed and we are under attack even more and the attacks are getting substantially more sophisticated and the blast radius is much bigger.
An AI enthusiast having a breathless and predictive position on the future of the technology? No way! It's almost like Wall Street is about to sour on the whole stack and there is a concerted effort to artificially push these views into the conversation to get people on board.
Then again, I'm a known crank and aggressive cynic, but you never really see any gathered data backing these points up.