Will the sudden flood of AI-discovered security fixes overwhelm distros like Debian that backport security fixes to old software versions?
Posted by we_are_mammals@reddit | linux | View on Reddit | 158 comments
If Firefox is any indication, the new AI discovers two years' worth of vulnerabilities in a short period of time. Firefox seems to be an early adopter of this technology, so we should soon see a huge influx of newly discovered vulnerabilities across various packages.
It seems like this might overwhelm the distro security teams that backport the fixes to old software versions, like what Debian is doing. They'd have to do two years' worth of work very quickly, or they risk leaving old packages in their distributions exposed.
mmmboppe@reddit
is the AI crunching the OpenBSD codebase as well?
ckg603@reddit
The thing is: those vulnerabilities were always there. You have to realize that some of them were known to a few individuals, and used by those individuals to establish novel exploits. We are going to massively improve the quality of countless systems because of automated testing.
lordcirth@reddit
Yeah, but in the meantime, it's going to be a rocky few years.
redd1ch@reddit
If you want secure software, don't use backports. There were instances where the backport reintroduced the vulnerability.
we_are_mammals@reddit (OP)
Firefox-ESR has far fewer vulnerabilities than the main version. Firefox-ESR is Debian-like in that the fixes get backported.
gordonmessmer@reddit
Firefox ESR is preferred in environments where it's important to test compatibility between complex web sites and the client, and to resolve them before rolling out an update. The overlapping maintenance windows give those environments the ability to delay feature updates until sites adapt to them, while continuing to deliver security updates.
I don't think I've ever seen the claim that ESR has fewer vulnerabilities or that it's more secure. Why do you think that?
we_are_mammals@reddit (OP)
There are sites/organizations that track those. I remember looking into that a while ago. Try https://www.cvedetails.com
gordonmessmer@reddit
The results presented by CVE Details (and probably any visualization, really) are a little difficult to interpret.
One of the reasons is that some of the vulnerabilities affecting Firefox ESR are listed on an ESR-specific page (https://www.cvedetails.com/version-list/452/22101/1/Mozilla-Firefox-Esr.html) while other vulnerability counts (much larger counts) are listed on the Firefox page (https://www.cvedetails.com/version-list/0/3264/1/) in rows that list ESR as the Edition.
Another reason is that each minor release of Firefox ESR has a row, and each row has a count, but each row is tallied for each vulnerability that was present in any release in that minor series. So it looks like 128.8 had 294 vulnerabilities and 128.7 also had 294 vulnerabilities, but those are mostly or entirely the same vulnerabilities.
I think what you really want to know is how many known and unpatched vulnerabilities each edition had at any given point in time, and I don't know of a site that presents data in that fashion, because security resources aren't centering the question of what edition you should choose.
Anyway, the point is that I think the idea that the ESR edition is more secure or has fewer vulnerabilities is probably a myth. (And the idea that Debian has fewer vulnerabilities or is more secure than other systems is DEFINITELY a myth. The truth is very much the opposite.)
we_are_mammals@reddit (OP)
Why? You make a confident statement, but I'm not seeing any data or even an argument.
It makes sense for Debian (and Firefox ESR) to be more secure, since many security vulnerabilities get discovered before they even make it into their aged repos.
gordonmessmer@reddit
https://www.reddit.com/r/Fedora/comments/fg385b/congrats_to_fedora_for_having_less_cves_than/
Hi, I've been developing software on GNU/Linux systems and operating production networks professionally since sometime around 1997. I've worked in large high security environments like Salesforce and Google. A lot of my experience is in build systems where we need to remediate security vulnerabilities, specifically, and I also work on distributions.
I think a lot of people who don't have an infosec background specifically tend to think that Debian must be the best by any given metric, because it has built a strong reputation over a long time.
Debian is a great project that's an excellent model for governance and organization in Free Software. And it has an excellent reputation for reliability. But that doesn't mean that it's secure.
Most Free Software projects maintain a release series for around a year at most. Each Debian release is maintained for 3 years (or 5, if you count the extended LTS maintenance, which is less staffed and therefore even less secure). The vast majority of the software in a Debian release is unmaintained upstream, for the vast majority of the release's maintenance window. That means that for most discovered vulnerabilities, the Debian security team has to backport a patch from a newer release of an upstream project. That work is very very laborious, and not always very reliable because the Debian maintainers are less familiar with the codebase they're working on than the upstream projects. Even in the best case, the security team's time is limited and there are far more vulnerabilities than they have time to fix. They have to prioritize work to fix the most severe vulnerabilities and that means that a lot of stuff isn't addressed. Some of it would surprise you. Qt 6 in Debian 12 STILL has high severity vulnerabilities that never got addressed. If you were using KDE or KDE apps on Debian 12, you had severe vulnerabilities for something like 12-18 months before Debian 13 was released.
The idea that vulnerabilities are discovered before they get into Debian is backward. Functional bugs tend to be discovered quickly, but security vulnerabilities often exist for years before they're discovered. The most secure practice is to ship software that is maintained by an upstream project. I think you'll have a very hard time finding anyone with an infosec background or non-trivial development experience that will disagree.
we_are_mammals@reddit (OP)
My opinion on this was based on the Firefox vs Firefox ESR vulnerability count ratio, but since it is incorrect (thanks for pointing it out), I'm no longer making this claim -- I'd like to see more data on this, but I don't have it myself.
These numbers seem very suspect, frankly. Fedora includes much of the up-to-date software from Linux, Mozilla, Apache and GNU. Yet its vulnerability count is way lower than their sum.
gordonmessmer@reddit
Sure, they are suspect. All data is suspect, in that you have to be really careful about whether the thing you infer from the data is actually supported by the data. Especially when data confirms what you want to be true, like "older software has fewer vulnerabilities."
You hypothesized that Debian is a more secure distribution because "many security vulnerabilities get discovered before they even make it into their aged repos". I can't know your mind, but I think that you envision security vulnerabilities as a point that moves toward the distribution, when you should see the vulnerability as a *span*. The vulnerability exists in the source code, and from that point on into the future, it affects all releases until it's discovered, and when it's discovered it affects all of its releases and all of the releases of anything that included it (which includes both distributions and applications that bundled the affected software as a dependency, delivered by other channels.)
One question that might be relevant is "what is the average age of a vulnerability?"
https://www.usenix.org/conference/usenixsecurity22/presentation/alexopoulos
A few years ago there was a Usenix presentation that studied that question, concluding that the average age was 4 years. Well, what does that mean?
If you have a system composed of software that is all at least 4 years old, does it mean that this system is more secure than a system running software that is exclusively recent releases? No! A system with fresh software has very very few known vulnerabilities, and a system with very old software potentially has a very VERY large number of known vulnerabilities that have to be individually checked to determine whether or not they were fixed, because NO distribution fixes 100% of known CVEs, not even distributions managed by professional engineers, much less the ones managed by volunteers.
Does it mean that most of the vulnerabilities that exist are already known and the software will not become any less secure than it is? No! At 4 years, you've really only reached the average, and known vulnerabilities are likely to double as time goes on. At the same time, the maintenance effort invested in that release is pretty likely to have trailed off, so new vulnerabilities are even less likely to get fixed than the ones that were discovered earlier in the cycle. The 4 year old system is going to get less secure, faster, late in the release.
The only thing that you can actually infer from this data is that the rate at which vulnerabilities are discovered for a release will start to decrease after 4 years, so a maintainer's workload for that release will start to decrease at that point. It doesn't mean that they're going to go back and start catching up on all of the vulnerabilities they chose not to fix because there wasn't time, earlier, so it doesn't mean that the number of known and unfixed CVEs will decrease. And it's very likely that they're prioritizing the maintenance of newer releases, so it doesn't mean that the system will start to get more secure from that point on.
There are specific scenarios where a static system is desirable for compatibility reasons, where the expectation is that mitigating compatibility issues will drop off. But from a SECURITY point of view, old releases are basically always worse. If you have to guess whether Fedora 44 is more secure than Debian 12, that's a very very easy choice. It is OBVIOUSLY Fedora 44, because there are vastly fewer old releases with a large CVE backlog.
we_are_mammals@reddit (OP)
You have to weigh these effects against each other:
It's not clear to me which had been bigger until the recent AI developments.
You posted the pic as evidence that Fedora is safer than Debian. Do you not acknowledge that doing that was wrong?
gordonmessmer@reddit
Why would you weigh them against each other? They both serve to increase the number of security vulnerabilities that exist in LTS distributions.
If you are looking for opposing pressures on production environments, it is typically:
complex environments may benefit from lower change rate for a fixed period, especially when they interface with systems that are not under their control
the security of any system tends to deteriorate as it ages and as more of its flaws become known
Those are the two competing pressures. New systems are more secure and more capable. Old systems might be more compatible with whatever ecosystem they exist within.
I think that most infosec and SRE professionals take for granted that this is obvious, because we are aware that the costs of securing a system increase as it ages. But I think systems like Debian and Ubuntu LTS tend to obscure the reality, because users don't pay for anything, and because no one is generating or publishing reports on the number of known vulnerabilities in the software they publish. They are putting the label "stable" on software that is actually "unmaintained" and I think that's a huge disservice to users.
Software is "stable" when someone is still maintaining it and users expect patches. Most of the software in Debian and Ubuntu LTS isn't that. And, to their credit, Canonical at least puts the software they don't maintain in the separate "universe" component so that the handful of people with minimal security awareness can turn it off and use only the stuff that Canonical employees are actually maintaining.
You have to get over this social-media driven tendency to see the world in personal terms. The question is not whether you are right or someone else is wrong. You are not the topic, and neither am I.
we_are_mammals@reddit (OP)
A lot of vulnerabilities get discovered and fixed before they make it into an LTS release.
Typical redditor. Brings shitty/misleading/inappropriate data to an argument. Never admits he was wrong. Note the difference between you and me.
martyn_hare@reddit
It's worth noting that Canonical employees now maintain Universe and their commercial customers pay them (roughly $500/year/machine) to do it as part of Ubuntu Pro which is only available for the LTS releases.
We're talking ~25,000 packages that Canonical is claiming they're providing security patches for now. I'm not one to go sniping at them, but I do wonder, given their two closest commercial rivals are heavily scaling back on which desktop application packages they're shipping for security reasons... will Canonical still be able to make good on their support contracts?
gordonmessmer@reddit
I don't know what will happen in the future, but the last time I looked at 24.04 LTS, the Pro service did not have patches for Qt 6, which has high severity security vulnerabilities.
we_are_mammals@reddit (OP)
Linux had 2,370 vulnerabilities (from your pic). Firefox ESR had 506 vulnerabilities (which you claim are undercounted). But Fedora, despite including up-to-date versions of both (and a lot more), only had 757 vulnerabilities. That number seems very suspect. I think they just don't report vulnerabilities of software in Fedora as "Fedora vulnerabilities".
we_are_mammals@reddit (OP)
If some vulnerability affects 15 different versions of Firefox ESR, they are not going to count as 15 separate vulnerabilities! That would be utterly ridiculous.
I mean, man, have some humility! You JUST learned about CVEs from me, in the above comment, and you jump to the conclusion that people who work in this field are total morons.
gordonmessmer@reddit
I invited you to tell me why you think Firefox ESR is more secure than the regular release. I did not just learn about CVEs.
I'm not sure how you misread what I wrote that badly.
we_are_mammals@reddit (OP)
I'm not looking at rows. I'm looking at the totals. It probably makes sense to restrict the dates (like look only at the last 10 years), but I don't know how to do that for a specific piece of software.
gordonmessmer@reddit
Yeah, that's my point... the totals are wrong for the "Firefox ESR" product because MOST of the vulnerabilities are actually reported on the "Firefox" product, for releases that are labeled the "ESR" edition.
The totals you're looking at are just wrong.
gordonmessmer@reddit
Those totals are miscounted. Look at the page that shows you the "3097" count (the page for "Firefox"), scroll down a tiny bit and look at rows where "esr" is listed in the "Edition" column.
Firefox ESR 128.9.0 ALONE had 294 known vulnerabilities.
we_are_mammals@reddit (OP)
And? 294 is less than 506. You claim that the totals are miscounted. Prove it.
gordonmessmer@reddit
Are you struggling with the concept that the vulnerability counts appear on the same page as the total?
Do you not see that Firefox ESR vulnerabilities are mostly reported on the "Firefox" page?
Less-Literature-8171@reddit
That looks like a bad report.
Psionikus@reddit
Oh no. Let's put the bugs back in and delete AI amirite Reddit?
redfox87@reddit
Yes.
Bunslow@reddit
the older the bug the easier the backport, no?
anto77_butt_kinkier@reddit
Yes and no, it depends on the bug (obviously)
If it's an old bug in an old piece of code that has had dozens of things build on top of it, changing that code to fix the bug may cause problems in the code built on top of it.
If it's an old bug in an old piece of software that's been doing its original function without being the base for other software, then it's easier to fix since nothing relies on it functioning exactly the way it did before.
Sometimes it requires a whole rewrite of certain parts of the code, and sometimes it's just adding quotes where none existed before. It's complicated (as most things are).
Bunslow@reddit
i get the impression that most of these are not "structure of the code" bugs, which would indeed require a lot of work to fix, but more "the structure is fine but the implementation needs touching up". mostly
anto77_butt_kinkier@reddit
I'm not entirely sure, tbh. I haven't really looked into the nature of the bugs themselves.
abotelho-cbn@reddit
My expectation is that it'll peak and then start trending back down. It'll become standard practice to run a security check using AI as part of the release pipelines.
Flashy_Walk2806@reddit
That's what they want
abotelho-cbn@reddit
I mean sure.
But what else can we do? If AI can ingest, scan, and detect vulnerabilities at the scale that it has been shown to be capable of, we don't really have a choice.
What we need is more truly FOSS models and relatively simple ways to run them.
Flashy_Walk2806@reddit
I'm just saying that AI companies need a market, and running security checks is one of the use cases. Getting downvoted for that? I don't get it.
abotelho-cbn@reddit
Because it's not relevant?
It's going to become necessary.
we_are_mammals@reddit (OP)
Same. But if the peak lasts, say 3 months, the security teams would have to do 20x more work than usual during these 3 months.
PigSlam@reddit
The teams have some capacity. They'll reach that capacity, then have a backlog until they get past the peak. If the demand is 1.1x what they can handle, or 10000x what they can handle, they can only do what they can do. Essentially, a pile will form, then they'll keep shoveling the pile away until it's gone.
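A back-of-the-envelope sketch of that pile-then-shovel dynamic (my own illustration; the capacity and load numbers are made up, and the ~20x spike figure is taken from the comment above):

```kotlin
fun main() {
    val capacityPerMonth = 10.0   // fixes the team can ship per month (assumed)
    val normalLoad = 8.0          // normal incoming reports per month (assumed)
    val spikeLoad = 160.0         // ~20x the normal load during a 3-month AI-discovery spike
    var backlog = 0.0
    for (month in 1..12) {
        val incoming = if (month <= 3) spikeLoad else normalLoad
        // The team clears at most capacityPerMonth reports; anything beyond that piles up.
        backlog = maxOf(0.0, backlog + incoming - capacityPerMonth)
        println("month %2d: backlog %5.0f".format(month, backlog))
    }
    // With these made-up numbers the pile peaks around 450 and then shrinks by only
    // ~2 per month, so "shoveling it away" takes years rather than weeks.
}
```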
LvS@reddit
TL;DR: Pretty much all the LTS distros will have tons of known security vulnerabilities for the next few years.
PentagonUnpadded@reddit
Which is better than the distros having tons of unknown security vulnerabilities used by malicious orgs. This is net positive for the open source community.
CrazyKilla15@reddit
But not better than the distros that use or closely track real upstream supported kernel versions(either stable or LTS)
PentagonUnpadded@reddit
btw, what distro do you use? ;)
CrazyKilla15@reddit
I use Arch, btw
PentagonUnpadded@reddit
To quote Destiny's Child,
Say uname, say uname
If no kernel bugs around you, say uname -a
If you ain't runnin game
Say uname, say uname
Offbeatalchemy@reddit
its been 17 minutes and they haven't answered! they're clearly a phony!!
CrazyKilla15@reddit
I use Arch, btw
adenosine-5@reddit
Unknown vulnerability is always better than known vulnerability, until the known one is fixed.
abotelho-cbn@reddit
You actually don't know that an "unknown" vulnerability is actually unknown. There may be groups of people who know of them. That's what a zero day is.
Manbeardo@reddit
But fixed vuln is better than unknown (to the public) vuln. The unknown vulns are how groups like Black Cube get into places they really shouldn’t be able to.
creeper6530@reddit
That's the point. Known. In the meantime we can, say, blacklist kernel modules
DustyAsh69@reddit
Can't we help solve bugs? I'd like to contribute but I have never contributed to OSS before.
GreatBigBagOfNope@reddit
If you can write a good fix, what's stopping you from making the PR? You'll just need to make sure both code and surrounding documentation meet the standards of whatever project you're submitting the PR to, but all of these FOSS tools have public repos that you can submit to and the maintainer(s) can accept or reject freely
alex2003super@reddit
A lot of the time, the bureaucracy and process is. I understand why the pomp is necessary with projects of this scale but then again, sometimes it's a bit overwhelming.
DustyAsh69@reddit
I've already mentioned this in another comment thread below this comment. The problem is that I've not only never contributed to OSS before but I'm also not good at C / C++ and asm, which most OSSes use.
GreatBigBagOfNope@reddit
Then no, you probably can't help with fixing them without some learning
DustyAsh69@reddit
Yeah. That's why I'm planning to learn C. I'm gonna need it in uni as well.
PigSlam@reddit
You can if you can...can you?
DustyAsh69@reddit
That is an interesting question. It depends on what language the code is in. I can't help with C / C++ or asm since I never gave them a try but I can help with Python, C#, JS and TS which most OSSes don't use...
PotatoTime@reddit
Python is used a bit and I know Gnome uses JS for theming
DustyAsh69@reddit
I'll have to search for where it's used. But, I'm going to start learning C anyways so might as well help with C itself :)
TheYang@reddit
Wasn't this graphic misleading?
because one issue caused like 350 of these bug fixes?
therealmrbob@reddit
That’s not really how patching works.
TRKlausss@reddit
Or ride it out, assume there are going to be security holes, and then use a version where the incidence of security issues has gone down.
Yes, this would mean a phase of insecurity, but your team can only do so much…
dnu-pdjdjdidndjs@reddit
I'll create an exploit chain targeting debian's firefox and post it just to troll debian then
CrazyKilla15@reddit
Please do, especially focus on all the debian-specific patches, because it's important to remember debian software versions don't actually correspond to upstream software versions, and this includes security fixes.
dnu-pdjdjdidndjs@reddit
yep
OmagaIII@reddit
Why would that be? If AI can detect it, AI can fix it.
If you take a TDD approach, you have it test for the vulnerabilities and then write code that ensures it passes the test.
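A minimal sketch of that test-first flow, in Kotlin (the resolveUpload() function and the path-traversal hole are purely hypothetical, not anything referenced in this thread): write a regression test that exercises the hole, then change the code until the test passes.

```kotlin
import java.nio.file.Path

// Hypothetical function with a (now fixed) path-traversal vulnerability.
fun resolveUpload(baseDir: Path, userPath: String): Path {
    val resolved = baseDir.resolve(userPath).normalize()
    // The fix under test: refuse anything that escapes the upload directory.
    require(resolved.startsWith(baseDir)) { "path traversal attempt: $userPath" }
    return resolved
}

fun main() {
    val base = Path.of("/srv/uploads")
    // Test written first: the exploit input must be rejected once the fix lands.
    check(runCatching { resolveUpload(base, "../../etc/passwd") }.isFailure) {
        "traversal was not rejected"
    }
    // And legitimate input must keep working.
    check(resolveUpload(base, "avatars/cat.png") == Path.of("/srv/uploads/avatars/cat.png"))
    println("both checks passed")
}
```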
DuendeInexistente@reddit
Do you really think that or are you just a bot on AI defense duty
Y-M-M-V@reddit
They may have other people they can pull in temporarily too if the backlog gets bad. There is also a decent chance that different projects see upticks at different times, which could flatten the curve a bit, so to speak.
dnu-pdjdjdidndjs@reddit
debian's fault
Achereto@reddit
Why would it last, though? That would mean that the AI would find more vulnerabilities it didn't find the first time around.
Unless that AI already found 1000+ vulnerabilities that can't all be fixed at once, I wouldn't expect the list of fixes to be longer than maybe 200.
we_are_mammals@reddit (OP)
Not sure I understand you. For Firefox, the number of fixes is already 500 more than normal (Feb-Apr).
I have no reason to think it will last beyond April. It's more of a gut feeling -- things that build up slowly, usually don't disappear right away.
thephotoman@reddit
99 tickets on the Jira board to do
99 tickets to do
Take one down, patch it around
102 tickets on the Jira board to do
The only kind of bug-free code is code that was never written. But there should be a dropoff if there are enough people working those tickets.
whosdr@reddit
So given the CVEs are discovered with the help of AI technology..
And the AI technology runs on top of the software that these bugs are going to be found in..
Would it not make sense for the companies who own these AI systems and data centres to help backport the fixes?
KittensInc@reddit
Ha! No. The AI companies are only doing this as a marketing stunt, trying to scare CTOs into ~~buying~~ renting their Magic Vulnerability Scanning Bot for $$$$.
whosdr@reddit
In a pragmatic sense, the reasoning doesn't matter. If more CVEs are appearing rapidly, something does need to be done to keep up-to-date.
Moscato359@reddit
"Would it not make sense for the companies who own these AI systems and data centres to help backport the fixes?"
If they're paid to
But there's a lot of open-source technology they *don't* use in their stack.
Firefox is an example... the AI isn't using firefox, it'll use some kind of web mcp, or just plain curl or libcurl
whosdr@reddit
Unless I'm incorrect, the explicit subject of the post was 'Distros like Debian'. That's more the angle I was looking at it from.
0riginal-Syn@reddit
They are not going to do it for free. These are companies. Community projects like Debian are not going to pay. Now you could potentially have the likes of Red Hat, Ubuntu, etc. decide to pay. But then you have to ask, do we really want it to. That is a very slippery slope.
whosdr@reddit
Do we want Red Hat to fund backporting bug-fixes to older distro versions? Is there a reason that's bad?
0riginal-Syn@reddit
That is not what I meant. Sorry if I was not clear. Talking about AI patching the bugs themselves.
whosdr@reddit
Ah. The discussion was on the extra work for patching and backporting a torrent of new security bugs. So to me it would make sense if the companies who need this work to be done be the ones to volunteer workers for the job.
Moscato359@reddit
That's still up to a company to run and pay for the bug fixes.
Moscato359@reddit
Someone has to pay for the ai compute
Debian sure can't afford it all
Corporations may be willing to use AI to audit the stuff they use themselves, though.
SeriousPlankton2000@reddit
They mean that they are fixing the bugs that affect themselves, thereby protecting their own business. They get paid in not having downtime to upgrade to the new version of the OS.
laffer1@reddit
Claude and codex have browser integrations now
nicman24@reddit
No? The issues were there all along
whosdr@reddit
I mean it would be in their own interests to see the exploits patched.
MatchingTurret@reddit
Google donates Gemini tokens for https://sashiko.dev/
throwaway234f32423df@reddit
I'm kind of surprised by how far behind Ubuntu is falling, they deployed a mitigation for CopyFail but no fixed kernels yet, and for DirtyFrag they haven't even deployed a mitigation, just published manual mitigation instructions.
LordAlfredo@reddit
As someone working on an enterprise distro that's in a similar boat: I am pretty sure they have mechanisms for handling multiple disparate versions (e.g. we have a "unified" kernel package so the team can apply patches to 5 kernels at a time).
cig-nature@reddit
I'm expecting an ocean of "Cannot reproduce"
Mr_s3rius@reddit
Wouldn't be so sure about that.
The curl project reports that the flood of AI-assisted reports is now higher quality, with a higher rate of confirmed vulnerabilities than pre-AI.
https://daniel.haxx.se/blog/2026/04/22/high-quality-chaos/
Eliarece@reddit
Wouldn't be so sure about that either... Same source.
https://daniel.haxx.se/blog/2026/05/11/mythos-finds-a-curl-vulnerability/
I am still waiting to see the research papers
Mr_s3rius@reddit
The sheer number of reports will produce more false positives, sure.
But the percentage of confirmed vulns pre-AI-wave in 2024 was 13% and has risen to over 15% now. They expect 2026 to unveil 3x more vulns than in previous years.
Even in your example, 1 out of 5 means 20%.
53120123@reddit
yeah, how many are real security flaws and how many are hallucinations? Either totally fabricated ones, or just recreations of a previous bug report that was in the training data?
anto77_butt_kinkier@reddit
Exactly this. Also a bunch of bug reports that consist of somebody reporting intended behavior as a bug. "When I click the red button with the "X" in it, the program completely goes away!! Please fix!!!"
Booty_Bumping@reddit
Backports could never keep up to begin with. It's always been a bit of a sisyphean task.
we_are_mammals@reddit (OP)
Check out the big brains on /u/Booty_Bumping !
BCMM@reddit
6.12 is a longterm branch, so for the serious vulnerabilities, upstream back-ports the patches themselves.
Tutorbin76@reddit
Unpopular opinion: In the long run it will force people to take a proactive approach to security and start writing better code.
That's also why we're seeing an uptick in memory-safe languages that make it harder (still not impossible) to write bad code.
SunlightScribe@reddit
The reality is that it's going to be done badly. I'm currently in the middle of a security blitz at work because of AI tools detecting "vulnerabilities".
The security team just set the tool loose and said we weren't allowed to flag anything as a false positive, likely in an effort to reduce their burden. We must fix things to satisfy the tool, end of story. In practice it means escaping all input several times more than necessary, because the AI tool doesn't really understand where those variables come from and whether they've already been sanitized.
90% of the issues flagged were false positives of that nature. My team got off easy with only a few hundred issues flagged, some teams got hit with thousands. We're also burning individual token quotas at an alarming rate in order to run the test tool locally.
KittensInc@reddit
Have you considered solving this using the type system?
A lot of languages support using "wrapper" types like value class SanitizedString(val s: String), which are entirely eliminated during compilation. Give it a single constructor, which happens to be the sanitizer function, and the compiler suddenly proves for you that every SanitizedString instance is indeed sanitized, while still giving you the runtime performance of a bare string.
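A rough sketch of that pattern, since the value class snippet above is Kotlin (my own illustration: a value class constructor can't itself transform its argument, so this uses a private constructor plus a factory, and the sanitizer shown is just a placeholder):

```kotlin
@JvmInline
value class SanitizedString private constructor(val s: String) {
    companion object {
        // The only way to obtain a SanitizedString is through this factory, so every
        // instance has been run through the (placeholder) sanitizer by construction.
        fun of(raw: String): SanitizedString =
            SanitizedString(raw.filter { it.isLetterOrDigit() || it == ' ' })
    }
}

// Downstream code demands the wrapper type instead of a bare String, so the
// compiler rejects unsanitized input at the call site.
fun greet(name: SanitizedString) = println("hello, ${name.s}")

fun main() {
    greet(SanitizedString.of("Robert'); DROP TABLE users;--"))
    // greet("raw string")   // does not compile: type mismatch
}
```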
SunlightScribe@reddit
The LLM doesn't care about what the compiler thinks, and whether or not it recognizes SanitizedString as having sanitized input depends on the prompt it was given and how deep it digs. Right now it seems to accept using regex as having sanitized a string. But if I instead used some other equally valid method like StringUtils.isAlpha(), it would be ignored.
The maddening thing about all this is that what the code actually does no longer matters. If the LLM can't recognize what is being done, then it does not count.
Tutorbin76@reddit
You have bad management. Not everyone works like that.
SunlightScribe@reddit
I know. But the tools are by far too easy to abuse and misuse. Therefore that's likely going to be how most people will be using them.
Very few of us have the privilege of working under what you would consider "good management". The majority of programmers are like me and work for a non-tech corporation under people who push tech mandates without full understanding of their potential consequences.
alex2003super@reddit
I feel this can apply to much more than AI though.
Squalphin@reddit
Found the optimist. As if people will be writing code the way things are going.
Tutorbin76@reddit
Lol, and who else would be writing it? LLMs? They're just another tool in the chain. Anyone using them to actually write code in any significant quantity is doing it very wrong.
ozone6587@reddit
Do you know what Codex is? Claude Code? The writing is on the wall. You would have to be extremely dimwitted not to see it.
Even being extremely pessimistic: in 20 years LLMs will write code better than any human and almost all new code will be AI-written. I say 20 years because I'm 100% sure I will not be wrong, but I expect this to happen sooner.
0riginal-Syn@reddit
More like there will be more tools available to find these types of issues early. I have worked with some of the best senior-level devs in the world, many in the security arena. You will never find all the holes, especially when you have 100s of people working on different parts of these larger projects.
sleepingonmoon@reddit
Architectural issues are difficult to rectify, though.
Holiday_Management60@reddit
Maybe let the AI also fix them, and treat the AI-written code as temporary until a human can come and write their own fix? I'd rather have a partially AI-written OS than one filled with known vulnerabilities.
Dyson201@reddit
AI is good at discovering vulnerabilities because it's a relatively "solved" problem. We know what kinds of vulnerabilities are out there and how to exploit them. Just a function of time, which AI has lots of.
Sure, it took the team behind Claude Mythos a lot of work to get a system that doesn't suck. But it's doing what humans already do, just faster and more comprehensively.
Fixing those bugs isn't as easy. We don't have thousands of pages of documentation to train AI on how to address a bug like CopyFail. Each bug, in some way, is novel, and AI would struggle. You could probably vibe code through the problem with AI's help (still not a good idea), but you can't just run a vuln scan and then say "thanks. Now fix them." At best it doesn't know how. At worst, it tries.
adrianmonk@reddit
I thought the person above was suggesting that humans could create a fix (for the current versions of the software) and AI could then do the backporting of that fix to older software versions. I have to admit they didn't say that explicitly, but the subject of this thread is the amount of work involved in backporting fixes, so in context, I think that's a reasonable interpretation.
If so, then you have a human-created fix that you can point the AI at to give it a much stronger direction about what to do. Essentially you'd ask it something like, "Here is a fix for vulnerability ABC in version N of the software. Produce a fix for version N-1. As much as possible, re-use the fix exactly. Only change or add code if truly necessary. If code does need to be changed, follow the same approach as much as possible."
Holiday_Management60@reddit
What I was saying was to have the AI try to fix every vulnerability it finds, but make note of all the fixes its made so humans could gradually go through and vet all of the fixes. Looking back it was pretty naive of me. But I'm going to leave my comment up because u/Dyson201 made a pretty good reply explaining why it doesn't work that way and his comment requires the context of mine.
Holiday_Management60@reddit
Ahha that all makes perfect sense. Thanks for teaching me.
ExaHamza@reddit
In my view, it seems that Debian is in a more favorable context. The security team doesn't have to worry about new releases that require drastic changes to the code, and they focus only on fixing vulnerabilities, whereas rolling release distributions are concerned with both aspects.
Polar_Banny@reddit
Read your own comment but slowly.
ldn-ldn@reddit
It just shows that package management is an outdated concept. The bugs should be fixed by Mozilla team, not Debian team. Debian team should only provide tools for the users to install an official build of Firefox.
MUSTDOS@reddit
Debian team, when they have to maintain software other than SystemD...
freakwent@reddit
This is a graph of allegations.
You need a graph of accepted patches.
eclipsenow@reddit
I'm a noob - but it makes me wonder if we need a WHOLE lot of distro merging to combine the human capital of all those coders and programmers and engineers? I mean, what hobbyist distros are doing stuff that's going to backfire spectacularly, with AI able to scan the whole thing at once, see the big picture, and see holes us humans cannot?
roankr@reddit
OS containerization MIGHT help with that.
Fedora is pushing for a tooling system where OSes can be built like containers, where changes between downstream and upstream distros are trackable in a git-tree-like way. This way, if any vulns are found downstream that can be traced to upstream, that diff can be tracked with clear blame.
These tooling methods are OSTree and bootc.
Zipdox@reddit
With Firefox specifically, they maintain an ESR release themselves.
Suvalis@reddit
I've always wondered at the complexity of doing backports vs just using the version that the orgs who make the software recommend.
Backports might become less viable to do.
SeriousPlankton2000@reddit
Sometimes the bugs are local and the patch will easily apply to both versions. Some not.
SeriousPlankton2000@reddit
If the new version is compatible with the old one they can just ship it.
Otherwise it's probably the same as before: apply the patch, and if it works, double check that it went right and that it doesn't critically change the function. E.g. if there is a new return code, or a new case for a return with failure, you need to be very careful; if they just increased a buffer it's usually OK.
mooky1977@reddit
AI fixes still need to be vetted by humans to make sure they are correct and don't break anything with the fix nor introduce any new bugs or security holes. Therefore it is a time/resource limited endeavor.
AI penetration testing to break things can run via scripts, setting parameters and letting the AI go hog wild nearly 24 hours a day.
We're more apt to see more holes and flaws and problems exposed than ever get fixed. That's just my opinion though.
cookiengineer@reddit
Note that all "...-2" and "...-3" debian security advisories were mistakes where they had regressions because of trying to backport security fixes to outdated or incompatible code. And there's a lot of them.
So my guess is yes, it's inevitable with a non-rolling release concept. Currently, the Python and npm ecosystems are essentially on fire.
Fun fact: vllm's docker container can't be rebuilt since the llmlite package hack, because there's always dependencies of dependencies being too slopcoded and incompatible with their APIs.
After all this we might need to rethink how we do CI/CD pipelines, code reviews, and unit tests. Because our current approach isn't really working.
Internet-of-cruft@reddit
Just want to point out we're going to hit a boom of vulnerabilities discovered followed by a taper to a new baseline.
In any given code base you're going to have a fixed number of possible security vulnerabilities.
The ability to detect all of them may increase over time, but actual vuln count is going to be bounded.
The only thing preventing new vulns is for changes to stop happening, which for most major software is "never".
TL;DR: Things are going to suck for a while then it will be less sucky than now, but more than before.
ifq29311@reddit
not really. right now we deal with new bug discovery tech, but that tech is gonna be standard from now on. i.e. a lot of bugs get discovered even before software ships. should mean fewer bugs, not more.
unless we adjust for the AI-assisted new code, but thats uncharted territory for now
Deiskos@reddit
Right now the new tech is discovering vulns accumulated over multiple decades, once the new tech discovers all old vulns it'll be discovering only new vulns as they are introduced.
we_are_mammals@reddit (OP)
As someone who does not think that the current AI is AGI yet, I'd say there is an assumption in your comment that is incorrect. You are assuming that AI will discover all vulnerabilities. I feel that it will only discover those vulnerabilities that fit certain patterns that it is familiar with. It could be a large fraction of all vulnerabilities, but probably far from all of them.
berryer@reddit
That assumes people actually use it, though - how well has fuzzing actually been adopted since the 1990 whitepaper? I'm the only person I've actually seen use it in proprietary code over about 20 years and a half-dozen companies.
x0wl@reddit
Yeah I don't understand the meaning of worse here, we're having less vulns in the end
dnu-pdjdjdidndjs@reddit
ai pr review is already confirmed to be very effective, obviously not 100% but expect this to be ubiquitous where it isn't already.
CrazyKilla15@reddit
Hopefully. Maybe it'll finally be the death of debian and their myth of "security backports" as distinct from "regular 'non-security' logic and memory bugs", which aren't backported.
ntmstr1993@reddit
Just hire actual Indians to plug the deficit
PMMePicsOfDogs141@reddit
Do we know how much money and compute was thrown at this?
gesis@reddit
How many of these bugs apply to the shipped version in these distros? How many are the result of features and development past that point? How many are trivial edge cases that don't apply to 99% of users?
This may effectively be a big ol' nothingburger for stable distros as well.
daHaus@reddit
That's a hell of a graph, any chance there's another one showing when all these were introduced?
Wall_of_Force@reddit
I expect backporting will be harder, but not due to the amount of bugs: I think people are now feeding each upstream commit into an AI and asking whether it was a possible fix for some kind of exploit, so the downstream maintainer doesn't have time at all to backport the fix, unless they are using the upstream version as-is.
longdarkfantasy@reddit
No. They have 1 month or at least a few weeks to patch them.
Wall_of_Force@reddit
don't think so: those two bugs were open to the public as soon as they were merged upstream, because someone scanned each commit and asked AI "Was it a fix for some exploit?", leaving no time for the repository manager to fix them.
longdarkfantasy@reddit
Oh. I didn't know that. That sounds really bad.
Realistic_Account787@reddit
Both good and bad players can use AI to find bugs.
gordonmessmer@reddit
Yes, I think they will. It's hard to see how they could not. Shipping packages outside of their upstream was never a very good idea, and as our adversaries get better tools, that idea is going to grow continually worse.
I'm working on tools to do the opposite... to allow developers to build and ship applications with full control of their dependency streams, without the delays associated with distribution release cadence.
https://gordonmessmer.codeberg.page/dev-blog/2026/05/10/dist-git-as-a-registry.html
nicman24@reddit
That is just a rolling distro but with less testing...
gordonmessmer@reddit
It's not a distro, and it's not rolling release.
It's build tooling that delivers stable releases of every component.
Are you familiar with systems like crates.io and PyPI?
nicman24@reddit
skipping the package manager with pypi and whatever else is terrible
Also it is just a rolling source based micro distro
gordonmessmer@reddit
Yeah, I'm not doing that either. I take it you haven't looked at the projects at all.
nicman24@reddit
.. Ok dude
0riginal-Syn@reddit
I think it will smooth out over time. In many ways, it is in catch-up mode right now. It does increase the strain up front, though, for sure. I will say this is the nice part of being a maintainer on a curated distro. It allows us to focus a bit more and not be spread too thin, and we have had more community maintainers step up lately as well, which has helped.
nicman24@reddit
The genie is out of the bottle. The only thing we don't know is how many of them were already known or malicious.
Johnsmtg@reddit
Random thoughts:
Better to "overwhelm" backporters than ignore the presence of a multiple security bugs
At this point we might as well use claude to backport the fix as well lol
If backporting is not realistic, then ironically bleeding-edge distros might become the safest and most stable.
I am curious to know how many of these 423 were actual exploitable bugs, or more like "security warnings" with currently no real path to exploitation.
ifq29311@reddit
there were suggestions Apple did just that with their last releases (like 70 security fixes for macOS in a single release)
if the bugs are properly handled and disclosed to main distro security teams, should be nothing to worry about