Preparing for the waves of updates and vulnerabilities
Posted by RetroGrid_io@reddit | linuxadmin | 12 comments
Recent news from Anthropic is that their Mythos model is fantastic at finding 0-day vulnerabilities and generating exploits for them. At this point, regardless of whether it's Anthropic or some other entity, it's clear that we're in for a bit of a rocket ride keeping our systems secure.
For their part, they've started Project Glasswing to help the global software supply chain respond effectively. This is another reason why my AI dollars are being spent on these guys, even with their recent tokens fiasco, which has bitten me too.
I'm curious what actions, if any, are being taken by other admins to respond, beyond perhaps shortening your update cycle?
irrision@reddit
I think the thing that hasn't gotten nearly enough attention is the willingness of major distro maintainers to accept AI-generated code updates to projects. I get why there's opposition, and for good reason, but if they don't become more willing to do this they'll be fighting a losing battle once models like this become publicly available. And they will, one way or another: if Anthropic doesn't do it, another company will.
If you've used models like Claude Opus for coding, you shouldn't be surprised by this announcement either. It's made insane strides in capability over the past 12 months, going from being only okay at writing short scripts that usually needed a lot of massaging to work, to producing fully functional code for reasonably complex web apps from a handful of well-crafted prompts.
What's likely holding back the current public models is more a matter of efficiency and parallelism. These companies need to do more with the same amount of GPU per user to become profitable, so they're obviously limiting the resources assigned per model instance, which caps context window size and token throughput and kneecaps the model.
help_send_chocolate@reddit
I expect there to be vulnerabilities across a wide range of severities and simultaneously a lot of hype and muddied waters due to people overselling their AI-related achievements. Likely a frustrating and risky experience for a lot of people.
AnsibleAnswers@reddit
That's really not what it seems like. The claim is that Mythos, a model that isn't accessible to the general public, has been used by Anthropic to discover and develop exploits for thousands of zero-days, most notably a 27-year-old privileged remote code execution vulnerability in OpenBSD's NFS implementation that has since been patched.
https://thehackernews.com/2026/04/anthropics-claude-mythos-finds.html
1esproc@reddit
The better question here is: was anybody else even looking at this? First off, there are a lot of assumptions baked in about OpenBSD, like that it's one of the most secure platforms. And then there's market share: it's not widely used. So was anybody putting many resources toward finding a problem in NFS on OpenBSD?
irrision@reddit
There is a lot of legacy BSD code carried forward to this day in Linux distros and elsewhere. Heck, at one point it appeared that Microsoft was using parts of the network code from BSD in Windows (and they might still, for all we know).
03263@reddit
I can imagine that in 10+ years we may see a new operating system that begins with a focus on security at the lowest levels, maybe even as low-level as a new CPU architecture and other hardware: strictly typed memory structures, more internal checks and balances to verify data moving between hardware components, with a software stack built to match. It could also be a massive pain to work with, because more security usually creates more roadblocks to desired uses as well, but worth it for high-value targets.
Klutzy_Scheme_9871@reddit
seems like all AI did was make companies more insecure. acting as a double-edged sword, it let lower-level script kiddies weaponize offensive attacks AND fed the greed of companies that continue to lay off good personnel. good luck, "the west".
AnsibleAnswers@reddit
It’s difficult to determine how much of this is hype and how much of it is real. But if it’s real, Project Glasswing may actually be a net benefit to security. Mythos does seem to have genuinely gotten better at spotting vulnerable code, given that researchers have already used it to find 0-days that had been sitting out in the open for years.
Security research requires some very specific expertise, with researchers tending to focus on one specific area of interest that they get very good at. It’s naturally an area that will always be able to use more eyes, and a field that really doesn’t have “generalists.”
I don’t have a solution for the fact that bug bounty programs are getting increasingly inundated by low quality submissions that were never actually tested.
gmuslera@reddit
You are forgetting the other half of the equation. The bad guys also do their research, and smaller LLMs can find vulnerabilities too. And finding vulnerabilities with LLMs isn't even the whole field.
There is an asymmetry in the vulnerability finding and reporting ecosystem. The bad guys have always been able to decompile, reverse engineer, and do other things forbidden by commercial software licenses; they have their own communication channels, vulnerability markets, and so on (the story of Stuxnet, from 2010, is enlightening here), and they can do whatever they want with a flaw until someone notices it's being exploited. The good guys, meanwhile, have to report privately to the manufacturer or the main project maintainers, and only some time after the patch is released is the vulnerability announced (and there can be leaks in this process too).
So, no new vulnerabilities are being introduced by this (I hope), and existing vulnerabilities in the systems you have in production will eventually be fixed and added to the update stream of well-maintained distributions. And you may finally be a bit ahead of the zero-day or negative-day exploit.
Regarding reactions: all severe remote vulnerabilities should be fixed as soon as possible, whether found by an AI or not. And Mythos or no Mythos, some weeks ago Greg Kroah-Hartman commented that AI vulnerability reports have improved a lot.
speyerlander@reddit
The modern internet is a monoculture; the days of meaningful variance in tech stacks are gone, and that compels threat actors to target supply chain vectors rather than individual machines or clusters. Defending against such vectors requires a strong emphasis on containment rather than edge defence. Hence, the most impactful security change an organization can make is ditching namespaced runtimes for virtualized runtimes (Kata Containers), tightening network policies, and monitoring connections, flagging every single unexpected connection as a possible breach.
There's only so much an attacker can do without lateral movement.
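To make the connection-monitoring part concrete, here is a minimal stdlib-only sketch that parses `/proc/net/tcp`-style data and flags established connections to remote ports outside an allowlist. The allowlist contents and the "allowed ports" policy are illustrative assumptions; a real deployment would key on destination IP/port pairs and feed conntrack or flow logs rather than polling procfs.

```python
import ipaddress

# Hypothetical allowlist of remote ports we expect egress to (illustrative only)
ALLOWED_PORTS = {22, 53, 443}

def parse_proc_net_tcp(text):
    """Parse /proc/net/tcp-style content into (remote_ip, remote_port, state) tuples.

    Kernel format: one header line, then rows whose third column is the
    remote address as little-endian hex IPv4, a colon, and a hex port.
    """
    conns = []
    for line in text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 4:
            continue
        hexip, hexport = fields[2].split(':')
        ip = str(ipaddress.IPv4Address(int.from_bytes(bytes.fromhex(hexip), 'little')))
        conns.append((ip, int(hexport, 16), fields[3]))
    return conns

def unexpected(conns, allowed_ports=ALLOWED_PORTS):
    """Flag ESTABLISHED connections (state 01) to ports off the allowlist."""
    return [(ip, port) for ip, port, state in conns
            if state == '01' and port not in allowed_ports]
```

On a live host you would read `open('/proc/net/tcp').read()` on a timer, handle `/proc/net/tcp6` the same way for IPv6, and ship anything `unexpected()` returns to your alerting pipeline.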
aprimeproblem@reddit
This hype changes nothing. If you do not have a decent patch management system and process in place you already have a problem, regardless of these findings.
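As a starting point for that process, a sketch like the following can surface pending security updates from the simulation output of `apt-get -s dist-upgrade` on Debian/Ubuntu systems. The "Inst" line format and the archive-name check are heuristics based on apt's human-readable output, not a stable API; use apt's JSON hooks or your config-management tooling for anything production-grade.

```python
import re

# Heuristic: "Inst <pkg> [<old-ver>] (<new-ver> <archive info>)" lines from
# `apt-get -s dist-upgrade`; we flag those whose archive mentions "security".
INST_RE = re.compile(r'^Inst (\S+) \[[^\]]+\] \(\S+ ([^)]*)\)')

def pending_security_updates(sim_output):
    """Return package names whose pending upgrade comes from a security archive."""
    pkgs = []
    for line in sim_output.splitlines():
        m = INST_RE.match(line)
        if m and 'security' in m.group(2).lower():
            pkgs.append(m.group(1))
    return pkgs
```

Wire this into a cron job that runs the simulation and pages when the list is nonempty; on RPM-based systems `dnf updateinfo list security` provides similar data.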
tsammons@reddit
In San Jose for a wedding at the moment. Last night I was chatting with engineers connected to Anthropic's AI team and asked whether the claim is dubious: nope. They say it's a legitimate concern; I'm still skeptical.