Greg Kroah-Hartman Turns To New "Clanker T1000" Fuzzing Tools For Uncovering Kernel Bugs
Posted by anh0516@reddit | linux | View on Reddit | 20 comments
LousyMeatStew@reddit
In case anyone isn’t aware, “fuzzing” is just the process of sending random inputs into a program as a way to look for unhandled edge cases and such.
Notably, you're testing the code as a black box, meaning the fuzzing tool isn't looking at your code. In this case, the use of AI would be to simulate the attacker, which I have to admit is genuinely clever: most low-effort hacking attempts (and bug bounty claims) are going to be doing basically the same thing, so you might as well nip that in the bud.
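The black-box version described above can be sketched in a few lines. Everything here is hypothetical for illustration: `parse` stands in for some program under test with one unhandled edge case, and the fuzzer knows nothing about it except whether it raised.

```python
import random

def parse(data: bytes) -> None:
    # Hypothetical program under test: raises on one unhandled edge case
    # (first byte equal to last byte). The fuzzer never sees this logic.
    if len(data) >= 2 and data[0] == data[-1]:
        raise ValueError("unhandled edge case")

def blackbox_fuzz(rounds: int = 10000, seed: int = 0) -> list[bytes]:
    # Pure black-box fuzzing: throw random bytes at the target and
    # record any input that makes it blow up.
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            parse(data)
        except ValueError:
            crashes.append(data)
    return crashes

crashes = blackbox_fuzz()
print(f"found {len(crashes)} crashing inputs")
```

With a bug this shallow, random bytes trip over it quickly; as the replies below point out, purely random input scales badly once the interesting paths are guarded by specific byte sequences.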
Tuna-Fish2@reddit
The best fuzzers do not treat the program as a black box, but instead inspect it to produce inputs that hit every possible code path. See, for example, American Fuzzy Lop, which generates its mutations randomly but instruments the running code to identify candidates that hit new code paths. This lets it use the program's behavior to prune the search tree without having to understand the code.
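That feedback loop can be sketched as a toy. Coverage here is simulated by having the target report which branch edges it took (real AFL gets this from compile-time instrumentation, not a return value); the fuzzer keeps any mutant that reaches a new edge and mutates from the growing corpus.

```python
import random

def target(data: bytes) -> set[str]:
    # Toy "instrumented" target: returns the set of branch edges it executed.
    # A two-byte magic prefix guards the crash, so blind random input
    # almost never reaches it, but coverage feedback gets there in stages.
    edges = {"entry"}
    if data[:1] == b"F":
        edges.add("saw_F")
        if data[1:2] == b"Z":
            raise RuntimeError("crash: guarded path reached")
    return edges

def coverage_guided_fuzz(rounds: int = 50000, seed: int = 1) -> list[bytes]:
    rng = random.Random(seed)
    corpus = [b"AAA"]          # initial seed input
    seen: set[str] = set()     # every edge observed so far
    crashes = []
    for _ in range(rounds):
        buf = bytearray(rng.choice(corpus))
        buf[rng.randrange(len(buf))] = rng.randrange(256)  # random byte mutation
        data = bytes(buf)
        try:
            edges = target(data)
        except RuntimeError:
            crashes.append(data)
            continue
        if edges - seen:       # new coverage -> keep input for further mutation
            seen |= edges
            corpus.append(data)
    return crashes

crashes = coverage_guided_fuzz()
print(f"{len(crashes)} crashing inputs")
```

The point is the pruning: an input that merely starts with `F` is retained because it reached a new edge, and subsequent mutations of *that* input find the `Z`, without the fuzzer ever understanding the code.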
Historical-Bee-7054@reddit
Not quite random and not quite a black box. One common approach is coverage-guided fuzzing: the goal is to find inputs that incrementally increase the line coverage of the test suite. The fuzzer builds up a list of inputs and keeps mutating them in order to cover more lines of code.
Cats_and_Shit@reddit
The inputs are generally not (completely) random, and the program normally cannot be a black box: a fuzzing tool typically needs to be able to inspect the program's internal state in order to help it find interesting inputs.
If you just blindly send random inputs to a program, you're almost certainly not going to get it to do much of interest. Instead, fuzzing tools try to detect when some input makes the program do something new, and then employ various strategies to come up with other "similar" inputs that might also do something interesting.
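For illustration, a few of the classic strategies for deriving "similar" inputs from a known-interesting one: bit flips, overwrites with boundary values, and slice duplication. This operator set is just a representative sample, not any particular fuzzer's actual list.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    # Derive a "similar" input via one randomly chosen mutation strategy.
    buf = bytearray(data)
    choice = rng.randrange(3)
    if choice == 0 and buf:            # flip a single bit
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)
    elif choice == 1 and buf:          # overwrite a byte with a boundary value
        i = rng.randrange(len(buf))
        buf[i] = rng.choice([0x00, 0x7F, 0x80, 0xFF])
    else:                              # duplicate a random prefix (length change)
        i = rng.randrange(len(buf) + 1)
        buf[i:i] = buf[:rng.randrange(len(buf) + 1)]
    return bytes(buf)

rng = random.Random(0)
out = [mutate(b"hello", rng) for _ in range(3)]
print(out)
```

Each mutant stays close to the original, which is exactly what makes it likely to take a nearby-but-different path through the program.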
deviled-tux@reddit
AI-based fuzzing is really cool because it can enable us to do semantic fuzzing.
altodor@reddit
I've also seen it be really good at generating data for test databases that will find every way to break every frontend to that database.
Lawnmover_Man@reddit
Using AI to find bugs is honestly a very good use case for AI.
the3gs@reddit
As long as it is done in a way that the bugs are validated before they are reported.
Passing a codebase into Claude Code, saying "pretty please find all the bugs in this code with no false positives please", and then creating GitHub issues for all of the "bugs" it claims to have found is worse than just about anything for an open source project.
Using AI to find problematic input is a good idea, though: if a program crashes or misbehaves on an input, that's almost always a significant bug that should be handled.
svideo@reddit
Sounds like those days might be gone; a lot of OSS maintainers have reported a marked increase in quality bug reports in the past month or so. The author of curl, who famously threw the flag on AI slop bug reports, now has this to say: https://www.linkedin.com/posts/danielstenberg_hackerone-share-7446667043380076545-RX9b/
vytah@reddit
I'm pretty sure the main reason the slop stopped is that they stopped offering bug bounties.
svideo@reddit
It’s across the board: https://lwn.net/Articles/1065620/
Even the linux kernel team is reporting the exact same thing. I know AI isn’t always popular but that doesn’t mean ignoring it is safe.
vytah@reddit
I'm talking about a decrease in slop, not an increase in valid reports. Those are two separate things.
svideo@reddit
Both are happening across the board, and I don't think it's crazy to suggest they're related. It's all been in the past couple of months. I know this dude is from Anthropic and is obviously going to have A Perspective, but he's reporting on a real step change in capability: https://youtu.be/1sd26pWhfmg?t=72
Business_Reindeer910@reddit
This is more about trusting the people submitting the bugs than about not trusting the AI.
Separate-Royal9962@reddit
AI finding kernel bugs is one thing. The harder question is how to prevent them structurally in the first place. Fuzzing catches what exists; it doesn't prevent what could be created. Both approaches are needed — reactive discovery and proactive structural constraints.
Natural_Night9957@reddit
I don't know if I like a Terminator reference in the Linux kernel, with all hell breaking loose recently.
Arnoxthe1@reddit
One of the few pretty legit uses of AI I'd say.
ihatemovingparts@reddit
Is it worth burning down the internet for a fuzzing tool? Meh.
i-hate-birch-trees@reddit
Honestly a good idea, especially since threat actors are going to be using the same LLMs to find CVEs.
NOT_EVEN_THAT_GUY@reddit
good clanker