Linux's sched_ext sees a bunch of bug fixes following increased AI code review
Posted by Fcking_Chuck@reddit | linux | View on Reddit | 6 comments
mushgev@reddit
Curious what the actual tooling looked like here. A lot of "AI code review" in practice means dumping code into an LLM and asking for issues - useful but inconsistent.
The sched_ext case specifically is interesting because extensible scheduler code has to maintain invariants that span the BPF-kernel boundary. Those kinds of cross-boundary invariant violations are expensive to catch in human review because you need deep familiarity with both sides. If AI is reliably picking those up it's genuinely useful, not just finding style issues.
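For a concrete sense of what a cross-boundary invariant looks like here: a minimal sketch of a sched_ext BPF enqueue/dispatch pair. This is illustrative only, not buildable as-is (it assumes vmlinux.h and the scx headers); helper names follow the upstream scx API (`scx_bpf_dsq_insert`, `scx_bpf_dsq_move_to_local`), while `some_filter` and `SHARED_DSQ` are hypothetical placeholders.

```c
/* Illustrative sketch, not a buildable scheduler. The kernel side
 * enforces invariants the BPF side must uphold -- for example, every
 * task handed to ops.enqueue() must eventually land in a dispatch
 * queue (DSQ), otherwise the sched_ext watchdog detects the stall
 * and kicks the whole BPF scheduler out. */

void BPF_STRUCT_OPS(example_enqueue, struct task_struct *p, u64 enq_flags)
{
	/* BUG (the kind of cross-boundary issue human review often
	 * misses): silently dropping p here "loses" the task. Nothing
	 * on the BPF side crashes, but the kernel still considers p
	 * runnable, and the watchdog eventually fires. */
	if (some_filter(p))   /* hypothetical predicate */
		return;

	/* Correct path: queue p on a shared DSQ with a default slice. */
	scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);
}

void BPF_STRUCT_OPS(example_dispatch, s32 cpu, struct task_struct *prev)
{
	/* Move a queued task from the shared DSQ to this CPU's local DSQ. */
	scx_bpf_dsq_move_to_local(SHARED_DSQ);
}
```

The point is that the "must eventually dispatch" rule lives on the kernel side while the violation lives on the BPF side, which is exactly the kind of split an LLM with the right kernel-behavior context in its prompt can check mechanically.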
Would be interesting to see the breakdown of bug classes found. Race conditions vs. logic errors vs. API misuse would say a lot about where the signal actually is.
Jristz@reddit
From what I remember, any bugs an "AI code review" throws up still need to be reviewed and reproduced by humans.
So it still needs human review and knowledge too.
gamas@reddit
Yeah, AI stuff works best when it's handled by actual experts who can review the results. It's optimised for augmenting human work, not replacing it. Despite what the tech-bro grifters keep trying to convince shareholders.
Zomunieo@reddit
Here’s the Git repo with the prompts.
https://github.com/masoncl/review-prompts
So it’s a lot of documenting “this is how the kernel is supposed to work” and giving it specific recipes and strategies to investigate issues.
SystemAxis@reddit
Yeah, the interesting part isn’t that AI found bugs, it’s which bugs. If it’s catching cross-boundary issues in sched_ext, that’s actually useful beyond basic linting.
aliendude5300@reddit
Honestly, bugs fixed are bugs fixed. No complaints here.