Our devs stopped reading security scan results entirely and I'm not sure I can blame them
Posted by Smooth-Machine5486@reddit | ExperiencedDevs | 33 comments
The false positive rate on our scanner got bad enough that two engineering leads told me they'd started treating all output as noise. Real findings were getting closed at the same speed as obvious garbage because no one trusted the signal anymore.
The thing that makes this dangerous rather than just annoying is that the vulnerabilities don't stop existing when developers stop reading the reports. We had coverage. We just had zero attention directed at the results.
We've been going back and forth on whether the problem is the scanner configuration, the scanner itself, or whether we need something sitting above it to filter and correlate before anything reaches them. Tuning rules manually has been a whack-a-mole exercise so far.
The question I keep coming back to is whether this is a tooling problem or a culture problem.
vansterdam_city@reddit
Here's an idea: why don't you, the security people, do your job and read the results before you hand them to devs?
gUI5zWtktIgPMdATXPAM@reddit
Nope. They never do. We then have to argue why specific findings aren't valid, because all they did was run a tool over something with zero context.
Professional_Mix2418@reddit
Then they aren't proper security people. No qualified individual would do that. Categorisation and risk classification are the name of the game in cyber security.
gUI5zWtktIgPMdATXPAM@reddit
They mustn't have been, as they had no real understanding of what we did in the development team, and management expected us to fix every issue they handed over.
Professional_Mix2418@reddit
Hmm, several issues with that. They don't need a deep understanding of what you do in the development team (to a degree). They need to be able to communicate the risk to the confidentiality, integrity and availability of the organisation's (information) assets.
And yes, management needs to support the infosec team that issues are fixed within the agreed targets. That is entirely normal.
gUI5zWtktIgPMdATXPAM@reddit
I don't mind addressing the risks and fixing the issues. It just makes me question the value of what they do if we simply receive the output of a tool that can't put what it's looking at into context. If you gave us the tool and told us to run it, it would produce the same report, so what have you actually added?
For example, the tool flags that we should add cross-site scripting headers to the response of an API we never call via a browser. If we did allow public access, that would actually be a much bigger issue: we'd need to consider what data should be available at all, and how authentication for public use differs from internal authentication.
Professional_Mix2418@reddit
Dude you seriously need to learn about cybersecurity if that is truly what you think. Everything you are highlighting is wrong and reeks of ignorance. 🤷♂️
gUI5zWtktIgPMdATXPAM@reddit
That's what confuses me. If you think simply running an API through a tool and giving that to the dev team is job done I don't agree.
If you think that adding headers which instruct browsers to act a certain way, on an endpoint consumed only internally by another service (not a browser), actually achieves anything, I find that strange.
You should be looking at the whole architecture, how authentication is handled, authorisation, network restrictions, data handling, logging etc.
But sure, I'm the ignorant one for pointing out that our security team just forwards a verbatim tool report without any questions or any understanding of what the API actually does.
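For concreteness, the fix the tool is asking for amounts to something like the sketch below (framework-agnostic Python; the header values are typical examples, not taken from the thread). Every one of these headers is an instruction to a browser, which is the commenter's point about service-to-service endpoints:

```python
# Headers the scanner wants on every response. All of them only
# change *browser* behaviour; a backend service calling this API
# ignores them entirely.
BROWSER_SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'none'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
}

def add_security_headers(response_headers):
    """Merge browser security headers into an API response's headers."""
    merged = dict(response_headers)
    merged.update(BROWSER_SECURITY_HEADERS)
    return merged
```

Adding them is harmless and cheap, but on an internal-only API it doesn't reduce any actual attack surface, which is why a finding like this reads as noise.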
buykafchand@reddit
A triage layer made the difference for us. Instead of dumping hundreds of findings on devs, we routed everything through something that could say "this file with PII is accessible to people who shouldn't have it," which is a lot harder to dismiss. Netwrix Data Classification did that for us by baking access context into the findings rather than treating it as a separate step.
ryoumaskuy@reddit
We had the same spiral, and what actually helped was a layer that correlates data sensitivity with access exposure before anything hits the dev queue. We switched to Netwrix DSPM, and the risk-based prioritization meant findings were ranked by actual impact: instead of 300 flat alerts, our team was looking at maybe the top 15 that actually mattered. In the first week it surfaced a SharePoint site with PHI sitting open to basically the whole org, which had been buried in noise for who knows how long.
raiisin@reddit
Any alarm that produces false positives with any frequency will always end up being ignored. That's the reality when you don't have people fully dedicated to handling those alarms/notifications.
CodelinesNL@reddit
Boy cried wolf principle. Tooling is the problem.
New-Molasses446@reddit
Manual rule tuning is a losing game because the scanner doesn't know your application context. You can suppress categories of findings but you can't teach it which data flows are important in your specific codebase without reachability analysis built into the engine itself.
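That asymmetry can be shown with a toy triage filter (the finding shapes are hypothetical, not any real scanner's export format): suppressing a rule category is a one-liner, but "is this endpoint ever reachable from a browser?" needs context the report doesn't carry.

```python
# Hypothetical scanner findings; field names are illustrative only.
findings = [
    {"rule": "xss-missing-headers", "file": "internal_api.py"},
    {"rule": "sql-injection", "file": "billing/handlers.py"},
    {"rule": "xss-missing-headers", "file": "public_site.py"},
]

# Coarse suppression: the only knob manual tuning really gives you.
SUPPRESSED_RULES = {"xss-missing-headers"}

def triage(findings):
    # This drops the rule everywhere -- including on public_site.py,
    # where the finding might be real. Expressing "suppress only on
    # endpoints never served to a browser" would need reachability
    # data the scanner doesn't emit.
    return [f for f in findings if f["rule"] not in SUPPRESSED_RULES]

kept = triage(findings)
```

The filter is trivially easy to write and trivially wrong in both directions, which is what makes manual tuning whack-a-mole.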
Special-Cause7458@reddit
Went through 18 months of this. The false positive problem compounds because once developers learn the scanner is unreliable, they stop reporting when they think a finding is wrong. So your false positive rate in the data looks lower than reality because nobody is contesting bad findings anymore.
Smooth-Machine5486@reddit (OP)
The CISO filtering argument is correct but doesn't scale past a certain finding volume. Checkmarx ASPM with triage assist validates whether a finding has an actual exploitable attack path before anyone sees it. Security team isn't manually reviewing 400 findings, the platform does the first pass.
So what reaches developers has already been validated as worth their time.
kondorb@reddit
Every single way of “scanning” for security issues I’ve ever seen was exactly like that.
Professional_Mix2418@reddit
Then you’ve never worked with a professional security team who takes ownership for that and deals with it. The fine tuning stage is entirely normal. Throwing it over the fence to let developers decide isn’t.
Smooth-Machine5486@reddit (OP)
That's kind of the uncomfortable part. If this is the universal experience then the industry has been selling coverage metrics as a proxy for actual security outcomes for a long time.
Professional_Mix2418@reddit
You are missing the intelligence part. Where is the CISO, or even the ISO, prioritising and filtering these results, not least fine-tuning the rules so they don't keep reappearing?
Smooth-Machine5486@reddit (OP)
Well, that layer hasn't existed in practice, as findings have been going directly to devs without anyone owning the triage and prioritization step above them.
Professional_Mix2418@reddit
Exactly, so they are marking their own homework, and from a position where they aren't qualified to assess the impact from a cyber security perspective. It's a major flaw in the setup of your organisation.
Bemteb@reddit
Question is: who is responsible for actual issues? If a dev closes a ticket and it turns out it was a real problem, are they held accountable?
On the other hand, who decided that devs should do other things and not spend their whole week analysing scan results?
Yeah, broken scans bad, but it's also how your company handles them that might be the issue.
Smooth-Machine5486@reddit (OP)
Accountability is exactly the gap.
Historical_Trust_217@reddit
It's tooling causing a culture problem, not a standalone culture problem.
Developers aren't irrational for ignoring noise.
Fix the signal quality first, then have the culture conversation.
Smooth-Machine5486@reddit (OP)
The sequencing matters here. We kept having the culture conversation while the signal problem was still unresolved which is probably why it went nowhere.
tcpWalker@reddit
Both. It's definitely both.
If you're asking people to read, you're probably failing, even with smart people who read for fun, because they're busy and security scanner output is bad. What you want is automated upgrades with validation pipelines: "does this update break prod? No? Phase in the upgrade."
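A minimal sketch of that decision gate, with the pipeline stages as injected callables (all names are hypothetical; the real thing would be CI config plus a canary rollout):

```python
def validated_upgrade(apply_update, run_validation, rollback, phase_in):
    """Apply an automated update and keep it only if validation passes.

    All four steps are supplied by the pipeline; this just encodes the
    "does this break prod? No? Phase it in" decision, so no human has
    to read a report to act on the finding.
    """
    apply_update()
    if run_validation():
        phase_in()  # e.g. gradual canary rollout, not a big-bang deploy
        return "phased-in"
    rollback()
    return "rolled-back"
```

The point is that the remediation path is automated end to end; developers only get involved when validation fails, not for every finding.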
Smooth-Machine5486@reddit (OP)
The validation pipeline approach is where we haven't invested yet. Right now findings reach developers as reading homework with no automated remediation path attached.
johnpeters42@reddit
Some of both. Some people are just ignoring it, others are at least trying to solve the problem.
You said the false positive rate got bad. Do you know what changed? Also, what specific types of things is it reporting (whether false positives or not)?
Smooth-Machine5486@reddit (OP)
Nothing changed on our end deliberately which is part of what makes it hard to tune back.
Scanner updates and new rules shipping automatically shifted the baseline without a clear rollback point.
Minute-Confusion-249@reddit
Both. But you can't fix culture in a system that punishes attention with more noise.
Smooth-Machine5486@reddit (OP)
Exactly the sequence we're stuck in. Asking developers to pay more attention to a low signal feed doesn't fix the feed.
QuantumG@reddit
Even when the results are high quality, most devs do not fix the issues. When there's a security breach and some piece of software is found to have been exploited, no one blames those devs. Companies that are wilfully at fault suffer no consequences. With this blatant disconnect, there's no expectation that this will ever change. Even worse, governments around the world develop and stockpile exploits, paying freelancers to discover vulnerabilities and keep them secret. That market is more active than the market for making software secure.
Roticap@reddit
Why are you spending tokens writing posts when you could be spending them vibe tuning your scanner?