Does your Security team just dump vulnerabilities on you to fix asap
Posted by flashx3005@reddit | sysadmin | View on Reddit | 494 comments
As the title states, how much are your Security teams dumping on your plates?
I'm more referring to them finding vulnerabilities, giving you the list and telling you to fix asap without any help from them. Does this happen for you all?
I'm a one-man infra engineering team in a small shop, but lately Security is influencing the SVP to silo some of the things that DevOps used to do to help out (creating servers, DNS entries) and put them all on my plate, along with vulnerability fixing, amongst others.
How engaged (or not) are your Security teams? What is the collaboration like?
Curious on how you guys handle these types of situations.
hitman133295@reddit
Yep, they have these fucking scans running daily, and the moment MS releases Patch Tuesday, they're all like "why you got a spike, fix it yesterday." Like mofo, give it some time to test too
flashx3005@reddit (OP)
Ha yea man I hear ya. Good to know we're not the only ones lol
kloeckwerx@reddit
It's no fun for them either. Imagine being responsible for security when the people that own the problem take the time to post about being harassed by security instead of fixing the security vulnerabilities that they need you to fix so you don't get compromised.
flashx3005@reddit (OP)
What happens when these vulnerability patches affect business applications? There's no digging into why or how app A is running or what would happen if a patch is to be applied to it.
My issue isn't really finding the vulnerabilities, it's the lack of knowledge of how certain ones will affect the business. I've only been here about 8 months, so I don't have a lot of legacy knowledge on how things run, but I do my best to find out as much as possible.
kloeckwerx@reddit
How did you even survive Log4j and Heartbleed? The vuln reports should be detailing the components and the minimum version that resolves the issue. If it's COTS software, like a product from a vendor, you check for updates and reach out to the vendor(s) for support stating "there are vulns, how do we fix them".
Just a weird thing to be upset about. I bet Home Depot, Target, and all the other companies with public data breaches had tons of people with your same attitude working for them.
flashx3005@reddit (OP)
Who said I was upset? I started a discussion about how others deal with their Security teams. The majority of people who replied feel the same way.
kloeckwerx@reddit
"there are two kinds of people. Those that can extrapolate from partial information and"
Idk man, they expect a base level of competency. I'd much rather have a list of what to fix than step-by-step directions on how to fix it. Knowing the application and the CVE is generally enough to figure it out 🤷.
Like, if I get a report saying "CVE-2025-22224 on ESXi", there's plenty to Google to find the solution.
https://www.cisecurity.org/advisory/multiple-vulnerabilities-have-been-discovered-in-vmware-esxi-workstation-and-fusion-which-could-allow-for-local-code-execution_2025-019
This just seems like such a silly thing to post.
flashx3005@reddit (OP)
I go out and find the vulnerabilities now and push back where it might break other production workflows. This isn't the issue.
My main point is the lack of collaborative effort. In smaller companies things can get piled on quickly. So if there are items those guys can handle, even smaller ones like getting the change approvals so I can patch a vulnerability, all that helps and fosters a better working relationship on both sides.
Idk man. I don't have any other items to point out.
teflonbob@reddit
Yes. We have a crack team of experts at using tools to find vulnerabilities, but with almost no ability or confidence to fix things or explain the issue beyond what the tool tells them. It's frustrating; we're basically creating an industry of tool watchers, not people who actually fix things.
DramaticErraticism@reddit
lol, right. These aren't crack experts by and large, they just use expensive tools the business purchased and then send another team a ticket to work on.
These aren't brilliant minds using their skills and intellect to triage, they are buying a platform and clicking buttons.
LUHG_HANI@reddit
Question is, why the fuck are they even in a job? Just add them to the IT team and dish out the role as a general job, like updates, because that's essentially what it is now.
BoltActionRifleman@reddit
Sounds like they need a meeting with The Bobs…
CornBredThuggin@reddit
That's exactly what my InfoSec team does. We have a regular meeting to go over the vulnerabilities. The guy leading it copies and pastes findings from other researchers. He regularly gets confused in the middle of the presentation because he didn't bother to proofread.
teflonbob@reddit
It's all performance art these days. Appearance of knowing your job.
wintermute000@reddit
Infra shitting on securiteh for not having a clue about how anything works or the context of anything is IT 101.
I laughed at your comment re: an industry of tool watchers
teflonbob@reddit
Yes. It's a very classic infra/ops view of security. There are rockstar security teams, I'm not doubting that, as I've worked with them in the past. However, I'm seeing a trend of the newer batch of security professionals not understanding the basics, as security in IT is the latest diploma-mill focus.
many_dongs@reddit
Those types of morons have always existed in the security industry; technically ignorant people trying to get a paycheck have always been around. The difference is the management hiring them. Don't blame the guy who doesn't know any better trying to fake it, blame the person who fucking hired them and authorized them to create work for you.
Intros9@reddit
Diploma mills are absolutely overwhelming InfoSec right now, and I'm tired of being sincerely asked to explain rundll32.exe to the next wide-eyed "analyst."
8923ns671@reddit
If there's anything I've learned working in IT it's that every IT team hates every other IT team.
teflonbob@reddit
And we all shit talk sales and sales engineers!
First-District9726@reddit
You're assuming that security doesn't somehow follow the 80/20 rule, which it does. Just as in every profession, 80% of the people in it are utterly worthless.
Draoken@reddit
By that same logic, 80% of the infra teams are utterly worthless too and it's their fault just as much. If the rule is concrete like that, apply it both ways.
First-District9726@reddit
I don't disagree at all. From my experiences, there's a lot of companies where you could fire 80% of the workforce (after identifying the ones worth keeping around), and have no noticeable difference.
moofishies@reddit
Good news is that the low-level security analyst positions are prime candidates to be replaced by AI in the near future. Those positions are not safe.
gunthans@reddit
Yep, with a deadline
alficles@reddit
Aye. I'm one of the annoying security people in my org. Here's roughly how it works:
Tools are used to find all vulnerabilities. Most of these vulns aren't exploitable because of configurations and usage modes. That XML library you're using might have an RCE, but if the only thing it's used for is loading settings from disk, you might be fine. Or maybe not, if there's a way to trick the program into writing its settings file incorrectly. For the vast majority of these findings, it costs the company less time to fix the issue than to be sure the vulnerability doesn't apply.
If the system owner indicates that the fix is expensive (in time, money, or whatever) to implement for some reason, there's a process for allocating more time, but again, most of the time, it's actually faster to remediate than to spend time in meetings ensuring that stuff is getting handled.
If a team doesn't have the resources (in time, money, expertise, and such) to handle routine security remediations, then the team doesn't have the resources to do their job. It's like if a restaurant said, "I can make food, but we just don't have the resources to handle the constant demands for cleaning!" We'd correctly say that the restaurant doesn't have the resources to do their job. This is unfortunately not uncommon, but it is fundamentally a problem that has to be solved by management.
And nearly every system owner has different processes and procedures for handling these remediations. Many systems can do downtime with no notice. Some have a complicated process to shift traffic and avoid downtime. Others have downtime scheduled in specific windows. Sometimes the straightforward fix will break the application and something more difficult has to be done. This is all stuff the system owner knows, but the security team doesn't. Nobody wants the security team trying to reboot live applications. :D
The biggest problem I see so incredibly frequently is business units that don't adequately staff their engineering teams. Everyone is cutting headcount so hard that systems routinely wind up getting "supported" by people who are already at 120% of capacity. Or, they have the headcount, but have failed to retain adequate engineering skill and have people who don't have the skills required to maintain their devices. And when that happens, teams wind up squeezed between security, which is asking them to remediate things, and their management, which isn't allocating enough resources to handle it.
The fix is usually to escalate upward to management. Basically, stop yelling at line cooks that the floor is dirty and go tell management that the cleaning isn't getting done. Because management is the one that can accurately measure and allocate their resources. And if they aren't doing a good job, escalate to someone who is. Too many security teams focus all their energy on the leaf nodes in the organization, creating tasks that aren't tracked by management. When this happens, it's doubly bad because management then doesn't give the teams "credit" for handling security tasks. I've even seen people disciplined for failing to meet objectives because they were occupied with mandatory security tasks. That is obviously dysfunctional.
21trillionsats@reddit
Love this response. As a former software engineer/sysadmin who's now higher in management, I need everyone in my org to understand everything you've said... including my own "bosses"
Acceptable_Spare4030@reddit
As much as folks like to talk shit about management, you've just described the legitimate, critical role of management!
I say this as a 30-year sysadmin with a security focus who can't get my management to understand (or more likely, put their neck out there for the sake of) this role. They just put the "fixes" on your task list and roll it downhill, potential damage to the org as a whole be damned.
Incidentally, this is also why I went out for management roles: to fill these gaps and make the system work as intended, pushing the burden back up the hill where it can be addressed with resources and planning. My org, however, prefers to only hire those who've never stuck their neck out for anyone or anything, thereby perpetuating the problem.
ButtThunder@reddit
This is the problem with security teams that don't have an IT background. We classify our vulnerabilities based on the threat to our environment. If a critical vulnerability comes out for a python library, but the lib lives on a system without public exposure, is VLAN'd off, and does not run on or laterally access systems with sensitive data, I might re-classify it as a medium and then the sysadmin or dev team has a longer SLA to fix. If we need help tracking it down from our sysadmins, we ask before assigning it. Pump & dump vulns piss everyone off.
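That kind of environment-based reclassification could be sketched roughly like this (a toy model; the factors and weights are invented, not the commenter's actual policy):

```python
# Toy sketch: downgrade a scanner severity using environmental
# context. Factor names and weights are made up for illustration.
LEVELS = ["low", "medium", "high", "critical"]

def reclassify(base, *, internet_facing, segmented, touches_sensitive_data):
    idx = LEVELS.index(base)
    if not internet_facing:
        idx -= 1  # no public exposure
    if segmented and not touches_sensitive_data:
        idx -= 1  # VLAN'd off, no path to sensitive data
    return LEVELS[max(idx, 0)]

# The python-library example above: critical on paper, medium in
# this environment, so the fix gets a longer SLA.
print(reclassify("critical", internet_facing=False,
                 segmented=True, touches_sensitive_data=False))  # medium
```

A real policy would key off the scanner's report fields and the org's asset inventory, but the shape is the same: severity is a function of both the CVE and the environment.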
mirrax@reddit
The other side of the coin is that even with an IT background trying to critically think about every vulnerability is more effort than just updating where possible.
hkusp45css@reddit
I've done professional InfoSec for 20 years. It has NEVER made any sense to me that some orgs will run down every CVE they can find to remediate.
Patch, protect your edge, manage directional network traffic, get a decent SIEM, have decent endpoint protection and validate all that shit.
If you can manage that, you're ahead of a lot of multi-billion, multi-national corps.
mirrax@reddit
Security comes in layers. And there can be diminishing returns on effort in a layer. In vuln management, it's impossible to be 100% patched as many vulns you can't patch your way out of. But patching what you can and then evaluating the rest is lower effort than death by papercut trying to analyze everything to death.
doll-haus@reddit
Like an onion, or more like a parfait?
mirrax@reddit
Like Ogres, there's a lot more to security folks than people think.
gjpeters@reddit
It's Ogres all the way down.
Superspudmonkey@reddit
I always say "security is like ogres "
alficles@reddit
Yup. There's an effectively infinite amount of security work you can do at any given moment. That's why it's important to have some security standards that define the "minimum acceptable security" that adequately balances risk and cost.
hkusp45css@reddit
On my desk, I have a plaque that says "Right-size your paranoia."
Security done completely is fucking expensive. Security done wrong is just a new vector or attack surface.
Do security right, and do *just* enough of it to meet your risk appetite and then, stop. No, no. Don't explain how cool it would be to add something else. Just stop.
In practice, elegant simplicity is much, much more secure than complex security platforms generally are.
The posture at my org is incredibly advanced for our size and value. However, it's dead fucking simple and that makes it effortless and sustainable.
alficles@reddit
Spot on! I may need to find one of those plaques. :D
hkusp45css@reddit
Ironically, bought off an Indian "etsy" site with a dodgy card processor and no TLS on the site. I just used a disposable credit card number.
badlybane@reddit
Never been in a place where upfront security was a thing. The company I am at now is just starting with a dedicated cyber guy. We are not even at the point where we have dedicated vulnerability scanning yet; we are just starting regular edge scanning. I can say for certain that in almost 15 years, it was social engineering that was the root cause. Yes, there were vulnerable systems, but those were not the points of entry.
Like the guy above said, but one thing I would add is end user training and engagement. Love that we have gamified it for department heads and executives to compare risk scores and exert social pressure at the top to improve.
TotallyNotIT@reddit
I've spent the last 7 months trying to get our shit under control enough that we can figure out what the signal-to-noise ratio actually is, to prioritize what's real.
When you're starting from way behind, sometimes running it all down is all you have until you know what the fuck you're even looking at. Then Patch Tuesday comes along and makes it all look like hell again.
TuxAndrew@reddit
It's a numbers game for the C-suite to measure bullshit. "Look how good our teams are doing at remediating vulnerabilities"
hkusp45css@reddit
This is why every time my CEO says "it would be neat if we could see all of our security dollars on a report, or a screen in the hallway" I flat out invoke the "we can't expose that kind of data, even internally."
Because I'm not about to spend an hour a day explaining to the CEO why something they THINK should be green is red, or vice versa.
When a metric becomes a target, it stops being a measurement and becomes a goal. That's bad for everyone.
MBILC@reddit
But it looks good on our reporting tool that we have a lower score!
mirrax@reddit
On the flip side of that pithy comment, that score is a useful tool as part of assessing risk.
MBILC@reddit
Agree, part of it but not the sole thing; companies will use that as a sole source of truth, though.
One client I worked with: every Patch Tuesday, scores would skyrocket (expected) and executives would lose their you-know-what. The patching process would get explained to them, how it works and the time frames, same as last time and the time before that... and how it had been done for years, with test then prod and end-user systems, et cetera.
randomman87@reddit
This is absolutely not true. You can and should just auto-update most things, but it is definitely not "where possible". Like it or not, pretty much every org has some hacked-together piece(s) of shit that will nuke itself if it's updated. Some vendors also aren't trustworthy enough to properly test before they release an update - looking at you, HP.
Bogus1989@reddit
Famous last words... CrowdStrike took down the world 😂.
randomman87@reddit
Huh? That's exactly what I'm saying, some vendors can't be trusted with auto-updates
Bogus1989@reddit
I still hate that companies and vendors most of us think are a joke... and we quickly realize they don't know more than we do...
It's probably been one of the biggest letdowns learning that in your career.
I worked with a lot of guys, retired now, from the time when vendors, and even MS, didn't fuck around.
IBM flying engineers out to study your issues and fix them... that type of shit.
randomman87@reddit
Completely agree. I work in finance, and the amount of shit software development/packaging going on for multimillion-dollar contracts... The business doesn't care because it performs X niche function that nothing else does
Bogus1989@reddit
It may sound stupid... but just like a shitty video game, it lacks an identity when it goes that way... 🤷
randomman87@reddit
Lol, fintech really isn't that glorious. It's garbage hacked together to meet regulations. You'd probably need to work for a bank to see a mainframe still in use.
Bogus1989@reddit
my bad,
im agreeing with you.
mirrax@reddit
That's what I was trying to say. Spending time having someone think about whether easily patchable things should be patched isn't valuable.
This is where the effort in critical thinking should be spent. Consider the scenario: the scanning tool says there are X vulns.
Scenario 1 (that ButtThunder is advocating for):
Security team that is staffed by all knowing wizards that understand all systems and their interactions analyzes every single vulnerability one by one and determines action plan.
Scenario 2 (that I am advocating for):
Scan list is passed to SME teams to patch what they can. Teams patching systems can have automation or release schedules as needed. Things that can't be patched away are identified by the team as exceptions. Those exceptions are evaluated in collaboration with the security team to assess, existing mitigations, risk profile, and effort to remediate.
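Scenario 2 can be sketched as a simple split (hypothetical CVE ids; real triage would key off the scanner's report fields):

```python
# Minimal sketch of Scenario 2: SME teams patch what they can on
# their normal cadence; only the leftovers come back to security
# as exceptions for collaborative risk review.
def triage(findings, patchable):
    patch_queue = [cve for cve in findings if cve in patchable]
    exceptions = [cve for cve in findings if cve not in patchable]
    return patch_queue, exceptions

queue, exceptions = triage(
    findings=["CVE-2025-0001", "CVE-2025-0002", "CVE-2025-0003"],
    patchable={"CVE-2025-0001", "CVE-2025-0003"},  # routine updates
)
print(exceptions)  # only CVE-2025-0002 needs the joint assessment
```

The point of the split is where the critical thinking lands: on the short exceptions list, not on every finding one by one.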
randomman87@reddit
Middle ground I like is InfoSec is responsible for implementing patching plans with all application owners.
ButtThunder@reddit
Agreed, in larger environments it may not work due to too much complexity with fewer wizards. But I would hope that InfoSec communicates to the infrastructure teams doing the work the value of patching within SLA- usually due to compliance requirements. I probably shouldn't have assumed Op's org was small-medium.
mahsab@reddit
For some things, updating is trivial.
For some things - especially software libraries - it's a breaking change. And sometimes, it's such a big breaking change it can take MONTHS to update.
mirrax@reddit
That's what I was saying. Security identifying the list and passing it to SME teams to fix isn't the problem. Expecting Security to know all of what is or isn't trivial for all systems and libraries isn't reasonable.
Pass the list to the SMEs, let them patch the trivial and report the exceptions. Then security can work with the SME teams to spend the effort on critical thought on identifying the risk level and level of effort to remediate, working with management to allocate time and resources as needed.
mahsab@reddit
I think the best path would be somewhere in the middle.
Security would make a list, go through it and, where available, already extend it with information that would make remediation and risk identification easier.
I'm on both sides, and when I identify a vuln., I do some basic digging and try to find (and share) at least:
In that way it does not steal the focus of the other teams, because they can plan and estimate this much more easily than if just a list is dumped with a high priority and then everything has to be dropped so it can be evaluated even on a basic level.
mirrax@reddit
Most of that information is usually in the reports generated by the scanners. And when it's not, the SME for the system or the patching tool is going to have a lot easier time getting the information. Depending on the size of the organization, those scans probably have multiple desktop and server OS packages, large number of COTS applications, network appliances, hypervisor or container orchestration platforms, container images, and internal application dependencies.
From personal experience, I remember some developers chafing because security gave them a long list of CVEs to fix on externally facing applications. One was a 9.8 CVE in Spring 5, for which the specific upgrade path is to upgrade to Spring 6 (basically a total rewrite). But that list also had a bunch of OS vulns in their container image.
But they were responsible for their app; for that particular CVE, it meant going through and inspecting code for behavior, because it wasn't going to get fixed in Spring 5. The pain of continuously checking that no one used that behavior in every release should be felt by that dev team. Same goes for the container image OS vulns: continuously revving base images that have 10-15 false positives from packages that aren't even used but trigger the scanner. With the pain in the right place, the developers had the ability to pick a different base image that didn't include extra packages and reduced the attack surface.
The same goes for the other areas, I don't think security should have to be in every single vendor's download portal and knowledge base and be expected to know everything. If some shitty appliance is a pain to update, the administrator of it should feel that pain and complain to their management that system should be replaced.
This isn't to say that knowledgeable security folks aren't worth their weight in gold, because working across teams to do attack surface analysis and threat modelling, and coordinating the appropriate mitigations to manage risk, is so important. But unless it's a small org where people have to wear all the hats, placing both the accountability and the responsibility on security puts the pain in the wrong place and punishes the effort of reducing risk.
dougmc@reddit
But you kind of need to do both. Sure, stay up to date on patches. But when something new and serious comes out, you should still think about how it might have affected you, and what you could have done to protect against it before it even became a "0-day".
But it's more fundamental than that -- you kind of need to have security in mind when building and maintaining stuff. Not so much regarding specific vulnerabilities, but just security principles in general -- sanitize your inputs, disable unused services, lock hosts down as appropriate for their role, monitor for unusual activity, etc.
And I think that even the security guys tend to miss that when they don't come from an IT or development background. Still, they nag people to install their patches, and run scanning tools and send spreadsheets with the results, and that's useful too.
mirrax@reddit
Security undoubtedly comes in layers, and you're right that reactively patching vulns isn't enough. However, scanning for vulns and passing a list to get patched is a low-effort checklist activity that can identify places where additional layers are needed.
And honestly, sometimes the nag is needed; own enough systems with enough dependencies and it's just not possible to know everything. A scanned list can identify places to reduce risk and attack surface. Take a container image, for example: building on a full distro has a greater attack surface than, say, scratch or distroless, and a big list of vulnerabilities exposes the depth of that attack surface and can justify the engineering effort toward reducing it.
tl;dr No, mindless scanning doesn't solve security, but it is useful.
dougmc@reddit
We seem to be in complete agreement.
Typical80sKid@reddit
Having an IT background is a bonus, sure, but whether you have it or don't doesn't really matter. Vulnerability management is critical for business operations, especially when you deal with sensitive data or valuable IP. If your security folks are letting you slide on vulns because they get how hard it is to patch certain systems or update/add GPOs, then that is not a good position to be in. Collaboration between teams is critical, and buy-in from the top down is imperative.
mirrax@reddit
Not sure if we are talking at the same wavelength. Because my comment was saying rather than spending effort analyzing every vuln; if possible, patch it rather than analyze. That then leaves vulnerabilities that can't or are difficult to be patched for the critical thinking. That's not advocating for letting vulnerabilities slide or devaluing vuln management.
It's the same thing on the development side: library version 1.2.3 has a vuln when you do xyz. The scanner says it's fixed in version 1.2.4. The choice is to check that the application doesn't do xyz, now and in every release, and maintain a whitelist, or bite the bullet and update it. Updating, and making it easy to update, allows effort to be devoted to mitigations on exceptions.
SAugsburger@reddit
I have worked at orgs where they honestly didn't even know whether the CVE applied at all, e.g. a vulnerability in XYZ feature that we don't have enabled. Anybody who read the vendor article would know how to search for the relevant text strings in the config, but alas, they're just using a scanner that assumes version < patched version means 100% of the CVEs that exist for that code apply, even when they only apply to certain configurations.
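The gap being described, version-only matching versus actually checking the config, might look like this (the feature name and config syntax are made up for illustration):

```python
# A version-only scanner flags every CVE for the code line; a
# config-aware check also verifies the vulnerable feature is on.
def version_only_flags(version, fixed_in):
    return version < fixed_in

def config_aware_flags(version, fixed_in, config_text, feature="xyz-feature"):
    if not version_only_flags(version, fixed_in):
        return False
    # Hypothetical vendor advisory: only exploitable when enabled.
    return any(line.strip() == f"{feature} enable"
               for line in config_text.splitlines())

config = "hostname core-sw1\nno xyz-feature enable\n"
print(version_only_flags((1, 2), (1, 3)))          # True: flagged anyway
print(config_aware_flags((1, 2), (1, 3), config))  # False: feature is off
```

Exact-line matching matters here: a substring search would wrongly match the negated `no xyz-feature enable` line.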
Nova_Aetas@reddit
Are there teams out there with no IT experience and no prioritisation?
I can look at an environment and see 20 vulnerabilities, but it doesn't mean they're all likely to be exploited or equally dangerous. Who are these people getting hired and doing this?
hkusp45css@reddit
Whether a security team understands risk appetite doesn't really seem to have anything to do with whether they have an IT background. That's a straight security principle with almost zero overlap into the ops space, other than that it takes place there.
Honestly, I wouldn't hire a security pro who ONLY had security experience.
It would be similar to hiring a painter who only knew carpentry. Sure, it all happens on wood, but the knowledge of one thing doesn't give you any insight into the other.
alficles@reddit
Yeah, a lot of teams do security kind of backward. It's almost always easier to teach a domain engineer how to do their job securely than it is to teach a security engineer every domain they might need to deal with. The security team should be there to identify and support, but the system owner should always be the one calling the shots. Security isn't a thing you do, it's a way you do things.
BreathDeeply101@reddit
Kind of how it was always easier to teach networking admins VOIP phones than legacy voice admins networking to support VOIP phones.
hkusp45css@reddit
Boom. Headshot.
ThatITguy2015@reddit
Body was later pumped and dumped.
alficles@reddit
Situation is now secure!
purefire@reddit
I agree, and I re-assess as much as possible, but I don't have a good programmatic way to consider the compensating controls that can't be detected by the VMDR platform. Any recommendations?
flashx3005@reddit (OP)
Yup, agreed. Then throw in any NIST policies they want to implement, or compliance requirements for said industry, and now you're looking at months of things thrown on your plate.
MBILC@reddit
This. Score-based ratings are often meaningless if the actual impact and exploitability within your own environment aren't understood.
I think most of us have gone through this: new exploit drops, high-rated CVE, but someone needs physical access to the physical server with a local root account to even exploit it...
Then you suddenly get higher-ups telling you to drop everything to patch it now because they see a spike in scores from whatever monitoring tool...
You then explain that someone would need to access the very, very secure datacenter first, then prove they are authorised to access said rack/servers, and have the root account, and they still don't care...
carlos49er@reddit
Exactly. We finally convinced our leadership that we're literally causing our own outages by being forced to patch outside our normal patch cycle. Also, we spend tens of millions on security people and security products; where's our ROI? If we can't rely on that, then YOLO it with MS Defender and give us raises!
mirrax@reddit
I think you've identified the problem, and it isn't that there was a scan of CVEs and a list passed to SME teams.
OMGItsCheezWTF@reddit
We get it appearing on our Jira board with a pre-assigned priority, but we can do a triage / threat analysis and go back with a revised priority, which changes the deadlines. I have one open on my board at the minute that was highly critical, as it was a remote execution issue, but it's in a dependency of a dependency of our local-only test suite and only applies if you do something with it that the consuming library does not do. That went from "must fix within 2 weeks" to "very low / not affected" and "must be fixed within 6 months".
But our security audit findings are contractually reportable to our customers on demand so the company is pretty shit hot on getting them done in general.
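That kind of priority revision could be sketched as (field names invented, not the commenter's actual Jira workflow):

```python
# Toy sketch: a critical RCE gets revised down when the vulnerable
# dependency never ships to production and the risky code path
# isn't used by the consuming library.
def revised_priority(base, *, ships_to_prod, vulnerable_path_used):
    if not ships_to_prod or not vulnerable_path_used:
        return "very low / not affected"
    return base

# Dependency of a dependency of a local-only test suite:
print(revised_priority("critical", ships_to_prod=False,
                       vulnerable_path_used=False))
```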
Acceptable_Spare4030@reddit
This is just the modern propensity to mislabel a Compliance team as "Security." They're just doing CYA and creating a paper trail to protect the org in case of disaster. Not necessarily to find the lowest guy on the pole to hang out to dry in case the worst happens (though never rule that out, either!) but definitely to show the insurance company that the organization has a process to address vulns and you were "doing your best in accordance with modern standards(tm)"
It's not a terrible thing IF your org also has a separate Security team who can be called on to assist in remediating any vulns they identify. Since most companies skip that part, what you have is an elaborate industry kayfabe and no legitimate security plan under the hood.
kop324324rdsuf9023u@reddit
It's not your job to reclassify vulns.
bingle-cowabungle@reddit
This was always going to be the ultimate result of these stupid cybersecurity bootcamps and the security bachelor's degrees they offer at community college, which are just glorified ITIL and project management programs with a thin veneer of "security" attached.
When you try to sell education in cybersecurity like it's some get rich quick scheme by avoiding IT fundamentals, networking, routing/switching, and only teach project management and incident response, this is what the industry is going to be flooded with.
sysad_dude@reddit
Dis guy gets it
415BlueOgre@reddit
30 days or you fail
bleckers@reddit
Those deadlines are bull, you know that right?
gunthans@reddit
Shhhh, it keeps the auditors happy, but we all know they don't carry any weight
bleckers@reddit
As if we need to keep the auditors happy. If they want to audit, let them do some creative auditing, rather than micromanage like an asshole.
Maximum_Bandicoot_94@reddit
Even funnier if the vulnerability was publicly released a couple years ago. They only just now found it on an external scan!
The comedy is that they themselves would have known about it a year+ ago had they been tracking vulnerabilities against the versions of code in prod, which is plainly a feature of one of the tools they have. Have, but don't use properly, because the 2 folks who were trained on it left more than 6 months ago.
alficles@reddit
I once got pulled into an emergency meeting on a Friday afternoon. The team was trying to get an immediate RCA on a vulnerability so they could get approval for a Friday evening release. The team saw that a change ticket had been filed to rotate a key that had been exposed and immediately shifted into high gear.
Once I figured out what was going on, I informed them that this wasn't that urgent. The key protected something that wasn't terribly important, and a compromise would have been no more than a bit annoying. We should rotate it, but it's not a big deal.
The team pushed back and said that exposed keys have to be dealt with right away. That's when I pointed out that the key had been exposed for the last ten years and if someone had it and was doing bad stuff with it, they were probably done a while ago.
There are a lot of reasons it took ten years, but mostly because it was actually really hard to rotate it (genuinely terrible legacy design, nothing like it would ever get approved today) and the thing it protected wasn't a big deal so it got very low effort. The team pushed back, asking who had found the exposure and why hadn't they followed up.
I opened the ticket and showed them it was originally filed by the guy that was now the team lead of the remediation team, back when he was a junior engineer ten years ago. :D He had no recollection, which is quite reasonable.
We deployed the next Monday afternoon as planned.
bender_the_offender0@reddit
Yup, got one recently; the deadline was weeks ago. Double-checked when they opened the request, and it was that same day…
anomalous_cowherd@reddit
Our official policy had deadlines for patching based on when CVEs were released and their severity. The deadlines did NOT take into account whether patches were even available yet.
Joestac@reddit
Yup, 30 days and you lose network access here.
DarthPneumono@reddit
And they absolutely 100% should, for things that are sufficiently scary. If they can't differentiate what's scary that's a problem.
halofreak8899@reddit
The deadline: ASAP
tdhuck@reddit
Same here and I'll flat out tell them I don't care about their deadline, it means nothing to me as many of their 'requests' would require change management.
I had a 1 on 1 with my boss about this. I politely told him that the security 'team' might have good intentions, but they need to understand the risk level, as well. We can't just 'update everything' overnight because they want their scanner results to show 0 threats, it just doesn't work that way.
I had to explain to the security team (politely) that they need to focus on issue severity as well. For example, public facing services are much more critical than a single, internal device that nobody has access to with a CVE of 4.
The security team telling you to patch everything now is the same as an uninformed manager/CEO that says 'all things must be AI by noon tomorrow!' which obviously isn't realistic.
tacticalAlmonds@reddit
Hey, at least we don't have deadlines. It's just "this needs fixed soon." Sure thing, buddy.
mycall@reddit
Soon is the best deadline.
BrokenRatingScheme@reddit
I prefer soon-ish.
jac4941@reddit
Yeah, yesterday. Always needing it yesterday. Despite all the work we're trying to keep up with. We've been working hard to track everything and at least be able to ask "which of the other critical, need-it-now, high-priority items that we're currently executing on for you should be paused to accommodate the new high-priority thing?"
Em4rtz@reddit
Yeah my security team is basically all dudes who went into cyber straight from college and have no idea of the consequences of their policies or rushing vuln fixes
flashx3005@reddit (OP)
Yup exactly this. No basic Infra knowledge at all.
bfodder@reddit
What help do you need/expect from them?
No_Solid2349@reddit
You need to ask them:
- Do you want me to stop providing standard support for this activity?
- Let's remove all unmanaged apps.
- Could you please share what the security team is implementing to prevent users from installing unmanaged applications?
PghSubie@reddit
Are you wanting the Security Team to be installing patches on your system(s) on their own??
Hotshot55@reddit
I mean that's kind of the point of you owning the OS, you get to define the remediation process for it. You are supposed to be the subject matter expert.
Would you rather have the security team give you exact instructions on "fixing" things even if it'd make your environment unusable?
Ultimabuster@reddit
Maybe it's different at other orgs, but at my org that seems to be like 95% of what the security team even does. They could be replaced by an automated report from the CVE scanner.
flashx3005@reddit (OP)
They'll list the remediation but don't understand the consequences of it. I don't mind the work, but more collaborative effort would be better. Them finding 20 vulnerabilities and expecting them fixed asap on top of everything else isn't helping anyone. That's my gripe: the lack of support.
FanClubof5@reddit
We are there to manage risk, so if you end up with a vuln that you, as the expert, think is not reasonable to fix, then usually you just need to submit a risk exception and make sure that management all signs off on it.
flashx3005@reddit (OP)
Fair point. Maybe it's me, but it just feels like they give a list, say fix it, and report back. Seems like just another upper-level manager telling me what to do without the support. That's probably what gets me annoyed the most.
BeanBagKing@reddit
That's to be expected. I see a lot of people bemoaning security teams that have no idea how to patch something in this thread, but even a technical security team can't be systems experts on everything. A reasonably sized business might have a person or two each for Linux, Windows, network, hypervisor, and databases. Some roles might cross, e.g. the Linux guy takes care of databases too. In general though, unless it's a very small company, you wouldn't expect one person to be doing all of those jobs. Never mind the actual software that resides on those systems. That is why the actual application of the fix gets handed over to the system experts.
One thing I noticed here is that you haven't really said what you do want help with. The technical buck stops with you, so what support do you want from them? I'm not saying there isn't anything; there are ways they could offer guidance or help, but there aren't enough details here to tell specifically what you want.
I can't tell (coming from the security side) if there is something wrong here or not; it's highly dependent. Are they pushing 20 vulns to you and saying fix these all asap because they are actually things that are really bad and do need to be fixed ASAP? Is it 20 things that aren't so bad, but indicate a larger underlying problem (e.g. Windows not being patched)? Or are they 20 esoteric libraries across that many systems that are all behind a firewall? Is the list of remediations there because the report included it, so why not, or are they genuinely trying to be helpful (regardless of the report inclusion)? i.e. what was the intent?
It sounds to me like there does need to be collaboration, but that needs to come from both sides. They need to know how they can help you, and they need to provide that help. At the same time, it's likely that they need help from you beyond applying fixes (whether they realize it or not) in the form of what is important so they can prioritize things. For instance, which systems are business critical, which systems hold the keys to the kingdom or can't be down for more than 30 minutes? Versus those that can go down for a week or more without any serious disruption. Both teams probably also need help from the application and data owners to decide these things.
As other people have mentioned, you also need a set of policies to help guide all of this. How many business resources does the company want to put into vulnerabilities? How many of these resources are yours (your time), and how many come from security? It's not in the business's best interest to have either side hand-verifying every CVE (/u/alficles 's post was great, please read it). E.g. mass patch what you can regardless and then circle around to what's left. At the same time, if everything is a priority then nothing is, so the security team should be able to assign priorities and determine false positives when you get to that stage. These priorities may also be adjusted by your input. There should also be a process for going outside the expected SLA/priority. "This major thing just hit the news" kind of issue.
My suggestion would be to make two lists. One for your manager and one for the security team.
How can your manager help you? e.g. How should you allocate your time, who should be assigning work to you, should there be a policy. These are all things I feel like they should be handling. "You should be spending X hours on this, it's fine for security to assign you X hours worth of work, there's no point in having a middle man here. If it goes over, it has to come through me. I'll work with security to draft a policy", etc.
How can the security team help you? They probably aren't going to know how long something might take to fix, so with that in mind do you want them just to give you one thing and you work on that until hours are exhausted or it gets fixed, then get another? Do you want them to give you a priorities list and let you work through it? Is there additional information they could provide? What do they need from you?
alficles@reddit
Right. It's not their job to understand the consequences of the fix, usually. That's usually your job. They can tell you all day long what an RCE is and why you want to get rid of it, but they won't be able to tell you whether it's ok to reboot your server or what would happen if you disabled the HTTP redirect.
It sounds, though, like you don't have priority alignment. If management wants you remediating security findings, they may need to tell you which other work to not do. Or they might have to hire help.
The comment above (or below, who knows what votes will do? :D) about having SLAs is also important. That said, I think a lot of teams have SLAs that are longer than the risk justifies. I've seen the systems of a small business compromised by threat actors because they waited 48 hours to apply a patch for a CVSS 6 (medium) CVE. The risk may be low, but it's never zero.
short_tech_support@reddit
If you're understaffed and overworked your criticisms may be better directed more towards management.
The security team might just be trying to keep their head above water like you?
jpnd123@reddit
This should be decided on by your leadership and have SLA.
For example: CVSS 9-10 gets 7 days, 6-8 gets a month, and below that 90 days
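A tiered SLA like that is straightforward to encode. A minimal sketch; the tier boundaries mirror the example above and are policy choices, not a standard:

```python
from datetime import date, timedelta

def remediation_deadline(cvss: float, found: date) -> date:
    """Map a CVSS base score to a patch-by date under an example SLA policy."""
    if cvss >= 9.0:
        days = 7    # critical: one week
    elif cvss >= 6.0:
        days = 30   # high: one month
    else:
        days = 90   # everything else: one quarter
    return found + timedelta(days=days)

# e.g. a 9.8 found on 2024-06-01 is due 2024-06-08
deadline = remediation_deadline(9.8, date(2024, 6, 1))
```

Having the policy as code also means the ticketing system can stamp the deadline automatically instead of someone arguing about it per finding.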
MrSanford@reddit
STIGs back in the day
Hotshot55@reddit
Shit, STIGs today still tbh.
MrSanford@reddit
I think fully STIG compliant systems have some useable functionality now, lol
NegativePattern@reddit
I have to provide instructions otherwise IT won't know how to remediate the vulnerability. If it's a patch, they're good about patching it since patches are automated. However, if the fix requires extra steps to clear it, they whine and complain about having to touch the server or additional work. So I do my best to get them the exact steps so there's no complaining.
natflingdull@reddit
I agree that the remediation process should be determined by the admin, but IME security teams will simply point out a vulnerability that may reference very advanced concepts, or the vulnerability may be so vague that it isn't actionable. It's up to admins and security professionals to work out the how, why, and when together. Admins should know how to research and understand a CVE, but security pros need to work with admins to help determine if the CVE is legitimate and how the remediation should be prioritized.
SandeeBelarus@reddit
It's a fair point. But knowing certain things, like that OCSP and CRL lookups use HTTP by design and that HTTPS isn't required, or what level of cipher suites go with TLS 1.3, etc. Lately I have had to do more education than remediation with the new crop of infosec analysts.
Shotokant@reddit
Yes. Always pissed me off. They get a nice security contract run a scan then just pass the findings to the sysadmin to repair. Wankers the lot of em.
JC0100101001000011@reddit
Yes!
Ziggista@reddit
Does your security team just send the vulns to you, or do they actually risk-assess each vuln and give you a score/priority to fix in accordance with your business risk tolerance/plan?
Reynk1@reddit
Not sure what else you expect them to do? Part of the role is to identify and call out vulnerabilities
Having them perform updates on systems they don't operate would likely end in tears
reaper987@reddit
Given the time it takes to patch or fix even simple issues, I would love access so I can do it myself. I also love when a newly deployed server "kills" our dashboard with two years of missing patches.
"It's behind firewall" are famous last words. Especially when lots of network departments configure them with Any:Any rules.
TimTimmaeh@reddit
Focus is on monthly patching, which brings the number of vulnerabilities at the OS level down to zero every month.
Server owners get, also monthly, the list of vulnerabilities to address.
There are no constant follow-ups on the latter, that wouldn't be possible, but based on policies (bla bla) the server owners are held accountable. So in case something goes wrong….
Well, and for real criticals (9+) there ARE follow-ups with upper leadership, plus an isolation VLAN if not resolved.
ReptilianLaserbeam@reddit
I mean, if there's a vulnerability that needs to be fixed, the whole company, your livelihood, is at risk. So yes, that needs to be fixed asap. Put aside everything else and focus on the task at hand.
tonkats@reddit
Our security guy dumps stuff on me with no plan. Last year, he hired another bro to go to meetings with him to get swag and look important. Sometimes he buys expensive products that do the same things our other products do.
The extra dumb thing is he has skills, he just doesn't really use them for real work that needs to be done.
Calabris@reddit
The boss of our compliance dept. said outright: we are not supposed to fix anything, all we do is shift the target. So yeah, they would dump vulnerabilities on us and then bitch that they weren't remediated right away.
flashx3005@reddit (OP)
Yup sounds about right.
p3ac3ful-h1pp13@reddit
Yeah brother, all the time. Qualys can be a bitch. I'd recommend starting from the CVE ID and whatever path/file it flags. If you don't mind using Ansible, or shell/PowerShell scripting, automate your solutions and use CI/CD pipelines to deploy to all of the affected hosts. Good luck and lmk if you need any help.
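Before any Ansible or CI/CD run, the scanner export usually needs a triage pass. A small sketch of grouping scanner output by host so each remediation run targets one machine's full list; the `host`/`cve` column names are assumptions, adjust them to your actual Qualys export:

```python
from collections import defaultdict

def group_findings(rows):
    """Group scanner rows (dicts with assumed 'host' and 'cve' keys)
    into a host -> list-of-CVEs mapping for per-host remediation runs."""
    by_host = defaultdict(list)
    for row in rows:
        by_host[row["host"]].append(row["cve"])
    return dict(by_host)

# illustrative export rows, not real scanner output
findings = [
    {"host": "web01", "cve": "CVE-2024-0001"},
    {"host": "web01", "cve": "CVE-2024-0002"},
    {"host": "db01",  "cve": "CVE-2024-0001"},
]
plan = group_findings(findings)
```

The resulting mapping can feed an Ansible inventory or a pipeline matrix so one patch job per host covers everything flagged on it.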
Fire_Mission@reddit
Security finds the vulnerability. Sysads fix it. Security doesn't know your applications like you do. It's on you.
flashx3005@reddit (OP)
I'm talking beyond the network/infra stuff. When there's a business app, they find the vulnerability and want it fixed without knowing what issues the fix might or might not cause for the app.
Fire_Mission@reddit
Yeah, they need to back down. Patches/fixes need to be tested against the app. We don't test in Production. And they should know this.
JerryRiceOfOhio2@reddit
yes, they just look at news articles once in a while and scream that other people need to fix stuff
bleckers@reddit
Have they been celebrating System Administrator Appreciation Day? If not. Tell them that you ain't fixing anything until you have had your coffee.
RegisHighwind@reddit
Mine isn't too bad. Mostly because I have a tendency to stay on top of them myself. And mostly because of Reddit, I see vulnerabilities before they do. Enforcing down time windows and regular patching also helps a ton.
West-Delivery-7317@reddit
You guys fix your vulnerabilities?!?
flashx3005@reddit (OP)
😂😂😂
pertexted@reddit
I've worked in orgs that function that way. I've also worked in orgs where someone has to elevate a security/CVE/KB/urgent impactful fix in Change so as to process it as a low-planning event.
I probably just prefer to be left alone, but it's sort of part of the whole risk management thing.
Neat-Researcher-7067@reddit
Yes!!
macemillianwinduarte@reddit
Yep. "cyber" is the new "learn to code" for people who are tired of working retail. They have no critical thinking skills or IT background, but they can forward a Nessus finding. I don't expect them to fix vulnerabilities, but I do expect them to understand that our RHEL servers aren't running google android.
Reverent_Revenants@reddit
I spoke to this 19yo dude that's working on a cybersec degree, and he's wholly convinced he will get a cybersec job with the FBI/CIA making $150k after he graduates lol.
"Bro it says right here on thus anomolous website you only need a bachelors in cyber security"
SafetyWorking3736@reddit
hey, security guy here.
What I struggle with a lot with other teams is they generally don't engage us in architecture design until the design is in a change advisory board meeting.
Our function is to recommend best security practices and mitigate risk, so if you don't involve us early on in planning, you will feel like you have to make changes quickly before go-live dates.
"No Tim, your admin console should not have default credentials and be exposed to the public internet without MFA"
Our job is also to not do your job, so yeah, you have to fix it 🙂
Live_Bit_7000@reddit
How do I get one of these jobs?
SafetyWorking3736@reddit
You probably already have it, just don't do work that's not yours.
I'm not setting up an app/architecture that's not mine, and it's the app owner's responsibility to do things correctly according to proper standards in order to follow policy, assuming your organization is mature enough to have standards and policies.
What ends up happening if you "hands-on" fix everything is people become lazy and just whine at you to do it for them, and get mad because you don't tell them how to do their job.
Technically, as a security guy, your job is TO ONLY say "no, this is not right, do it right".
If you want to be nice and provide the "no, but..." answer, that's also good and preferred.
Long story short, don't take on additional work just because of your coworkers' incompetence.
Live_Bit_7000@reddit
I want to be a security guy to boss others around
SafetyWorking3736@reddit
lol, typically we can only boss people around if we have a governing authority practice that supports and enforces our authority; if not, then what ends up happening is a breach because Tim wanted a power plant accessible from his home office
Reverent_Revenants@reddit
This thread made me realize security guys must perceive admins the way admins perceive users.
No_Resolution_9252@reddit
It's your job to manage them. Start enforcing better practices, design, and policies. No one other than sysadmins should ever be remoting into servers, and certainly not browsing the web in Google Chrome on them. The DBAs don't need every single SQL Server feature installed on their servers. Developers don't need every single feature of IIS installed, or a mess of third-party crapware installed on production web servers, etc.
letshaveatune@reddit
Do you have a policy in place, e.g. vulnerabilities with a CVSS3 score of 8-10 must be fixed within 7 days, a CVSS3 score of 6-7 within 14 days, etc.?
If not ask for something to be implemented.
tripodal@reddit
Only if the security team verified each one first.
If they can't prove the CVE is real, they shouldn't be in security.
PURRING_SILENCER@reddit
Lol. My security guy can't even determine if a vuln report from Nessus is a real risk, let alone address it.
We are constantly bugged about low-priority BS "vulns", like appliances used by our team and only our team with SSL problems. Like self-signed certs. Or other internal things where we can't configure HSTS.
Like, guy, I'm working three different positions and everything I do is being marked as top priority by management and due yesterday. I don't give a rat's ass about HSTS on some one-off temperature sensor that's barely supported by the manufacturer anyway. We already put controls in place to mitigate the issues. You know this, or should anyway.
alficles@reddit
This is a management problem, not primarily a security one. Of course your security person isn't an expert in your system specifically. And if the security team isn't being driven in alignment with the needs of the business, then management needs to set them straight. If management, though, has told them that all your certs need to chain to a public root, then they're following the instructions they've been given. If management then doesn't give you the resources to do the work they want done, then they have set you up for failure.
I've seen some places issue sweeping mandates for stuff like "everything must use TLS" because they conclude that it's cheaper to force everything to comply than it is to do the security analysis required to determine which things should be in scope. Sometimes that's true, often it isn't. But if management never made bad decisions, what would they do all day? :D
PURRING_SILENCER@reddit
Yeah, it's such a small team that the security guy is part of the management team. He drives much of this conversation. And it's only him doing security, with the lofty title of CISO. He's not qualified for it. Also, there is no mandate for anything. I'm a level or two removed from leadership, and I would be part of those conversations and likely inform them.
But in larger orgs your statement likely stands
alficles@reddit
Oof. Yeah, reading some of the updates here, the pile of CVEs is a symptom of a drastically more serious problem. I like computers cause I can fix them or throw them away. That approach is so much harder when the broken thing is a manager.
Angelworks42@reddit
Nessus is kind of bad as well - back when we used it, it seemed to have no ability to tell the difference between Office 365 and Office LTSC.
airinato@reddit
I don't think I've ever even seen an infosec department do more than run vulnerability scanners and transfer responsibility for that onto overworked mainline IT
ExcitingTabletop@reddit
I'm still pretty surprised that the general reputation of security guys went from the sharpest to the least. I know "back in my day", but growing up, security had more researchers and a lot less grunt infosec work. But even the least tended to be very experienced.
Now they just hit the button and email the results way too often.
MalwareDork@reddit
I noticed it's drifted into two extremes.
First is that companies have so much tech debt or so little concern over their equipment that all you need is some bored kid using metasploit to blow up your server. The fart button is good enough because the company is garbage.
Second is that the smart folk are tied up somewhere else, essentially being the proverbial Blackwall from Cyberpunk. AI-generated malware for Rust and Golang is starting to become more and more commonplace and really gums up signature-based detection. You can't just throw it in Ghidra either even with a LLM driving it. This isn't even touching on how to detect artifacts in deepfaked material and how to defend against it.
ExcitingTabletop@reddit
The Learn To Code movement fucked IT for a decade or so. Part of that was bootcamp corporate slop, which got worse when that bootcamp slop got tied into the university system. I think this was a supply issue more than a demand issue.
Pretty good vid on the subject:
https://www.youtube.com/watch?v=bThPluSzlDU&ab_channel=PolyMatter
SammyGreen@reddit
I've been a security consultant for six years and tbh I don't actually do a lot of, what I would consider, IT security.
I find myself doing standard infrastructure stuff but not being an idiot about it. Otherwise I manage pentesters and write-ups, but I haven't done any actual penetrating in years.
But hey, if clients want to pay a premium just because I have the word "security" in my title, I'm not going to say no to more money.
ExcitingTabletop@reddit
These days I write more SQL than anything else. But I still give presentations on the history of physical security and it's fun.
SammyGreen@reddit
I've been a security consultant for almost five years and tbh it's only a title haha.
I find myself doing standard infrastructure stuff but not being an idiot about it.
But hey, if clients want to pay a premium just because I have the word "security" in my title, I'm not going to say no to more money.
Vynlovanth@reddit
Guessing it went from people who were seriously interested in the internal workings of systems and focused on drilling deep into vulnerabilities and malware, to now it's a lucrative job that you can get some type of post-secondary education in, but the education doesn't give you any sort of practical experience in systems. You don't have to know what Linux is, or x86 versus ARM, or basic enterprise network design.
The best security guys are the ones running homelabs that have an active interest in systems and networking.
YourMomIsADragon@reddit
Yes, but does yours actually run the vulnerability scan? Ours does sometimes, but also just reads a headline and throws a ticket over the fence to ask us if we're affected. They have access to all the systems that would tell them, if they bothered to check.
mycall@reddit
You can blame cybersecurity insurance for that.
RainStormLou@reddit
We hired a consultant for extra hands because I'm too busy as it is, and that's been my experience too. We specifically looked for a pro that can validate and implement changes. We didn't realize that implementing and validating meant I'll still have to do it all lol. If that was the case, I wouldn't have hired someone! I already know what needs to be done, he's basically just retyping the vuln scans that I already ran before we brought him on!
Asheraddo@reddit
Man, so true. I hated my security team. No help from them. But they were always whining and telling us every day to fix some "critical" vuln.
Spike-White@reddit
We have an entire form and process for False Positive (FP) reporting since the vuln scanners make frequent false allegations.
Example is calling out an IBM Z CPU specific bug in the Linux kernel when we run only AMD/Intel CPUs. Even a basic inventory of the underlying h/w would have filtered this out.
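A basic inventory cross-check like the one described can be automated before findings ever reach a human. A hedged sketch; the field names and scanner schema here are invented for illustration, not a real tool's format:

```python
def is_applicable(finding_arch: str, host_arch: str) -> bool:
    """A finding that targets a specific CPU architecture only applies if the
    host actually runs that architecture ('any' matches everything)."""
    return finding_arch == "any" or finding_arch == host_arch

# illustrative inventory and findings, not real scanner output
inventory = {"web01": "x86_64", "mainframe01": "s390x"}
findings = [
    {"host": "web01", "cve": "CVE-2024-1111", "arch": "s390x"},  # IBM Z only
    {"host": "web01", "cve": "CVE-2024-2222", "arch": "any"},
]
real = [f for f in findings if is_applicable(f["arch"], inventory[f["host"]])]
# the s390x-only finding on an x86_64 host is filtered out as a false positive
```

Even this crude filter would have caught the IBM Z kernel bug flagged against an AMD/Intel fleet.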
flashx3005@reddit (OP)
Yea this seems more the case.
ronmanfl@reddit
Hundred percent.
Pristine-Desk-5002@reddit
The issue is, what if your security team can't, but someone else can.
tripodal@reddit
They can spend the time learning how before pressing the forward-email button.
whopper2k@reddit
If you already know why should they spin their wheels becoming an SME in something they never touch? That's just wasting time while the business is potentially vulnerable.
I understand if you're talking about basic patches/changes to common OS components, or fundamental concepts like password security. There's a frankly shocking amount of security engineers who have minimal technical experience, and that is as frustrating for other security engineers as it is for those who have to deal with them.
But I wasn't hired to learn how to manage ESXi, the infra team was. Multiply that by every other piece of software that requires patching and I'd never get any of my other assignments done if I was expected to learn not only how the software works, but how it is used in the environment.
So yeah, I'm gonna ask the app owner to at least look at the vulnerability so we can collectively figure out what to do about it.
tripodal@reddit
The problem is the average security engineer is trained to use tools, not to enhance security. That was the biggest "aha" of the last 10 years of my career.
I'd settle for the average engineer knowing whether we have ESX, ESXi, or Proxmox deployed before forwarding a VirtualBox vuln.
I'd also settle for telling me which IP/URL/path/file XYZ was detected on.
Make sure that the external insecure service isn't already in the risk registry.
Make sure that the ports claimed in the reports are actually externally open.
Don't ask for IP any/any rules for your security scanner if you're just going to use it to generate endless garbage.
There is a fuck ton of meaningful work you can complete very simply before you engage the SME.
Try logging on to the appliance with read-only or default creds. See if the version claimed shows in the help menu.
Try setting a password that should fail the policy. Etc.
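One of those checks, whether a port claimed in the report actually accepts connections, takes a few lines to verify. A minimal sketch; the host and port would come from the report, and a firewall that silently drops packets will show as a timeout rather than a refusal:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Check whether a TCP port actually accepts connections before
    filing (or accepting) an 'externally open port' finding."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run from outside the perimeter, this separates "the scanner's historical database says so" from "it is reachable right now".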
tripodal@reddit
If you can write an exception for Bob to run a Linksys router in his office, you can write one for the self-signed cert on the PDU inside the jump host network.
Instead of re-flagging it every time the security tools get swapped.
whopper2k@reddit
Ah yeah, see I'd agree that's basic due diligence and should be done before reaching out. In general, I agree with your sentiment.
I will point out that not every tool reports all the info one would need to do basic checks, and sometimes it requires a level of access the security team simply does not (and should not) have. Hell, earlier today I had to ask our FIM vendor why the hell it can tell me who changed a folder's permissions, but not what permission changed. I've also had to ask devs to figure out how to patch their containers, because building the container requires access to some defined secrets like API keys and such.
It's give and take, as with most jobs. We all have work we'd rather be doing than patching, that's for sure
tripodal@reddit
I hear you, but I fundamentally disagree about the level of access security team should have.
A compliance team should not have access, a security team can be trusted as admins.
I realize having security focused admins is a wishlist; but the world would be a better place if security personal were engaged in applying remediations.
Chrome extension lockdown gpo deployed in test or to a beta group; hand it to the desktop team to send org wide.
Esxi vuln, let them apply patch or mitigation to exp or test env, document for sysadmins or oncall.
This is why I'll never be in management: steering a ship in this direction feels impossible.
whopper2k@reddit
if only more orgs would take on "security champion" programs and bake security into every single team rather than having large sec teams. Cuz as it stands security teams are just as likely to be underfunded and overworked as any other team; we just don't have the manpower to handle patching at the scale you're talking about.
Definitely get your frustration though, it's annoying being a blocker to someone actually getting their work done. And way too many people in infosec forget their job is to serve the business objectives first
Pristine-Desk-5002@reddit
Sometimes it's not about knowledge of how to exploit but about nation state resources
goingslowfast@reddit
You hit the nail on the head. Security teams should be providing intelligence.
If they're just aggregating RSS feeds from security blogs or hitting start on a vulnerability scanner and sending you the results, I'll say somewhat satirically just replace them with automation.
Noobmode@reddit
The C in CVE doesn't stand for ChatGPT; they already exist, that's why there is an issued CVE.
Cormacolinde@reddit
Something a lot of security people these days seem to not know, or to ignore, is that part of evaluating a CVE is to look at the CVSS and adjust it to your environment, risk, and impact. Too many people just take the CVSS and run with it these days.
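One way to picture that adjustment. This is deliberately not the official CVSS environmental formula; it is just an illustration of re-ranking by context, and the multipliers are made up:

```python
def adjusted_priority(base_score: float, internet_facing: bool,
                      asset_criticality: float) -> float:
    """Crude environmental adjustment: dampen internal-only findings,
    boost exposed or business-critical assets. Illustrative only."""
    exposure = 1.0 if internet_facing else 0.5
    return round(min(10.0, base_score * exposure * asset_criticality), 1)

# a CVSS 9 on an isolated low-value box can rank below a 6 on an exposed one
internal_nine = adjusted_priority(9.0, internet_facing=False, asset_criticality=0.5)
exposed_six = adjusted_priority(6.0, internet_facing=True, asset_criticality=1.5)
```

The point is not these particular weights but that the ranking handed to admins should already reflect exposure and asset value, not just the raw base score.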
Noobmode@reddit
That's a problem with process, not the CVE itself, though. Most people don't have the time to sit there and go through manual calculations. That's why a number of tools use custom risk scores with tagging to multiply impact, bringing your highest-priority systems and vulns to the top of reports automatically.
tripodal@reddit
Just because someone attributes a valid CVE doesn't mean it's real.
Dozens of hours explaining that we moved out of that datacenter 9000 years ago and to stop scanning those IPs.
Noobmode@reddit
How are they scanning data centers you don't own? That makes zero sense to me. If you left the data center you wouldn't have a network connection; that sounds like tech debt and zombie networks that need to be addressed. That's still a finding.
mirrax@reddit
They are scanning Public IPs grabbing versions off the web servers. Not the saner method of running an agent on all internal servers and just externally scanning appliances.
tripodal@reddit
Because some scanners grab all of your DNS entries, then scan all IPs associated with them, then grab all SANs on those certs, then grab all those IPs.
Then they correlate a historical database of all ips and dns that were ever associated with you.
Security scorecard gotta look as scary as possible.
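As a rough sketch of that pivot, here's how a scanner might lift extra hostnames out of a certificate's SAN list and feed them back into the scan queue. The dict shape mirrors what Python's `ssl.getpeercert()` returns; the hostnames are placeholders:

```python
# Rough sketch of the scorecard-style pivot: pull DNS names out of a
# cert's subjectAltName, then resolve and scan each one in turn.
# The cert dict shape mirrors ssl.getpeercert(); hostnames are placeholders.

def sans_from_cert(cert: dict) -> list[str]:
    """Return the DNS entries from a cert's subjectAltName field."""
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

example_cert = {
    "subjectAltName": (("DNS", "www.example.com"), ("DNS", "vpn.example.com")),
}
names = sans_from_cert(example_cert)
# A scanner would then resolve each name (e.g. socket.gethostbyname) and
# add those IPs to the queue, including ones you abandoned years ago.
```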
Noobmode@reddit
Security Scorecard is a scam.
tripodal@reddit
Yes, well, unfortunately it's not up to us to decide that. It's up to the paying customers that get all the warm fuzzies.
thortgot@reddit
Not all CVEs ratings are equivalent. A 9 is not equivalent to another 9.
Having someone who understands the actual risk profile, what mitigations (if any) can be used, and similar considerations, and who can assign a patch/mitigation schedule, is the correct thing to do.
Leif_Henderson@reddit
You're right that CVSS is not a perfect, infallible metric. But that does not mean the correct thing to do is refuse to write down SLAs for fixing vulnerabilities.
I guess if you don't have cyber insurance and aren't beholden to any security standards (PCI, CMMC, etc) then you can technically choose to make your policy "deadlines for all security tasks are fully subjective" but that isn't a mature solution.
moofishies@reddit
That is ideal, but ultimately doesn't matter if your policy requires all CVEs with a score of 9 to be remediated on the same timeline.
Mr-RS182@reddit
Yes, or they fix it themselves, which usually ends up breaking something else that just gets dumped on my desk to fix.
BigLeSigh@reddit
We don't let them have permissions to fix anything for this reason
redditduhlikeyeah@reddit
Some of the opinions in this thread… just silly.
BigLeSigh@reddit
Same in any thread, which particular opinions rile you?
jmizrahi@reddit
lmao. yes
giantrobothead@reddit
Oh yes. At my previous job our security team would round up the latest CVEs, spin up critical tasks, and drop them in my team's lap with zero regard for context, yelling FIX NOW. We had a couple members of our infosec team who were repeat offenders no matter how much our management pushed back against the behavior.
deadeye316@reddit
Just ask them questions like I do and watch them run.
worthlessgarby@reddit
Multiple jobs yes. One was government and when I asked about this I was told that security role was strictly "informational".
Phate1989@reddit
I wouldn't trust the security team with anything more than a dashboard.
The last time the security team had any rights they disabled vulnerable ssl ciphers on ALL servers and took down 60k users, and a million or so customers.
Now they get a dashboard and can enter tickets for engineers to make changes.
progenyofeniac@reddit
I had them come to me with a vuln identified by some scan: cached credentials. They wanted the value set to 0. No cached creds at all, ever.
Our workforce is entirely remote and using an SSL-VPN that they only sign into after logging into Windows, on domain-joined machines.
We had multiple meetings where I explained why they couldn't do this, why we'd first need a different VPN solution, etc etc etc.
Peak was when one of the security guys, after multiple discussions, called on a Monday morning for help getting logged in because he'd taken it on himself to change this setting.
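The sanity check that got skipped here can be captured in a tiny, purely illustrative sketch. The function name and arguments are invented for this example; the real knob is the Windows "Interactive logon: Number of previous logons to cache" policy (`CachedLogonsCount`). Zero cached credentials only works if a domain controller is reachable *before* Windows logon, e.g. via an always-on or pre-logon VPN:

```python
# Purely illustrative: setting cached logons to 0 is only safe when a
# domain controller is reachable at Windows logon time. Function and
# argument names are invented for this sketch.

def cached_logons_floor(remote_users: bool, vpn_connects_before_logon: bool) -> int:
    """Minimum safe cached-logon count for the environment described."""
    if remote_users and not vpn_connects_before_logon:
        # No DC reachable at logon: zero cached creds means nobody can
        # sign in to their laptop at all.
        return 1
    return 0

# OP's shop: fully remote, SSL-VPN only started *after* Windows logon.
assert cached_logons_floor(remote_users=True, vpn_connects_before_logon=False) >= 1
```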
LastTechStanding@reddit
I would make that change, with a note out to all users stating that this was brought to them by the security team. Sit back and watch the fire
progenyofeniac@reddit
Security didn't have the marbles to see how it would break, nor did they have the marbles to fix it. Yours truly would've had an even crappier morning by doing that.
LastTechStanding@reddit
Then I feel they need to live in the trenches as a prerequisite to their jobs
flashx3005@reddit (OP)
Oh wow, hilarious. The lack of overall general IT knowledge annoys me. They don't need to be experts or anything, just know how AD/DNS/GPOs etc. work to a certain extent.
cbass377@reddit
No, that is too much work for them. They just set the tool to email a spreadsheet with every CVE on every host to the ticketing system. Then when we don't action the tickets, we get "invited" to a standing weekly meeting to enhance our focus.
flashx3005@reddit (OP)
I wonder how this post would do under the Cybersecurity forum lol
dahimi@reddit
All the time. Not just vulnerabilities either. Frequently being handed updated policies with new items we have to comply with.
Basically, isn't this what security teams generally do?
Engaged in what way?
"Nessus has detected such and such false positive for the billionth time, please reply back with distro reference material indicating that these same vulnerabilities have back ported patches. No we will not group these false positives together and no we won't work with you to ensure fewer false positives are reported in the future."
"Version 2025-05-22 of security policy xyz has been updated and supersedes version 2025-05-21 of the same policy. We've added a dozen new items your department needs to comply with ASAP."
Complain to boss about needing additional workers to comply with the security team's directives. Get told there's no funding for that. Drink more.
flashx3005@reddit (OP)
Lmao yes yes and more yes! Cheers!
Skyobliwind@reddit
Well, it all depends on what the defined tasks of the security team are. There are big differences. Security Engineers and Security Admins normally also fix the problems themselves if possible, while Security Officers and people on a SOC team don't necessarily have an IT background and lack the knowledge to actually fix many of the problems they find, OR it's just not in their tasks according to their contract.
flashx3005@reddit (OP)
These would be a blend of Security Admins/Engineers.
Successful_Horse31@reddit
Yes. I thought I was the only one. I have three vulnerability scans I am trying to go over at the moment.
flashx3005@reddit (OP)
I thought the same also lol. We're all in it together!
lectos1977@reddit
Yes, because I am the security team and the sysadmin at the same time. Stupid me, wanting things fixed ASAP
deltashmelta@reddit
$100k clipboard
SG-3379@reddit
Wouldn't it be because of the level of access? Maybe they don't have the privileges needed to make the changes themselves
euclidsdream@reddit
This was my first thought as well. Most security teams I have worked with are read/audit only with no admin access.
thereisonlyoneme@reddit
Security guy here. I don't work vulnerability management, but I am on a team just adjacent. We have a few automated scanners and then trigger other automation to create tickets. But there are far too many tickets to blindly send to other teams, so we have other processes to prioritize them. Although if we learn of a high priority vulnerability then we just immediately ping the team who owns the system with the problem. Like for example if an edge firewall had a vulnerability being actively exploited, then we would make sure the network team patched it ASAP.
My company prioritizes security, so we are a big driver of work (not just vulnerability management), but we're not the only ones giving out work. I try to be mindful of that. I don't push people. If a team responds with "we can't get that done right away" then usually I am just like OK, tell me when you think you might and I'll check in again.
I am really surprised to see some people saying they don't want to be involved in vulnerability management at all or "security is just pushing work on us." Our teams have ownership of their systems. They prefer to be in the loop on any changes. To me it would be discourteous to change their stuff without even telling them. For one thing, if I break something, they are the ones who get the late-night call. For another, I might change something they don't want to. Like if I said "oh software XYZ has a vulnerability so let me update to the patched version" but the patched version changes something they needed. They might rather disable the vulnerable feature but keep the same version.
Basically it's best to get everyone together and talk through these things.
PhillAholic@reddit
If the vendor has a patch or workaround published, then you're good. What I've seen is a CVE sent with no effort by the security team to find out if there is a patch or workaround. Just fix it now. But I also get tickets about internet browser temp files being flagged by machine learning as highly probable malicious, with absolutely zero rationale or direction on what they'd like me to do about it. Was the user using their computer? Yes. Great. Now what? And they want me to manually do a scan on a system... which is not at all how that security software even works. Zero confidence that they know what they are looking at.
thereisonlyoneme@reddit
Frankly the CVE should be enough for you to start working. I don't know why you would expect the security team to come up with a solution for you.
PhillAholic@reddit
They shouldn't be requesting me to fix something when the vendor hasn't determined a fix yet. They don't even look, they just forward a list and want it done yesterday. It should be a working relationship between two groups of competent people. Not take your kid to work day where you have to explain basic principles of your job at every step of the way while you get no work done. It feels like the latter far too often.
thereisonlyoneme@reddit
Again, it's up to you to find the solution. You're there to be the expert in your environment. The vendor can't tell you what's right for you. Even if there is a patch, you may not want to install it. So if installing a patch isn't an option then it's time to start looking at other ways to mitigate the issue.
PhillAholic@reddit
How am I the expert over the vendor who literally coded the software? I can take mitigating steps, but that's not going to make the line go away to the security team, so that's not what we're talking about.
thereisonlyoneme@reddit
You're the one who installed it. You maintain it. You know how it is being used. If the vulnerability is in some feature you don't use, for example, then the vendor wouldn't know that.
PhillAholic@reddit
That would be filed under risk mitigation, I've already covered it. We don't need to keep this going.
tripodal@reddit
Received a finding the other day because we have an exposed VPN. >.>
sydpermres@reddit
Hilarious!!Ā
greensparten@reddit
Security guy here; I do not just dump things on my system guys. I used to be a sysadmin, and I have dealt with things being slammed in my lap; I promised myself NOT to do that when I became SecGuy.
A decade later: I work on building a healthy relationship with the sysadmin team, and we engage each other in a collaborative way. Example: when I am working on a new policy, instead of slamming it down and saying this is how we do things, I get them in a group and ask them to take a look at the policy and give me feedback. I also ask if it's realistically achievable with what we have, and how long it would take to implement. Because of this approach, they also keep me engaged, and over time I now know their capabilities, so when I write something, it's based on what we can actually accomplish.
The other thing I did was push for an automated patching tool called Automox. Although there are 4 of them and 1 of me, they still have a lot of work to do. We use Automox to automate much of the patching, and things like software delivery and even "imaging" of new computers.
We are a smaller shop, so Automox is used to catch what can be done automatically, and then they go in and do the rest by hand, for example, turning off SMB or what not by group policy, etc.
I use Rapid7 IVM for vulnerability scanning, as it has a great dashboard, and their risk-based system allows me to assign what's critical, so my guys don't waste time.
Ima post this and edit it later.
flashx3005@reddit (OP)
People like yourself have been through the grind and know how it is; I respect and appreciate that. You also have a good understanding of how things work/connect. The cybersecurity folks lack basic infrastructure knowledge, at least imo.
greensparten@reddit
I absolutely agree with your last statement. Right now schools are pumping out these kids with cyber degrees, but they lack understanding of systems and networks, the thing they are protecting. I was a systems admin, then a network admin, Network Engineer, and then Security Engineer, and now head of cyber. I strongly believe in my path.
flashx3005@reddit (OP)
Absolutely agree. Solid path! Great career growth and how it should be.
greensparten@reddit
I would say try and build a relationship with those guys, make them see you are human, and that it's not an Us vs. Them. You don't need anybody's permission to befriend them.
flashx3005@reddit (OP)
Yeah, the whole remote part makes it harder. Not like back in the day where you'd all go out for drinks and have fun. I do feel that when we have had productive convos, they'll turn around in a couple days and be the same way again lol.
Doesn't hurt to try again. Thanks for your perspectives. Really insightful.
LastTechStanding@reddit
You are a diamond among regular rocks.
digital_janitor@reddit
Yes, the new IT dynamic is pushing all the work onto someone else and making a tedious process that takes more time to complete than the actual work, in order to demonstrate the meeting or missing of KPIs.
clybstr02@reddit
Yep. Granted, as your workload increases to maintain compliance you should be talking with your leadership to increase staff / outsource as needed
I see security like legal, bring to light any issues.
mycall@reddit
lol, I just did that today.
alficles@reddit
Excellent! When people just sweep the security stuff under the rug by quietly burning themselves out handling it, management doesn't "see" how the choice to ask for security hygiene is actually costing time and money. They will just conclude that you have poor time management skills or take too long at your tasks. It's super important to make sure they see everything being asked of you and have the opportunity to redirect you or the security team as necessary.
Security hygiene is only going to become more and more important. The threat landscape doesn't look likely to get less sophisticated or less dangerous any time soon, unfortunately.
mycall@reddit
The corollary to security hygiene is to minimize the need for security in the first place (i.e. buy less stuff, less networking, fewer piecemeal solutions)
alficles@reddit
Yup, and automate the hygiene. Servers should patch themselves if at all possible, for example.
Dsraa@reddit
Totally yes. We've been cleaning them up and strengthening our overall risk posture by quite a lot. Unfortunately they act like it's never enough. Now our risk is so low that when patch Tuesday comes, all they say every month is that we have thousands of vulnerable machines.
Literally every month.
And I have to explain to them, what day it is and that patches just came out and we have a patch schedule.
A month passes, and same thing happens where they act like the world is ending and don't understand what's going on. It's quite hilarious.
LastTechStanding@reddit
That's how the security team rolls… They find the vulnerabilities; god forbid they go fix them too.
wrootlt@reddit
Yes. But they don't have any deployment capabilities or permissions. They just scan and do reports. My team (endpoint management) does patching, server teams do patching, etc.
PhillAholic@reddit
They need to be experienced enough to filter out false positives and overall understand how the system works. The absence of understanding of risk acceptance and mitigation is another problem.
wrootlt@reddit
Yeah, ours do risk analysis and sometimes flag something as more critical than the scanner shows, or downgrade something as not applicable to our environment. But I don't always agree.
PhillAholic@reddit
I'd gladly have the discussion. That can't be done if they don't know what anything means.
ElectroSpore@reddit
This is most often covered by our normal consistent patching policy.. The security escalated ones are normally special ACTUAL critical things we need to communicate mitigation for and act quickly on.
IWantsToBelieve@reddit
Never forget the A in the CIA triad. I hope that team is reviewing against your business context and setting appropriate SLAs. I find the majority of internal vulnerabilities aren't even worth their time pursuing, focus on the big ones that are public facing, exploitable or on critical assets. The endpoints should be sufficiently protected by defence in depth including (priv management, XDR, app control) meaning their blast radius is minimal and you can avoid chasing low value vulnerabilities.
TournamentCarrot0@reddit
Update your shit, and do it regularly. I get that it's a pain, but they're not dumping vulns on you for no reason. They're doing it because you didn't update your shit, regularly.
flashx3005@reddit (OP)
It's the ones that have business apps that really are the issue. They'll find something and say "oh I see a Tomcat vulnerability, let's fix it" but don't know how that will interact with the app, downtime, rollback, etc. The network stuff, firewalls, switches, etc., I do all the time to keep those things in check.
chillmanstr8@reddit
It's pretty awful how they have all these automated scans to report on the status of vulnerabilities across an enterprise, yet when you get to the remediation section it is extremely vague, with a couple links to different sites that explain it further, and a host of supposedly relevant KB updates when you only need a single one. A single one that will ultimately be patched by automated Ansible runbooks, yet this is not noted anywhere in the finding.
flashx3005@reddit (OP)
Exactly!
russr@reddit
They do, sometimes they're legit, sometimes their security software sucks. Donkeys
Example, when Chrome installs or updates, the version number for the exact same update can be different.
So when an update is pending a browser restart, the registry may list a version number that starts with something weird like 79, whereas the current actual version number starts with 135, I believe. So their security software will freak out thinking they have a version of Chrome from like 10 years ago installed on their machine, and their numbers jump into the millions for a problem that doesn't exist.
Or similarly, it will detect something as being old installed because there's a single stray file that wasn't deleted when the program updated, which literally has nothing to do with the vulnerability, but that's how their crappy software decides to detect it.
So I will push all of those things right back at them.
weetek@reddit
This is so dependent on team size and function. I think both sides like to point fingers but it's an unrealistic expectation of anyone to have all the knowledge.
You can think of vulnerability scanners and security teams like the people who let car owners know that they have a recall, in this case the NHTSA. That team would not be responsible for also fixing the recall, right? Not every car is going to be affected by the recall, but they can group cars together by year (vulnerability/CVE); it's up to the car dealership (and owner) to figure out whether it needs to be repaired.
An owner is responsible for a single car, or maybe a few. Sometimes in security we are dealing with hundreds of vulnerabilities while also managing other projects, so it's very unreasonable to expect us to validate every vulnerability, especially if we don't know how things are set up... maybe a product is using an outdated Java library; that's what I can see, but I don't know how it was configured or used.
Another side is leadership just wants to see numbers go down so security teams have to cast a wide net. At the end of the day everyone's just doing their jobs and if you want the security team to do yours then you will just get replaced by them.
flashx3005@reddit (OP)
Right, I understand nobody can or should know it all. I also don't know how every app in the environment works or how all the apps relate, but I do my best to piece things together. In a small shop, a collaborative environment works better; that way we all can see, and in the case I'm out and they have a zero-day to fix and patch, at least they'll have an idea of where to start and which app owner to contact.
weetek@reddit
That's a tough spot to be in, but it's not the security team's job to dictate ownership. In our environment we have 1000+ people; through the process of elimination I can usually get to finding out what's affected, but it's so time consuming. I did work on the IT team before though, so I helped establish some ownership criteria, which helped a lot.
When you look at the security team from below, or from an adjacent team, they are a pain in the ass. But it's different if you're in the hot seat, knowing that if you don't transfer the risk to someone else, your job is on the line. If we get compromised, leadership is going to ask "did you know about this" and "why didn't you do anything about this". Now imagine this for every domain of cyber security. You have a small number of problems that you are responsible for, but the security team is responsible for everyone's problems.
Now that's a complete generalization, and maybe your security team is acting out of incompetence or malice, but in my experience everyone is stretched thin trying to stay afloat with work.
flashx3005@reddit (OP)
That's good perspective. Thanks for this. Appreciate it.
RequirementBusiness8@reddit
Not only dump, but sometimes they come up with the stupidest solutions to a problem and dump it. Sometimes you have to push back.
Even better is when they push for something to happen, but another team within infosec pushes back against the only way forward with what they are asking for.
Orestes85@reddit
What sort of patch management are you using? SCCM/MECM is great and can easily get all your windows desktop and server endpoints patched and apply bios/driver updates from Dell, HP, and Lenovo... but there's a learning curve, and it's a big system with a lot more than just application and update deployments.
Most windows environments will probably have intune in their m365 licensing, and that can at least keep your windows products patched with update rings.
Good RMM products should also be able to deploy updates and applications. NinjaRMM also includes patching for a lot of common business applications.
I'd ask for your security team to help prioritize remediations and then set up a time frame for different priority vulnerabilities to get patched.
This keeps you accountable, gives them a timeline for the work to be completed, and will help you out with getting your own work done instead of working on vulnerabilities that security has agreed do not need to be completed until 2 weeks from now
BoringLime@reddit
We do as well, but we have a modified scoring system and do not blindly go by the CVSS rating. An example is a critical that is only exploitable from the internal network, and only if the user accessing the printer management page has been chewing gum for exactly 30 minutes prior. This would be downgraded to a high, possibly a medium, and we'd have longer to fix it. It gains or loses points depending on whether the exploit has been observed in active use. Basically, not all criticals are critical to everyone's unique environment. But once it gets to the medium and low range, it probably won't be addressed; we are only actively interested in criticals and highs. If we tried to resolve everything we wouldn't have time to do our actual sysadmin jobs.
Sobeman@reddit
All the people who went to college during COVID for a "security degree" only know how to read alerts and forward them to other people to fix
Orestes85@reddit
I feel personally attacked.
Some of us are sysadmins, too, ya know.
Toribor@reddit
Our Security Team is constantly dumping extra work on me. Of course I'm also the Security Team so it could be worse.
bbqwatermelon@reddit
Your security team sounds like an asshole
AdolfKoopaTroopa@reddit
The whole IT department is a pain in my ass.
I am the entire department.
Stonewalled9999@reddit
Director of you own coffee cup then?
renrioku@reddit
Same boat here, keep dumping more work on myself, keeping up with security patches across hundreds of servers is a busy job.
MyClevrUsername@reddit
I hate our security guy with a passion! Heās also me.
Witte-666@reddit
Same here, and he is burying me in work. What an asshole.
SoonerMedic72@reddit
Our Security team is also constantly creating massive projects requiring research and careful implementation and adding them to my list. Our Security team is also me.
MBILC@reddit
I feel you! Our lists keep growing and growing.
PurpleCableNetworker@reddit
This hits the feels.
Ams197624@reddit
I'm in the same boat :) secops and sysadmin in one. I try to dump it on my junior coworker but most of the time I end up doing it myself anyway.
yensid7@reddit
Exactly the scenario I'm in.
Fast-Mathematician-1@reddit
Yep.
AfterCockroach7804@reddit
Yep⦠this is the way.
realityhurtme@reddit
This
Noobmode@reddit
dave_pet@reddit
Relevant
Kwuahh@reddit
Our own worst enemies
trobsmonkey@reddit
This is my entire job. Nothing but vulnerabilities. I'm sorry yall don't have dedicated teams.
hashkent@reddit
Yep. With a deadline of 2 weeks for high/critical, regardless of whether it actually affects us.
Bonus points for the 2-week change request lead time on some systems. So we never meet the SLA.
It's improving now; we got security looking at Wiz and only counting publicly exposed services in the SLA. Devs are copping it too, with CVEs in dependency packages.
BigChubs1@reddit
I wish they would let me start installing the stuff that I don't manage. It would make my life 10 x easier.
noideabutitwillbeok@reddit
We receive a report on vuls and they are tagged with different criticality levels. We patch the critical ones first, then the rest later. As our sec folks are in a different org unit I'd rather them not try to fix something without discussing with me first.
lungbong@reddit
Our security team collate the vulnerabilities, sit on them for a month then tell us they need fixing yesterday.
terrydqm@reddit
Oh they actually inform you? Mine just complains that things aren't remediated later after not sharing any information with me or the team.
Gh0styD0g@reddit
Security team?
MaximumGrip@reddit
just do the needful
fate3@reddit
I'll never forget when the security team at my old job sent us a request to delete the local BUILTIN\SYSTEM account on servers because it had high privileges
flashx3005@reddit (OP)
🤦
Ok_Information3286@reddit
Yes, this kind of handoff happens a lot, especially in smaller teams. Security often identifies issues and pushes fixes without offering much support, which can feel like dumping. It's tough when infra is expected to fix everything solo, especially with shifting responsibilities. Ideally, security should collaborate: prioritize risks, offer context, and work with you on solutions. If that's not happening, it helps to push for clearer workflows, ownership boundaries, and escalation paths when workload becomes unrealistic.
flashx3005@reddit (OP)
Yes, this is what I was hoping would happen, or should have happened. However that hope seems to be fading away lol
CountGeoffrey@reddit
What do you expect them to do? They don't have the expertise to fix, they are just driving a discovery tool. The fault is in your org, not the security team.
flashx3005@reddit (OP)
You're not wrong. The top level has no idea about Infrastructure IT and they hire ppl just like them.
Mizerka@reddit
All the time, and they question why I have x and y open when they asked to have full network access to everything
ghjm@reddit
Our security team has control over our CI pipelines. Whenever they hear about a new vulnerability, they immediately block our ability to do anything at all until we fix it. It doesn't matter if our biggest customer has a sev1 outage that we need to do a deployment to test or fix. Security found an old copy of bootstrap.js in some test code that doesn't even affect production? Everything stops till it's fixed.
Nailtrail@reddit
I am my sysadmin team and I am my security team as well. We have a great working relationship.
flashx3005@reddit (OP)
Ha, perfect marriage
_bahnjee_@reddit
We just hired our first all-security hire. There's one less thing (ok, one hundred fewer things) I have to chase down now. He keeps an eye on vulnerabilities... says, "Here's the patch that's needed"... I deploy it.
I couldn't be happier. (well, ok, they could pay me more...)
tacticalAlmonds@reddit
Does anyone else's security team lack critical thinking and is just a crew that exports alerts into tickets for someone else without reviewing said alert?
PhillAholic@reddit
I was asked to open up ports on my firewall because their security scanning software couldn't get into it.
many_dongs@reddit
This is normal and expected. You'd rather have your security team find issues than the bad guys in the event of someone accidentally exposing ports/dropping ACLs/other unexpected issues. It happens.
PhillAholic@reddit
Reviewing configs sure, but opening up a production firewall where it's vulnerable? I've never once had anyone ask for that before, including a handful of external security consultants doing scans.
many_dongs@reddit
External consultants are typically satisfied with attacker level access (no ports opened) and most importantly they will go with whatever the client wants
Full time security teams are more likely to be invested in a long term / lower level look. Not all, but it's not surprising. That being said, this is just white box vs black box testing, a very regular topic in pentesting
PhillAholic@reddit
If I believed for a second the security team knew what they were looking at and not just trying to check off a box I'd think about believing that. Instead getting "turn off your security" asks with zero details make me think otherwise.
many_dongs@reddit
It's a reasonable way to think as a person, but in a business context, unless you're the top of the technical management, it's not your call
PhillAholic@reddit
That's really not how it works in practice. They can ask, I can say no, they can go back to their management, I can go back to mine, and if they want to move forward I can insist we go through change management according to SOP, which includes a risk assessment where they actually have to explain it. Before you know it, they drop it because that's too much effort to check a box they can't rationalize.
many_dongs@reddit
Seems reasonable. The explanation I gave would pass most risk assessments. Security teams do not typically have to demonstrate "they know what they're looking at" to receive authorization.
PhillAholic@reddit
I'll see how it goes. I asked for documentation on how they were going to accomplish what they were saying they are going to do for the assessment. So either I'm right and they have no idea and they won't come up with it, or I'll learn something new and we'll put it on the report just like the SOP says. Win-win.
tacticalAlmonds@reddit
Ironic. We had the same thing for a "simulation".
mirrax@reddit
Makes way less sense for a simulation. What value is a simulation that doesn't take into account in place mitigations...
Selectively opening up a port to an internally controlled security tool isn't an unreasonable request.
tacticalAlmonds@reddit
This was my point as well. Oh well.
27CF@reddit
at least you get tickets
moffetts9001@reddit
Yep. They blindly report R7 findings to us and fight with us when we tell them the findings are wrong.
Sasataf12@reddit
I've worked with security teams that have absolutely no technical expertise, and ones that have a lot.
I can tell you, the latter is a much better experience.
natflingdull@reddit
Thats been my experience as well, with the former being way more common unfortunately.
many_dongs@reddit
It's because of the management hiring the teams. Security teams full of nontechnical people had to be hired by someone.
PhillAholic@reddit
Because they're cheap. No one seems to have cracked the code on how much wasting everyone's time costs.
many_dongs@reddit
Cracking the code would mean firing incompetent managers but that typically also means firing themselves so they donāt
PhillAholic@reddit
The more levels you have the more difficult it is. Add in some cultures inability to question or correct their superiors and it can become a shit show. Even with completely well meaning and understanding people, the back and forth can become tiring.
alficles@reddit
I use the phrase "security by spreadsheet" _way_ too frequently, and I'm on the security side of the fence. :D
Fabulous-Farmer7474@reddit
The security team where I worked was non-technical. They interacted with an external vendor for action recommendations, which they would pass our way with the expectation that we would treat it urgently; meanwhile their annual report documents how many incidents they "resolved".
At one point in the past they did have tech-savvy people, but the incoming CIO (an MBA) said they were "too expensive", so he laid most of them off and replaced them with paper-certified people to save money. Yet the savings never materialized because he added a new management layer.
dinosaurwithakatana@reddit
If anything this is going to get worse. Lots of people are scanning open source codebases with AI tooling to find vulnerabilities to collect whitehat bounties - which is causing a general increase in CVEs.
PhillAholic@reddit
If they are legit that's not a problem. 99% of the shit I have to deal with isn't legit or is not actually a risk. If something needs to be patched or workaround needs to be implemented I want to know about it asap.
MSXzigerzh0@reddit
And Vibe Coding is going to make it worse.
mellamosatan@reddit
Yes, and we are graded heavily on that by the big dogs.
"Hey man this is red" is my joke about what their job is.
atw527@reddit
You have a security team?
deweys@reddit
Genuine question: How would you like them to help you? Should they be installing patches, updating VMware, etc?
whiskeytab@reddit
they could start by not sending out a monthly email about vulnerabilities that Microsoft have patched when our patching is already automated lol
digitaltransmutation@reddit
At the very least they should read the vuln's text and assess the asset to determine if the finding is valid. Would reduce our guys' ticket creation by around half.
Basically the problem with this transaction is that they generate a lot of timesucks that move the needle on nothing and I have the entire rest of my job that I need to do.
PhillAholic@reddit
Frankly these are the jobs about to be replaced by AI. The output can't get any worse, and blaming bad AI is easier than blaming a human.
PhillAholic@reddit
Break the team up into tiers just like operations. Don't let Tier 1 talk to anyone. Find the issues, have someone who knows wtf they are reading review it, and only forward issues to operations that are actual issues.
Subject_Estimate_309@reddit
This is the part I can't get past. These same operations teams would blow a gasket if we walked in there and started applying patches or messing with their environment. (As they should.)
If there's a true positive vuln, what are they expecting me to do other than validate it's real and open a ticket to patch?
themastermatt@reddit
Maybe you're better at this than most SecOps teams. Validating a true positive is something most are currently NOT doing. They run exports from whatever tool they were sold and start sending emails demanding fixes without any context or attempt to understand the system. Infra would LOVE to work with Sec that can analyze further than whatever Tenable says.
BeanBagKing@reddit
From the security side I'll also add that in a lot of cases it's not in either team's best interest to validate 90% of stuff. Vuln scanner says we need to turn off SMBv2? Are we using it? No? Patch test -> patch prod. It is worth literally nobody's time to either a) validate the output of the tool or b) justify why it needs to be turned off or what the risk level is.
Maybe 90% is hyperbole, but I feel like there's a lot of monthly patch vulnerabilities that follow this logic. I get it, it's a low priority finding and it's going to take time to patch, and because of that infra wants to know why this thing is actually a problem. Someone could validate that it's actually present and do a risk analysis and bring it to change control (ok, that one should be there anyway) and get an SLA assigned but look, I gotta be honest, it's just because it'll make the damn number in the dashboard go down by a bajillion points because it's on every system and then we can both brag to the CFO about teamwork and how happy the auditors were this year. So can we push the change to at least test and see if anything breaks?
But yeah, after that, for all the little stragglers and one-offs, it should be more than a CSV tossed over the wall.
Subject_Estimate_309@reddit
This is as much on operations as it is on security. I'm happy to validate the output of my tool, but that's gonna rely on OPS providing my team with the right visibility to do that. If the CMDB isn't halfway accurate and I can't log in to see what you're actually running, then the OPS team is getting a big CSV to sort through on their own.
themastermatt@reddit
That's fair. In my last org, SecOps had full Domain Admin and Global Admin. For reasons. They still sent every CVE dump in the blind with "let us know when they are fixed, k thanks bye!" Further, they would neither listen to the explanation nor go look for themselves. 100% tool readers, which is becoming pretty common.
Subject_Estimate_309@reddit
That's a nightmare. In multiple ways. I'm so sorry.
YSFKJDGS@reddit
Most likely they are not running a true risk-based security program. Yeah, your firewall shows a CVE of 9, or your server shows an RCE or something.
HOWEVER, if the interfaces exposed to these vulns are behind strict FW rules, not exposed to the internet, etc., then those vulns get downgraded from a 9 to like a 7 or something, with the SLA adjusted because of compensating controls.
Applying mitigating controls to adjust internal CVE numbers is how you start to actually show a mature program. 99% of the complaints here exist because teams do NOT have a mature program, and frankly both sides of the conversation (including rolling up to management) are to blame.
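The downgrade logic described above can be sketched in a few lines. This is a rough illustration only: the field names and point deductions are invented for the example, not taken from CVSS or any real scanner.

```python
# Hypothetical risk-based severity adjustment: take a raw CVSS base score
# and downgrade it based on compensating controls. The deduction values
# are illustrative, not a standard.

def adjusted_severity(base_score: float, internet_facing: bool,
                      behind_strict_fw: bool, exploit_available: bool) -> float:
    score = base_score
    if not internet_facing:
        score -= 1.0   # no direct external exposure
    if behind_strict_fw:
        score -= 0.5   # strict firewall rules limit reachable interfaces
    if not exploit_available:
        score -= 0.5   # no known public exploit code
    return max(score, 0.0)

# A CVSS 9.0 RCE on an internal, firewalled server with no public exploit
# comes out around a 7 -- the "downgraded from a 9 to like a 7" case.
print(adjusted_severity(9.0, internet_facing=False,
                        behind_strict_fw=True, exploit_available=False))  # 7.0
```

CVSS has its own environmental metric group for this; the point is just that the adjustment is mechanical once someone records the compensating controls.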
MeanE@reddit
At least understand what it's used for: is it public-facing, or is it already well protected? Know the risk of the app itself, not just that it has a vulnerability.
ChataEye@reddit
You have to understand this: security teams aren't necessarily IT people. They typically work with dashboards that light up red when something's wrong, and your name ends up on it. That's when you get the alert: fix it in 48, 72, or however many hours.
The problem? Sometimes what needs fixing involves reworking parts of the infrastructure, which can take days. But that doesn't matter to them. All they see are dashboards and deadlines.
PhillAholic@reddit
Yea... that's the problem. I like cars, but hiring me to look into a garage's productivity would be stupid.
scriptmonkey420@reddit
And most of the time they're wrong, or it's not for my team to handle...
I do SSO but will get TLS vulns for RDP that are for the Windows team, not my team, just because we use the server.
rankinrez@reddit
You should be happy to have a security team finding them for you.
CVEs just keep coming. None of us can help that, but we all need to stay on top of it. That's just life.
PhillAholic@reddit
It's a balance. If I'm drowning in meaningless bullshit, the real ones are going to get buried.
lumirgaidin@reddit
Former Infrastructure Engineer turned TVM Analyst. Yes. And no. But also yes. Seeing it from both sides: without some deep technical knowledge, a lot of it is just "OMGWOW CVSS 10 PATCH NOW".
SysAdminDennyBob@reddit
Yes, this is a common approach. It can become overwhelming depending on the security team's operational nature. For example, browser updates: there can be multiple of these per month. Some security teams want you to deploy these updates instantly, but then you look at your patching routine and it only runs once a month. In those cases I had my management address it:
"The Patch Team patches once a month. Everything else is an out-of-schedule patch. You (Security) need to define when a CVE is bad enough that we would patch outside of our normal schedule, it should be very rare. Change Control should have to approve."
Further, Security is not allowed to send me a task if the update has not gone through the normal schedule yet. I set everything on Patch Tuesday and lock it down. I do not add anything more until next month. "Security, DO NOT send us a vulnerability that will get automatically patched with next month's regular schedule. No ticket at all, nothing in my queue, understood? You missed the cut off and it's not an urgent patch, you'll get it next month with zero effort from me, it's automatic."
Solutions to get out of the churn:
When you get a task and it has 10 systems that are missing an app update, don't just address those 10. Instead expand out your deployment to all systems that have that application. This prevents them discovering more on the next round of scanning. Do more than what that ticket asks.
Buy a big-ass patch catalog. Purchase something like Patch My PC. This gives you a gigantic array of application patches, all automated. You start patching EVERYTHING. You leapfrog security and get ahead of them. Stop waiting to get a ticket on an app, just go ahead and patch it. Your app teams will fucking hate being current all the time, fuck 'em. This takes some political capital, but this action dropped a huge flow of security tasks down to a trickle.
Fumblingwithit@reddit
Our company's security team does fuck all but cut-n-paste general best practices and PowerPoint presentations.
_W-O-P-R_@reddit
As a security guy, I try not to - I talk with my sysadmins and network engineers to figure out if we should actually be concerned based on where the vuln is in our environment, what compensating controls exist or could be engaged, what LOE/downtime/cost the fix would entail, etc.
herdthink@reddit
Yes. I handle VTM patching for 300ish servers (as a backup admin, not an SA). Around 50% of the time the patches are scheduled; the rest are "this just came up, get to work."
fixit_jr@reddit
Yes. They have basically become our master and commander. Between the number of their tools we need to deploy and constantly change, and the new security controls that alter the dev tool pipeline and cloud infrastructure, all I do is fix vulnerabilities with no help, deploy their tools, and deal with issues caused by new controls that break things that already worked. It's been like this for the last 3 years.
PigInZen67@reddit
We have internal SLAs tied to the severity level of reported vulnerabilities. We remediate with best effort in accordance with those SLAs. Our security team does handle the automation that reports the severity level, but they certainly are not "dumping" vulns on us.
Geek_Wandering@reddit
Depending on severity and impact we are given between 1 week and 6 months. If there's a valid business reason deadline that can't be met, temporary waivers are usually granted. They are extremely firm on completing the hardening but understanding of the need to still run the business.
tripodal@reddit
Received a finding the other day because we have an exposed VPN. >.>
Mr_ToDo@reddit
I guess we'll put it behind a proxy :|
tripodal@reddit
At least 7
AbleSailor@reddit
I call them the Paul Blarts of the company. Observe and report. They have gone so far as to automate ticket generation for their findings, flooding the unassigned queue each month. At least we got them to run their scans AFTER patching was done.
whopper2k@reddit
I'm a security engineer who mostly focuses on application security, but since that so often involves containers these days it gets mixed up with system CVEs quite frequently. Plus, I also work with the engineers who deal with more "traditional" OS patching and vulnerability management. I'm not the most experienced, but I do want to give my 2 cents here.
Anyone who has ever had to look at CVEs on a regular basis knows that they are not created equal; for every "oh shit if we don't patch this right now we're so fucked" CVE, there are hundreds if not thousands that have some combination of vague language, fearmongering about complex exploit chains requiring some level of existing access, or that are straight up contested by the authors of the "vulnerable" application for various technical reasons. One need only look at CVEs from the Linux CNA to see what I'm talking about: high-severity vulnerabilities that only work on specific hardware, descriptions so long they get cut off by character limits, and internal kernel jargon that is extremely poorly documented outside of the decades-long mailing lists. The amount of time it takes to review and confirm even a single CVE ranges from a couple of minutes to hours of digging through search results, all the while the application/OS is potentially vulnerable.
On top of that, not only does the security team have to understand the CVE itself, they also have to know your environment to see if, say, the fact your app uses HTTP actually means it's sending data in plaintext, or is just a sign you're using a load balancer to handle the TLS for you. That's where the collaboration, as you pointed out, comes into play; we simply don't have that kind of knowledge, and even if we did there's no guarantee that documentation/tribal knowledge is up to date. Outside of basic OS patching (which really should be automated/standardized anyway), we basically have no choice but to at least talk to app owners about our findings.
I personally do what I can to make sure that there are patches for CVEs available before I reach out to a developer, and even review application code myself to verify that an automated scanner isn't marking a random variable as potentially malicious user input while missing the input validation done in the same function (which happens all the time). I do sometimes have to say "please just patch this" (see the Linux CVE point above), as I also have other tickets/duties and can't spend hours on every finding. That said, I have coworkers who aren't as careful and have had to step in to handle their screaming before it reaches anyone outside of the team. But we always work with app/server owners on deadlines when conflicts are brought up.
Just want to provide some context as to what kind of workload you might not be seeing on the receiving end; as with most IT roles, the day-to-day is usually invisible to everyone else. But also you might just have a shit security team who doesn't realize they work for the business, not their tooling.
natflingdull@reddit
Fearmongering about exploits that require admin access is so real. I've honestly never understood it; I've had many of these types of exploits put on my desk and I'm just like, "if the black hat has a global admin account and password, we're already boned. It's like buying expensive padlocks for someone with a master key."
whopper2k@reddit
The number of vulnerabilities I've seen that are marked as High severity and require a maliciously crafted esoteric filesystem is truly an indicator that risk scores are completely made up.
general-noob@reddit
Pfft... if they actually notify us of anything, they just forward the alert without verifying anything first. We get a Nessus scan once a month that includes so much extra client stuff, or just IPs; most of us never look at it, and they never even follow up.
natflingdull@reddit
Lol, this is painfully accurate. I didn't realize the whole "I get paid six figures to forward a Nessus report with zero additional information" thing was so goddamn common.
And those reports often suck because they just use the built-in scans. I didn't realize how many infosec teams are not tuning their scans AT ALL until I actually had to manage some Tenable products.
general-noob@reddit
"Your RHEL 8 systems don't have the newest Apache installed." (Security monkey)
"Did you check the Red Hat scan option?" (Me, not a security person, but knowing how it works better than they do)
"The what?!"
Jesus.
ambscout@reddit
I typically pass workstation vulnerabilities to the sysadmin/help desk, especially if it is the result of them not updating something, etc. that they should have.
Emiroda@reddit
If you're not on top of vulns in your own systems, you're mediocre at best. The point of Ops/sysadmin is resilient and secure systems. That CVE can't be exploited because you have a WAF in front? Great, that's noted as a mitigation in your risk assessment, right?
What is the point of this thread other than to flex that you're a mediocre sysadmin?
Arudinne@reddit
You guys have a security team?
jhupprich3@reddit
Our team can't even pull account inventories, server counts, etc. They don't even respond to any of the CVEs I send them from Defender (they won't use Defender). CISA gets up our ass about late reports because the person handling that doesn't know how to log into a tenant to find the tenant ID.
CmdrDTauro@reddit
Worst is when they use historical data from machines that are offline, or flag out-of-date apps in old user profiles that aren't presently being used.
Yes, these out-of-date apps exist in inventory, but if they're not actively being used then it's just a reporting blip, and it isn't infrastructure's responsibility; it's a compliance reporting thing.
If infra has put in place the things that will update them when they come online, then infra's job is done. It ain't my job to massage your numbers.
natflingdull@reddit
Fucking Teams
andrewthetechie@reddit
Yup. Without understanding how things are actually vulnerable or not.
Newest fun is their tool doesn't understand Ubuntu patch versions, so they scream at me that SSH is vulnerable even though it's not.
RouterMonkey@reddit
I'm curious. Are you saying they should do your job for you (remediating your equipment), or that they should just ignore this stuff and leave you alone?
Their job is to find vulnerabilities; your job is to manage the equipment under your control, including remediating vulnerabilities.
EraYaN@reddit
The problem is the tools spew like half bullshit findings... try them on any random Node project and oh boy. And you often just can't do shit about it. Or it's already mitigated because it's only in test code, etc. You end up doing something for maybe 10% of tickets. But the research alone is one FTE's full-time job on your team of 2...
natflingdull@reddit
Absolutely true. The overwhelming majority of vulns I've been tasked with remediating were false positives. I had to do this at an org that used non-persistent VDI, and it was a nightmare because the infosec team wouldn't tune the scans to focus on the golden image. Just constant spam of vuln reports for hundreds of machines; it made actually remediating stuff very difficult.
NegativePattern@reddit
Yep! I find the vulnerabilities and dump them on IT.
In my defense, that's the whole separation of duties part. I do provide assistance if they can't figure out how to remediate the vulnerability. Usually in my report I highlight what the fix is.
natflingdull@reddit
That's really all you need to do, but a lot of infosec people will just forward a CVE and expect the work to be completed immediately. It's mostly an issue of process, IMO.
Memento-scout@reddit
At least in our org we provide the details on how to fix it (reg key, GPO setting, config, etc.). We check for any breaking changes on a small subset of hosts, then hand it over with the notes from that.
natflingdull@reddit
This is exactly how it should be done. I can research the impact of a patch, update, hotfix, etc. because I own the OS, so that's 100% on me, but just forwarding a vuln scan with no additional information is just lazy.
I'm even cool if the security team doesn't have the details on the fix; they just need to work with me and explain the impact so we can prioritize accordingly. There also needs to be an understanding that unless it's a zero-day, I need to do some research on the change before pushing it to prod, which takes some time. I used an MSXML parser as an example in a previous comment; we had a vuln for this a while back. I've worked with security people who would expect, since I'm an MS admin, that I have in-depth knowledge of what every .dll is and the purpose it serves, which is obviously a complete misunderstanding of what admins do.
WhenTheRainsCome@reddit
And on the flip side, the Infra team can't figure out how to deploy security agents on Linux because it takes, checks notes, two steps after install.
And they don't validate anything. "We pushed a package to remove that version." ...Did it work? "We pushed it last week." ...I still see installs. "Are you sure? We pushed it to every host just to be sure." ...Here's the list. "Your data must be stale!"
weeks go by
"Yeah, there was an issue with the uninstall package"
Our dev teams get the white glove treatment, infra gets spreadsheets.
Avengeme555@reddit
This has been my exact experience. In my past two roles InfoSec has been by far the laziest team and is constantly trying to push off work onto others.
securingserenity@reddit
There may be other reasons you call them lazy, but recognizing separation of duties is not laziness.
It is generally considered a conflict of interest for the people that find the problems to also be the people that fix the problems.
mirrax@reddit
We investigated ourselves and found nothing wrong!
pdp10@reddit
It wouldn't be a conflict if the person responsible for fixing them, found them first and deprived the GRC team of their KPIs.
mirrax@reddit
I mean that's just practical application of Goodhart's Law.
Avengeme555@reddit
Yeah, I'm not trying to disparage the entire InfoSec community or anything like that. It just so happens that my current and former teams would do minimal work and try to push issues with their software off onto other teams. Of course, I'm not standing over their shoulders the entire day, so this is just my perception based on experience.
0zer0space0@reddit
I especially like when they give me "new" ones with a due date in the past because they sat on it for a month before giving it to me.
Alternative_Cap_8542@reddit
I came to realize we are just here to be seen.
hardingd@reddit
As long as they are providing details of what the scan found (i.e., "CVE-###: this patch should be applied" or "this registry entry was set to X but should be Y"), then that's all they should be doing.
Kynaeus@reddit
Sometimes, yes, but they also get the same back from us on occasion
Most of the time they do their best to alert us to problems ahead of time, give us as much time as is possible, work together on exceptions, compensating controls, and other mitigating factors
We only need to work on remediating medium and higher vulnerabilities (PCI-DSS), they will also do everything they can to help us as long as we aren't shirking our responsibilities
They try and equip us with tools to get more info and to publish and share remediation instructions specific to our company, they are also involved in all of the planning and architecting discussions to ensure that security is top of mind so that problems don't get blown back in our face at the last moment
...Ok now that I'm typing it out I'm hearing that we actually have a pretty great security team because security is a top-down concern for everyone, not an after-thought
Cryptic1911@reddit
Yes, and it's fucking annoying. I get it, I really do, since we got smacked with ransomware like 5 years ago, but they go to the extreme. They've got software up our ass reporting CVEs back to a dashboard and scoring all machines/companies on it. Then they harp on the smallest stuff that isn't a huge deal, basically wanting to have 0 CVEs in an environment with like 20k machines, so it's never going to happen. It's just every day of "jesus christ, what now?". Security (and legal) basically owns the rest of IT, to the point that it takes like 6 months to get a software review for something we need. Forget just downloading a tool to help fix a situation. The days of just getting shit done are over.
modern_medicine_isnt@reddit
I send the ticket back if it doesn't clearly state the vulnerability and how it can be exploited. It also must say, in detail, what remediation needs to be done. And then there's the one they often can't deliver: how will I be able to prove that it is fixed? Otherwise, I simply say it isn't actionable. That forces them to stop just throwing crap over the wall. A lot of the stuff turns out not to be exploitable in our env because of some other protection. So when they do spell it out, I can document that, and we move on. And if it does need to be fixed, they have provided the how, which makes it low effort.
many_dongs@reddit
I'm on a Security team currently that does this. Every team I've lead in the past did better than this.
The difference is the management. My executives (I'm a director now) are complete morons.
LankToThePast@reddit
When I've been somewhere larger, with a security team, we had a great working relationship with them. They would help us implement the changes, even work with our schedule, and if they needed something sooner they would give us cover for delaying other projects.
EggoWafflessss@reddit
All the time. Unfortunately I'm the security guy as well, and that guy's just an ass.
Rhythm_Killer@reddit
Yes, and that is a pain in the arse. However, as I tell my team: would you rather trust those people with the power to go in and do it themselves? Usually the answer is no!
ipreferanothername@reddit
They used to do that, it's kinda managed now so it's not as bad. Took ages though.
Mandelvolt@reddit
Once a month I get a report with a handful of items to fix and about a week to do it. Takes a few days, but of all the mismanaged parts of my job, this workflow is actually quite pleasant. I'm getting fewer items every month since I can publish update rules to GP/MDM/Ansible, and our cloud environment is quite mature by this point. Those first few reports were brutal, but that's life when you're an underpaid startup employee absorbed into a larger company.
redyellowblue5031@reddit
Finding vulnerabilities is part of a successful layered security program and is legitimate work. Pretending that it doesn't matter or shouldn't be someone's job is burying your head in the sand.
That said, ideally there is collaboration between teams.
We try to research what's found and prioritize the most critical ones: those that appear to be lower complexity, are actively being exploited, have extra exposure in our environment specifically, etc. We also try to do some legwork to find what the solution should be.
There are limitations though, as separation of duties means we don't have admin rights to run most things (which we shouldn't), so yes, the work of actually patching can fall back to admins.
Additionally, admins who own said systems should have some concept of how they work/how to patch them.
Ultimately like I said, it should ideally be a collaborative effort. No single person is responsible for all of it from a technical perspective; we all have some slice of ownership in the process.
af_cheddarhead@reddit
Our security team does not have the permissions necessary or the expertise to actually perform the remediation actions, nor should they have the permissions as this should be a division of responsibilities thing. Of course, in many shops this is pie-in-the-sky thinking due to the lack of adequate manning.
There should be some discussion with the Security team as to priorities and mitigation actions when scheduling the time to perform these actions.
gokarrt@reddit
our "security team" is just a scanning tool relay service. they could be replaced with any simple queuing service.
sybrwookie@reddit
Sure, all the time. And it goes something like this:
InfoSec: "THERE'S A VULNERABILITY!!!!1111"
Me: "OK, is there a patch you're asking to be applied? A setting you're asking to be changed?"
Infosec: "IT'S RATED 9999999/10!!!111"
Me: "That's nice. I've already said I'll change whatever you want. Tell me what you want done."
Infosec: ".....our tools say this is a problem, does that count?"
Me: "No, it doesn't. Look into it, see what actions are recommended, and once you've made a decision on the actions you want taken, tell me and I'll make sure they're done."
Infosec: "Microsoft released something, it's gonna be wrapped into their cumulative this month."
Me: "Alright, so then we're good here?"
PappaFrost@reddit
It sounds like your security team wants you to have extra staffing help.
Never say "No."
Say 'Yes + Invoice.'
Let THEM say NO!
"You want me to fix all of these. I would LOVE to, but unfortunately we are understaffed at the moment. Here's a job description for a new hire that will help us meet the organization's security goals."
goingslowfast@reddit
I've got security and ops on my team. Our security teams regularly drop work on the operations teams and that's just the reality of operating in 2025.
Security is about intelligence, not reading blogs or hitting start on a vulnerability scanner and emailing you the output.
Your security team shouldn't be replaceable with an RSS feed from Bleeping Computer, GBhackers, or similar. They should be flagging / prioritizing issues based on their knowledge of your environment. Your security team should be able to let you know where you have the vulnerable service or product installed, what services you provide that rely on it, and how critical it is to skip ahead of other work.
You need to be able to rely on your security team to tell you if this is a "FIX NOW", a "fix next release", or "put it in the backlog". That is the intelligence they should be providing, and it needs to be informed by an understanding of the impact that pulling the fire alarm for a vulnerability has across the organization.
> I'm a one man infra engineer in a small shop but lately Security is influencing SVP to silo some of things that devops used to do to help out (create servers, dns entries) and put them all on my plate along with vulnerabilities fixing amongst others.
You're becoming platform operations vs infrastructure engineer if that trajectory continues.
DevOps should be viewed as a philosophy not a structure. Unless you're creating VMs/DNS entries in the whole pipeline from dev to qa to prod, inconsistency will eventually bite you. Ideally, dev builds the deployment tooling that creates your VMs, makes DNS entries, etc., QA tests it, then you operate it.
Mozbee1@reddit
Yes, it can be overwhelming to wear multiple hats, especially in smaller shops. But pushing back against security findings because they're inconvenient isn't productive. If anything, that creates a risk backlog that eventually becomes a breach story. If the team is short on resources, the right move is to escalate that through leadership, not treat the Security team as the enemy.
AirCaptainDanforth@reddit
Yes
travyhaagyCO@reddit
At least half of my job is remediation of vulnerabilities now. Our company decided to make them the same priority as a server down. Good times.
telvox@reddit
Our sec team is 90% running reports, 10% dumping everything on server admins, and 1% breaking things so they can say they give 101%.
potatobill_IV@reddit
I am the security team.....
And the network team.....
And the engineering team.....
woohhaa@reddit
We had this issue at my old shop. The security team's requirements started to consume most of our time, causing project delays. We ran it up to the infrastructure director, who then started pushing back against the security folks. They eventually budgeted for a security operations group who took over all those tasks.
It was rough to start but as they got familiar with our environment and started to build connections with the right people it really took a lot off our plate.
flummox1234@reddit
you have a security team? /s
0DayAudio@reddit
Security person here. I understand your frustration; being handed a list with zero priorities and just told to fix it is not what a good security team should do. However, as a sysadmin it's part of your responsibility to maintain the OS, patching included.
A good sec team will help establish SLAs for remediation based on a combo of CVSS scoring, actual exploitability, and environmental conditions, i.e. is the asset in question edge-facing, in the DMZ, or fully internal.
False positives are part of the security life; there is never going to be a time when there won't be false positives, and it should be part of the sec team's process to help verify whether a finding is a real vulnerability or a false positive.
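To make the SLA idea concrete, here's a rough sketch — the tiers and numbers are invented for illustration, not any standard:

```python
# Hypothetical sketch: derive a remediation SLA (in days) from CVSS score,
# known exploitation, and asset exposure. All tiers/numbers are made up.

def remediation_sla_days(cvss: float, exploited_in_wild: bool, exposure: str) -> int:
    """exposure is one of 'edge', 'dmz', 'internal'."""
    # Start from a base tier driven by CVSS severity.
    if cvss >= 9.0:
        days = 7
    elif cvss >= 7.0:
        days = 30
    elif cvss >= 4.0:
        days = 90
    else:
        days = 180
    # Tighten the window for internet-facing assets.
    if exposure == "edge":
        days = max(1, days // 2)
    elif exposure == "dmz":
        days = max(3, int(days * 0.75))
    # Active exploitation trumps everything else.
    if exploited_in_wild:
        days = min(days, 3)
    return days

print(remediation_sla_days(9.8, True, "edge"))      # critical, exploited, internet-facing
print(remediation_sla_days(5.3, False, "internal")) # medium, internal-only
```

The point isn't the exact numbers — it's that the sysadmin shouldn't be the one inventing the priority.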
I spent 10 years being a penetration tester and one of the things I did at the company I worked at was work with the vulnerability team and the sysadmins to help verify if vulnerabilities were actually there or not.
I also helped educate the admins on why this stuff is important. An example: I had a DBA who managed a number of MSSQL servers in our environment; he was responsible for both the OS and DB stuff for these systems. He refused to patch for various reasons: no time, uptime requirements, etc. There was a vulnerability a number of years ago where an attacker sends a malformed packet to the server and kills it. Instant blue screen of death. There was even a Metasploit module that fired off the attack for you; all you had to do was put in the IP address of the SQL box. After going back and forth via email and IM, I simply went over to the other building, sat in his cube with my Kali laptop, and asked him to pull up the console of one of his servers. Then I showed him how easy it was to blue-screen his box. His reaction was priceless: pure utter shock at how easy it was to mess with his server. I saw the light of realization in his eyes, and as a result of what I showed him he became the biggest advocate for the vuln team and patching at the company. He even helped refine some of the processes and procedures IT used to make things quicker.
Bad/lazy teams exist and it sucks. My current job is at a company where the former sec team did the bare minimum, sometimes not even that, and were eventually fired for mismanagement and incompetence. I've spent the last year cleaning that up and helping educate the rest of ITOps on what a good security team can do for them.
The best advice I can give you is push back on them. Make them give you real SLAs, and prioritize what needs to be remediated. Get them to commit to real policies and not just an arbitrary "fix this list" style of operation.
Zer0C00L321@reddit
1000000%
General_Ad_4729@reddit
I've worked in the private sector and DoD. This is the way.
gregoryo2018@reddit
We were a bit lucky that our current security bod has an ops background. I was disappointed to see the fairly rapid turn to throwing things over the fence at us, but I'm not losing hope.
Separately in meetings with a bunch of ops people from various orgs earlier this year, this broad issue came up. A couple of them brought us around to trying to think from the security POV. If we show some level of concern for their priorities, we can have higher hopes of being able to assert our own.
I think we're instinctively doing that with our org; certainly I'm trying to, and noticing it more since that conversation. I reckon it's starting to work. We're getting useful tickets instead of half-arsed CRs, pull requests alongside the "you should fix that thing pronto" messages, and some supported process around security engagement early in deployment or in our own change practices. So, I'm still not losing hope.
bindermichi@reddit
Ask them for a CVE number so you can check for available hotfixes.
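A quick sanity check before chasing a hotfix — a sketch that validates the ID format and builds the public NVD detail-page URL:

```python
# Sketch: sanity-check a CVE identifier, then build the NVD lookup URL.
# Offline-only; the URL pattern is NVD's public CVE detail page.
import re

CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def nvd_url(cve_id: str) -> str:
    if not CVE_RE.match(cve_id):
        raise ValueError(f"not a valid CVE id: {cve_id!r}")
    return f"https://nvd.nist.gov/vuln/detail/{cve_id}"

print(nvd_url("CVE-2021-44228"))  # log4shell
```

If they can't produce a CVE (or equivalent advisory) for a finding, that tells you something too.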
hkusp45css@reddit
Information Technology in ALL of its iterations and presentations is ALWAYS just "customer service." Every single thing you do is in service to some internal or external customer.
If one of your customers isn't working within the defined process, you need to align your customer, not your process. If there isn't a defined process, create one.
My sec team drops all manner of hyper-critical fixes on my ops team. That's literally the job, for both groups.
We have a codified process, SLAs and documentation so that everything is timed, tracked and artifacts are created and preserved.
If the work you're doing for your sec team is just catch-as-catch-can drive-bys or verbal/email instruction to "get it fixed," then I can understand why it would feel overwhelming.
I can only caution that your best path forward is changing the process and mechanisms, not necessarily the workload.
hellobeforecrypto@reddit
That's kind of the worst way to do security.
Am security person.
meiko42@reddit
This reads to me as if security are just inventing problems and it's their fault somehow lol
SoftwareHitch@reddit
Wait, you guys are getting security teams?
ultraspacedad@reddit
I'm solo like you so I'm always giving myself all the work to do.
lucke1310@reddit
Being the System/Network/Security Admin, I make sure I don't do this. I have to.
All this to say that being on a smaller team means wearing more hats and not passing the buck.
Ghul_5213X@reddit
"without any help from them."
They are helping you, they are showing you the vulnerabilities.
Security should not be admins; it's a conflict of interest to dual-hat these positions. You want a security team to be incentivized to find vulnerabilities. If you put security in the position of fixing them, you can get a situation where they are reporting a better security posture than actually exists. You want them uncovering problems, not sweeping them under the rug.
biglawson@reddit
All the time.
itmgr2024@reddit
Those who cannot do, do IT Security.
hajimenogio92@reddit
You guys have a security team?
nikdahl@reddit
My favorite is when they send us vulnerabilities, but the vulnerability is part of the machine image that SECURITY SUPPLIES, and they refuse to acknowledge or fix the image so we can redeploy.
BronnOP@reddit
I wish.
I find a list of vulnerabilities and it's on ME as the security team to fix them. Just getting people to reply and let me reboot a very minor server is a chore. I'm doing the scanning. I'm doing the remediation. I'm doing the re-scanning. It seems they want to put as many hurdles in front of me as possible.
Are_you_for_real_7@reddit
Yeah, so imagine me, a network engineer, flagging holes to the security team so they can refer them back to me to fix, so that I have a justification for the management team to approve a firmware upgrade. How silly is that?
OneStandardCandle@reddit
I see these threads occasionally and I always want to ask: are you guys hiring?
I'm a security guy doing most of our vuln management work. I find that I have to prove out the vulnerability ten times over, then coach the barely-technical app admins to fix the problem. I have a critical vuln on an external-facing, high-impact app that I've been fighting to get a change scheduled for since January.
raytracer78@reddit
Yes - just a dump of "here's what our vuln scanner tool found, please address/fix ASAP." They don't seem to do any of their own research or confirmation that we are actually impacted, or whether it's a false positive, etc. It drives me nuts because I have 100 other things I'm already working on, and here we have someone being paid to just forward me reports that another service is generating, and they get to have a job title of "Cyber Security ENGINEER". They aren't engineering shit other than more work for me.
KickedAbyss@reddit
I wanted to respond but I'm busy dealing with CMMC crap for the next forever for security
notl0cal@reddit
You gotta play the game too.
The relationship between SAs/engineers and ISSx roles is all about shifting blame.
It's a giant game of fucking tug of war, and it all comes down to people not doing their jobs correctly... or just simply not caring.
This is a problem that plagues every workplace regardless of title.
Sudden-Most-4797@reddit
Oh, yup. Yes they do, just about every day. Then everyone gets annoyed. My take is if everything is high priority, nothing is high priority.
duranfan@reddit
Alas, yes.
Bubbadogee@reddit
You should be doing regular patching which will cover most of those vulnerabilities.
However, if they're just providing a list of CVEs for, let's say, Apache, take a look through them: 9 out of 10 typically never apply to your app. Some of the CVEs are like
"if you have the bogo option turned on, and you have HTTP exposed, someone can view all your files."
Stuff like that, where it's an extreme edge case that, sure, if it's true, is bad and you need to patch or update. But then you go take a look and:
nope, the bogo option is turned off and HTTP is not exposed.
The security guy more than likely just threw the app/version through a CVE detector, gave you the list, and called it all in a day's work.
Now, if he's providing misconfiguration findings showing "hey, I did some scanning and found our .env is exposed on this app," then that's a different story.
Sounds like you just have a lazy security team if he's providing you a list of CVEs with nothing more.
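That kind of applicability triage can be sketched in a few lines — CVE IDs and precondition names here are invented for illustration:

```python
# Hypothetical triage sketch: mark scanner-reported CVEs as applicable only
# when their preconditions actually hold in our environment.

ENVIRONMENT = {"bogo_enabled": False, "http_exposed": False, "mod_status_public": False}

findings = [
    {"cve": "CVE-XXXX-0001", "requires": ["bogo_enabled", "http_exposed"]},
    {"cve": "CVE-XXXX-0002", "requires": []},  # applies unconditionally
    {"cve": "CVE-XXXX-0003", "requires": ["mod_status_public"]},
]

def applicable(finding: dict) -> bool:
    # A finding applies only if every precondition is true in our environment.
    return all(ENVIRONMENT.get(cond, False) for cond in finding["requires"])

actionable = [f["cve"] for f in findings if applicable(f)]
print(actionable)  # only the unconditional one survives triage
```

Documenting the "doesn't apply because X is off" decision is also exactly the evidence you want when the same CVE shows up in next month's report.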
im_suspended@reddit
Yes and I love it. I was really waiting for new bosses to toss some work at me, you know, urgent stuff that can be done whenever you want, at night preferably.
jameson71@reddit
I call them the security scan team. That's all they do.
Apprehensive_Bat_980@reddit
This
haksaw1962@reddit
That would be a 'yes.' And to make it worse, it goes to our team manager, then he farms them out to the team. But say we have a critical with a 30-day SLA; often I don't get it assigned until 14 days are left.
We used to use Tenable for our vuln scanner, and it was good at saying "this is a vulnerability, this is how we found it, this is how you remediate it." But they changed to Qualys, which gives us "this is vulnerable, here is the CVE" (if there even is a CVE; 20% just claim a vuln). You might get a listing of the detected .dll or registry key if you're lucky.
And don't get me started on the byzantine change control process to schedule a remediation.
Apprehensive_Bat_980@reddit
Yeah more or less.
plazman30@reddit
All the time. The worst part is that we have a patching team, but the security team refuses to communicate with them directly. So we'll go through a round of patching and they'll miss 2 servers I support. Then the security team reaches out to me to tell me my servers are still vulnerable, and it's my job to get the servers patched again. Not sure why I need to be the middle-man in this mess.
And now, when I reach out to a vendor to ask them if they're vulnerable to some critical exploit and if they've patched, the security team has decided they will only accept communication from a C-suite executive from the vendor, and if we can't get that, then we need to look for a new vendor. Somehow that rule doesn't apply to Microsoft, IBM, Oracle, or Red Hat, but it does for everyone else. I've had 100% of my external vendors tell me to go pound sand.
kanid99@reddit
I suppose this is why we have a security analyst on our team. He works with the security team to ensure that patching levels are maintained, and deploys/installs all security patching in our environment, including on servers. I work with him to make sure these patches are deployed successfully and assist him in resolving any post-patching issues that might come up.
I deal with all non-security patching, which ironically often ends up being an out-of-band hotfix for a security patch.
djgizmo@reddit
what would you have them do?
have you taken any of them out to lunch to get back on the same page?
What does your leadership expect of you when they report the vulnerabilities?
What are the consequences for cyber insurance and audits if you don't take care of these quickly?
BuffaloRedshark@reddit
yes. The new system they use is slightly better as it usually lists the KB to download (in the case of MS related ones) but overall they're still not helpful.
mdervin@reddit
I just tell them if it nukes the server are they going to help me fix it on a Saturday night? If the answer is no, I tell them I get to it when I get to it.
D_Bat@reddit
The vulnerability reports just come straight to me to fix. We have SOPs for how fast they need to be fixed based on severity and if it's a false positive I provide our security team with evidence so that we can submit to have it removed.
Guslet@reddit
We have 18 folks in our IT department. I am the SecOps manager; here is how it goes. I find a vuln, I go and look at what it is, I say "OK, I can fix this," I fix it. If it turns out to be ESXi or something that requires more information/downtime, I go to the asset owner, submit a ticket to them, and also call them to discuss. They then take 3 months to fix it, and I continually bring it up in meetings over the next three months with our CIO directly in front of the individual who is not fixing it. Eventually, three months later, it gets fixed. Easy peasy.
codewario@reddit
Yes, but we are a large organization. Our SecOps team manages the multiple scanning tools and infrastructure we use at our company, as well as enforcing the timelines of remediation.
We expect teams to manage their own OS once we deliver a server to spec (or if they build a VM via self-service), and remediating vulnerabilities isn't much different. Tickets are created and assigned to server owners when vulnerabilities are detected on their instances, with a deadline to fix. If remediation requires an OS patch or an update to one of our standard agents, then we (the Windows/Linux admins) are expected to remediate it regardless of owner.
That said we also assist teams who have trouble remediating their instances. We have strict timelines for remediation, and there are consequences when the deadline isn't met and there hasn't been prior extension approved for the remediation.
May sound harsh but the deadline, while strict, is generous and if you have a good reason, extensions can be approved (but never indefinitely).
GloomySwitch6297@reddit
I am my own security team and my own remediation team.
If I will say to myself "ASAP", it does not mean anything to me
mrtealeaf@reddit
Yes, but only on Friday afternoons.
fluidmind23@reddit
The first pass of Qualys identified 80k vulnerabilities. Most of them resolved themselves with updates that were automatic, but there were like 5000 actual interaction tickets created that day. (127-year-old global company with so much tech debt Microsoft should have repossessed everything 10 years ago.) I believe they are still working through them 4 years later.
bbx1_@reddit
What security vulnerabilities? Oh, those terrifying alerts?
Yeah, thankfully we don't have a vulnerability scanner, so vulnerabilities aren't an issue.
Truth but not.
ez12a@reddit
We used to get pings from all sorts of people about CVEs. It was too much noise for our oncall. I created a chat with only the security team and the process is to create an incident task for tracking response.
TL;DR: we treat them like incidents/tickets/tasks to be completed, albeit with higher priority if severe.
ElvinLundCondor@reddit
Security farmed out reporting to the QA team, who clicks a button and sends me the list of vulnerabilities to be remediated. Problem is, they don't know the meaning of the report. I'll get things like: port 443 is running version X of Apache, which is vulnerable to CVE-Y; upgrade to a newer version of Apache. Look up the CVE at Red Hat: CVE remediated at revision Z of the apache package, which is already applied. Try to explain to QA. Nope, you have to upgrade. OK, configure Apache to not report its version. Ask QA to re-run the report. You're clean, thanks.
And don't get me started on SSL protocols, cipher suites, and hash algorithms.
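For anyone wondering, the usual Apache directives for the hide-the-version trick are just these (worth noting this silences the banner scanners key on; it fixes nothing underneath):

```apache
# httpd.conf — stop advertising the Apache version
ServerTokens Prod        # send "Server: Apache" only, no version/OS details
ServerSignature Off      # no version footer on server-generated error pages
```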
No-Percentage6474@reddit
This is why I have 7000 tickets in queue. 6990 are security findings for software I don't have support on.
Hipster_Garabe@reddit
Do you work with me? I'm convinced cyber teams don't do anything except hand out Tenable reports and say "fix this." No direction, just "solve."
Nnyan@reddit
Team effort. Security personnel do more than just provide lists. They need to categorize by risk and provide a detailed assessment of why it's a risk to our environment. They research and provide remediation plans and test them as a joint effort (more eyes on potential impacts). Once a plan is approved for deployment, they are responsible for tracking it.
Significant-Ad-3617@reddit
Count yourself lucky that's all you have to deal with.
natflingdull@reddit
Yeah, it's happened to me many times. It's only particularly frustrating when I get forwarded vuln reports from teams who are uninterested in working with me.
For example, years ago I was working at a 500+ employee financial institution with a dedicated security team. I started getting tickets from the infosec team that were too vague to be actionable, such as "PHP 5.0 out of date and must be updated" on a Windows application server hosting like a hundred different RDS apps. I was pretty green at the time, so I assumed this was something you could update on the server itself like .NET, but obviously ran into issues when I realized how many applications/web servers were utilizing PHP. I reached out to the security team to see if they could help me narrow it down, and all I got was a lot of aggressive pushback and essentially "figure it out." I'm still no expert on PHP, but I eventually realized that to accomplish what they wanted, as frequently as they wanted, we would have to move most of the applications on this Windows server to Linux VM(s), which I absolutely had no authority to do as it affected almost every department in the company.
I had the security team and CIO breathing down my neck about these vulnerabilities, despite my explanation of the issues in fixing them, until I eventually got another job and left. At subsequent jobs I saw a lot of similar patterns of obstinate security people being completely unwilling to work with admins to solve problems, which is frustrating because I'm not the expert, they are, but I'm not going to blindly patch, update, or get a vendor involved just because someone said to do it and refused to explain the context. Like, why is it on me to go through tons of vulnerability tickets and research every single CVE when half the time it's referencing technology I don't understand or have never heard of? If your job is to research and analyze cybersecurity threats but you refuse to explain your analysis, then you aren't doing your job.
On the flip side of that, I've worked with great security people who've walked me through the issue. It normally doesn't take that long. For example, I was once tasked with removing the MSXML parser from a few Windows machines, and I reached out and was like "can you explain the issue before I go down this rabbit hole? I can't remove a system component on a production server without research into the impact, so I need to understand how serious this is before I prioritize the research and time it will take." The analyst was great: she broke down why it was an issue and explained how it opened up a pretty bad RCE-type vulnerability. The whole conversation took twenty minutes.
Honestly, I think there are a ton of people in that field who have no practical experience in IT, so they actually don't understand the vulnerabilities they're looking at, and they get cagey not because they don't want to explain but because they can't explain. Way too many people in that field think forwarding email reports from a pre-built Nessus scan means their job is over.
mycall@reddit
As an aside, as a software engineer, they often give us bogus vulnerabilities marked as high or critical, but once we review them, they are often low or "as designed" (aka ignorable). I just wish they were less paranoid and more knowledgeable. As taskmasters who don't understand coding, they become a serious detriment to my team's efficiency.
maziarczykk@reddit
Yes
nickerbocker79@reddit
I'm pretty much the only SCCM guy in our IT department. The worst is when the security engineer just sends me a Tenable scan of an entire location, without filtering it, asking me to take care of it.
orddie1@reddit
Yep. That they do
cammontenger@reddit
You guys have a security team?
WhereDidThatGo@reddit
Is their job just to take a list of vulnerabilities from some tool and throw it over the fence to you?
If the answer is yes, then what do you need those security people for?
Masterofunlocking1@reddit
I'm strictly networking but omg yes! I literally hate our security team.
flashx3005@reddit (OP)
Amen to that.
ronmanfl@reddit
Constantly. Just got a ticket for a bunch of new zero-days this morning, in fact.
GlitteringAd9289@reddit
zero-day + 1 hour. Have to wait for coffee to kick in first
SandeeBelarus@reddit
Yes. Security pros are not always knowledgeable about platforms, so they are often analysts skilled at the scanning tools rather than actual infosec engineers.
blue_canyon21@reddit
Yes! And they give really stupid deadlines for them too.
ccsrpsw@reddit
Without a shadow of a doubt. End-user training too. And signage. And anything that does any sort of work that isn't "interesting" to them, imo.
And it used to be not using tickets 99% of the time too - just emails. Random emails. That 3 weeks or so later they follow up on with astonishment that we didn't see them. At least that got fixed.
musashiro@reddit
Sadly yep, even though it's not something we should do 🤦‍♂️
fnordhole@reddit
Yeah, they don't vet whether the vulnerabilities match the target environment. For example, reporting Cisco vulnerabilities on a Windows 2022 server based on a default Nessus scan running from inside the network with domain admin credentials. They just copy and paste the boilerplate from the tool they use.
They're a bunch of six-figure copy-paste monkeys who can do no wrong so long as they're making life difficult for everybody. So they double down.
Criticisms about their tactics and performance and general ignorance of how anything at all (especially networking) works are viewed as being anti-security.
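The sanity filter that's missing is trivial — a rough sketch (field names invented; a real Nessus export uses different columns):

```python
# Sketch: drop findings whose vendor doesn't plausibly match the target OS,
# e.g. Cisco IOS checks reported against a Windows server.

def platform_plausible(finding: dict) -> bool:
    # Map vendors to the platform their findings are expected on.
    vendor_os = {"cisco": "ios", "microsoft": "windows", "redhat": "linux"}
    expected = vendor_os.get(finding["vendor"])
    # Unknown vendors pass through for a human to look at.
    return expected is None or expected == finding["target_os"]

findings = [
    {"vendor": "cisco", "target_os": "windows", "title": "Cisco IOS XE bug"},
    {"vendor": "microsoft", "target_os": "windows", "title": "SMB signing"},
]
kept = [f["title"] for f in findings if platform_plausible(f)]
print(kept)  # the Cisco-on-Windows finding is filtered out
```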
govatent@reddit
I love when they tell you to fix it but they themselves don't understand how to fix it or what it even is. But it's on this random report the tool generates.
PoolMotosBowling@reddit
what vulnerabilities specifically?
With proper endpoint client auto-updating and a patch management schedule, most of your systems should be pretty up to date, right?
Skinny_que@reddit
Yes, with a deadline and very little insight or anything like that.
Absolute_Bob@reddit
Establish a change management process and follow it.
PurpleFlerpy@reddit
I'm on the security team and we get all sorts of crap dumped on us! If helpdesk didn't throw anything that they don't understand at us, we wouldn't be dumping the vulns on other people.
Zerguu@reddit
Our Security team is just a helpdesk in disguise for people who want to feel important. Any alerts they get from the insufferable SIEM, they just pass to the infra team.
Mine-Cave@reddit
That assumes the security team even comes to you with a vuln at the start.
Most of the time my cyber team just comes to us when a vuln has been open for too long. "Hey go fix this"
Sylogz@reddit
I get results of each scan and the system creates tickets with resolutions for each.
So in the ticket I have the assets, what needs to be done, and how to do it.
Then tickets are closed automatically when fixes are done and scan has resolved the issues.
Deadlines are very relaxed, and no one from the security team has complained if we are late or similar. They are just happy we do our best.
We have automated more and more; it used to take a person 30-40 hours to be compliant, and now it is done in a more automated way. It takes 3-4 hours per month instead.
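The auto-close step from that kind of setup can be sketched like this (ticket shape invented for illustration):

```python
# Sketch: compare the latest scan against open tickets and close any
# whose finding no longer appears in the results.

open_tickets = {"T-1": "CVE-2023-1111", "T-2": "CVE-2023-2222"}
latest_scan = {"CVE-2023-2222"}  # only this one is still detected

closed = [t for t, cve in open_tickets.items() if cve not in latest_scan]
still_open = {t: c for t, c in open_tickets.items() if c in latest_scan}
print(closed, still_open)
```

The nice property is that the scanner, not a human, is the source of truth for "fixed."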
Booshur@reddit
If you need more time, then tell them as soon as you can. If they want it done sooner, then you have grounds for a headcount increase, even if it's only temporary help. I have contractor friends who I can pull in for stuff like this.
Tech4dayz@reddit
Yup, every job except the current one has been like that. I used to think I wanted to do infosec, then I saw what they really do at 90% of companies and I noped out of that idea real quick, I think I'd off myself if that was my job.
tankerkiller125real@reddit
I'm the solo IT Admin, and I have vulnerability SLAs I have to meet for SOC 2. It's annoying as all shit, but that's the way it is. Luckily between MS Defender, and Action1 it's easy enough for me to keep up with it all.
riddlerthc@reddit
Glad to see I'm not alone in this. Not just vulnerabilities but everything related to them. It's always "fix ASAP or we will schedule a meeting to work on it together" crap.
meowMEOWsnacc@reddit
That sounds about right. Source: I work on a security team
reegz@reddit
Well, most updates happen on a set schedule. Out-of-band ones are different.
In my org, a team should have a patch schedule, and when those updates are released they're installing/testing them within a predefined SLA.
If we're contacting you, it's because you missed your SLA and didn't file an exception, etc. Too often I get managers telling me this is unplanned work; however, the patch cycles are quarterly/monthly at the same time. It's planned work.
DR952@reddit
Yep.
Shiveringdev@reddit
Omg yes. Every week and hundreds of emails.
robvas@reddit
Pretty much. Keep things up to date, configure according to their required guides/specs or best practices.