Do you let your security team dictate how you run your systems?
Posted by Public_Warthog3098@reddit | sysadmin | 132 comments
Through the years, I have come to realize that a lot of the people working in security have never worked in operations. A lot of the security folks with their Security+ have never locked down or hardened xyz systems. Has the gap and disconnect been a problem for you? How is it dealt with?
ballkali@reddit
We had an audit last year that exposed we had zero reliable before/after change data for GPO modifications; native event logs were basically useless for reconstructing what actually changed and when. After we deployed a change-auditing tool, the next time something broke we could pull a full forensic timeline in minutes instead of spending half a day digging through fragmented logs. The 170+ risk checks also flagged unconstrained delegation on a few accounts we genuinely didn't know were sitting there.
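For anyone who wants to hunt for unconstrained delegation without a commercial tool: the flag lives in the `userAccountControl` attribute (bit `0x80000`, `TRUSTED_FOR_DELEGATION`), queryable with any LDAP client via the bitwise-AND matching rule. A rough Python sketch of the check (the filter string and flag values are standard AD, everything else here is illustrative):

```python
# TRUSTED_FOR_DELEGATION bit in Active Directory's userAccountControl attribute
TRUSTED_FOR_DELEGATION = 0x80000  # decimal 524288

def has_unconstrained_delegation(uac: int) -> bool:
    """True if a userAccountControl value has the unconstrained-delegation bit set."""
    return bool(uac & TRUSTED_FOR_DELEGATION)

# LDAP filter using the bitwise-AND matching rule (OID 1.2.840.113556.1.4.803);
# usable with ldapsearch, ldp.exe, python ldap3, etc.
DELEGATION_FILTER = (
    "(&(objectClass=user)"
    "(userAccountControl:1.2.840.113556.1.4.803:=524288))"
)

# 0x11000 = DONT_EXPIRE_PASSWORD + WORKSTATION_TRUST_ACCOUNT: no delegation bit
print(has_unconstrained_delegation(0x11000))            # False
# Same flags with the delegation bit set
print(has_unconstrained_delegation(0x11000 | 0x80000))  # True
```

On the Windows side, `Get-ADComputer -Filter {TrustedForDelegation -eq $true}` (and the same filter on `Get-ADUser`) gets you the same answer.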
reiloven@reddit
The gap gets real expensive when nobody on the security side can contextualize why a finding actually matters, like flagging every stale account at the same severity whether it's a dormant service account with domain admin or an intern who left 3 years ago with zero permissions. Having tooling that severity-scores findings and maps them to MITRE ATT&CK has honestly done more to get ops and security speaking the same language at my org than any amount of meetings.
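That context-aware scoring can be made concrete. A toy Python sketch (names, weights, and thresholds are all made up for illustration) that rates a stale-account finding by privilege and account type rather than flagging everything at one severity:

```python
from dataclasses import dataclass

@dataclass
class StaleAccount:
    name: str
    days_inactive: int
    is_service_account: bool
    privileged_groups: list  # e.g. ["Domain Admins"]

def severity(acct: StaleAccount) -> str:
    """Context-aware severity: privilege dominates, staleness alone is low risk."""
    score = 0
    score += 5 * len(acct.privileged_groups)       # privilege is the real risk
    score += 2 if acct.is_service_account else 0   # service accounts are rarely reviewed
    score += 1 if acct.days_inactive > 90 else 0
    if score >= 5:
        return "critical"
    if score >= 3:
        return "high"
    return "low"

# Dormant service account with domain admin vs. a long-gone intern with nothing
svc = StaleAccount("svc-backup", 400, True, ["Domain Admins"])
intern = StaleAccount("jdoe-intern", 1100, False, [])
print(severity(svc))     # critical
print(severity(intern))  # low
```

The exact weights don't matter; what matters is that the same "stale account" finding comes out at opposite ends of the scale.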
Fratm@reddit
We just have a know it all senior director who spits out buzzwords that he doesn't even understand.
EnragedMoose@reddit
I only hire security people with IT/ engineering backgrounds for this reason. It's nowhere near an entry level job. You need to be better than most of your peers.
ek00992@reddit
Good luck explaining this to every single school district in the country. Everyone wants sophomores in HS to get their Sec+. Most of these kids don’t even know how a directory structure works. Blind leading the blind.
MalwareDork@reddit
Don't have to explain diddly, because there's no reason to hire said people.
ek00992@reddit
Which is exactly what's going to happen to so many of these poor students, yeah. They're being fed a completely false impression of what it means to pursue a career in cybersecurity.
MalwareDork@reddit
At some point you just have to grow up and grab the reins of your own destiny. If I didn't do that in my early 20's, I'd still be some moron working at a fast food joint with a garbage Criminal Justice bachelor's when the economy shit the bed in 2008.
ek00992@reddit
I’m not saying you’re wrong, but a lot has changed over the past 20 years. I don’t think it’s fair to frame it based off your subjective experience. I don’t mean that as a slight in any way at all.
That said, there are plenty of students now who will do just as you did and succeed despite.
MalwareDork@reddit
No slight taken, but it's just one of those immutable things that goes as far back as time itself. At some point people will get sick of their current predicament and either choose to resign themselves to it or shove back hard. What the zoomers and gen alpha are going through right now is not much different than what it was for millennials in the 2000s.
To use (offensive in order to remain accurate) oldspeak, the mantra repeated to us in school and by our parents was "if you didn't have a Bachelor's degree, you would be picking cabbage with the rest of the illegals." A lot of us got burned, because if it wasn't a degree in the four cornerstones of education (Math/Science/English/Liberal Arts), it was worthless and a wasted life. Comical, since a huge path to success was IT, programming and trade jobs, while everything else like accounting, education and the science fields was oversaturated.
Now it's the same thing, except instead of the four cores of education, it's IT jobs and comp sci. Some of the younger millennials/older zoomers had the opportunity to get into crypto, social media influencing, and streaming, but that's been pinched off.
So, I don't know what the new paths of success will be, but at some point you just have to either realize it's bunk or try your damndest to claw to the top.
4SysAdmin@reddit
I’m a security analyst with a sysadmin background. It’s definitely a hard line to walk. To me there are normally three situations:
1) A practice has been taking place “forever” that’s just bad practice. For example, the entire IT department had local admin rights to every PC in the organization. Does a secretary need local admin to a finance PC? No. So that was changed and lots of people griped. Sorry not sorry.
2) A change is made and everyone agrees with it. This is the best case scenario. We discover an open security issue or vulnerability, communicate with everyone what it is and why it needs to be changed, and everyone agrees. As you can guess, this is the least likely scenario, but it’s nice when it happens.
3) Getting with the times. Sometimes we implement a change that few people think is really needed, but it needs to be done. The last case of this for us was successfully phishing the help desk via phone to reset a user’s password and MFA tokens. We were introducing a new password reset system that included user authentication via MFA before resetting the password. This means no more going into AD to reset a user’s password. People were furious but reluctantly agreed after we presented the findings of us phishing a user’s creds and performing a mock data exfil including sensitive documents.
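The control in (3) boils down to a hard gate in the reset flow. A minimal Python sketch of the idea (all helper names here are hypothetical): a helpdesk-initiated reset cannot proceed until the user, not the caller, completes an MFA challenge.

```python
class ResetDenied(Exception):
    pass

def reset_password(user: str, helpdesk_agent: str, mfa_challenge_passed: bool) -> str:
    """Helpdesk reset only proceeds after the *user* proves an MFA factor.

    This blocks the phone-phishing scenario: an attacker impersonating the
    user to the helpdesk cannot complete the user's MFA challenge."""
    if not mfa_challenge_passed:
        raise ResetDenied(f"{user}: MFA verification required before reset")
    return f"password reset for {user} by {helpdesk_agent}"

print(reset_password("jdoe", "agent1", True))   # proceeds
try:
    reset_password("jdoe", "agent2", False)      # attacker can't pass MFA
except ResetDenied as e:
    print("blocked:", e)
```

The point is structural: the agent's AD rights are irrelevant, because the reset path itself refuses to run without the user's factor.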
For us, our security team and infrastructure team (including sysadmins) are in the same team and report to the same person. This greatly helps, as we can normally be on the same page because we work so well together.
NiiWiiCamo@reddit
No. Having been both, I would never assume I knew more about the service, system or implementation than the people running the whole thing.
What I did tell them was mostly about findings that probably flew under the radar, new vulnerabilities and critical findings in general. I never told them how to run their systems, but I did tell them quite a few times that the current issues have to be addressed within one week, depending on the issue.
You just cannot serve your whole root via NFS without authentication. At least enable the host firewall and limit the access. Especially on the interface that lives on the public guest wifi.
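For the NFS case, the minimum viable hardening is usually scoping the export to the internal subnet and filtering the NFS port on the host firewall. A rough config sketch, assuming a Linux host, nftables with an existing `inet filter` table and `input` chain, and placeholder paths/addresses:

```
# /etc/exports -- export only what's needed, only to the internal subnet,
# with root squashing (placeholders: /srv/share, 10.0.10.0/24)
/srv/share 10.0.10.0/24(rw,sync,root_squash,no_subtree_check)

# Host firewall: allow NFS (2049) only from the internal subnet, drop it
# elsewhere -- so the guest-wifi interface never sees the export at all
nft add rule inet filter input ip saddr 10.0.10.0/24 tcp dport 2049 accept
nft add rule inet filter input tcp dport 2049 drop
```

Not a substitute for real authentication (Kerberized NFS), but it turns "whole root served to the guest wifi" into "one share, one subnet."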
lineskicat14@reddit
ISO teams and their demands are maybe the #1 factor in why I might switch careers/retire early.
They have been a huge issue at 3 companies in a row. They know just enough to be dangerous, but never enough to actually be helpful.
Coder3346@reddit
I know a company that paid to get the ISO 27001.
ek00992@reddit
What I can’t stand is the smug way they push back as you tell them their demands are disproportionate to the needs of the company, and potentially detrimental to the company’s ability to operate effectively.
I worked at a startup for a very basic web app. You’d think we were developing Fort Knox the way the security guy demanded things be run.
The intern does not need a fucking yubikey
lineskicat14@reddit
You nailed it. They think every ask is mission critical, and yet they rarely are. And the dilemma there is: whose call is it? It always seems to be their call, and IT has no say. My favorite analogy to use for them when I'm talking to my managers:
"They want to install the 10th lock on a door that's already been locked 9 times. No one is getting in that shouldn't be, but the people who DO need to get in are now getting blocked."
Again, these guys are dangerous, and I'm also starting to see them create work because they worry about layoffs or their department shrinking. And of course, that work isn't done by them, it's done by us in IT.
anonpf@reddit
Thankfully our ISSE has worked operations and I have worked cyber security. We both speak the same language and understand the impact policy and configurations have on the systems.
safrax@reddit
I'm also one of those people that has walked both sides of the fence. Unfortunately with our current security team they're all pure policy people. The amount of things they approve without understanding that the problem is resolvable if the offending team would just work with me is entirely too damned high.
Speeddymon@reddit
Same thing here! I'm on the DevOps side; the app teams don't come to me even though I actually know how to cleanly implement a fix for most of the policy-related issues we have exceptions for, without needing an exception at all.
pc_jangkrik@reddit
That's the best that could happen. Somebody who's already been in the trenches.
Unfortunately ours is less than useful. The manager promoted him because he was the only one not handling a specific system when the requirement came down that we needed a security person. And what's funnier is that this person was already known to have disciplinary issues. But the manager said he already said sorry, so it's okay.
Basic-Bobcat8469@reddit
smh fr its wild how some security peeps never even touched the systems they protect like wth
ChoiceLeg9596@reddit
smh rly feels like security folks just chillin while ops be out here hustlin, so awkward
Electrical-Staff0305@reddit
This is the way.
A security engineer with OPS or hands-on SI experience is worth their weight in gold. Any time we’ve ever had to go through an assessment or certification, it’s been light years easier when we’ve had at least one person on the security engineering team with strong hands-on skills, even if it wasn’t directly with our product (in the case of 3PAO).

Had one guy from a 3PAO who would simply ask for all of our documentation. ALL OF IT, including all manuals. 2-3 weeks later he’d have it all read and come back with questions about how to operate the device (if he didn’t have direct access to a lab). He wouldn’t make a single comment about security requirements until he knew how everything worked within the domain or how a specific product worked. He already had background as a netadmin, sysadmin, some DB admin, and even worked a few years as a developer. Compared to the rest of them, he was light years ahead, and often knew more than our own sysadmin and netadmin folks.

Within a month, he usually knew our systems better than we did, and that’s when he’d start writing stuff up. And he tended to be right (even if we didn’t like it). Dude even had an uncanny knack for understanding why a particular security finding was there. We might not even have legacy documentation on why a particular feature of one of our products was enabled. He’d make a guess, we’d hunt it down, and eventually we’d realize he guessed right and we needed to change our as-shipped configuration because of it.
Drove us nuts at first, but he saved our butts several times, and ended up saving us a bunch of time and money in the long run. Not often I use the word “genius”, but this dude absolutely was.
TL;DR: if you have a cybersecurity guy with strong hands-on skills, and the experience you need, use them! They’ll make life a lot easier for you!
fnordhole@reddit
Our security people speak moron.
They believe anything Google "AI Overview" says, despite most of it being hallucinations crafted to match their prompts.
I hate them.
anonpf@reddit
Yea that sucks. Certainly been in crappy situations like that. It’s been nice having input in who we hire.
Prudent_Cod_1494@reddit
No. If it were up to security people we’d shut down email and only communicate on pieces of paper that we burn after the other person reads them. There’s a healthy tension there that is good. Their job is to make us as secure as possible. My job is to make sure the business has the technology it needs to function. We typically meet in a happy middle. When we don’t, and I’m dealing with a real asshole, I get to win, because I’m the one making sure the tools that actually run the business are functioning, so it’s easy to get the execs to agree with me and force it.
Real-Patriot-1128@reddit
I work at a university that didn’t do so well on a cybersecurity audit. Now we have a cybersecurity initiative where the cybersecurity chief is the “word of God” as far as how we operate. Zero regard or tolerance for how systems work, but he has a mandate to get everything compliant in a ridiculous timeframe. IT is running on a skeleton crew and buried in work. It’s an understatement to say SNAFU. It’s almost comical.
Expensive_Finger_973@reddit
When they force me to do something that I tell them is a bad idea, I also tell them that the announcement for the change will include phrasing that it is being done due to new security standards that the security department has deemed required to implement.
More often than not, the realization that they will have to explain the fallout makes them rethink things.
loweakkk@reddit
Yes, this request comes from the CTO, who is also your N+3, so move on and go to the CAB. If you want I can even present it myself.
wonderwall879@reddit
Good ol' security with no operations background. Balancing between justifying their job and not getting fired. Most (not all) that I've met have been super insecure about whether they'll have a job if they don't make some spy-level changes to the corporate network environment.
In reality we just need them to stay up to date on the latest vulnerabilities, keep employees fresh and up to date on security training and social engineering attempts, and active monitoring on company traffic. That's 95% of what their daily tasks need to be. Yet they're always trying to make physical topology changes.
pc_jangkrik@reddit
They have no clue how things run and are just trying to be visible to management by suggesting changes. Won't be surprised if that change was spat out by AI.
Jofzar_@reddit
This is the way, 100% agree
MickCollins@reddit
No. But I work with the Cybersecurity director, as he knows I was cybersecurity for a very large chunk of my career so I know where he's coming from. He's got two people under him - one guy for his CISSP who's about as useful as a wet sock and another guy for firewalls who knows his shit but has the attention span of a gnat.
Public_Warthog3098@reddit (OP)
Cmon lol. No one's perfect. You gotta see people for their good traits 🤣
xSchizogenie@reddit
If there are no good traits, what you wanna say?
Public_Warthog3098@reddit (OP)
I'd say nothing 🤣🤣🤣
xSchizogenie@reddit
Exactly. I have a colleague like that too … I am responsible for our internal rollouts for everything, but for MFPs/printers and document scanners, we share the work. I take the printers, he does the scanners. And by god, if I did both alone again, we'd have no problems lol. I seriously wonder what he is doing all the time, heh, „have to do something with some scanners“.
Public_Warthog3098@reddit (OP)
Lol he's scanning himself 🤣🤣🤣🤣🤣
xSchizogenie@reddit
I wouldn’t be surprised about that at all.
MickCollins@reddit
Firewall guy is pretty good and will discuss stuff to determine issues. However, Wet Sock has said some EXTREMELY fucking stupid shit out loud. Like, he turned off older crypto algorithms, broke about eight applications doing it, and was like "wasn't me"... yeah, it obviously was. While I agree they had to go away, you set up change control with the application owners to figure out what to do; you don't just say "fuck you, users" and turn it off. Then during the CrowdStrike shit in July 2024, dude had the balls to say "it's not CrowdStrike, it doesn't work like that" when we already had six servers blue screening. After that one I stopped giving a fuck what he says. If it were up to me, he'd have his admin access taken away.
Public_Warthog3098@reddit (OP)
🤣🤣🤣 well then.
Plenty-Piccolo-4196@reddit
I worked 2 years as a sole internal IT admin with a devops team attached until I got offered infosec at the same company.
I don't even do suggestions before I plan them out in my head. Sometimes I straight up cancel them before they reach IT
TerrorsOfTheDark@reddit
I have never had a security person ask me to change something that increased security. Every single one of their requests has been to make things less secure from 'make sure everything on the internet can see this host' all the way through to 'run this untested binary on every system'. If I ever run across a security team concerned with security I might faint.
Public_Warthog3098@reddit (OP)
Whattttttt. How large is your org?
TerrorsOfTheDark@reddit
Over the last 20 years I've seen the same type of security folk at 50-person companies up to 3000-person companies. One day, I hope, a security person will ask me to make things more secure, but I expect to retire before it happens.
MonsterTruckCarpool@reddit
Ours do. For many critical vulnerabilities they dictate 48 hour remediation across all systems. This causes havoc from a business, stakeholder and operations standpoint.
Toasty_Grande@reddit
I don't think it's hard if the infrastructure and architecture are set up to accommodate the pace. If you have a blue-green deployment, then the patching can happen in one, be tested, and then swapped in via a load balancer. Additionally, with good automation, most patching can be done without much human intervention.
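The swap described above fits in a few lines. A toy Python model (the `LoadBalancer` class and helper names are illustrative, not any real LB API): patch the idle pool, smoke-test it, and only then flip traffic, so live traffic is never exposed to an untested patch.

```python
class LoadBalancer:
    """Toy stand-in for a real load balancer with two pools."""
    def __init__(self):
        self.live, self.idle = "blue", "green"
    def swap(self):
        self.live, self.idle = self.idle, self.live

def blue_green_patch(lb, apply_patch, smoke_test):
    """Patch the idle pool, verify it, then swap it live.

    If the smoke test fails, the live pool is untouched and nothing swaps."""
    apply_patch(lb.idle)
    if not smoke_test(lb.idle):
        raise RuntimeError(f"{lb.idle} failed smoke test; {lb.live} stays live")
    lb.swap()
    return lb.live

lb = LoadBalancer()
# Stub patch/test callables; real ones would run package updates and health checks
print(blue_green_patch(lb, lambda pool: None, lambda pool: True))  # green
```

With automation driving `apply_patch` and `smoke_test`, a 48-hour remediation window stops being an emergency and becomes a routine flip.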
MonsterTruckCarpool@reddit
lol testing. Been here 9 years and we have made zero movement on our testing posture.
Toasty_Grande@reddit
Testing posture isn't on SecOps. Guess who that is on?
MonsterTruckCarpool@reddit
Not me, I’m on the outside looking in.
Mrhiddenlotus@reddit
So why would you blame security for the havoc lmao
JwCS8pjrh3QBWfL@reddit
EXACTLY! So many of the complaints in here can just be boiled down to "I suck at my job"
MonsterTruckCarpool@reddit
lol I’m not in infrastructure
cddotdotslash@reddit
Ah yes, the threat actors usually wait until at least 49 hours before beginning their exploit campaigns.
Beginning_Ad1239@reddit
Is there something else in play that is causing that requirement, such as an insurance policy or local law? Availability is part of security as much as confidentiality.
MonsterTruckCarpool@reddit
We are a hospital so yes this may factor into that.
AugieKS@reddit
Ha, I am the security team! What am I gonna do, bully myself?
PappaFrost@reddit
Me too, and I find it's best to bully yourself when you're making eye contact in a mirror! LOL.
codewario@reddit
Yes and no. They dictate detection, secops standards, and bureaucratic stuff like exceptions management, but implementation is on our plate. They have some say in what happens but we have the autonomy to push back as well. In the end, we are usually pretty good at finding solutions that work for all parties.
StenEikrem@reddit
This thread describes a real problem, but it is a dysfunction, not a norm.
The majority of security professionals I've worked with understand that this is a collaboration. The adversarial version some of you are living with is a broken implementation of something that was designed to work differently.
For those stuck in the adversarial version, what follows is a walk-through of key concepts of how security governance actually works. Health warning: this is most useful in organisations with a formal security programme.
If your security team is one person with a CISSP and a god complex, your options are more limited.
Your company has an ISMS
Every organisation that manages information security has some form of an Information Security Management System (ISMS). ISO 27001 certification is the most visible expression of one, but the ISMS itself is just the system of policies, processes, roles, and controls that govern how security decisions get made. Most companies have one. Most sysadmins have never read it.
It matters because the ISMS is not the security team's personal fiefdom. It exists to support the business. And the frameworks behind it say some things that are directly useful to your situation.
Risk ownership follows resource ownership
This is the bit most sysadmins don't know, and most security teams don't advertise.
In any properly functioning ISMS, risk ownership sits with the person who owns the asset or the business process. Not the security team. The security team identifies risks, recommends controls, and provides assurance. They don't own the risk unless they own the system.
If you don't own any resources, you don't own any risk. If someone is trying to make you responsible for risk decisions on systems you don't control the budget, staffing, or architecture for, that is a governance failure. Not a security requirement.
Requirements and Controls are recommendations until the risk owner accepts them
Security frameworks don't say "implement everything the security team says". They say identify risks, evaluate treatment options, and make a decision. That decision belongs to the risk owner, typically the business unit or system owner, not the security team.
Tradeoffs are not a workaround. They are how the system is designed to function. Every organisation has gaps between what it would like to do, what frameworks recommend, and what resources allow. Closing those gaps or not is a (business) management decision. The security team provides input. The resource owner decides.
If your security team issues directives without going through this process, they are skipping the governance model their own frameworks describe.
What you can do with this
When security drops a requirement that doesn't work operationally, don't push back with "that's not practical". Use their own language.
As some of the responses indicate, the organisations where security and ops collaborate well aren't lucky. They are following the governance model as intended by frameworks and standards.
Hotshot55@reddit
Fuck off with AI posts
Public_Warthog3098@reddit (OP)
LOL. How'd you know?
_haha_oh_wow_@reddit
Kinda, yeah: There are limits to what they get to do sometimes but largely, yes, all the IT departments work together with security unless it's going to cause huge problems and the security issue is not critical.
Horsemeatburger@reddit
We don't really have those issues. Our security guys either have ops experience when they join us, or, if they don't, get a two-week introduction where they shadow ops people.
In addition, we give our ops people exposure to what the security folks deal with so they get a better understanding of the security side of things.
The results are that everyone is pushing in the same direction and conflicts between the ops side and the security are rare.
bitslammer@reddit
In a perfect world security should be telling you *what* needs to be done, but leave you room as to how exactly to meet those requirements. That's the model we use in our org.
Most of our security architecture team come from technical backgrounds so they understand what can be done, what's practical and what's reasonable, but sometimes even their hand is forced due to Sox, HIPAA, PCI, GDPR, DORA, VAIT, NIS, MAS or any of the other dozens of regulatory requirements we face operating in 50+ countries.
root-node@reddit
We listen, we nod our heads, then ignore most of their suggestions. A couple of times we have even laughed in their faces at how stupid their suggestions are.
On the rare occasion they have a good idea, we'll help them get it implemented.
TerrificVixen5693@reddit
Yeah they suck ass and have no idea how to balance best practices with legacy systems running XP.
Mrhiddenlotus@reddit
lol
TerrificVixen5693@reddit
When you inherit a plant running 20-year-old systems, you can’t really apply modern security countermeasures if the systems are too old for you to implement them.
Mrhiddenlotus@reddit
I mean, I don't know what your env looks like, but I hope things are strictly segmented or airgapped. What did they tell you to do?
TerrificVixen5693@reddit
Let me put it this way: for the last X years we’ve been told we’re replacing this system, which isn’t and can’t be air gapped, can’t be patched, has a vendor that’s closed for business, has zero support, has failing hardware, has zero controls outside of network segmentation, was put in 17 years before any of us worked here, and runs vendor defaults. But the upgrade never shows up due to budgetary reasons.
Info Sec: “FYI, this system is in the top 10 most vulnerable systems in the company, please have these totally impossible actions handled within 48 hours.”
The system administrators who know operational security of the environment: “No shit Sherlock. We’ve put in for a budget for this numerous times because of a promised upgrade that doesn’t ever show up. And hey, we have a policy exception with GRC for this system that was approved a couple months ago after I discovered this system and was able to sunset two of its brothers, but not it, because it’s vital to the operations here, and if it went down, we’d lose hundreds of thousands of dollars a day.”
pc_jangkrik@reddit
Yeah, explain everything and end it with "What do you suggest?"
Most of the time you'll get dead air, and if they have their caretaker, it will spout some corpo-speak to deflect the question.
Mrhiddenlotus@reddit
I've been in this exact situation. Sounds like management is going to be in for a rude awakening when it shits the bed, but it also sounds like you have your ass covered. "security" workers that are just vuln scanner monkeys give us all a bad name.
Fun_Chest_9662@reddit
You have to break it down in their language. A lot have never configured a system, and as a Linux admin I've found they've never touched anything but a Windows env, and when it comes to mission systems even less so. They just want their checklist, and some don't understand how RMF works. If one person in their group isn't understanding, try another person or learn different wording. For example, "group account" vs "role account": one requires a structure of approvals, the other just a log of who uses it at that time. Terminology matters. And explaining that their name will be tied to the risk of implementing a control that will stop all systems and bring down production, causing loss of revenue, i.e. they won't get a paycheck or will get fired because they forced the control, usually opens their eyes lol.
Blyatman95@reddit
My biggest headache is our “security guy”, who is a top bloke but we’re a tiny MSP with no need for dedicated security, and without being mean he’s “just” a tier 3 engineer who has his Sec+ and listens to dark net diaries.
I’m constantly trying to explain to him that the 4-employee mom-and-pop interior design company or whatever doesn’t need ISO 27001-tier segregation, permissions, Intune compliance policies, etc.
Are these things in a vacuum good? Yes. Do they improve security? Yes. But if you’re arguing with the owner of a business and making them input MFA 3 separate times just to login to their 365 they’re going to get annoyed and find someone else who just lets them get on with their work without applying corporate level security to a micro business.
Sharp_Animal_2708@reddit
the gap is real and it goes both ways. had a security team push a 90-day password rotation policy on service accounts last year that broke 3 integrations in prod because nobody asked ops what those accounts actually connected to.
the orgs that handle this well have a shared risk register where security defines the risk tolerance and ops picks the implementation path. the ones that don't just play hot potato with change requests until something breaks.
who owns the remediation timeline in your org when security flags something?
adept2051@reddit
They get to tell you policy, gates, and monitors. You pass them or ask for exclusions with good reason. Security doesn't get access except like any other user, for incident response. If they want to be ops, they get your gates, policy, and monitors and pass them, or GTFO.
zzxxzzxxzzxxzz@reddit
I have worked with both types. Those with a very limited understanding of how technology works, who can read a report and barely understand what it means, are basically compliance officers. Those people are hell to work with, especially if they are confident in their lack of knowledge.
And I've worked with security people with an operational background. It's like night and day. The second group understands the challenges and what it takes to get to the goal, most of the time, and is ready to support you. Not just send an email with an unrealistic deadline.
SpakysAlt@reddit
If it was up to the security team at my last job nothing would function at all and the company would then go out of business. It would be 100% secure though!
NeatRuin7406@reddit
the gap you're describing is real and it basically comes down to whether security was hired as a policy function or an engineering function. policy folks optimize for documentation and audit readiness. engineering folks have actually had to implement their own recommendations and know where the sharp edges are.
the specific failure mode i've seen most often: security mandates something with a hard deadline, ops pushes back with technical objections, security escalates to leadership framing it as "ops is blocking security". leadership, who also doesn't understand the technical constraints, sides with security because saying yes to security looks responsible. ops then gets stuck implementing something badly, it breaks things, and somehow that also becomes ops's fault.
the thing that actually helps is having even one person on the security team who has spent time in ops (or vice versa). that person can translate between both camps. they know when a policy requirement is a real security need versus a checkbox for a compliance report.
shared on-call helps too. if the security team's policy change causes a 2am incident, and they're on the rota for it, suddenly their recommendations get a lot more operationally realistic.
DisappointedSpectre@reddit
To some degree engineering/ops needs to treat security as a stakeholder as well.
I moved from sysadmin to security engineer and the complaint from the other side is that new projects and deployments get planned without security input from the start, and so security now has to be the "bad guy" and escalate the issue so that leadership dictates stricter requirements that should have been taken into account earlier.
In some regulated industries security also has a lot more organizational pull due to compliance requirements, which leads to more friction.
Usually the problems I see stem from various middle managers jockeying for position and playing politics more than they do from security or sysadmin people not understanding the other side.
Mrhiddenlotus@reddit
Thank you for this distinction. I've come to realize that to sysadmins, "security team" includes GRC, and it's not their fault. Companies will hire GRC people and call them "security analysts". These are not operational roles; they're just another function like legal and HR. An actual security analyst is a technical role.
JeopPrep@reddit
Have them provide a written explanation of what they expect to achieve with the new controls, how they would be implemented, the reasons they need them, and how they will test their effectiveness. If they can provide a well-thought-out case, it probably justifies a serious discussion.
illicITparameters@reddit
Y'all have separate security teams?
excitedsolutions@reddit
Unless a government with specific regulations dictates what can and can't be in place, cybersecurity presents risks (vulnerabilities) to the operations side of IT. Then IT ops needs to evaluate the intended impact, and if there are issues with addressing the risk by disabling the use of/patching/etc., then mitigation is used. I've seen businesses keep 20+ year old applications in production use by mitigating the 100-plus vulnerability issues they present by keeping them in a walled garden with no internet access. It's not the greatest thing, but it satisfies the cyber risk and keeps the business running. In many cases it is the cheapest approach, and since the entire situation with mitigations ends up on the risk register, it never loses focus and gets reviewed annually. After a period of the business re-affirming the risk and mitigation, it rises up the risk list and eventually gets addressed. Not pretty, but a pragmatic approach that allows cyber, IT and the business to all be satisfied, with the "right" thing being done eventually.
phoenix823@reddit
Your question is a false choice. Security vs. operations is a tradeoff between two risks: the risk of being exploited/hacked vs the risk of disruption to operations. Risk tradeoffs are handled by the senior management of the firm. If immediate patching to eliminate a zero-day is an operational problem, that is for management to fix: make sure there's a non-prod environment to test in, make sure there's CI/CD setup to validate any application changes, replace pets with cattle, ensure there's monitoring and observability in place for servers and platforms, etc etc etc.
IT people are smart enough to know zero-days need to get fixed ASAP, that they are a huge risk. InfoSec saying so isn't unreasonable. Not having the tech and processes in place to respond correctly IS unreasonable.
WorkLurkerThrowaway@reddit
Honestly no. I listen to their expertise but push back when needed. There hasn’t really ever been an issue between our teams. I think our goals are aligned.
BeatMastaD@reddit
Compliance requirements are never 'you can't do x'; it's 'if you do x, it has to be y.' But lots of people get lazy or don't understand, as you said, so they default to the former.
The way around it is to identify what the actual requirements are and find a solution that meets them while still allowing the needed functionality. Or get management buy-in and get certain risks accepted. Sometimes the company is willing to accept the risk given the cost of securing against it, and that's okay. It's their risk to accept.
glyndon@reddit
security should concern itself with 2 things:
- how vulnerable is the system
- how is access to the system controlled
That is, if you want to run a webserver on a basket of hamsters, that's your business. The security people should focus on testing its vulnerability to reasonably foreseeable threats, and then have a strong vote in how much or little of it gets exposed to those threats.
Ultimately the risk decision should be made by whoever stands to suffer if there is an incursion. After all, it's *their* risk to take, or avoid. Security's job is to inform, and act on approved (by those asset owners) policies.
(former CISO here)
UncleGurm@reddit
We push back on infosec asks. A lot. Figure out why they’re asking and find a compromise way to get it done. That’s the challenge of the job, tbh.
XInsomniacX06@reddit
I love it when they do and I make sure they’re transparent and have a call like that with directors. Then I give them the breakdown of how it could impact the business and offer a plan to do it in stages across multiple lower priority systems after auditing which systems might be impacted. Grow a spine and don’t let them boss you around, they are NOT operations.
Mrhiddenlotus@reddit
You must be confusing security teams with GRC teams. Extremely common with sysadmins I've found, even after being one.
XInsomniacX06@reddit
GRC gets the reports from security; where there isn't a GRC team, it's security thinking they are one. Regardless, you need to tell them their normal remediation dates aren't viable unless you want downtime. Once you communicate that the financial systems would be impacted, you'll be called in by that division (i.e. the moneymakers) to explain the risk. In my experience, none of them know what's going on; it's some 20-year-old sending you a report.
Mrhiddenlotus@reddit
Sounds like a hiring practice problem.
XInsomniacX06@reddit
You must be young. Yeah bud voice your opinion at your job instead of Reddit and see how far it gets you. You’ll suck the corporate cock and like it
odubco@reddit
They got certified and bypassed having to deal with the realities of how difficult it is to apply security principles without being overly invasive to worker productivity.
sroop1@reddit
In order to implement a policy to fulfill security's arbitrary requirements I have to build something to fulfill security's arbitrary requirements in which I have to implement a policy to fulfill security's arbitrary requirements.
Yeah, I wish my job was just monitoring Qualys and checking boxes.
Mrhiddenlotus@reddit
You're talking about GRC, not security.
sroop1@reddit
It's functionally the same team for us unfortunately.
Mrhiddenlotus@reddit
Super common, which really sucks for both operational sides. That's why so many people here think security teams are just vuln scanning monkeys.
odubco@reddit
compliance reporting is not the same
Mrhiddenlotus@reddit
This is so wrong it's funny. If your company is hiring such candidates, that's on them.
DehydratedButTired@reddit
Fortress mentality is best but no one can easily go in and out of a fortress to work.
elreyadr0k@reddit
This right here.
ThatBlinkingRedLight@reddit
Yes.
Source: I'm the director of both IT and Security. I had the awkward conversation with ownership where I explained that I'm the one who tells myself what to do.
Honestly
Take their opinions and find a middle ground but the documented policies are the truth and that’s all there is to it.
syberghost@reddit
Usually I'm either going to do what they ask, or help them ask for something better.
I have personally told them "no" a few times, and made it stick.
We're all one company, and ultimately one team.
Mrhiddenlotus@reddit
It will always be a collaborative effort in an ideal state. Neither systems or security should be exclusively in control of how systems are run. Both should be involved in the creation and implementation of policy, and there should be compromise for both sides. For every security person who doesn't know how systems work in the real world, there's a sysadmin who logs in with a global admin account for daily driving.
l0st1nP4r4d1ce@reddit
It's a balancing act. Usability can mean lower security friction; high security usually means high friction, which reduces accessibility.
The most secure device is one without power. Makes it a bit useless though.
elitexero@reddit
Yes I am at the whim of a team of 'security professionals' who blindly rely on nessus and tenable outputs as the gospel.
I regularly get tickets asking me to buy proper certificates for internal domains that are on TLDs that don't even exist and was once told to turn off HTTP GET and POST in our ingress load balancer because 'this could allow attackers to get in'... we make a SaaS product.
CammKelly@reddit
Security as part of the design phase is essential to stop the "we ran a scan, you implement it" bullshit.
sabre31@reddit
Hell no. Most of these security teams are clueless anyways and just follow what they read online. We do some stuff but push on a lot of stuff that would slow down the business.
zed0K@reddit
LOL. This. It's hilarious when they send me bleeping computer articles and that's their "threat intelligence research".
I've been reading bleeping computer daily, before half of them even knew what a computer was.
Shotokant@reddit
It pisses me off that they sell a security vulnerability scan service to customers. They run a scan, then pass all the fixes to us to implement and change-manage. It's frigging annoying that they make bank by just running a scanner and presenting the results.
hosalabad@reddit
Yeah we work together. That’s because both groups want to stay off the news.
DiscoSimulacrum@reddit
It's unfortunate that so many people want to get into cyber for the money. Many don't have the knowledge and skills, but manage to get the certs and memorize enough jargon to get hired.
It's good to have a strong security posture, but exceptions to policy are going to be needed. It's a constant process: work toward those goals and update them as the cyber landscape changes.
jhuseby@reddit
We’re dealing with those consequences now with our intune deployments. Not having rogue apps in the environment is great in theory, but if you don’t implement it correctly, it’s a fucking nightmare to maintain and Support.
jimmothyhendrix@reddit
Nah, the good thing about being a gangster is that you can just mog your other IT nerds. If the CEO says make it work, you make it work. Just act with this energy
RestinRIP1990@reddit
Yeah, because I'm also the security guy, even though I hate myself for it sometimes.
donewithitfirst@reddit
Just document what they want to do and your approval/disapproval. As long as they are not on the hook they should be fine. Don’t discount them but discuss why you are making the decision you are making.
Own-Slide-3171@reddit
The security team and the systems team are one and the same
CraigAT@reddit
Ideally, it should be a discussion. They tell you what they would like to do or achieve (some teams will be prescriptive enough to tell you exactly how they intend to do that). For your part, you get to point out any issues or alternative suggestions. Then you have a discussion about the best way forward.
Most of the time, we will have to relent and implement some or all of their suggestions - because protecting our systems and data, usually beats user inconvenience.
IWantsToBelieve@reddit
We just make sure we have operational experts in our security team. We control the SOE and all software introduced into the environment and govern access control. Doesn't mean we do everything but security are at the change gate and maintain overwatch.
Ship is tight. It's impossible to keep things secure when ops runs rogue without both teams working side by side. Just like it's impossible to maintain a good User experience if security holds all the power.
skotman01@reddit
This is probably my number one pet peeve being in IT in general these days. It used to be truckers who’d go get their MCSE but couldn’t turn a pc on, now it’s college kids graduating with degrees in MIS or Cyber who can’t think critically.
I was in cyber and hated having to teach basic things to people who should know better, both on the cyber side and the app dev side.
evolutionxtinct@reddit
Can I interject on this? How do you feel about someone who was in ops and went to security…
I feel like I try to keep security low-friction, but people still complain.
Daphoid@reddit
As a lifelong operations guy who went to security ops, we do - and a few of us on the team in senior roles have a ton of ops experience and know the caveats/pitfalls/issues/things to work around, so we can stay practically minded while still pushing infra teams to be secure.
Abracadaver14@reddit
I'm on the technical soc team, so I'm involved in every decision made. We usually find a workable middle ground.
zantehood@reddit
We follow their general guidelines but do make risk-assessed exceptions.
Toasty_Grande@reddit
If there is a disconnect between the two teams, the first step is to get together and work out the perception issues. The security team may be assuming you have automation and other systems that make patching easy, while the ops guys may still be living in a manual, single-server world with no dev/test or blue/green environments.
Also, make sure you're working with the SecOps people to ensure they're doing environmental scoring, so they aren't making you patch a 9.x that, with your mitigations in place, is really a 4.x and isn't as urgent.
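The "9.x becomes a 4.x" idea can be sketched in code. This is a toy illustration only, not the official CVSS 3.1 environmental formula (a real environmental score comes from the CVSS calculator with the Modified Base metrics filled in); the control names and weights below are made up to show how compensating controls knock down effective severity for triage.

```python
# Toy severity adjustment: scale a base score down per compensating
# control, then bucket the result for triage. Weights are illustrative
# assumptions, NOT CVSS math.

def effective_severity(base_score: float, mitigations: list[str]) -> float:
    """Apply a crude multiplier for each compensating control in place."""
    weights = {
        "no_internet_access": 0.55,            # walled garden / air gap
        "network_segmentation": 0.75,
        "waf_in_front": 0.85,
        "least_privilege_service_account": 0.90,
    }
    score = base_score
    for m in mitigations:
        score *= weights.get(m, 1.0)           # unknown controls change nothing
    return round(min(score, 10.0), 1)

def bucket(score: float) -> str:
    """Map a numeric score to the usual qualitative severity bands."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

# A 9.8 finding on a box in a walled garden with segmentation
adjusted = effective_severity(9.8, ["no_internet_access", "network_segmentation"])
print(adjusted, bucket(adjusted))  # 4.0 medium
```

The point isn't the arithmetic; it's that the ops team's mitigations belong in the scoring conversation before remediation deadlines get set.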
onceyougobalck@reddit
It depends on what type of network they’re on. If it’s the enterprise network, absolutely. Closed-restricted networks are a little more flexible. R&D done on open net and then run through vuln remediation before going on the CRN.
endnote@reddit
I've had to advocate for my stuff a lot. I work with Operational Technology, Building Management Systems, and some other oddball stuff where you simply can't blanket-apply standards across the board. However, we spent a lot of time, both on our own and with them, putting compensating controls in place, building out documentation, and generally being security-minded in the first place. Our systems are stricter than most out there.
It took a lot of educating and talking through things to get here though.
So yes and no. We meet requirements they have to the best of our ability but we also can't meet others due to the nature of the technology we work with.
pdp10@reddit
One way to deal with it, is to make one team responsible for a given subsystem, end-to-end. There are still "communities of practice" that specialize in infosec or networking or QA, but they're not silos that can toss real problems to another team to solve.
Suitable_Mix8553@reddit
Our security team usually bothers the application and dev teams instead: there's a lot of custom code that was never security-focused, and when it's migrated the same security issues follow it after the fact. We just give them a bare system that's already secured and tune the kernel for their application needs.