I'm looking into using a patch management solution - What are the risks?
Posted by Kukken2r@reddit | sysadmin | View on Reddit | 28 comments
Hello!
We have around 20 Windows servers around the city, and I have been manually checking in on them, doing updates, and checking things like disk space.
I have seen both Action1's Free-tier and level.io and it all seems pretty effective compared to how I have done it.
But what are the risks? Are they worth it in my scenario? It's not governmental or health-related, and they're mostly domain controllers, but I assume that Action1 or Level would also act as a single point of entry to all of these servers once the agents were installed.
What if they were to get hacked?
What are the things I have to consider apart from activating MFA and only allowing logins from a whitelisted IP?
These are all SMBs (and so are we), so I am new to this.
Thank you!
- A junior :- )
Superb_Lychee_5150@reddit
You’re thinking about the right risks. The main things to consider are agent security, least privilege access, and network segmentation. Even if MFA and IP whitelisting are enabled, you should assume the management platform is a high value target and treat it like a privileged jump point. Limiting what it can actually execute and monitoring all changes is just as important as securing login access.
Such_Rhubarb8095@reddit
Totally get wanting to automate patching instead of doing all those manual check-ins. The biggest risks: if the SaaS provider gets breached, or if someone gets hold of your admin creds, they basically get access to all managed servers. MFA is a must, but also watch where the agents call home and what permissions they use. Check how Action1 and Level.io have handled past security issues, and look at alternatives like Atera, which puts a lot into securing both the agent and the portal. Some folks layer monitoring on top just to catch weird activity fast.
DaineLusian@reddit
Yeah, for Linux servers Ansible's my go-to for automating patches across multiple boxes; it's scriptable and free.
I was manually patching a bunch of VPSes before, and it was a nightmare keeping up.
Xcloud sorted it for me on my setup; it handles the updates without much fuss.
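For reference, a minimal Ansible playbook along those lines might look like this (a sketch, not anyone's actual setup: Debian/Ubuntu hosts are assumed, the modules are real, but the inventory group and policy are illustrative):

```yaml
- hosts: all
  become: true
  tasks:
    - name: Apply all pending apt updates
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_flag

    - name: Reboot if required
      ansible.builtin.reboot:
      when: reboot_flag.stat.exists
```

Run it on a schedule (cron, AWX, etc.) and you've replaced the manual check-in loop, though you still want monitoring around it.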
pavin_v@reddit
Try www.patchifi.com Or feel free to text me
GeneMoody-Action1@reddit
In response to your deleted message: again, it was just a warning. It is right there in the subreddit rules; I was just cluing you in. If you want to operate here, you have to follow them, as all of us vendors do.
And it is why your comment karma is in the negative and your messages are deleted. (Not by me; I have no power here, I just stay inside the lines.)
I meant no disrespect at all. I was attempting to be helpful, since you did not seem to have read them, or if you had, to have understood them.
Again, I suggest you do read them if you have not, and I wish you well.
GeneMoody-Action1@reddit
Read the rules, man. Many vendors operate in this space, but there are rules for a reason. This is not how you do it. Spam-and-drop SEO link posts will get you booted fast. Be helpful, stay in context, add context where relevant. And good luck.
It is why you are being downvoted.
pavin_v@reddit
Do you own Reddit? Didn't know that. 🤣 This is just a conversation, my friend. What are you getting from being the oversmart one in the class? Hahaha
MartinDamged@reddit
"What if they were to get hacked?"
This is probably the thing you should consider the most.
Cloud patch management offerings are great these days. Very easy, cost effective, and just really nice!
But your concern is valid.
Do you have backup resources "airgapped" from this, in case a SolarWinds-like supply chain hack should happen again? Can you get back up and running from restores if you're compromised through a 3rd-party tool that has full access to all your servers? Can you restore your entire environment from backups fast enough that the company doesn't bleed way more money than what you saved on the nice patching solution?
What about possible compliance outcomes if a full breach happens through a tool like this?
If you are in a regulated business, this can end up being expensive real fast.
We are in an industry where the above risks are too high vs. the benefit of nice, cheap cloud patching.
So we prefer solutions that can be hosted internally. But it's getting harder and harder to find good products that fit. Most solutions have turned cloud-only in the last 5 years.
MartinDamged@reddit
On a side note...
We run only around 50 Windows servers on-prem. Out of those, we have 3 that do NOT have automatic Windows updates and reboots scheduled to happen automatically during the night about one or two weeks after Microsoft's Patch Tuesday.
So I really don't understand why you're doing this manually every month.
And start monitoring; it is everything!
Get something that monitors your servers and sends alerts ASAP.
It does not need to be a costly affair; there's loads of open-source software to get you started.
GeneMoody-Action1@reddit
SecurityOnion and Wazuh are both excellent solutions. And free to use.
SecureNarwhal@reddit
Supply chain attacks would be your biggest risk, since you're having a third party update your servers.
If you want to avoid third-party tools, Microsoft does offer WSUS, and you can use SCCM/Configuration Manager with WSUS as well. Spin up some servers (1 upstream primary, a few downstream replicas) and have them handle Windows updates for your Windows servers and endpoints.
jma89@reddit
Just a friendly reminder that WSUS is deprecated, and while they've stated things won't change for Server 2025, there's no guarantee it'll work for Server Next and beyond.
GeneMoody-Action1@reddit
100% agree
Not sure why people do not get this and keep suggesting it. It was a solution, and technically still is a partial one, but it never was ideal. MS is not in the business of maintaining 20+ year old software that competes with its new cash-cow alternatives. Expect it to dissolve far faster than people think or want to admit.
And since the far larger initial attack vector footprint is in third-party apps vs. the core OS, WSUS does nothing for that outside some janky third-party workarounds.
Many classic and antique things retain and even gain value over time; MS management suites are not one of those things.
Relevant-Idea2298@reddit
Arc Update Manager works really well in my experience and is a nice replacement.
Relevant-Idea2298@reddit
If OP decides to go first party I’d definitely recommend jumping straight to Azure Arc vs. the Config Mgr / WSUS route.
Arc Update Manager works pretty well and is way less administrative overhead than SCCM/WSUS.
ImmediateRelation203@reddit
yeah so coming from my perspective as someone currently doing pentesting and previously working as a soc analyst and security engineer, tools like that can definitely make life easier compared to manually logging into 20 servers to patch and check things.
the main thing you already identified is correct though. platforms like Action1 or Level.io basically become a central control plane for your servers. if an attacker compromises that console or the account that manages it, they potentially gain the same access the tool has. in a lot of environments that means remote command execution, patch deployment, software installs, and sometimes shell access across every machine with the agent.
so the risk is not really the tool itself. the risk is that you are concentrating privilege and access in one place.
that said for a small environment with around 20 windows servers the operational benefit usually outweighs the risk if you set it up properly. most smb environments already have worse exposure from manual admin access or reused credentials.
things i would think about beyond just enabling mfa and ip restrictions
first privilege separation. do not run everything with a single global admin account. create separate accounts for daily management vs full administrative control if the platform allows it.
second protect domain controllers more carefully. you mentioned most of these are domain controllers which makes them the highest value targets in the network. if possible restrict what commands or scripts can run against them from the rmm tool or at least monitor that activity heavily.
third audit logging. make sure the platform logs actions like script execution, remote sessions, patch deployments, and user logins. from my old soc analyst days this is one of the first places we check during investigations. you want logs that clearly show who did what and when.
fourth api tokens and integrations. some rmm platforms allow api keys or automation hooks. those often get forgotten and can become a quiet entry point if they are leaked.
fifth agent trust model. remember that if the management platform pushes something malicious the agents will usually trust it automatically. that is why protecting the admin console and accounts is critical.
sixth vendor security posture. look into things like how they handle authentication, where their infrastructure is hosted, and whether they have had past security incidents. any cloud management platform is part of your attack surface.
from the pentesting side i will say attackers love rmm tools because they give them instant scale once compromised. but that does not mean you should avoid them. it just means you treat them like a tier zero system similar to active directory.
the reality is automation tools like these are often safer than manual patching because systems actually get updated regularly.
GeneMoody-Action1@reddit
I just wrote an article on how the fastest breakout time to date was recorded in '26 already, at 27 seconds. If you are not automating, you are losing.
And this... "from the pentesting side i will say attackers love RMM tools because they give them instant scale once compromised. but that does not mean you should avoid them. it just means you treat them like a tier zero system similar to active directory."
1000% agree. The risk of having them is no different than many other management tools, and it is far outweighed by the consequences of not having them. All day, and getting worse tomorrow, then the day after, so on and so forth; count on it.
Reasonable_Host_5004@reddit
I run Windows updates via PowerShell and Task Scheduler on my Windows servers:
https://www.powershellgallery.com/packages/pswindowsupdate/2.2.1.5
Most third-party software that is patched via Action1 shouldn't be installed on a server anyway.
You can combine the PowerShell scripts with healthchecks.io,
so you will get notifications if something goes wrong.
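A rough sketch of that combination, assuming the PSWindowsUpdate module is installed; the check UUID is a placeholder you'd replace with your own, and healthchecks.io's `/start` and `/fail` ping endpoints are part of its documented API:

```powershell
# Placeholder check URL - substitute your real healthchecks.io UUID.
$hc = "https://hc-ping.com/your-check-uuid"
try {
    Invoke-RestMethod "$hc/start" | Out-Null           # signal "job started"
    Import-Module PSWindowsUpdate
    Get-WindowsUpdate -AcceptAll -Install -AutoReboot  # apply pending updates
    Invoke-RestMethod $hc | Out-Null                   # success ping
} catch {
    Invoke-RestMethod "$hc/fail" | Out-Null            # failure ping -> alert
}
```

Schedule it with Task Scheduler; if the success ping never arrives, healthchecks.io emails you.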
Disk space etc. is more a job for a monitoring system, not for patch management.
We run the Action1 free tier in our company due to cost savings, but only on our clients and not on servers.
fahque@reddit
Action1 does windows updates too.
Reasonable_Host_5004@reddit
Yes, but it uses the Windows Update channel.
That's why I am suggesting: if you only need Windows updates, use the PowerShell method (or even group policies). No need for third-party software to be installed on your servers.
GeneMoody-Action1@reddit
While you are correct that we use the Windows Update channel to source the updates, we provide WAY more control. Our patch management solution allows for deployment in rings, live accountability from automation to verification, and enterprise-wide statistics. You could use PowerShell for that, but by the time you wrapped all the control and compliance points around it that proper modern management requires, you would have invented a square wheel while we already have a round one; and a free one for 200 endpoints or fewer, completely free forever, not a trial, and as fully functional as the system running on millions of other endpoints.
Most admins these days barely have time to manage the patching itself, much less the construction, maintenance, and documentation (in case you die, change jobs, get laid off, etc.) that a real solution needs.
And the multitude of products, even those made BY Microsoft specifically to do these things, understand this.
GeneMoody-Action1@reddit
Far far less than the risks of not having one?
Patching has changed. It is not what we old sysadmins knew it as, or even what the younger ones who have been in it longer than 5 or so years knew.
Modern patching requires live intelligence, the ability to take immediate action enterprise-wide, and much more.
Remember, there was a time when AV on a system was considered optional, or as-needed, and most computers did not have it. Now it would be considered insanity to not have EDR and live scanning. Patching has reached the same threat level. Why? Because it is technically the same issue: the flaws that were once destructive annoyances are now weapons. The criminal and state-sponsored actors of today realized that the value was not so much in random self-propagating destruction as in targeted intent.
As a result, the issue is now worse than it was as a virus; the same level of caution and protection must now be applied, and then more.
MikeWalters-Action1@reddit
Here is our public roadmap link to this Agent Takeover Prevention feature: https://roadmap.action1.com/250
Jason-Kikta-Automox@reddit
Full disclosure: I work at Automox, so I'm in this space every day. Not here to pitch, just want to share some ideas.
Others have mentioned the supply chain risk, and it's real, but I'd weigh it against the risk you've already inherently assumed. Manually patching 20 servers spread across a city means inconsistent timing, things get missed, and no audit trail. That's a much more common breach path than a SolarWinds-style vendor compromise. Doesn't mean you shouldn't think about it, just keep it in proportion.
The actual safety net: The real answer to "what if they get hacked?" is the same as "what if anything goes wrong?" Tested, immutable/air-gapped offsite backups with a documented recovery plan. If you can rebuild your environment from scratch, you've bounded your worst case regardless of what vendor you use.
MFA and IP allowlisting are solid starts. Also look at role-based access (not everyone needs admin), audit logging, and session timeouts.
You're asking the right questions for someone early in their career. Most people don't think about this stuff until after something breaks. Remember, the job is not to avoid risk (or we'd turn all this stuff off), it is to balance risk.
elkshelldorado@reddit
The main risk is that the patch management tool becomes a central access point to all your servers. If that account or platform gets compromised, an attacker could potentially push changes everywhere. With MFA, IP restrictions, and proper permissions, the benefits usually outweigh the risks for managing multiple servers.
devloz1996@reddit
If you want cloud patch management, and this is your concern, then you probably want a behavior-based XDR watching it. I think Action1 has something about addressing potential HQ hack on their roadmap, but I'm not sure about specifics.
Ultimately, it all comes down to risk management. Every tool in your belt is a risk you accept. Pocket knife could open up on its own and prick you, power bank could explode... it's basically the same thing.
You may also find that such risk is acceptable for one subset of endpoints, while being unacceptable on another. In such a case, you still benefit from having a benchmark to compare with on your "manual" group. For example, my company is happy with it in the office, but no way in hell it goes down to factory level.
Kind_Philosophy4832@reddit
The risk of a compromised patch management or RMM tool is always there with cloud products. Going fully on-premises can reduce that risk (as long as you keep everything internal and have no auto-updates for the application itself). But looking at it from a normal POV, patch management will help you stay compliant. You probably have to make sure to define specific update rings: for example, not updating all your servers right away after Microsoft releases a new update, when that update is not security-critical. You may have heard about the classic Patch Tuesday nightmares. :D
Afaik people like Action1 a lot
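The ring idea can be sketched in a few lines; a hypothetical example (ring names and week offsets are made up for illustration, and real tools let you configure this in the console):

```python
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Second Tuesday of the month, i.e. Microsoft's Patch Tuesday."""
    first = date(year, month, 1)
    # weekday(): Monday=0, Tuesday=1; days until the first Tuesday,
    # then one more week to reach the second one.
    offset = (1 - first.weekday()) % 7
    return first + timedelta(days=offset + 7)

def ring_schedule(rings: list[str], year: int, month: int) -> dict[str, date]:
    """Map each ring to its patch date: ring N patches N weeks after
    Patch Tuesday, starting the day after (ring 0 = pilot)."""
    base = patch_tuesday(year, month)
    return {name: base + timedelta(days=1 + 7 * i)
            for i, name in enumerate(rings)}
```

So a `["pilot", "general", "dc"]` layout patches a small pilot group first and your domain controllers last, after the update has soaked for two weeks.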