Major Mayhem After Microsoft Patch—130 Servers Down, 360+ BSOD! Anyone Else?
Posted by Technical_Syrup_9525@reddit | sysadmin | View on Reddit | 333 comments
Hey everyone,
I’m hoping someone out there can relate to what we’re going through. We just rolled out the latest Microsoft patches, and it’s been a complete disaster. Right now, we have 130 servers knocked offline and over 360 systems that keep hitting BSOD. Our team has been working around the clock, and morale is taking a beating.
To make matters worse, we checked in with both of our security vendors—SentinelOne and Fortinet—and they’re all pointing fingers back at the Microsoft patches. We’ve reached out to Microsoft support, but so far, we haven’t had much luck getting a solid workaround or a firm fix.
Is anyone else experiencing this level of chaos? If so, have you found any way to stabilize things or discovered an official patch from Microsoft? We’re all running on fumes trying to keep things afloat, and any advice (or moral support) would be hugely appreciated.
Thanks for reading, and hang in there if you’re dealing with the same nightmare. Hoping we all catch a break soon!
ForeignAwareness7040@reddit
What OS do you guys have on the servers? W2016? W2019? W2022? Just to be clear on the environment
Technical_Syrup_9525@reddit (OP)
2016,2019 and 2022. We can't find any commonality between manufacturers or environment. These are deployed across different environments. We waited to deplore and tested in our internal environment and we were not affected on the server side. We did have an issue with a Dell PC but thought we had cleared it.
HauntingReddit88@reddit
You must have something installed on all of them... AV? A GPO?
OkMethod709@reddit
Or another driver?
HauntingReddit88@reddit
Don’t think so, different systems from different manufacturers I think I saw
OkMethod709@reddit
CPUs the same ?
jcarroll11@reddit
This, we have been on the Dec patches since they came out, with no issues. Just installed Jan and no issues yet.
That's across 2016, 2019, 2022 as well
omfgbrb@reddit
I know you're freaking and it's just a typo; but damn that's funny!
tastyratz@reddit
May be worth updating the main post with information scattered across the thread if you can so it's easier to follow.
GgSgt@reddit
Do you guys test patches? If not, I suggest you implement that process. We push patches to a non-production environment first when they get released. Then non-business-critical systems, then everyone else. This has saved us from this exact scenario.
I've told my SOC team that we are never, ever, going to push patches from any vendor within 7 days of their release unless it's something for a critical zero day.
I'm actually a little shocked how few orgs have a patch deployment strategy that isn't "let systems auto update themselves whenever".
ThatWylieC0y0te@reddit
Thank god I don’t have to worry about this on my server 2003. Going back to bed yall have a great night!
dreamfin@reddit
I like to live dangerously with my Server 2008 R2.
Itguy1252@reddit
R1 is better
ourlastchancefortea@reddit
Server edition is overrated. We run our business on XP.
quasides@reddit
and there is this bakery running their POS on a C64 in 2025
IdiosyncraticBond@reddit
LOAD "*",8,1
POKE 53280, 6
SYS 64738
vdragonmpc@reddit
I miss my Commodore with the 1541 Disk Drives. You were baller if you had 2. You were a loser if you just had the tape drive.
ErikTheEngineer@reddit
I only had the VIC-20, and only got the tape drive later....what's below loser? 😂
Life's going to be very different when we're 85 in nursing homes...instead of listening to the stories of the neighborhood kids playing stickball and going to Feldman's Candy Shop somewhere in Brooklyn, we're going to be a bunch of drooling old farts making modem sounds and playing Atari 2600/Intellivision/NES games.
Stonewalled9999@reddit
I felt that in my soul
vdragonmpc@reddit
Do you remember the kid up the street with the cool flightstick joystick and we were using the one from the atari?
xraygun2014@reddit
<cries_in_discount_centipede>
vdragonmpc@reddit
"Dungeon of the algrebra dragons" was the cassette of doom
Amazon was my first Disk drive game. I still have it somewhere.
Olleye@reddit
„Press play on tape!“
ourlastchancefortea@reddit
That's probably safer than XP :D
Stonewalled9999@reddit
XP home right? no pesky Pro splash screen?
Massive-Cell7834@reddit
I run mine on Lindows.
ProgressBartender@reddit
Do you work in my server room? /s
ourlastchancefortea@reddit
In the grim dark of the 21st millennium, there is no hope in any server room. Only suffering.
babywhiz@reddit
haha that reminded me, the last “tech boss” we had (2005-07) told the owner he could save money by building servers from scratch. We were in the process of moving our ERP code from vb5/access to .net/sql.
He bought underpowered components, and slapped a Windows XP license on it for 60 users. Needless to say, only 10 people could work at a time.
ThatWylieC0y0te@reddit
A fine system that is as well, at least it isn’t 2012 🤢
JohnGillnitz@reddit
Hah! Ours are 2012R2! So we got that going for us.
ThatWylieC0y0te@reddit
Powerful-Pea8970@reddit
She's sweet bud.
technobrendo@reddit
I just logged into your server and can confirm, you're all good. Go back to bed, your infra is safe with me
el_chad_67@reddit
Surprise sysadmins protecting the network 🥰
youreprobablyright@reddit
Reminds me of a Darknet Diaries episode where a company found a bitcoin miner on a wind turbine control system that they manage, but the guy running the miner was doing a better job of patching & maintaining the system than the company's sysadmins (in order to keep the miner healthy). They left the access & miner in place for a while if I recall correctly.
Sirbo311@reddit
That was a fun anecdote. I love that podcast.
8-16_account@reddit
Too bad about the massive nosedive it has taken lately. It's like a complete 360 in terms of quality
GSUBass05@reddit
180?
omfgbrb@reddit
eh, 90, 180, 270, 360, whatever it takes...
Sorry for being obtuse...
H1king33k@reddit
Unexpected Mr. Mom reference.
Nice.
OptimoP@reddit
Acute response.
SpaceCptWinters@reddit
Thank you for your service
8-16_account@reddit
No, they moonwalk away
GSUBass05@reddit
the best way
fatcakesabz@reddit
Yeah, it's become really bad in the last year. I suppose there are only so many cool stories to tell. My favourites are the red teamers, particularly the bank guy that did the wrong bank.
UltraEngine60@reddit
Yeah I keep meaning to find a podcast that has actual technical explanations for attacks. Instead of shit like "they used DNS, which is like a phone book for domain names"
technobrendo@reddit
That's a tricky proposition, it's hard to get mass appeal with a highly technical discussion like that. I'd listen to it, but I don't suppose it would be as popular as DND.
williamp114@reddit
I mean hey, if it's ethical for FAANG companies to use your personal information (and identify you through covert methods) for the sole purpose of selling it to advertisers, in exchange for free services where you are the product, then this miner is no better :-)
Pyrostasis@reddit
You beat me to it you bastard, take my upvote.
Longjumping_Law133@reddit
Which episode please?
youreprobablyright@reddit
https://darknetdiaries.com/episode/22/
ka0ttic@reddit
Episode 22
quasides@reddit
you boost your security you become a challenge for hackerman to breach it
you do nothing for 2 decades you become a challenge for hackerman to save it
00notmyrealname00@reddit
Like a reverse Harvey Dent!
AlfaHotelWhiskey@reddit
You get a pen test! You get a pen test! You get a pen test! You get…
Opening_Career_9869@reddit
Could you look mine over next pls? K thx bye, I stopped caring 15 years ago
Freakishly_Tall@reddit
You guys ever cared?
That's not fair, I guess. I think I cared once. But the year started with a "1." Then being on the interwebs went from something only nerds did and everyone else mocked, to something everyone wanted and, well, here we are.
StandardClass3851@reddit
can you log into mine also? Thanks
dadoftheclan@reddit
"It's now safe to shut down your computer"
Dingus_Khaaan@reddit
The hero we didn’t know we needed
TheJesusGuy@reddit
God bless you looking out for the community
Chance_Reflection_39@reddit
Exactly
ThatWylieC0y0te@reddit
lol see I told you, wasted your time for nothing
chazza7@reddit
Can’t patch your server if there are no new patches available
Bad_Idea_Hat@reddit
Every time I see this post, I go to the upgrade path chart, print it out, and then burn the printout.
ThatWylieC0y0te@reddit
You actually use one of those printers… disgusting 🤢
Bad_Idea_Hat@reddit
This is my one print a month. Last month was a Spongebob meme. Give me a pass.
ThatWylieC0y0te@reddit
I dunno man, one print a month soooounds like a lot to me
mikeblas@reddit
Technical debt never sleeps.
ThatWylieC0y0te@reddit
The server? Of course not, it has 7 years uptime lol. But me? Of course I do, I already completed all the challenges of upgrading it. See, that's why they don't release any more upgrades, they perfected it 😉
u71462@reddit
Don't touch it, it's working. Never touch running and working systems. Not even if it is a pensioner.
BeagleBackRibs@reddit
True as400 stories
darkzama@reddit
Bruh... this is the truth...
Ark161@reddit
This is why we are usually on the later end of the patching cycles and have QA/canary boxes. Ever since Microsoft's royal fuck up with printers a few years ago, I do not abide their antics. I'm waiting for the day they do something stupid with BitLocker, nuke all the keys, everyone loses their data, and Microsoft is just like "WHOOPSIES!!". It is complete negligence on their part that they demand signed drivers and force updates on people, but have the QA quality of Temu.
Tuz@reddit
No, because I stopped using Microsoft years ago lol
adamixa1@reddit
You deployed the update on Friday?
codinginacrown@reddit
A holiday weekend on top of that...
davix500@reddit
And a holiday on Monday for most of the U.S.
adamixa1@reddit
so rip i guess
soiledhalo@reddit
Doesn't it make sense to have this level of crap on a weekend as opposed to everyone coming to work on Thursday only to realize that everything is hosed? Yes, your time is screwed, but you didn't take down everyone's productivity with it.
adamixa1@reddit
For me this is a disaster-level incident. You can expect that if I update this server and something goes wrong, I have to fix it by troubleshooting. Unlike changes to network equipment, where you can just roll back to the previous settings.
The issue with pushing an untested or corrupted package during the weekend is that everyone will be at home with their phone switched off when you need some help. Your vendor may be working, but their vendor?
In terms of productivity, yes, but not everyone will use the services provided by the server, so they can do other things. It's not a network issue where everyone suffers.
Nightkillian@reddit
Maybe they don’t like the weekend 🤷🏻♂️
SpangledFarfalle@reddit
The long weekend.
Pork_Bastard@reddit
fucking madness
Det_23324@reddit
who needs personal time. am i right?
Stoobers@reddit
Read Only Friday ftw
PsychoticEvil@reddit
We were seeing UNMOUNTABLE_BOOT_VOLUME BSODs on servers a month or two ago that turned out to be a conflict between the newer versions of SentinelOne and StorageCraft.
Technical_Syrup_9525@reddit (OP)
We believe it may be what you are referencing
PsychoticEvil@reddit
If it is, on most servers we were able to boot into safe mode and uninstall SentinelOne. After a reboot, the server should boot normally.
We did have several servers on 2016 that never gave us the option for safe mode. In that scenario, we had to restore to an image from prior to the SentinelOne upgrade, or wipe and reinstall and then restore the file structure.
This happened right after we moved to a new backup product, so our final fix was removing StorageCraft and putting SentinelOne back on.
SentinelOne support had never seen the issue and we didn't engage with StorageCraft support.
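For anyone stuck at that same point, a rough sketch of forcing Safe Mode from an elevated prompt (assumes a single default boot entry; uninstall the agent while in Safe Mode, then clear the flag):
# Hedged sketch: force the next boot into Safe Mode so the agent can be uninstalled
bcdedit /set "{default}" safeboot minimal
Restart-Computer
# ...uninstall the agent in Safe Mode, then put things back to normal boot:
bcdedit /deletevalue "{default}" safeboot
Restart-Computer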
zerotol4@reddit
Try grabbing a copy of the crash dump from C:\Windows\Minidump and opening it through WinDbg (there is a modern version of it in the Microsoft Store), then typing in !analyze to see what it tells you. It can often show you what triggered the BSOD or give you more useful info.
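For reference, a minimal sketch of that workflow in PowerShell (the dump filename will differ per machine; the !analyze step runs inside WinDbg itself):
# Find the most recent minidump
Get-ChildItem C:\Windows\Minidump -Filter *.dmp | Sort-Object LastWriteTime -Descending | Select-Object -First 1
# Open that .dmp in WinDbg (Microsoft Store version), then in the command window run:
#   !analyze -v
# The MODULE_NAME / "Probably caused by" output usually points at the faulting driver.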
Downinahole94@reddit
this feels like a Linus tech tip and not an admin talking to another admin. You don't think he knows about minidump?
Aromatic-Act8664@reddit
No?
The vast majority of admins I know don't even know about procmon, and sysmon lol.
Jrirons3@reddit
They also didn't provide any information at all other than "servers down" "BSOD!". No windows versions, no patches that were installed, no mention of what BSOD, what does "servers down" mean, why are they down, what happens. So no they probably don't know how to analyze a dump file.
floswamp@reddit
The real team sent the intern here to post and see what he can dig up. Give the kid a break!
downtownpartytime@reddit
Yeah the bsod will have an error on it to at least point in some direction
mike9874@reddit
I worked at a company where I used to think that. I then moved to another one and realised that no, all admins don't know about these things
c0nsumer@reddit
Yep... Just running the automated analysis probably puts one in the top 5% or 10% of Windows sysadmins.
Mi_Ro@reddit
RemindMe! 3 days
alexnigel117@reddit
The dump will tell you whats up with these errors you are getting
TheManInOz@reddit
Or if you like something a bit simpler, Nirsoft BlueScreenView
LForbesIam@reddit
Yup. The dump will tell you.
whatever462672@reddit
Seconding this. More info, please, OP.
Born-Biker@reddit
Imagine typing like it's a book title...
Secret_Account07@reddit
Hmm we have a rather large environment (+5,000 Windows Servers) but we haven’t seen any issues. Granted we are only a few days into our patching cycle, and this round is test servers, but we usually know by now if there’s an issue.
Can you share more info?
AGTDenton@reddit
Do you not UAT the patches?
TEverettReynolds@reddit
Did you first deploy these patches to a TEST\DEV\QA environment on week one (30 days after the patch is released)?
Then you break PROD up into 2 or 3 separate groups, patched over the next 2-3 weeks (30 days after the patch is released).
You NEVER patch your entire environment at the same time.
NEVER, NEVER, NEVER.
spazmo_warrior@reddit
☝️This guys patches!
bm74@reddit
Yes, apart from the fact that most certifications and insurance require patching of critical vulnerabilities within 14 days.
headcrap@reddit
Nope.. last level of chaos like this was CrowdStrike.. good to know though since InfoSec is moving to SentinelOne starting with OT..
gtbeakerman@reddit
~~CrowdStrike~~ ClownStrike
Rawme9@reddit
Confirm it's not Sentinel One - we are on latest agent and have staggered updates and no significant BSODs to speak of on any endpoints or servers
Icy-State5549@reddit
I used a screenshot of the CPU spike from CrowdStrike in my VMware metrics for a presentation earlier this week. We fared well because we got on it early, but it was still remarkable. I have a calendar item set for July 19th to screenshot that spike at the far end of the "one year" cluster performance graph. It still really stands out. I am upgrading to 8.0u3 this week, I hope I don't lose my metrics!
KlausBertKlausewitz@reddit
VM Snapshots anyone?
Test installs anyone?
boblob-law@reddit
Somebody needs to take this bullshit down. This guy is either full of shit or trying to be crafty. He is talking below about how they tested these patches for two weeks. This is a troll. Is it April 1st?
Technical_Syrup_9525@reddit (OP)
I appreciate your feedback, and I want to clarify that this truly did happen. I have no hidden agenda; I simply hoped to find out if anyone else has encountered a similar issue. Our team consists of eight engineers who are currently overwhelmed, but we do plan to conduct a thorough after-action review. I understand there are many strong personalities in our field, and I respect everyone's viewpoint.
For context, our top-level engineers (I am not one of them) are working around the clock. We manage over 60 customers across various environments and are looking for commonalities. That’s why we brought in two external security vendors and engaged our outsourced SOC—to ensure there was no missed security threat. Each of those groups pointed to the patches, though it’s entirely possible there may be another cause, which is exactly why I posted about the issue.
Thank you for your input.
Dracozirion@reddit
You mentioned S1 and Fortinet. What exactly is installed on your servers when you talk about Fortinet?
MrR0B0TO_@reddit
The one time I’m thankful the servers are 15 years old
nothingtoholdonto@reddit
2012r2 ftw!
981flacht6@reddit
I have 2016, 2019 and 2022
Sentinel One
Indeed all patches no problem. VMware hypervisor.
RaguJunkie@reddit
Same here - no problems either. It could be down to a specific sentinelone agent version I suppose, or unrelated to MS and S1.
vulcansheart@reddit
Same as well. Mixed physical and virtual (VMware) environment, server 2016, 2019 and 2022. No issues like OP described.
HappyCamper781@reddit
~500 MS servers in our env, Tst/Dev/UAT patching since 2 days ago, 70+ servers in and no issues.
badaboom888@reddit
so what was it?
Throwaway4philly1@reddit
Damn right before 3 day weekend for some
soiledhalo@reddit
Scared me! Left everything to run at 8 PM. Just checked and everything is working, but damnit, you scared me.
Dracozirion@reddit
Don't tell me you have Forticlient installed on your servers. You don't, right?
RIGHT?
tarlane1@reddit
This might be a bit late at this point, but I understand those patches were having trouble with Digital Guardian if you have that in your environment.
SUPERTURB0@reddit
Damn, all at once was a nice move. Certainly saved some time.
Seductive-Kitty@reddit
24H2 has screwed up a bunch of things for us. A lot of undocumented registry changes and even vendors like Canon have no idea what's going on with their software
JustLengthiness8818@reddit
We've patched a few hundred so far and nothing.
weekendclimber@reddit
Patched about 80 servers (2016, 2019, 2022) with the 2025-01 CU in our VMware environment (6.7) last night and no issues today.
xxbiohazrdxx@reddit
melonator11145@reddit
This is the thing you need to be patching
CrayonSuperhero@reddit
Psshhh, none of that Broadcom crap for those guys!
Existential_Racoon@reddit
Lol my company still sells it
melonator11145@reddit
Fucking hell, it's 2 years out of support
Jfish4391@reddit
This is why small/medium businesses get ransomwared
Stonewalled9999@reddit
using hacked VMware licences? You can't even buy 7 (legally) anymore
Existential_Racoon@reddit
I bought 8 and downgraded in the portal last week
Pork_Bastard@reddit
oh snap
Jfish4391@reddit
Please google Log4shell or Log4j
minimaximal-gaming@reddit
Log4killchristmas only affected vCenter, standalone hosts are fine (apart from all the other vulns for ESXi 6.7). And who the fuck runs their VMware management in the same VLAN as prod/users, or even exposed to the internet? For sure no excuse for running EOL for years, but probably an old VMware setup is not the problem at such places.
Twinsen343@reddit
2019, Exchange, and no issues with updates for 2 days now
roboto404@reddit
Did it pass your test environment? You used the teat environment, right?….. RIGHT?!
Prestigious_Line6725@reddit
I wish we had the budget for a teat environment
Background_Ice_857@reddit
i will show you my teats (environment) for $5
vogelke@reddit
Jesus, I love this list.
droppedpackets@reddit
Ya forgot admin group = domain users
LaxVolt@reddit
Oh you do, you just happen to run prod on it
yntzl@reddit
lucky644@reddit
Of course, our guys have a code name for our test environment. They call it Production. What do you guys call yours?
roboto404@reddit
PROD-SQL-DC-1
vass0922@reddit
So much of me wants to down vote just out of fear that it's probably reality somewhere.
debauchasaurus@reddit
More like PROD-IIS-SQL-DC-1
Isilrond@reddit
PROD-FILE-BACKUP-IIS-SQL-DC-1
mcdithers@reddit
This was the environment I inherited 3 years ago! Now my test environment is BURN-IT-DOWN
Rivia@reddit
Add the hyperv role for fun
Mysterious_Collar_13@reddit
PROD-FILE-BACKUP-IIS-SQL-DC-1 runs as a VM on the following machine: PROD-HYPERV-RDS
Don't forget 3389 is also open to the Internets
tastyratz@reddit
That's clustered with PROD-HYPERV-PRINT obviously
TheWino@reddit
Forgot DHCP
MarquisDePique@reddit
In MS land, DC implies DHCP and DNS. What we're missing here is -MBX1 ;)
TheWino@reddit
😂😂
Kuipyr@reddit
P-F-B-I-SQ-DC-1
Needs to be 15 characters or less.
CfoodMomma@reddit
So, SBS.
Phalebus@reddit
Nah if it was SBS it’d also have RDGateway and Exchange
Icy-State5549@reddit
Prodcdhcpiisq~1.mydomainiscrap.com
We ran out of space for dashes, redundant characters, and serial integers in hostnames pre-win2k. I just added 128Mb of ram to Prodcdhcpiisq~2, so 2025 is gonna rock!
DarkangelUK@reddit
I'm a contractor working for a MAJOR global company, there's a shocking lack of test instances here..........
I came from a company that is 1/4 of the size and they had test environments for everything, it just blows my mind.
vass0922@reddit
I've seen similar, though I've been in environments where they were so secure they would not wait for test patching they would deploy straight to production logic be damned.
I'm a contractor as well, so I can voice my opinion document the risks they are bringing to the customer and do what I'm told.
JimmyMcTrade@reddit
We had a client with PROD-DC-IIS-SQL-FS-HV
What do I win?
TinkerBellsAnus@reddit
somewhere? Do you want a list broken down by region and WAN IP?
I see this dumb shit so often, it pains me. It pains me even worse, when I watch a team of "highly skilled engineers" lift and shift that pack of shit to Azure because "Cloud is where we make good MRR"
RBeck@reddit
PROD-SQL-DC-1\sqlexpress
Stonewalled9999@reddit
why are you naming it DC1 we all know there is no DC2 or DC3, just call it DC :)
1_________________11@reddit
Not ambitious enough gotta say 001
PrimeConduitX@reddit
💀
Flaky-Celebration-79@reddit
Best comment in the entire thread so far. My test environment is my home lab. Once I vetted updates here, they get rolled out in the corporate environment. My boss loves it.
Flaky-Celebration-79@reddit
Adding for context, I'm running an almost carbon copy of systems at home. Same brands, HyperV and most of the same software.
Lando_uk@reddit
So you use your own time and electricity for the benefit of your work place? No wonder your boss loves it. You need to stop doing this, seriously.
Flaky-Celebration-79@reddit
Dawg the systems were all there and running lonngggggg before I worked there. I've had a homelab since 2003. I see this more about getting the value out of my already running environment.
So all my boss knows is there's a test environment that almost matches ours.
Lando_uk@reddit
Just saying, you should have a work test environment and also do testing in work time, not your own time.
Flaky-Celebration-79@reddit
Oh I promise you it happens on work time. I don't give any time for free. Shoot I got in trouble once for responding to an email on off hours. My work requires us to clock in for ANY work. Probably one of the reasons I don't mind sharing my homelab. It was going to be patched anyways, might as well be getting paid!
welcome2devnull@reddit
I guess that's his test environment...
Everyone has a test environment, just not everyone has a production environment!
kuahara@reddit
https://i.imgur.com/9IjRQmy.jpeg
Technical_Syrup_9525@reddit (OP)
Yes that is why it doesn't make any sense.
Technical_Syrup_9525@reddit (OP)
80% of the workstations are not affected including mine. We have tried to recreate with no joy.
roboto404@reddit
Ooh this is a weird one then. Any similarities on the 10% or are they random workstations
Technical_Syrup_9525@reddit (OP)
totally random.
roboto404@reddit
Do they all have the same error code?
Infninfn@reddit
My bet is AV or some kernel level monitoring software
Euresko@reddit
Teat lol
roboto404@reddit
Lol next gen environment
y0da822@reddit
Anything ever come of this? I have S1 24.x and Server 2016+ - haven't seen an issue on a couple of VMs (Azure hosted VMs)
danstheman7@reddit
FYI we have seen issues with 3 2012R2/2016 servers and SentinelOne 24.X agents. Uninstalling and going back to ver 23.X allows you to boot successfully. It’s a very VERY small percentage of our fleet (less than 3%) but it has happened at least 3 times in 3 unique environments. No known correlating factors.
SentinelOne did confirm the issue and said it’s under investigation.
PTCruiserGT@reddit
Have seen same issues in our environment.
I didn't think 23.x was still officially supported. Maybe they're making an exception in this case. Either way, good to know that 23.x is a fix.
danstheman7@reddit
23.4.6 is still fully supported and the latest supported version for 2008 R2/Windows 7/Server 2012 (non-r2).
rogerrongway@reddit
Dead Windows = Secure Windows.
Imhereforthechips@reddit
Patches took down my AD FS farm. Backups are a life boat.
ceantuco@reddit
what server version?
Imhereforthechips@reddit
‘19
ceantuco@reddit
Thanks. Updating our DCs next week. Our Test DC had no issues.
Imhereforthechips@reddit
My DCs are the only servers not in Azure Update Mgr. I’ll be testing those too.
sarevok9@reddit
This is an obvious LARP.
No minidump, no BSOD, completely irrelevant KBs that dozens of others are running, saying it's run for weeks on preprod but crashes prod (indicating env drift / poor testing as a culprit and not MS).
Pass.
Plasmanz@reddit
It reads like a ticket from 1st level, nothing useful and a bunch of panic.
Deviathan@reddit
Threads like this stress me out. I think I'm just going to believe your reply for my own sanity.
landob@reddit
Hrmm. All my servers are fine, they got patched this last friday.
I've only upgraded my handful of test workstations and they are fine. Going to do the full production workstations tonight.
My assumptions is there is a piece of software you have running everywhere that is the core of the problem?
PatrickOM@reddit
Ok, total guess, cause you gave basically no info. When I've seen something like this, MS forced drivers to require a cert. Can you boot into recovery and disable signed drivers as a requirement to test?
After that I'd be rebuilding the boot records and doing the usual BSOD troubleshooting, trying to get one online to understand where the issue is.
We haven't seen any patch-related issues so far this time!!
Good luck!!
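If you want to try that driver-signature test on a single affected box, one hedged way (a sketch, not a fix) is to reboot it into the Advanced Startup menu and use the one-boot option there:
# Reboot straight into the Advanced Startup (recovery) menu
shutdown.exe /r /o /t 0
# Then: Troubleshoot > Advanced options > Startup Settings > Restart,
# and choose "Disable driver signature enforcement" (usually option 7) for that one boot.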
NGrey119@reddit
Our pilot went out fine. On day 2 now. Nothing down. Going on production next week
Cmd-Line-Interface@reddit
Boot safe mode and uninstall all recent patches?
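For the uninstall half of that, a rough sketch (the KB number is just one of the ones OP listed; servicing stack updates can't be removed this way):
# See what was installed most recently
Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -First 5
# Remove a specific cumulative update by KB number (example KB from OP's list)
wusa.exe /uninstall /kb:5048652 /norestart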
One0vakind@reddit
Well, well, well... Starting 2025 off strong. Hopefully it's not the patches.
FriskyDuck@reddit
We’ve deployed the January patches to Windows 11, Server 2019 & 2025 without issue so far……
lebean@reddit
And we have rolled them across Windows 10, Windows 11, and Server 2022 with zero issues. Between your results and ours, seems like the patches are quite safe and poor OP has some completely unrelated issues they're fighting.
BlackV@reddit
I mean we have just about 0 info from OP so right now total guessing game
broknbottle@reddit
My guess is likely due to SentinelOne. All of these security solutions are garbage
Imhereforthechips@reddit
‘19
saccotac@reddit
What were the KB of the patches installed
Technical_Syrup_9525@reddit (OP)
KB5048652, KB5048652, KB5048685, KB5048685
weekendclimber@reddit
These KBs don't line up with what I'm seeing. 2022 21H2 2025-01 CU = KB5049983, 2019 2025-01 CU = KB5050008, 2016 2025-01 CU = KB5050109
Technical_Syrup_9525@reddit (OP)
I'll ask the server team to clarify. I won't get them tonight as they are spinning up BCDR
MBILC@reddit
Looks like December's patches, not January. So then any issues or quirks should be worked out by now...
It is going to be a 1:1 comparison of the test systems versus production because there is clearly something different.
The list goes on and on..
https://support.microsoft.com/en-us/topic/december-10-2024-kb5048652-os-builds-19044-5247-and-19045-5247-454fbd4c-0723-449e-915b-8515ab41f8e3
FatBook-Air@reddit
FWIW, we have been on December patches for about 3 weeks on 2016, 2019, 2022, and a small number of 2025 without known issues.
CARLEtheCamry@reddit
Same, 10k servers across the Windows Server lifecycle and no issues with December's patches.
Wonder if OPs company tested...
arkain504@reddit
Same for us with 2019 and 2022
Unable-Entrance3110@reddit
Same here, but my install base is even smaller with only 2019 and 2022.
Bebilith@reddit
For 2016, KB5050109 is just the 2025-01 servicing stack update. The 2025-01 CU is KB5049993, but that isn't shown as required until the SSU is installed, even though both are 2025-01 updates.
weekendclimber@reddit
I stand corrected. Doing this from mobile so appreciate the correction in KBs 👍
DiseaseDeathDecay@reddit
Since these are from December, I'd be looking at configuration changes to your servers between the previous patching and this patching. Specifically drivers, firmware, and agents, but it could be any number of things.
joefleisch@reddit
We had outages also after the January 2025 patches for Windows Server 2022
We had many Hyper-V VMs change MAC address.
We use DHCP with static reservations for application servers.
New IP addresses on servers.
Guess what happened to the firewall rules!
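If anyone hits the same thing, a hedged sketch of pinning the MAC back so the DHCP reservation matches again (run on the Hyper-V host; VM name and MAC are illustrative, and the VM generally needs to be powered off for the change to apply):
# Check the adapter's current (dynamic) MAC on an affected VM
Get-VMNetworkAdapter -VMName 'APP-SRV-01'
# Pin it to the MAC the DHCP reservation expects
Set-VMNetworkAdapter -VMName 'APP-SRV-01' -StaticMacAddress '00155D010203'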
pointlessone@reddit
Only seeing a false positive on Forticlient on our workstations for a OneDrive update on this side of things.
No harm has come from letting it get blocked so far, but we aren't using OneDrive significantly enough to cause interruptions.
LForbesIam@reddit
This sounds like what happened with the Crowdstrike fiasco. Is it an Antivirus def?
Flyess@reddit
We've been rolling out patches this week with no issues. We patched last month also. I'm going to need more info…
arkain504@reddit
Since patch Tuesday our ADFS environment was down. Or rather the service would not start. We rolled back to a backup from before the updates. Everything worked.
Someone installed the updates again. After a reboot the ADFS service wouldn’t start.
But if you restart the Windows Internal Database service, the ADFS service will start.
AtarukA@reddit
No issue here so far on 900+VMs.
AV is Withsecure with EDR, servers from 2003 to 2022. yes I patched them already because "but muh security" without testing.
qejfjfiemd@reddit
I've patched a bunch of less important servers today with the jan rollout without issue
lordcochise@reddit
Ouch, was this just from Jan '25 patch tuesday stuff? not a single issue for us so far (mix of W10/11, Server 2022/2025) and we ALSO use SentinelOne, but we only use Hyper-V for virtualization.
(1) Hypervisor issue? Do you use Hyper-V / VMWare or another vendor? Are the offline servers primarily VMs or hosts? Using any updated stuff like Shielded VMs?
(2) were these up to date with Dec patches already or is it possible they got pushed older updates at the same time?
(3) Smells like maybe SentinelOne or Fortinet might have needed exceptions if it's across so many of your devices?
(4) Anything Bitlocker related? 2024 saw a number of staged updates to revoke old UEFI certificates / images, were all those updates applied?
(5) check the megathread - https://www.reddit.com/r/sysadmin/comments/1i0ym1n/patch_tuesday_megathread_20250114/
(6) Possibility of Enforced Mode being an issue?
KB5037754: PAC Validation changes related to CVE-2024-26248 and CVE-2024-29056 | Enforced by Default Phase:
Updates released in or after January 2025 will move all Windows domain controllers and clients in the environment to Enforced mode. This mode will enforce secure behavior by default. This behavior change will occur after the update changes the registry subkey settings to PacSignatureValidationLevel=3 and CrossDomainFilteringLevel=4.
The default Enforced mode settings can be overridden by an Administrator to revert to Compatibility mode.
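If you want to check whether point (6) is in play, a hedged sketch for a domain controller (the subkey path is my assumption from KB5037754's wording and should be verified against the KB; the value names come from the quote above):
# Check the current PAC validation posture on a DC
# (path assumed from KB5037754; verify before relying on it)
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Kdc' |
    Select-Object PacSignatureValidationLevel, CrossDomainFilteringLevel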
woodburyman@reddit
236 systems patched already. No issues. We started pushing on Tuesday.
193 Client OS. For clients its a mix of W10 22H2 and Windows 11 23H2/24H2. We pushed 24H2 finally this week. The only issues are with the Feature Upgrade, things we expect like reinstalling VPN client as they had old versions, etc.
43 of them are Server 2022/2025. No issues either. We have not upgraded our Hyper-V hosts yet, that's weekend time. Nor the couple of 2019 systems I have left, as they run our ERP, which is also weekend time.
We're a Dell shop for servers and end users. Malwarebytes. SonicWall VPN clients. Teamviewer. Rapid7.
uninspiredalias@reddit
24h2 seems to have a major issue with Dell SupportAssist OS recovery (or vice versa!). Invisible backup fills the drive, having to manually disable in Bios and uninstall on all our Precisions and Latitudes.
Flashy_Try4769@reddit
Deployed January critical and security updates to our Tier 0 servers Thursday early morning. No issue so far. Sounds like issue is unique to your environment. We are also a S1 and Fortinet shop.
thisbenzenering@reddit
I applied patches and rebooted yesterday without issue. best of luck 🤞
Chrisdotguru@reddit
I read an article about a Fortinet VPN credential breach, could it be related?
https://www.bleepingcomputer.com/news/security/hackers-leak-configs-and-vpn-credentials-for-15-000-fortigate-devices/amp/
Cepholophisus@reddit
Did your test environment run into issues? Were these patches tested, and they failed every time?
perkia@reddit
wdym?
Technical_Syrup_9525@reddit (OP)
No issues in the test environment.
hadesnightsky@reddit
Happening on my end after automatic updates
Filed a ticket with our helpdesk, but the cause was EaseFlt.sys
Just to make the PC usable: boot to safe mode and move said file elsewhere for the meantime.
In our org it happened 3 times now AFAIK.
KennySuska@reddit
No issues here. However, if you're looking at both Workstations and Servers (with different versions of Server OS), you need to figure out what exactly they have in common. There has to be some software in your environment that is running on all of those systems, maybe an AV, XDR, or even your hypervisor.
Otherwise, someone messed up in some way when pushing those patches to Prod. Have you gone through the minidump yet?
philefluxx@reddit
Oh man that sounds horrible. Cant say we saw any issues in the Azure cluster of VM's we run, about 18 of them.
I did have one of my colleagues jump in to test a client's support request yesterday and said "Oh weird the server restarted a day ago but not due to scheduled task" so I said "patch Wednesday?" and we confirmed that was the case. Didn't check the KB's in detail though, just looked like your standard cumulative updates and .Net. Wishing you and your team luck!
2Tech2Tech@reddit
using LTSC was the greatest decision i ever pushed for
TBone1985@reddit
Rolled out Jan CU updates to 2016, 2019 and 2022 with no issues tonight.
CornBredThuggin@reddit
Same. Nothing is on fire. Yet anyways.
Khal___Brogo@reddit
Same, just finished verifying everything. Going to bed, hope I don’t get woken up to a bad Friday.
WoTpro@reddit
Are you on AMD hardware? I am seeing some issues in my environment this morning after a user patched his system and hit a BSOD
tastyratz@reddit
FWIW AMD adrenaline 12.1 is a mess. If OP did driver updates at the same time maybe it's not the MS KB's?
WoTpro@reddit
Yep it was the adrenaline drivers that messed my user up, updating to the newest fixed the issue
RegistryRat@reddit
OP, you guys didn't have backups? Snapshots? Pushing the updates to just a few machines as a test?
guiltykeyboard@reddit
This is why you should test updates before you deploy them to everything and not let windows update just install whatever it wants.
wwbubba0069@reddit
why I snapshot servers before updates, but I don't have near the number of systems as you do.
I haven't had any issues with my environment, also using S1 and Forti.
SpaceCryptographer@reddit
Wow you have a large test environment!
Appropriate_Ad_9169@reddit
RemindMe! 8 hours
Olleye@reddit
What the holy ... how?
MBILC@reddit
You roll out patches to 400+ systems at once...
Now, please tell me you have a pre-prod group you test on first and let run for at least a week or so before going to production?
Dropping MS patches a few days after releases is never a good idea, for this exact reason, MS has a bad track record..
Technical_Syrup_9525@reddit (OP)
We held and tested on servers with no issues for two weeks.
MBILC@reddit
As others noted, 2025-01 Cumulative just came out on the 14th...
I did see above you noted some KB numbers for the patches, but they do not match January's KBs...
Did you possibly deploy the wrong patches or Decembers or maybe some that were pulled?
How were they deployed? WSUS/SCCM/KACE or something else?
Technical_Syrup_9525@reddit (OP)
They were Dec patches and rolled out through Datto RMM
heapsp@reddit
datto in combination of another software vendor could be the culprit here
Couldabeenameeting@reddit
I consult for a client with systems on Datto RMM, looks like December patches ran fine on 150 servers from 2016 (ha!) to 2022
lumpeh@reddit
Datto here, but ESET for av/mdr stuff instead - zero issues with Dec patches for what it's worth.
GezusK@reddit
The updates that came out Tuesday?
Fizgriz@reddit
How is that possible when this month's patches just dropped two days ago?
IllustriousRaccoon25@reddit
What are you running from Fortinet?
No issues with these patches and S1 on ESXi or Hyper-V. Have a few bare metal servers that haven’t gotten patched yet.
bobs143@reddit
That is what I'm thinking. I have several friends who run S1 and patched with no issues. So what is actually running on the Fortinet side?
snorkel42@reddit
I’m guessing it is FortiSIEM.
LTMac97@reddit
We started getting floods of data overwhelming our fiber in a school system coming from Microsoft on the 7 brand new computers we installed in the summer. Grinding the schools to a halt as this bloated our network. We started up another new windows machine and same thing happened. Microsoft hasn’t been a great help
Doomstang@reddit
RemindMe! 8 hours
bondguy11@reddit
If this was being caused by a Microsoft update there would be hundreds of others having the same issue, it has to be something else unique to your environment/security stack
Boblust@reddit
I’m running Jan updates for 2016-2022 servers tonight. I have a test environment and these have been good since Tuesday. So, am I good to continue to update my prod environment?
ohiocodernumerouno@reddit
Who doesn't point fingers at Microsoft?
ohiocodernumerouno@reddit
Yea! Long weekend!
OGKillertunes@reddit
Sentinelone is terrible imo
Air_Veezy@reddit
I applied patches in my environment last night and haven't experienced any issues. I hope you're able to get things sorted for your org
Icy_King9382@reddit
Why are you spreading misinformation?
mistakesweremade2025@reddit
In what sense is this misinformation?
Ok-Pickleing@reddit
Thank God
Morlock_Reeves@reddit
In the Monthly Updates thread it is being reported that the updates break System Guard Runtime Monitor. Maybe that is your issue. Seemingly not an issue for most people, it appears.
jorel43@reddit
Do you use defender EDR /atp? What exactly is your issue, are they all just blue screening?
Rouxls__Kaard@reddit
You sure this isn’t the result of a wide-spread malware attack?
Suspicious_Mango_485@reddit
I’m putting my money on S1!
Status_Baseball_299@reddit
First thing Microsoft is going to request is a TSS capture, download it if you haven't already done so
Cranapplesause@reddit
Have 100+ servers. Mix of 2016, 2019, 2022. I've patched about 90% and no issues yet. It's got to be something specific to your environment.
xendr0me@reddit
So far zero details from OP on the symptoms besides "BSOD!"
This is 100% an issue with something specific to their environment, especially if these are 2024-12 updates.
Odium-Squared@reddit
This is why we don’t patch. ;)
Ergwin1@reddit
I'd create a new OU and disable all GPO inheritance. See if it boots.
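Something like this, as a sketch (OU, domain, and computer names are illustrative; needs the ActiveDirectory and GroupPolicy RSAT modules):
# Create a quarantine OU with GPO inheritance blocked, then move one affected server in
New-ADOrganizationalUnit -Name 'PatchTriage' -Path 'DC=corp,DC=example'
Set-GPInheritance -Target 'OU=PatchTriage,DC=corp,DC=example' -IsBlocked Yes
Move-ADObject -Identity 'CN=AFFECTED-SRV-01,CN=Computers,DC=corp,DC=example' -TargetPath 'OU=PatchTriage,DC=corp,DC=example'
# Reboot the server and see whether it still blue screens without the usual GPOs applied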
Morlock_Reeves@reddit
My test ring of Win10/11 Server 2016/2019 have been fine since Tuesday when I installed them.
reddit_username2021@reddit
100 comments and bsod code or minidump not shared…
Volidon@reddit
Have a feeling OP isn't on the server team or has experience on how to provide necessary information.
SnarkMasterRay@reddit
You are correct.
FiRem00@reddit
It’s like they don’t actually want help
cbartholomew@reddit
Y’all don’t sandbox patch updates?
Cyber-X1@reddit
That sucks! Is there a way to set things up so there’s a test server that gets patched early and delay patches to all other machines, and maybe it rolls those patches out very slowly so they all don’t go down at once? Probably too rare to do all that tho huh?
Janus67@reddit
I don't believe we've deployed anything yet outside of test. But I did hear there were some issues.
What OS version?
Virtualized? Which hypervisor if so?
Which updates/KBs did you approve?
Did it work on some or break all of your environment?
Technical_Syrup_9525@reddit (OP)
Both VMware and Hyper-V
TinkerBellsAnus@reddit
This has to be something not completely patch related then. There's something else that is clashing against this issue, you're just not seeing it yet.
Ikhaatrauwekaas@reddit
None here running vmware with 2016,19 and 22
ellileon@reddit
I applied the patches to 400+ Servers last week and no issues at all. Windows Server 2016-2025.
This has to be some kind of special configuration on those servers. Did you find some overlapping part on those Servers?
Mafste@reddit
Well I was going to patch this weekend, imma just delay that one week.
Spiritual_Brick5346@reddit
i could log in on a friday and check/prevent this
fuck that, they don't pay me enough
they can deal with it
tbrumleve@reddit
Nope. Dec was a non event (Dev / QA / QA2 / CAT / Prod / DR / VDI environments patched at different intervals). Jan is looking the same (Dev & QA patched this week, the rest come over the next couple weeks).
BasicallyFake@reddit
Zero issues across our test cluster but we haven't pushed the most recent ones beyond that yet
WhAtEvErYoUmEaN101@reddit
None of our customers seem to be affected. That’s roughly 2k servers
Rawme9@reddit
First off, this would be meeting our disaster recovery criteria but I'm not sure the scale of your company. Because of that, we would start recovering from backups for data and spin up new servers. That's the easier part for us and likely you if you have good backups.
For endpoints, you need at least a few to test. What are the BSOD codes? All the same or different? Can you reimage from Intune, and if not can you boot into safe mode? Etc. Cattle not pets so I would try to reimage in whatever the most efficient way is with your available tools.
Technical_Syrup_9525@reddit (OP)
Yea our team is and has been spinning up on our BCDR devices. Luckily we do image based backups locally for most and some in the cloud. We are making headway on that front. The team hasn’t had enough time to do an after action report. We have engaged Microsoft and multiple security vendors including our outsourced SOC to rule out some sort of threat. It just doesn’t make sense to me and am hoping someone a lot smarter than me has any ideas but honestly we are too busy. I’ll post the codes Tomorrow
benscomp@reddit
This sounds like a SentinelOne issue
Technical_Syrup_9525@reddit (OP)
We had another MDR vendor come in and look and they are claiming it's the patches also. It's driving my team nuts. Then we had our RMM vendor look and they are all pointing at Microsoft. But yes I know it should have hit the news by now.
4t0mik@reddit
Changes in MS code can drive crashes etc. with RMMs.
Is there a common piece of software that spans 2016, etc.?
Technical_Syrup_9525@reddit (OP)
Datto RMM
Scootrz32@reddit
Are you using Datto for backups too? We had problems with a StorageCraft driver that was causing servers not to boot. I know Datto was based on that stuff at one point
4t0mik@reddit
I would start there if you haven't already. Look for a way to disable, remove etc.
Does your test environment have this loaded (and which version)?
Opposite_Ad9233@reddit
Damn, I am reading this while patching Dec/Jan updates on 300+ servers.
Icy-State5549@reddit
Me after reading this thread: Meh.. my environment is fine. If I'm screwed I'll deal with it tomorrow.
My anxiety after reading this thread: If you want to sleep tonight, then go check your gear.
TheWino@reddit
Applied patches on Tuesday not seeing the same.
Guderikke@reddit
Good luck, maybe consider a patch test group of non critical servers the week before patching prod, moving forward.
clinthammer316@reddit
We have SentinelOne and I have installed this month's WU on a bunch of 2012, 2012 R2, 2016, 2019 and 2022. No issues so far. All are non-critical prod systems just in case.
Wish I had the Jupiter-sized balls of OP to push the WU to 500 systems
MisterFives@reddit
We appreciate the alert, but we need a lot more info. Which KB? What server OSes are affected? What's the BSOD error code?
Also good luck and godspeed.
MBILC@reddit
The KBs they listed above do not seem to match the 2025-01 Cumulative patches released.
techierealtor@reddit
You’re not the only one having a bad day, I stubbed my toe on my way out the door this morning.
Jokes aside, feel for you. I already told my boss I might find a new job if crowdstrike equivalent happens again.
lucky644@reddit
Well, what’s the common factor among them? sentinelone and fortinet? Can you setup one test machine with one or the other and test? Narrow it down and then harass the vendor.
Unless you have some other common factor among all those servers.
Maro1947@reddit
Lol - I got a "Server Error" notice when I clicked on this!
themanonthemooo@reddit
I hope you’ll get it resolved quickly!
ohheyd@reddit
Did…..you test before deploying to production?
pjustmd@reddit
Need more info.
RCTID1975@reddit
You're going to need to provide a LOT more info here.
But no issues here, and if this were simply an MS issue, we would've heard about it before now
Standard_Opposite_86@reddit
We had an internet outage today, but I run a small shop and no one works at night. Please share more info on what update it was and which OS.
Standard_Opposite_86@reddit
And good luck! Hang in there!
-c3rberus-@reddit
Please share more info on this.
PedroAsani@reddit
Be a hero and drop info on this as you find it. Save the rest of us.