Secure Boot 2026 certificate rollout stuck on VMware VMs
Posted by maxcoder88@reddit | sysadmin | 16 comments
I'm trying to deploy the new Secure Boot CA 2023 certificates on Windows Server VMs running on VMware, ahead of the June 2026 expiry of the old 2011 CAs.
The deployment gets stuck at "InProgress" indefinitely. Event ID 1801 shows error 0x80070013 (WRITE_PROTECT).
From what I've read, the root cause is an invalid Platform Key (PK) in the VM's virtual UEFI NVRAM, which blocks any write to Secure Boot variables — so GPO and registry keys alone don't fix it.
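For anyone else checking status, I've been reading it from the servicing registry key on the guest — the key path and value name below are what Microsoft's rollout docs describe, so verify them on your build:

```
:: Elevated prompt on the Windows guest. Status values include
:: NotStarted / InProgress / Updated; mine is stuck at InProgress.
reg query HKLM\SYSTEM\CurrentControlSet\Control\SecureBoot\Servicing /v UEFICA2023Status
```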
The suggested fix involves:
- Upgrading ESXi to 8.0 Update 2+
- Upgrading VM hardware version to 21+
- Renaming the NVRAM file via SSH so ESXi regenerates it with 2023 certs
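From what I gather, the ESXi-side rename boils down to something like this over SSH — datastore and VM names below are placeholders, and the VM must be powered off first:

```shell
# Run over SSH on the ESXi host, with the VM powered off.
# "datastore1" and "MyServerVM" are placeholders - substitute your own.
cd /vmfs/volumes/datastore1/MyServerVM

# Rename rather than delete, so you can roll back if needed.
mv MyServerVM.nvram MyServerVM.nvram.bak

# On next power-on, ESXi regenerates the .nvram file with its current
# default Secure Boot variables (the 2023 CAs, on ESXi 8.0 U2+ with
# hardware version 21+).
```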
My questions:
- Has anyone actually gone through this process? Any gotchas?
- Is the NVRAM rename safe for VMs with vTPM enabled?
- Any way to do this at scale without touching each VM individually?
Running ESXi 7.x currently. Thanks!
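For the at-scale question, one idea I'm toying with is scripting the power-off / rename / power-on cycle from outside the guests with `govc` (the govmomi CLI). The VM names, datastore, and exact flags below are guesses on my part — verify each subcommand against `govc <command> -h` and test on a single VM first:

```shell
#!/bin/sh
# Rough sketch only; assumes GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are
# already exported and govc subcommand flags match your govc version.
DATASTORE=datastore1              # placeholder

for vm in app01 app02 app03; do   # placeholder VM names
    govc vm.power -off "$vm"
    # Upgrade virtual hardware so the regenerated NVRAM gets the 2023 certs
    govc vm.upgrade -vm "$vm" -version 21
    # Back up the NVRAM file instead of deleting it
    govc datastore.mv -ds "$DATASTORE" "$vm/$vm.nvram" "$vm/$vm.nvram.bak"
    govc vm.power -on "$vm"
done
```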
satsun_@reddit
I followed this article: https://knowledge.broadcom.com/external/article/423919
It's a manual process, but worked well for a small batch of VMs.
Summary of the Broadcom doc:
- Create an HDD and attach it to a VM
- Obtain the certificate and copy it to that HDD
- Disconnect the HDD from the VM
- Power down the VM you want to update, attach the HDD, modify the VM config params, boot to UEFI, select the new cert, shut down, disconnect the HDD, then boot
Think of that process like connecting a USB drive, copying a file, then booting into the BIOS of a machine and updating the cert.
maxcoder88@reddit (OP)
Which VMs are actually affected by this vulnerability? Is there a way to identify them before running the remediation?
satsun_@reddit
I assume any of your VMs that have UEFI and Secure Boot enabled. If your VMs use the BIOS boot method or aren't using Secure Boot, then I assume they don't matter, but don't hold me to that.
Not tending to this supposedly won't break anything; Microsoft just won't be able to apply future security updates to the boot sector.
Sensitive_Scar_1800@reddit
Windows will generate event ID 1801 in the System log in Windows Event Viewer. The event reads something akin to: "Secure Boot certificates (CA/keys) are available but have not yet been applied to the device firmware."
This event is generated by a scheduled task that runs periodically (every 12 hours, I think).
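If you want to query guests for the 1801 event or kick the servicing task manually instead of waiting out the 12-hour cycle, something like the following works from an elevated prompt on the guest — the task path is the one I've seen documented for Secure Boot servicing, but double-check it on your OS build:

```
:: Elevated cmd prompt on the Windows guest.
:: Show the most recent 1801 events from the System log:
wevtutil qe System "/q:*[System[(EventID=1801)]]" /c:5 /f:text /rd:true

:: Trigger the Secure Boot servicing task on demand
:: (task path may vary by Windows build):
schtasks /run /tn "\Microsoft\Windows\PI\Secure-Boot-Update"
```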
Lazy_Acanthisitta729@reddit
+1 to this. It's also what we're doing at our company, and it's working well enough. Running the scheduled task MS has for this detects the updated certs and sets the registry values as updated. A bit tedious, though, as I haven't found a good way to automate going through the BIOS to make the selections for applying the cert from the mounted drive.
maxcoder88@reddit (OP)
Could you share the full steps you implemented in your own environment? As I understand it, you perform the Broadcom steps and then run the scheduled task afterward?
wastewater-IT@reddit
I've gone through that exact process on ESXi 8U3 with no issues:
- Shut down, snapshot, upgrade hardware to vmx-21
- Rename the NVRAM file in the datastore browser
- Boot up, confirm everything looks good
- Run the registry key method of updating Secure Boot; remove the snapshot if successful
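For reference, the "registry key method" is the Microsoft-documented AvailableUpdates value. The 0x5944 DWORD below is the value documented for the full 2023 certificate rollout as I understand it, but the staging values have changed over time, so verify against Microsoft's current guidance before applying:

```
:: Elevated prompt on the guest. Verify 0x5944 against Microsoft's
:: current Secure Boot certificate rollout guidance before running.
reg add HKLM\SYSTEM\CurrentControlSet\Control\Secureboot /v AvailableUpdates /t REG_DWORD /d 0x5944 /f
```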
It's worked fine with every VM, including W11 guests with vTPM. Keep in mind this will trigger BitLocker recovery if you have it enabled on the VM, so have the recovery key ready to go.
Also, for time-sensitive VMs that can't handle being out of sync for the 1-2 minutes before time re-syncs (deleting the NVRAM resets the clock), you can set the advanced parameter `rtc.diffFromUTC` to -25200 (for UTC-7; use your timezone's offset in seconds) before the first boot so that the VM has the right time on boot. You can remove the parameter after NTP sync occurs.
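For anyone computing their own value: rtc.diffFromUTC is just the local UTC offset expressed in seconds, so UTC-7 works out to -7 × 3600 = -25200. Quick sanity check (the -7 here is an example, not a recommendation):

```shell
# rtc.diffFromUTC is the local UTC offset in seconds.
# Example: UTC-7 (Pacific Daylight Time).
offset_hours=-7
offset_seconds=$(( offset_hours * 3600 ))
echo "$offset_seconds"   # prints -25200
```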
maxcoder88@reddit (OP)
Thanks for the detailed writeup — this is really helpful! One part I'd like to understand better is the time-sensitive VM section. Could you elaborate a bit more on that?
Specifically:
- When you delete/rename the NVRAM file, how much does the clock actually drift, and what exactly gets reset — just the RTC offset, or the full BIOS time?
- What kinds of workloads or services are most at risk during that 1-2 minute window before NTP re-syncs?
- Regarding the `rtc.diffFromUTC` advanced parameter — is this a standard VMware setting, and is it safe to leave it in place longer term or should it always be removed post-sync?
- Any edge cases you've run into, like domain-joined VMs where a clock skew triggers Kerberos auth failures?
- Does this also apply to DCs holding the PDC Emulator FSMO role? Since the PDC is the authoritative time source for the domain, even a brief clock reset could cascade to all domain members. Have you specifically encountered this with PDC Emulators or other domain controller roles, and if so, how did you handle it?
Appreciate any extra detail you can share!
wastewater-IT@reddit
Excellent questions! Here's what we experienced:
The clock seems to lose its timezone on first boot, so the time of day looks right but is actually 7/8 hours off (for Pacific standard/daylight time). This may differ if your ESXi hosts have the wrong time or if you have "sync time with host" enabled - test this on one VM if you can.
We were mainly concerned with DHCP (failover config relies on time sync), database servers, and domain controllers, everything else we accepted the time being off for a minute.
Yep, it's an official setting: https://knowledge.broadcom.com/external/article/419717/error-bios-time-gets-set-as-local-time-a.html. After the time synced up we shut down the VM again and removed that property so that we don't have to worry about it in the future.
For domain-joined VMs, it will fail Kerberos until the time syncs up (can't RDP or log into it), but as soon as the NTP sync completes it's good to go. The event logs also show the wrong event times until NTP sync - another reason why we took special care with the sensitive VMs.
We saved DCs for last, especially the PDC Emulator FSMO holder. The rtc.diffFromUTC parameter worked like a charm and nothing went out of sync. That's why we tested with our other VMs first to ensure there were no issues; it was scary, but everything worked!
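One more thing that helps on the DC point: force the sync yourself right after first boot instead of waiting for w32time's normal polling. Both commands are standard w32time tooling:

```
:: Elevated prompt on the guest after first boot.
:: Check how far off the clock is and where time is coming from:
w32tm /query /status

:: Force an immediate resync instead of waiting for the next poll:
w32tm /resync
```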
tarvijron@reddit
According to our TAM, we're in wait-and-see mode: it will be handled by an update.
maxcoder88@reddit (OP)
Thanks — just to make sure I understand correctly: when the TAM says "will be handled by update," is that referring to an upcoming ESXi/VMware patch, or a Microsoft update for the guest OS?
And is there a specific KB article or release we should be watching for? Want to make sure we're tracking the right thing given the June deadline.
MrYiff@reddit
I think this would be the KB to watch, it doesn't give any timeline for the automated fix, just that they are working on it (and this was only added recently, originally the KB only provided the manual steps):
https://knowledge.broadcom.com/external/article/423893/secure-boot-certificate-expirations-and.html
maxcoder88@reddit (OP)
Which VMs are actually affected by this vulnerability? Is there a way to identify them before running the remediation?
MrYiff@reddit
You can only check the certs from inside the VM, AFAIK. It potentially affects Linux guests too, since they can also use Secure Boot.
Newer VMs seem to already have the new cert, but I'm not sure what version it was introduced with.
The VMware KB I linked has manual steps for checking for the new cert on both Windows and Linux.
tarvijron@reddit
An ESXi patch. Notably, we are already on vC/ESXi 8, and there have been some very slight rumblings that it might be a VVF 9 thing.
SuspiciousOpposite@reddit
https://github.com/haz-ard-9/Windows-vSphere-VMs-Bulk-Secure-Boot-2023-Certificate-Remediation
This tool will do everything for you (except upgrade you to ESXi 8.0.2+, of course). I've not used anything on it other than the -Assess switch yet, though, so I have no idea how well it works for the full remediation. I'm currently still trying to get my head around the issue.
Also worth noting that Broadcom say they are working with MS to make this an automated process, so something more official could come along soon.