Broadcom mandating a minimum 72-core license for VMware from April
Posted by sgt_Berbatov@reddit | sysadmin | View on Reddit | 111 comments
Nothing fully confirmed as yet, but here's the story from El Reg: https://www.theregister.com/2025/03/28/arrow_vmware_licensing_change/
We renewed for 12 months in December to review what we were going to do. We now have 9 months to move.
xfilesvault@reddit
We only have VMWare because Cisco requires it for Call Manager...
Everything else is on Proxmox... 15 servers, almost 300 cores.
72-core minimum for VMware just to run Cisco Call Manager?!?!?
AggravatingPin2753@reddit
Cisco will be more than happy to put you in the cloud!
TinkerBellsAnus@reddit
Cisco puts it in my cloud every renewal period. By cloud I mean butt.
NotSureWhyNotNow@reddit
We are migrating five phone systems this year, moving to 3CX. Less expensive (including the phones) and easier to manage. Two more sites next year make the move, and half of the enterprise will be there, including our entire region. Cisco licensing has gotten greedy, looks like Broadcom was jealous.
Bart_Yellowbeard@reddit
Name checks out.
TinkerBellsAnus@reddit
It's sore, and it's got some speed bumps on it. People trying to claim "I bit my lip, that's not a sore" are LIARS.
Teunon@reddit
I believe v15 doesn't require VMware, if you're looking to move it off.
dzfast@reddit
Tell me more?
xfilesvault@reddit
About CUCM 15 supporting KVM? Looks like he deleted his comment. It's not true, unfortunately.
Michichael@reddit
Sounds like time to move.
Repulsive-Koala8636@reddit
They should have announced this on April Fools' for comedic effect.
jmfginlauber@reddit
We’ve been running Proxmox in production for several years now and couldn’t be happier. In light of the recent VMware chaos, we’ve started helping other SMEs move to self-hosted Proxmox environments as well. It’s not just a cost thing — owning your stack gives you way more flexibility and peace of mind.
One of the few gaps we encountered was multicluster management and more advanced integrations like Kubernetes. So we actually built our own platform around Proxmox to simplify that — kind of like a control plane for managing multiple clusters, plus some nice extras (managed K8s, storage, etc.)
Happy to chat if anyone’s considering the jump. VMware’s making it easier by the week…
moosethumbs@reddit
My org is working on a plan to switch to OpenShift virtualization. We renewed for 5 years right before the Broadcom acquisition, so we have a 3-4 year timeline. So far (in testing) OpenShift has been working well. 9 months is a pretty short timeline. Have you been looking at options?
Diligent_Ad_9060@reddit
I've talked with one larger organization who seemed happy migrating to kubevirt. That comes with the complexity of Kubernetes, but I believe their main motivation was as an incremental step in a bigger goal to containerize their environment and move closer to ephemeral/immutable workloads.
metaxa313@reddit
Are you running openshift on your own hardware?
moosethumbs@reddit
Yeah bare metal
metaxa313@reddit
That was my plan and I was overridden, now I have an incoming IBM Fusion. All that to run one application we migrated to openshift. I have ~30k cores and PBs of storage, but we need to pay IBM for hardware.
moosethumbs@reddit
Are you buying from IBM or Red Hat? Maybe that’s the difference. I think there are rules about who can sell what
moosethumbs@reddit
Hmm well I mean if you’re going to migrate into that then I suppose it’s just new gear? But yeah technically you can run OpenShift on anything.
Impossible-Judge1966@reddit
I am surprised people are not talking about the VMware lawsuit. https://www.theregister.com/2025/03/26/vmware_sues_siemens_for_using/
spartanmk2@reddit
Haha! Broadcom whining that Siemens wasn't allowing a software audit xD
Snowmobile2004@reddit
Hasn’t that been in place for a bit now?
TrueStoriesIpromise@reddit
Yeah this was announced a month ago.
And to clarify: it's 72 core minimum for the organization, not each server.
illicITparameters@reddit
That's 24 cores per node in a 3-node cluster. That's ensuring SMBs go elsewhere.
accidentalciso@reddit
I’ve been away from physical infrastructure for several years now. I think the last time I was ordering physical VMWare hosts was back in 2017/2018 or so. How many cores is typical in low end to mid tier hardware these days?
Hayb95@reddit
What’s your typical stack look like now without any physical hosts?
No_Resolution_9252@reddit
As a virtualization platform, at minimum 24 - typically more.
For the really low end there are many processor choices in the 24-32 core range for under ~3 grand that can be implemented in a 1U single-socket server.
I suspect this is a move to get really horrible old dual 8 core nodes out of rotation and needing to be supported.
illicITparameters@reddit
Depends on your workloads, but usually 16 because that’s Microsoft’s minimum license amount.
W3tTaint@reddit
Multiples of 16 for MS licensing 😉
ElevenNotes@reddit
That's 12 cores per CPU. How many Xeons have only 12 cores or fewer? I count 4 out of 32 CPUs in the 5th gen, and 10 out of 62 in the 6th gen. You really have to go out of your way to shop for a below-12-core CPU these days. The low-core Xeons are also the ones with the highest turbo frequency, not something you need in an SMB data centre, don't you think?
illicITparameters@reddit
Another person who just doesn’t get it. 🤣🤣
I have a client right now who only needs 16-cores per node, so why would I spend more money?
ElevenNotes@reddit
I'm well aware of such small requirements in small-to-micro businesses. These businesses never needed VMware. They are better off using Proxmox or Hyper-V, or no virtualization at all and bare-metal Linux with containers.
No_Resolution_9252@reddit
That isn't a lot of cores. About the lowest core count server processor you can get now is 12 cores (excluding high end, high clock, low core count skus)
10 years ago I wasn't building SMB virtualization platforms with nodes that had fewer than 20 cores across 2 sockets. A 32 core proc costs less than 2 grand today.
The pile of legacy garbage virtualization platform I have to deal with today is 24 cores per node and it is BROADWELL.
The virtualization platform for a job I just left built on the "the company is broke as a joke but our stuff is falling apart so badly we have to shop last year's xeon silver servers on sale and populate storage with spinning hard drives" design, were 24 cores...in 2019.
illicITparameters@reddit
1) That isn't true at all 🤣. Literally just had Dell price me 8-core CPUs.
2) 24 cores per node is quite a lot in SMB land.
3) Just because you were doing that doesn't mean it's right or that people should be doing it now.
No_Resolution_9252@reddit
I qualified that. There are 8-core CPUs that are high end, high clock and high cost. I initially didn't consider them, then edited to address the sleazebags that run desktop processors under their virtualization platforms. I am aware at least some of them exist, unfortunately.
24 cores is not a lot for anything; an average desktop is approaching that now, and my sub-$2k laptop has 22. Even doing something as reckless as 90% subscription on a 3-node cluster, that is enough for a network of about 8 (actively used) servers.
24 cores was not a lot of cores 10 years ago, either. The E5-2670 v3 in dual-socket systems was absolutely ubiquitous in SMB virtualization platforms at very little cost over dual 8-core systems.
USarpe@reddit
You are exactly the kind of person who should be kept away from making any decisions. A laptop with 22 cores 😂😂😂😂.
There are people in this world who understand what they do; let them do their job.
No_Resolution_9252@reddit
Ok boomer.
illicITparameters@reddit
The average desktop is not approaching 24 cores. 🤣
I'm not going to continue entertaining someone who talks out of their ass and lives in a bubble.
Good luck, bro.
No_Resolution_9252@reddit
ok boomer
illicITparameters@reddit
I’m not even 40, but keep coping.
No_Resolution_9252@reddit
ok boomer
hlt32@reddit
Remember that HT on one physical core counts as two logical cores. Take a midrange modern desktop like a 14600: 6 P-cores (12 logical with HT) and 8 E-cores, so 20 logical cores is close enough.
illicITparameters@reddit
What is the point of this comment? HT isn't physical cores.
hlt32@reddit
Tbh I was thinking of something else that licensed on virtual, disregard my comment.
TrueStoriesIpromise@reddit
Right. I'm not defending the number, just making sure it's clear.
illicITparameters@reddit
I know, I was just writing out the insane math for people.
FLATLANDRIDER@reddit
Working with our VAR right now for renewals and we are looking at buying extra licenses right now because according to them, each order is a minimum of 72 cores.
So if I add a host in the future, even if I only need 32 cores, I need to buy 72 minimum. So we are looking at buying licenses for near term future hosts even though we don't need the licenses yet.
TrueStoriesIpromise@reddit
That may be true, too.
What should happen is that SHI (for example) buys 1000 cores and then splits the license up and resells them to SHI customers.
But I'm sure that's a license violation, too.
avaacado_toast@reddit
Doesn't matter, fuck Broadcom and their shenanigans. I'm moving to Proxmox.
Brufar_308@reddit
We have 84 cores, and were forced to license 96 at our renewal, so this doesn’t surprise me. We renewed several months ago.
systonia_@reddit
Yes, that is confirmed by me; we had the same quote just days ago. 72 cores is the minimum. Assuming 2 CPUs per server and 2 servers (so 4 CPUs) for a small setup, that calls for 18-core CPUs. These guys really are gangsters: you either have 16-core or 24-core CPUs, so you either waste a few cores or have to pay up for 96.
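The rounding being described can be sketched in a few lines (a minimal sketch; the 72-core per-order minimum is as reported in this thread, and `licensed_cores` is an illustrative helper, not a Broadcom tool):

```python
# Illustrative sketch of the per-order core minimum described above.
# Assumption: you pay per physical core, floored at a 72-core minimum.
def licensed_cores(physical_cores: int, minimum: int = 72) -> int:
    """Cores billed for one order: actual count, never below the minimum."""
    return max(physical_cores, minimum)

# 2 servers x 2 CPUs x 16 cores = 64 physical cores -> billed for 72
print(licensed_cores(2 * 2 * 16))  # 72 (8 cores paid for but never owned)

# Stepping up to 24-core CPUs = 96 physical cores -> billed for 96
print(licensed_cores(2 * 2 * 24))  # 96
```

Either way the small shop loses: undershoot the minimum and you pay for phantom cores, or buy bigger CPUs and pay for silicon you didn't need.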
No_Resolution_9252@reddit
2 node virtualization? seriously?
FLATLANDRIDER@reddit
Yes, that is common. Not everyone needs 50 servers, and virtualization is easier to manage than bare metal everything.
No_Resolution_9252@reddit
If you are doing 2 node virtualization, you are literally throwing money away on the hardware, licensing and labor to support it. Even without special purchasing arrangements in AWS and Azure to reduce sticker price you are putting yourself at a cost disadvantage.
ziobrop@reddit
not every workload should be hosted in the cloud.
Tribat_1@reddit
The vast majority of my customers have 2 nodes. Maybe 3 for replication to a DR site.
No_Resolution_9252@reddit
Then it's not 3 nodes, is it.
Sauronphin@reddit
I made a business out of Ceph and Promox.
I love broadcom
Ansky11@reddit
Do not use ceph unless you have 4 or more nodes.
Sauronphin@reddit
Yeah our minimum designs call for 5 or 7 nodes.
That gives us 28 osds minimum.
Thin is in: cheaper nodes, but a whole fleet of them.
ElevenNotes@reddit
Which drives the core count up, which drives the Windows licensing up. So you save on VMware but spend more on MS, great!
Sauronphin@reddit
Depends if you run datacenter or standard licences.
ElevenNotes@reddit
Not really. At 7 nodes with two 12-core CPUs each, you're at 64k for Datacenter and 56k for Standard (10 VMs per node). The only option where core count has a lower impact is licensing by VM, but there again you have to license 8 cores per VM. Most VMs don't need 8 cores, so you pay for cores you don't need.
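A rough sketch of that Windows Server core math (the function is mine and illustrative; the rules follow Microsoft's published per-core model: 16-core host minimum, Standard covers 2 VMs per full host licensing, Datacenter is unlimited. It counts licensable cores rather than dollars, since pricing varies by agreement):

```python
import math

# Hedged sketch of Windows Server core licensing for a cluster.
# Every host licenses all its physical cores, with a 16-core minimum.
# Standard edition covers 2 VMs per full host licensing ("stack");
# Datacenter edition covers unlimited VMs per host.
def winserver_cores(cores_per_host: int, nodes: int,
                    vms_per_node: int, edition: str) -> int:
    per_host = max(cores_per_host, 16)            # 16-core host minimum
    if edition == "datacenter":
        stacks = 1                                # unlimited VMs
    else:                                         # "standard"
        stacks = math.ceil(vms_per_node / 2)      # 2 VMs per stack
    return per_host * stacks * nodes

# 7 nodes, dual 12-core CPUs (24 cores/host), 10 VMs per node:
print(winserver_cores(24, 7, 10, "datacenter"))  # 168 licensable cores
print(winserver_cores(24, 7, 10, "standard"))    # 840 (5 stacks per host)
```

This is the commenter's point: every extra core multiplies through the whole calculation, so a wider fleet of small Ceph nodes pushes the Windows bill up regardless of which edition you pick.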
rmeman@reddit
Tell me your worst Ceph nightmare scenario
ErZ101@reddit
Tell me more: installation, conversion of existing setups, support of existing setups?
Sauronphin@reddit
Mostly conversion from vmware to Proxmox on new setups and I throw in support for recurring revenue.
bedel99@reddit
I have a VMware image that I need to run for development work, but I can't get it to run natively, so I'm running it in Windows on Workstation on Proxmox. Do you think you could get it to run natively? I know only enough to get by with it; PM me.
MajesticAlbatross864@reddit
We just got a renewal quote for a customer last week. Last year it was 32 cores across 2 nodes for $4,000 NZD; this new quote is 72 cores for $25,000 NZD. Needless to say, that's not going to happen 😂 Just finished rebuilding my home lab on Proxmox and converting all the VMs. It went well.
tjasko@reddit
Oof, yea that's atrocious.
kanid99@reddit
I guess I don't see the pain here. 72 cores for an organization isn't that much, but I guess it depends what you use this for.
Our smallest clusters have 4 nodes of 28 cores each. Our largest have 12 nodes of 64 cores each.
FatBook-Air@reddit
Most orgs don't need much CPU, so they often get the smallest and cheapest CPUs they can get.
So this Broadcom change is going to be extremely painful for many orgs. For many, this will likely be the thing that makes them jump.
faulkkev@reddit
It is like they want to go out of business
Site-Staff@reddit
It's typical for a raided company like this to milk everything it can as quickly as possible and then bankruptcy the remains away.
It will happen. At this point, not moving off VMware seems like IT negligence.
smoothvibe@reddit
We went Proxmox. Fck u, Broadcom.
Phlatchmo@reddit
If you look at the screenshot from the Register article, I believe that if you're running more than 72 cores total, you will simply pay for the number of cores you are using. Their example was five servers with dual 16-core CPUs, which equals 160 cores, and they would only be charged for 160 cores. I hope that's the case.
ConstructionSafe2814@reddit
I'm not that worried about Broadcom anymore. I think we had a bit of luck we weren't knee deep in the VMware "ecosystem" like horizon and what not. Just 3 hosts and Veeam + SAN.
Last year we renewed for 3 years. Next year August 2026 our SAN will go EOL.
I've got a Proxmox PVE cluster on really old hardware (HPE gen8+gen9) since X-mas holidays 2023-2024. Currently I'm working hard to get a Ceph cluster up and running to provide storage for Proxmox. Also on old (gen9) hardware.
Both clusters perform better than I could have hoped for.
Only deadline I have now is our SAN. I want to avoid buying a new SAN at all cost (pun intended).
Ansky11@reddit
Storage is cheap. Do not use ceph or SANs. Just do sync across all disks of the 3 nodes with HCA cards.
Ssakaa@reddit
How many people in your org have debugged data corruption/consistency issues on Ceph?
Those aren't just monetary.
ConstructionSafe2814@reddit
Sounds like you have practical experience. Please elaborate on debugging data corruption in Ceph!
Ssakaa@reddit
Only in my homelab; I eventually concluded the inconsistencies had actually destroyed the data and resorted to backups for the few pieces with real data. Granted, I was properly stress testing it: 1Gb links, spinning disks, and 3x old R610-era hardware. Deployed a pile of 40-ish CentOS VMs in parallel, Satellite 5 underneath that. The combined I/O traffic somehow made the cluster lose sync, and the whole stack froze up. Spent a good week attempting to recover it, since it was just homelab... and it's genuinely not a trivial ecosystem if you don't live and breathe clustered storage. That experience single-handedly made me appreciate the price of a real SAN.
Fatel28@reddit
We run our Ceph with all-flash/NVMe storage and a 200Gbps (via LACP) backhaul. Spinning disks with only 1Gbps sounds like you're asking for trouble; the docs recommend 10Gbps minimum.
Also: you only had 3 nodes. Not recommended for production.
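The "not enough nodes" argument can be made concrete with the default replicated-pool settings (3-way replication, one failure domain per node; `can_self_heal` is an illustrative name, not a Ceph API):

```python
# Why 3-node Ceph is fragile: with the default 3-way replication, losing
# a node leaves nowhere to rebuild the third copy, so the cluster sits
# degraded until the failed node returns. Assumes one failure domain
# (CRUSH host bucket) per node, the usual small-cluster layout.
def can_self_heal(nodes: int, failed: int, size: int = 3) -> bool:
    """True if enough healthy nodes remain to re-create full redundancy."""
    return nodes - failed >= size

print(can_self_heal(3, 1))  # False: runs degraded, no rebuild target
print(can_self_heal(4, 1))  # True: third replica is rebuilt elsewhere
print(can_self_heal(5, 2))  # True: matches the 5-node minimum designs above
```

That extra-node headroom is the difference between a cluster that heals itself overnight and one that sits one failure away from blocked I/O.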
Ssakaa@reddit
Yep. Wasn't production. So, while I absolutely pushed the limits and caused it to hit an edge case, it failed catastrophically when faced with that edge case. From there, the tooling to diagnose and attempt recovery is simply not designed for anyone who's not neck deep in developing Ceph. If you don't inherently know the layers and data structures, you get to learn them for a maybe on recovery.
With a dedicated team focused on it, it's an awesome tool. For a random bubblegum-and-duct-tape solo admin shoehorning PVE in because they're not a viable VMware customer, it's a potential failure path they will not have sufficient internal or external support for.
Fatel28@reddit
To be fair, Proxmox does offer paid support, so there IS support for it.
I do get what you're saying. I learn best when I break something and have to fix it. But I think there's a small distinction here: you were running it in a config that was never supposed to work, so it breaking made sense, and so did not being able to fix it. That's why they have the minimum requirements; it'll break unrecoverably if you don't meet them.
I'd be really interested to see if you could break it the same way in the minimum supported config, then fix it. I'd bet you'd have had a much harder time breaking it and a much easier time fixing it.
ConstructionSafe2814@reddit
Does Ceph really cause data corruption under heavy load? I can't imagine that; I'd guess it would halt I/O as soon as it's unsure.
Perhaps, given the design choices of that cluster, there were other design/configuration choices that caused it, like bad drives and not having failover capacity to self-heal (3 nodes).
Again, I can't imagine the data corruption was just because "Ceph".
Fatel28@reddit
Right that's what I'm saying. It was a bad/broken setup from the start. The fact that it worked at all was a fluke.
mattk404@reddit
As a homelabber with similar-era hardware... 1Gb networking on, I'm assuming, not a ton of spinning disks is not going to go well. I'm fairly sure you'd be able to recover, but performance would likely be underwhelming. 10G with NVMe-bcache-backed HDDs (6-ish per node) on R710s performs quite well: 1.2GB/s and plenty of IOPS (can't remember what it benched at last).
Ssakaa@reddit
Performance was reasonable: 4x 2.5in 500GB SAS per box. The 1Gb links just bottlenecked the burst of individual actions from those writes. Ceph stopped agreeing on who had the newest copy of each block device and, by the look of it, had kept writing past that line. Was ugly.
mind12p@reddit
HP MicroServer Gen8? Do you have any guides on how to move to Proxmox? Last time I checked there were issues with the B120i driver.
ConstructionSafe2814@reddit
Not sure on the B120i driver. Just install it and see if it works.
There's a migration tool in recent versions of Proxmox, so it's not all that hard. CloneZilla might be your friend as well, as might the documentation.
NIC naming might change because of the VM "hardware" being different. You might also want to uninstall the VMware drivers before you migrate. But it should all be in the documentation, which is pretty good.
Dioz_31337@reddit
Lmao Proxmox FTW
OldWrongdoer7517@reddit
Why is pre-order licensing still a thing in general? It seems so arbitrary by today's standards.
AggravatingPin2753@reddit
I've got a just-replaced HP DL390 Gen10, dual CPU, 256GB RAM and 8TB total SSD storage (don't remember the exact usable amount after RAIDing it) running Proxmox for our testing. We cobbled the monster together from our last refresh's leftovers.
It's been solid as a rock with local storage, but we moved all our datacenter VMs to an IaaS datacenter running the VMware cloud stack. We're looking at this type of setup to replace small satellite-office servers, so local storage is all we need.
No way can I justify 72 cores for one or two low-end satellite-office servers. We couldn't even justify it when our quote came in with a 32-core minimum.
No_Resolution_9252@reddit
For the love of god, why are you torturing yourself with that complicated a virtual environment for 1-2 servers?
architectofinsanity@reddit
BOHICA…
metaxa313@reddit
Are there any mid size admins on this sub that are switching? We have ~30k cores in VMWare and I don't see us migrating any time soon.
Ch4rl13_P3pp3r@reddit
We’ve had it confirmed this week.
Sk1tza@reddit
Same boat now, 12 months to work out where to go. Most likely AWS/Azure now...
SpOoNmAn666aust@reddit
Just had it confirmed the 72 core minimum applies on a new order even if your organisation has 500 cores currently.
thrwaway75132@reddit
Don’t piecemeal, co-term everything
TheDawiWhisperer@reddit
Broadcom make a lift and shift to EC2 seem cheap
MeatPiston@reddit
Pretty soon they’re just going to pull an Oracle and ask for your gross receipts so they can decide how much to charge based off your income.
TinkerBellsAnus@reddit
You laugh, but I saw a C&D from them because a client had not renewed in time. Broadcom is just Oracle with an Asian at the helm. Everything else about them would pretty much give Larry Ellison a boner when it's discussed.
Mogaloom1@reddit
No need, they already log all this information.
They will simply ask if you let them take the money directly from your bank account.
MagicBoyUK@reddit
Yeah, we're not going to hit that. Guess the remaining VMware servers are getting yeeted.
Pre-Broadcom they were making hundreds of thousands off us a year, now it's going to be zero. Good job Broadcom.
unethicalposter@reddit
You ditching them is what they wanted.
bbqwatermelon@reddit
Only Broadcom could out-Microsoft Hyper-V core licensing. Microsoft has an incentive to price people out so they move to Azure; Broadcom does not. They may be boxing themselves into a corner.
retiredaccount@reddit
Many VM admins in the workforce don’t actually know much of anything in depth and open ticket after ticket. The 72 core license is probably the minimum break even point for Broadcom to offset the average admin from a backend support standpoint. Makes you want to see their ROI sheet with these comparisons.
Igot1forya@reddit
Having VMware in the homelab has helped a ton for learning how it works. It's a shame they are taking this away from us. But I get it, they want to sell certifications and paid support.
In the 15+ years of using VMware professionally, I never once opened a ticket with them. Most issues I had were actually related to a bad patch that they issued (like a PSOD due to a bad NIC driver, or when they cooked USB storage due to log journalling). In every case, a reinstall only cost me maybe an hour of actual hassle. I'd rather do that than deal with troubleshooting over the phone with support who may simply say to reinstall anyway. Not that I had many instances where I had problems.
davidbrit2@reddit
I feel like the "iCarly smirking knowingly at her computer while having a drink" picture whenever someone seems surprised at a new way Broadcom is gouging for VMware.
imthelag@reddit
I'm shook that people are still just finding this out. I've never used VMWare but have known about this pretty much as early as r/sysadmin did.
I'm not perfect by any means. Rather, just an observation that we all benefit from subscribing to industry news - especially for any SaaS or hardware a company would consider mission critical.
Ok-Car-2916@reddit
Unpopular opinion is that 72 cores is a generous minimum for a virtualization solution you want to technically have enterprise support for. That being said, for under 72 cores there should be a free community license, you just shouldn't be able to call support and waste their time debugging a toy environment.