Nutanix hit us with a 75% quote increase with a one day notice before expiration... so that project is dead. VMware is out and we were looking hyperconverged... Any other alternatives?
Posted by junon@reddit | sysadmin | View on Reddit | 488 comments
We were looking to get off VMware and refresh our hardware in one fell swoop but it was already going to be expensive and a 75% quote increase announced the day before the quote expires has probably put that out of reach. I was REALLY looking forward to being able to handle purchasing and support for our international offices through nutanix directly, instead of through regional vendor support offices as is currently the case with Dell.
Does anyone have suggestions of similar hyperconverged providers with good international support experiences and "reasonable" prices that haven't started turning the screws yet?
Hyper-V isn't out of the question, but I would prefer an all-in-one solution.
MekanicalPirate@reddit
Check out Scale Computing
Recent_Perspective53@reddit
Hated scale
chron67@reddit
What issues did you have with it?
darkcloud784@reddit
It's basically proxmox but with a bigger price tag. I personally would go with proxmox or xcpng.
concerned_citizen128@reddit
It really isn't; the management is completely simplified compared to Proxmox. There's no storage layer to manage either, it's just available.
If you need to justify your position to manage the system, then Proxmox or XCP-ng will make sense. If you're looking for dead-ass easy to set up and manage to the point where you don't need to spend any time (even the updates are single click) and want to spend time elsewhere in your environment, Scale is the way. The experience is more like setting up cloud VMs than anything else. Initial setup is 5 minutes (not counting unboxing and racking) and then there's really nothing else to do...
Recent_Perspective53@reddit
Simplified is a matter of opinion. Does it group everything into one management system? Yes. Does it limit you to one management system and require purchasing new systems when you need more capacity? Also yes. I personally hated it, and I couldn't find everything I needed to manage.
Recent_Perspective53@reddit
Didn't support the failover DFS setup i had. Annoyed me half my file servers no longer served a purpose.
admlshake@reddit
They worked sales for Nutanix.
DerBootsMann@reddit
there’s no need anymore , scale’s dead in the water
MekanicalPirate@reddit
Really? Care to elaborate?
DerBootsMann@reddit
https://www.blocksandfiles.com/ai-ml/2025/07/31/scale-computing-acquired-by-acumera-which-becomes-scale-computing/1610190
you might want to google or bing-fu for some public details
MekanicalPirate@reddit
Well, hopefully they maintain everything everyone seems to like about the platform.
DerBootsMann@reddit
most of the og crew’s gone , new owners already jacked prices .. i’d chill on that for now
Environmental_Mix856@reddit
+1 for scale. It’s been a few years but support was fantastic, the pricing and licensing model was good, and the software was based on kvm which has a lot of usage in the wild.
Inquisitor_ForHire@reddit
Seems like everything not from VMWare or Microsoft is based off KVM. It's good stuff.
DerBootsMann@reddit
there’s bhyve and there’s vates with xen
brownhotdogwater@reddit
Been using it across over 20 sites. Works well; the only thing that sucks is no support for a SPAN port or pass-through virtualization.
justmirsk@reddit
I came here to say Scale Computing. Really simple and easy to use. We implemented it for a bank and it was smooth.
FU-Lyme-Disease@reddit
Scale over Nutanix any day. And like 10% of the learning curve.
woodyshag@reddit
Nutanix isn't really all that difficult and is supported by far more products than Scale.
Inquisitor_ForHire@reddit
This is our problem - vendor support. We do a lot of OT software and you need something that a vendor will actually certify and support you on.
ronin_cse@reddit
We just got hit with a quote from Scale that is up 100% vs a few weeks ago. We also love their product but they aren’t immune from industry wide price increases either.
Bahk7@reddit
Been using scale computing for over 8 years. Nothing but good things to say about it. We originally chose Scale over Nutanix during POC.
hardingd@reddit
Scale is improving quickly, but the enterprise tools are … lacking.
xCDOGx@reddit
I also really like the Scale system and was going to recommend it.
Shington501@reddit
Came here for this - ScaleComputing is great, especially for non-enterprise - I am in a similar situation, and they can get hardware reasonably, likely via Lenovo.
CrazyInspection7199@reddit
Was about to say this. We use it and have zero complaints
Packet7hrower@reddit
SCALE Computing. Fuck VMWare. I manage about 30 clusters. Rock fucking solid.
OkVast2122@reddit
They went bankrupt and their assets sold for peanuts. Why bother?
Packet7hrower@reddit
Where’s your source? I call BS
OkVast2122@reddit
That ain’t exactly breaking news, is it. Been doing the rounds online for a good year already. Here’s the official announcement on their biggest client’s site, the one that hoovered up all the leftovers.
https://www.acumera.com/press-releases/acumera-acquires-scale-computing/
Call it whatever you fancy, it don’t change the reality, does it. I’ve got no skin in the game, not even a whiff of VMware stock, but you might wanna crawl out from under that rock you’ve been parked on and have a proper look at the vendor you reckon’s running your stack. Just my two pence!
Packet7hrower@reddit
Not trying to be a dick, but still - where is your source?
That’s the vendor acquisition.
I’ve been to their conference every year, very involved with the community, and I’ve never heard they filed for bankruptcy… this is the first I’ve heard about it.
I’ll wait for your source :)
Until then, I guess have fun spreading BS info!
Fighter_M@reddit
I'll start with a little heads-up, because this one definitely lives in the "insider chatter" bucket. There's no press release, no official breadcrumbs, nothing you can just Google and point at. You kind of have to know the people, or be close enough to the circle, to validate any of it, but from how you're talking about going to their conferences every year and so on, I'd guess you can sanity-check at least parts of what I'm about to tell. So, the Scale story... Take it with a truckload of salt if you want, treat it like valley gossip, or go ping your network and see what lines up.

At some point, when VMware started stepping on rakes with all the subscription-only and "let's triple the quotes" stuff, Scale's CEO had what probably felt like a genius play: buy back VMware licenses, sometimes even take hardware off prospects' hands, then flip those customers onto Scale's stack plus new Scale-branded servers. Super-seductive pitch, and it really worked! Sales popped, and everyone loved the story, except... VMware licenses aren't really transferable, and that customer hardware is basically dead weight. Lab it, demo it, donate it, or just write it off and move on. So sure, the revenue charts looked like a clean hockey stick, but CAC ("customer acquisition cost") was quietly going through the roof.

For quite a while, the board and VCs were allegedly getting the curated version of the sales pitch: lots of growth, lots of momentum, very pretty slides, and not so much focus on the part where the economics didn't actually make sense. But cash always tells the truth, and at some point the investors, including Morgan Stanley, did the math, and it didn't land well. Scale's CEO gets pushed out, there's talk of legal action, the CFO goes missing, and suddenly you're in shutdown and fire-sale territory.
Company gets sold at a loss, something like $20M under just the Morgan Stanley invested capital from what I hear, and liquidation preferences mean one party, the Morgan Stanley people, is "fine" while everyone else basically gets zeroed. Investors, shareholders, everyone! Then you look at the team and it tells the rest of the story: engineering is thinning out hard, the CTO moves to a competitor, something-about-the-nodes, the VP of Sales shows up at DataCore, the CRO lands at Stor-stinking-Magic, and on it goes. Do a quick LinkedIn sweep and it feels like a quiet evacuation. So whether you were a fan or not, the tune right now is pretty clear: yes, they're still around, but it doesn't exactly feel like a long-runway situation.
OkVast2122@reddit
Not trying to be a dick
Fair play mate, you’re actually doing a pretty solid job so far!
Know what, I already handed you one that’s out there in the open, innit? What you want next, me grassing up my own mandem with names, times, and a cheeky tape, yeah? Not happening! All I’m saying is this, Scale’s sitting on the vendor list of one of the biggest VARs in the UK, and we do bare business with that lot. So, their sales people, and our lot, they chat a bit loose, especially after a few pints and a bit of extra on the side, you get me.
Nah, it ain’t like that! Acumera was Scale’s biggest paying customer, so when Scale started rolling like they might switch the lights off, Acumera didn’t really have a choice but to step in and buy them out just to keep their own IT from going sideways. End of the day, Acumera’s an MSP, they can’t just flip a switch and move a whole data center overnight. That sort of transition takes time, months easy, sometimes even years.
Oh, so you’re drinking from their marketing firehose? Alright then, go on, why don’t you pattern up and tell man what their books were saying? Do everyone a solid, go pull your strings and show us what you’ve actually got now.
Look, as someone born and raised on the East Side, I don’t really vibe with people like you. You guys don’t watch your mouth, you throw out accusations left and right just cause there’s an internet and a firewall between us. I’d like to see you say all that while looking me in the eye, and good luck with that! As of now, you’re done out here, dismissed.
wezelboy@reddit
Hardware refresh is going to be expensive no matter who you go with.
Falldog@reddit
This here. Hardware prices are skyrocketing, and every vendor is dramatically reducing how long their quotes are good for because of it.
gsmitheidw1@reddit
Dell is offering 15 day quotes to us because things are increasing so rapidly. Our education sector brokerage team is recommending reconditioned hardware.
If you've not already bought hardware recently, I'd say forget it at this stage. Too late! Many items are just no longer available at all for any price; particularly some enterprise GPU cards, but also some edge-level stuff like touchscreens, are just unavailable altogether.
Bigglesworth12@reddit
Cisco is only offering 2 day quotes and a contract clause that they can come back and ask for more money after selling it to you. Still not sure if that would actually hold up or not.
jakeryan91@reddit
Compute related quotes are valid for 7 days and 14 days for everything else.
Source: I work for a VAR and Cisco is 95% of my business and I'm in this madness everyday. We do include said disclaimer that Cisco may screw us and therefore you via cancellation or price increase after the order is created.
Bigglesworth12@reddit
you are correct, we got clarification today that they are at 7 days. also Cisco just cancelled our December order and repriced it at around $1 million more than the original. fun times...
gsmitheidw1@reddit
That's insane. Cisco has increasingly stiff competition; sounds a lot like they're squeezing their existing customers. It may not work out well for them.
wrosecrans@reddit
I can't imagine there's a huge spike in real end-user demand for touch screens all of a sudden. Some companies have probably started panic-buying all sorts of random stuff like that just because they heard there are shortages of tech stuff.
gsmitheidw1@reddit
That's probably the case, all I can see are some options are now missing.
monkeyatcomputer@reddit
... and when you do order hardware, their updated terms let them increase the price or just not deliver after you've already waited 6 months. It's wild out there.
PDXNatty@reddit
I have some factory refurbished hardware vendors that still have hardware in stock. Pricing is elevated but wayyy less than what you get direct. Shoot me a message if you need something.
iama_bad_person@reddit
Not sure what people were expecting to happen. DRAM prices skyrocketed 5x in my country, at some point it became cheaper to not honor quotes and take the reputational hit than it was to make a multi-thousand dollar loss on every order.
lost_signal@reddit
Memory tiering is 80% of my VCF related discussions.
Being able to cut ram costs by 50-75% kind of crowds out any other data center cost discussion
AtarukA@reddit
And honestly I can't even blame them on that, like even on a logical point, it makes sense, they can't guarantee the price won't be 10x what they were 6 months ago.
surloc_dalnor@reddit
They've already been doing that anyway.
ericstern@reddit
"This quote is good for 365 days or until president makes a tweet"
IdiosyncraticBond@reddit
And you can bet he and his friends did some hefty trades before that, knowing what would happen
teddyphreak@reddit
Same for us; we are down to 10 days to finalize a PO after final quote is produced
Larsonski@reddit
And what about delivery times? HPE sent me a quote with a 130-day delivery time.
PDXNatty@reddit
Had a client in a similar situation to you. We had to source their chassis and memory separately and assemble on site. 3TB of memory came out to $120k, had to go factory refurbished for everything. Total server build was around $250k. 3 week lead time.
The alternative was to go through HPE direct, who was quoting us 1.2M with 6 month lead times.
networkn@reddit
Woof 3TB of Memory! You could actually get 10 chrome tabs open with that! 😂
moldyjellybean@reddit
Call me crazy, but these high hardware prices don't seem sustainable. Just wait a little bit if possible.
RegisteredJustToSay@reddit
Eh... I mean sure it's future capacity building but try to find a hyperscaler or AI provider who isn't constantly on the absolute edge of hardware saturation for compute capacity. It's not like no one is using AI.
hutacars@reddit
This is how building brand new stuff works, yes.
You’re thinking too small.
vNerdNeck@reddit
Lol. Let us know how that works out for you.
Supply chain is going to be affected until at least the summer of '27.
Also, now that memory manufacturers have moved to price-bid wars with their OEMs (Dell, Cisco, Lenovo, HP, etc.), costs are not coming down anytime soon.
mariahmce@reddit
It has to end eventually I guess but not any time soon. I’ve been hearing to expect shortages for at least the next 2 years.
wezelboy@reddit
I hope you are right.
1esproc@reddit
Don't understand how anyone is still thinking HCI is a good plan right now.
jermchan@reddit
Curious, but what is a good plan then? Spreading HCI out into individual nodes in a cluster gets you about the same thing, now with the added cost of storage devices on top?
1esproc@reddit
Traditional SAN. HCI often comes with vendor lock-in, a complex software layer, and horizontal scaling. When you run out of one resource, you add new nodes: CPU, RAM, and storage all at once. Costs of each of these are going nuts; it's better to be able to scale vertically. It's somewhat workload dependent, maybe everything you do scales linearly, but my environments sure don't.
jermchan@reddit
But HCI HCLs among hypervisors are almost the same, if not identical, across many components. The only thing I can see running out of scaling options at a component level is CPU. Memory and disk/NVMe drives can be added as components to the cluster without needing to scale horizontally by nodes. To me, the added cost is still storage shelves compared to HCI builds.
OkVast2122@reddit
This is very true! Our quote had basically doubled in like a week.
Lordnerble@reddit
me refreshing 90% of our servers in the last two years......
Constantly pointing that out to my boss. We're right in the middle of all hardware cycles minus new employee devices. The last server I bought has now doubled in price to $62k.
afmed@reddit
Exactly this, at the end of last year I bought 4 servers, expecting to drip feed the other 12 over this year, in Feb the price went up 50%, yesterday the Feb price doubled. Now I'm just hoping the prices get better next year... Not holding out much hope tho
No_Investigator3369@reddit
At the end of the day, this means no raises this year, right?
Tricky-Service-8507@reddit
Depends on how you move. If you aren’t prototyping how your environment can and should change before the last minute, well yea you might be slow to move and slow to respond
combovertomm@reddit
Whatever happened to hyper-v usage?
brokenpipe@reddit
OpenShift virt from Red Hat. So surprising I’m the first one to mention it here.
You can use either CSI or Portworx for storage backend.
TheDarkerNights@reddit
This is basically what we're doing. We went from VMware to OpenShift Virt in the lab, and we're planning to get it in prod within the next month or two. We even got the OKE licenses so we can start on containers and reduce some of our resource usage from VMs. For storage, we're using IBM Fusion (basically Ceph).
gravemoss_@reddit
hey! im a budding linux sysadmin but im heading the vmware to openshift virt migration for my environment.
if you don't mind sharing, what did you come from? what roadblocks(if any) did you hit? hope it's going well for you either way!
0xe3b0c442@reddit
deep breath
I've been running this transition for my large org for the past year.
There are a lot of snags, foot guns, and outright bugs along the way, although the latter has improved significantly since we started. Some of this is due to poor decision-making on our end.
Generally speaking, the environment was designed to mimic the VMWare environment as much as possible. But, that didn’t take into account some of the fundamental differences between VMWare and OSV.
Storage has been our biggest pain point. We used NFS storage for the majority of VM disks since that’s what we had used for VMWare and it worked well. Big mistake. With KubeVirt/OSV, using Filesystem (as opposed to Block) storage added an extra layer (KubeVirt keeps the disk as an image file inside the provided filesystem volume instead of using it directly), impacting performance, backups, etc etc. I’m still working on unwinding that one a year later.
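For anyone heading down the same road, the fix is to ask for Block mode up front when creating the VM disk. A rough sketch of what that looks like with a CDI DataVolume; the StorageClass name and size here are made up, so adjust for your own environment:

```shell
# Hypothetical DataVolume requesting a raw block volume, so KubeVirt
# attaches the PV directly instead of writing a qcow2/raw image file
# onto a filesystem volume. "fast-san" is an assumed StorageClass name.
oc apply -f - <<'EOF'
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-rootdisk
spec:
  storage:
    volumeMode: Block          # "Filesystem" is the extra layer described above
    accessModes:
      - ReadWriteMany          # RWX is needed for live migration
    resources:
      requests:
        storage: 100Gi
    storageClassName: fast-san
  source:
    blank: {}
EOF
```

Whether your storage class actually supports RWX block volumes depends on the backend, so check that before committing to a design.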
Networking also needed a lot of work. A lot was done with VMWare vSwitch that wasn’t easily accomplished in OVN-Kubernetes, and ultimately doesn’t make sense to be done there.
Our biggest continuing pain point is the lack of a true vMotion/DRS replacement for moving VMs between clusters. Red Hat is getting closer with MTV, but there are still a lot of limitations, particularly that all involved clusters must have a migration network in the same L2 domain. This is really painful where we have different AZs with different network stacks and trying to stretch a VXLAN overlay between them.
Red Hat has been amazing through the whole process. At minimum make sure you have premium support if your org can afford it; a dedicated resource can also be a huge help, but it isn’t cheap.
More than anything, make sure you have someone who deeply understands Kubernetes on your team, whether as an FTE or contract. Having that person, especially in the design phase, will help you avoid some of the mistakes that were made in our initial deployments which were designed by a bunch of guys with deep VMWare experience.
gravemoss_@reddit
this is so helpful to me, thank you for breaking down and explaining some of your terminology too. all the best to you!! 🖤
My_Cool_Throwaway_@reddit
I'm just about finished with the migration from VMware to OpenShift for my org. Went fairly smoothly, but did hit some snags. Mostly a couple of bugs in the migration tool (one of which was fixed and updated by Red Hat towards the end of the migration). Also, networking was quite a journey for me to wrap my head around at first; there are some gotchas with OVN + Linux networks that I didn't think about coming from VMware. Obviously going from VMware to Kubernetes is a HUGE change, but the web UI is functional enough for day-to-day tasks. You WILL have to get your hands dirty with YAML frequently though. Embrace it is my advice.
Writing this quickly in between things so sparse on details, if anyone is interested feel free to reply or DM me and I’d gladly expand further.
Also RedHat support has been competent
1esproc@reddit
Talk about trial by fire. Your org is nuts
gravemoss_@reddit
fire it sure is, but as the resident linux hobbyist im the one with the most experience. we do have a solid team tho. we've got some consulting hours in the pocket if shit gets too silly, but we've got a good bit of resources otherwise and i'm cramming RH's learning materials like crazy.
director made the decision, so this is where we're headed, but i'm optimistic we'll make it. 💪
No_Philosophy4337@reddit
AI makes linux admin a dream
0xe3b0c442@reddit
It helps, but you also need to know what you’re doing because it also throws vast amounts of bullshit out.
No_Philosophy4337@reddit
Not if you use the right model and know how to prompt - 99% of “AI slop” is actually poor prompting
1esproc@reddit
For more domain-specific issues, where there isn't much online conversation happening, it's a minefield. LLMs aren't magic; there is no traditional thinking. If they haven't been able to ingest an endless amount of text about something, they will spit out convincing bullshit.
No_Philosophy4337@reddit
The skill is knowing what to do to prevent bullshit. Its all in your prompts.
0xe3b0c442@reddit
99% is a big stretch here, but this still applies.
At the end of the day generative AI is still statistical pattern matching analysis. Presented with something previously unseen it is as likely to fail as not. It doesn’t understand what’s going on, it’s just calculating what token is most likely to be appropriate next.
1esproc@reddit
Godspeed 🫡
squarezero@reddit
Not necessarily a road block, but you need to accept and then embrace that you're living in a Kubernetes world. It's powerful and you'll have lots of options for customization and automation. But overall the solution is a little rough around the edges. Red Hat does seem to be making improvements, but those get shipped in new releases. So lots of upgrading if you want all the shiny new bells and whistles.
0xe3b0c442@reddit
This right here is the best advice.
If you try to treat it like VMWare you will fail, plain and simple.
brokenpipe@reddit
and they are rapidly improving. It does feel like VMware during the ESXi 3.5-4.0 days where I was constantly applying updates for (significant) improvements
brokenpipe@reddit
I don't want to give away too much, but the biggest roadblock is large VM volumes. They take time to migrate. The other thing to consider is the network.
That said it is a heck of a lot better than Proxmox at the Enterprise level.
malikto44@reddit
I wish RH had stuck with oVirt. OpenShift isn't bad, but I would say it isn't as good with persistent, "pet" virtualization as oVirt is. OpenShift is great for cattle though.
Creshal@reddit
Man I wish RH hadn't killed ovirt. It's still years ahead of Proxmox when it comes to both reliability and features.
malikto44@reddit
IIRC, it was one of the few virtualization platforms that Microsoft "blessed". I do know it is still around, and I think Oracle still supports it, but it would be nice if some other Linux maker had full enterprise support for it. I know it wouldn't hurt RH any to have it as an alternative to OpenShift.
Creshal@reddit
Oracle stopped selling new support contracts like three months before we realized it would've been better to get one from them. :/
malikto44@reddit
This sounds like a business opportunity. Create a company, get some people who know what they are doing with coding, make downstream products of oVirt and other items with support and code changes, and allow upstream updates to happen, so if the oVirt downstream gets patches, oVirt can pick them up and use them.
This would allow a company to come out with a VMWare competitor all with effectively zero R&D, and also greatly benefit the F/OSS community. For "secret sauce", sell add-ons, perhaps an enterprise control plane, perhaps SAAS stuff, that is more of service based items, than code.
Creshal@reddit
ovirt is in a weird position where it's technically still alive, thanks to community contributors, and the two commercial offerings are still… around, for existing licensees. Stepping into the middle of all this would be legally exciting, and you need people who're really good with virtualization, enterprise hardware, and automation plumbing, and also want to deal with Java and XML all day. And able to provide 24/7 support world wide.
I suspect there's good reasons why nobody's done that, the venn diagram of requirements does not look like it'd be fun.
malikto44@reddit
It would take some cash, but the results would be lucrative, especially if one could convince Veeam, Nakivo, and other backup products to support it. In fact, this could actually become a viable niche. Yes, the company may not "own" the product, but secret sauce can be kept separate, and most of the basic coding isn't needing secrecy, especially if there are some SAAS items involved.
Digressing slightly, I can see a company doing this with FreeIPA/IdM. Keeping that maintained, but offering a product to allow directory access via the cloud with a local replica being the source of truth. This way, authentication is closely controlled, but one doesn't have to actually touch the product. It may be too similar to Okta's model though.
Creshal@reddit
I wish you the best of luck getting VC attention.
You mean Redhat? FreeIPA is already the value-add commercialized spin.
ChemicalGuide82@reddit
And how long before they dramatically increase prices?
bongthegoat@reddit
They already did last year. Our openshift is more expensive than our vcf licenses.
0xe3b0c442@reddit
The beauty is you’re paying Red Hat for support, not the product. Everything in OpenShift is available Open Source. Paying Red Hat gets you support and integration testing, and pretty UIs. That’s about it.
If Red Hat pulled a Broadcom, I have a nearly identical OSS stack ready to deploy at a moment’s notice.
Fortunately Red Hat has been a great partner, hopefully that continues.
brokenpipe@reddit
They actually went down; they got rid of core pricing earlier this year.
Sure, eventually I'm sure they'll come back to collect.
Runnergeek@reddit
With Red Hat the software is open source, so while they might increase the price, you would be in a better place because you could self-support.
Bubba_Phet@reddit
This ^^^ We are currently starting our journey into OpenShift with Portworx for DR. Very promising on the OpenShift side. Less promising for me as a DR Engineer and no Portworx certs currently available (at least to me). But it's all evolving at a crazy rate. All thanks to Broadcom.
nVME_manUY@reddit
DR engineer? That's a first
0xe3b0c442@reddit
This sentence tells you all you need to know about the state of enterprise data security. 😁
nVME_manUY@reddit
I'm from latin america so yeah, it's bad
teddyphreak@reddit
Actually we did look at Harvester, which is roughly the same stack but provided by Suse using RKE2 which is a product we use heavily.
It's an interesting option but in our geographical region only the top level technical sales manager from Dell knew what we were talking about, operational risk under those conditions is too high for us
H3rbert_K0rnfeld@reddit
Or OKD plus KubeVirt for a freer version
speeder2002@reddit
If the rate increase is because of hardware costs going up (which it likely is), all other vendors are in the same position. Good luck procuring hardware at a reasonable cost at this moment.
Pseudometer@reddit
Nutanix will sell anyone on a much lower price to get them to switch. But they always increase prices by a huge margin on renewal.
Teleports2000@reddit
I work at Nutanix, renewal is a 5% uplift.
sinclairzxx@reddit
Can you prove that? Is this company wide policy for every customer account and opportunity?
mcmatt93117@reddit
Yea, two renewals deep now with them.
Don't recall off the top of my head what either were exactly, but they were low enough that they didn't register as anything other than a standard year-over-year COL-type increase. Wasn't anything major.
I'll check Monday, but as the fella above or below me said, maybe 5%, definitely less than 10% for each?
MrFibs@reddit
It's been a hot minute since we were a Nutanix shop, maybe 4 years now, our renewal was pretty whatever at a pretty basic $80k 3 node cluster, but we prioritized the flexibility so shifted to AWS. Obvs the current climate changes everything, plus tech orgs tend to turn pure evil over night nowadays, but food for thought.
Dave_A480@reddit
Proxmox with Ceph storage
placated@reddit
Love to see Ceph making a comeback.
Civil_Asparagus25@reddit
Comeback? Ceph never left
DerBootsMann@reddit
what do you mean by that ? ceph never went anywhere, and it’s here to stay
placated@reddit
It sort of had a big moment when Openstack was a thing, and it seems to be having somewhat of a renaissance again.
DerBootsMann@reddit
software updates are pita , even ill-famous nutanix is way ahead
placated@reddit
You can’t just “dnf update” ? That’s all the harder it used to be.
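And on a cephadm-managed cluster these days it's orchestrated for you; roughly something like this (version string is just for illustration):

```shell
# cephadm rolls the upgrade through the cluster one daemon at a time,
# respecting failure domains; no per-host package juggling needed.
ceph orch upgrade start --ceph-version 18.2.4
ceph orch upgrade status   # poll progress while it rolls through
ceph -s                    # expect HEALTH_OK once it finishes
```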
DerBootsMann@reddit
you sure can ! that’s what usually comes up next ..
https://www.reddit.com/r/openstack/comments/1q472tt/do_you_upgrade_your_openstack_periodically/
placated@reddit
Ooooh Openstack. I thought you meant Ceph. Yea Openstack upgrades are rough.
DerBootsMann@reddit
nah , this ain’t ceph , it’s openstack .. my bad for the mix up , should’ve been clearer on that one !
Dave_A480@reddit
It being integrated into Proxmox & VMWare turning into assholes kinds of helps there...
throwawayofyourmom@reddit
Don't choose proxmox ceph if read/write speed matters
0xe3b0c442@reddit
Eh?
I can't speak for Proxmox, but Ceph read speeds are insane on NVMe, and writes aren't bad by any stretch.
This sounds like someone who used Ceph back when spindle disks were all that you could get.
throwawayofyourmom@reddit
Write speeds have a hard cap that is pretty low, if you're okay with 100MB/s then fair enough
0xe3b0c442@reddit
Um, no, not even close.
We get multi-GB write speeds on ours. Effectively line rate for the SSDs.
The only real limit here is the storage and networking speeds.
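If anyone wants to measure their own cluster instead of arguing numbers, `rados bench` is the usual sanity check. Pool name here is just an example; make a throwaway pool, don't run this against production:

```shell
# 30-second sequential write benchmark against a disposable pool,
# then a sequential read pass over the objects it wrote.
ceph osd pool create bench 64
rados bench -p bench 30 write --no-cleanup
rados bench -p bench 30 seq
rados -p bench cleanup
ceph osd pool delete bench bench --yes-i-really-really-mean-it
```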
sheep5555@reddit
I am able to get 2GB/s write speeds on ceph cluster at work (big B), not sure what you're talking about
sentient-hardware-55@reddit
I can't speak for Ceph at scale, but yeah, I use it in my home Proxmox cluster, which uses NVMe, and it's great. I'm using 10Gb for the public and private networks, so maybe that helps. I think it's worth considering for small businesses that can put the money down for it.
wangston_huge@reddit
If read/write speed matters you simply need fast storage and networking, same as always for any hyperconverged setup.
roiki11@reddit
Except both nutanix and vmware esa smoke any ceph configuration. There's just no comparison.
Ceph really needs to get crimson on track.
Capt91@reddit
Nutanix would be fastest for Fintech.
Vmware would be fastest for commerce site.
Ceph would be fastest for a video streaming site.
Fastest needs use context.
DerBootsMann@reddit
who told ? you know nutanix is switching to spdk nvmeof , so any spdk target vendor out there would run circles around nutanix , like now
DerBootsMann@reddit
yeah , perf could be better , esp all-flash , you absolutely got the point here
snark42@reddit
If you're going to pay for vmware esa or nutanix you can probably afford to run ProxMox with Everpure/Pure, VAST, WEKA or Qumulo and get better performance in the same or smaller footprint (since you don't need servers loaded up with storage.)
Bladelink@reddit
That makes me think of the people in the past who would compare iPhones and Macbooks against other competing hardware....that cost half as much. "My macbook is faster than your laptop!" "Uh, yeah barely I guess, except this laptop was $300."
roiki11@reddit
Those systems do come bigger though. You can get esa servers that are 1u. While pure is 3u by itself and those other systems are 4 and up at a minimum. But you do get better performance for that cost.
Not sure about the cost but some vsan is included with vmware licenses.
Bladelink@reddit
Yeah, I have some IO issues in my homelab with Ceph (~70TB on mixed disks over QSFP), but that's only because I'm using a handful of consumer-grade SSDs that don't have the write caching Ceph needs, especially for lots of small files or database transactions. But that's just a hardware issue on my part, everything else is fine.
Dave_A480@reddit
If read write speed matters you just might want dedicated storage hardware instead of hyperconverged
throwawayPzaFm@reddit
If you give it some thought, DAS will always have higher performance.
It scales differently and you might need something not local, but performance isn't ever going to be the reason.
Cooleb09@reddit
Don't choose Ceph if read/write matters
Unless you have a dozen or so hosts and 100GB+ network connectivity.
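The network caveat has a concrete basis: on a replicated pool the primary OSD receives each write once and forwards a copy to each of the other replicas, so its egress link carries (replication - 1) times the client stream. A rough sketch assuming size=3 and ignoring protocol overhead; the figures are illustrative:

```python
# Why the primary OSD's NIC caps a single Ceph write stream.
# Illustrative math only; real throughput depends on overhead and tuning.

def max_stream_mbs(nic_gbits, replication):
    nic_mbs = nic_gbits * 1000 / 8      # approximate line rate in MB/s
    fanout = replication - 1            # replica copies the primary sends out
    return nic_mbs / fanout             # egress-limited single-stream ceiling

print(max_stream_mbs(10, 3))   # 10GbE, size=3 -> 625.0 MB/s
print(max_stream_mbs(100, 3))  # 100GbE        -> 6250.0 MB/s
```

So on 10GbE a single stream tops out well under the SSDs' line rate, while 100GbE moves the bottleneck back to the disks.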
UffTaTa123@reddit
Proxmox is the greatest. I moved from Hyper-V to Proxmox and my only complaint was that I hadn't done it years earlier.
vane1978@reddit
Does Proxmox offer U.S technical support?
AlkalineGallery@reddit
Yes, but only 8x5 NBD. Third party support is very good though.
panjadotme@reddit
Name drop em
sheep5555@reddit
Croit has 24/7 support
DerBootsMann@reddit
your local msp would do a better job
sheep5555@reddit
ive been happy with them, they are very knowledgable about ceph
DerBootsMann@reddit
local msp can drive to your place , and ex ink tank ppl own the product so support is better , i see no place for an overseas consultant , kinda falls in between
sheep5555@reddit
croit is in the USA as well, but i think remote assistance is adequate like ive had for every other hypervisor vendor
DerBootsMann@reddit
as a side note , the u.s. is huge , and i kinda doubt someone out in omaha , ne would hop on a plane to physically help with an on-prem ceph deployment in ft lauderdale , fl .. anyway , i get your point , not trying to argue , whatever works for you man and keeps you happy !
sheep5555@reddit
i dont understand, ceph has nothing to configure that would require boots on the ground in person, do you think I am unable to put disks into caddies? every other vendor does remote support only, why are the standards for ceph different?
ogrimia@reddit
ISS US offers 24/7
IndicanBlazinz@reddit
45Drives has been pretty damn good.
scristopher7@reddit
> Third party support is very good though.
Nah I would highly disagree
AlkalineGallery@reddit
compared to the shit show that is hyperv?
MrFibs@reddit
I use Proxmox in a 4 node cluster in a pretty basic setup for my homelab/personal servers (haven't touched Ceph; got an Unraid NAS separately for storage, most disks, and backups, plus B2 for offsite). It's honestly been super stable, and I know I could crank things up to a higher-effort/more secure setup that might be SMB enterprise ready, but the idea of moving the business to Proxmox gives me anxiety. With how AI-brained our leadership is, in like 2 years, if our current golden children fall off, most techs would be wholly unequipped to actually deal with real IT. Current-day HW pricing probably makes the AWS vs on-prem question a wash though.
sheep5555@reddit
For us AWS came in at an estimated ~$125k/yr (no idea on egress fees etc.), we got a $500k quote for 5-year VMware licensing + hardware, and Proxmox was $100k of hardware plus $10k/yr for 24/7 support + licensing. So basically we can build a new cluster every single year for the price the other vendors would be charging.
gsrfan01@reddit
Assuming new hardware needs to get purchased and you want professional services or closer support 45Drives has treated us well on the Ceph side. Haven't specifically used them for Proxmox but their support for their servers and Ceph has been top notch.
Original-Hornet786@reddit
We just had a couple of meetings with them and are waiting on a ballpark quote for a hyper converged setup to see if it’s worth it to move off of VMware. They are very impressive so far but we’re a hospital and this whole thing is pretty terrifying.
gsrfan01@reddit
For what it’s worth, our Ceph clusters handle body camera storage for our police department. We’re east coast so parts are next business day. Their training and professional services were excellent and can definitely bridge the gap in knowledge and having paid support 24/7 also helps alleviate stress about deploying open source I’ve found.
If you're more comfortable with VMware's style of doing things, XCP-NG + Xen Orchestra from Vates might be a better fit. Built-in backup is a nice plus too. No production experience, but I've had it at home for 5+ years and it's probably what I'd go with if we had budgets gutted.
We did switch from Nutanix + VMWare to Nutanix AHV and have been extremely happy. This was back in August though before the hardware shortages and price hikes.
Original-Hornet786@reddit
We put Nutanix at two smaller hospitals that we acquired last year and I do like how easy it is. We are getting quotes on that as well but the hardware costs for everything is just crazy so we’ll see.
Original-Hornet786@reddit
Yeah, I think 45 Drives would be a good fit support wise. I really like how they seem very professional and knowledgeable. We did speak to one company that provides support for XCP and they were not impressive at all! There was no way I'd want to rely on them for support. I don't know why we haven't looked into Vates, but thanks for the reminder. I'll contact them this week.
sheep5555@reddit
+1 same kind of situation, i was a bit nervous about proxmox at first but having a consultant double check design + deployment was worth the money. It has been more stable/less buggy than our vxrail setup
Hebrewhammer8d8@reddit
Only if you have a person who is comfortable with the Linux ecosystem and Ceph, or who is going to learn to be a dedicated expert. If you don't have a person or team dedicated to learning the Linux ecosystem & Ceph, it will not be a fun time.
halfhearted_skeptic@reddit
I’d be surprised if a Ceph consulting firm cost more than a VMWare license.
sheep5555@reddit
Proxmox + Ceph is easier for me to manage compared to VMware/VxRail vSAN, just IMO. It turned updating from a week-long support mess with Dell into an hour or two.
Bladelink@reddit
If you don't want to pay for the expertise of administrating it yourself, then you can go pay vmware or nutanix their blood money to manage it for you. That's the tradeoff of offloading that work to the vendor.
polypolyman@reddit
I'm really happy with DRBD/Linstor as well... but ymmv based on needs.
DerBootsMann@reddit
everybodys in love with drbd right up to the moment it goes split-brain and nukes your data
beatfried@reddit
really shows the state of this sub with hyper-v / s2d being higher up than proxmox / ceph.....
LocksmithMuted4360@reddit
100%
fabioluissilva@reddit
This is the way
mcdowellster@reddit
This is absolutely the way. Ceph Days Berlin 2025: A Deep Dive into Open Source Storage https://share.google/BEZasqLTqMLE5wZ7G
When an entire country not only helps fund a project but runs it in production... You can probably trust it.
Never mind that I've deployed a bunch of times 😅
Serafnet@reddit
Can vouch for this as well.
Bbeat92290@reddit
Hello, in my view the price increase you're facing isn't down to Nutanix but rather to hardware costs, so whether you're on 3-tier, HCI, or anything else you'll hit the same problem everywhere. That said, if you really liked the Nutanix tech, you can always look at their NC2 solution... with OVH in particular you can keep today's prices but with hosted infrastructure, and ultimately reduce your costs.
tlrman74@reddit
Many large implementations in datacenters have been using Proxmox with Ceph. With the release of 9.1 it's even better. I run a small cluster of 5 hosts with Ceph and came from Vmware Vsan. I'm finding Proxmox just works better and is easier overall to manage, if you have Linux knowledge. The biggest transition pain point for me was the tools surrounding our VMware environment. We still use Veeam but ended up with alternative tooling for monitoring and management.
Locodegreee@reddit
Out of curiosity, what did you do formally to learn Proxmox with Ceph? Trying to scope out alternative options for our environment, but I just don't know anything about Linux. Is there a specific distro that would be beneficial for learning, like Debian?
tlrman74@reddit
There are training programs in place. You can see them on the Proxmox website. Another way to go, if your group doesn't have much Linux experience, is to go with a partner. If US-based, 45Drives has built a good business around Proxmox support and hardware specific to Ceph storage and clustering. There are a number of others in the US, like ICE Systems, as well.
sheep5555@reddit
Proxmox training courses are adequate to learn everything; Ceph doesn't need much additional training, you just need to set it up right and make sure you are using fast SSDs and a fast network.
teddyphreak@reddit
We are in the process of quoting with Dell & Canonical Managed OpenStack. I still don't have numbers to compare but I'd say the outlook so far is promising
ChadTheLizardKing@reddit
HPe is putting together a solution with Morpheus. They got it as part of an acquisition a few years ago but did not do anything with it because why would you; then VMWare did its thing and now hypervisors are no longer a commodity.
They are looking to make it a competitive offering if you are willing to go full HPe. It is KVM under the hood like everything else.
sont21@reddit
It's not ready; it's behind Proxmox in functionality.
ChadTheLizardKing@reddit
Can I ask what your experience with Proxmox was like?
teddyphreak@reddit
Our relationship with HPE has been quite poor. We went ahead with them in one of our DCs with a full dHCI deployment and the experience was rough. Considering that solution was way more mature when we went with them than Morpheus seems to be at the moment, they are out of consideration altogether.
ChadTheLizardKing@reddit
Yeah they are all over the place. We go through a VAR and lean on the VAR for any PS.
Greenlake is, um, something.
1esproc@reddit
It's hardware agnostic but only supports Ubuntu, and you install it on top of the OS yourself; it's not delivered as an image.
I would not trust pricing on it to stick where it is for long.
I really dunno what the market is for it. Why would you jump ship from VMware onto a newish Me-too! platform with no track record, owned by an enterprise corp known for fucking you at every angle.
ChadTheLizardKing@reddit
I do not disagree with you but the market is in flux; VMWare doing a smash and grab while exiting the market puts a lot back on the table. It used to be a "VMWare" market and then KVM for everything devops-sy.
I would guess that HPe is going to (try) to turn Morpheus into a Nutanix competitor and then spin it off.
In fairness to HPe competitors, Nutanix will do the same - they get you onto the platform with a nice, reasonable intro price and then hit you hard on renewals. So I totally agree that you cannot trust the renewal pricing; but that is everybody now unless you are willing to hire the expertise to run your own KVM or Xen-based cluster using an out of the box distribution. Practically speaking, HPe will give Morpheus to you for a song today so you can get a 5 - 7 year deal on it and then see where the market takes it.
The interesting play, depending on your infra state, is whether Morpheus is "good enough" for now. You do the multi-year deal and then, since it is KVM, you look at your options in a few years.
Honestly, the hypervisor market is going to be all over the place for a few years IMO. Memory pricing, with VMWare's increases, has totally changed the calculus of on-premise costs.
Please do not take this as a "pro" morpheus post. I am sure it is a hot-mess under the shiny badge; we are like everybody else - looking at our options to figure out if the VMWare ransom is worth paying another year or do we exit.
pornogeros@reddit
You mean their openstack product or a fully managed service ?
teddyphreak@reddit
I just had a talk with them this week and they provide the service in 2 SKUs, one for deployment and an add-on for fully managed service. I'm trying to get the project approved for both if possible.
ThroatMain7342@reddit
I just finished a canonical openstack deployment. It’s really good & stable
474Dennis@reddit
If you're an active Acronis Cyber Protect Cloud partner, I'd like to let you know that there is an upcoming Acronis Cyber Frame solution for MSPs who are looking for VMware/Nutanix alternatives for their clients, and who want to host their clients' workloads in alternate locations, such as a data center managed by the MSP or in the Acronis Cloud.
Life-Assist7881@reddit
We looked at similar options recently.
One thing to watch is network requirements, once you scale HCI clusters, east-west traffic becomes a real factor (especially 25G+).
Some setups look fine on paper but fall apart once you hit throughput limits.
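One way to sanity-check that: estimate how long re-replicating a failed node's data would take over the cluster links, since recovery is pure east-west traffic. A rough sketch with assumed figures (the 70% usable-fraction is a guess, not a measured number):

```python
# How long does healing N TB take over the east-west network?
# Assumed figures for illustration; real recovery is throttled and bursty.

def rebalance_hours(data_tb, link_gbits, usable_fraction=0.7):
    """Hours to re-replicate data_tb at usable_fraction of link rate."""
    mbs = link_gbits * 1000 / 8 * usable_fraction   # achievable MB/s
    seconds = data_tb * 1_000_000 / mbs             # TB -> MB
    return seconds / 3600

print(f"{rebalance_hours(50, 25):.1f} h")   # 50 TB over 25GbE  -> ~6.3 h
print(f"{rebalance_hours(50, 100):.1f} h")  # same over 100GbE  -> ~1.6 h
```

The same arithmetic applies to steady-state replication traffic: every guest write crosses those links, so link speed sets the floor for recovery windows and the ceiling for write throughput.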
Tricky-Service-8507@reddit
25g is tiny but very good points
ProofPlane4799@reddit
OpenShift is the way to go if you need good support.
https://www.linkedin.com/posts/marco-torres_las-cruces-public-schools-virtualization-activity-7425565710833197056-32cZ?utm_source=share&utm_medium=member_android&rcm=ACoAAAj36kEBJcSTEnKD6BXGI2mw3kXXDwzQLFE
Zenkin@reddit
Do you actually need hyperconverged? Like the main benefit is being able to scale quickly, so you can grow from 3 nodes to 6 nodes to 12 nodes and not have significant downtime. Good stuff.
But if you're not actually expecting to grow your cluster, why not look at a traditional stack? IBM FlashSystems have been super competitive on price for SANs in the past few years, and if you can get SAN zoning configured once it's basically set it and forget it. Sure, you won't have "one throat to choke," but how has that been working out for you with all your eggs in one basket?
flurfdooker@reddit
A buddy of mine is actually buying an old HPC cluster from a university and repurposing it for a new Proxmox build. He's in an odd position in that he has tons of data center space but very little money, and he needs memory/CPU but not bleeding edge GPU. The university is getting rid of the cluster because the GPUs don't cut it anymore.
gmc_5303@reddit
Fs5200 user here. Smoking fast and dirt cheap, going to attach it to my proxmox dev cluster next week to pilot it for the VMware to proxmox migration pilot.
rismoney@reddit
Aren't there compromises? Proxmox does not have a clustered file system, so it is extremely dangerous to RW mount a lun to 2 nodes. Also I don't know if you get thin provisioning and have limitations on snapshotting. Need to dig into iscsi and proxmox to know what will and will not work.
gmc_5303@reddit
You need to do your research on LVM in 9.1, things have changed.
rismoney@reddit
They have changed, but I don't think they cover a lot of "VMware-ish scenarios", i.e. snapshots are thick, and LVM-thin isn't cluster safe. Agreed, you need to dive in for specific use cases.
Best to use Ceph
gmc_5303@reddit
I don’t need lvm thin, I’ve got compression on the SAN.
0xe3b0c442@reddit
Echoing this myself. Hyperconverged can be useful, but it adds a layer of complexity to the stack and its operation that running storage separately can mitigate, especially if you have dedicated storage experts or contracts.
jonboy345@reddit
And their higher end stuff with the flash core modules are fucking incredible. Pricey, but holy shit.
SlateRaven@reddit
This here. When we refreshed our infrastructure, we actually found it cheaper to go dHCI instead because of deals on individual components; plus we weren't growing much, so HCI didn't make sense despite the sales people's best efforts. We went all Nimble/Alletra for storage, Mellanox for switching, and ProLiant for compute.
ub3rb3ck@reddit
I wouldn't say that's the MAIN benefit, but it certainly is one.
bakonpie@reddit
why is Hyper-V not an "all in one" solution?
archiekane@reddit
Hyper-V and Starwind VSAN, works well for us.
newboofgootin@reddit
This is what I’m looking at. Nutanix completely fucked us last month, so we are abandoning the brand too.
mokdemos@reddit
What size storage you need?
newboofgootin@reddit
Who are you with?
mokdemos@reddit
I'm just an SA that has used a few SAN's...was wondering what you were looking for.
newboofgootin@reddit
Different customers ranging between 8 - 16TB. Nutanix was nice because of block level hot/cold storage tiering. Could get a lot of cheaper storage on spinning disk.
_SundayNightDrive@reddit
You may have just solved a problem for me. Thanks!
DerBootsMann@reddit
this is a lovely combo , product was rock solid , support team was top notch . not sure how it looks after the acquisition though ..
oldwornradio@reddit
Run this for a bunch of clients, generally it’s pretty damn solid
flecom@reddit
ran this setup for a LONG time before going to proxmox (eliminated all microslop software) and it was a pretty solid setup
Vivid_Mongoose_8964@reddit
+1 for this
adunedarkguard@reddit
While our on prem workload is much smaller than it used to be, I replaced some old hardware/VMWare cluster with StarWind/Hyper-V cluster, and it was a fairly painless process.
MilkAnAlmond@reddit
Seconding this. No problems. Support is really good.
woodyshag@reddit
Use S2D and you have hyperconverged.
Arudinne@reddit
After my experiences with S2D, I never want to touch that shit again.
OkVast2122@reddit
Hyper-V is generally OK, while S2D is a disaster.
Arudinne@reddit
Yes, I suppose I should have noted that.
For our modest needs, Hyper-V is just fine.
randomugh1@reddit
Oh, 5120 errors "that can be safely ignored" but cause massive disk corruption on the VMs. Or when you open FCM and see the status of each role slowly update one at a time, you know you're in for a bad day.
peraving@reddit
Ugh, 5120 gave me flashbacks to a couple of Hyper-V clusters that would have intermittent stability issues: roles failing, hosts getting isolated and quarantined. No solution after so much troubleshooting. I rebuilt the entire cluster node by node and it finally went away.
And yeah, standalone hosts whilst a hassle to migrate workloads around provide nights of peaceful sleep which is far more valuable IMO.
Arudinne@reddit
Felt like my hosts rode the knife's edge of stability. Things would be fine for hours, days, even weeks. Then something would shit the bed if I tried to live migrate or reboot a VM, or quite often for no reason at all.
Tried various alternatives that didn't involve hardware, because my boss wouldn't give me the budget for it, but which still ended up costing tens of thousands of dollars, only to eventually have the same fucking problems.
I lost a lot of sleep and hair trying to get things stable before I just gave up and ran the hosts as independent servers for nearly a year.
One of our VARs found a way to save us $20K a year on W365 licensing and I was able to convince my boss to let me use that savings to get us a Powerstore, which we got up and running earlier this year before the RAM and flash shortages. Zero issues.
Fuck S2D.
mokdemos@reddit
Had S2D fail and took all my data. Just use clustered SAN now.
Angelworks42@reddit
That's what we use on hv - but we ran into some really gnarly issues with sriov enabled (which it is by default). This is netapp using smb3.
mokdemos@reddit
I hit a Broadcom NIC issue just the other day. They would reboot the server under high load. Found out I had to disable Virtual Machine Queue on every NIC, and so far no reboots.
DerBootsMann@reddit
the only reliable smb3 stack provider is microsoft itself , while everyone else keeps running into weird phantom issues .. we’ve tried a ton of smb3 implementations , like netapp , pure , infinidat , and vast , and we could never make it work reliably across all clients and workloads as it’s always something ! sriov weirdness , perf hiccups , or straight-up lockups during vm and sql db failover .. pure is probably the best outside the microsoft stack , and vast was absolutely the worst
DerBootsMann@reddit
use hyper-v and s2d and you’re asking for hemorrhoids ! ok , hyper-v is mostly fine , and it’s s2d that usually ruins the party for everyone
NotBadAndYou@reddit
Microsoft sells that as Azure Stack HCI. Cloud-managed, but on-prem.
DerBootsMann@reddit
no , they don’t ! microsoft sales ppl get zero reward points for azure local , and it’s no part of their quota , so they don’t push it at all
Zazamari@reddit
It's now Azure Local, but yes
Cormacolinde@reddit
Also, it’s an absolutely terrible clusterfuck. Wouldn’t even touch it with a 10-foot pole.
NegativePattern@reddit
Can you elaborate more? I have an org looking at it for a PoC in the summer.
DerBootsMann@reddit
product ecosystem doesn’t exist .. you’re basically on your own
run !
Creshal@reddit
Documentation is made up, support doesn't exist, Microsoft themselves seem to barely understand what the stack does, their "certified" hardware partners understand less. You'll run into disk-corrupting heisenbugs about monthly and there is nothing you or Microsoft can do about it.
Some-Platypus5271@reddit
My Corp has tried to poc it twice now. This time with two clusters. Going back to VMware. Failures and no answers. Lots of outages
nmethod@reddit
It's a garbage product. I was loosely connected to a large deployment that failed miserably -- ignoring the stability issues, the killer was unpatched vulnerabilities that seemingly had no roadmap for a fix that killed it in a heavily regulated environment. Millions wasted...
Cormacolinde@reddit
Bad documentation, bad support. No training available, no SMEs, no one knows how to fix ANYTHING with it. One customer had the hardware vendor do the configuration, it was a complete mess that barely worked and it ended up all the networking was misconfigured. Probably because the guy setting it up had no idea what he was doing. They rebuilt the cluster TWICE, and I’m not sure it’s working after 18 months. Another customer had the worst experience migrating from VMWare, with migrated VMs being horribly unstable, and despite my (and others’) best efforts we have been unable to make migrated VMs work with Azure Arc. Microsoft has documentation that says how to do it, but some other information says it’s not supported. Even their own support don’t understand or know anything about the product.
Oh and the control plane is built on a kubernetes cluster (yep) that you have no access to, and modifying any of it is unsupported. Once deployed, if you need to change ANYTHING you have to redeploy from scratch. That includes changing the DNS server used by the cluster.
Management is supposed to be “easy like Azure”, but it’s really an unholy mess combining traditional Hyper-V tools, local Windows Admin Center, Azure Windows Admin Center, Azure Arc, Azure Management blades, PowerShell, Azure CLI and voodoo.
And performance is terrible.
Don’t even bother with a PoC. You’re wasting your time.
craigthackerx@reddit
Not OC
I've never ran it myself either, but I do know of a company in the UK that PoC'd it.
Essentially, do you use Azure? Yes, good, you know how stack works? Wrong. He told me they just had issue after issue and support was basically non existent, as well as getting any consultants in.
I seem to recall they had some pain around VM disks, just not going into the LUN or something.
Either way, I actually think they went OpenStack in the end.
Falldog@reddit
Great, as long as you never plan on expanding the cluster with different nodes.
audioeptesicus@reddit
Never ever run Azure Local (formerly Azure Stack HCI).
The solution is not proven, support is abysmal, and we've had numerous outages with MS not being able to provide any conclusions on root cause.
If we weren't so sure that our predecessors and current management were engaging in some shady shit that is preventing us from replacing it with something stable, we would have gotten legal involved and replaced it with anything else.
Fuck you, Mike.
OkVast2122@reddit
Support? Azure Local got support? You sure about it, mate?
OkEssay4173@reddit
Same experience, in two locations
NotBadAndYou@reddit
Lol don't worry, that was NEVER a consideration of ours...
audioeptesicus@reddit
You hiring?
juitar@reddit
That is what we moved to, it's perfectly fine at this point.
Secret_Account07@reddit
We tried Hyper-V, well POCed, and it ain’t it. VMware is far superior. I wish this wasn’t the case but it is
Now if I was a small to midsize org? Yeah
But for our setup just unfortunately didn’t work out. We did try a few alternatives, one of which came close.
discosoc@reddit
People really need to start elaborating on these sorts of comments. It's basically useless to criticize what is otherwise a perfectly enterprise-ready product but fail to provide any detail.
Reverent@reddit
The issue is when people's evaluation criteria is basically "Is it X product".
Is hyper-V going to drop in replace vmware? fuck no. Can it meet the same business capabilities? Of course! Adjust your expectations.
bakonpie@reddit
agree Hyper-V isn't as solid as VMware, but for the price I'm willing to make minor sacrifices. FT is also not possible in HV, you might be conflating HA with FT. your failover/HA issue isn't something common so I'd look at your setup before claiming it is an issue with Hyper-V in general.
warpurlgis@reddit
You can get sort of close to VMware ft if you have replicas
TahinWorks@reddit
Enterprise here. We were Hyper-V 2010-2016, then VMWare for 10 years, now back to Hyper-V. I personally have experience with VMWare from 3.5-8.0. Hyper-V is no VMWare, but it's great compared to what it was 10 years ago. The addition of WAC and Arc closes the gap against vCenter, but not completely.
The main problem with Hyper-V in my opinion is its lack of a unified learning path; it has nothing like VCP. MCP style Server courses will tell you all about configuration windows, but not best practice or how it all fits together.
Pair that with Hyper-V's inherent tolerance for misconfiguration. It will let you enable features on unsupported hardware, it won't hold your hand through storage setup, it will happily let you configure CAU incorrectly, and it won't tell you about anything you did wrong until it doesn't work. That, to me, is where a lot of animosity toward Hyper-V comes from.
It does have more reliability issues than VMWare. For example, if a Veeam backup job fails and leaves your disks on a checkpoint disk, VMWare was really good at snapshot consolidation and self-repair, while Hyper-V will tell you it's on its normal disks but is still actually writing against its checkpoint disks. Little stuff like that
Hyper-V needs some tweaking when you spin up a new cluster. But you identify the soft spots, write detection and monitoring scripts, and move on, and soon enough Hyper-V is running on par with VMware.
I would run Hyper-V a thousand times before touching any hyperconvergence stuff like Nutanix. If Broadcom taught me anything, it's to not vendor-lock your entire datacenter.
Mr_ToDo@reddit
Hypervisors really do seem to be one of those things where, once you've really grown into it, there isn't a clean, mostly identical feature set to offer as a good alternative
And it does feel like VMware was sitting at the top too, making it all the harder. Not often you see a de facto best choice lose its place so quickly. Normally it'd get shittier over time or a competitor rises up. It's just wild how this played out
SknarfM@reddit
Can I ask specifically what didn't work with hyper v for your company? We need a stop gap option at a minimum to utilise older hardware. And are likely to at least stand up a Hyper-V POC.
junon@reddit (OP)
Oh because I'd have to source my own hardware and I'm looking for more of a "one throat to choke" situation, mostly because our international sites are small and dealing with international support myself for the coordination and whatnot is a pain.
Injector22@reddit
What's everyone using to cluster HV together and provide load balancing? Last I checked failover cluster doesn't load balance. Unless there's something new and I'm missing it.
I know about vmode but it's not GA
Readybreak@reddit
We went Hyper-V with zero issues; we have a solution that is basically hyperconverged, just without the fancy HTML5 interface.
FunSea7083@reddit
Anybody using hpe's vme?
MetroTechP@reddit
HPE Morpheus
CrewOk3589@reddit
I would look into Scale Computing! We moved over from VMware back in 2022 and it's been great.
BirthdayFamous1591@reddit
We're planning to reuse the hardware for now and use vergeio since it's license per host. The cost of new hardware is just unbelievable 😮.
BirthdayFamous1591@reddit
The challenges could be to migrate the VMs from Nutanix ahv to vergeio.
Competitive_Smoke948@reddit
have a look at the HPE option. if you're buying hardware, they might do a deal.
ScaleNinja@reddit
Proxmox and XCP-ng are good options, and if you’re looking for a multi tenant IaaS cloud platform then Apache CloudStack.
Tricky-Service-8507@reddit
There a few more projects for iaas but you’d be in good hands with Apache cloud stack
HunnyPuns@reddit
Man, it's a good year for Proxmox.
nev_neo@reddit
Hyper-V seems to be a good option. I'm currently testing a 3 node failover cluster and everything seems to be working fine.
Will be enabling S2D on it soonish - using a bunch of DC-class NVMe drives and some old SAS SSDs - not sure if any of them are on the certified lists. Just wanted to push this to its limit.
AlkalineGallery@reddit
We have hundreds of hosts. HyperV sucks hardcore. No direct support options and the SET HA options are to let the OS play ARP games... No MLAG/LAG support. They deprecated LBFO which did support LAG.... terrible product
nev_neo@reddit
People still use LAGG ??
AlkalineGallery@reddit
Sure, how else do you use four 25G links on a hypervisor? We have 4 sizes, 4x10G, 4x25G and 4x100G. What "better way" are you pretending to allude to?
OkVast2122@reddit
Just don’t! S2D is pants.
planedrop@reddit
Proxmox or XCP-ng are the way to go, XCP-ng is my go to and favorite but both are excellent.
Tricky-Service-8507@reddit
Neck and neck
qrave@reddit
Proxmox or open stack on supermicro, build platform devops all in code with agentic AI 😬🤣
aswarman@reddit
I have not looked at a quote recently but look into Scale Computing.
Magumbas@reddit
Hyper V baby
_gneat@reddit
It’s due to the massive price increases in memory. Every vendor’s hardware is increasing drastically. Just wait to refresh until prices come back down to earth.
Ok_Discount_9727@reddit
Are you living under a rock? HW is through the roof, if you need new HW you’re SOL.
Just take a look at memory prices
DavidKleeGeek@reddit
YMMV but I've had difficulties with some I/O-intensive workloads on any hyperconverged vendor. I've had some write latency challenges with moderate SQL Server workloads on Nutanix, plus others like VSAN. If you have high I/O-demand workloads like this, I'm all for a platform like Proxmox + Pure Storage or other good all-flash vendors.
Efficient-Sir-5040@reddit
Proxmox. Next question?
homemediajunky@reddit
You know, everyone saying how bad BC is fucking over people and to move to Nutanix should read this. We priced Nutanix, and looked at what they normally charge for renewal and ultimately, for the time being are sticking with VMware.
One thing we had to factor was, some of our infrastructure uses SANs, which is a no go for Nutanix. And Nutanix HCL is amazingly strict. Added to that is the human capital cost, training current employees, hiring new employees, the hidden costs kill ya.
Difficultopin@reddit
Hyper-V + StorMagic
OkVast2122@reddit
Only thing going for em is they’re British, which might rattle a few man across the pond, cause now your support route’s all long and twisted. Not that it matters much though, their support’s already dead on arrival, so adding a cheeky 6–10 hour lag ain’t making it any worse. Real talk, the whole thing’s a mess! Performance? Nowhere to be seen! UX’s living back in 2010 like it’s comfy there, and the UI’s straight outta 1996, no exaggeration.
Difficultopin@reddit
Sounds like you are full of BS.
DerBootsMann@reddit
sounds like some salty sales rep just trying to squeeze one more lil deal outta it lol
DerBootsMann@reddit
i’m kinda scratching my head on that one too , like who’s actually buying this stuff ? s2d’s got a reputation for a reason , folks complain all day , but it ships with the datacenter for free . ms support is legendary in all the wrong ways , but still , at least you can find people who know s2d , docs exist , community exists .. this niche exotic gear with near zero footprint though ? that’s where it gets sketchy , no field stories , no real operators , no battle scars .. feels weird , yeah
onetwobeer@reddit
We’re using VergeIO, it’s awesome
LookAtThatMonkey@reddit
Same here, I have two 2-node clusters running in DCs and about a dozen Edge node setups. It’s been bang on for us. I am loath to mention it here because you just get downvoted to oblivion.
DerBootsMann@reddit
that’s because ppl hate them
LookAtThatMonkey@reddit
Yeah I get that, I just wonder how many of those hating have tried it. I admit I didn’t see the hate until after we had demoed it; it may have swayed me elsewhere. Glad I didn’t, to be honest.
DerBootsMann@reddit
idk man , i can live with immature tech , like slow i/o , ugly cockpit ui , and lots of ai artifacts and whatnot . but getting played and pushed into something ? nah , i don’t roll like that
onetwobeer@reddit
Why? Did I miss something? So far it’s been a great solution for my company and I have peers in the industry running actual datacenters that run actually important shit on it and they love it. Is there something i’m missing?
DerBootsMann@reddit
yeah , yeah , yeah .. we absolutely believe you ! no $hit sherlock
yes , absolutely do
they’re banned from this and a couple of other subs for astroturfing
https://www.reddit.com/r/vmware/comments/18tne1y/vergeio_real_or_snake_oil/
onetwobeer@reddit
Lol ok got it, wasn’t aware they were banned. Fwiw, i do love their software. And i can tell you the federal government is enjoying it too. You know sometimes there’s more than one right answer out there ;) maybe dial back the vitriol a bit, or not
DerBootsMann@reddit
now you are ! it’s them and nakivo , which got bailed out recently , verge is not
my biggest problem with them ? they’re like a backwards fight club , in fight club you actually fight , just don’t talk about it . here ? everybody talks , nobody runs it in prod . you start pulling threads , all roads lead back to their own crew , ex-customers on payroll , vars , official employees , same circle , smells off . feels like they’re lining me up for a hit , and yeah .. hard pass , not my lane
and now you’re selling them to me ! why ? why would you even care ? got skin in game ?
DerBootsMann@reddit
and another verge spambot enters the chat !
MeleeIkon@reddit
Hyper-V is no good for HCI. Proxmox. Seriously, just use Proxmox.
I have a cluster right now with 576 CPU cores, 67.64TB of RAM and 733TB of storage, 200TB of which is fully NVMe flash. Dual 25Gb fiber from each host, with switches on a 100Gb backbone.
All on proxmox.
MeleeIkon@reddit
Works flawlessly. If you are doing HCI, you need kicking bandwidth. Minimum of dual 25Gb with a 100Gb backbone. Do not try it with 10Gb.
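To put the setup itself in perspective, the HCI plumbing on Proxmox is only a few commands. A rough sketch, assuming Proxmox VE with its bundled Ceph; the cluster name, IPs, and subnet are made-up examples:

```shell
# First node: create the cluster, pinning corosync to a dedicated link
pvecm create prod-cluster --link0 10.10.0.1

# Each additional node: join via the first node, announcing its own link address
pvecm add 10.10.0.1 --link0 10.10.0.2

# Install Ceph and keep its replication traffic on the fast back-end network
pveceph install
pveceph init --network 10.20.0.0/24
```

Giving Ceph its own `--network` is exactly the bandwidth point above: replication traffic stays off the links your VMs and corosync use.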
cryptofuturebright@reddit
Proxmox
fuck_green_jello@reddit
Similar thing happened in December. Price was going to jump 50%+ within a week if we didn't sign the agreement. Boss wasn't being bullied into signing before budget approval. Renewing VMware for a year to proceed with datacenter refreshes until we can figure out an alternative, probably Proxmox.
Big_H77@reddit
We reached the inflection point where moving to Azure was actually cheaper than sticking with VMware or Nutanix… It’s been five months and fortunately monthly spend has been stable at a 50% savings compared to the hype setup we had.
Granted, this is anecdotal to our situation, there are flipside situations where Azure/AWS becomes the devil and private cloud or on-prem hypes are cheaper.
nyckidryan@reddit
What are the plans for an Azure outage? Just curious.. seen a lot of stories and wondering how people are prepping for a DNS server missing a comma in a config file.. 😉
Big_H77@reddit
Oh we still keep a DC in our office locations handling low level stuff like Print Server and DNS, but they’re 1U Dell servers with ProSupport 4HR Same Day… Our firm was primarily in-house, then did the Rackspace thing for a bit, then finally into Azure.
Rackspace couldn’t eat the costs of the licensing after Broadcom entered the mix a few years back, but we were grandfathered in on their original pricing structure… That was until our Dell Hype became EOL on their end lol. The estimated cost for them hosting even 1 Hype shot up almost 100%.
nyckidryan@reddit
...I miss the days of NT4... 😂
Big_H77@reddit
No lie detected. We are seeing the benefits of the new age and it’s now become second nature, but cost-wise, things will never be the same lmao.
BloinkXP@reddit
Look at Azure Stack HCI. It is getting very complete and you can leverage a lot of entitlements with it.
OkVast2122@reddit
It’s Azure Local, mate, unless MS renamed it again LOL. Anyway, until they start accepting random SANs it’s a definite no-go for most of us.
BloinkXP@reddit
Yeah, we abbreviate it to AZL internally, but I have been eyeing this product for a while.. For us, we are shifting to HCI and our NFS will come from NetApp and then moving to HPE (if we need it).
OkVast2122@reddit
Hyper-V is SMB3-only, no NFS! Azure Local is getting some SAN support, like ex-ScaleIO, but it’s not generally available at least yet.
BloinkXP@reddit
It is coming...we worked closely with MS and have a great roadmap.
But honestly, we are a large enterprise and our perspective is very different.
OkVast2122@reddit
Nice! We’ve been getting very conflicting signals from the team.
BloinkXP@reddit
Man, it really depends on the team with MS. We happen to be lucky... the previous ones were less than adequate.
OkVast2122@reddit
I’m in IT for one of the biggest international food retail chains out here. Big boy ops, not some corner shop ting.
Crimtide@reddit
We looked at proxmox and XCP-NG. Ended up going with XCP-NG
stephendt@reddit
What made you pick XCP-NG over Proxmox? I couldn't give up Proxmox containers personally
PNW_Techs@reddit
We picked XCP-NG because the architecture is really similar to VMware and we don't use local storage. Until recently Proxmox didn't have a central management appliance; XCP-NG has a vCenter equivalent. I know with Proxmox you can control the pool from the host interface, but 3 years ago when we were evaluating both, Proxmox felt like a step backwards into the early versions of ESXi.
From my experience clustering in XCP-NG is really simple and works well. Install the hypervisor, add it to the pool, and then BAM, XCP-NG takes the pool settings and teams the physical NICs, adds the VLANs and attaches storage. I can be migrating VMs onto a host within 5 minutes of joining it to the pool. The only downside is in XCP-NG your hardware needs to be pretty much identical. If one host has a slower processor it will throttle the other hosts to make HA and load balancing operations work.
For containers we run LXC or Alma/Rocky Linux VMs with something like dockhand or Portainer for a management GUI. To be fair all my production containers are in AWS because that's where our ERP/CRM is.
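For the curious, the pool-join flow described above maps to the `xe` CLI roughly like this. A sketch only; the master address, credentials, and UUID placeholders are hypothetical:

```shell
# On the new host: join the existing pool; the pool master then pushes
# the NIC bonds, VLANs, and shared storage attachments down to it
xe pool-join master-address=10.0.0.10 master-username=root master-password='***'

# Once the SRs are attached, live-migrate a VM onto the new host
xe vm-migrate vm=<vm-uuid> host=<new-host-uuid> live=true
```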
Creshal@reddit
Don't worry, it still sucks.
Ayoungcoder@reddit
I run both ovirt and proxmox in production, and ovirt is definitely less stable (mostly from relying on gluster). Proxmox has plenty of issues as well. Openstack was hell so we dropped that, but we're still looking for something more stable that still ticks all boxes
Creshal@reddit
Ovirt, unlike Proxmox, has proper SAN support including distributed locking, so we just used it with netapps. Proxmox's ceph support is a nightmare in practice (hyperconverged is a lie unless you feel like wasting 90% of your HW's performance), so we're using it with the same netapps (except now we have to use NFS instead of FC… sigh). Makes it easy to compare how much instability proxmox brings to the table, since the storage is the same.
ApprehensiveRub6127@reddit
CPU clock speed is not affected if hosts use slower or faster processors. Instruction sets are affected if cpu’s are different class
Crimtide@reddit
XCP-NG cost as much as our VMWare did prior to the Broadcom takeover. We also liked the UI Of XCP-NG using Xen Orchestra better. It's just easy and intuitive.
DerBootsMann@reddit
proxmox is free ..
Crimtide@reddit
They both are, but not when you scale to Enterprise environments and are required to carry software assurance, support and maintenance plans.
stephendt@reddit
Good to know. How do you find clustering?
Do_TheEvolution@reddit
For me, it felt simpler and more reliable while things just worked without much tinkering or weird issues.
Got some notes when homelab testing.
DerBootsMann@reddit
mind sharing why ? thx
morilythari@reddit
I love xcp-ng but they need to get their storage updated. 2TB vhd limit in 2026 is silly
gsrfan01@reddit
Good news: QCOW2 entered beta in October 2025, and with that they're likely working further on SMAPIv3, which would, hopefully, begin righting the ship that is their storage stack.
They had to hard-pivot away from SMAPIv3 work thanks to Broadcom sending so much business their way and needing to bring their V2V and other tooling in line. The QCOW2 support came at least partially as a direct result of folks moving over from VMware with larger disks.
damodread@reddit
It supports QCOW2 now
Tricky-Service-8507@reddit
I have both in production at work and home And both work well
Leaha15@reddit
Do NOT try S2D with Hyper-V.
You get what you pay for. You could try Proxmox with Ceph, but if not, stick with VCF or Nutanix.
CrimPhoenix@reddit
Our Hyper-V appliances are running StarWind VSAN, which I recommend; they met our turnkey and internal treasury requirements. But as others have said, hardware costs and availability are going to be the real issue now.
farva_06@reddit
Hyper-V supports HCI as well.
HenryTheNoodle@reddit
Simplivity with VME
wardedmocha@reddit
proxmox. I am in the middle of a migration right now. It is 100% worth it.
GamerLymx@reddit
Look at XCP-ng and Xen Orchestra. Maybe it doesn't have all the features that VMware has, but it may be enough for your use case.
Paulitow_@reddit
I'm testing Harvester from SUSE. More Kubernetes oriented, but it can be fully used as a hyperconverged hypervisor. The UI is not as friendly as Proxmox or VMware but it does the job well.
DerBootsMann@reddit
did they get rid of mandatory longhorn and started supporting san ? just curious , as i didnt touch them for a while
Paulitow_@reddit
I think yes, at least I wasn't forced to use Longhorn to start. And we got an answer from SUSE saying that Harvester is compatible with our SAN storage, so I guess that time is over.
DerBootsMann@reddit
oh , that’s actually some solid news , and i guess i gotta take another look at it then ! appreciate you sharing that
ITaggie@reddit
Yup that's what my shop has been using for about 2 years now. 90% of our workloads are containerized at this point so Harvester+Rancher works very well for us. Longhorn is mainly used for virtual disks though, most of our workloads are using our NetApp cluster for storage via Trident.
For those who are still very heavy into the more traditional fat VM workloads I would probably recommend Proxmox+Ceph over Harvester though.
tankerkiller125real@reddit
Same here, really liking it overall though.
Connect-Comb-8545@reddit
Due to hardware pricing many organizations are moving to cloud. If you want a quote, my msp is doing free azure migrations to be your CSP. We also offer fixed azure pricing that’ll beat out the azure pricing calculator. We’ve had a good success rate of lowest azure fees through proper licensing and configurations. Dm me if you want to discuss further
unixuser011@reddit
Azure local/HCI?
RustyBarfist@reddit
We moved from VMware to Azure Local and I have to say it's been a nightmare of a product. MS can't even tell us the cause of some of the outages we've had to endure.
unixuser011@reddit
Doesn’t surprise me tbh, it seemed like a cool concept, running your own Azure tenant on-prem, but it does seem very half-baked
DerBootsMann@reddit
azure got thousands of techies , azure local got you ..
Maclovin-it@reddit
God no!
JNikolaj@reddit
That's just hyperv though
stalinusmc@reddit
Not quite. You can run specific list of Azure PaaS services in addition to IaaS VMs
ApartmentSad9239@reddit
And pay for the courtesy <3
stalinusmc@reddit
Oh I didn’t realize the other suggestions were free. /s
Charokie@reddit
With what Azure PaaS invoices come in at just go back to Broadcom.
Relative_Way6555@reddit
I recommend Sangfor for HCI. For the POC, they provided all the hardware, including three servers and switches, along with all the technical support for installation and migration. We used it for about two weeks in a production environment under high load. We were very pleased. It includes many important features found in VMware, such as Proactive HA. It includes a built-in backup and replication solution; replication is free for intervals down to one hour. It also supports synchronous replication with CDP (Continuous Data Protection).
DerBootsMann@reddit
unless they went 100% open source i don’t think they would succeed .. there’s enough of the huawei and smc horror stories already , nobody wants to install a security backdoor inside their protected perimeter
theunrealneverlived@reddit
We're currently in the process of changing our Dell VXRail servers into Hyper-V and Starwind VSAN. Hell of a project with a ton of scary variables that could go wrong but this is where we're at after spending huge only 3 years ago and need to see some sort of ROI. IT is becoming an industry full of the same snakes that run mass media, social media, and the US government. When this project is finished I'm looking into a career change for my own sanity. Unfortunately your story is all too familiar and I wish you the best of luck💪
LookAtThatMonkey@reddit
We used StarWind. Once Datacore got their mitts on them, we exited. As a product, it’s brilliant though.
DerBootsMann@reddit
same story here
BloodMoist8156@reddit
Vates XCP-ng
djgizmo@reddit
its not production ready. too many pita quirks.
FatBook-Air@reddit
How does it compare to Proxmox in your view?
DerBootsMann@reddit
proxmox works
djgizmo@reddit
proxmox is way more stable. XCPNG has cooler features.
1esproc@reddit
Like what?
djgizmo@reddit
for a year plus, their host to host transfer was stuck at 50MB/sec (500mbps). they were aware of the issue.
their webgui for individual hosts was meh for a very long time, and documentation for manual configuration was incomplete for a long time.
Proxmox is just the opposite
derhornspieler@reddit
/r/harvester and migrating workloads to RKE2 cluster has been really great for us.
IxFail@reddit
Vates VMS (XCP-ng & Xen Orchestra) with XOSTOR add-on
sont21@reddit
Xostor I've heard bad things
DerBootsMann@reddit
xostor is drbd based , which means it’s a trash can with a motor attached
The_NorthernLight@reddit
XCP-NG…. a tenth of the price, all of the functionality.
DerBootsMann@reddit
hardware shortages and the resulting price surge are a global issue , and it’s not like your local vendor can do much about it .. maybe they dump some locked inventory and take a loss , but don’t kid yourself , they’ll make it back by f you somewhere else . that’s just how the game works , capitalism isn’t about leaving money on the table
TechPir8@reddit
Openshift https://www.redhat.com/en/technologies/cloud-computing/openshift
XCP-NG https://docs.xcp-ng.org/
coraldayton@reddit
Question is - what does your backup software support? If it doesn’t support whatever you’re looking at, you’re gonna have to migrate to new backup software as well. Keep that in mind while you’re looking at alternatives to Nutanix and VMware.
iexsist@reddit
If you’re looking for an all-in-one system, hardware and software, look at Scale Computing. Great for small and mid-size companies.
TheBostwick@reddit
Nutanix doesn't do their own hardware so that is out of their hands largely. I'd recommend Nutanix over Proxmox, but Proxmox is your next play if that isn't happening.
OkVast2122@reddit
It’s either Hyper-V or Proxmox, mate! Everything else is very niche at best.
Different_Code605@reddit
I am using Harvester from Suse
DragPi@reddit
Sangfor HCI is pretty decent and cheap which we used for our client.
OkVast2122@reddit
Sangfor is ‘Made in China’, so you should really think twice, mate!
DefiantDonut7@reddit
We use XCP-NG
So_average@reddit
Was this a recent change or did your company have lots of experience with this product already?
DefiantDonut7@reddit
We started on OpenStack in 2015. Project was too young and unstable. Moved to CloudStack. Had the same problem.
Moved to Citrix. Was much better. Decided to move to XCP when Vates forked. But we were early users of Xen Orchestra, like beta testers lol. So we have a decade on the product stack at this point.
Assumeweknow@reddit
Refurbished server hardware with ddr4 is your friend.
happygrifter@reddit
HP just hit me with a quote for 3.8 million for 12 DL360s. Hardware costs are insane.
Senior_Pressure9913@reddit
Try proxmox
jlipschitz@reddit
We looked at XCP-NG. They have an HCI product.
soulseaker@reddit
Switch to candle making and goat farming.
Sir-Spork@reddit
Openshift Virtualization with ODF. A bit of a steeper learning curve but it does fit the requirement.
Plus side is it gives you proper container infrastructure if you feel you need it down the road.
nitra@reddit
A server we sold Nov. 14th for $7,800 CAD just got requoted at $26K CAD.
zerocoldx911@reddit
If the quote didn’t expire you can ask them to adjust it
vikrambedi@reddit
Nutanix was abusive before VMware was...
wyrdone42@reddit
Harvester HCI.
All the power of Kubernetes but for standard VM workloads.
GenericCleverName73@reddit
45Drives servers and Proxmox for virtualization, leveraging Ceph storage for live migrations and redundancy.
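Since the disks live on shared Ceph RBD in that layout, a Proxmox live migration only moves memory and device state. A minimal sketch, with a made-up VM ID and node name:

```shell
# Live-migrate VM 101 to node2; disk data stays put on Ceph
qm migrate 101 node2 --online
```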
cryonova@reddit
I like our HPE Nimbles they have a hypervisor and all
cowprince@reddit
I feel like hyper converged right now is just a bad idea with hardware costs. You're taking a hit in multiple areas.
grstein@reddit
Kubernetes bare metal
Hogesyx@reddit
I am a distributor pre-sales that also distributes Nutanix, and the situation now is really bad for this brand.
First off, their previous model requires users to have certified hardware; last year they added support for third-party storage, but only Pure and Dell, and it still requires certified compute.
Their business model that “forces” customers to not recycle old hardware for a “better experience” really bit them in the current RAM pricing situation.
If pricing is a concern, open source is the way to go; you can slowly drop off VMware nodes and convert them to Ubuntu or Red Hat oVirt.
No-Specialist7504@reddit
GCVE migration tool from Nutanix to Google Cloud Compute Engine
98TheCiaran98@reddit
Talk to 45drives about a proxmox cluster
athornfam2@reddit
Lookup Platform9. I heard from a partner larger clients are moving to that since it’s made by ex VMware engineers.
Makanly@reddit
Avd and/or w365?
I vote for w365. Keep your life easy.
Don't be afraid to spend other people's money.
Several-Help-6744@reddit
Hardware has gotten incredibly expensive. You could always look into some data center options for a multi-tenant space if you want to take out the hardware management and a lower one time cost.
To preface I do work in the industry so if you have questions I can answer.
madbuda@reddit
Have you looked at vergeOS? Bias since I work there… but I was a customer before an employee.
HorridUnknown@reddit
Have you looked at Starwind? We recently went with their hyperconverged solution. 3 node cluster with Hyper V. Their support has been really helpful and the whole process was as smooth as glass
onepost4me@reddit
Did Nutanix increase or did the hardware? Hardware volatility is through the roof, with quotes being valid for maybe 7 days, and we're even seeing disclaimers that products with long lead times are subject to price increases even after ordering.
reviewmynotes@reddit
Scale Computing is amazingly good. Give them a call. You can get a single price for the hardware, software, extremely high quality technical support, all software updates, remote connections to your system without a VPN or port forwarding, overnight hardware replacements in the case of failure, and even the networking gear necessary to make an HA cluster. I've used them for about 12 years and have never once been disappointed. They cost less than VMware back when VMware was considered to have reasonable prices.
The one thing I'll point out is that anyone selling hardware is experiencing higher prices right now, due to rapidly rising prices and constrained supplies in memory, storage, and CPUs. So things might not be great no matter what system you use.
nxtgencowboy@reddit
Another scale computing customer and 100% in agreement with this post. Going on year 7.
Hebrewhammer8d8@reddit
Bhyve good for your workload?
sont21@reddit
Would not trust it
AthiestCowboy@reddit
Heard some great things about Platform9
patriot050@reddit
Just go Hyper-V with Pure Storage. It's been extremely solid for us. Treat them as normal Windows servers with monthly patch reboots and you will be fine. Besides, a lot of third-party companies are really starting to support Hyper-V now. Microsoft is also working on their own vCenter equivalent called WAC V mode (we also have SCVMM and it's kind of similar to vCenter, but it really is different).
nullbyte420@reddit
Wac v mode entra 365 copilot (new)
Swevenski@reddit
What about Dell's VXRail? Surely Dell can put together a quote and keep it?
KrakusKrak@reddit
I can tell you that VXRail is going to be high like the rest of them
FarmboyJustice@reddit
HahahaNo.
Swevenski@reddit
Come onnnnnnn delllll loves you!!!!!!
WoTpro@reddit
HPE morpheus
quickshot89@reddit
It’s still too new to be used IMO
dragloke@reddit
Morpheus has been around since 2010. HPE acquired it to ship with their dHCI stack and make it a competitor to VMware. I've used it in a POC and, for what it's worth, was fairly impressed. Some integrations are still required for my company to commit, but the monthly updates they provide are fairly decent.
MeanE@reddit
I went to a vendor/HP lunch and learn on it last sept and it was…rough. I know development on it was going strong.
VosekVerlok@reddit
Their development roadmap has been pretty aggressive; the release in Sept last year really ironed out a lot of things, especially the install and guest migration, and it really started becoming viable for enterprise.
Morpheus 8.0.13 – Jan 2026
Focused on improving cluster management, migration efficiency, operational automation.
- Enhanced API and CLI capabilities for cluster configuration and instance visibility
- Improved migration enabling VMware workloads to move to NFS datastores on HVM clusters with reduced overhead
- HVM cluster improvements including Windows VM management, automated hardware inventory, storage and cloning fixes, and seamless agent upgrades

Impact: Improves operational efficiency and enterprise scalability while enabling simplified management of virtualized workloads. The release was also delivered to Veeam and other ISVs for ecosystem integration and qualification.
Morpheus 8.1 Now Available – March 2026
Enhancements across deployment, infrastructure automation, networking, and security.
- Introduces a single binary installer to streamline installation, upgrades, and lifecycle management.
- Adds the new VME System Library Catalog for centralized governance of system-level assets.
- OpenShift Virtualization is now GA, alongside enhancements such as NUMA-aware vCPU placement and expanded HVM networking models.
- Strengthens Bare Metal as a Service (BMaaS) with PXE lifecycle automation, security hardening, and storage enhancements including HA, iSCSI, and BFS.
- Improves networking and security via ArubaCxDss security groups, enhanced VIP pools, and SR-IOV passthrough for high-performance workloads.
- Adds platform-level upgrades such as bulk migration improvements, OpenSearch replacing Elasticsearch, and Zerto support for HPE VM DR.
- Introduces multi-tier (3-tier) tenancy for CSPs, enabling scalable Tenant-of-Tenants management with full policy inheritance and strong isolation.

Impact: Faster deployment, stronger security, and expanded hybrid cloud capabilities for enterprise platform teams.
Coming Next – Morpheus 9.0 – Target: Aug 2026
Upcoming innovations include stretched clusters, Software defined Networking (overlay networks and micro-segmentation), Morpheus Software - Central, Zerto for migration, Shared vDisk and private cloud enablement with SimpliVity including upgrade automation and backup provider improvements
1esproc@reddit
What were standout issues for you? I had a demo
tommishuck@reddit
Came here to say the same thing!!
VosekVerlok@reddit
If you are keen on HCI, one option is HPE simplivity, which they now have running on VME (KVM based) which comes with morpheus as a management console.
SquizzOC@reddit
This is everyone right now, this is every single manufacturer, and yes those timelines are damn near impossible, but if you don't buy it someone else will, and most OEMs don't have a choice but to operate like this.
I've seen this with HP, Dell, Cisco, Lenovo to name a few. I've told this story a few times already, a client has a 750k Cisco order already processed, PO already cut, Cisco is telling them to expect the cost to jump up before it ships and they will have the option to pay the cost increase or cancel the order at that time.
This is why the largest in the industry, CDW, even has this line on the top of their website right now:
"Due to supply chain challenges with some OEMs, CDW cannot guarantee availability or pricing for affected products until they are ready to ship. Your account team is here to help"
There's no way around this, its the world we all live in, all so CoPilot can give you a bad summary of your email, ChatGPT can give you bad health advice, but at least we get those great cat videos right? RIGHT?
Good luck to us all. I'm exhausted.
BarServer@reddit
With OpenAI retiring SORA in preparation for their IPO they even take the cat videos from us! :-)
Tricky-Service-8507@reddit
Installs open source hypervisor converts over the previous infra and feels happy saving money and time
kagato87@reddit
With RAM prices spiking you need to ask yourself if HCI is the best path, vs buying compute, RAM, and storage specific to your needs.
If your clusters are CPU or disk constrained right now, HCI would be bad, as it's RAM getting clobbered.
Contrast: a SAN expansion won't be impacted as badly, and if it's CPU you need, not memory, you can add hosts that are core heavy with moderate RAM. (And bonus, if you have empty sockets you might just be able to get the chips to meet the need.)
I worked with Nutanix when they were new. It's a compelling product, yes, but it's also "one size fits all." And, well, as we all know, there's no such thing.
Senior_Conclusion102@reddit
Do you know what drove the increase from Nutanix?
I'd hazard a guess it's hardware (which is out of their control) because of global supply chain issues, which will be an issue across the board regardless of hypervisor.
I know quoting is a bit of a pain at the moment; Nutanix offers quotes valid for a month, or end of quarter, whichever comes sooner. Others like HPE/Cisco have changed their terms so they can change pricing even after they get a PO, at least in my market (EU).
TheNotSoEvilEngineer@reddit
If that quote was hardware related, all of them are bad. Doesn't matter the oem, prices are jumping daily and lead times are months out.
Faux_Grey@reddit
Had a good time with Virtuozzo & Supermicro.
abix-@reddit
The company I work for is migrating all VMware VMs to OpenShift Virtualization Engine by the end of 2027.
OpenShift is a step in the right direction but still paying money to Red Hat. My preference is bare metal open source Kubernetes.
You always have to pay for hardware. Kubernetes has a learning curve but no hard requirement to pay for product/support unless you want to for business reasons
namtab1985@reddit
Scale computing?
geekonamotorcycle@reddit
Xcp-ng has been my go to for years.
clubfungus@reddit
Virtuozzo is rarely mentioned, but they've been in the Virtualization space for decades. Worth a look.
sont21@reddit
That's a old one haven't looked at them
huntsvilleon@reddit
Just curious for those with Hyper-V support, someone mentioned to me (not online) that Microsoft support for Hyper-V couldn’t be bothered because they want to go to Azure cloud. What’s your experience?
AlkalineGallery@reddit
AFAIK Hyper-V support is 3rd-party only
Hakkaathoustra@reddit
Check Incus
https://linuxcontainers.org/incus/
sdrawkcabineter@reddit
I was going to suggest rolling that in-house as it's never been easier.
I don't think that's what you're looking for.
oneslipaway@reddit
We decided against Nutanix and went with Scale Computing. We aren't a fancy shop so it fits our needs.
Vivid_Mongoose_8964@reddit
hyperv + starwind vsan, super easy and quick
didact@reddit
We are probably going to go with OpenShift. We've already got significant licensing with RHEL, and they are giving us a quote that dovetails with migration plans.
We are cautious on the RHEL front, the first quote they gave us was broadcom-esque. You're going to want to get good honeymoon pricing locked in for 3 years, and then have a migration strategy in the 4th year to something different.
Capt91@reddit
What's your use case?
The only question that can be asked yet.
Joshuancsu@reddit
Take a look at XCP-ng. Handles just about anything you can throw at it. AND it's a Type 1 Hypervisor.
cidknee1@reddit
I move people from VMware and Hyper-V over to a Scale cluster all the time. Love that product.
Real-Patriot-1128@reddit
We use Azure Local which has been good for us. I heard people considering proxmox.
ANDROID_16@reddit
I haven't used it so I can't vouch for it but I don't think anybody has mentioned Harvester yet
https://harvesterhci.io/
It's a Suse product.
jca3746@reddit
I used to work for Nutanix and just recently left for a different job. But right before I did, we were working on expanding our lab equipment for internal usage. It got scrapped when our hardware vendors tripled prices overnight. Unfortunately, hardware shortages are affecting everyone.
CharlieTecho@reddit
Proxmox 😂
Tricky-Service-8507@reddit
Hyper V works but I’d rather not depend on windows when it’s not needed
pops107@reddit
I had a SAN quote double in a few weeks, it's a nightmare.
I still prefer VMware over others, but prices are just insane now.
I personally prefer proxmox over hyper-v but if the customer wants support it starts to add up and customers seem to feel safer with a Microsoft product in hyper-v amazingly.
Wouldn't go hyper converged though, I would go with separate storage and keep your options open in the future.
Popular_Shine4075@reddit
I use hpe vm essentials.
LocksmithMuted4360@reddit
Proxmox...
excitedsolutions@reddit
If hosting yourself, Hyper-V; or if looking at a private cloud solution, look at OpenStack. Our costs moving from private cloud with VMware (before the price increases) to OpenStack were cut in half without losing any features.
Tricky-Service-8507@reddit
And you hit them with a cancel service and Proxmox or XCP-NG migration, problem solved
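For anyone weighing that migration path, the Proxmox side is fairly mechanical once the VM is exported from ESXi. A minimal sketch, assuming an OVF/VMDK export already copied to the Proxmox host; the VM ID, path, and storage name (`local-lvm`) are illustrative:

```shell
# Import an ESXi-exported OVF as Proxmox VM 120 onto the local-lvm storage.
qm importovf 120 /mnt/export/myvm.ovf local-lvm

# OVF imports don't carry over networking, so attach a NIC manually,
# then boot the VM. Guests may also need VirtIO drivers installed.
qm set 120 --net0 virtio,bridge=vmbr0
qm start 120
```

Newer Proxmox releases also ship a GUI import wizard that can pull VMs directly from a live ESXi host, which skips the manual export step.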
Aromatic_Amphibian29@reddit
Take a look at https://www.virtuozzo.com/vmware-alternative/
AmiDeplorabilis@reddit
The ONLY problem I have (had?) with Hyper-V has been the exorbitant cost of the CALs; that helped make VMware a reasonably priced choice. But with what Broadcom and now Nutanix have done with pricing, that may be less of an issue.
KavalierMLT@reddit
Hyper-v is a good solution.
Another option is to shift to redshift (Linux)
Generic_Specialist73@reddit
I can contract with you to build a hyper-v hyper converged system
tuanster1119@reddit
Sounds like you got hit by RAM and storage increases. We've been scrambling all year with quotes only being good for about a week and lead times in the 60-90 day range.
imposter_sys_admin@reddit
Wait... people still use hyperconverged?
Var1abl3@reddit
Proxmox was my go to.
JaffaCakeStockpile@reddit
What industry are you in, what scale environment, are you looking for on-prem or cloud...?
Realistically, Hyper-V or Proxmox are the two big choices here, with favourability depending on the mix of Linux to Windows workloads you have
981flacht6@reddit
Sorry but you gotta swallow the cost increase, or don't do anything at all.
Brook_28@reddit
Scale computing
Final_Tune3512@reddit
good ol' Hyper-V
Ill-Panic-4533@reddit
I'm assuming this increase is all hardware. NTNX is a software company but is stuck beholden to HW pricing on HCI. I bet that increase is all on hardware from SMCI.
el_jefe_302@reddit
Sent you a PM
ionV4n0m@reddit
WOW, Nutanix went shitty too huh? yeesh..
polo2883@reddit
It could be due to RAM and SSD shortages.
DragonspeedTheB@reddit
This. All of our vendor quotes are only good for one week now. Ugh.
polo2883@reddit
Dell said that to me as well. They also don't do additional discounts on larger quantities anymore.
pabskamai@reddit
Proxmox
CalvinHobbesN7@reddit
Rancher perhaps? VMs and K8s.
swissthoemu@reddit
proxmox. sheer beauty.
paulmataruso@reddit
+1 for Scale Computing as well. They have been really rock solid. Several large city governments run their entire stacks on them.
Tall_Put_8563@reddit
I use Proxmox and it's great.
Common_Arm_3316@reddit
If you have some dev chops just do Kubernetes and deploy virtual machines with Kubevirt
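A minimal sketch of what that looks like, assuming a cluster with KubeVirt and `virtctl` already installed; the VM name and disk image are illustrative:

```shell
# Define a small KubeVirt VM backed by a containerDisk image and apply it.
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
EOF

# Once it boots, attach to the serial console like any other VM.
virtctl console demo-vm
```

The upside is that VMs then get the same scheduling, networking, and GitOps tooling as your containers; the downside is you're now operating Kubernetes, which is its own skill set.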
OinkyConfidence@reddit
You already said Nutanix, otherwise I might recommend Hyper-V. It's a bit of a dark horse for sure.