Gartner says mainframe can be cheaper than Broadcom VMware licensing: here's what the business case leaves out
Posted by zedkarma1@reddit | sysadmin | View on Reddit | 13 comments
Gartner VP Analyst Alessandro Galimberti told The Register that some VMware users running 500-700 Linux VMs are finding IBM mainframe cheaper than Broadcom's Cloud Foundation stack. The TCO math can work: HA, DR, and data synchronization are built into the platform.
But the business case only covers hardware and software costs.
It doesn't cover operational costs. And on mainframe, the operational costs are almost entirely human.
VMware administrators who move arrive without RACF knowledge, JCL knowledge, ISPF muscle memory, or any understanding of why the change management process asks for a business owner at 3:47 AM.
Three things every organization needs before they migrate, and the container question nobody is asking yet.
Happy to answer questions from anyone considering the move.
VA_Network_Nerd@reddit
Sorry, it seems this comment or thread has violated a sub-reddit rule and has been removed by a moderator.
Do Not Conduct Marketing Operations Within This Community.
Your content may be better suited for our companion sub-reddit: /r/SysAdminBlogs
If you wish to appeal this action please don't hesitate to message the moderation team.
itishowitisanditbad@reddit
How often does the pricing change in a way that would change the outcome established here?
What stops that happening at any point?
What value really is in the information here based on that?
zedkarma1@reddit (OP)
The answer is in the reply; you may read it or not.
zedkarma1@reddit (OP)
Fair challenge. IBM Z pricing is tied to MSU consumption and software licensing. It does change, but slowly and predictably compared to Broadcom's post-acquisition pricing moves, which were sudden and significant (some customers reported 3-5x increases).
The more durable point isn't the specific price comparison; Gartner's analysis will date. The durable point is the workload fit argument: sequential stateful batch processing at scale has physics constraints that don't change with pricing. The channel architecture advantage is structural, not contractual.
So the honest answer: use this as a prompt to run your own numbers for your specific workload, not as a definitive price guide. The value is in the framework for the comparison, not the specific outcome.
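To make the framework concrete, here's a minimal sketch of the kind of side-by-side I mean. Every figure and parameter below is a placeholder I made up for illustration, not real IBM or Broadcom pricing; substitute your own quotes and staffing numbers.

```python
# Toy TCO comparison framework. All figures are hypothetical placeholders,
# NOT real vendor pricing -- plug in your own quotes.

def annual_tco(hardware_amortized, software_licenses, staff_fte,
               fte_cost, migration_amortized=0.0):
    """Sum one platform's yearly cost, including the human side."""
    return (hardware_amortized + software_licenses
            + staff_fte * fte_cost + migration_amortized)

# Hypothetical 600-VM shop, all costs in USD/year
vmware = annual_tco(hardware_amortized=150_000,
                    software_licenses=900_000,      # post-acquisition quote
                    staff_fte=3, fte_cost=140_000)

mainframe = annual_tco(hardware_amortized=400_000,
                       software_licenses=350_000,   # MSU-based estimate
                       staff_fte=2, fte_cost=180_000,   # scarcer skills cost more
                       migration_amortized=100_000)     # spread over 5 years

print(f"VMware:    ${vmware:,.0f}/yr")
print(f"Mainframe: ${mainframe:,.0f}/yr")
```

The structural point is the last two parameters: staffing and migration are exactly the lines that the hardware-and-software-only business case leaves out.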
Disclosure: I'm not an IBM employee, not an IBM partner, and not selling mainframes. I build independent mainframe and non-mainframe software tools. I have no financial interest in anyone choosing mainframe over anything else.
PsychoGoatSlapper@reddit
Piss off AI bot
zedkarma1@reddit (OP)
35 years staring at dumps. Very much a real person, unfortunately for me.
Ape_Escape_Economy@reddit
I’m allergic to AI slop.
zedkarma1@reddit (OP)
Fair. I'll take the feedback. Written by a human who has been staring at ISPF panels since 1990, but I understand the skepticism.
Mac-Gyver-1234@reddit
Instead of tripping from one vendor lock-in into the next, companies might consider more strategically resilient approaches.
Like in-sourcing and usage of open source, e.g. Kubernetes and KubeVirt.
a60v@reddit
This. Gartner is being useless as always here.
I do admire the mainframe platform for what it does well, but, unless the organization is already heavily invested in mainframes, it makes little sense for most to move to them now, trading one proprietary tool for another.
Direct-Fee4474@reddit
500-700 VMs is a really small footprint. If anything that just sounds like a lesson in the long-term costs of technical debt, I guess.
enterprisedatalead@reddit
I’ve actually seen this play out in heavily regulated environments where uptime mattered more than hardware sticker price. One financial org I worked with kept an IBM z/OS stack because their batch processing, RACF security controls, and near-zero downtime ended up being cheaper than constantly rearchitecting distributed Linux clusters every 3–4 years.
The interesting part is Gartner usually measures total operational cost, not just infrastructure cost. Once you factor in COBOL app stability, staffing continuity, power efficiency per transaction, and compliance overhead, mainframes can still make economic sense at scale.
The bigger question is whether companies are optimizing for engineering flexibility or long-term operational predictability. In your experience, which side usually wins when leadership makes these decisions?
zedkarma1@reddit (OP)
In my experience operational predictability almost always wins when the decision reaches the CFO level, and engineering flexibility wins when it stays at the VP of Engineering level.
The problem is most of these decisions get made at the wrong level. Engineering teams build the TCO model, present it upward, and the numbers look compelling. But they're modeling infrastructure costs, not operational risk. The CFO who has lived through one failed migration (missed batch windows, regulatory scrutiny, customer impact) weighs operational predictability very differently.
The organizations that regret staying on mainframe are usually the ones that never invested in modernizing around it - tooling, observability, developer experience. The platform is sound. The investment gap is the real issue.
Your point about rearchitecting distributed clusters every 3-4 years is underrated. That cycle cost rarely makes it into the initial TCO comparison.