A little question about IBM Z (z/OS)
Posted by Der-Wilde@reddit | sysadmin | View on Reddit | 37 comments
So, recently I found out about IBM's z/OS and its usage in banks and other critical systems. My question is: is it possible to replace it with an open-source solution?
From what I've researched, the point of Z is the deep integration with proprietary IBM hardware, which makes possible very efficient I/O, RAS, Workload Manager, and security.
phoenix823@reddit
System Z has run Linux for decades. They have a dedicated processor type, the Integrated Facility for Linux (IFL), just for that.
pdp10@reddit
IBM originally ported Linux to the S/390 architecture because it was cheaper and faster than writing another OS to do the jobs they wanted to do, adding various kinds of offloading and connectivity features to the core mainframe products. See also VIOS running in a hard partition on IBM Power hardware, though VIOS is apparently based on AIX, not on Linux.
Later, to increase mainframe sales volume while still maintaining tight market segmentation for the backward-compatible systems like z/OS, IBM marketed mainframes as being able to run Linux as well as commodity hardware can, but with more features and allegedly better TCO. The features revolve around RAS/uptime and direct, low-latency access to legacy mainframe workloads. It's nowhere near cost-effective to try to use an IBM mainframe to do what a distributed array of commodity servers can already do.
Kardinal@reddit
I just have to mention the appropriateness of your username.
heretogetpwned@reddit
Bingo. It depends on what your core banking software was written for. I previously worked for a payment switch that had both IBM Z (working with partners) and HP NonStop (running our own).
pdp10@reddit
I suspect that a lot of computing veterans, like myself, still think of these as "Tandems". A good 46-minute history video here, for those with the time and inclination. These are something I've never worked on first-hand, or even adjacently.
mcmatt93117@reddit
Love Asianometry, followed him for a few years now.
Also, Advent of Computing.
Any other legacy hardware/operating system/programming language history podcasts /channels you happen to know of?
pdp10@reddit
I can't recommend anything in particular. I try not to do very much that could be described as "retrocomputing", unless I have a specific project.
mcmatt93117@reddit
Oh, zero percent actual retro computing myself either - but I enjoy the podcasts.
Anything around legacy ISAs or the birth of Fortran, anything in that genre. I like having it on in the background.
automounter@reddit
I forgot all about Tandems
heretogetpwned@reddit
Adjacent only; I supported the gapped Windows systems that the NonStop SMEs used for day-to-day work. I believe it was a 16-20' row independent of our other systems.
It was years ago, but these are the only in-use (albeit DR-side) Itanium-badged compute I'd ever seen.
phoenix823@reddit
Nobody is going to buy a Z just to run Linux, that's for sure.
pdp10@reddit
IBM will send over a sales team to sell you one just to run vanilla operations on Linux. I'm confident that someone has bought one for that, at least once, even if it wasn't a highly economically-efficient decision.
LRS_David@reddit
Sales folks will try and sell you a Honda Fit to haul a camper if that is what is in inventory. And has the best commission.
phoenix823@reddit
Oh I know there are machines out there that are full of nothing but IFLs. Maybe a better way for me to put it would have been "Nobody is going to go out of their way to start a relationship with IBM just to buy a Z to run Linux."
calladc@reddit
Also, with IBM owning Red Hat, they're able to react to changes in the upstream OS much faster, and can keep their Z distribution lined up with specific LTS releases of RHEL.
pdp10@reddit
Most of what was open and easily replaceable in the past has already been replaced, per Sustrik's Law: "Well-designed components are easy to replace. Eventually, they will be replaced by components that are not easy to replace."
IBM mainframes are used for priorities in usually this order:
Everything can be replaced with a different solution, as long as it can be done with a modicum of commoditization. The AS/400, a non-mainframe IBM product line that was originally designed as a mainframe replacement, was specifically built not to be cloneable, because IBM was existentially angry that their main product had been cloned by Amdahl and the Japanese.
Unlike the AS/400, IBM mainframe hardware can be emulated in software, using the open-source Hercules emulator, or IBM's tightly-controlled commercial emulator, zPDT, which they only sell for development purposes, never production. It's very difficult, if not impossible, to get IBM to license modern versions of its mainframe operating systems to run in production on open-source emulators, however, for business reasons that aren't hard to guess.
LRS_David@reddit
Well, it didn't help that Gene Amdahl was the head of design for the original IBM System/360. IBM must not have had good lawyers drawing up employment contracts back then. Nowadays, if you work for IBM, you agree they own any IP you come up with for any reason. Or so employees have told me.
Fred Brooks had an interesting tale about winning an argument with GA about byte size. Basically, GA didn't care about lower-case letters or much of anything not needed to do math on a computer; he wanted 6 bits as the basic size of things. TWJr decided in favor of Brooks.
pdp10@reddit
Before the System/360, mainframes that fell into the "second generation" were mainly 36-bit word size, because 10-digit decimal precision was considered to be the product need at the time, and that required 35 bits. Many options for character encoding were actively used to pack 36-bit words, from 5 to 12 bits. So lower-case, non-Latin, and conceivably even fonts were already possible then, just not widely standardized.
Lower-case characters weren't part of common 5-bit telegraphic/teletype codes of the time. Even the lower-case part of the ASCII standard wasn't added until 1967, three years after the announcement of the System/360. All-caps was an obvious characteristic of computing at the time.
The System/360 is generally considered the influencer that made octets into the standard byte size, and 32-bits or multiples of 8 to be the standard word size. But S/360 also notoriously failed to adopt ASCII, because of the timing and an inability to make a new ASCII-based peripheral ecosystem. IBM mainframes ended up with 8-bit EBCDIC as an extended version of 6-bit BCDIC, and the IBM-ecosystem mainframes and IBM AS/400 use EBCDIC to this day.
Notably, the famous 1960 PDP-1 was 18-bit wordsize and not considered a mainframe, and the famous SDS 9-series from 1962 was 24-bit and considered a mainframe.
LRS_David@reddit
I was avoiding a much longer post. In ancient times I got to deal with 6-bit ASCII for space reasons (4 letters to 3 bytes and back again). Made it harder to sell software in Quebec and Miami.
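For the curious, the "4 letters to 3 bytes" trick works because four 6-bit character codes fit exactly into three 8-bit bytes (4 × 6 = 24 = 3 × 8). A toy sketch, with an assumed 64-symbol alphabet (real 6-bit codes like SIXBIT or BCD varied by vendor):

```python
# An assumed 64-character alphabet; actual 6-bit code assignments varied.
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.,:;!?'\"()-+*/=$%&#@<>[]^_|"
assert len(ALPHABET) == 64

def pack4(s: str) -> bytes:
    """Pack exactly 4 characters into 3 bytes."""
    codes = [ALPHABET.index(c) for c in s]  # four 6-bit values
    n = (codes[0] << 18) | (codes[1] << 12) | (codes[2] << 6) | codes[3]
    return n.to_bytes(3, "big")

def unpack3(b: bytes) -> str:
    """Unpack 3 bytes back into 4 characters."""
    n = int.from_bytes(b, "big")
    return "".join(ALPHABET[(n >> shift) & 0x3F] for shift in (18, 12, 6, 0))

packed = pack4("WORD")
print(len(packed))      # 3 bytes instead of 4
print(unpack3(packed))  # WORD
```

A 25% space saving, at the price of no lower case, which is exactly why it was hard to sell into French- and Spanish-speaking markets.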
Then there were the mainframe guys who wanted us, with 8K bytes per user on our minicomputers, to do the conversion between ASCII and EBCDIC on our side instead of them doing it on the mainframes. NOPE.
pdp10@reddit
A perennial. Like I'm going to convert your EBCDIC or SNA on my side, just so you don't have to pay IBM for the world's most expensive MIPS. But you're not going to give up any of your budget to do it either, because you can't even afford your MIPS in the first place.
Today the transistor count or discrete ASIC is effectively free, but once, you were possibly looking at a dedicated 11/780 or something. Then again they already had comms controllers that size, so what was one more dishwasher in the datacenter?
I even had this problem into the 21st century, when our midrange folk refused to migrate user sessions over to TCP/IP, even though they had TCP/IP already implemented.
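For what it's worth, the table-driven conversion itself is trivial on modern systems; Python even ships the EBCDIC code pages (cp037 is EBCDIC US/Canada), so the round trip is one table lookup each way:

```python
# ASCII-ish text -> EBCDIC bytes and back, using Python's built-in
# cp037 codec (one of several EBCDIC code pages in the stdlib).
text = "HELLO, WORLD"
ebcdic = text.encode("cp037")   # encode to EBCDIC bytes
print(ebcdic.hex())             # bytes differ from ASCII entirely
print(ebcdic.decode("cp037"))   # round-trips back to the original
```

The fight was never about the difficulty of the mapping; it was about whose CPU cycles (and budget) would pay for it.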
LRS_David@reddit
We were using RJE/2780 (or 3780, I forget) protocols to talk to very large companies. It was the only thing we could find in the 80s that everyone "in theory" could support. All of the big mainframe folks spent a week or so setting it up and testing with us. One company came back and said it would be a few weeks longer: on their version of MVS or whatever, the code kept ABENDing. They pulled up the source code for the 2780/3780 I/O handler. All it was was a comment that said "Someone needs to move the code into here."
Oops.
so what was one more dishwasher in the datacenter?
Most of the companies we were dealing with had something like 50 branch offices with 40 to 200 327x terminals, printers, etc... so yeah, what was one more box. Or mostly one more card in one of the comm boxes.
Then there was this one company where NONE of the systems guys would agree to touch a "modem". So I got to fly to another city with a modem under my arm and plug in the power, phone jack, and comm cable. [eye roll]
jdiscount@reddit
Could you, yes in theory.
Should you, no.
Companies with mainframes have them for a reason.
tarvijron@reddit
This has “I just learned about them and now I’ve got a plan to replace mainframes” energy.
Der-Wilde@reddit (OP)
No, I just got curious and asked a question.
tarvijron@reddit
“Probably these silly old admins have never heard of Linux”
skspoppa733@reddit
This is a fascinating platform that doesn't get enough credit for its capabilities. They don't make economic sense for probably 97% of workloads, but that's largely due to Intel's and Microsoft's marketing superiority in the '90s when computers became mainstream. Had the costs been more in line with what a bunch of cheap Intel servers could do, then the IT industry might look a lot different today.
LRS_David@reddit
Sure. But the cost of conversion will be non-trivial. And when all is said and done, open-source software isn't free when you look at TCO. Someone replacing a Z will be paying Red Hat and similar vendors huge annual support fees. And as for major large-bank open-source software, please point me at some.
Banks have to integrate with many national and worldwide IT systems. These integrations cost money and must be certified. George Bailey's S&L is a quaint memory.
ofnuts@reddit
Banks run a mix. A very large bank I did projects for had around 12000 Linux servers besides their farm of IBM mainframes. And as far as I know their integration with others is done via some of these Linux servers.
LRS_David@reddit
But to my point, getting totally rid of Z or closed source software would be hard for most banks. And other such companies.
pdp10@reddit
Of course not, but zero of the competitors are free of TCO either. The only thing that's really free is eliminating the component/need altogether -- often great engineering work when feasible!
The statement that open-source isn't free of all costs is mostly used as a talking point when someone is promoting a software-based competitor that isn't open-source. It's a reminder that TCO is king and needs to be carefully predicted, but there's no real value past that.
When speaking with potential vendors, besides TCO, a good subject on which to seek alignment is "cost control". How to guarantee that unexpected price increases won't sink the business, basically. Open-source licenses have a fantastic narrative with respect to cost control, because the end-user can seek unlimited alternative sources for the same products with zero or minimal need for migration.
Imagine that you're a hosting provider who sells virtual-hosting using VMware's hypervisor and suite. The next thing you know, your supplier has tripled prices. And if it seems like things couldn't possibly get any worse, VMware is now trying to sell their products directly to your customer, cutting out the product-side middlemen!
LRS_David@reddit
Strong disagree on that. There are times open-source software may check all the boxes but still not be a valid solution. I don't like the way Autodesk and Microsoft rule the design-build industry, to the extent that many contracts require their use for the project, with conversions (or Save As) being against the terms of the contract. I don't like it at all. But it is a requirement of the contract.
But I have a hard time imagining a bank with more than a boutique operation being able to go full open source. And I suspect that some of those Z system shops have a LOT of open source running. Just not the core operations.
LRS_David@reddit
I see you've dealt with Oracle. :)
phobug@reddit
The interesting part is not the OS or any hardware-software integration; it's the workflows these machines execute. There is no guarantee that the COBOL code written for that machine will run on anything else. I can hear you say, "That's bullshit, GCC can compile COBOL just fine," and that might be true, but there might be an edge case "optimized" by GCC so you'd get a slightly different result: not X bank account but the next one, sort of thing. How many USD are you willing to risk to try out the open-source alternative? Keep in mind that the only value a bank has is the trust people put in it. If word gets out that you misdirect payments occasionally, you will not be a bank in the near future.
pdp10@reddit
I think it's more of a combination of factors. With respect to legacy code:
rainer_d@reddit
I believe there is a bit of a resurgence of this hardware due to its ability to run AI workloads.
jimicus@reddit
Not very easy.
These mainframe OSs provided their own mechanisms for storing lots of data - you weren't necessarily limited to files, you could also store records - essentially, an early form of database that was built right into the OS.
Relational databases as separate pieces of software you install on top of the OS didn't really exist when those things were being deployed (and it'd be some years before databases offered half-decent performance).
You can run Linux alongside them - mainframes have had something akin to virtualisation for decades - IBM calls them LPARs - but you'll quite often find the business logic still runs on z/OS because there simply isn't a drop-in replacement for the record-based storage z/OS offers.
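As a loose illustration (a toy of my own devising, not VSAM's actual API), "record-oriented" means the store understands keyed records rather than a stream of bytes; keyed reads and in-order browsing come built in:

```python
# Toy keyed-record dataset, vaguely in the spirit of a VSAM KSDS.
# All names here are hypothetical, for illustration only.
class KeyedDataset:
    def __init__(self):
        self._records = {}  # key -> record

    def write(self, key: str, record: dict) -> None:
        self._records[key] = record

    def read(self, key: str) -> dict:
        # Direct access by key, with no application-level file parsing.
        return self._records[key]

    def browse(self, start_key: str):
        # Sequential retrieval in key order from a starting key.
        for k in sorted(self._records):
            if k >= start_key:
                yield k, self._records[k]

ds = KeyedDataset()
ds.write("ACCT0002", {"balance": 250})
ds.write("ACCT0001", {"balance": 100})
print(ds.read("ACCT0001"))                    # {'balance': 100}
print([k for k, _ in ds.browse("ACCT0002")])  # ['ACCT0002']
```

The point of the sketch: on z/OS, capabilities like this live in the OS and its access methods, so application code depends on them directly, which is what makes a "drop-in" replacement hard.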
pdp10@reddit
Yes, record-oriented filesystems provided useful functionality. But it's no surprise that anything that ran on a mainframe with half a megabyte of memory in the 1960s or 1970s is not difficult to replicate from scratch.
Yes, IBM mainframes pretty much pioneered hardware-level virtualization, starting in the late 1960s but not becoming mainstream until the 1970s and 1980s. IBM knew that virtualization would enable customers to do more with less hardware, so they held it back until the competition was serious enough that they needed it as a competitive advantage.
An LPAR is a hard partition with fixed resources assigned. "IBM VM" is the hypervisor in the sense mainstream users know today: highly dynamic, with resource sharing.
z/OS is the current name for the most popular first-party IBM mainframe operating system, previously known as "MVS" and "OS/390". It gets chosen for workloads that already require a mainframe-based dependency like CICS, are already written in non-portable code such as IBM HLASM assembly language, require hardware high-availability along with centralization, or -- most often -- all of the above.