NAS or S2D storage
Posted by Cultural_Log6672@reddit | sysadmin | 39 comments
Good morning. I would like to build a two-node Hyper-V cluster with a quorum device, and I'm wondering about the storage choice if I want HA/replication. Which is better: a NAS for shared storage, or local storage with S2D on the servers?
Godcry55@reddit
3 nodes minimum for S2D; 4 if budget allows.
A 2-node hyper-converged setup will work, but depending on the workload it may not hold up for long.
A small SMB can live with a 2 node S2D loaded with enterprise SSDs running a 2-way mirror using NVMe as cache.
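For sizing that 2-way mirror: usable capacity is roughly half of raw, minus the capacity Microsoft recommends reserving so repairs have room to rebuild. A back-of-envelope helper (drive sizes and the one-drive reserve are illustrative assumptions, not this commenter's numbers):

```python
def usable_tb(drive_tb: float, drive_count: int, copies: int = 2,
              reserve_drives: int = 1) -> float:
    """Rough usable capacity of an S2D mirror volume.

    Mirror resiliency stores `copies` full copies of all data, and guidance
    is to leave about one drive's worth of capacity free per node so a failed
    drive can rebuild; `reserve_drives` models that (simplified to per-pool).
    """
    raw = drive_tb * drive_count
    reserve = drive_tb * reserve_drives
    return (raw - reserve) / copies

# e.g. 8 x 3.84 TB SSDs, 2-way mirror, one drive reserved:
# (30.72 - 3.84) / 2 = 13.44 TB usable
print(usable_tb(3.84, 8))
```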
Run scripts to poll cluster health periodically to proactively resolve issues before they become a bigger problem.
iSCSI SAN with MPIO is tried and true.
Cultural_Log6672@reddit (OP)
I'm going to go with the StarWind solution.
Arudinne@reddit
Fuck S2D.
Cultural_Log6672@reddit (OP)
What other solutions then?
Arudinne@reddit
I finally got the budget to get an all-flash Dell PowerStore earlier this year and it's been great.
Previously I tried for over a year to keep a couple of 3-node S2D clusters stable before eventually just saying fuck it and making them independent nodes before I got that budget.
Cultural_Log6672@reddit (OP)
The problem is that the budget is tight, so tight that I can't afford more than 2 nodes for my cluster, so I'm looking for the most reliable solution for cluster and service availability. It's clear that with a substantial budget you can build really good infrastructure, but being limited is frustrating. What kind of problems did you run into with S2D?
Arudinne@reddit
Yeah, I couldn't get the budget for it until we moved vendors for our M365 CSP. Our new vendor saved us roughly $20K a year on W365 licenses.
I tried both 3x 2-node clusters and 2x 3-node combinations and none of them were stable for more than a few days or weeks at a time. I don't recall the specific error messages, but the most common failure mode was a VM getting "stuck" in a rebooting or shutting-down state, and the only way to recover it was to kill the process running the actual VM. That would usually trigger a cascade of various other errors, and eventually I'd have to shut down the entire cluster and bring it back up. Rinse and repeat for a year.
I would rather run each node on a single 5400 RPM HDD each than ever touch S2D again.
Cultural_Log6672@reddit (OP)
Given the budget, I thought of using two Synology NAS units in an HA cluster to eliminate the SPOF, and using the NAS as the cluster's shared storage to get HA. Does that make sense?
Arudinne@reddit
Never tried that myself so I don't know.
topher358@reddit
All I’ve heard is to stay the hell away from S2D. I recently did this type of implementation with an iSCSI SAN and it worked flawlessly.
Cultural_Log6672@reddit (OP)
Thank you for the answer. So I only have two options: StarWind vSAN or a physical NAS/SAN. But I'm limited by budget and my HA requirements, and I must limit SPOFs as much as possible; that's why a NAS is problematic in my context, and I'm leaning toward hyperconvergence instead.
topher358@reddit
I would definitely avoid a NAS because it introduces a single point of failure. In my case I had a SAN available with dual controllers and enough NICs on the hosts for MPIO.
Cultural_Log6672@reddit (OP)
Yes, that's why a NAS is really a last resort for me. I'm looking a bit at StarWind vSAN, which could get around my problem. From what I've read so far it already looks better than S2D: fairly robust, reliable, and used by many companies. It could meet my needs.
RealDeal83@reddit
I switched from a SAN to S2D and regret it. S2D works most of the time but has odd issues sometimes. Our old Nimble worked 100% of the time for 5 years straight, zero downtime.
Cultural_Log6672@reddit (OP)
I wanted to use a NAS, but the problem is that it introduces an obvious SPOF, right?
cbass377@reddit
Any NAS you buy, even the cheapest HP MSA or Dell PowerVault, will have hardware redundancy built in, so the SPOF is not that big a deal.
topher358@reddit
It’s worth mentioning that both of those are SANs, not NAS devices. Unlike a SAN, a NAS is not normally redundant at the hardware and physical-path level.
Cultural_Log6672@reddit (OP)
Are Synology NAS units good for this use?
ledow@reddit
Don't do S2D with 2-node clusters.
Cultural_Log6672@reddit (OP)
And why not S2D with 2 nodes?
ledow@reddit
Unreliable, breaks, goes down all the time.
2-node with iSCSI, or 3-node with S2D.
Don't do 2-node and S2D.
Cultural_Log6672@reddit (OP)
So I should use a shared NAS for my 2 nodes?
ledow@reddit
Depends on what you're trying to do.
I would use either 2-node cluster with other storage, 2-node Hyper-V replication, or 3-node cluster with S2D.
Cultural_Log6672@reddit (OP)
HA is goal number 1 for my cluster, and I have no choice but to use only 2 nodes. That's why I'm trying to find a reliable solution.
ledow@reddit
You won't get HA with 2-node S2D, trust me.
Built lots of those, they are not reliable.
Cultural_Log6672@reddit (OP)
So the solution that best fits the need is a NAS shared between the two nodes? Or I've heard StarWind vSAN could fit?
ledow@reddit
If the NAS fails you lose both nodes, so that's not HA either.
StarWind is supposedly just a better version of S2D. Never used it, so I don't know.
Cultural_Log6672@reddit (OP)
So I'm stuck in my choice for the moment. Yet S2D is natively supported by Microsoft and is supposed to be reliable. But I read contradictory opinions everywhere, whether about a NAS, StarWind, or S2D. Are there any special requirements to respect for S2D?
ledow@reddit
Because all of those solutions are cheaping out, and you're not supposed to cheap out on a HA failover cluster.
You're basically just running software iSCSI on the SAME MACHINES that are trying to access that iSCSI; that's all S2D and StarWind are doing.
And when you only have two nodes and one of those goes down, you're then basically... just running a server off its own local storage, but with huge layers of complication in between.
Clusters are really for several nodes, with independent, redundant storage, not what you're trying to do. S2D is just a cheap bodge if you don't have real iSCSI storage.
But S2D, I can assure you, will work... and then one day... won't. And you will have a HELL of a time with it. If you had more nodes, it wouldn't be a problem, but S2D does not work as advertised with only 2-nodes. You will, at some point, get a complete cluster failure that you can't recover from if you do that.
Either:
- Forget about HA and go for just replication between two nodes (this way, you have a complete working copy of all VMs just one-click away from booting up)
- Use cluster but use independent, redundant storage (i.e. not just a NAS)
- Use cluster and S2D but with more nodes so you can cope with failures and update it safely.
I guarantee you with 2-node S2D, you will one day get an alert that everything has gone down (most likely after a cluster-aware update) and to recover it will start rebuilding your entire storage. And if ANYTHING fails while it's doing that, you can lose the cluster storage.
You can't run enterprise stuff on toy hardware.
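The replication route ledow describes is Hyper-V Replica. A sketch of wiring it up per VM, assuming Kerberos auth and the default port 80; `Enable-VMReplication` and its parameters are the real cmdlet, while the VM and server names are placeholders:

```python
import subprocess

def enable_replica_cmd(vm_name: str, replica_server: str, port: int = 80) -> list[str]:
    """Build the PowerShell invocation that enables Hyper-V Replica for one VM.

    Enable-VMReplication is the real Hyper-V cmdlet; the quoting here assumes
    the VM name contains no single quotes.
    """
    ps = (f"Enable-VMReplication -VMName '{vm_name}' "
          f"-ReplicaServerName '{replica_server}' "
          f"-ReplicaServerPort {port} -AuthenticationType Kerberos")
    return ["powershell.exe", "-NoProfile", "-Command", ps]

if __name__ == "__main__":
    # Hypothetical names: replicate one VM from this host to a second node.
    subprocess.run(enable_replica_cmd("dc01", "hv-node2.example.local"), check=True)
```

After enabling, a one-time `Start-VMInitialReplication` kicks off the first copy; failing over from the replica side is then the "one click away from booting up" ledow mentions.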
Cultural_Log6672@reddit (OP)
If I go to three nodes, will it work with S2D?
Cultural_Log6672@reddit (OP)
Any ideas with 2 nodes?
TechHardHat@reddit
S2D if you want true HA without a single point of failure; a NAS just moves the bottleneck rather than eliminating it. Two nodes with S2D and a file share witness is a cleaner architecture if your workload justifies the licensing cost.
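Pointing the quorum at a file share witness is one cmdlet; `Set-ClusterQuorum -FileShareWitness` is real, the UNC path is a placeholder, and the share must live on a machine outside the cluster (a DC or any third box):

```python
import subprocess

def witness_cmd(share_unc: str) -> list[str]:
    """Build the PowerShell call that sets a file share witness for the cluster.

    The share must NOT be hosted on either cluster node, or it stops being
    a tiebreaker the moment that node dies.
    """
    return ["powershell.exe", "-NoProfile", "-Command",
            f"Set-ClusterQuorum -FileShareWitness '{share_unc}'"]

if __name__ == "__main__":
    # Hypothetical path on a third machine:
    subprocess.run(witness_cmd(r"\\fileserver\ClusterWitness"), check=True)
```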
Cultural_Log6672@reddit (OP)
I will use S2D then.
mcapozzi@reddit
It depends on the budget and the disk throughput requirements of the workload, but S2D works well enough in most small-business situations.
I've done clusters in Hyper-V with S2D and iSCSI SAN. I've also done VMware clusters with vSAN, DAS, NFS, and iSCSI SAN.
Cultural_Log6672@reddit (OP)
For budget and resilience, the choice would rather be S2D then?
mcapozzi@reddit
Yes, that is the budget conscious choice.
I built one for a private school a couple of years ago.
Cultural_Log6672@reddit (OP)
For S2D, in terms of hard drive hardware, are there any specifics to respect?
mcapozzi@reddit
The fastest stuff you can afford, and a lot of it.
Cultural_Log6672@reddit (OP)
Thanks, I'll look into good hard drives.