iSCSI and S2D on same SET vSwitch (Hyper-V 2025)
Posted by elaci0@reddit | sysadmin | 12 comments
We are building a new Hyper-V 2025 cluster on two Dell blade chassis. The concern is storage: we could use a classic iSCSI connection to a NetApp, but I would rather not miss out on the S2D feature, given that each host has 2TB of NVMe. Unfortunately, each of the eight hosts has "only" 2x NIC (10/25Gb Broadcom) + 2x NIC (10/25Gb Intel), so even though the plan is to create two SET vSwitches, the question is whether one vSwitch can handle both S2D and iSCSI networking.
Can anyone advise?
Thanks!
Zealousideal_Fly8402@reddit
You really don't want to deploy S2D without using Dell's ReadyNode solution with ProSupport....
elaci0@reddit (OP)
Sorry, I don't get it. They have already bought these blades, with 4 hosts inside each box, and each box has 2x 2TB NVMe disks. Why not use them as S2D for my site B (read my edit in OP)?
Zimfi@reddit
Only do S2D if you have RDMA. Your network must be lossless. RoCEv2 will give the best results, but also slightly more headache for configuration.
You can create one or more vNICs on a SET vSwitch (made with PowerShell or VMM). Dedicate one or more vNICs to storage. iSCSI can go over a separate vNIC.
Configure queues and MTU appropriately. Know what you're getting into with write multiplexing and so on with S2D.
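A rough sketch of that setup in PowerShell, assuming hypothetical adapter names (`pNIC1`/`pNIC2`, `SMB1`/`SMB2`, `iSCSI1`) — adjust names and the jumbo-frame value to your environment:

```powershell
# Sketch: SET vSwitch with dedicated storage vNICs (adapter names are placeholders)
New-VMSwitch -Name "SETswitch1" -NetAdapterName "pNIC1","pNIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Host vNICs: two for SMB/S2D traffic, one for iSCSI
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch1" -Name "SMB1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch1" -Name "SMB2"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch1" -Name "iSCSI1"

# Jumbo frames on the physical NICs (the advanced property keyword varies by driver)
Set-NetAdapterAdvancedProperty -Name "pNIC1","pNIC2" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# RDMA on the SMB vNICs so S2D can use SMB Direct
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"
```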
Leverage the CSV block cache in your cluster to boost read speeds. Writes cost more for each mirror copy you have.
Do not use parity for S2D.
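The CSV block cache mentioned above is a one-line cluster setting; the size is in MB, and the right value depends on how much host RAM you can spare (2048 here is just an illustrative figure):

```powershell
# Sketch: enable the CSV in-memory read cache (size in MB)
(Get-Cluster).BlockCacheSize = 2048
```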
ultimateVman@reddit
You absolutely MUST configure pinning if you are using iSCSI in a SET.
If both vNICs are traversing the same pNIC path and that connection goes down (for a switch patch, for example), both vNICs have to fail over to the other pNIC path. That's a 1-2 ping blip, and storage connections are critical to the millisecond. The time it takes for the vNICs to fail over WILL kill the storage. You must pin the storage vNICs, one to either side.
This is one of the biggest WTF moments admins have with Hyper-V and Failover Clustering, and why a lot of people put down Hyper-V as "inferior." It's just different, and the nuances are hidden in configurations like this.
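The pinning described above is done with `Set-VMNetworkAdapterTeamMapping`, which maps a host vNIC to one specific team member of the SET vSwitch. A minimal sketch, assuming two hypothetical iSCSI vNICs and physical adapters named `pNIC1`/`pNIC2`:

```powershell
# Sketch: pin each iSCSI vNIC to one physical team member, one per side
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "iSCSI1" `
    -PhysicalNetAdapterName "pNIC1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "iSCSI2" `
    -PhysicalNetAdapterName "pNIC2"
```

With this mapping, MPIO at the iSCSI layer handles path failure instead of the SET failover, so a pNIC outage doesn't blip both storage paths at once.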
This is also why I tell everyone to steer clear of iSCSI and just do FC. Hands down 100x cleaner in networking config and reliability.
elaci0@reddit (OP)
Thanks, I hadn't thought about the 1-2 ping blip. I will look into how to pin pNICs for iSCSI. At this point I'm not sure whether it would be better to go with a traditional pNIC reservation for iSCSI only and use the other SET vSwitch for mgmt, live migration, guest and heartbeat traffic, leaving out S2D for now. On the other site B (see my edit in OP) I am thinking of using S2D with its own pNICs.
matg0d@reddit
I have seen this advice and followed it when configuring SETs on a Hyper-V host, but what would you do when using iSCSI inside a VM? Like a SQL Server VM? I don't see an option to pin a given vNIC on a VM to a given pNIC on the host, and how would you do it in a cluster? In my case, at least, all the pNICs have the same name on the nodes.
ultimateVman@reddit
It's generally ill-advised to connect iSCSI inside a VM. You're better off connecting the iSCSI to the host and creating a VHD for the VM.
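The host-side approach suggested above might look roughly like this; the portal IP, volume path, and VM name are all placeholders:

```powershell
# Sketch: connect iSCSI at the host, then hand the VM a VHDX instead
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.10"   # placeholder portal IP
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# After the LUN is formatted and added as a cluster volume,
# carve a VHDX out of it and attach that to the VM
New-VHD -Path "C:\ClusterStorage\Volume1\sql-data.vhdx" -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName "SQL01" -Path "C:\ClusterStorage\Volume1\sql-data.vhdx"
```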
elaci0@reddit (OP)
At the moment we are just in the design phase. The hosts are Dell C6620s and their NICs support RDMA, but they differ in brand, firmware, etc. Because I am new to these SET vSwitches (and still think with "NIC teaming" in mind), I suppose I will need to create two SET vSwitches, for which, I guess, redundancy will only be between the 2 NICs inside each vSwitch.
Would something like this work?
1a--MGMT  1b--LiveMig  1c--Heartbeat
2a--iSCSI  2b--S2D (not sure if this is possible or advisable)
Sorry, the hosts have 2x 1.8TB available, enough for a nice S2D with good redundancy between hosts, but we are not confident in this storage, so we would still use iSCSI for most of the VMs and use S2D for... replicas? backups? running high-performance VMs?
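The two-vSwitch layout sketched above could be expressed roughly like this, assuming placeholder adapter names for the Broadcom (`bcm1`/`bcm2`) and Intel (`int1`/`int2`) pairs:

```powershell
# Sketch of the proposed layout (adapter and switch names are placeholders)
# vSwitch 1 (Broadcom pair): MGMT, Live Migration, heartbeat
New-VMSwitch -Name "SET-Mgmt" -NetAdapterName "bcm1","bcm2" -EnableEmbeddedTeaming $true
Add-VMNetworkAdapter -ManagementOS -SwitchName "SET-Mgmt" -Name "MGMT"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SET-Mgmt" -Name "LiveMig"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SET-Mgmt" -Name "Heartbeat"

# vSwitch 2 (Intel pair): iSCSI plus S2D/SMB traffic
New-VMSwitch -Name "SET-Storage" -NetAdapterName "int1","int2" -EnableEmbeddedTeaming $true
Add-VMNetworkAdapter -ManagementOS -SwitchName "SET-Storage" -Name "iSCSI1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SET-Storage" -Name "SMB1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SET-Storage" -Name "SMB2"
```

As noted elsewhere in the thread, the iSCSI vNICs on the storage switch would still need to be pinned to specific team members.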
ultimateVman@reddit
Your setup will work.
For 2a, you WILL need to pin the iSCSI connections. See my other comment for why.
You should check out the r/HyperV sub for more info on detailed networking configurations.
ledow@reddit
Please, please, please, do NOT do 2-node S2D on Hyper-V cluster.
3-node, fine.
2-node without S2D - e.g. iSCSI, fine.
But don't do 2-node S2D for a cluster.
Just search this sub and you'll find a lot of other people saying the same.
matg0d@reddit
I see this advice all the time here, but I have to ask: is this without a quorum witness (SMB/cloud), or does S2D just break on a 2-node cluster even with a quorum?
We have a 4-node cluster with everything doing P2P without a switch, and we still use a quorum witness just to be safe; it's been going for 3 years.
We're planning other clusters for smaller but CPU-heavy demands, using two nodes + an SMB quorum witness.
ledow@reddit
Even with a witness, including a local file share witness on a nearby but isolated machine.
Twice, at two different sites, with two entirely different setups (created by two different, unrelated people), and in my case SEVERAL times over, as we had to rebuild repeatedly.
3-nodes and above, you're okay.
But with 2 nodes and a witness, it will work fine until one day it just DOESN'T, and no matter what you think the quorum should be and what should stay up... it doesn't.
Remove S2D from the equation and it never happens. Add another node and it never happens. But 2-node S2D shouldn't be a supported/allowed configuration.