Need to migrate some drives on a Hyper-V VDI over to new drives/raid
Posted by Cj_Staal@reddit | sysadmin | 3 comments
I have a Hyper-V RDS setup using a VM pool. That VM pool is on the D: drive (nothing else on it), which is a RAID 5; C: and D: are two separate arrays. I'm switching it to an SSD RAID 1, but there are no slots left. My plan: stop all services, move the data to a temp folder on the C: drive, power off, remove the old D: array, add the new array, log back in, bring the new disk online, assign it the letter D:, move everything back, and re-enable the services. Theoretically this should work. Am I overlooking anything?
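The shutdown-and-copy-out steps above could be sketched roughly like this in an elevated PowerShell session (the temp path and log path are placeholders, not anything from the original post):

```shell
# Sketch only: stop every running VM cleanly before touching the storage.
Get-VM | Where-Object State -eq 'Running' | Stop-VM -Force

# Copy the pool off D: with ACLs, ownership, and audit info intact.
# /MIR mirrors the tree, /COPYALL copies data + attributes + security,
# /R:1 /W:1 keeps robocopy from retrying a locked file for minutes.
robocopy D:\ C:\VMPool-Temp /MIR /COPYALL /R:1 /W:1 /LOG:C:\robocopy-out.log
```

Stopping the VMs first matters because copying a running VM's VHDX gives you an inconsistent image.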
DWC-1@reddit
Hyper-V tracks each VM's configuration, including the paths to its virtual disks, in per-VM configuration files (.vmcx/.xml) managed by the Virtual Machine Management Service. After moving files back to D:, you'll need to verify that the VM paths are correctly resolved. If Hyper-V has paths pointing at the old D: drive layout that no longer match, the VMs won't start. Check the VM settings in Hyper-V Manager to confirm they point to the correct locations on the new D: drive.
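One way to do that verification from PowerShell rather than clicking through Hyper-V Manager (a sketch, run elevated on the Hyper-V host):

```shell
# List where each VM's configuration and checkpoints live.
Get-VM | Select-Object Name, Path, ConfigurationLocation, SnapshotFileLocation

# Confirm every attached virtual disk path actually resolves on the new D: array.
Get-VM | Get-VMHardDiskDrive | Select-Object VMName, Path
```

Any `Path` still pointing at a location that doesn't exist on the new array is a VM that won't boot.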
When you copy files, ensure NTFS permissions are preserved. Use `xcopy /E /I /Y /O /X` or `robocopy` with appropriate flags to maintain ownership and permissions. The rest is basic stuff: having enough free space on C: to hold the VM pool data, and factoring in extra time for the RAID 1 to initialize. Depending on your hardware RAID controller, initializing a new RAID 1 array on large SSDs might take longer than expected, even before you restore data.
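For the copy-back step, one possible robocopy invocation that preserves security info (the temp and log paths are placeholders):

```shell
# /E        copy subdirectories, including empty ones
# /COPYALL  copy data, attributes, timestamps, ACLs, owner, and audit info
# /DCOPY:T  preserve directory timestamps too
# /R:1 /W:1 fail fast on locked files instead of retrying for minutes
robocopy C:\VMPool-Temp D:\ /E /COPYALL /DCOPY:T /R:1 /W:1 /LOG:C:\robocopy-back.log
```

Note that `/COPYALL` needs an elevated prompt (it implies copying ownership and auditing info, which requires backup/restore privileges).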
Cj_Staal@reddit (OP)
Basically, I'm going to make the "new" D: drive an exact copy of the old D: drive, same paths and all, hoping that negates any repathing issues. Also, it's not just a normal VM setup; it's being handled by Windows RDS / VM assignment.
DWC-1@reddit
It should be fine, just verify after the migration:
Open Server Manager -> Remote Desktop Services -> Collections and confirm all your pooled VMs show as healthy/available.
If anything looks off, you can refresh or (worst case) use PowerShell to re-verify the collection members.
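That worst-case PowerShell check might look something like this (a sketch; the broker FQDN is a placeholder, and it assumes the RemoteDesktop module is available on the management server):

```shell
# Re-verify the pooled collections and their member VMs from PowerShell.
Import-Module RemoteDesktop

# List all collections known to the connection broker.
Get-RDVirtualDesktopCollection -ConnectionBroker broker.contoso.com

# List the pooled VMs and which collection each belongs to.
Get-RDVirtualDesktop -ConnectionBroker broker.contoso.com |
    Select-Object VirtualDesktopName, CollectionName
```

If a VM is missing from the list or shows in the wrong collection, that points at a broker-side mismatch rather than a Hyper-V storage problem.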