Took the plunge and switched to Enterprise NVMe - now wondering what I'm doing wrong, as performance is awful.

Posted by ADynes@reddit | sysadmin

So it was time for a server change-out, replacing a Dell PowerEdge R650 that had 6x 1.92TB 12Gbps SAS SSDs in a RAID 10 array on a PERC H755 card. Had no issues with the server; we proactively replace at 2.75 years and have the new one up and running by the time the old one hits 3 years, at which point it gets moved to our warm backup site to serve out the next three years sitting mostly idle, accepting Veeam backups and hosting a single DC. Looking at all the flashy Dell literature promoting NVMe drives, it seemed I would be dumb not to switch! So I got a hold of my sales rep and asked to talk to a storage specialist to see how close the pricing would be.

Long story short, with some end-of-quarter promos the pricing was in line with what the last server cost me. Got a shiny new dual Xeon Gold 6442Y with 256GB RAM and all the bells and whistles. But the main thing is the 8x 1.6TB E3.S data center grade NVMe drives, each rated at 11GB/s sequential read, 3.3GB/s sequential write, 1610k random (4K) read IOPS, and 310k (4K) write IOPS. Pretty respectable numbers, far outpacing my old drives' specs by a large margin. They are configured in one large software RAID 10 array through a Dell PERC S160.

And here is the issue. Fresh install of Windows Server 2025, only role installed is Hyper-V. All drivers freshly installed from Dell. All firmware up to date. Checked and rechecked any setting I thought could possibly matter. Go to create a single 200GB VM hard drive and the operation takes 5 minutes and 12 seconds. I watch Task Manager and the disk activity stays pegged at 50%, hovering between 550MB/s and 900MB/s, nowhere near where it should be.
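For anyone who wants to reproduce this outside of Hyper-V, a dumb sequential file-write test takes VHDX creation out of the picture. Here's a rough Python sketch of what I mean (the target path and file size are just placeholders, point it at the array in question):

```python
import os
import time

# Placeholder path on the array under test; adjust for your setup.
TARGET = r"D:\writetest.bin"
CHUNK = 1024 * 1024          # 1 MiB per write
TOTAL = 8 * 1024**3          # 8 GiB total, big enough to get past caches

buf = os.urandom(CHUNK)      # incompressible data so nothing cheats
start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    written = 0
    while written < TOTAL:
        written += f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # make sure it actually hit the disk
elapsed = time.perf_counter() - start

print(f"{TOTAL / 1024**2 / elapsed:.0f} MB/s sequential write")
os.remove(TARGET)
```

If that number also comes back in the 550-900MB/s range, the problem is the storage stack and not anything Hyper-V is doing.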

Now on my current/old server the same operation takes 108 seconds. The old drives are rated for 840MB/s sequential read and 650MB/s sequential write. In that server's 6-drive RAID 10 that would be 650 x 3 = 1950MB/s for a sequential write operation. So a 200GB file = 200/1.95 = 102.5 seconds (theoretical max), and the math works out per the drive specs. But on the new server the sequential write is 3.3GB/s, which x 4 drives is a ridiculous 13.2GB/s. I should be writing the hard drive in 200/13.2 = ~15 seconds, yet it's taking almost 20 times that.
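Putting the theoretical math for both arrays side by side (assuming RAID 10 sequential writes scale with the number of mirrored pairs, which is the assumption above):

```python
# Theoretical time to sequentially write a 200GB file, per rated drive specs.
# Assumes RAID 10 write throughput = (drives / 2) mirrored pairs x per-drive write.
file_gb = 200

old_gbps = 0.650 * (6 // 2)   # 6x SAS SSD @ 650MB/s  -> 1.95GB/s
new_gbps = 3.3   * (8 // 2)   # 8x NVMe    @ 3.3GB/s  -> 13.2GB/s

print(f"old array: {file_gb / old_gbps:6.1f} s")   # ~102.6 s (observed: 108 s)
print(f"new array: {file_gb / new_gbps:6.1f} s")   # ~15.2 s (observed: 312 s)
```

The old array lands within a few percent of its theoretical number; the new one is off by a factor of ~20.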

Is my bottleneck the controller? And if so, do I yell at the storage specialist who approved the quote, or myself, or both? Anyone have any experience with this who can tell me what to do next?