Windows Server native data deduplication - Does anybody actually use it?
Posted by Bob_Spud@reddit | sysadmin | View on Reddit | 28 comments
Windows Server data/block deduplication has been around since Windows Server 2012, yet it appears not many people use it.
Out of curiosity I did some testing on it and found it not that efficient at deduping data. It is also not an inline dedupe; it runs as a scheduled task.
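For anyone who wants to poke at it themselves, here is roughly what that looks like; a minimal sketch using the in-box Data Deduplication cmdlets, with E: as a placeholder volume:

```powershell
# Install the Data Deduplication role service (Windows Server)
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable post-process dedupe on an example volume (E: is a placeholder)
Enable-DedupVolume -Volume "E:" -UsageType Default

# Dedupe is not inline: it runs as scheduled background jobs
Get-DedupSchedule

# Kick off an optimization pass manually instead of waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization

# Check the resulting savings once the job has finished
Get-DedupVolume -Volume "E:" | Format-List Volume, SavedSpace, SavingsRate
```

The schedule and the Start-DedupJob step are the point here: nothing gets deduplicated at write time.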
randomugh1@reddit
It seems good until something happens. The smallest corruption wrecks the entire filesystem.
Once you have a corrupt filesystem you learn you can't restore, because you can't fit 200 GB of data onto a 100 GB volume. Since that rules out overprovisioning (the only real point of dedupe), there's no real benefit.
It also consumes a lot of RAM and can starve the rest of the system. So many performance problems lead back to dedupe, and it's slower than non-dedupe storage. You have to monitor the event log for filesystem corruption events, and you have to manage and keep an eye on the dedupe job schedule.
Again you shouldn’t overprovision so what’s the point?
In short, friends don’t let friends use dedupe.
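If you end up running it anyway, the monitoring I mentioned boils down to roughly this; the log names below are the standard dedup event channels and E: is a placeholder:

```powershell
# Watch the dedupe job schedule so you know when optimization/scrubbing will hit the box
Get-DedupSchedule

# Pull recent warnings/errors from the dedupe operational log (errors here often mean chunk store trouble)
Get-WinEvent -LogName "Microsoft-Windows-Deduplication/Operational" -MaxEvents 50 |
    Where-Object LevelDisplayName -in @("Error", "Warning")

# Run a scrubbing job to make dedupe check the chunk store for corruption
Start-DedupJob -Volume "E:" -Type Scrubbing
```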
buzz-a@reddit
Yes, have used it a bunch.
We found it's fine for smaller data sets, but once the data set gets large it becomes a problem.
The deduplication "refresh", where it works out which data is duplicated and stubs it out, is too slow to keep up with even a modest rate of change. Once it falls behind it's actually worse than not having dedupe at all.
In the end we are only using it on highly compressible data like SQL bak files.
For everything else it was too much overhead and work to maintain.
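For what it's worth, spotting it falling behind boils down to roughly this; a sketch using the in-box status cmdlets, with E: as a placeholder:

```powershell
# Compare how many in-policy files exist vs. how many have actually been optimized;
# a persistently large gap means the optimization job is not keeping up with churn
Get-DedupStatus -Volume "E:" |
    Select-Object Volume, InPolicyFilesCount, OptimizedFilesCount, LastOptimizationTime

# See whether an optimization job is currently running and how far along it is
Get-DedupJob
```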
johnno88888@reddit
I used it on an S2D cluster. We had 30TB of Exchange data that, for some reason, someone who wasn't me thought it was a good idea not to back up.
The disk became full
Dedupe data became corrupt
We no longer have the exchange data
Hunter_Holding@reddit
No DAG? No LAG?
With 30TB of online Exchange databases you need a minimum of 120TB of storage (raw, physical, straight pass-through, non-RAID) across four non-virtualized Exchange servers, i.e. four full copies of the data (and of course you really needed more than that), in order for Exchange Native Data Protection and non-crap backup routines to function well.
Exchange sings if you do it by the book, but almost no one does... Definitely an architecture issue there that caused that data loss, though. I'd suspect some write caching fuckery there possibly, as just filling the disk on an Exchange DB should take it offline recoverably.
johnno88888@reddit
None of that. Luckily it was an archive and we may have just got by. It wasn't the Exchange disk filling up, it was the S2D iSCSI role disk filling up.
Hunter_Holding@reddit
Yea, that's what my last line was all about - the s2d volume filling up.
Glad to hear it worked out well enough, at least. But the point about the exchange setup was mainly "exchange done right couldn't have had this problem happen...."
Burgergold@reddit
Not since storage units started offering dedupe and compression at a larger scale.
Hunter_Holding@reddit
I mean, at $work we have a few petabytes (available/usable, not just raw) of storage, and Windows *is* the storage unit, providing iSCSI, NFS, and SMB using WSS (Storage Spaces) and all its various functions and components as needed.
It replaced NetApp, Data Domain/EqualLogic kit, and a bunch of other storage solutions across a wide variety of platforms. The iSCSI and NFS volumes mainly back the non-Hyper-V farms that are left. (We opted for Hyper-V pre-Broadcom with a planned slow-roll migration, for better vCPU density (less hardware overall) and better local storage performance for site-local systems. We have about 4k of our 6k VMs migrated so far; the storage aspect actually came later in the game, as we were initially running the Hyper-V hosts on existing iSCSI storage.)
Burgergold@reddit
I think your solution would count as a storage unit.
My point is it's better to activate those features at a larger scale than on each individual small workload.
czj420@reddit
It breaks file indexing, since deduplicated blocks are not indexed, leaving Windows Search incomplete.
extremetempz@reddit
General file server, 16TB raw and 9TB with dedupe
Walbabyesser@reddit
Using it - works great on file server
tech_is______@reddit
I use it all the time
Vicus_92@reddit
I have a few clients with a tonne of large point cloud scans, and engineering project folders.
For these environments, I'm getting around 50% dedup with no noticeable impact on users.
If you want to know in advance, you can run a utility to see how much space saving you'll achieve by enabling it. If it's only 10%, don't bother; it does come at a performance and risk cost, in that it's another thing that can potentially go wrong.
If it's a significant space saving, it could be worth doing. It improved our backup times significantly, which was why we did it.
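The utility in question is presumably DDPEval.exe, which ships with the dedup feature; a minimal example (the path is just a placeholder):

```powershell
# DDPEval.exe (installed with the Data Deduplication feature) estimates savings
# for a folder or volume without enabling dedupe on it
& "$env:SystemRoot\System32\ddpeval.exe" "E:\Shares\Projects"
```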
malikto44@reddit
I ran it, and it became a huge performance hit. It was the most usable when I was making images for a VDI system, and when I tinkered with the golden image, I'd save it to a volume that deduplicated, which gave excellent results.
Even though ReFS has a good rep for deduplicating, I'd rather hand that off to the SAN or NAS, even if the SAN/NAS is just doing ZFS on the backend.
I have been bitten before by Windows's deduplication, losing TB of data, so if I do use it, I make sure to have good backups, and I use it very sparingly because of the performance hit.
ChangeWindowZombie@reddit
I use it on our Windows file servers and see around a 45% dedupe rate. Users like to copy the same data to multiple network locations for reasons, and this has shown me just how much they do it. My current 9TB volume would be around 14TB if it were fully hydrated.
The only issue I have with this feature is that it complicates data migration to a new volume if you want to keep the new volume as small as possible. You have to migrate a batch of data, let dedupe reduce its size, migrate more data, and rinse and repeat until complete.
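In case it helps anyone, that rinse-and-repeat amounts to something like this; the paths are hypothetical and the robocopy switches are just one reasonable set:

```powershell
# Copy one top-level folder at a time to the new, smaller volume
robocopy "\\oldserver\share\Dept-A" "E:\Share\Dept-A" /E /COPY:DAT /R:1 /W:1 /MT:16

# Force an optimization pass so the copied data is deduplicated before the next batch
Start-DedupJob -Volume "E:" -Type Optimization -Wait

# Confirm the savings landed, then move on to the next folder
Get-DedupVolume -Volume "E:" | Format-List SavedSpace, SavingsRate
```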
g00nster@reddit
Not anymore, it's more efficient to handle this at the SAN.
Sylogz@reddit
We run it on our fileserver and it's golden.
3.52 TB Capacity
2.8 TB Used
731 GB Free
54% Deduplication Rate
Deduplication Savings 3.38 TB
UnrealSWAT@reddit
I used it years ago, quickly discovered that the amount of changed-block noise it was generating was ruining the efficiency of my VM backups, and promptly stopped using it. I gained space in production, but in exchange I lost space on the backup side and my backup run times went up.
sambodia85@reddit
We regularly see 35-45% on some of our file shares, but that's mostly because lots of documents are generated from templates.
Saw up to 80% on an FSLogix share back in the day, probably because most people's OSTs are just full of the same emails from bulk distribution lists.
It’s really good where it’s good, you just can never ever let it run out of space. And never mount a restore point in the same server.
WillVH52@reddit
Yes! Have been using it with Veeam Backup repositories for several years. Current dedupe values are 83 percent saving on space. Storing 1 TB of data as 209 GB of data!
Have previously run into small issues with data corruption but this was caused by Sophos AV interfering with some of the 1GB chunk files.
andrea_ci@reddit
Yes, and it works. BUT it depends on what data you're storing.
For generic files? I've seen a 25-40% deduplication rate, and that's A LOT.
For "updates" directories? I've seen 80% (but that's an edge case: there are a LOT of duplicate files, because software updates are mostly small edits).
Bob_Spud@reddit (OP)
When I checked it out and compared its dedupe against free backup apps using the same data, its dedupe wasn't the best.
Backup application dedupe doesn't have the same requirements; one of the key differences is that the speed of rehydrating data is not critical. In Windows Server the speed of reassembling the data matters much more, and that may explain the efficiency difference.
andrea_ci@reddit
The more you dedupe and compress, the bigger the performance impact.
Yep, backups *can* be slow.
Skrunky@reddit
Depends on the type of data you like to keep. We have an archive drive with GIS data and Images. We make sure to exclude database files and others that don't play nice with file-level dedupe. I think the space savings on that drive are around 8%.
Other drives with terabytes of Office app files get much better compression. On those drives we're seeing 35% dedupe rates.
Also depends on what Server OS version you use. We went from 2012 R2 to 2022 and picked up a few extra percent in space savings.
It's horses for courses though. Not everyone needs de-dupe, and sometimes it's a cheaper way of making storage go a bit further.
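For reference, the exclusions mentioned above are set per volume; a minimal sketch, where the folder path and extensions are hypothetical examples:

```powershell
# Keep database files and already-compressed data out of dedupe on this volume
Set-DedupVolume -Volume "D:" `
    -ExcludeFolder "D:\Archive\SQLBackups" `
    -ExcludeFileType "mdf", "ldf", "zip"
```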
Bob_Spud@reddit (OP)
Image, encrypted, and compressed files will not dedupe that well. A 35% dedupe saving for regular files is on the low side, but it's better than no savings and it doesn't cost anything extra.
autogyrophilia@reddit
35% dedupe savings is absolutely massive.
autogyrophilia@reddit
There are two types of data deduplication you can do in Windows as of Windows 11 / Server 2025.
There is the server one, which uses a minifilter: it essentially splits all data into chunks and tries to find repeated ones.
It works very well but it's very expensive. It's good for archival and document shares, given how much users tend to store repeated info.
It can do a few things that other dedupers can't, such as detecting embedded headers (for example, images reused across Office XML documents and other ZIP files). Or at least it claims to be able to.
However, if you are in Windows Server 2025 and are comfortable using ReFS, I would advise using ReFS native deduplication.
It is not very well documented because reasons, but it isn't hard, and it works very well.
https://learn.microsoft.com/en-us/powershell/module/microsoft.refsdedup.commands/?view=windowsserver2025-ps
I don't use it on any servers because we do the compression and dedupe outside the VMs, but I have successfully used it on Windows 11 computers without issue.
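If you want to try the ReFS route, a minimal sketch follows; the cmdlet names are from the linked Microsoft.ReFSDedup.Commands module, but since the documentation is thin, verify them with Get-Command on your build (E: is a placeholder ReFS volume):

```powershell
# List what the module actually exposes on your build
Get-Command -Module Microsoft.ReFSDedup.Commands

# Enable ReFS-native dedupe (optionally with compression) on a ReFS volume
Enable-ReFSDedup -Volume "E:" -Type DedupAndCompress

# Run a dedupe pass and then check the savings
Start-ReFSDedupJob -Volume "E:"
Get-ReFSDedupStatus -Volume "E:"
```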