Sysadmin wants every Windows server to be a fileserver for redundancy?
Posted by iingot@reddit | sysadmin | View on Reddit | 265 comments
I'm still fairly new to this field, so please forgive me if I'm being an idiot. I am being trained to take the sysadmin's position at a small company because he is retiring. Every server, including the domain controllers, has a 2 TB virtual drive added in Proxmox, and these serve as the network file shares.
Today I asked why we don't make a big NAS, connect it to one server via iSCSI and put all of the file shares there, so we could reboot the DCs without knocking users off and also so we don't have to constantly maneuver files around on a bunch of 2 TB virtual drives.
He says that, if we use a big NAS, the motherboard could die and we would lose every share while we restored the backup. He says that it's better for redundancy if it's split up across multiple servers and multiple drives. Am I crazy for thinking a NAS would be better? What are some arguments I can present that a NAS would be a better solution? (Management is also against anything cloud-based and everything must be self-hosted.)
Single-Virus4935@reddit
I stopped at "Including DCs". It's just against any recommendation. DCs only do DC stuff. CA only does CA stuff.
Hamburgerundcola@reddit
Never heard of the prodDcFileCaAppExcSqlDhcp01?
WastedFiftySix@reddit
SBS 2003 has entered the chat
Lazy_Owl987@reddit
I hate you with every fiber of my being. I thought I had left that BS behind.... lol
WastedFiftySix@reddit
SBS is like Hotel California, I'm afraid.
Lazy_Owl987@reddit
God I know! Used to have clients that would refuse to move away from it even after one got hit by an external attack that left them crippled.
someguy7710@reddit
I just threw up in my mouth. What a terrible product MS came up with there. And yes I had to support several of those over the years
21stCenturyGW@reddit
SBS reminded me of the concept of Integrated Apps back in the day™, like Lotus Symphony and Ashton-Tate Framework. One app that was a word processor, spreadsheet and database, and sometimes other stuff as well.
It's good that we don't have that concept any more.
Oh, hi Microsoft Fabric…
anonymousITCoward@reddit
We still have an SBS 2011 server in production... yay?
WastedFiftySix@reddit
Don't worry about it, it has been end of life for only six years. I'm sure no security flaws have been discovered during that time.
wbrd@reddit
It's not that they haven't been discovered, it's just that the code is written in cursive and the kids today can't read it.
Sid_the_Bear@reddit
Okay, *that* was funny.
currancchs@reddit
I thought I was pushing it running it until about 3 years ago!
Danowolf@reddit
The Power of Christ Compels You!
The Power of Christ Compels You!
ShermansWorld@reddit
.. haha .. I'm taking one down this weekend. Gotta admit, I've had two cars from new to sold in this time of SBS2011. Both server and SBS were daily drivers...
Mr_Kill3r@reddit
Look, SBS was a great learning tool, mostly of what not to do, but that is beside the point.
WastedFiftySix@reddit
I feel your pain, brother. Will never forget having to take a complete customer down because some Exchange issue required a reboot to fix. I'm honestly baffled this monstrosity didn't go end of life until 2015!
ghjm@reddit
I deployed a lot of SBS 2003 in the mid to late 2000s. It was a great product when used as intended: for small offices with limited connectivity, where there only is one server, so it has to do everything. Connectivity is much better now, and you can do things in the cloud, so this use case is largely extinct. But at one time this described a huge number of small businesses.
I'm sure they didn't EOL it earlier just because all those electricians still running the SBS 2003 server I deployed in 2005 would have raised holy hell. Nobody should have been using it for new installs after AWS existed, unless installing on a mountainside where the only connectivity is a scratchy modem line.
mustangsal@reddit
Don't fib... From workgroup to domain management in a day. It was amazing for the first week after install and configuration... Then the nightmare of managing and maintenance appeared. Oh... and the fun of "You want to restore a backup? You silly child."
EvilRSA@reddit
In a stone-quiet house, I just laughed out loud...
Waiting for someone to ask what was so funny.
4runninglife@reddit
SBS? We going back to Windows NT, Novell even.
Zealousideal_Ad642@reddit
Eye started twitching when I saw that
atomicwrites@reddit
Oh rip, we just took over a new client and they had a Server 2016 DC (we are moving them to 2025 currently), but its AD is a big mess of SBS remnants: all the GPOs from SBS are still active, and the users and computers are in the SBS MyCompany OU tree. And apparently SBS would add 10+ domain admin accounts.
WastedFiftySix@reddit
Sounds like multiple decades of neglect to me. Best of luck!
Complex_Ostrich7981@reddit
In a very crowded field, this might be the single worst thing Microsoft ever built.
Loudergood@reddit
That's a high bar with Clippy and Windows Me around.
mineral_minion@reddit
How dare you, Clippy was ahead of his time.
Royal-Wear-6437@reddit
It's just that it wasn't the present era
anonymousITCoward@reddit
Thanks that twitch behind my eye is back...
WastedFiftySix@reddit
You're welcome!
TKInstinct@reddit
I worked at a place using a DC as a WSUS server too.
rairock@reddit
Yea I think there are lots of them in small companies, right? I have seen a couple in two companies of ~50 and ~300 employees.
gogreenenj@reddit
You probably don't need to label the file server if they're all file servers, so it's just DcCaAppExcSqlDhcp01
catnip-catnap@reddit
Gotta put a usb license dongle and app in there too somewhere
theevilapplepie@reddit
"We remote in, it's secure because it runs on the server!"
Deadpool2715@reddit
Probably also a physical server instead of a VM
oznobz@reddit
I've seen multiple companies name this server "Atlas" because it holds up the world.
Like it would be a funny thing one company did, but it's depressing that multiple companies came to the same naming structure.
CptBronzeBalls@reddit
That is the perfect naming convention. No notes.
healious@reddit
You forgot dev and test on there too, is this amateur hour or
CeC-P@reddit
We have 4 of those
AggravatingAmount438@reddit
This made me huff air through my nose.
Very humorous, 9/10.
Single-Virus4935@reddit
Heard of that and done that. My first job was a single-server setup at an SMB. It grew to best practice later ;-)
Hamburgerundcola@reddit
How could we forget the print server?
vrtigo1@reddit
Yep, DFS was my first thought if they really want file sharing redundancy.
But OP said they're a small company. They didn't mention the rest of their infrastructure, but it seems like it'd be better to have a single fileserver VM on a redundant hypervisor cluster. That way everything on the cluster benefits from the redundancy.
If you really need zero downtime then deploy DFS, but it's a lot of headache to deal with and most small businesses can tolerate an hour or two of downtime per month for patching, etc.
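To put rough numbers on the "can tolerate an hour or two of downtime" point, here's a back-of-envelope sketch in Python (the availability targets are illustrative, not from the thread):

```python
# Rough sketch: monthly downtime implied by an availability target,
# to judge whether DFS's extra complexity is worth it for a small shop.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def downtime_minutes(availability: float) -> float:
    """Expected minutes of downtime per 30-day month at a given availability."""
    return MINUTES_PER_MONTH * (1 - availability)

for a in (0.99, 0.999, 0.9999):
    print(f"{a:.2%} availability -> {downtime_minutes(a):6.1f} min/month")
```

A single well-maintained file server VM on a redundant cluster lands comfortably in the middle of that table for most small businesses.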
Lopoetve@reddit
Primary CA is turned off. Tucked in a corner. Locked in a safe. In a faraday cage. Covered in concrete. Guarded by the fifth mountain. Your SECONDARY CA does signing if you're doing it right. It sure as hell isn't a file server! It's turned OFF. But… at the very least - it's not a file server!
DCs?!? Jesus.
ArgonWilde@reddit
Heck, your primary CA can literally be a cold stored hard drive that you boot a laptop off of whenever you need to renew the certificate. With a mirrored copy stored offsite of course.
Internet-of-cruft@reddit
I went through this exercise of enumerating failure scenarios, operational concerns (renewing certs, invalidating certs, etc.)
In my honest, very opinionated opinion? I don't see the value for a root CA if the environment is small.
It's extra steps to set up and in practically every scenario you're going through the same machinery if you have to rotate the direct subordinate signing CA.
But, we also heavily automate provisioning a CA so if a private key compromise happened and we chose to nuke it, it doesn't really take us a ton of effort to be operational again.
TheDevauto@reddit
Yeah this is bad all around. This sounds like a suggestion from 1999 that would be followed by laughter and an extra spot on call.
There are standard ways to build fileserver capabilities that have been around forever. Two servers and failover using your choice of strategy.
Your old sysadmin is a hack.
charleswj@reddit
Yep
NoradIV@reddit
I have a small remote site with like 6 users. I have a single server with a DC, file services, 2 printers and a VM under Hyper-V.
Is it really that bad?
Single-Virus4935@reddit
Conversation with Gemini
What concrete technical reasons are there, besides "every role increases the risk", not to install other roles on a Windows DC
Please keep it short, in English, as a Reddit comment
CarnivalCassidy@reddit
That means buying skateboard shoes, and reading comic books, right? Unless DC stands for something else that I'm not aware of.
^/s
Single-Virus4935@reddit
Domain controller
Aim_Fire_Ready@reddit
I've only ever worked in SMB and never really managed a DC, and even I thought making a DC a file server sounded foolish. Yikes!
2CasinoRiches1@reddit
This guy admins
Single-Virus4935@reddit
Hmm?
PrincipleExciting457@reddit
He's saying you're smart
anonymousITCoward@reddit
IIRC best practice says not to mix your AD/DC with any other roles... so "every Windows server" would be a bad idea.
You could (/should?) use DFS... for redundancy... but also you should do the sane thing and have working backups...
TightBed8201@reddit
DNS is fine on a DC. Everything else, not so much.
A lot of "xp" means nothing in general. You can have a guy working at one company for 30 years and making misconfigurations left and right. OP should learn from this.
I've heard too many times that bad configurations are the best way because that's how it's been done at their company since forever.
anonymousITCoward@reddit
DHCP is usually a part of the AD/DC role, and DNS I believe is a requirement so those I usually say are a given, just like the FSMO roles. These would be MS best practices, not just "xp"
yummers511@reddit
Yeah. AD/DC, DNS, and DHCP can all be on one individual machine or server without issue.
TightBed8201@reddit
I would put DHCP on another server, but that's me. Master-slave mode so I can patch without worry.
AtarukA@reddit
You could also have multiple file servers, each serving the files redundantly.
You do not want a SPOF: your NAS dies, that's it, you're stuck without your data. One server dies? The others still serve the files.
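A quick sketch of the trade-off in that argument (the failure probabilities are assumed purely for illustration): splitting shares across more servers makes a total outage less likely, but makes *some* outage more likely.

```python
# Illustrative only: assume each server is independently down with
# probability p at any given moment.
def p_any_down(p_server: float, n: int) -> float:
    """Probability that at least one of n independent servers is down."""
    return 1 - (1 - p_server) ** n

p = 0.01  # assumed per-server failure probability
print(f"single NAS: every share down with p = {p}")
print(f"5 servers:  some share down with p = {p_any_down(p, 5):.4f}")
print(f"5 servers:  ALL shares down with p = {p**5:.2e}")
```

Which side of that trade-off matters more depends on whether a partial outage is much cheaper for the business than a total one.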
discgman@reddit
NAS servers are built for redundancy. Having your file server on a DC is just dumb and against Microsoft's recommendations.
gzr4dr@reddit
Was going to say that a proper network storage solution (SAN/NAS) will have multiple controllers and high fault tolerance from a RAID and hot spare design. DFS can be used for redundancy from a file server compute standpoint but it's not necessary unless this is a 24/7 operation that can't handle any downtime. The fact that the current Sysadmin thinks placing shares on a DC is a good idea makes me discount every other idea this person has.
AtarukA@reddit
Oh come on it's not that bad, it's only a SMB1 share.
discgman@reddit
I agree with that. Plus you're introducing possible viruses on your DC by people unknowingly uploading them to the file share.
AtarukA@reddit
I guess I did leave out the part where your DCs should be DC-only, since that one was too obvious for me at this stage.
discgman@reddit
Well, DCs and DHCP servers. Let's throw in some WDS too.
AtarukA@reddit
Tell you what, let's throw in an Exchange server in there and WSUS.
xMrShadow@reddit
Also, if a NAS dies you can get a new one and slot in the HDDs from the old one. Synology DiskStation will import the config from the old NAS and then everything is up and running again. I imagine other NAS will work the same. And if it's configured like a RAID-5, the data is still good and accessible as long as 2 drives don't die at the same time.
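The single-drive tolerance that comment relies on comes from parity. A minimal sketch of the RAID-5 idea in Python (toy block sizes, not a storage implementation):

```python
def parity(blocks: list[bytes]) -> bytes:
    """XOR all blocks together byte-by-byte (this is RAID-5 parity)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data "disks"
p = parity(data)                    # parity block stored on a fourth "disk"

# One disk dies: rebuild its block from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]           # the lost block is fully recovered
```

Lose two blocks at once and the XOR no longer pins down either one, which is exactly the "as long as 2 drives don't die at the same time" caveat.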
MasterSea8231@reddit
It sounds like from the post that the Windows servers are VMs on Proxmox, so the point of failure isn't even Windows, it's whatever they're using as the storage backend. And if I had to guess, that's probably just the storage of the Proxmox server being split up. They don't strike me as a shop that is using HCI, so they already have a single point of failure if their hypervisor goes down.
telvox@reddit
There are some people who are too stupid to realize how dumb they are. This man retiring is one of them. You're never going to convince him how pants-on-head stupid his idea is. Just let him retire and fix it when he is gone.
JMCompGuy@reddit
At the end of the day, uptime/SLAs will help dictate an appropriate solution.
Moving shares around is a terrible idea, and each server should have a single function as much as possible.
I'd recommend you start understanding what your existing networking, storage and compute layer looks like. I saw you mention proxmox and I assume you have several proxmox hosts. You can then start to think on how do you ensure one component going down has a minimal interruption of service.
rra-netrix@reddit
This has to be rage bait.
Please tell me it's rage bait.
It's not rage bait, is it…?
Prestigious-Past6268@reddit
No.
bunnythistle@reddit
In a Windows environment, the easiest way to do this would be to have 2 file servers and use a DFS Namespace and DFS Replication.
A DFS Namespace would essentially create a share on your domain (\\yourdomain.tld\DFS\Share), which would map to \\fileserver1\Share and \\fileserver2\Share. Clients will connect to \\yourdomain.tld\DFS\Share, which will then redirect them to one of the two File Servers.
DFS Replication would ensure that those two shares are constantly synchronized.
DFS is a very simple and reliable technology that's built into Windows Server. From a user's perspective, everything is in one place, even though it's distributed across two (or more) file servers. It also makes replacing file servers easier - add a new server to the namespace, replicate to it, take the old server out, and as far as endpoints are concerned, the mappings never change.
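A toy model of what the namespace does (server names follow the example above; the real referral logic lives inside Windows, this just illustrates the indirection):

```python
# Map one logical DFS path to its replicated targets.
namespace = {
    r"\\yourdomain.tld\DFS\Share": [
        r"\\fileserver1\Share",
        r"\\fileserver2\Share",
    ],
}
# Pretend fileserver1 is down for patching.
online = {r"\\fileserver1\Share": False, r"\\fileserver2\Share": True}

def resolve(logical_path: str) -> str:
    """Return the first reachable referral target for a namespace path."""
    for target in namespace[logical_path]:
        if online[target]:
            return target
    raise OSError("no referral target is reachable")

print(resolve(r"\\yourdomain.tld\DFS\Share"))
```

Clients only ever see the logical path, which is why swapping servers in and out is invisible to endpoints.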
compmanio36@reddit
"simple and reliable"
Experience has taught me otherwise. In theory you're correct but in reality DFS is often hot garbage.
Top-Perspective-4069@reddit
DFS problems tend to be the fault of people who didn't know what they were doing. If it isn't set up by a rookie, it's excellent.
thewunderbar@reddit
I haven't run DFS in years just because I haven't needed to, but I ran a DFS network across 7 physical locations that basically touched both oceans for almost 10 years and never had a single issue.
harley247@reddit
Anything not set up correctly or used in an incorrect manner will be hot garbage
BldGlch@reddit
I have many rock solid dfs setups across clientele
Steve_78_OH@reddit
If it's set up correctly, it works great. It CAN still have issues even after being set up correctly, but that was pretty rare in my experience. I managed DFS-R / DFS-N environments at two different orgs, each with a couple dozen to several dozen nodes.
OregonTechHead@reddit
If you're having issues with DFS, it's likely a problematic configuration.
I've never seen an issue with DFSn, and DFSR issues are typically related to misconfigurations.
The big downside to DFSR is lack of file locking. So if someone edits a file on server1, and someone else edits the file on server2, someone is losing their changes.
But that challenge isn't unique to DFS.
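The hazard described above can be sketched in a few lines (a simplification: real DFSR uses version vectors rather than raw timestamps, and moves the losing copy to a ConflictAndDeleted folder rather than deleting it outright):

```python
from dataclasses import dataclass

@dataclass
class FileVersion:
    content: str
    mtime: float  # save timestamp

def replicate(a: FileVersion, b: FileVersion) -> FileVersion:
    """Last-writer-wins: the copy saved later overwrites the other."""
    return a if a.mtime >= b.mtime else b

# Two users edit the same file on different DFSR members at once.
on_server1 = FileVersion("Alice's edits", mtime=100.0)
on_server2 = FileVersion("Bob's edits", mtime=101.0)  # saved slightly later

winner = replicate(on_server1, on_server2)
print(winner.content)  # Alice's changes are silently lost
```

With no cross-server file locking, nothing stops both edits from starting, so one of them always loses at replication time.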
Angelworks42@reddit
I honestly have never seen an issue with DFS, and we use it alongside Windows servers and NetApp.
I worked at a place ages ago that had three sites connected via charter cable and it worked there as well just fine.
sceez@reddit
Our dfs is rock solid in 2026. Our only issues prior to 2020 were bandwidth related. We have 9 sites
Ok-Measurement-1575@reddit
This. DFS has been around for over 20 years, lol.
FLATLANDRIDER@reddit
We deployed DFS in our environment for this reason and ended up ripping it out for one simple reason:
DFS does not support indexing. Every time a user searches a network share, it ignores the search indexes on the share and enumerates every file in the share individually until it finds the result you're looking for.
As a result searching through DFS shares is agonizingly slow.
bingblangblong@reddit
I just let everything search index the file server on each client.
JerikkaDawn@reddit
This doesn't get mentioned enough. To be fair, Windows Search service says it's "not for enterprise scenarios", but it's still a BS limitation. DFS-N is almost 30 years old, there's been plenty of time for a DFS-N capable indexing and search service.
FLATLANDRIDER@reddit
If you look at the packets with wireshark, DFS just asks which share it should go to, then it sends the user to the share and the actual search is performed on the direct SMB share, not the DFS path. I don't know why they can't have it reference the indexes on the file server since it's using the shares directly anyways.
It's such a stupid thing. DFS would be amazing if it could just handle indexing properly.
SpecialistLayer@reddit
Came here to say this exactly. We have to have file indexing and I could not believe this was not a feature with DFS. We had to rip out the DFS because of this.
Walbabyesser@reddit
… https://www.it-admins.com/it-search/how-it-works/
xSchizogenie@reddit
Works good in theory. Practically you stumble across shit.
ITGuyThrow07@reddit
Yup, DFS is for distributing files across different sites, not for redundancy.
RyeonToast@reddit
Still better than turning your DC into a user file server
xSchizogenie@reddit
So, no file Server is better than a not working one. Okay.
andecase@reddit
Personally, no file server is better than a file server on a DC.
xSchizogenie@reddit
Forgot the /s, sorry. lol
Jarrus__Kanan_Jarrus@reddit
Who puts file shares on a DC (aside from a light netlogon script)?
Jawshee_pdx@reddit
Most "Big NAS" offerings have built in redundancy for stuff like the controllers. I think your coworker is just a gray beard who has not messed with modern Enterprise equipment in a while.
SaintEyegor@reddit
Grey beard wannabe
Practical-Alarm1763@reddit
Lol what the fuck, I stopped reading there.
spazmo_warrior@reddit
what, and I cannot stress this enough, DA FUQ?
jdptechnc@reddit
r/ShittySysadmin
CeC-P@reddit
Personally, I'd put in an external NVMe enclosure with a good brand of stable SSD over the fastest multi-lane PCIe connection I can get. Whatever fancy new USB, get that. 2.5 Gbps to a NAS is expensive, and has overhead, security patches, lost logins, and mechanical-drive slowness. It's a shame most lifetime corpo IT workers don't know anything about hardware, building computers, etc. and just buy what the salesman tells them to and pretend that means they're doing their job. But most VM hosts can't see external drives in their control systems. So that's annoying.
Timely_Finger627@reddit
This sounds just as terribly janky but in a completely different way. OP do not do this.
danieIsreddit@reddit
I am in a similar position as you. Just wait until he retires and implement it your way. There are multiple ways of doing it, and a single big NAS would be easier to manage to me, but there's probably some back story. I am waiting for my manager to retire so I can start implementing my own changes. There's no value in fighting back now if you just need to wait a year.
danieIsreddit@reddit
Also, you wouldn't need just one big NAS, you would need two for redundancy. Maybe there's a cost factor involved. But I still agree with what you're thinking.
iingot@reddit (OP)
That's great to know. Someone also mentioned DFS, so I'll definitely be looking into that.
RyeonToast@reddit
We operate a collection of file servers, with one of them running DFS. All paths we give to users go through the DFS.
When one of the file servers died recently, we moved the drives the shares were on to other servers, created the necessary shares on those servers, then updated DFS. It took a little time because there were a good number of shares to move, but the recovery time wasn't bad. Some people had an overly long break is all.
From the user's perspective, nothing changed; all their old paths still work. We're gonna move the shares back to the rebuilt server, and the users will never notice, because we are going to do it during one of our regular maintenance windows.
We also don't need to put user file shares on the DC. That's a puke-worthy plan.
merlyndavis@reddit
Just don't lose a drive, unless you've got some sort of RAID on those servers.
RyeonToast@reddit
We do, and have replaced a drive here and there.
MasterSea8231@reddit
You don't necessarily need 2; they have NASs with multiple controllers, so if one controller goes down it fails over to the other.
SysAdminDennyBob@reddit
This! Why fight about 2+2=5 with someone that's an idiot. Just chill and wait. Then once he is gone it's "Now presenting the iingot show, starring iingot!"
geegol@reddit
Sure, let's just have 1 server be the DC, FS, SCCM, syslog, and web server. That sounds like redundancy to me.
HackAttackx10@reddit
How many physical servers do you have and are they the same size?
woodyshag@reddit
The alternative is to build a Windows cluster for file servers. It's a bit more involved to set up, but it provides you redundancy and you can update each server without impacting users.
DehydratedButTired@reddit
Why not make them all DCs, Exchange servers and SQL servers while he's at it. Cluster print queues in all of em, let's just load em up.
LONG LIVE MICROSOFT WINDOWS SMALL BUSINESS SERVER!
musiquededemain@reddit
Clearly your coworker has never heard of high availability or disaster recovery.
hurkwurk@reddit
I disagree. He's using distributed computing, which is a version of both, and when cost is a factor, it's a cheap way of achieving both at limited scale.
Without knowing the full constraints of the organization, we cannot properly judge if it's a good or bad decision. I've seen near-zero-budget non-profit organizations do solutions like this because most of their kit was donated to them, and better to have some amount of redundancy than to pile everything on a single point of failure when they have no option to purchase any kind of second redundant system.
I personally built out a lot of DFS-based shares in those situations and just let 2 servers be redundant for each other, and built out small networks almost like RAID disk sets, with 3-4 servers acting as redundant for each other by being backups for each other's active shares.
You work with what you got. That said, it sounds like the OP's situation is one where this guy was just used to doing things one way and never stopped.
MasterSea8231@reddit
The post literally says these are Proxmox hosts, so the storage layer is abstracted from the VM unless they are passing the physical disks into the VM, which would seem backwards. I'm not sure what the difference is between a Windows VM with one big storage pool versus a bunch of Windows VMs with a bunch of little storage drives, if the storage is coming from the same Proxmox hosts.
musiquededemain@reddit
I understand you work with what you got. I've worked in places that easily could be described as "IT Wastelands" and have also worked in places where HA+DR is standard practice. But your last sentence...
"...this guy was just used to doing things one way and never stopped."
My point exactly.
MasterSea8231@reddit
Just purchase a NAS solution that has HA nodes. TrueNAS sells them, as does NetApp.
If one node fails, the other node takes over.
daven1985@reddit
Your sys admin has no idea what he is doing.
uptimefordays@reddit
You're not crazy, your instinct is sound. The issue is that the retiring admin is reasoning at the VM layer without thinking about what's underneath it.
The real question isn't "one NAS vs. many virtual drives", it's: what is Proxmox actually running on, and how is that storage managed? Right now you have file shares living inside VMs, but those VMs still live on physical disks somewhere. What's protecting those? If the answer is "not much," then the redundancy argument he's making at the VM level has a much bigger hole underneath it.
His concern about a NAS being a single point of failure is legitimate in principle, but it applies equally to whatever physical hosts those VMs are running on today. The difference is that a proper storage platform gives you tools to actually manage that risk (RAID, redundant controllers, hot spares, snapshot-based backups) in one place, rather than hoping nothing goes wrong across a bunch of independently managed drives.
For a small company on Proxmox, a reasonable path forward would look something like: a NAS or storage appliance with redundant controllers and proper RAID (TrueNAS or a Synology RS-series are common choices at this scale), presented to your hypervisor via iSCSI, with your VMs and file shares running on top of that. That's not exotic; it's just doing storage properly. In a larger or better-resourced environment you'd look at redundant SANs with dedicated FC or iSCSI fabric, but that's probably not the right fight for a small shop.
Longer term, consolidating file shares onto a dedicated file server with DFS (Distributed File System) is worth bringing up: it decouples your file shares from your domain controllers, which solves the reboot problem you already identified, and gives you namespace flexibility as things grow.
You're asking the right questions. The fact that you're thinking about this before you're fully in the seat is a good sign.
Ummgh23@reddit
Is.. this AI?
uptimefordays@reddit
Nope, just a guy worried OP and his mentor could be talking past each other because they're not considering the actual underlying storage.
Ummgh23@reddit
Sorry, just asking because you used "It's not x, it's y" a bunch of times haha
uptimefordays@reddit
Well, yes, I'm concerned about the senior admin's setup: having every server be a file server is not only bad but also wrong. OP's instincts are good; it seems strange. However, my concern is, "Why did the outgoing sysadmin do this?" I somewhat suspect that the designer was only familiar with virtualization and not storage, so splitting file storage across VMs rather than underlying disks seemed logical to them.
Ummgh23@reddit
I didn't say anything about what you said, just your use of phrasing that seems AI-generated
VonTreece@reddit
You're not crazy, their phrasing and writing patterns scream GPT, but hey, the info isn't bad so 🤷‍♂️
simonjakeevan@reddit
Where do you think LLM's got their writing style from?
VonTreece@reddit
Well duh, but it's also never been common for people to speak, especially on social media, with perfect grammar and writing structure. It stands out when you see it lol
uptimefordays@reddit
Sadly it's a professional forum and I'm not that anonymous.
uptimefordays@reddit
Training large language models on all available internet content has some unfortunate consequences. In the English-speaking world, there's a strong preference for Standard American English, whether it's right, wrong, or indifferent. I can't even use em dashes anymore without being accused of using or being an LLM.
VonTreece@reddit
Yeah, I agree. I wasn't accusing you. Sadly, in today's world, proper writing structure and grammar = AI 🤖
uptimefordays@reddit
Yeah it's a brave new world.
uptimefordays@reddit
Sadly LLMs were trained to talk in the same tone I do lol.
iingot@reddit (OP)
Thanks for the info! I'm definitely going to do some research on the things you mentioned here. We have three Proxmox nodes clustered and everything is set up with high availability there, so I assume the servers and file shares would be fine if a node goes down. So, I can see where he is coming from there and how the VM solution wouldn't be a SPOF.
uptimefordays@reddit
You're welcome, happy to help! It's still worth investigating how your hosts get storage. Is your cluster backed by Ceph for converged storage, local storage on each host with no shared storage, or somewhere in between?
In any of those scenarios you'll want a file server cluster, but how it's built will depend on what kind of storage you have.
FabulousVast350@reddit
terrible idea.
LesPaulAce@reddit
You have file shares on your DCs?
iingot@reddit (OP)
Yes, both DCs currently have file shares.
Simmery@reddit
I had to peel off tons of bullshit on our DCs from a predecessor. They are so much easier to manage now.
iingot@reddit (OP)
Both DCs were also running Server 2003 until last year, when we upgraded them to 2019. A lot of crap got moved over, so I will be happy to clean them up.
Walbabyesser@reddit
Even 2019 isn't that up to date.
slowclapcitizenkane@reddit
At least it's still supported.
GX_EN@reddit
OMG
INSPECTOR99@reddit
Set up a single 20 TB RAID 10 NAS. Get all that garbage off the DCs. DONE....
thewunderbar@reddit
The fact that this sysadmin was running Server 2003 in the year of our lord 2025, and for some reason only went to Server 2019, which has just 3 years of support left, tells me quite a bit about your situation.
None of it is good.
Frothyleet@reddit
I mean, technically, everyone does.
OpacusVenatori@reddit
What a cluster-fuck of a sysadmin. D00d probably needs to revisit the org's BCDR plan as a whole rather than being so tunnel-focused on file server "redundancy".
Danowolf@reddit
Back up all data and "accidentally" nuke all SBS machines. It's like putting Skynet down, only smarter.
adestrella1027@reddit
If this was the solution for fileshares, just know this is probably just the tip of the iceberg.
iingot@reddit (OP)
Yes, there are a lot of problems everywhere and I'm doing my best to discern what is correct and what isn't. For example, everyone uses Office, but they bought retail copies from Amazon and each one is tied to a throwaway Microsoft account. We have a spreadsheet with the login for each Microsoft account so we can activate Office for new employees. However, I think Microsoft 365 would be easier to use.
justice_works@reddit
My ex-company used to do that until the ex-manager got sacked and I replaced him and ripped out everything.
Someone commented this is just the tip of the iceberg. Yeah man, it's a whole fking rabbit hole of shit I had to clean up, and it goes deep.
Here's a pic of the "server rack".
SpecialistLayer@reddit
This screams of a very old sysadmin that has never attempted to stay up to date with modern times. I would go through everything, get it properly documented and start looking for ways to properly optimize the architecture.
throwpoo@reddit
I turned down a director role because I found out the guy that's retiring is what you described. Also, the guys he trained up were unbelievably incapable. Unfortunately this is fairly common in small businesses.
WWGHIAFTC@reddit
And they are somehow 'proud' of it because they found a 'solution' and it's complicated, so it must be good, right??
Walbabyesser@reddit
😂
AnotherCableGuy@reddit
https://i.redd.it/3hvgxiawctwg1.gif
purplemonkeymad@reddit
Found it hard to convince people to switch Office from capex to opex. But after they went to 365 licenses they were happy with it. Predictable, no sudden costs. Especially as we can give people access to the billing centre so they can see costs and figure out if they are overpaying. Some now manage a monthly version of some subs so they can bridge and minimise costs for overlap of departing staff.
Explaining licensing sucks, but it's better than juggling licensing jank.
Ummgh23@reddit
Jesus christ⌠give us some more examples!
thewunderbar@reddit
That wasn't that uncommon in small shops. But it definitely doesn't scale past a few employees.
And you're going to have to fight the fight of "but we don't want to pay for it every month"
40513786934@reddit
gross
merlyndavis@reddit
(Disclaimer: I work for an enterprise storage vendor)
Centralize your files for god's sake! A dead mobo on a modern NAS means you replace the mobo, maybe reassign drives, and are up and running in hours.
If you go enterprise level, and if a mobo fails, another node takes over and no end user ever knows anything happened!!
Backing up all those little storage pools has got to be insane, and trying to track down which one has a specific file sounds like a nightmare.
Your sysadmin needs to grow the f up and realize it's the 21st century and he should actually use stuff for what it's designed for.
Storing user data on a DC… WTF!
sdrawkcabineter@reddit
I found his Novell certification at Goodwill.
nyckidryan@reddit
đ¤ đ¤ đ
Fit_Prize_3245@reddit
Man, that sysadmin has a serious problem.
The best option in your case is a NAS. With adequate RAID and a backup of the encryption keys (if you choose to use disk encryption), your data will be safe. Want a quicker restore, replacing a damaged NAS with a new one (of the same brand)? Keep a backup of the configuration. Want no downtime in case of NAS damage? Look for a model with a dual standby setup, so you can hit one with a hammer and the other will take over. It all depends on your budget.
anonpf@reddit
Just because you're new doesn't mean you don't have a voice. Speak up, use Microsoft's documented recommendations, and give your reasoning why you feel the way you do. Personally, if I'm slated to take over, I would want a significant amount of say in what I will eventually be supporting.
Inevitable-Star2362@reddit
An HA NAS?
texcleveland@reddit
umm just have a backup NAS synced with the primary and failover the IP if primary is down.
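Off-the-shelf HA pairs (e.g. Synology High Availability) handle the failover in the GUI, but on a self-built Linux NAS pair the floating-IP piece is commonly done with keepalived/VRRP. A rough sketch, with placeholder interface names and addresses:

```
# /etc/keepalived/keepalived.conf on the primary NAS
# (the secondary uses state BACKUP and a lower priority)
vrrp_instance NAS_VIP {
    state MASTER
    interface eth0            # replace with the NAS's actual NIC
    virtual_router_id 51
    priority 150              # secondary uses e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.50/24       # the share IP clients actually mount
    }
}
```

This only moves the IP; the shares themselves still need to be kept in sync (rsync, Syncthing, vendor replication) for the failover to be useful.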
piperfect@reddit
Are the servers Proxmox servers running Ceph as a hyper-converged cluster, with the domain controllers, etc. running as guest OSs?
xaeriee@reddit
Ew
realmozzarella22@reddit
Primary and secondary NAS. Many will have failover capabilities.
malikto44@reddit
That is why you get a NAS with more than one controller. If the NAS's motherboard dies, it will just use the other. Alternatively, some NAS vendors can sell two identical models in HA mode. Yes, it means twice the drives, RAM, etc... but it gives you failover capacity.
I'd look at something like a Promise NAS for the low end... it doesn't have much in the way of features, but it can do multipath iSCSI with both its controllers well enough, and they have 24/7 enterprise support (which is the critical thing.) From there, size it at least two times what you think you will need (I do a factor of 3-4x because once a NAS is 50% full, it is time to start looking to expand), and you need a second NAS for Veeam... but that one can be a single controller, provided it has tape or cloud as another destination.
IWantsToBelieve@reddit
If someone is installing roles in tier 0 they do not know what they are doing. Even an LLM would be better to trust than this person. You're right to challenge their proposed architecture.
Brandhor@reddit
remember to make all the servers domain controllers as well for redundancy
enolja@reddit
I threw up in my mouth reading this.
JeroenPot@reddit
Why not use SharePoint?
mistercartmenes@reddit
Jesus…
gandalfthegru@reddit
It's good he's retiring. Hopefully fully and completely, and he will not impose his ideas on other companies.
Just nod your head and bide your time. He'll be gone and then the real work of untwisting years of bad decisions starts.
drinianrose@reddit
Ha! Back in the early 2000's I took over IT at a company where the previous sysadmin had decided to make every server a domain controller - "just in case".
What's worse is that there were a bunch of laptops that they would treat as servers that went to trade shows that were all also domain controllers (which of course would occasionally "get lost" and disappear).
Everything was a DC, the file servers, SQL servers, IIS servers, etc.
This same guy never once deleted an inactive/terminated account, there was no password requirement (e.g., blank passwords were fine), and the domain admin password was hardcoded in a batch-file login script that mapped the network drives.
I used to joke that the prior sysadmin should have been held criminally liable for all the damage he did.
cwolf-softball@reddit
I am confident that this person you're talking about has done some really dumb things with backups and security.
Puzzled-Formal-7957@reddit
So much nope in this. Only admin shares on DCs. Ever. And even those only if you don't have a choice but to put something on there locally.
Either set up 2 NASes with replication or get a SAN with dual controllers. The way he is doing this is a recipe for disaster, and it's a matter of time before it eats its own tail.
the_doughboy@reddit
Your DCs are already file servers; the SYSVOL DFS volume is on them. But the other stuff sounds like bad decisions.
Most storage appliances now offer multiple controllers and multiple IO paths in a 2U form factor. Connect those to the Proxmox hosts, present virtual storage to the guests, and have 1 or 2 file servers with DFS. I would NOT recommend letting Windows VMs connect to iSCSI directly; a dedicated hardware controller is a much better option.
Walbabyesser@reddit
Didn't users need write access to a file server? Funny idea how this would play out with SYSVOL
Nomaddo@reddit
I was f-ing around one day and managed to figure out how to execute scripting languages using the Group Policy editor. If you gave end users RW to the SysVol someone could just drop a malicious gpo file and next time someone opens the editor. Boom. Someone's having a bad day.
bluelobsterai@reddit
I think the word your boss is looking for here is hyper-converged. And in small environments, it makes a lot of sense. Like a 7 node Proxmox cluster. You would have three copies of the data, etc., etc.
I think your boss has a lot to learn
scheumchkin@reddit
This is a no from me, dawg. A DC is only a DC. It's never storage, it's never anything else.
Splitting it up across different servers is fine if they were just file servers. For backups, look up the 3-2-1 rule; how that works in your environment could be different.
We use Azure, so our data is backed up in a recovery vault. We also have file servers and are actively trying to get off them for SharePoint and storage accounts. We do use Veeam as well, but yeah, your setup sounds bad. A NAS isn't a bad idea in any way, but depending on size or usage you may need something more enterprise grade, which is a better solution than what he suggested.
Normal_Choice9322@reddit
Your admin is a fucking moron
Run for the hills
kliman@reddit
How many proxmox servers are there?
Would be hilarious if all these windows VMs were being hosted on a single server. For redundancy.
iingot@reddit (OP)
There are three Proxmox nodes clustered together.
kliman@reddit
So how is the shared storage handled if not "one big NAS" (or SAN)?
Just trying to imagine the logic behind what he's got going on.
iingot@reddit (OP)
Most of the VMs live on a RAIDZ1 NAS with ~20TB storage. I saw that he installed a few VMs on the local-lvm drives too.
kliman@reddit
Sounds to me like that redundancy idea is… not all that redundant.
You're not really missing anything. Redundant storage is a great idea, but this is not how you should handle it.
iingot@reddit (OP)
Do you have a better idea for the VMs? I've seen Ceph suggested, so I'm going to look into that. What other options are there though?
kliman@reddit
Ceph isn't a great idea with only 3 nodes. It's really designed for a lot more than that.
Assuming the storage currently is something with 2 controllers, just "good backups". It's pretty unlikely to fail, and if it does… the current VM situation isn't going to help anything. It's sort of "complexity for no real reason" to me.
Cool-Calligrapher-96@reddit
Get a NAS; it will have redundancy, that is its purpose. Ideally a replicated NAS such as Dell's PowerScale, which allows snapshots for previous versions and backup over CIFS.
Spraggle@reddit
We just run SharePoint 365; the files are in teams, so already highly available; add something like Barracuda for backup and job is done.
danieIsreddit@reddit
SharePoint 365 is not a file server. At my last company we migrated our file server into SharePoint 365. It was a hot mess. You run into issues if there's a large folder structure or long file name. Don't migrate file servers to SharePoint 365.
Spraggle@reddit
You absolutely can deal with this. Long file paths are already a problem on Windows file servers and require you to remap to get around the limitation, and let's face it, it's bad file organisation anyway.
Teams lets you have many channels to separate files into groups of resources.
We migrated our file server (300 staff sized) into Teams, no problems other than users not doing a good enough job of deleting things they didn't need.
SweatinSteve@reddit
We have a DFS namespace and 3 file clusters
19610taw3@reddit
When you say management is against anything in the cloud... please don't say you have Exchange...
iingot@reddit (OP)
No, we have an open source selfhosted mail server.
yojimboLTD@reddit
Yikes 😬
Zer0CoolXI@reddit
You are right.
If NAS failure is a concern you would use redundant/clustered systems to provide robust network shares.
There are also plenty of enterprise storage vendors out there offering storage systems for all sorts of needs. Netapp is just one example.
I would rather have no network shares than split them across DCs lol.
Chances are you will never convince the guy retiring to be/do better or change with the times. Wait until they retire, draft up a plan, present it to management and if they approve implement a better solution
Cyberprog@reddit
Two fileservers with an iscsi disk shared between them and high availability. Simples.
chesser45@reddit
Is this rage bait OP? Pls say yes.
St0nywall@reddit
After being trained by the sysadmin, make a list of everything you're being taught and come back here. We'll help you cross off the bad things and point out the good ones, effectively retraining you.
My price for this is pizza and beer.
Walbabyesser@reddit
Maybe we could dump the list and save time?
St0nywall@reddit
This is a good efficiency. Gold Star for you my friend. ⭐
Walbabyesser@reddit
Yes!
Connect-Comb-8545@reddit
If he wants business continuity and disaster recovery he's doing it all wrong.
Get a service and solution such as Datto BCDR to sync local data and to do cloud syncs. If file server dies, spin up local Datto. If building goes on fire or someone chucks a grenade in the server room then spin up all servers in Datto cloud.
If ransomware happens, recover from local Datto. If someone deleted something a year ago and just realized it's missing, restore from cloud Datto.
The current solution is not best practice and is messy imo.
Can dm for more info and free consultation.
bingblangblong@reddit
Why iscsi? Dunno how big your setup is but I just have a windows VM dedicated as a file server, local storage.
jimicus@reddit
The solution if you really want redundancy is you get a NAS that has redundant controllers - so, not some cheapie Synology-type device. There's a few on the market.
Adept-Pomegranate-46@reddit
Bad idea. You could end up with a SQL server that's really busy while also running a backup at the same time... Don't listen to them. Servers should be sized for the load of their app(s). Throwing another app (like DFS) at them is crazy. 'Nuff said.
no_need_to_breathe@reddit
Terrible practice - especially considering you're literally already on Proxmox, which has Ceph built in. If you're running 3 or more PVE hosts on decent-speed networks, it's a no-brainer to design and use a Ceph cluster for this. It provides not only file-server but OS-level redundancy. A NAS is fine as long as there's replication of some sort. Don't forget backups either - replication is not a replacement for 3-2-1.
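For reference, on a small PVE cluster the Ceph setup is driven by Proxmox's `pveceph` wrapper. Roughly like this - exact flags vary by PVE version, and the network and device names here are placeholders:

```
# On each of the 3 PVE nodes (wants a fast, ideally dedicated, Ceph network)
pveceph install                          # install the Ceph packages
pveceph init --network 10.10.10.0/24     # once, on the first node
pveceph mon create                       # a monitor on each node
pveceph osd create /dev/sdb              # one OSD per data disk
pveceph pool create vmstore --add_storages  # RBD pool, auto-added as VM storage
```

Note the caveat raised elsewhere in the thread: three nodes is the bare minimum for Ceph, and it really shines with more nodes and OSDs.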
Fritzo2162@reddit
Bad design. You're supposed to have:
Dedicated DC
Redundant DC
Dedicated File server
Dedicated Apps
Preferably all VMs.
iSCSI drive performance isn't that great. We've tried it and always went back to dedicated servers.
Phreakiture@reddit
Any significant NAS solution is a cluster of at least two nodes. Some (Isilon) actually require three.
You can and should also replicate them to other NASes.
thewunderbar@reddit
I'm actually interested in the setup? like, are all of these fileshares identical between all of the servers? if so, how are they kept in sync?
Or is it "accounting files are on primary domain controller" and "HR files are on the secondary domain controller" type of situation?
Do you have actual backups of said data. just spreading the files across multiple servers is not a backup. What happens if you get ransomwared? or the building burns down (assuming you only have one location).
a NAS is great, DFS is great. They are not backups.
iingot@reddit (OP)
> Or is it "accounting files are on primary domain controller" and "HR files are on the secondary domain controller" type of situation?
It's like that. A batch script checks for changes every night and backs them up.
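For anyone picturing it: a nightly "check for changes and copy" batch job is typically a robocopy mirror along these lines (paths are hypothetical). Worth noting that /MIR replicates deletions too, so this is replication, not a versioned backup:

```
@echo off
rem Nightly mirror of a share to the backup target (hypothetical paths).
rem /MIR copies changes AND deletes extras, so anything deleted or
rem encrypted by ransomware gets mirrored on the next run.
robocopy "D:\Shares\Accounting" "\\BACKUPPC\Backups\Accounting" /MIR /Z /R:2 /W:5 /LOG:C:\Logs\acct_backup.log
```

Which is exactly why the offline rotated drives mentioned below are doing the real backup work here.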
thewunderbar@reddit
where do they get backed up to?
That just all sounds like a manageability nightmare.
iingot@reddit (OP)
There's a Windows 11 PC that has a bunch of connected HDDs. Once a month, we remove a drive and take it offsite.
ctbjdm@reddit
of course there is...
thewunderbar@reddit
Run. Run away, Simba. Run. Run away, and never return.
BrentNewland@reddit
We have one dedicated file server in vSphere. It only does our file shares and nothing else. The VM and all the files in the file server get backed up to a Datto appliance, which replicates to the cloud overnight.
Hot-Meat-11@reddit
A real SAN/NAS is going to have redundant controllers. This is a "small shop" perspective from someone who doesn't have any enterprise exposure. That's not saying that you have to go to high five or six-figure enterprise level gear to get these features. They're within the "if you can't afford it, you probably don't need it" price range.
waxwayne@reddit
Just have two NAS servers
idontknowlikeapuma@reddit
Dude doesn't understand software RAID 5, or at least 10. Then it doesn't matter if the motherboard takes a shit.
llDemonll@reddit
I'd encourage you to look for a new job where you'll have some sort of senior who can help train and mentor you. At the current place you're going to be picking up a spaghetti pile of garbage and learning very bad practices.
Xibby@reddit
Guy doesn't know what he's doing or doesn't have the budget. A good NAS or SAN will have redundancy. It's one chassis, but there are two full controllers in there with redundant power supplies. Plus if there are multiple disk trays there are redundant connections from the controller to the disk trays. Obviously you'll have to spec your chosen NAS/SAN to have that capability, and have redundant switches if you connect via iSCSI.
A good enterprise NAS will also most likely have a good enterprise SMB stack so you can host file shares directly on the NAS without the need to export a volume to Windows, setup Windows shares, DFS paths, etc. DFS Namespaces are still a good idea for maintaining consistent UNC pathing, if for some reason down the road you change to a different NAS you can just update the target folders in your DFS Namespace.
CaptainZhon@reddit
DFS is hot garbage - it's never worked right. Just like Windows printing is garbage too. Real NASes usually have two or more heads or controllers, so when a "motherboard dies" the other node takes over. Get a NAS, sleep at night.
SpecialistLayer@reddit
I would never put any file server or any unnecessary junk on a DC. I'm more in favor of the NAS route, using a Synology NAS or similar. If you need absolute redundancy, Synology has an app for literally doing that, where all files are synced between two units. You can go even bigger and use three units for full offsite backup with it. The Synology units I manage only ever do updates and reboot after hours, so downtime has never been an issue in almost a decade with them.
TheCookieMonsterYum@reddit
With QNAP you can put the drives in another QNAP and it picks up the RAID.
Maybe the same with Synology. I only know that because my home QNAP broke; I bought a newer version thinking I might have lost the data, but it worked. Not had to test it with a QNAP server, thankfully.
Recommend RAID 10 if speed is required.
If you're thinking of presenting it while he's there I wouldn't. Just doesn't look good on you.
What's the budget though. Recommend HA
shiranugahotoke@reddit
Uh, failover cluster file share?
CaptainSlappy357@reddit
Dude is nuts. Just nod and grin, don't trust a damn thing he's configured, and start a list of all the crazy that you're going to change. Don't argue, don't ask if something else would be better, just act like whatever bullshit he says is the most brilliant thing you've ever heard. Then lock the doors as soon as he's gone.
thewunderbar@reddit
This is not the dumbest thing i've ever read, but it is probably in the top 10%.
Walbabyesser@reddit
W-T-F? Is he really IT, or just some dude they took off the street?
S1im5hadee@reddit
Sounds like the old sysadmin knows how to Windows
LokeCanada@reddit
A server should only serve one function. DC does DC, file server does files, database server does databases, etc...
A sysadmin who is short of equipment, money or time will load multiple services onto one piece of hardware.
The biggest issue is you now have one single point of failure for everything and every service will impact the other.
You have a file share problem, so you need to reboot the domain controller. The database is having performance issues, so nobody can log in till it's fixed. You will learn fast how bad a problem this is.
RAVEN_STORMCROW@reddit
This is crazy, crazy. Get OneDrive...
No-Ant-9159@reddit
You say, "I see, thanks". Get the job and then do it the right way.
RobieWan@reddit
Your sysadmin and management are idiots.
Start looking for another position. You don't want to be part of that mess.
GreenWoodDragon@reddit
I went through a similar stage when I was a newly minted sysadmin. I even looked at creating a distributed file storage system across the office network.
HeligKo@reddit
He really doesn't understand redundancy. Unless there is mirroring going on, you don't have redundancy, you have just mitigated the risk of losing all the files from a single failure. It might be out of your price range, but they make SAN/NAS systems with fully redundant backplanes and power to avoid the specific fear he has. As others mention the right solution is going to involve two storage systems that replicate in some manner to each other. To figure out a proper solution the stakeholders need to be brought in and a continuity of operations plan needs to be made so you can build out the solution to meet those needs.
mvbighead@reddit
Generally speaking, no not every server should be a file server. Especially not DCs.
However, I can see some practicality around having file servers central to a given application being separate from the main file shares. Reason being that you may encounter file locks that for whatever reason cannot be released without reboot. So rather than losing all shares, you simply tie some application related things to their own file server that can be rebooted as needed should something happen in that manner.
As for the rest, DFSN and DFSR are both highly useful and should be configured for all shares if possible. More specifically DFSN. DFSR can be used for critical shares IF the shares can be backed by different storage solutions.
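For a concrete picture, a domain-based namespace with a replicated folder is set up with the DFSN PowerShell cmdlets, roughly like this. Domain, server, and share names are placeholders, and DFSR replication between the targets is configured separately:

```powershell
# Create a domain-based DFS namespace root (requires the FS-DFS-Namespace
# role and the DFSN PowerShell module on the file server)
New-DfsnRoot -Path "\\corp.example.com\Files" -TargetPath "\\FS1\Files" -Type DomainV2

# Add a folder with two targets; clients hit the stable UNC path and DFS
# refers them to whichever target is available
New-DfsnFolder       -Path "\\corp.example.com\Files\Accounting" -TargetPath "\\FS1\Accounting"
New-DfsnFolderTarget -Path "\\corp.example.com\Files\Accounting" -TargetPath "\\FS2\Accounting"
```

The payoff is the stable `\\corp.example.com\Files\...` path: storage can later move to a NAS or new servers without touching drive mappings.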
twotonsosalt@reddit
Just for clarification here, NAS is file and object storage, SAN is block. Yes you can have both on the same hardware, but you still differentiate the access methods.
TheNewBBS@reddit
Domain controllers only do domain controller stuff. No additional file shares, no printer stuff, no DHCP, etc. Every service you install increases the attack surface and will likely eventually result in you having to delegate local server access to manage those resources.
I don't know why you'd need an attached server, just set up SMB shares on the NAS in a structure that reflects business needs. Unless performance demands something else, set up the drives with a version of parity to cover drive redundancy, then have a second NAS that syncs from the first to cover controller redundancy. Ideally, that second NAS will be in a different site that still has appropriate physical security.
DFS exists, comes with Windows, and has worked fairly well in places I've worked. But it does take some expertise to set up and maintain, whereas my experience with Synology suggests it's very easy to at least set up. And I personally prefer dedicated hardware for stuff like this.
Only you know whether pushing for the change will be a net gain or loss: it'll probably piss off the old sysadmin, but it'll also show management you're finding better solutions and set the stage for when you take over and put something better in place.
KindPresentation5686@reddit
This is where you tell your leadership why it's a horrible idea.
headcrap@reddit
Do a NAS, no need to use block storage and iSCSI at all. Leverage AD on it.
The rest depends on how much redundancy the business will budget for... and what their appetite is for the downtime incurred without varying levels of that redundancy.
Glad the person and their old ideas are retiring... def sounds like they did it their way and old-school for way too long there. A bunch of 2TB virtuals sounds like the good old MBR partition days... ffs.
Laxarus@reddit
There is this thing called high availability. Useful stuff in case one NAS goes down.
Surfin_Cow@reddit
Are you guys using DFS by chance?
iingot@reddit (OP)
No, not that I'm aware of.
Surfin_Cow@reddit
It would explain why there's so many drives attached everywhere.
I would make sure you know the architecture of what the current person is doing. If not for a specific reason other than "I said so", then maybe bring up the benefits of your proposed solution.
Refurbished_Keyboard@reddit
Uhhh, if he wants redundancy then set up 2 Windows file servers running DFS... not running on the DCs.
btukin@reddit
Depends on what the files are. If flat files and no database, then DFS across multiple targets for redundancy. If you have SQL or any other database, then look at HA SAN.
halodude423@reddit
DCs should not be fileservers for sure either way.
jsand2@reddit
Oof. With this sysadmin retiring, you might just pay attention to what's needed for now. When he is gone, fix your storage issues. Build redundancy into it.
No, that is not how you do things.
We use a SAN here.
Arudinne@reddit
Your sysadmin should be fired.
squishfouce@reddit
Get a redundant NAS pair. Synology supports this out of the box.
MonkeyMan18975@reddit
Did homedude RAID his servers?