Can you tell me why I should move away from "golden master" imaging?
Posted by georgecm12@reddit | sysadmin | View on Reddit | 111 comments
I work as a desktop systems administrator in higher education. I admit that we are behind the technology curve on some things, and one of those things is that we still use "golden master" imaging. The reason? It just works for us.
We're a school that has used Broadcom's Ghost Solution Suite since it was Altiris Deployment Solution (and before that LabExpert). With it and PXE booting, we can get a machine wiped and imaged up with our "Faculty/Staff" image in about 15 minutes, with all of the productivity and pedagogical applications installed and configured, and ready for use by the end user.
Since we lease all of our machines, 1/3 of our fleet comes up for replacement every year, and we generally have 1 month to turn those laptops around and get the old machines back to the lessor's ITAD company. With golden master imaging, I set up a deployment lab and can get 30+ laptops imaged in under an hour via multicast. (I'm really only limited by power and physical space.)
I have some experience with the paradigm Autopilot offers; I'm using it with my Macs because I don't have a choice (Apple eliminated the ability to do "golden master" imaging on macOS long ago). From that, and from looking into Autopilot, I'm not seeing any advantages Autopilot would offer me, other than just doing what is common these days.
Can someone educate me on why I should be looking into moving away from GM imaging and likely to Autopilot?
QuyetCompass@reddit
honestly the bigger question isn’t why move away from vmware, it’s what problem you’re actually trying to solve
most of the conversations i’m seeing right now are being driven by pricing shock, which is fair, but jumping platforms without a clear end state usually just trades one problem for another
we’ve seen teams rush into hyper-v, proxmox, even public cloud just to get off vmware, and then spend the next 12–18 months dealing with performance gaps, tooling differences, or skillset issues
vmware was expensive, but it also solved a lot of operational problems people don’t fully appreciate until it’s gone
if the goal is cost reduction, resiliency, or flexibility, there are usually better ways to approach it than a straight platform swap
are you trying to leave because of cost, or is there a bigger architectural shift happening?
georgecm12@reddit (OP)
Did you reply to the wrong post with this? My post isn't at all about VMware.
Fatel28@reddit
There's a midpoint here. Golden images get a little clunky when you have many different models: drivers become a bit annoying, and updating the images is also a bit of a pita.
We use sccm for OS deployments, there's a WIM base and the rest is done in the task sequence. It takes longer to image (about an hour) but the sequences are modular so we don't need a different golden image per configuration. We can add drop downs to the task sequence start wizard for choosing software/etc.
Autopilot is neat, but I don't personally find it more convenient than SCCM task sequencing for a lot of our customers' environments.
georgecm12@reddit (OP)
We have a pair of hardware agnostic images, one for faculty/staff machines and one for lab/classroom machines.
We create separate "jobs" in GSS for each hardware type, but each job just lays down the same hardware agnostic image. The difference being that each of the jobs, once imaging is complete, triggers a unique series of post-OOBE tasks to install the drivers for that platform from the network (among a few other tasks that cannot be integrated into the image.)
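For anyone curious what a post-OOBE driver task like that can look like, it can be as simple as pointing pnputil at a per-model driver share. This is just a sketch; the share path and the per-model folder layout are placeholders, not OP's actual setup:

```bat
:: Hypothetical post-OOBE task: pull the driver pack for this platform
:: from a network share and install everything in it.
:: \\deploy\drivers\%MODEL% is a made-up path; key it however you key
:: jobs to hardware types.
net use Z: \\deploy\drivers
pnputil /add-driver "Z:\%MODEL%\*.inf" /subdirs /install
net use Z: /delete
```

Because the image itself stays hardware-agnostic, only this small per-platform step differs between jobs.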
Nachtwolfe@reddit
The power of SCCM without the price tag or headache: https://fogproject.org
FowlSec@reddit
Obligatory: I am not a sysadmin, but SCCM is something we see regularly, for obvious reasons. Please ensure that any configured network access accounts follow the principle of least privilege.
Network access accounts and their passwords are stored as DPAPI encrypted blobs in the policies. Any attacker who compromises an SCCM managed device and has escalated privileges to system can decrypt these secrets with the system masterkey. Furthermore, any attacker who has compromised a computer account on the network can also enrol the device to the SCCM site manager server, in order to retrieve those credentials.
A common configuration that can be exploited is to allow the NAA to request client authentication certificates on behalf of users and computers from ADCS, which would lead to a full domain takeover in the instance of an attacker escalating privileges to the domain computers group.
This is not a "do not use SCCM it's a security risk", you should however be aware that SCCM configurations are vulnerable to abuse and how, in order to ensure you don't inadvertently make your network vulnerable to attack. SCCM is an amazing tool.
johnjohnjohn87@reddit
I don’t think network access accounts are required anymore
PowerShellGenius@reddit
I'm a sysadmin, and I 100% agree, but it's not just the Network Access Account. The Domain Join account used in the task sequences is just as exposed, if not more.
With any endpoint management system, you need to understand what is being done client-side vs. server-side. Don't use over-privileged credentials for client-side tasks. The NAA and Domain Join account credentials are literally handed to imaging clients, so they can use the NAA to access SMB shares during imaging, and the Domain Join account to join the domain. It is critical to assign the least privileges necessary to join computers to the domain, and not a Domain Admin.
SCCM is a great tool, but because it's so common, and there are so many "just get to the finish line without knowing how it works" tutorials out there, anyone thinks they can run it... so it is often deployed in a grossly insecure manner.
When I was hired at my current job, a Domain Admin account was being used for domain join in a task sequence deployed to Unknown Computers (and the servers had no PXE password). Someone thought I was being paranoid to care about that. So I started imaging a blank laptop, hard-rebooted it just before the domain-join step (when those creds exist in unattend.xml), booted it to a live flash drive, opened up the unattend.xml file, and showed my boss the Domain Admin password, explaining that I had just retrieved it with no special "hacking" tools, in a way that anyone with access to an ethernet port in our schools could have...
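For context, the domain-join credentials in question live in the unattend.xml that Windows Setup stages on disk during deployment, roughly like this. This is a trimmed sketch of the specialize-pass component; the domain and account names are placeholders:

```xml
<!-- Simplified fragment of unattend.xml. The password sits here in
     cleartext (or merely base64-encoded), readable by anyone who can
     get at the disk mid-deployment, which is why the join account must
     have only "join computers to the domain" rights. -->
<component name="Microsoft-Windows-UnattendedJoin"
           processorArchitecture="amd64"
           publicKeyToken="31bf3856ad364e35" language="neutral">
  <Identification>
    <Credentials>
      <Domain>EXAMPLE</Domain>
      <Username>svc-domainjoin</Username>
      <Password>NotSoSecret123</Password>
    </Credentials>
    <JoinDomain>example.edu</JoinDomain>
  </Identification>
</component>
```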
As for the Network Access Account - you don't even need that if you just let the client download the image and then extract it. The download is over HTTPS.
And yes, ADCS is another misunderstood thing. Managed properly it's wonderful for your security, it lets you have PKI, proper RADIUS and an unphishable VPN on a budget you already have, when third party PKI solutions cost an arm and a leg. Managed badly, it's a gaping hole because the certs it issues are a credential that can authenticate to AD and it's a tier 0 asset no less than a DC.
TwinkleTwinkie@reddit
This, JIT deployments are significantly easier to maintain and scale well.
Autopilot is a great solution for most use cases, and it's where Microsoft is putting most of its modern effort, at least in regard to physical builds. However, it isn't a one-size-fits-all silver bullet.
glity@reddit
This is cool; can you point me to a guide? This seems to resist the supply-chain attacks that would be difficult to manage if drivers get device-specific again. Would you be able to deploy drivers correctly in this type of stack, like in the old pre-universal-Windows-Update driver days?
exedore6@reddit
It's been a while since I've done it, but part of MS's deployment toolkit lets you essentially mount the installer image and inject drivers into it, so the device gets detected.
schrombomb_@reddit
This can be done without MDT. DISM itself can handle mounting and injecting drivers into a WIM and that's just a part of Windows.
It can be done on Linux even. Using wimlib to mount wim images, and then DISM in wine to handle driver injections.
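The DISM route described above looks roughly like this on Windows; the image and driver paths are placeholders:

```bat
:: Mount the install image, inject a folder tree of drivers, then
:: commit the change and unmount. Index 1 selects which edition
:: inside the WIM to service.
dism /Mount-Wim /WimFile:C:\images\install.wim /Index:1 /MountDir:C:\mount
dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
dism /Unmount-Wim /MountDir:C:\mount /Commit
```

`/Recurse` walks the whole driver folder, so one command handles an entire extracted vendor driver pack.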
NeighborGeek@reddit
MDT is dead unfortunately. It was pulled rather suddenly earlier this year when security issues came up and it wasn’t being developed anymore anyway.
exedore6@reddit
I wish I could say that surprised me.
Zenkin@reddit
Just chiming in to agree. Drivers being auto-injected, not having to re-create the golden image for every OS, and being able to update the versions of programs being installed at about any time are very nice.
Hashrunr@reddit
Autopilot's strength is in zero touch remote deployments. I build Autopilot profiles under the assumption the device has a stock Microsoft base image and is being sent anywhere in the world. The user inputs their credentials and 2-4hrs later the device is ready for use.
If your devices are local and you're imaging them before they're handed off to the user, having a local imaging solution makes sense.
UKYPayne@reddit
We moved to actually do a setup using PDQ. Join domain with standard OOB setup, then the machine is found as “new” in AD and the automations kick off of installing the appropriate applications for the specific OU. This same process also keeps existing computers on the latest versions of software unattended.
gumbrilla@reddit
Don't think I can chief. I use Autopilot and I'm sure as hell not building out 30 machines in an hour.
I can have someone build out their machine in Spain, the US, and UK, and not have to get out of bed.
You probably build more machines in a day than I have in the business.
Trigonal_Planar@reddit
I am moving a pretty large multinational business to Autopilot (not my choice, just implementing) and I definitely agree that if you want 30 machines an hour Autopilot is not it. When you’re supporting a wide variety of models and build customizations and whatnot it starts to make more sense, but it sounds like OP doesn’t really need that and I would not particularly recommend adopting Autopilot in his situation, I think.
georgecm12@reddit (OP)
That's really my thinking. Autopilot seems to be built for the remote workforce where you would appreciate calling up your VAR, have them ship a laptop directly to a remote worker, and they turn it on and wait a bit for it to configure itself. We don't have any need for that deployment framework.
Even if we have a remote worker with a completely DOA laptop, it's easier for us to image a hot swap laptop here, throw it into a box, FedEx it to them, then they UPS ground ship the replacement back to us.
certifiedsysadmin@reddit
This is exactly it. Autopilot is so useful for remote workforce, small offices, etc. It can save helpdesk teams a ton of time and it keeps deployment very consistent.
For your type of environment where you have way more machines and local staff, especially lab settings, imaging makes way more sense.
JwCS8pjrh3QBWfL@reddit
Autopilot self-deploy can be the solution for labs and non-user-specific devices. It's almost zero-touch, you just need a network connection and someone to click "next" through the first couple pages of OOBE and then forget about it for 30 minutes.
PowerShellGenius@reddit
But it doesn't replace server infrastructure, because you still need plenty of Connected Cache servers in ultra high density environments.
The difference is: if DO/CC fails, everything downloads from Microsoft's CDN (and brings your internet uplink to its knees). Versus SCCM, where updates are delayed a day while you fix your server, and nothing hammers your WAN without your permission.
man__i__love__frogs@reddit
pre-provisioning technician mode would work too. No need to set up an imaging station, just install the device, log in and walk away.
GullibleDetective@reddit
Autopilot is extremely slow for bulk deployment all at once, due to the pull mechanics and everything coming from offsite.
bemenaker@reddit
Autopilot is to replace SCCM. Microsoft wants to make everything a cloud subscription service. Autopilot is SCCM's cloud subscription.
TechIncarnate4@reddit
Intune is to replace SCCM. Autopilot is a feature of Intune.
bemenaker@reddit
Yes, but autopilot fills in the missing piece of doing os installs.
TechIncarnate4@reddit
Autopilot Preprovisioning is an option also. It's like imaging or an SCCM task sequence, where someone from IT is expected to start it first. It will be slower than your image on individual machines, but you don't need to keep creating new gold images. Just swap out the software or add new software to the process.
RaidZ3ro@reddit
With Autopilot you simply won't be imaging yourselves anymore; it's all self-service by your end users. You can even have your hardware provider pre-enroll your devices so they can ship directly to your users' desks without any further intervention.
IwishIhadntKilledHim@reddit
Adding on to this...it's nice from a user perspective when they can hit the user-friendly factory reset button in settings and it resets back to corporate default.
Have I had users call irate that they were able to do this by accident? Sure! But I like things that give the user a consistent experience between home and business
ManLikeMeee@reddit
We use autopilot here and it requires constant babysitting!
man__i__love__frogs@reddit
We've been using it for years on hundreds of devices, 20 offices, 100+ remote staff and I can't recall the last time it needed babysitting.
You probably have something set up wrong, like mixing LOB/w32 apps, or using dynamic groups instead of device filters for assignment.
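For reference, an Intune device filter is just a small rule expression evaluated at assignment time, which is why it's faster than waiting on dynamic group membership. The manufacturer and model strings here are illustrative placeholders:

```text
(device.manufacturer -eq "Dell Inc.") and (device.model -startsWith "Latitude")
```

Assignments scoped with a filter like this evaluate at check-in rather than waiting for a dynamic Entra group to recalculate, which is one of the common Autopilot "babysitting" fixes.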
SikhGamer@reddit
If it ain't broke...
For us, we have a few 100k devices. It's impossible to buy the exact same spec/make/model across the globe, so things like golden images don't work for us.
It's much easier to have something like DSC (or whatever the fuck they call it now in the cloud) and have it all declared as config.
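A minimal sketch of what "declared as config" means in PowerShell DSC terms; the registry value shown is just one illustrative baseline setting, not this commenter's actual config:

```powershell
Configuration Baseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        # Declare the desired end state; the Local Configuration Manager
        # converges the machine to it and keeps it there. No image to recut.
        Registry DisableConsumerFeatures {
            Key       = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\CloudContent'
            ValueName = 'DisableWindowsConsumerFeatures'
            ValueType = 'Dword'
            ValueData = '1'
            Ensure    = 'Present'
        }
    }
}
Baseline   # compiles the declaration into .\Baseline\localhost.mof
```

The point is that hardware variance stops mattering: every model converges to the same declared state instead of receiving one bit-identical disk image.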
Trakeen@reddit
How long does it take you to update all the images you need to maintain? I was in higher ed and used wds and mdt when that was a thing. Then we moved to vdi with app layering and man was that nicer for all the weird builds every lab needed
rcook55@reddit
I've not read all the comments, but I'll put this out there: ManageEngine's OS Deployer. It uses golden images, but it can take an image from disparate models of hardware and inject drivers as needed. I update images quarterly if needed, and update our "base" software if there is an update, all on whatever the most recent model laptop we're deploying is. The image is backward compatible with the rest of our fleet with no changes or modifications needed.
We also leverage post-install tasks for more critical software that updates more frequently, like AV. I just need to select the package as a task, so if the AV updates I can have the latest version available w/out having to update the Image.
We're still going to look at Autopilot for remote users, but it sounds like on-prem golden imaging is still very viable.
thegreatcerebral@reddit
Basically, the answer as I understand it is that you don't have to "update" Autopilot installs. It will install the most recent versions, unless I'm wrong. With 1/3 of your fleet replaced each year, you may not feel the need to update the image nearly as much.
Independent-Sir3234@reddit
Golden images feel great right up until one driver, one app version, or one weird model difference forces you to recut the whole thing. At a school gig I inherited, the real pain wasn't deploy time, it was that every small change turned into full-image testing and one bad capture could wreck the whole refresh window. Task sequences and app layers were slower at first, but they made day-two changes way less painful.
uptimefordays@reddit
If you’re doing leasing that way, why not look into factory imaging from your OEM? You can basically hand all your localization to them and get a machine that auto-joins the domain and gets configuration out of the box. It’ll save a lot of time on deployments while eliminating misconfiguration opportunities.
defnotajedi@reddit
I use a GM and FOG. We're a Dell shop, so there's not much variance in platforms. I'll continue to use FOG when orgs desire a good free solution. If given the option to upgrade, I'd say put the money elsewhere until more cons develop for this method.
rc_ym@reddit
It depends a bit on the use case of the devices. Autopilot really shines with remote workers or heavily heterogeneous self-service needs (dev work, etc.). If you've got to get PCs to technical users all over the nation, it's infinitely more effective than GM imaging and shipping in house.
But it sounds like that's not your profile.
Unless there is some deal you can get from the ITAD or lease company where they do that work for you (which would be particularly silly if 1/3 of your fleet were individually shipped to each user... at their office... where you are... SMH), I really don't see a benefit.
Fuzzy_Paul@reddit
We have done it in the past with ZENworks. Excellent tool for PXE and imaging; drivers are a breeze. We migrated to SCCM, and that was a step backwards; later we moved to Intune, and again back into the stone age. Now we do everything with Intune and get the job done. But sometimes we miss ZENworks, which has things like dynamic admin and a pull-down menu with no need for scripting. Detection of whether software is installed is fully automated: no detection script needed, not even for updates. You can promote a workstation to a temporary image distribution point, watch imaging progress, etc. The list of its advantages over all the others is long, but that's not what you asked. I think ghosting with sysprep is the fastest way to reimage, with multicast. That is unbeatable if you have more than one ghosting machine.
phoenix823@reddit
Sounds like a problem in search of a solution. If your current process works, great. Autopilot made a lot of sense for us when we needed to ship equipment around the world, to countries where we don't have IT people and cannot image the device first and wait for it to go through customs. It's a logistical solution, not a technical one.
ScrambyEggs79@reddit
I agree. Use what works for you. I think larger and more diverse hardware to support can benefit from using a streamlined process (os install, drivers, apps, etc) where each steps can be customized/changed/updated individually. But if you have a working process and the environment is more homogeneous then you could probably save time with a standard image. Also still a good choice if you have highly customized workstations.
unscanable@reddit
If it works for you and youre happy with it why change?
jimicus@reddit
I honestly can’t.
SCCM et al. are all well and good, but they all suffer the same problem: the MSI install process is fundamentally sequential. You can't have two installations running simultaneously. (And it has to work this way, because installer A may behave differently depending on whether or not product B is already installed; concurrent operations would make the end result inconsistent.)
This means that for any non-trivial installation, it rapidly becomes dog slow. It’s pretty obvious that as far as Microsoft are concerned, the edge case of “must complete very quickly” is so niche it isn’t worth turning the whole installation process upside down for.
sccmjd@reddit
I still use golden images. I've heard a lot of criticism about them not being "modern" but it still works fine. I don't have huge numbers of machines, and I don't have a strict time limit for prepping them.
There is an advantage I think to being able to image completely offline and to be portable.
It also dawned on me a couple years ago that even an older golden image can still be useful. It may be off on the year of the OS but that's just one OS upgrade away. Apply an older image, upgrade the OS, and it's back where it would have been otherwise. In terms of time spent paying attention to that extra upgrade it feels like five minutes, so nothing. Out-of-date software going to get updated essentially on its own with probably zero attention or seconds of attention paid to it at all. So an old image that I put meticulous attention into keeps paying off years later.
It has stumped some people when I've been criticized when I ask how I can image completely offline at certain locations. That's met with lots of ums and that it needs internet access. I just need the machine, a couple thumb drives and hard drive, and I'm good to have it mostly prepped completely offline and more on the go or more remote. Add in no strict time limit, and that's 16 hours available outside business hours, plus an extra 48 hours over a weekend. There's almost always plenty of time.
bgr2258@reddit
Oh man, I miss the Altiris deployment console. We used it in a school district for a while before it got bought and cannibalized by Symantec. It just worked so well: easy to set up task sequences, multi-machine remote control, and the ability to just "run this command-line thing on these machines once and never again". It was fantastic.
Regarding your question... I have no advice. I'm still using golden images in a SMB setting, even though I probably should be moving to something else at this point. 🤷♂️
Neat-Researcher-7067@reddit
Here is the difference:
Imaging - YOU do this: "I setup a deployment lab, and can get 30+ laptops imaged in under an hour via multicast."
Autopilot - The USER does it for you with little to no issue.
Internal-Chip3107@reddit
The "user does it for you" part only works if the user can read a simple instruction.
I've told my desktop guys to stop helping, but hey, I'm not the manager.
Before Intune/Autopilot we used Specops Deploy, a laptop was done in 10 min and updating images was a 5 min job.
Neat-Researcher-7067@reddit
We can only do so much....
I_AM_SLACKING_OFF@reddit
We're not in 2013 or 2015 anymore
man__i__love__frogs@reddit
Hardware-based master imaging stopped being standard practice when Windows 10 first came out. It was not recommended by Microsoft, and they specifically released deployment-based tools such as MDT instead, which were free, btw.
With deployment you start with a base image, then you push config, apps, drivers, customizations, etc... on top of that base image. There are a number of reasons why this is beneficial.
There is less chance of something becoming corrupt and having gremlins or whatever causing constant/repeat environmental issues. There is much less overhead from administration, for example you can deploy things from a repository such that you're just telling the tool to fetch the latest version of Adobe and install it. You don't need to worry about keeping Adobe up to date on your image. The same can go for your base image itself if you're just pulling one from Microsoft. Not only is this less administration, but then users don't have to wait for a bunch of updates to run the first time a computer is used.
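The "fetch the latest version from a repository" model described above is essentially what winget does today. A sketch; the package ID is from the community repository and worth verifying before relying on it:

```bat
:: Install straight from the repository; nothing baked into an image
:: to go stale. -e matches the ID exactly, --silent suppresses the UI.
winget install --id Adobe.Acrobat.Reader.64-bit -e --silent

:: The same mechanism keeps existing installs current.
winget upgrade --all --silent
```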
Autopilot is just the next level of this, to the point everything can be customized and the device can go straight from the vendor you bought it from to the user without needing IT to touch anything at all.
MidgardDragon@reddit
In your situation, there really isn't an advantage to Autopilot, IMO. The advantage to Autopilot, IMO, is when you need to ship things direct from the vendor to the user. You have them add the hash to your Intune, set up Autopilot for that machine, and the user turns it on, logs in, waits, and eventually* everything is there.
*Within 10 minutes to 24 hours, depending on how much you're trying to push to it.
Og-Morrow@reddit
Hard to maintain
geekywarrior@reddit
Since you seem to be contained to 1 campus, can't really see the need to switch.
We golden image, since the later win 7 days, haven't really seen drivers be a major obstacle at all.
Once in a while our PXE environment needs a new network driver injected. But it's pretty rare.
Once the image loads, very rare for a network driver to not be loaded and once thats on, Windows update takes great care of the rest.
Maintaining the golden image can be cumbersome, but in higher education you have semester cycles built into the workflow. Upgrading it every semester seems more than enough.
It can get annoying if you span multiple campuses, as then you either need to set up multiple deployment environments or machines need to come from the main campus. Not a showstopper though.
jason9045@reddit
As long as your process works, I'd stick with it. Autopilot is fine and all, but it's also unpredictably slow. A machine may check in and pull its config and software deployments in half an hour. Or it may take two hours. With the volume you're deploying, there's no way that's manageable.
Accomplished_Fly729@reddit
Device prep so far is bullet proof in my experience
cjbarone@reddit
Our SD is moving away from our on-prem solutions we've been using in favour of Autopilot.
Our existing system was to use FOG (Free Open-source Ghost), and I have two images - one for student devices, one for staff devices. Drivers are copied during the imaging process (depending on what the mobo reports itself to be), so when new systems arrive, I extract the driver pack to a folder on a server and get imaging. Like you, I can do entire labs in under an hour, even remotely. With one image for every staff, and one image for every student, I know what EVERY computer should be doing, along with the users. I wouldn't want it any other way.
The new system we're moving on to has had issues with completing the entire process of adding the settings/apps that are required. We have no visibility into whether a package had been deployed, and can't ask for it to re-apply a package. Sometimes the image we apply (yes, we have a bunch of USB drives to wipe the Dell-installed Windows Image, and put on our plain-Jane one created a year ago) needs to be applied again, but some of the Autopilot apps won't install after that.
donith913@reddit
Former higher ed admin here. For your purposes? You probably don’t.
From an image maintenance perspective, I do prefer to not bake apps into a WIM, and something like SCCM can deploy apps during the build or you can let your software deployment tool handle it afterwards. Slower than just straight up dumping an image onto a drive, but then you always have the latest packaged version on your newly built machines.
Zero Touch deployments make sense when you are dealing with a distributed workforce or a lot of users with modestly different configs that can be dynamically applied. Large enterprises are often already paying a VAR or services company to do their warehousing, imaging and shipping direct from their facilities to cut down the overhead and the time it takes to deliver a machine. In those cases, autopilot saves money and cuts a partner out of the loop.
If you’re a Mac Admin, that’s a very different conversation driven by Apple killing imaging a decade ago.
Certain_Prior4909@reddit
MDT is deprecated, and Windows 11 Store appx conflicts suck goatballs. It took me 3 working days to fix Cortana, Bing News, and other problems, which is infuriating when I needed to create a master image. WTF, does Microsoft even test their products internally? How can everything go to AppData by default now?
Autopilot avoids these. But I do feel uneasy about OEM junk and lord knows what else, especially with a possibly infected machine; I feel more comfortable starting from a fresh image.
exedore6@reddit
Microsoft was bought out 15 years ago by a startup called Azure, and have been waiting for their big legacy clients to figure out that on-prem is deprecated and barely on life support.
Agile_Seer@reddit
Highly recommend looking into FOG for your imaging: https://fogproject.org/
This is what I used when I worked for a school district. With multicasting, I could image a full computer lab in under 30 minutes.
ncc74656m@reddit
I was very much all about an MDT server in the past. SCCM works fine too if you really want. Done right and maintained they work well and can be fairly quick.
All those things are going away in favor of cloud management and deployment. If your environment supports the bandwidth, or if you can spin up an Autopilot Cache server, and you properly monitor and identify what packages work properly and what cause issues, you can deploy very fast. Mind, it's not necessarily as fast as a local ghost, but it's also pretty hands off.
It's up to you. Do you want to keep maintaining what will likely soon become an out of date system, doing things the old way and letting your skills atrophy, or do you want to move forward with what is certainly the new way?
tin-naga@reddit
I really like the modular approach to Config Manager. Super easy to tweak the 10 to 11 image. Since switching to Endpoint Central we've gone with a golden image. It works okay, and I have no major complaints. They have an option where we can image from "the cloud" but you have to pay for the additional storage.
georgecm12@reddit (OP)
I really need to look at Endpoint Central... that's one of those tools that seems like it might be a good drop-in replacement for GSS and get away from Broadcom.
exedore6@reddit
We stopped using Ghost when Symantec made the licencing untenable. Went to Microsoft's WDS (which was far lighter than systemcenter). You can maintain your old ghost workflow, build the image, seal it up, PXE boot to capture it to the server, PXE boot to deploy (with multicast).
You can deploy quite a bit using group policy, either by msis or scripts.
You don't get all of the features of systemcenter of course, reporting is difficult.
For us, the image eventually got so close to factory that, combined with needing to support devices that might never touch the home network with any regularity (thanks, COVID) and MS's disinterest in developing or supporting on-prem solutions, we bit the bullet and embraced our new cloud-based overlords.
tin-naga@reddit
I sold the CTO on it with the endpoint privilege management. No more chasing Respondus, ExamSoft, and Autodesk admin prompts. There are other solutions, but I thought it was interesting to have it built into the endpoint management system.
firedocter@reddit
Not sure about autopilot. But I use pdq deploy and group policy.
Otherwise I need to build a new golden image every time there is an update to something.
brisquet@reddit
I used to work in education and use Altiris and it is way better than what I do now with SCCM and Autopilot. I miss multicast and imaging an entire classroom or wing of the school at once. I say keep using what you are using.
nme_@reddit
If you only have one geographic location, what you’re doing is fine.
I’ve got clients with a global workforce, and we have Autopilot configured; it’s a lifesaver. IT didn’t need to touch the devices at all.
When you’re talking 40,000 endpoints gold imaging just isn’t an option without a dedicated imaging team.
Barnox@reddit
You've not mentioned any pain points, and it all looks like it's working for you, so I'm not seeing a reason you'd change right now. Drivers and updates/patching is the only thing I can think of.
We're currently running a mix SCCM and Intune deployment - straight Windows + Office/Adobe through SCCM + other software through Intune, then for the more complicated labs we've got wims with all the software preinstalled, recreated annually over summer. We use TsGUI on the SCCM Task Sequence to choose what goes on to each machine.
The advantage of Intune for this is being able to more quickly update and upgrade software, and change what is installed to a room. The downside is Intune's idea of "quickly". Where a machine built from image would be ready to go once it hits the login screen, Intune deployments take as long as they want.
Generally everything here is based on-site - if you've got a lot more remote workers, Autopilot becomes a stronger choice.
exedore6@reddit
How frequently do you update your master images? How much time does it take for that image to be fully up to date?
Yes, you can do it. Build your image, configure it for your environment, and reseal it for imaging.
But it's not ideal by far.
Ideally that golden image would be as small as possible, as generic as possible, hosted somewhere where all of your clients could get to it. The clients would ship from the factory, preconfigured to get a current image, and enroll in whatever management platform you use.
The autopilot model is just that. The vendor enrolls the device at purchase. The factory image is clean enough. You can drop ship the laptop wherever it needs to go.
If you still want to unbox them in a lab for staging, you can. But you don't need to. You don't need to keep abreast of which Desktop.ini files need to be updated in the Default Profile folder, nor do you need to care that shortcuts aren't portable across devices anymore.
equinox6k@reddit
Look for problems and challenges in your current process (too slow, too expensive, too complicated, insecure, etc.) and evaluate whether a different product actually adds something positive. If there isn't any advantage, or if it creates more challenges, keep your current procedure. Don't switch just for the sake of switching...
m5online@reddit
I'm in Higher Ed IT support. I manage about 450ish endpoints, with about 300ish being lab machines spanned across about 12 labs. I still use golden master images for the labs quite simply because my lab images are north of 500 GB. Images that big take all day to push out via SCCM or even ManageEngine. Quite frankly, I have a bank of 10 USB-C M.2 drives that I manually image labs with just using Macrium. It's very old school, but with the help of a few student assistants I can get a 40-station lab imaged in about 3 hours vs. 8-12 hours (which effectively makes it a 24-hour process if imaging goes past 5pm) being pushed out by SCCM or ManageEngine. We are going to play with thick image capturing with ManageEngine and experiment with pushing out that way, but we'll see...
Stonewalled9999@reddit
for labs/classrooms Autopilot is going to be slow as tar (I mean it already is but for 10-50 machines at a time with faster LAN your process is going to be faster). If the PCs are cookie cutter and you only have to master 1 or 2 images what you are doing is probably fine.
JwCS8pjrh3QBWfL@reddit
At a school, I would hope they already have a Microsoft Connected Cache on a server with a fat network connection directly into a core switch. Ours was on a ten year old Dell server with spinning rust and it still sped up deployments a ton.
georgecm12@reddit (OP)
Yup, only two real images, a "fac/staff" and a "lab/class" image. Images are basically hardware agnostic.
The lab image is enormous, but it delivers what our faculty expect - "everything, everywhere." There's no dedicated labs on our campus (e.g. "engineering lab," "computer science lab," etc.) so all the software for all disciplines is on all machines. This gives our registrar freedom to move classes around. And this lab/class environment is what I was really thinking about - I can't even fathom how to wedge an "Autopilot" like deployment into that sort of environment.
TheCanadianShield@reddit
Having been there and done that with that type of image split and environment? You’ve already identified the biggest issue with golden images in regards to your lab image. It’s enormous and the complexity means more and more things go wrong the bigger it gets. There’s a few people in this thread that have advocated for a middle approach (customize your Windows base and modularize your applications) and it’s the approach that I personally found the most sustainable in terms of balancing maintenance versus deployment time.
Roland_Bodel_the_2nd@reddit
I think you are right to focus on the main metric of how quickly you can deploy e.g. 30 systems, if that is your primary use case. Doesn't matter which technology details you use to do that.
Usually the first issue with imaging is when your hardware fleet or software environment becomes more heterogeneous and you have to maintain multiple images.
riddlemethrice@reddit
get ready for the SJWs to key in on "master" in 3...2....
discosoc@reddit
How often are you maintaining the images with updates?
smarthomepursuits@reddit
We still do golden imaging.
Biggest issue we've noticed in the last few months: if these devices all have the same SID, RDP and c$ break due to a newer Windows update. It shows up when trying to RDP or c$ into a device that shares the SID.
https://www.stratesave.com/html/sidchg.html works well as a fix; we've probably done 20 machines so far, but it requires about ~15 minutes of work to uninstall AV, turn off Defender real-time scanning, and some other weird steps.
georgecm12@reddit (OP)
Sounds like you aren't sysprepping your GM image.
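For anyone trying to confirm the diagnosis above: a Windows machine SID is just a local account SID with the trailing RID stripped, so two clones built from an unsealed image will share everything before the last hyphen. A minimal sketch (the SID values are invented for illustration):

```python
def machine_sid(account_sid: str) -> str:
    """Strip the trailing RID from a Windows account SID
    (e.g. S-1-5-21-<machine part>-500) to recover the machine SID,
    which clones imaged without sysprep /generalize will share."""
    return account_sid.rsplit("-", 1)[0]

# Hypothetical account SIDs pulled from two machines cloned
# from the same unsealed image (RIDs 500 and 1001 differ,
# but the machine portion is identical):
admin_a = "S-1-5-21-1004336348-1177238915-682003330-500"
user_b = "S-1-5-21-1004336348-1177238915-682003330-1001"
print(machine_sid(admin_a) == machine_sid(user_b))  # → True
```

If that comparison comes back True across two physical machines, the image was captured without being generalized.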
LaDev@reddit
Autopilot or SCCM Provisioning just offer less overhead, makes some things easier, but definitely not everything. In reality they are just other tools in our toolboxes. Golden imaging is a perfect solution to some problems, just as Autopilot or SCCM is.
Icy_Butterscotch2002@reddit
Not sure how relevant this is since it’s talking about buying a different product, but I would really go check out PDQ smart deploy for imaging
georgecm12@reddit (OP)
I'll certainly take a look. GSS works for us, but if there's a better (and possibly cheaper) product out there, it's worth a look.
PDQ_Brockstar@reddit
I used to do a lot of imaging when I worked in higher ed, and I don’t think Autopilot would have fit our needs either. Just because the industry is moving towards autopilot, doesn’t mean you have to.
JosephRW@reddit
I'm still an ITMS admin in my environment. Between being able to set up targets for various software configurations, detect software configuration state, remediate automatically based on our controlled input, and the way we are able to stage our imaging and device moves programmatically, it works perfectly well for our... 1200ish Windows endpoints at my district, and it deals with 25k devices across the whole org just fine.
jM2me@reddit
A while ago, before we adopted Intune and Autopilot, the company had no true imaging solution; every device had to be set up manually. Adopting MDT was a quick win: we could rebuild the base image however often we needed to (once a week), and the output of that was an image we applied, in just under 10 minutes, to 10 times the number of devices we were preparing before.
This worked great for in-house imaging. With the move to Intune and Autopilot, our base image became just a base Windows 10 install with no bloat and a driver pack installed per model. In-house we use OSDCloud, which installs a clean Windows ISO, applies the driver pack, and runs scripts for updates and Autopilot. Zero interaction from the point of booting into OSDCloud to reaching the login enrollment screen. This is a "golden image".
Now our vendor has their own process for this golden image, but we don’t share it.
I guess the point I was trying to make is with Intune and Autopilot your regular golden image just becomes a base set of requirements before Autopilot.
techie1980@reddit
I think that we're going to be speaking different languages. I'm mostly a large-server *nix person, but here's why I moved from the "golden thick image" to the "thin image" philosophy: there are too many moving parts now, too many patches, and an inability to do a top-to-bottom test. The golden image philosophy implied (at least in the medical industries) a full QA analysis every time. The amount of friction involved meant that on our best cycle we'd maybe get an image out there in four months. The "one giant image" model also created a lot of organizational choke points.
There was an expectation of the people maintaining the image to understand every single package and their interaction - which is simply not reasonable in a modern *nix.
Testing with a thin image made things MUCH more scalable. Want a new openssl version? We can put out an image and a specialized yum/apt resource that allows exactly one change and give the app teams the tools they need to do their testing without a full stack in the middle. And it makes my life on the OS side much easier when say a new network config comes along and I need to drop in a single new driver and not go through a heavy process involving app teams signing off.
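The "exactly one change" repo trick described above can be approximated with an apt pin; a hedged sketch, where the staging repo hostname and package are hypothetical:

```
# /etc/apt/preferences.d/openssl-staging -- hypothetical pin that lets
# app teams pull exactly one candidate package from a staging repo
# while everything else stays on the frozen base image.
Package: openssl
Pin: origin staging.example.edu
Pin-Priority: 1001
```

Priority above 1000 means apt will install (even downgrade to) the staged build regardless of what the base image shipped, so the only delta under test is that one package.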
Add onto that the technical cost of imaging, which was climbing over time. A full *nix stack can easily be in the >50 GB range once everything is installed, PXE boot doesn't really know how to handle multi-threading, and the load on the network increases over time - especially when we're trying to reimage a few hundred or thousand nodes all at the same time.
Having a modularized approach also makes it much, much easier to audit. "Here's a pile of deb files" means we can give people reasonably simple tools to gather their own data. And it opens us up to modern tooling for hosting/versioning.
That all said - I'm not a desktop or windows person, so your situation might be completely different. In your case, I would suggest making sure you find the right balance between what works for you right now and what is supportable in the long run. Meaning in three years will you be able to find someone to hire that will understand your current process and be able to keep it going? Or will you need to find the windows version of a graybeard?
Slivvys@reddit
If it doesn't save you time, there's no reason to move away from a golden image.
If it saves you time and you have an automated pipeline (autopilot? Intune), and you handle more than 50 reimages a year per tech then by all means make a move.
Fake_Cakeday@reddit
Our leaders wanted to go as much cloud as possible and also to go entra joined instead of hybrid joined.
Since we have many locations in the same bigger city, having any location without IT just be able to reinstall their own user laptops so a new user can use them is very nice.
The issue has mostly been in retraining our older L1 techs into using Intune and getting them out of the golden image mindset.
They sometimes keep reinstalling a perfectly fine autopilot PC with a USB because they think they have to in order to autopilot join it every time...
But honestly, if we hadn't lost our SCCM tech, I would have preferred to get our SCCM task sequences updated rather than going Autopilot. But maybe our management would have gone cloud native anyway...
mediaogre@reddit
I think for Autopilot to really show its benefits over more legacy models, scale and geography need to come into play - elements that warrant adding complexity because deployment scenarios and location demand it. We're currently moving rapidly towards Autopilot + Patch My PC, but I wish we currently had what you do.
acid_jazz@reddit
I don't think you should. Autopilot makes more sense with remote users (EntraID only) and zero-touch basic configurations. These are labs that are all on prem. Keep using Ghost if that works for you. Bad part about it is you have to rebuild the gold image every time you get a new model. Personally we use SCCM Task Sequences so we just need to drop in new drivers... though it does take way longer than 15 min.
Also, is this the same Ghost from Symantec years ago? If so, this brings back a lot of memories. I remember back in the early 2000s using a boot floppy to PXE boot and pull down a Ghost image. We could image like 30-40 systems at a time.
TechGuyworking@reddit
I have some questions about your golden master implementation. Do you have only one golden master and just upgrade to similar models? How often are your golden masters updated? What's the age of your oldest computers when they are replaced? Do you manage updates for the PCs and how is that managed? How are driver updates managed?
Basically, I want to find out whether managing the PCs outside of a fresh install is time-consuming enough to need a different option.
OneSeaworthiness7768@reddit
Autopilot isn’t the only alternative to golden images. You can also just use a bare wim in sccm and configure everything else after, which offers more flexibility.
ipreferanothername@reddit
if your need is that everything gets the same thing - i get it, that can work for your scenario.
im in health IT [windows server] and i think the client side guys have a couple of thin images, and a couple of large images, depending on the machines they deploy to. then we have so many health and management apps that its not worth having images of all that; you just deploy as needed for new builds.
server side i have vcenter VM Templates - sccm client, vmware tools, and windows updates. i have 9 templates and i do not want to even have sccm update them all the time because troubleshooting off-domain images is kinda annoying. i have a pretty slick way to deploy a VM and run a job that wraps up some config items and kicks off SCCM to install some standard apps that i keep updated for all servers anyway.
we deliver a lot of things via citrix PVS here - it has a similar golden image concept. app layers make it easy to update once and then assign to all the images these days if there is overlap, but i think theres 12 or 15 citrix images that do have to get maintained.
you do what works for you. and....sometimes technology changes and jacks your stuff up :-/
Zerowig@reddit
I love Autopilot because I got tired of the golden master being constantly outdated. Every month we'd have hundreds of new machines pop up with patches that were months out of date or old versions of apps that needed updating before users could use their machines. There were multiple reasons for this, but the freshest we could keep the images, IF we stayed on it, was 2 months old, due to change control and testing and whatnot. That's too old, IMO.
With Autopilot, you don’t “image” anymore. When we update Citrix Receiver, or any app, systemwide the “image” also gets this new version. Office no longer closes on people in the first few hours of use because it needed to update.
And there are costs involved as well. Intune is essentially “free” due to our MS licensing model. It didn’t make sense for us to invest in another solution.
I certainly do miss the days of getting a system wiped and imaged in 30 minutes or less. But for us at least we found the time it took to get a PC imaged, then let sit and update, was comparable to the time it takes for Autopilot to get a machine ready.
t3chguy1@reddit
Thick image for us too, 250 GB. Tried other ways but there is so much to configure per software package, some are not even deployable without a wizard. Even with clonezilla deployment, some programs just don't work on random machines, even if they are exactly the same configuration and purchase batch. Windows being Windows.
justaguyonthebus@reddit
You're probably not doing image maintenance very often. Unless you have automated that, it's a non-trivial effort. Then you have consistency issues when you have to start over as something always gets done differently by hand.
That also means that stuff isn't updated or patched in the image so you start building a post deployment process that gets longer and more complicated as time goes on. When you do update the image, that process gets reset.
With that said, if you have automated your golden image and refresh it every month to make sure it has everything updated, and you can make edits to the automation and rerun it, then you won't gain much.
Craptcha@reddit
You don’t have to. It’s a question of size and workflow.
With Autopilot (in theory) you can drop ship a device to a user and have it reach desired state autonomously.
With imaging you have to stage the device in a physical location where the image can be loaded.
If you already have a working imaging workflow, you’re probably not going to gain much from transitioning to Autopilot, and third party app deployment could become a lot more work because those packages need to be built and maintained separately through Intune.
If your org has distributed geographies and you need to ship devices without staging them locally, I’m not sure how well imaging would work in that case. There may be benefits to the “modern” approach.
I would stick to what you have for now and look at the evolution of Autopilot 2.0 which is probably going to replace the existing tool over the next few years.
hologrammetry@reddit
I am in higher, our school uses SCCM for Windows provisioning and I personally use Ansible for provisioning the Linux boxes in my department. I think the Macs are done with JAMF and although I started in Mac-land ages ago I've been out of the desktop admin side of it for long enough that I have zero experience with JAMF.
I personally prefer the modular approach of provisioning over imaging because I don't have to re-image a machine to make changes. For our Windows labs, it enables making changes during the middle of the term that we couldn't do when we were imaging only on term breaks or over summer.
It does sound like you've got your process down pat so it's a harder case to make in your situation. Imaging wasn't really working for us any more and we were glad to move to provisioning especially since we get more support from our central IT group this way.
Hg-203@reddit
How often are you updating your golden image with updated software? I work in K-14, and I think you're just shifting the technical debt: time spent rebuilding the image with updated software vs. keeping individual software packages up to date.
nyax_@reddit
Jumped to autopilot 12 months ago, never looking back.
ItsTooMuchBull@reddit
I mean, how much is Ghost Solution Suite costing you? Could that money be better spent elsewhere (tools, employees, talent retention etc) by using products that are already baked into your licensing costs? If not, then keep with GSS. There isn't necessarily a "right way" here. Just for the majority of organizations, Intune and/or ConfigMan offer a better ROI. I have found that an optimized task sequence in config manager is as fast as GSS and provides better reporting data natively, so why would you pay for it?
Knight_of_Tumblr@reddit
We needed a golden image for deploying Autodesk products (like 120 GB of architectural BS) at one of my jobs. Currently I'm using Autopilot because we have zero app installs and a handful of remote offices that know to "turn it on, look for our logo, and sign in to your email."
Sounds like you're in a great spot for your use case, kudos!
ghostnodesec@reddit
Autopilot comes into play if you want to ship direct to the user without first having to touch the device; in theory, building on the fly means less maintenance on the image. I say "in theory" because you still maintain the build sequence, so it really depends. Autopilot is great for scenarios where you have remote locations and can just ship hardware direct: users log in, get their apps, and off they go. But heavy apps (looking at you, Office) baked into an image still make sense due to how long they take to install, and if your image is cookie cutter without a lot of variance, something like Ghost will win every time for a mass rollout of hundreds of machines.
Affectionate-Cat-975@reddit
It depends on your volume of workload. If you're only doing new deployments, then Autopilot makes sense. However, if you have a regular cycle of reloading Windows, the imaging model works better.
cjcox4@reddit
Because "world + dog" is trying their very hardest to ensure that your GM imaging approach can never work again. That is, if they (primary OS makers) could completely kill this off today, they would. Is that right? Perhaps not. Usually "more money or more control" is the reason (be that direct or indirect).
dcutts77@reddit
I too am curious... I mean... if it works it works.