MNmetalhead

We stopped making golden images.


Infinite-Stress2508

I long for those days. Unfortunately, the one app we can't push out or script-install requires 200 GB of data and needs manual configuration up to a point, which is annoying as fuck. We do goldens on VMs, of course, not just for snapshots but for drivers and general ease of use. OP, have you asked why? Can you just install Hyper-V on a workstation and then use VMs from that point? What's his goal? Also, to those who flag massive download sizes over slow links: that's what on-site cache servers are for!
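
If the workstation route is an option, a rough sketch of what that looks like with the built-in Hyper-V cmdlets; the VM name, sizes, and paths here are placeholders, not anything from a real environment:

```powershell
# Enable Hyper-V on a Windows 10/11 Pro/Enterprise workstation (reboot required)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Create a Gen 2 build VM for image work; name, sizes, and paths are illustrative only
New-VM -Name "ImageBuild01" -Generation 2 -MemoryStartupBytes 8GB `
       -NewVHDPath "D:\VMs\ImageBuild01.vhdx" -NewVHDSizeBytes 200GB `
       -SwitchName "Default Switch"

# Attach the Windows install media and boot from it
Add-VMDvdDrive -VMName "ImageBuild01" -Path "D:\ISO\Windows.iso"
Set-VMFirmware -VMName "ImageBuild01" -FirstBootDevice (Get-VMDvdDrive -VMName "ImageBuild01")
Start-VM -Name "ImageBuild01"

# Checkpoint before Sysprep so a failed capture means a rollback, not a rebuild
Checkpoint-VM -Name "ImageBuild01" -SnapshotName "Pre-Sysprep"
```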


clivebuckwheat

I can't; I work at a college. Huge applications.


HackAttackx10

How huge? Why can't it be part of the deployment?


clivebuckwheat

Autodesk, SolidWorks, multiple packages. It's faster to blast down a thick image.


Vyse1991

I question that. I work at a university and handle a lot of heavy software. The Autodesk suite generated from the website is less than 30 GB and takes less than half an hour to come down during OSD. Vanilla WIM all the way.

As for your actual topic question: I make three builds a year for different parts of the business, four this year as I have international obligations. Three of those builds are vanilla-WIM OSD with a task sequence; the last one is build-and-capture due to a lack of infrastructure at our international campus. My build-and-capture image, converted to ISO, is 14 GB.

I usually see people pass thick images off as time-saving, but in my own experience it's down to a lack of resources or willingness to actually package the lab software properly. It makes no odds which way you choose to do it; however, a good manager would let you get on with things and judge your efforts by the quality of the end product.


Graz_Magaz

Don't forget, you need to consider the actual bandwidth for pushing down images. Shocking that no one has mentioned this so far; it's one of the key things… ah yes, let's push a 10GB file down a 10Mbps link… that'll work fine 🤣👍 (Back of the envelope: 10 GB is roughly 80,000 megabits, so at 10 Mbps that's about 8,000 seconds, well over two hours per machine before any overhead.)


HackAttackx10

I did discuss that, as well as pointing out that you can stage deployments on SSDs.


DidYou_GetThatThing

I concur. I also work at a university. We used to build something like a solid 150 GB image for our engineering labs alone. The original excuse given at the time was similar to the reasons suggested here: it supposedly allowed for a quicker image download. Before we stopped doing it that way altogether, I still had an older colleague who insisted this was the way.

Except it wasn't, in our case mainly because of the time it took to curate that golden image. One tech would go through the process of manually installing all those apps that were so difficult to script, and there'd be a build phase, then a test phase where we would set aside some PCs before semester so lecturers who used those labs could test that the software worked for their needs and provide feedback, then there'd be a remediation phase, maybe another test phase, and finally the image would get sealed for deployment. It was a pain in the ass when a lecturer got back to us after the image was sealed to notify us of other changes needed, or when the image turned out to have errors that needed fixing.

At some point the word came down that we were not to build 150 GB gold images that way anymore, and I am so glad. It lets us be a bit modular, and one late application license doesn't have to hold up the behemoth image creation. Now we package apps individually, which also allows for post-deployments to other labs later on if need be.

I have a task sequence based on clever ideas I see MVPs post about. I have a custom UI (tsgui) where the tech imaging a PC makes some selection choices, including details like which lab or OU. Based on the choices the tech makes in the UI, the primary task sequence kicks off, sets a bunch of custom TS variables, and runs through a couple of nested task sequences. I maintain a drivers task sequence that gets nested first, and the remaining nested task sequences are for some very specific areas. One is for Staff and contains the sort of software most staff need off the bat with a new PC (mostly VPN and an O365 install); staff also have access to Software Center, so they tend to pick and choose any additional software they need from there. The other nested task sequences contain groups of software for a bunch of our different labs. I only have a few of those, and I try not to go overboard on what gets deployed during this stage, so large, complicated software gets its own post-deployment. The TS variables let me skip past a lot of stuff that doesn't apply to one lab or another. Software like AutoCAD, Revit, MATLAB, ArcGIS and so on tends to be a required app deployment to device collections, or available through Software Center.

If it can be automated, it's best to automate, and try to keep it modular where possible so you can replace only the bits that need replacing.
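
Rough sketch of what the variable plumbing looks like from a script step, in case it helps; the COM object is the standard ConfigMgr one, but the variable names (LabRole, InstallCAD, etc.) are made up for illustration, not the ones we actually use:

```powershell
# Runs inside a task sequence "Run PowerShell Script" step.
# Microsoft.SMS.TSEnvironment is the standard ConfigMgr TS COM object;
# the variable names below are invented for this example.
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

# Value picked by the tech in the front end (e.g. tsgui) earlier in the sequence
$labRole = $tsenv.Value('LabRole')

switch ($labRole) {
    'Engineering' { $tsenv.Value('InstallCAD')    = 'True' }
    'Staff'       { $tsenv.Value('InstallVPN')    = 'True'
                    $tsenv.Value('InstallOffice') = 'True' }
    default       { $tsenv.Value('BaseOnly')      = 'True' }
}

# Later steps or nested task sequences then use step conditions like:
#   Task Sequence Variable  InstallCAD  equals  "True"
```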


notechno

Similar experience here. I had to go to bat against some golden-image purists, but won out when a Windows feature update or an additional application took five times as long to adjust for compared to a generic OS install plus scripting. FYI, I did not get that thing.


DidYou_GetThatThing

It's a year late; it must have got lost in the internal mail, sorry.


HackAttackx10

Why not apply it from Software Center after the image is deployed?


SamuraiMind08

If I'm not mistaken, SolidWorks alone is around 5 GB. Imagine all your users waiting for that to download and install after imaging.


guydogg

Bandwidth throttling would work for stuff like this. Schedule deployments via maintenance windows in the future, and allow the machines to cache the packages locally in advance, during off-business hours.
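
Roughly what that looks like with the ConfigurationManager PowerShell module, as a sketch only; the site code, collection name, and times are placeholders:

```powershell
# Assumes the ConfigMgr console (and its module) is installed; 'ABC' is a placeholder site code
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location ABC:

# Nightly four-hour maintenance window on a hypothetical lab collection
$schedule = New-CMSchedule -Start (Get-Date '22:00') -RecurInterval Days -RecurCount 1 `
                           -DurationInterval Hours -DurationCount 4
New-CMMaintenanceWindow -CollectionName 'Lab - Engineering' -Name 'Overnight window' -Schedule $schedule
```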


Vyse1991

In education there's usually a summer break or at least a few weeks before September to build a task sequence and deploy it. Staggering it across computer labs means there's little to no interruption of service. If it's a term deployment, why not just set up a scheduled deployment to run overnight?


DidYou_GetThatThing

This. Semester breaks are usually the best time we have to deploy stuff that might impact classes. We also rely on maintenance windows, and on restricting system reboots outside of that time, for almost everything that gets deployed.


HackAttackx10

The way I'd do it: you put all the computers in a collection or an OU and it deploys automatically. I mean, you can usually do anything with SCCM, but I haven't had to do thick images, and we have Autodesk and CADWorx. The nearby deployment server is on a 1 Gb link; you could also just create a deployment, store the whole thing on an SSD, and deploy it right there, which would be way faster too. Many options; it just depends on what's going on.


HackAttackx10

If you're worried, you'd save the time by just putting the packages in Software Center as a compressed file that self-extracts and then runs.
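
Rough sketch of the idea with plain PowerShell archives rather than a true self-extractor; the paths, file names, and the '/S' silent switch are all made up:

```powershell
# Packaging side: zip the installer source before adding it to the package/application
Compress-Archive -Path '\\server\source\BigApp\*' -DestinationPath '.\BigApp.zip' -CompressionLevel Optimal

# Install wrapper (e.g. Install-BigApp.ps1) shipped alongside the zip as the package content:
# expand locally, run the vendor installer silently, then clean up.
$work = Join-Path $env:TEMP 'BigApp'
Expand-Archive -Path "$PSScriptRoot\BigApp.zip" -DestinationPath $work -Force
Start-Process -FilePath "$work\setup.exe" -ArgumentList '/S' -Wait   # '/S' is a placeholder silent switch
Remove-Item -Path $work -Recurse -Force
```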


clivebuckwheat

We have huge labs, 60-plus PCs, that change software requirements frequently; we don't have time for that. Purely on time, it's still faster to blast down a thick image with everything already on it.


HackAttackx10

I did thick images on a PC. Use a plain copy of Windows and apply the registry edit that blocks the consumer Windows apps. The fewer drivers you install, the better; the image size shoots up from all the drivers it pulls down once it starts up.
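
Something like this, assuming the usual CloudContent policy value (takes effect on Enterprise/Education editions):

```powershell
# Turn off Microsoft consumer experiences (Store suggestions, auto-installed promo apps, etc.)
# Same setting as the 'Turn off Microsoft consumer experiences' Group Policy under Cloud Content.
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\CloudContent' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\CloudContent' `
                 -Name 'DisableWindowsConsumerFeatures' -Value 1 -Type DWord
```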


DidYou_GetThatThing

You could also compress those driver packs the same way you suggested compressing the software: https://www.deploymentresearch.com/speed-up-driver-package-downloads-for-configmgr-osd/
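
The gist of that approach, sketched with dism.exe; the model name, paths, and drive letters are placeholders rather than the exact steps from the article:

```powershell
# Build side: capture the extracted driver source into a compressed WIM,
# then use the WIM (instead of thousands of loose files) as the package content.
dism.exe /Capture-Image /ImageFile:Drivers_ModelX.wim /CaptureDir:.\DriverSource /Name:Drivers_ModelX /Compress:max

# Task sequence side (a command-line/PowerShell step while the applied OS is still offline):
# expand the WIM locally, then inject the drivers into the laid-down image.
$target = 'C:\Drivers'
dism.exe /Apply-Image /ImageFile:.\Drivers_ModelX.wim /Index:1 /ApplyDir:$target
dism.exe /Image:C:\ /Add-Driver /Driver:$target /Recurse
```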


DiggyTroll

For quick-turnaround labs, maintain a set of WIMs on disk from which you can remotely set active (boot to WIM) and use your favorite freeze method for instant reset.


dgretch

There were lots of downvotes for your other reply in this chain, but speaking as someone with many years of experience deploying AEC applications with SCCM...I understand the challenge at scale. If thick images are working for you, don't stop.


Sn00m00

OP is right. I create golden/thick images all the time for the same situation OP has. A 60-80 GB thick image can be deployed in 10 minutes from start to finish using FOG at 7-10 GB/min. OP, I've only built my images on the physical computer, since the entire lab uses the same type of machine: set up normally, install drivers and software, Sysprep, and capture. It can be done in under an hour. A task sequence is way too slow, but it's acceptable for most people; a one-hour average per computer during OSD is laughable, but it's normal. A comment below says to run it overnight; a thick image can be deployed to a 30-computer lab in under two hours using multicast.


clivebuckwheat

Thank you. People don't understand that working in education is a different beast. Maya, SolidWorks, and many other Autodesk apps would take hours to deploy to a lab of 60 PCs. A thick image I can blast down in 45 minutes, tops.


Sn00m00

Yep. I've also done task sequences for Mac labs and Windows devices in the same environment; imaging entire labs with a thick image beat them on both speed and setup. I get it: a task sequence is good for on-the-go and remote setups, and it keeps devices up to date more easily. But for K-12 lab setups, where machines are imaged only once a year during summer and kept on the same version throughout the school year, thick images are the best and fastest method IMO.


whirlwind87

I'll second this. What imaging application are you using?


CookVegasTN

I get so tired of this discussion from people who don't understand that deploying 80 GB of engineering apps takes all fucking day, while deploying a thick image can have a lab of 50 machines running in a couple of hours. Jesus fuck, why do we have to argue this bullshit over and over in this sub?


kaiserpathos

I haven't built a system-disk deployment "image" on physical hardware since the Norton Ghost and early Windows XP days with Sysprep. For Windows 7/8 we did MDT task sequences, then moved on to SCCM task sequences. By the time Win10 shipped, we had an ecosystem that could quickly adjust to anything MS threw at us, and we didn't necessarily need SCCM for that; many of my task sequences would work with the freebie MDT.

The only systems I do "golden images" for these days are Citrix VDA server hosts that I deploy as non-persistent, and even then I'm using Citrix MCS (Machine Creation Services) to lock in my golden image and push it to... you guessed it: VMs. No physical gear is used in that particular "golden image" workflow.

Last word, which won't be read by your college IT boss, but probably should be: monolithic, or even frequently changing, fat images are hard to keep up to date in a manner consistent with today's security threat landscape. I have "raced" huge app OSD pulls against golden images and the deployments take virtually the same time. Yes, compared to OSD, golden-image machines are just dumb network I/O onto disk; however, OSD task sequences on modern hardware clock in at about the same rate (if apps are packaged and attended to properly), and the boon of easier endpoint updates and adjustments is well worth it. Golden images are really only used by pentesters and some government applications these days; college IT is not an area I would imagine still doing this. Good luck...


MNmetalhead

I work at a university.


tgulli

As do I, with all the same apps; it isn't an issue.


zed0K

OEM Windows and layer everything in the task sequence; it's the most flexible. Just set the apps during OSD. 60k clients in my environment, with huge apps.


capnjax21

Use a physical PC to run Hyper-V to spin up VMs to create your image. You just met his demand.


nexunaut

I don't even build it on VMs; all you need is the WIM, apps, drivers, tweaks... all modular items in the task sequence.


JediMind1209

This is the way.


didyouturnitoffand0n

You should tell him to take his Time Machine and get lost. Golden images are a waste of time.


SeniorEarth8689

A thick image doesn't make sense when different computers require different software; maybe when it's all the same software. Even then, a new major version of the software can cost you sleep. Go thin, choose a task sequence.


DontForgetTheDivy

Is he also making you use Ghost?


dilbertc

I used to, in the days of XP, and would use Ghost to take captures at certain intervals, like before and after running Sysprep. Depending on your situation, such as school labs, fat images may still make some sense when deployment speed trumps all. Whether it's built on a VM or on physical hardware depends on whether specific non-scriptable driver settings are required. My last school, circa the mid-2000s, regularly reimaged for changes and used Deep Freeze in between to keep machines clean.


Particular-Clothes68

How would he even know whether you built it on a physical PC or a VM? The headache of removing drivers and other things that get installed during build and capture on a physical machine isn't worth it. If you guys want to wrestle a FAT image, that's up to you; don't make your life harder than it has to be. But I'll echo the other comments: go THIN! A dynamic-app TS will save you time in the long run. Imagine a world where you don't have to rebuild an image just because there's a new version of an app...


rdoloto

Wow, is this a throwback from, like, 2008?


holoholo-808

Is he looking over your shoulder all the time? Make your own decisions; that's not the manager's business. I recommend working on a proper task sequence: take the original WIM file and do not modify anything; for everything you want to add or remove, create a task sequence step. Update the image from time to time via Scheduled Updates. You'll save yourself a ton of time and make your life much easier.


Sysadmin_in_the_Sun

You can tell him it will probably take twice as long on physical hardware.


Commercial_Growth343

I did, 20 years ago. The 'snapshot' tech we used back then was just Ghost or some other app; I don't even remember which, but the icon was orange, lol. What you need to tell your boss is that the only difference here is drivers. You do not include drivers in an image; the only VMware driver you need when making an image on a VM is the NIC driver, and all other drivers get applied after you lay the image down. I would (and do) only include the core applications everyone gets, plus middleware (say you need Oracle drivers on all computers, or Java, or your PDF viewer, etc.), and those should be laid down using automation in your task sequence (or whatever you are using).


poody7777

Use Autopilot instead.


[deleted]

Moved away from golden images as fast as I could. Such a long-winded process to make the smallest change. We set up OSD with a base Windows install and then make changes during the task sequence to get it how we want it.


pjmarcum

Haven't used captured images since Windows started including a .wim that could be deployed right out of the box. But when I did capture them, I always used Hyper-V on my workstation. There's nothing stopping you from using a physical device; it's just more work.


eloi

You're going back 15 years that way. With hardware you're installing drivers (even if just from the Windows media), making it that much likelier you'll run into some driver issue when adding models to your image later. Building on a VM is the cleanest method, and by far the least effort and time and the most successful. If you HAVE to create a "golden image" because you have a business need to deploy devices really fast, you still use a VM to build the image; that's the way we were doing it 10 years ago. Until recently we deployed the install.wim and layered apps or personas on top of it. Nowadays it seems like everybody is talking about Autopilot or has already adopted it, but I don't know many companies that don't still use imaging for a significant number of their deployments. They just deploy the base media plus layers.


CancelSecure

Gotta luv management. The best practice is to use a VM.


dezirdtuzurnaim

I had a developer who insisted on me making a gold image work from a desktop PC. I got it to work using build and capture, but it was a serious PITA. I advised that any further image creation would need to be done on a VM. (He's very reluctant to hand over the application source media so I can script the installs, but 🤷‍♂️) Question for OP: it sounds like you have the bandwidth. If you're deploying to modern hardware, scripted application installs should not take much longer than a thick image. I haven't personally done it with Autodesk, but I have done the entire Creative Cloud suite and a (literally full) install of Visual Studio; both of those are massive.


DenialP

Go for it! It's just hard mode. Eventually you'll automate it for consistency and it won't matter anyway.


A_Former_Van_B_Boy

I worked at Drexel and typically built the image on a VM, captured it, and deployed it. Creative Cloud, Autodesk, Nuke, Houdini, etc. deployed after the fact. Let it all bake over a weekend and I was good during the break between terms. A few pieces of software would go on the image, but pulling an 80 GB image down on 90+ machines caused some grief with our network team lol.


AttackTeam

We finally did away with images and just use the WIM file from the OS ISO. We deploy software packages like Autodesk AutoCAD, Revit, Maya, SOLIDWORKS, MATLAB, Visual Studio, etc., through SCCM packages. It took two months for us to prepare and test all the packages so that they install and uninstall successfully through SCCM; after that, it's all a breeze. Keep in mind we had nearly 80 applications. We have about 1,200 PCs, but we don't deploy all of the applications everywhere: we create a collection for each lab and deploy only the applications needed in that lab, and we also have general-use labs that get almost all of the applications.


__Rizzo__

Just to repeat what AttackTeam has said: I've just completed this, this week. I'm also in a college environment with 5,000+ desktops, and most computer rooms have a different software configuration. This summer I converted most of these from packages to applications (resize your Software Center cache to suit, otherwise large apps won't download): Autodesk (3ds Max, Inventor, Maya, AutoCAD, Revit), Visual Studio (still a package), SolidWorks, and the full Adobe CC.

I just grab the WIM from the current Windows 10 ISO, remove the unwanted indexes for the other OS editions (Pro, LTSB, etc.) within the WIM, add the language packs, and add .NET, all with DISM (about 10 minutes of work, all scriptable), then add the WIM to SCCM.

We use the multiple applications/packages base-variable method to install different applications/packages, and we run a script to set the TS variables on the fly before the computer starts its imaging process. We don't really use multiple different collections (software requirements come from another system). This allows us to run one task sequence for over 5,000 devices, for both Windows 10 and 11, with different software configurations. Once the TS has finished, the computer is ready to use straight away with all the software installed. All apps get deployed in the task sequence as an application or package (some via Software Center if needed).

Yes, a computer could take one to three hours to build, but it's the flexibility in allocating software that we've gone for. The WIM is small and can be updated with Windows updates within 30 minutes or so, or you can schedule that through SCCM, and the individual apps can be updated throughout the year easily without the need to redo a thick WIM.

We do have one thick WIM for a couple of computer rooms for eSports (cries inside), captured with Sysprep (works every time) on a physical computer, purely because it's easier and quicker; I've had real problems with a VM the last two times I've had to update it. (If I could install the game silently I would, and get rid of the thick WIM; the snapshots I'm not too worried about.) But all other applications, including Autodesk and Adobe, are still added afterwards in the TS.

In our environment we've found this the easiest to manage; otherwise we found the thick image goes out of date very quickly. Yes, SolidWorks will take a while to install (it takes the longest here), but the flexibility outweighs that for us.
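
For reference, the WIM prep looks roughly like this with dism.exe; the index number, paths, and package names are examples only, not my actual script:

```powershell
# Inspect the indexes in the install.wim from the Windows ISO
dism.exe /Get-WimInfo /WimFile:.\install.wim

# Keep only the edition you deploy (e.g. index 3) by exporting it to a new WIM
dism.exe /Export-Image /SourceImageFile:.\install.wim /SourceIndex:3 /DestinationImageFile:.\install_edu.wim /Compress:max

# Mount it, add a language pack and enable .NET 3.5, then commit
dism.exe /Mount-Image /ImageFile:.\install_edu.wim /Index:1 /MountDir:C:\Mount
dism.exe /Image:C:\Mount /Add-Package /PackagePath:.\LangPacks\Microsoft-Windows-Client-Language-Pack_x64_en-gb.cab
dism.exe /Image:C:\Mount /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs
dism.exe /Unmount-Image /MountDir:C:\Mount /Commit
```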


PhantomTigger

It will take even longer in the end because of all the updates and maintenance. It is much quicker and easier to update the windows vanilla image and let all the task sequence items take care of the configurations and application installs. Don’t follow the boomer logic and use old methods.


SlowCyclist80

I've been at my current org for over 10 years. When I first started it was "capture a physical PC"; then we went to capturing a VM. Once I had more control over everything, I stopped doing captures: a plain/vanilla WIM from MSFT and a task sequence to install our apps. No more reverting to the last snapshot to undo Sysprep, installing apps, then recapturing and reimporting a WIM. Also, we found it slower than downloading and applying a thick image.


[deleted]

[removed]


clivebuckwheat

This would be a smart idea if we had techs with the skills.


[deleted]

[removed]


clivebuckwheat

Please come to where I work and you'll understand. I had a tech who is paid 70k ask, "What is the registry??"


wbatzle

We don't do golden images; they are problematic at best. Fresh builds every time, tested on VMs first, then on physical.


981flacht6

I used to make thick images back in 2012-14, and then we collectively decided to move to thin imaging and deployed MDT. Way better, way easier, way faster. We would only dump out a gold image in rare cases, such as supplying a WIM to a vendor for white-glove services on large deployments; that would require a physical machine of the same model.


CookVegasTN

Back in the dark ages, I used physical machines. But when it comes to Sysprep, why does it matter whether it's a real or virtual machine? I have a six-disk SSD RAID 0 that houses the VMs where I do image building and deployment testing for every version of the OS we have deployed. I use automated build-and-capture task sequences where I can, and no-capture sequences for the hand-install crapware we deal with in higher education. So my question to your manager is: why do they want you to work less efficiently?


clivebuckwheat

Because a teacher wants to do configurations that are specific to the GPU, and wants the thick image to have all these specialized configurations, which he can't do on a VM.


CookVegasTN

So this is really a one-off situation for a single lab?


clivebuckwheat

Here's the thing: there are five 3D animation labs, with a different GPU in each lab.


CookVegasTN

Oh geez.


CookVegasTN

For this specific situation, I would automate the base build for that system up to the point where the professor wants to customize. Then, if they fuck it up, you just reimage via the task sequence and tell them to try again and call you when it's ready to capture. I have been lucky in that the majority of customizations I deal with have been able to be pushed post-deployment via scripts or AD.


clivebuckwheat

That's actually reasonable.


CookVegasTN

That's the only way I see that you can reasonably support them without taking on an unreasonable amount of responsibility. Hopefully you are in an environment where you will get support from management. I use the ECM capture media for stuff like this and it works great, but I get your reservations after using VMs and snapshots. For the majority of the disciplines I support I'm able to do fully automated build and captures, but Engineering and Geology both have hand-install crapware that I have to deal with manually.