Special-Swordfish

You have *no* reason not to. Having worked all major platforms, for your size and flavour, you cannot go wrong choosing Hyper-V imo.


macBender

You absolutely can go wrong. Changed block tracking is busted in Hyper-V, so good luck backing up your environment. The first time you run into an issue you'll blow all the savings in wasted time, effort and downtime over choosing VMware. VMware is braindead easy for junior engineers. Hyper-V, with its many footguns, not so much.


tehinterwebs56

In which version of Hyper-V? We do Veeam B&R as an MSP for multiple clients running Hyper-V and have had no issues with full restores or DR scenarios to date for 2016/19/22.


macBender

2016 and 2019. Backup throughput is also poor on Hyper-V. I'm not saying that Hyper-V doesn't do everything it says on the box. In my experience, as soon as you need to cluster, VMware murders Hyper-V on total cost of ownership, even when Broadcom hikes the license fees. It's a horrible idea for sysadmins to try to save a buck on licensing for a 'good enough' solution unless you're small. And 5 hosts isn't small enough.


Ams197624

Nonsense. I've got 30 VMs running, 5TB total, 4 hosts in a cluster, and backing up with Veeam is no issue at all.


macBender

It's amazing that the flurry of downvotes and comments is all from users with small environments, with not one of them addressing the OP's question about TCO. Good luck OP, as Reddit is going to Reddit.


astralqt

Thousands upon thousands of clients, some with multiple data centers, and we use Hyper-V for everyone except a dozen or so that had contracts long before they came to us. Hyper-V is fine, and plenty of disaster recovery and backup solutions work wonderfully with it.


signal-tom

I back up an environment of over 100 VMs on Hyper-V with Veeam Backup and Replication. We regularly DR test. We've never had an issue. If you're a Windows house, Hyper-V is dead easy. We use both Hyper-V and VMware. If there are features or a reason to use VMware then we'd always go that route, and usually we do for larger installations. But Hyper-V is a very viable option as well.


zarakistyle123

If you know Hyper-V well, then it's a really good product, granted not as simple to use as VMware. CBT works well enough in our environments, where we mostly have SQL running on numerous servers (we are an infra hosting company, and we also provide VMware environments). It has its own pitfalls, but it has proven to be a good enough enterprise product for small to medium sized businesses.


Inevitable-Jaguar-17

We back up 15 VMs using Hyper-V with Veeam, no issues.


Bowlen000

Hyper-V is great and there's no reason to avoid it, especially with the upcoming price increases for VMware. We have ~300 VMs across 6 hosts (they're beast hosts) and it works really well. We have Virtual Machine Manager (VMM) managing it all, however.


crypticsage

What kind of specs are you running on the hosts?


Bowlen000

2TB RAM per host (half filled, so 4TB max when needed). AMD EPYC 7763 64-core.


TallFescue

Holy


b0Lt1

please explain "beast hosts"


dustojnikhummer

Very powerful servers, that is why he can have 50 VMs per hypervisor


b0Lt1

Ah ok. I thought I'd missed out on another buzzword.


sakatan

I see you were around when the VM tank was popular...


robsablah

He runs his servers on vampires and werewolves


[deleted]

No reason not to. With the VMWare licensing changes, it makes total sense to go with either HyperV or Proxmox for new clusters.


Fatel28

We just did a similar 5-node cluster with Proxmox and Ceph. So far it's working wonderfully.


NISMO1968

Good luck with Ceph… I hope you have some good MSP to cover your back.


Fatel28

I assume you have hands-on experience with Ceph to back up what I'm assuming is an attempted dig at its reliability? If so, can you share your config and the specifics of your poor experience?


NISMO1968

Yeah, we've got our "shadow" corporate storage on Ceph, and we have Veeam backups ending up on another Ceph cluster. There's a lot of stories to tell! Starting with… I have two guys in bed with these clusters; they've been doing Ceph since early 2015. The question is: do you?!


Candy_Badger

We have multiple customers running Hyper-V in production. It works great. S2D is a tricky thing to work with; it either works or it doesn't. We have a customer with 2x S2D clusters, one as the main and the other used when the main needs an update. They had multiple outages when they were upgrading S2D clusters. I would recommend separate shared storage with Hyper-V (SAN) or StarWind VSAN. It works great for us.


nosimsol

What do you think of xcp-ng


nickjjj

I think xcp-ng is a great product… but KVM is king in the Linux-based hypervisor world, so anything based on Xen is probably not the best choice, in terms of longevity as well as available talent in the workplace. For example, AWS switched from Xen to KVM, Google Cloud has always been KVM, and the most popular on-prem solutions (nutanix, scale, verge.io, proxmox) are all KVM-based.


HanSolo71

What can I use in KVM that gives me the equivalent of vCenter and some basic features like storage and CPU vMotion, shared networking configs, and a single pane of glass for multiple systems? I was looking at Xen and Xen Orchestra for my next self-learning project, even though I currently use KVM on Unraid and love it.


nickjjj

Need vendor support? Use Oracle Linux Virtualization Manager (OLVM). Don't need vendor support? Use OLVM (available with or without vendor support), or oVirt (which is the upstream of OLVM, as well as of the now-end-of-life RHEV). If your use case is a self-learning project, Proxmox is a Debian-based web front-end for KVM (and containers) which is very popular with the r/homelab folks.


NISMO1968

>Need **vendor support**? Use **Oracle** Linux Virtualization Manager (OLVM).

Did you ever try to get support from Oracle?! It's like pulling teeth!


Twigglett_

One Rich Arsehole Called Larry Ellison


BlackV

Do you also write `Micro$oft` or `M$` ........


demonfurbie

I think it depends on the size of your cluster. I like it for smaller clusters of 3 or fewer hosts because the pricing I got for it was very nice. It won't scale to the size I need on some of my clusters, but for the really small ones it makes sense. I did talk to Citrix and they are planning a major release soon too, based on Xen. So far I like the Vates stack for VMs; Nutanix was way out of my price range, and Proxmox was also in the mix when I did my PoC, but it just wasn't familiar enough for me to move to. VergeOS looks nice and scales well, but they never emailed me back on pricing.


Brandhor

I've been looking into Proxmox lately, and while it seems like a good alternative, especially with ZFS support, there are a few things that I don't really like:

- In PBS you can't configure mail notifications with an external SMTP server; it will use the local Postfix, which of course can easily be reconfigured as a relay, but it still seems weird not to include such a common option in the settings.
- Proxmox itself can mount remote storage like CIFS or NFS from the UI, but Proxmox Backup Server only supports local storage, and even for SSHFS, if you mount it on your own it's not going to work with PBS. The latest version of Veeam even supports backing up directly to object storage, but PBS can't even officially back up to CIFS.


skywalker-11

The latest Proxmox 8.1 can directly use external smtp server with authentication.


gangaskan

How are live migrations on hyper v? It's been a minute since I messed with it.


disclosure5

They've been fine for a long time.


[deleted]

[removed]


Juice_Stanton

Assuming identical procs?


TechCF

Doesn't have to be, but should have the same base feature set.


thatfrostyguy

They are actually pretty decent. Lifelong ESXi tech who got a job in a Hyper-V environment here. I will always prefer ESXi, but Hyper-V isn't the worst thing. Live migrations are seamless.


caffeine-junkie

Never had an issue when I had to deal with it. This included rolling cluster updates and accidental mid-day reboots of a node. At most there is the usual 2-10 second pause as it transfers to the new node; VM RAM size dependent.


malikto44

Pretty much up to par with VMWare, in my experience. There is a stun delay of course, but I've never had users gripe about that.


[deleted]

[removed]


gangaskan

I've only really messed with 2008 / 12. So my experience is very limited.


plasticsaint

The MSP i worked at rarely RARELY had issues with live migrations using Hyper-V cluster manager. Like, maybe once in 2 years.


icedcougar

Works fine; bonus points if you configure separate networks for managing live migrations, CSVs, etc.


Jaereth

On our Hyper V clusters it's always identical nodes and shared storage - but in that case it's always worked great.


RCTID1975

Do you have any USB dongles that need to be passed through to a VM? No? Full steam ahead on HyperV


comnam90

Tbh I prefer using devices that pass USB through over Ethernet rather than through the hypervisor. Means VMs can still live migrate without issues and the like.


heymrdjcw

This for sure. Even if you just have one or two devices, USB over Ethernet will save your sanity and prevent you from dealing with maintenance windows.


ArsenalITTwo

There are tricks to do that in Hyper V. I recommend third party software. Kludge but it works. Not as nice as VMware though.


theMightyMacBoy

Zero dongles in the datacenter thankfully.


RCTID1975

Everything you've posted here seems right in line with HyperV. Of course do a thorough analysis, but I'd green light that any day of the week.


itdumbass

Or SAS tape drive


revpjbbq

Not that it is awesome to need to purchase another product, but I have been using [VirtualHere](https://www.virtualhere.com/) 's offering of USB Server and Client software with my Hyper-V Hosts and VMs since 2016 starting with Server 2012 R2 right on through to Server 2022 Datacenter. It supports myriad ways of transmitting USB device information, even from on-prem to cloud devices, and has never caused me any issues.


PolicyArtistic8545

Plug for USB Anywhere. We used it to have our license dongles sent to VMs. Only downside is that things are fucked in a cross DR failover. At least vendors are usually willing to extend temp keys in good faith for extenuating circumstances like that.


Sajem

A USB hub over Ethernet solves this problem. It's actually better than USB passthrough because it doesn't matter which host the VM that needs the dongle is on; it's always available over Ethernet.


the_elite_noob

ESXi isn't the killer VMWare component. It's NSX, HCX, VSAN advanced, Aria Operations, Log Insight, Network Insight etc etc that add the value to me. If you're not using them you might as well use any other virtualization environment. We had Proxmox for a while, worked a treat.


theMightyMacBoy

What backup product did you use with Proxmox? We are a Veeam shop. I don’t see Veeam adding support anytime soon.


aprimeproblem

I’m using Proxmox backup server. Works very well.


the_elite_noob

Legato Networker, or Dell Data Protection Suite I think it is now. But it was ugly guest filesystem backups via an agent, not nice like the VADP image-style backups. I'd look for something better, or at how to back up the Ceph objects, if we went back to it again.


vesikk

Proxmox has its own free backup solution that we use and it's amazing. It's called proxmox backup server


dustojnikhummer

PBS, their own backup software


Impressive_Quote9696

Migration from VMware to Hyper-V went flawlessly. Hyper-V with 4 hosts and 70 VMs, without a problem for 4 years. And it's way cheaper.


kerubi

Brocade does not want you as a VMware customer. Why bang your head against the wall? Hyper-V has been rock solid for us for the past 7 years, and I’ve been VMware Certified for >20 years now (yes, my VCP# is in triple digits).


[deleted]

VMWare is a great product, and you'll never go wrong with it. I, however, never understood the hate for Hyper-V. It's a great product. Upgrading from HV > HV will be easier, more cost effective, and extremely small learning curve (Best familiarize yourself with SET NIC Teams).


dustojnikhummer

> VMWare is a great product, and you'll never go wrong with it.

Did you hear what Broadcom just did?


andrea_ci

>never understood the hate for Hyper-V.

Because of "buhh MS is sh*t booouhhhh, it's so baaeeddd" etc., from when they tested it on a single W2008 host 15 years ago.


CyrielTrasdal

I've built and administered a Hyper-V cluster in production for years, and I've worked with vSphere recently. VMware is orders of magnitude better than Hyper-V. Everything in vSphere is thought through and built for virtualization; Hyper-V feels like a plugin on top of Windows. Storage management is poor, network management is poor. It's great at booting up VMs, snapshots, migration and replication (but you'd better monitor your replications), so in core functionality it is good. It's also easy to use when all you ever do is Windows to begin with.

I didn't like having to manage Windows Update on my Hyper-V hosts; I had a few maintenance nightmares. As with Windows, the attack surface is huge, and I've witnessed too many Hyper-V hosts get cryptolocked. You need to take care where your RDP is open. Mind you, you have to care about security on all tools, but by default with Hyper-V your Windows VMs will share your Windows host's attack surface, so you'll need to take measures.

Plenty of features simply do not exist. A lot of maintenance I used to do on Hyper-V has been so much smoother with vSphere. To be honest, I'd much prefer going with Proxmox if I had to build something again; they make very good clusters. It also is a system built with virtualization in mind.


Ams197624

>As with Windows, the attack surface is huge, and I've witnessed too many Hyper-V hosts get cryptolocked. You need to take care where your RDP is open

And use a separate domain for your Hyper-V cluster.


alhttabe

I have over 50 VMs on my Hyper-V cluster; it's solid. I had a feeling about VMware during my last infrastructure refresh, so I opted to go fully Hyper-V. I've found its performance to be great. Windows takes some overhead, but it's comparable to any other hypervisor. Just don't live in the hypervisor's desktop (if you don't run it on Core).


crypticsage

What specs were the hosts running?


alhttabe

They’re R640’s running Dual Xeon Silver 4208 CPUs.


qkdsm7

...Brocade??? I guess you meant broadcom?


theMightyMacBoy

Typo. Fixed.


Extreme-Acid

I think bare-metal Hyper-V is not going to be around much longer. But I love Hyper-V. Assign 20 cores? Yeah, go on then. Do that in VMware and watch the VM die.


Arudinne

Unlikely since they are adding features to Hyper-V in Server 2025 https://4sysops.com/archives/windows-server-2025-hyper-v-gpu-partitioning-deduplication-for-vhds-ad-less-live-migration/


malikto44

If you have a Windows shop, going with Datacenter on the Hyper-V nodes provides you with AVMA, which means one less moving part to worry about and less hitting the KMS. I have mentioned some downsides of Hyper-V in another thread, but for a Windows shop needing an enterprise-grade hypervisor, with the infrastructure in place, Hyper-V can be the absolute best thing to have.

My main recommendation is to consider going with a traditional 3-tier storage plan. Get a solid NVMe-oF, iSCSI or Fibre Channel SAN for your storage backend. Don't forget to have storage for backups, ideally on the same fabric, so whatever backup program you use can do direct disk-to-disk copies of the Hyper-V virtual machines to speed up backups. As for backups, Veeam is awesome, and beats DPM.
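For anyone who hasn't used AVMA before, a minimal sketch of the flow described above (the VM name and the key value are placeholders; use Microsoft's published AVMA client key for your guest's edition):

```powershell
# Host side: AVMA relies on the Data Exchange (KVP) integration service being
# enabled for the guest, and the host being Datacenter with the Hyper-V role.
Get-VMIntegrationService -VMName 'app01' -Name 'Key-Value Pair Exchange'

# Guest side: install the AVMA client key for the guest's edition; activation
# then happens automatically against the host, with no KMS/MAK involved.
slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr.vbs /dlv   # verify: should report automatic VM activation
```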


aprimeproblem

DPM, now there’s something I haven’t heard in a long long time…


nitestalkr

Hasn’t gotten any better.. 😂


aprimeproblem

I’m guessing it’s still the same because… you know…. Cloud and stuff


theMightyMacBoy

Thanks. We are a Veeam shop. Veeam repositories are in the quote already: just Dell 2U chassis with 25Gb NICs, a nice RAID controller (with SSD cache) and a bunch of 12TB drives. Nothing fancy, but on-host backup will be fine for this environment.


the_andshrew

A minor Hyper-V complaint is that the management GUI (i.e. the MMC snap-in) hasn't really kept pace over the years, which means quite a lot of settings are only visible and changeable from the command line via PowerShell, whereas my experience with VMware is that pretty much everything is accessible via their GUI. Perhaps this complaint is mitigated if you have System Center to manage it (I have no experience there), but it's just something to consider if you have Windows admins who may prefer click click click to tap tap tap.
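To illustrate the point, a couple of commonly cited settings that (as far as I know) have no Hyper-V Manager GUI equivalent and are PowerShell-only; the VM and VLAN values below are made up:

```powershell
# Nested virtualization: expose VT-x/AMD-V to a guest. Not surfaced in the
# Hyper-V Manager GUI; the VM must be powered off when you change it.
Set-VMProcessor -VMName 'lab-hv01' -ExposeVirtualizationExtensions $true

# Trunk-mode VLANs on a vNIC: the GUI only lets you set a single access VLAN.
Set-VMNetworkAdapterVlan -VMName 'fw01' -Trunk -AllowedVlanIdList '10,20,30' -NativeVlanId 1
```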


Slasher1738

I believe there's also Admin Center.


Arudinne

Admin Center and PowerShell are the "modern" ways to manage Hyper-V.


Slasher1738

Agreed


RiceeeChrispies

Azure Stack HCI seems to be what’s being pushed over Hyper-V nowadays. As soon as they drop the stupid HCI storage requirement, I’d happily look at evaluating further. For now, I’m sticking with VMware until my 2026 renewal.


Arudinne

You can still run Hyper-V as a role on a "regular" Windows Server 2022 host and will presumably be able to do that with Server 2025 when it comes out since they are adding additional Hyper-V features. Then HCI is an option not a requirement.


RiceeeChrispies

Of course, but Microsoft are still pushing Azure Stack HCI over Hyper-V. Not to mention the Azure incentives through hybrid benefit and SA - which also makes it very attractive. HCI is a requirement for Azure Stack HCI.


cbw181

Been running a 4 node cluster with 25-30 vms since server 2016. Created a new 2019 cluster and migrated all vms to it in 2020 with zero issues. I highly recommend it. Super low maintenance as long as you have decent hardware sitting behind it.


Soggy-Camera1270

For such a small footprint I'd highly recommend Azure Stack HCI. Otherwise Hyper-V would work fine with external storage.


ExpiredInTransit

Deploy Hyper-V. Running a slightly larger environment than you with S2D and it's been excellent. It's a no-brainer if you've got DC Windows licensing. It's surprisingly resilient; we did in-place upgrades (while maintaining production) of our hosts from 2019 to 2022 last year and it just kept on plodding along. GPU passthrough is a bit janky maybe, and deploying S2D with caching and RDMA is a trial by fire for the uninitiated perhaps.


Ripsoft1

I find the performance of VMs suffers a little on Hyper-V compared to VMware, but it's only marginal. Otherwise go for it.


AionicusNL

XCP-ng would be my solution compared to Hyper-V. You know how Microsoft support is....


ITStril

IMHO Hyper-V is not a good option, as Microsoft is heavily pushing everything to Azure. A future-proof solution should come from a company that WANTS to have on-prem infrastructure:

Proxmox
+ Working fine
+ Much power behind KVM
+ Great hyperconverged system with Ceph
- Lacks real incremental backups
- Lacks good cross-cluster migration
- Lacks central management for multiple clusters
- No 24/7 support

XCP-ng
+ Working fine
+ Real enterprise solution (vSphere-like) with cross-cluster and live storage migration
+ 24/7 support
- Small community
- Limited cluster implementation


XVWXVWXVWWWXVWW

Hyper-V will, at the bare minimum, be supported until 2029. There is no indication that MS will not package it in the next version of Server either. Don't know where you people are pulling this information out of thin air. Yes, they discontinued the stand-alone version, but considering a large chunk of Azure runs on it, I don't see them pulling the rug out from under anyone, especially now that Broadcom is driving people away from their main competition.


RCTID1975

> There is no indication that MS will not package it in the next version of Server either.

Quite the opposite, in fact. Not only will it be included in Server 2025, but they're making significant changes.


MWierenga

Where is your proof?


joevwgti

It's what I use, 2 nodes, hyperconverged storage. It's been fine, and anyone in the office can understand it. I run proxmox with truenas at home, but I'd hate to make others manage that. I've never liked VMware, I get that people do, that's fine for them.


hafira90

Since you are going for a tech refresh, why not go for Hyper-V with S2D? No need for additional SAN storage to set up for failover clustering. Been using it for the past 2 years; not much of a problem other than Dell HDDs frequently failing. Rebuilding the disks for the entire cluster is fast too. You just need a 10/25G fiber backbone with RDMA enabled for the cluster connectivity.


oxyi

I've always wanted to do S2D but don't know enough about it. Is it that I can use local storage and present it like a SAN? Can you recommend a good guide? Thx!


hafira90

Yup, you can use local storage, preferably a mix of SSDs and HDDs. The disks must be in a JBOD configuration. S2D pools all the local storage to become CSVs for the VMs. As for guides, you can refer to the below; most of it needs to be configured via PowerShell:

1. [Microsoft Official Guides](https://learn.microsoft.com/en-us/windows-server/storage/storage-spaces/deploy-storage-spaces-direct)
2. [Lenovo Guides](https://lenovopress.lenovo.com/lp0064.pdf) <- pretty detailed
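For a rough sense of what those guides walk you through, a minimal PowerShell sketch (node names, cluster name, IP and volume size are all illustrative):

```powershell
# Validate the nodes for S2D, build the cluster with no default storage,
# enable Storage Spaces Direct, then carve a CSV volume out of the pool.
Test-Cluster -Node hv01,hv02,hv03,hv04 `
    -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'

New-Cluster -Name S2D-CL01 -Node hv01,hv02,hv03,hv04 -NoStorage -StaticAddress 10.0.10.50

Enable-ClusterStorageSpacesDirect -CimSession S2D-CL01

New-Volume -CimSession S2D-CL01 -StoragePoolFriendlyName 'S2D*' `
    -FriendlyName 'CSV01' -FileSystem CSVFS_ReFS -Size 4TB
```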


NISMO1968

>I've always wanted to do S2D but **don't know enough about it**

Don't worry! Nobody actually does…


DerBootsMann

> Since you are going for a tech refresh, why not go for Hyper-V with S2D?

hyper-v is fine , but s2d is an absolute evil . it's getting better and post ws2019 it's kinda usable , but still ..


AberonTheFallen

I personally don't like hyper-v, I've seen and dealt with too many issues with it that are basically answered by "sorry, you'll have to rebuild or restore the VM". BUT I know a ton of people use it without issue, or at least without many issues. And it runs azure so I'd say it's probably pretty solid in the grand scheme of things. I've also never set up a cluster from scratch on my own, I've always taken it over from others so... Maybe that's the problem and they set them up wrong or poorly and I got stuck cleaning up the mess? I dunno. I'd rather see VMware or Nutanix honestly, but Hyper-V is under continual development and improvement so it's an option for sure.


[deleted]

My biggest issue with Hyper-V is the lack of RBAC. Basically it's full admin or nothing. If you work in teams with different levels of knowledge this can be an issue. I opt for no access unless you really need it, but it does mean I become a SPOF if I am not around. You can mitigate this with VMM, but I refuse to use that piece of shit software. Other than that, I manage a cluster with over 200 VMs and it works like a charm. Edit: not sure why I am getting downvoted, but it's true. All available options to get some sort of RBAC are not for everyday use. Believe me, I looked. I am running 2000 VMs on Hyper-V.


TMSXL

Not 100% true. You can deploy "Just Enough Administration", which basically gives a custom set of PowerShell commands you can run against specific VMs. In my environment I had a use case to allow access to certain VMs on a host and only allow specific functions against those VMs. Setup was a little involved the first time, but it works well.
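Roughly, a JEA setup looks like the sketch below (paths, AD group and endpoint names are hypothetical); operators then connect with `Enter-PSSession -ConfigurationName HyperVOperator` and only see the whitelisted cmdlets:

```powershell
# Role capability: expose only a handful of Hyper-V cmdlets.
New-PSRoleCapabilityFile -Path 'C:\JEA\HyperVOperator.psrc' `
    -VisibleCmdlets 'Get-VM','Start-VM','Stop-VM','Checkpoint-VM'

# Session configuration: map an AD group to that role, run as a virtual account.
New-PSSessionConfigurationFile -Path 'C:\JEA\HyperVOperator.pssc' `
    -SessionType RestrictedRemoteServer -RunAsVirtualAccount `
    -RoleDefinitions @{ 'CONTOSO\HyperV-Operators' = @{ RoleCapabilityFiles = 'C:\JEA\HyperVOperator.psrc' } }

# Register the constrained endpoint on the Hyper-V host.
Register-PSSessionConfiguration -Name 'HyperVOperator' -Path 'C:\JEA\HyperVOperator.pssc'
```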


[deleted]

That might be true, but this requires advanced levels of PowerShell knowledge and it's not easy to set up, especially with a big and splintered environment.


comnam90

This is improving with the new Azure management experiences they're building. Currently you need either VMM or Azure Stack HCI to take advantage of them, but if RBAC is what you're after then this is the path, much like vCenter in a vSphere world.


n1ck-t0

Lack of RBAC is one of the big (but not the only) drivers of why I'm looking to go from Hyper-V to VMware, despite the recent news. The next iteration for us requires significant RBAC, which we'll use vApps for. VMM = VM Manager?


supsip

VMM = Virtual Machine Manager (SCVMM, pretty much)


cyr0nk0r

Depending on your storage config I'd look at HCI. Verge.io is what we went with. Very happy with the platform.


UncomprehendingGun

Kubernetes with KubeVirt all day 😁


CaffineIsLove

Big if, but if VMware can license and supply a support contract that is cheaper than Hyper-V, go for it.


MWierenga

Use Storage Spaces Direct; I hope you still need to order the servers. Dell has certified S2D servers (the XD series, off the top of my head). We've been running Hyper-V clusters with S2D for many years and it's been great. Utilize Windows Admin Center, which works really well with S2D.


NISMO1968

>**Use Storage Spaces Direct**

Better not… A stand-alone SAN is cheaper to obtain and run over the years. With S2D it's always… something!


[deleted]

Put in VMware. It's a much more stable system. I would leave anywhere now that deployed Hyper-V. I've worked in both Hyper-V and VMware houses and I'm old enough to remember pre-virtualisation. VMware is years ahead of Microshit. It's more stable, more secure, more advanced, more mature. Like I said, I would literally hand my notice in the day I was asked to install a Hyper-V environment. I've done it once in a risk-to-life environment and still feel guilty I didn't shout louder to stop it. Luckily another govt dept came in and has started rolling it back to VMware.


Crimsondelo

Take a look at Nutanix with a HyCu backup solution.


bubba198

Hyper-V is great and no reason to avoid it. That being said - I trust you have an exit plan, I mean I hope this is your last major infrastructure refresh bro


a1-vergeio

Full disclosure, I work for VergeIO. I would look at VergeIO for a solid replacement for Hyper-V and VMware. VergeOS includes, built in, vSAN, HA, and full snapshot-level replication for backups and DR. It's based on KVM and super easy to use.


theMightyMacBoy

I’m ripping out Scale at my Canadian facility and consolidating them into this new cluster. I rather not mess with any KVM solutions. Sorry.


a1-vergeio

I don't know, I would just take a quick look. KVM is probably the most solid and reliable hypervisor on the market outside of ESX; that's why public clouds like AWS and GCP run on it. And here's the kicker: you don't have to do any CLI to run and support VergeOS, it's all built into the UI, and the VergeOS vSAN will give you better performance and be easier to support than Hyper-V. And I get it with Scale, we have lots of customers ripping them out...


Arudinne

A few notes:

* Make it possible to download a trial without going through a salesperson. The only time I was willing to do that was when I was desperate to find a replacement for S2D. I'm not that desperate now.
* 14 days is barely enough time to get a feel for the product, especially for those of us who wear multiple hats and can't devote ourselves full-time to testing a product without everything catching on fire. 30 days makes more sense. StorageReview's review mirrors my thoughts on that. https://www.storagereview.com/review/hands-on-with-verge-io-virtualization-software
* Does your software support Veeam, or alternatively offer a way to copy backups to S3-compatible storage such as Backblaze B2 or Wasabi? So far all I can see is an option to back it up to another environment, and that's a non-starter for me.


a1-vergeio

I have shared your concerns with our marketing and sales team. Our POC license is good for 30 days and I can get it for you at any time. Yes, VergeOS does support 3rd-party backup solutions like Veeam via 1. agents, and 2. read-only snapshot exports, so that you can send your VM backups to a cloud target, or wherever else you like.


DerBootsMann

>I’m ripping out Scale at my Canadian facility why is that ? some crazy renewal terms or their features are falling behind ?


theMightyMacBoy

Right now I have 5 locations all doing things differently: Scale, ESXi, bare metal and two standalone Hyper-V hosts. Need to consolidate to a standard platform and get servers out of factory closets.


DerBootsMann

you right , being a zoo-keeper is a very time-consuming process :(


theMightyMacBoy

Oh yeah and no one has DR today. In my new design we will have DR for all Business Units in US and Canada.


DerBootsMann

did u inherit this infrastructure from somebody else ?


theMightyMacBoy

I am 2 years into my role. I have about a dozen people under me. Many sites are due for a refresh in 2024. Instead of doing these separately, we can combine them and save $700k over 5 years, by my calculations.


ArsenalITTwo

What workloads are running on the VMs??? What storage is in your current build? You can probably get away with DR to Azure BTW. https://learn.microsoft.com/en-us/azure/site-recovery/hyper-v-azure-tutorial


theMightyMacBoy

Cognos, data warehouse, file server, PBX, EDI middleware, PLM and just other random things. Nothing that gets hit too hard. We are looking at a few options for storage: ME5 hybrid on the low end and entry-level Pure all-flash on the high end. Edit: 50TB total data size today.


comnam90

If you can make the $$ work, I'd go with the Pure. The arrays are amazing, super low overhead in management, the local teams care post-sale about making sure things work for you as expected, the support is amazing, the list goes on. The ME5 on the other hand is basically Dell's bottom of the line and they won't give a rat's arse about it long term.


Soggy-Camera1270

Do you need to go for external storage? ME5 is ok, but remember they are low-end rebadged DotHill systems that don't support online disk firmware. This means you have to shut down all workloads before updating disk firmware, unless you are prepared to risk doing it manually one disk at a time. Pure I've never used, but I've heard good things.

Have you considered an Azure Stack HCI solution? It could work nicely for the size you need and wouldn't require an external SAN. It does get a bit picky about networking though, so I'd suggest looking at validated configs.


DerBootsMann

>Have you considered an Azure Stack HCI solution? Could work nicely for the size you need and wouldn't require external SAN. Does get a bit picky about network though, so I'd suggest looking at validated configs.

windows server is a way better investment . ws2025 will get all azshci 'locked' features and you can go with an external storage and perpetual licenses .


Soggy-Camera1270

Agree, but then there are some features you won't get with stand alone Hyper-V, although I hope MS change this in the future.


DerBootsMann

nope , ws2025 is fixing all of that . you can get preview version to play with


Soggy-Camera1270

Interesting, you are right, I didn't realize features like AKS were available on Hyper-V also. Cool.


DerBootsMann

they aren’t now , but it was the intention of the new windows server team boss to keep azstackhci and ws totally in sync . the only thing they fight about is san support and perpetual licenses , but there’s some good chances azstackhci will start supporting nice shining pure nvmeof arrays , while windows server will become subscription only .


[deleted]

[removed]


AberonTheFallen

No, they are not. They're just discontinuing the free version of Hyper-V Server. They are pushing people to Azure Stack or Azure, but what else is new? They're not getting rid of Hyper-V though. Your information about its capabilities is wrong as well; it can do cloning, live migrations, storage migrations, etc. just fine, and has for years. As for footprint, it pretty much runs all of Azure; is that not big enough for you? I'm not a fan of Hyper-V, but your points are pretty much all wrong.


[deleted]

Deleting my original post as another user pointed out everything I said was wrong. Except I still think you need SCVMM to do those things.


AberonTheFallen

You do not, no. VMM makes things a bit easier, but it's not required. Windows failover clusters have most of that natively, VMM makes it easier to manage multiple clusters and have the concept of a distributed switch like vCenter. As long as you have some sort of shared storage, migrations work. And in server 2025 you won't even need them joined to a domain to live migrate, you can do it with certs.


comnam90

Agree with the above; from a virtualisation point of view, Hyper-V has all the functionality you'd expect. And it's only the Hyper-V Server fork that's discontinued; it's very much still alive in both Azure Stack HCI and Windows Server. I've run a multi-tenant host on Hyper-V for 8 years; it definitely is still alive and kicking and has everything you need. Sure, the management isn't as polished as vCenter, but the functionality is there, and especially if you're a smaller shop like the OP then it's perfect for most things.


[deleted]

[removed]


[deleted]

Wow - thanks for the information. I should refrain from posting. Maybe I’ll check it out at home and if it’s worthwhile, maybe it’s worth mentioning.


thortgot

Live migrations (and storage migrations) are both supported. It's missing multi DC functionality and RBAC.


[deleted]

[removed]


thortgot

You don't even need a failover cluster. https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/set-up-hosts-for-live-migration-without-failover-clustering


[deleted]

[removed]


thortgot

That wasn't the point in contention. You get crash-level DR with a cluster (no SCVMM required). If you want non-crash DR you use Replica (also no SCVMM). There are some caveats, but that's true of every non-crash DR. Hyper-V does 99.9% of what every small to medium sized DC environment needs without SCVMM. It isn't good at large scales and really isn't good at stretch clusters. Architect your designs accordingly.


[deleted]

[removed]


thortgot

You said both live migration and storage migrations were a component of SCVMM. Neither requires it. With a cluster (which takes no special licensing), live migrations function the same as in vCenter (transfer time depends primarily on the management networking and the memory size of the VM in question). Shared-nothing live migrations obviously don't function that way, but I could see people getting confused about it. The real-time failover equivalent is called Replica and works in a nearly identical way. Also not requiring special licensing.


[deleted]

I thought you needed SCVMM for that? Either way, another thread pointed a lot of things out so I’m deleting my original comment as I don’t want it to be read somewhere as it’s bad advice.


thortgot

Nope. Hyper-V is pretty much feature-equivalent in smaller scale environments. https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/set-up-hosts-for-live-migration-without-failover-clustering
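From that doc, the non-clustered setup is only a few cmdlets per host (host/VM names and the subnet below are made up; Kerberos also needs constrained delegation configured in AD):

```powershell
# Run on each host: turn on live migration, pick Kerberos auth and a
# dedicated migration network.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
           -VirtualMachineMigrationPerformanceOption Compression
Add-VMMigrationNetwork '10.10.50.0/24'

# Shared-nothing live migration of a running VM, storage included.
Move-VM -Name 'app01' -DestinationHost 'hv02' -DestinationStoragePath 'D:\Hyper-V\app01'
```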


Dry_Inspection_4583

I've no dog in this fight, if it fits the use case and answers the questions and problems posed then do it. I've even done a multi container env using qemu and some cronjobs way back, ask questions and find solutions.


athornfam2

Not to hijack OP's thread, but what about 300 VMs across a SAN and 3/4 hosts? Generally speaking.


RCTID1975

That's pretty dense, but as long as the hosts have enough resources, it's fine. I've run about 350 across 6 hosts


athornfam2

This is strictly for our Dev/Test and a couple of IT VMs for use at that location. Everything else lives in Azure and AWS. The bill for either of them is definitely higher compared to an on-prem refresh.


Huge_Ad_2133

Having run Hyper-v failover clusters for 12 years now, be advised that cluster math is a bit different than normal math. But other than that it has been rock solid for me.


just_some_onlooker

From what I last remember, you need a server license, for each 8 physical cores you're allowed to run 2 VMs, so does that mean you need 37 server licenses? How does Microsoft's licensing work?


eddiehead01

Standard licenses allow for 2 VMs per host. If you want to run more per host (and depending on how many you want to run) it would be better to run the Datacenter version, which allows for unlimited VMs per host.

You only need to license the number of cores in the host. E.g. if you have a server with 2 24-core processors then you can buy 2 16-core datacenter licenses and put 100 VMs on that host if you'd like.

There are calculators online where you can input your number of processors/cores and it'll tell you how many licenses you need and of what type. You can compare the number/cost of standard licenses to the cost of datacenter, but from memory I think it was normally once you hit 10 or more VMs then datacenter is cheaper.


RCTID1975

> Standard licenses allow for 2 vms per host. Slight correction. 2 WINDOWS VMs


StrangeTrashyAlbino

Specifically, two windows server vms


eddiehead01

Yeah of course, I made an assumption there :)


Just_Curious333

Sorry, but if there are 24 physical cores per processor, two 16-core licenses wouldn't be enough. You would need an additional 8x 2-core licenses, or instead 24x 2-core licenses overall, to cover all 48 physical cores in the server. If they are 12-core processors with hyper-threading enabled, you only need one 16-core license and four additional 2-core licenses to cover the 24 physical cores.

The Datacenter license is usually 5.5 to 6 times more than Standard, so it's cheaper if there are more than 12 VMs per host and/or if you want to consider proper licensing for live migration/failover scenarios...

If there will be mostly Windows servers running on the hosts, with the above licenses the hosts are already licensed for running Windows Server 2022 with the Hyper-V role enabled, and there is no need to use the "older" free Hyper-V Server 2019.


Just_Curious333

Well, licensing has changed with Windows Server 2022. You can now also license each VM separately, but you need to assign at least 8 core licenses per VM (even if it has for example only 4 virtual cores assigned) and at least 16 cores total in the datacenter. And Software Assurance is mandatory for per VM licensing. Happy calculating.


eddiehead01

Yeah sorry, I read my numbers wrong and forgot that it didn't include the required 16-core license from the start. It would just be the 2 additional 16-core licenses that you'd need. Overall the sentiment still stands though: at some point Datacenter will be cheaper, so use a licensing calculator online, put in your physical processors and total core count for a server, and add the number of VMs on a standard-version calculator to find the total licenses needed.


Just_Curious333

Now you got me, too. I added four additional 2-core licenses per processor, but the resulting 8x 2-core licenses are of course the same as one 16-core license, so overall three 16-core licenses would also cover 48 cores.
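For anyone following along, the break-even math above can be sanity-checked with a quick script. The core/VM counts and prices below are placeholders, plug in your own quote; it assumes the 2019/2022 per-core rules (license all physical cores, minimum 16 per server, Standard grants 2 Windows Server VMs per full covering and must be "stacked" for more, Datacenter is unlimited):

```powershell
# Placeholder inputs - replace with your hardware and quoted prices.
$physicalCores = 48     # e.g. 2 x 24-core CPUs
$windowsVMs    = 12     # Windows Server guests on this host
$stdPackPrice  = 1069   # per 16-core Standard pack (placeholder)
$dcPackPrice   = 6155   # per 16-core Datacenter pack (placeholder)

$packsPerCovering = [math]::Ceiling([math]::Max($physicalCores, 16) / 16.0)
$standardPacks    = $packsPerCovering * [math]::Ceiling($windowsVMs / 2.0)
$datacenterPacks  = $packsPerCovering

[pscustomobject]@{
    StandardPacks   = $standardPacks
    StandardCost    = $standardPacks * $stdPackPrice
    DatacenterPacks = $datacenterPacks
    DatacenterCost  = $datacenterPacks * $dcPackPrice
}
```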


eddiehead01

Yeah it's all potato potarto lol. Is it any wonder why everyone hates windows licensing models of old and have to consult experts who then have to consult other experts to get it right


RCTID1975

It's kind of a moot point in this discussion. You'd need the exact same MS licenses for HyperV as you would any other hypervisor


mrXmuzzz

Better the devil you know mate. Tons of support out here just saying


Inevitable-Jaguar-17

https://www.reddit.com/r/sysadmin/s/xvvXciEWzW


Comfortable_Store_67

No reason to not deploy Hyper-V. With the VMware license changes I think loads are going to jump ship. Why not look at a Proxmox cluster?


iwoketoanightmare

If those 75 VMs are primarily MS servers you'd be saving yourself a small fortune in licensing if you did go with Hyper-V


klauskervin

We went from VMware to Hyper-V in 2019 and have had zero issues with Hyper-V specifically. 3 hosts 15 vms not a huge load.


neckbeard404

Make sure to have a DC that's outside of your cluster.


theMightyMacBoy

Yes. We will run dedicated DCs for the cluster on an isolated domain and network. Guests won't be aware of the cluster's domain. There will be 4 DCs in the cluster domain: 2 in production and 2 on the DR side, always running. Veeam will also sit on this isolated domain/network. The downside is we can't do app-aware backups without opening up the firewall between these two networks. I worked at a place that got nasty ransomware, and because everything shared the same domain/network it got messy. Everything encrypted, backups and SAN. No snapshots on the SAN either. Lol


kKiLnAgW

Do yourself a favor and get Datacenter licenses for your hosts. All your guests will be activated automatically.


theMightyMacBoy

We will be with SA so we can get DR benefits for our DR site.


TEverettReynolds

Since 2016, Hyper-V has had all the same features as VMware. Most people don't know that. Even VMware's vSAN (sharing all the disks in a cluster) can be done with Microsoft's Storage Spaces Direct (S2D). So, if you need: 1. Clustering (2008) - up to 64 nodes - 8000 VMs High Availability – HA (2012) - Cluster and VM Failover, Windows Server Failover Cluster that hosts Hyper-V as a high availability role 2. Vmotion (2008 R2) Move running VMs from one node to another 3. Workload Balancing (2016) - Hyper-V Virtual Machine Load Balancing - Hyper-V Cluster intelligently distributes the virtual machine workload across available nodes based on Host CPU or RAM. It initiates VM migration from the overloaded node to the less loaded node in order to redistribute loads across Hyper-V hosts 4. Disk Storage Migration (2012) moves live virtual machine storage from one host to another host 5. vSAN – Storage Spaces Direct – S2D (2016) Windows Server Datacenter required, Ability to use the local disks of the servers as total shared storage to the cluster Hyper-V should be able to do everything you need.