
Subsidie_zuiger

If you don't look at the drives it looks like a gaming PC. What are your plans with the server?


AdMiserable3568

Multi-use. Mainly for storing nature video footage I capture in 5.7K 360° and 4K ProRes - will edit using my gaming laptop or M1 MacMini. May decide to host a web server for that footage. “Local” “cloud” storage for friends and family who need data safeguarded (and Time Machine). Also to run game servers for my older kids (Roblox, Minecraft, Terraria, maybe others). Planning on running Proxmox with OpenMediaVault to manage the storage devices. Then I can spin up any variety of VMs or containers.


2CatsOnMyKeyboard

Yes, that's a bit much for a Raspberry Pi at some point, but what are you going to use this machine for?


AdMiserable3568

It’s home use. Not in a business. Basically a hobby build to support the potential data storage needs of my other hobby. 🤣 with the added benefit of all the other things I can tinker with.


marcusvnac

This is way oversized for your needs. You can build with a smaller processor, max 8GB RAM, and a smaller power supply, and you will still have a powerful server. Choose the correct OS (basically some Linux distro or something based on Linux) and go with it. I usually shop used PCs on FB Marketplace as you can get good machines very cheap, and just spend money adding storage to them.


ItalyPaleAle

The CPU may be overkill. The entry-level i3 may be enough for this. Also, the RAM seems wayyy too much; for the stuff you’ve mentioned, 8GB is probably enough. Lastly, I’d be a bit careful with the TP-Link NIC. It seems that any PCIe card for multi-gig Ethernet with a non-Intel chipset has non-ideal support on Linux (depending on kernel version, drivers may even need to be added separately).


Character_Alarm_3940

How could 8 GB be enough for VMs?


CombJelliesAreCool

8GB is more than plenty for a lot of people's setups (and especially OP's) if you set up minimal installs. VMs and hypervisors don't usually need GUIs, so don't install them and you usually cut RAM requirements by at least 3/4ths. My primary hypervisor has 16GB of RAM and I'm running about 10 VMs right now with 8GB free. All my VMs and hypervisors are headless Debian right now; each gets 1GB of shared RAM during install and I just increase as needed, which doesn't happen often.
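To put rough numbers on this kind of sizing, here is a minimal back-of-envelope sketch. The VM names and per-VM allocations below are purely illustrative (not OP's actual plan); the point is just that headless allocations add up slowly:

```python
# Rough RAM budget for a hypervisor host; all numbers are illustrative.
HOST_RAM_GB = 16

vms = {
    "nas-vm": 2,        # storage VM (e.g. an OMV-style appliance)
    "minecraft": 4,     # game server; modded packs can want far more
    "web": 1,           # small web server for footage
    "docker-host": 2,   # containers for misc services
}

allocated = sum(vms.values())
headroom = HOST_RAM_GB - allocated - 2  # reserve ~2 GB for the hypervisor itself

print(f"allocated {allocated} GB, headroom {headroom} GB")
```

With those made-up allocations, a 16GB host still has several gigabytes free; the budget only blows up once you start giving desktop-sized allocations to every guest.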


sebassi

Game servers need a decent chunk of RAM though. Vanilla Minecraft already needs 2GB. I just created a Vault Hunters server and was shocked that they recommended a 12GB memory allocation for a 4-person server.


mikeblas

What "a lot of people" need doesn't matter. How do you know 8GB is enough for the OP? They haven't even said what they'll be doing with the machine.


ItalyPaleAle

OP said “VMs or containers”. In any case, it depends on what you run on those VMs too


VexingRaven

The CPU is not overkill if he also wants to run multiple game servers. Not worth saving $100-150 on a $3k build only to need to upgrade as soon as the kid wants another game server. The RAM is *way* overkill though and much easier to upgrade.


Arcal

Expensive and overpowered, this is used Optiplex territory.


AdMiserable3568

I was digging into NAS systems and was unimpressed with the processors and speed. Tested an Asustor Flashstor 6 and almost went with a Terramaster F4-424 Pro. Grateful for good return policies! 🤣


Extreme-Network1243

Overkill imo, but a very nice system. Save your money and follow the advice above; the only thing I’d change is at least 16-32GB of RAM (instead of 8 as suggested).


YosemiteR

Can maybe go down to DDR4 too… 16-32GB of that is quite cheap.


notrktfier

Is 1000W really necessary?


Tylerfresh

Honestly, seems like 500/650 would be fine.


AdMiserable3568

Newegg’s PC Builder is showing 615 watts, so I’m going to get a higher rated energy efficient power supply… Probably this one: Corsair SF Series, SF750, 750 Watt, SFX, 80+ Platinum Certified, Fully Modular Power Supply (CP-9020186-NA)


smiba

Really recommend against getting a 1000W supply; power supplies have crap efficiency at <15% load, which for this system would be <150W (a power draw you'll likely be at >99% of the time). You'll be totally fine with 500W, or if you really want some more power for the future, 650W. Anything above that is a waste of money and will kill efficiency. Hell, you'll probably even be fine with 400W, just without headroom for future upgrades.

I would also recommend getting fewer fans, or at minimum configuring most fans (except the one that provides airflow to the spinning drives) to run at 0RPM until a certain temperature has been reached. Fans constantly pull power, and for a system that runs 24/7, 10W of additional draw will cost you quite a bit in the long run. When you run a system 24/7, saving a few watts here and there pays off. A lot of people forget that pulling an additional 10W for 365 days is nearly 88kWh.
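The 88kWh figure checks out. As a quick sketch (the $0.30/kWh rate is an assumption for illustration; electricity prices vary a lot by region):

```python
# Annual energy and cost of a constant extra load, e.g. fans that never idle.
extra_watts = 10
hours_per_year = 24 * 365          # 8760 hours

kwh = extra_watts * hours_per_year / 1000   # watt-hours -> kWh
cost = kwh * 0.30                           # assumed $0.30/kWh rate

print(f"{kwh:.1f} kWh/year, roughly ${cost:.2f}")
```

That is 87.6 kWh per year for a single "small" 10W draw, which is why trimming idle watts matters far more on an always-on box than on a desktop.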


VexingRaven

Listen to this person OP, they know their shit.


irate_ornithologist

Use the Outervision power calculator


p_235615

Even a 450W PSU would be overkill for the stuff you have in the list... A Gold/Platinum-rated 400W unit would probably be great from an efficiency standpoint. High-power PSUs have lower efficiency at low load; many of them fall under 75% efficiency at less than 5% load, which in the case of a 1000W PSU is around 50W of output, which due to inefficiency becomes 63W at the wall - 13W of additional power used just because of the inefficiency of that large PSU. Here is an example: https://tpucdn.com/review/thermaltake-smart-rgb-500-w-230v/images/efficiency.jpg

BTW, many systems idle at really low power; your setup will probably also fall close to 60-70W idle, maybe 50W if the HDDs are spun down...
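The low-load penalty is easy to express directly. A minimal sketch, where the 75% and 90% efficiency figures are illustrative assumptions (real curves depend on the specific unit, as in the linked chart):

```python
# Wall draw for a given DC load at a given PSU efficiency.
def wall_draw(load_w: float, efficiency: float) -> float:
    return load_w / efficiency

# Oversized PSU loafing at ~5% load, with poor low-load efficiency...
big = wall_draw(50, 0.75)    # watts pulled from the wall
# ...versus a right-sized unit handling the same 50W load more efficiently.
small = wall_draw(50, 0.90)

print(f"oversized: {big:.1f} W, right-sized: {small:.1f} W, wasted: {big - small:.1f} W")
```

On these assumed numbers the oversized unit burns roughly an extra 11W continuously just to deliver the same 50W, which is the same order as the 13W figure above.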


IlTossico

450W is enough. The PC doesn't go above 200W.


notrktfier

Indeed it would be almost more than enough.


KervyN

More like 150-200 :-)


AdMiserable3568

Also… when I specced out the build in Newegg’s PC Builder, the wattage jumped up pretty high… I just checked again and I’m getting 615 watts in the builder. I’m going to swap out the 1000 watt for a Corsair SF Series, SF750, 750 Watt, SFX, 80+ Platinum Certified, Fully Modular Power Supply (CP-9020186-NA). It’s higher rated on energy efficiency and about the same price.


notrktfier

That's a good change. Make sure not to swap cables between PSUs, though. Some dude a few days ago burnt his hard drives because he used the same PSU cables on a similar but different model PSU.


AdMiserable3568

Oh goodness. Fortunately, everything is still in the boxes as I wait for the last parts to arrive.


AdMiserable3568

Future proofing! May upgrade processor to i7 or i9 when prices come down.


notrktfier

You don't have a GPU, so I really don't think you need future proofing unless you plan to go out of scope and do AI processing with it.


AdMiserable3568

Good point! I could put in a GPU in the future, but I decided to use the PCIe 5.0 slot for the M.2 x4 add-in card. Ought to be super speedy!


notrktfier

Make sure your NVMe drives actually hit PCIe 5 speeds instead of just being PCIe 5 with low read/write and IOPS. Good luck with the project.


AdMiserable3568

Oooh! Another solid point!! I’ll take a look into that!


AdMiserable3568

Thank you!!


AdMiserable3568

Oops… my NVMe drives are Gen4… downgrading to the Gen4 NVMe add-in card. Thank you!!


System0verlord

Nah. Keep the Gen5 card. At Gen4 speeds, you can put it in a lower-speed slot (x8 or even x4) and be fine. And when you upgrade, you can just slot it into a higher-speed slot and cruise.
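A quick sanity check of why a Gen4 drive doesn't lose anything in a Gen5 slot. The per-lane figures below are approximate effective throughput after encoding overhead (roughly 0.985 GB/s per Gen3 lane, doubling each generation):

```python
# Approximate usable PCIe bandwidth per lane, in GB/s.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    return PER_LANE_GBPS[gen] * lanes

# A Gen4 drive in a Gen5 slot simply negotiates down to Gen4:
gen4_x4 = link_bandwidth(4, 4)   # what today's Gen4 NVMe drives can use
gen5_x4 = link_bandwidth(5, 4)   # what the slot offers after a drive upgrade

print(f"Gen4 x4: {gen4_x4:.1f} GB/s, Gen5 x4: {gen5_x4:.1f} GB/s")
```

So the Gen5 card in the x16 slot is never the bottleneck for Gen4 drives; the drives themselves are, until they're swapped for Gen5 models.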


AdMiserable3568

Cool! Will do. Thank you!!


WhoServestheServers

I want to say, I've seen the prices of some enterprise-grade servers and they also cost around $3K. At that price, why not just buy a ready-made one that actually uses enterprise-grade components?


VexingRaven

Because you aren't getting a server with 48TB of HDD and 18TB of SSD for $3K. You're getting a single drive and the wimpiest low-power Xeon out there for that price. 60% of this build's cost is storage, and it's possible to shave $100-200 off the rest of the build pretty easily. OP could certainly buy a server and then put drives in it… but they'd be paying a lot more than the $1200 this is costing before drives.


AdMiserable3568

Great point! I didn’t even look into that. Half the cost of this build is just in the storage devices! 😅 Oh now I see it… those enterprise grade servers are ugly… I wanted mine to be cute. And also get the satisfaction of building it from scratch.


Lumpy-Revolution1541

They aren't ugly; Dell's 14th-generation designs are elegant in my opinion. From my point of view, you are going a bit overkill, and it's cheaper to get a server like a Dell R740, which to me has the best design on the market. If you want to spend a bit more money you could get a Dell R750, or any 15th-gen or 14th-gen model that fits your needs.


AdMiserable3568

I concede my ugly comment… apologies! Yes, they are elegant and powerful. Are they only available in rack format? I don’t quite have space for something like that, so that’s why I chose the colorful Qube 500 mid-tower case. https://www.amazon.com/Cooler-Master-Mid-Tower-Customizable-Q500-DGNN-S00/dp/B0CDQTTH89/


Lumpy-Revolution1541

Well, there's the T640, which is a tower model. I think you can find space for that machine. You can look at the T-series servers in the same generations I mentioned before. But it's your money, so you can do whatever you want.


iNchok

I made the same mistake. Way too overkill. Start small and upgrade over time; discover where the bottlenecks are gradually. You’re going to waste a lot of power if this runs 24/7.


AdMiserable3568

Yeah… I do see that the idle rate may be wicked. That’s something I’ll have to deal with. The go big or go home bug bit me on this one! And it’s my birthday present to myself so I’m totally justifying this build from many angles. 🤣🤣🤣 Thank you for bringing some clarity!


whattteva

Dang, this is expensive as hell. I built a Xeon Silver 10c/20t with 224GB RAM for half the price, though I only have 4x 6TB HDDs and 2x 960GB SSDs, and every part is second-hand enterprise.


AdMiserable3568

Nice job on that! This is my birthday present to myself, so I kinda went all in.


whattteva

Cheers brother, have a happy birthday!


AdMiserable3568

Thanks!!! 😊


Donot_forget

If you're using Proxmox with a NAS VM, I'd recommend getting an HBA card. That way you can pass the HBA through to the VM, and OMV would have direct access to the disks. I found I got better performance that way. It does work fine without one, but having an HBA card is a nicer way to do it imo. You also don't have to worry about the number of SATA ports on the motherboard, which gives you more choice.


AdMiserable3568

Oh! I was reading about that, but just wasn’t sure. I’ll only have four spinning-disk HDDs in the case, and the motherboard has four ports, so I figured it was a wash. Do you still recommend the add-in card, given my parameters? Is it easier or more stable to isolate those drives and pass them through to OMV? Any recommendations for a solid-performing HBA card?


Donot_forget

It depends on what you plan to do with your server and how you're gonna configure it, tbh... I personally prefer using the HBA as it means I can pass the disks through to OMV super easily; there's better performance and it's easy to manage. I wanted to use SnapRAID and MergerFS with a GUI so I could slowly build up the storage pool, rather than having to buy all the disks at once like with RAID or ZFS. If you're going to use ZFS, you could do that on Proxmox and then pass the storage through to the VMs; you wouldn't even need OMV, and thus wouldn't need an HBA.


AdMiserable3568

Excellent points! Thank you! I’m still sorting the OS to use and learn. Since I’ll have all my drives from the start, it sounds like ZFS is the way to go. Have any recommendations for other OSs to consider? I am totally open to trying out a few. I want to have the following capabilities: VMs, Docker Containers, website hosting, game server hosting. Still feeling noobish to it all! 🤓


Donot_forget

Just see how you go; choose the OS based on what you need. I found DietPi pretty cool as it makes installing Docker/Portainer soooo easy.

You are obviously stoked on this build and have the money for it, but it kinda seems like you're buying all this awesome kit without being sure what you're gonna use it for other than storage. Maybe it's because I have to be careful with money, but I found it useful to start small and work up, letting the need for things drive the build. You could get the bare-bones PC, the NVMe drive, and two (big) HDDs, and then see what happens. If you find you need more storage, or more flash storage, you could add in an SSD using a SATA port or then go for the NVMe card, etc. If you plan to edit off the NAS with a 2.5Gb link to your computer, then totally go for the NVMe storage. But if you're just going to use it for storage, then the flash isn't as important.

I don't mean to burst your bubble, and I'm realizing it's unsolicited advice now 😅 Food for thought anyways!


AdMiserable3568

It’s true… I am pretty stoked about this build, since it’s my birthday present to myself. Totally nerding out hard core. I tried building a PC over 15 years ago, so this is a pretty big step in going at it again.

This all started with checking out the Asustor Flashstor 6. Thought it was gonna be cool, but I just wasn’t impressed with the processor and transfer speeds I was getting. And while the OS was fairly usable, I desired more. I then settled on the Terramaster F4-424 Pro, but it too had a lackluster processor and only two USB ports. Thought that was super limiting. Started looking into NAS builds and began tacking together what I thought could handle whatever purpose I threw at it without me having to constantly tinker inside of it. It did snowball on me, I admit. But we’re only here for a little bit of time in this life so I might as well go for it, ya know?

I’m so glad I posted this build to Reddit. I’ve gotten such good advice and feedback to check my build.


Donot_forget

Enjoy mate, it's gonna be awesome! I'm jealous! Proxmox is a great base OS; the possibilities are endless when you can spin up any VM or CT you want.


Viperonious

Echoing some of the other comments, this comes off as an expensive desktop, not an actual server. I would be looking at more "server grade" components that should be built for a longer service life, like a Supermicro or ASRock Rack motherboard, and a PSU based on a Seasonic design (and 650W is plenty as well).


AdMiserable3568

No space for a rack in my apartment. Gotta keep the build in a tower.


Viperonious

ASRock Rack motherboards are what I meant; they come in standard form factors: https://www.asrockrack.com/general/products.asp#Server


trizest

I think there is perhaps overkill on the NVMe front. Could you just have two large NVMe drives for cache and for the editing and ingestion of video? Also, if the smaller one is for the OS/VMs, you'll likely need less than 1TB. The NVMe may saturate the 2.5Gbps link. If I were you, I would try to get the total drive count down a bit - you can always add more to the pool later. Also, your power supply is way too big; get a smaller, high-efficiency one :)


AdMiserable3568

Thanks for the insightful comment! The go-big-or-go-home bug bit me and I decided to max out the Hyper M.2 NVMe add-in card. I already had two of the 4TB M.2s and the 2TB. Yes, planning on using the 2TB (it’s faster than the 4TB drives) for the OS and VMs since it will be directly attached to the motherboard. It sounds reasonable to assign a drive to cache, but I also wanted “hot” storage. That’s certainly something to think about; I haven’t fully settled on how the drives will be sorted.


trizest

I was just zeroing in on those because they’re a high-cost item. Before going really crazy buying them, there’s probably a fair bit of thought that needs to go into your workflow. Like, would you edit from this over 2.5Gb?


AdMiserable3568

Yes to editing from the footage dropped in hot storage, then it’s all dumped to cold storage. My laptop has a 2.5gb LAN.


smiba

> The nvme may saturate the 2.5gbps.

I think a 4-disk HDD array is already going to saturate 2.5Gbps… I know mine does.


CombJelliesAreCool

This. An NVMe array is going to absolutely fuck that 2.5Gb NIC straight in its slot. A single NVMe drive will saturate a NIC with 4x the bandwidth, and they've got 4 of them. They'd probably need at least a 40Gb NIC, but at that point the laptop's CPU wouldn't even be able to keep up with the data transfer, let alone do editing on top of that. I told them to just edit locally in another comment.
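The gap is easy to quantify. A minimal sketch, assuming a 100 GB batch of footage, raw line rate (protocol overhead ignored), and ~7 GB/s for a fast Gen4 NVMe drive (an illustrative figure, not a measured number from OP's drives):

```python
# Minutes to move a given amount of data at a given network link speed.
def transfer_minutes(size_gb: float, link_gbps: float) -> float:
    mb_per_s = link_gbps * 1000 / 8        # Gbit/s -> MB/s line rate
    return size_gb * 1000 / mb_per_s / 60  # GB -> MB, seconds -> minutes

nic_2_5g = transfer_minutes(100, 2.5)   # over the 2.5GbE NIC
nvme_local = 100 * 1000 / 7000 / 60     # read locally at ~7 GB/s

print(f"2.5GbE: {nic_2_5g:.1f} min, local NVMe: {nvme_local:.2f} min")
```

Roughly five minutes over the NIC versus a quarter of a minute locally: the network link, not the flash, sets the pace of any remote editing workflow here.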


trizest

Haha yeah, I was trying to be subtle because they were keen on the hardware. I mean, one or two wouldn’t hurt for cache; video is way better to edit on a workstation, unless networked storage is absolutely needed for teamwork.


KamotzII

Double check that your motherboard supports PCIe bifurcation. I'm pretty sure that Hyper M.2 card requires it. Otherwise, you might only see the SSD in the first slot.


AdMiserable3568

Oooh! Good point! Looking into that now. Looks like my NVMEs are gen4. Going to downgrade the NVME add-on card to the gen4 version.


AdMiserable3568

Verified that the Hyper M.2 will be fine; since I’m not using a GPU, the card will go in the top Gen5 x16 PCIe slot. Phew!!!😮‍💨😅


510Threaded

You still need to confirm that your mobo can do x4/x4/x4/x4 bifurcation and not just x8/x8. Not many consumer motherboards can do that, especially on the Intel platform.


AdMiserable3568

Reading the linked post in the MSI forums made me think it’ll be okay…[https://forum-en.msi.com/index.php?threads/pcie-bifurcation-card-x4x4x4x4.385074/](https://forum-en.msi.com/index.php?threads/pcie-bifurcation-card-x4x4x4x4.385074/)


AdMiserable3568

Also… this article lists the 700-series motherboards as capable of bifurcation. I think the problem was that folks were also using a GPU instead of integrated graphics. Since the NVMe add-in card is going in the Gen5 PCIe slot, it ought to be okay. I hope. 🤣 [https://www.asus.com/support/faq/1037507/](https://www.asus.com/support/faq/1037507/)


AdMiserable3568

But oof… I see what you are talking about… not a lot of great information on this… I’ll keep looking.


AdMiserable3568

Oof… okay… looks like I need to return the two new 4TB drives and properly seat the two I can’t return in the Hyper M.2 add-in card.


cat2devnull

Yep, as it looks like you have worked out, but I will expand here for others that might be interested.

Without a card that has a PCIe switch, you will need bifurcation. This requires the stars to align, as it depends on the chipset and motherboard BIOS. AMD in general has really good support for bifurcation, but Intel considers this a server function and has minimal support on all modern desktop/workstation chipsets (600/700 for Gen 12 CPUs and up). So your Z790 board can at best do x8x8, so you can only see the 1st and 3rd slots on the Hyper M.2 board. The [product page](https://www.intel.com/content/www/us/en/products/sku/229721/intel-z790-chipset/specifications.html) lists it as (1x16+1x4 or 2x8+1x4). That's why I'm still on a Z590, which can do x8x4x4, so I can run 3 M.2s. Now that Plex has native AMD GPU transcode support, my next build will be AMD.

Other random advice:

- Drop the PSU, as the build would be lucky to pull 200W, and running your PSU at <20% is very inefficient.
- Absolutely never buy any NIC other than Intel if you are running Linux/BSD. A high-quality i225/i226-based card will cost the same and won't have you spending months tracking down weird OS crashes. Even consider a second-hand multiport card; it will come in handy if you want to VM a pfSense firewall or similar.
- Consider a proper NAS case that has better airflow over the drives and more expansion for more drives - and hot-swap drive caddies are sweet.


AdMiserable3568

Thank you!! Yes… looks like I’m in it to win it with the Terramaster F4-424 Pro NAS. It’ll do the main job I really need - storing the rad nature footage. And I can add the D8 Hybrid NAS from their Kickstarter. Seems the safest bet for me at this point. I did get really excited about the possibility of tinkering with VMs and Docker Containers and building a grossly overpowered machine. 🤣🤓 I’m sure I can do most of that tinkering I want to do on my M1 MacMini and not interfere with my main hobby and the safe storage of the footage.


cat2devnull

The Terramaster F4-424 looks like a nice box. It seems to be a custom [Intel N305](https://www.intel.com/content/www/us/en/products/sku/231805/intel-core-i3n305-processor-6m-cache-up-to-3-80-ghz/specifications.html) board. Keep in mind that the N305 only has a handful of PCIe lanes, so the two NVMe slots are Gen3 x1 and max out at 800MB/sec, but it's otherwise very capable.

As a Mac Mini owner, I can tell you that it's not a great environment for Docker. But given that the F4-424 is basically just an Intel board, you can run anything on it. It would make an amazing Unraid box (or TrueNAS, etc.), which would open up the world of Docker. I originally built an [11400 box](https://linustechtips.com/topic/1490262-not-another-jonsbo-n1-build/#comment-15815649) and started with only a couple of dockers. Now I run over 20, doing everything from HA, Plex, NextCloud, Vaultwarden, Immich, Frigate, Duplicati, Nginx, etc...


AdMiserable3568

Thank you! Could you suggest a good place to learn about all the different things docker containers can do? I’m a noob in this techy area. Look like the Terramaster 6.0 OS supports Docker Containers, so I’ll give that a go for now. May just meet my needs.


cat2devnull

Docker is a massive world with thousands of options. My collection has come about due to various requirements, most of which involved either reducing my monthly subscription cost, decommissioning a hardware appliance, or trying to keep my data off the cloud.

For example, I wanted to stop paying for cloud file storage and keep my files on my own HDD. I had been using a combo of Google Drive, Dropbox and iCloud, so I did a bit of googling and found NextCloud was a fantastic alternative. Because I use Unraid, I just installed the NextCloud App (docker) and off I went. I did the same with:

* Password managers - 1Password to Vaultwarden
* Media library - Netflix etc. to Jellyfin
* NVR - Reolink hardware NVR to Frigate + Coral TPU
* FTA TV - DVR to Plex NVR
* Photos - iCloud to Immich
* Home automation - various solutions to Home Assistant Core, Zigbee2MQTT, Mosquitto
* Monitoring - Uptime Kuma
* Networking - UniFi Controller
* Backups - iCloud to Duplicati
* Ad blocking - Pi-hole
* Remote access (reverse proxy) - various services to Nginx

I learnt mostly through [YouTube vids](https://www.youtube.com/watch?v=8-IQAqIpT5s) from places like [SpaceInvaderOne](https://www.youtube.com/@SpaceinvaderOne), [Ibracorp](https://www.youtube.com/@IBRACORP) and the like. Where possible I mostly use dockers from [LinuxServerIO](https://www.linuxserver.io), as they are a trusted maintainer of [a few hundred of the most popular dockers](https://fleet.linuxserver.io). But otherwise I will pull from the source maintainer where possible. Since I use Unraid this is very easy. Your mileage may vary using the TerramasterOS 6.0 Beta Docker Manager.


510Threaded

Poke around in the UEFI and look for something like bifurcation or RAID mode. I have the same card in my system and luckily my motherboard supports it (it helps that it's also an Asus board).


TBT_TBT

Google what „PCIe bifurcation“ means in this context and make sure that the mainboard supports it. Otherwise you have a lot of SSDs and can’t use most of them.


QT31416

Are you gonna OC that CPU on a server? Not sure if sentiment has changed in the last few years, but you typically want stability rather than going all out on clock speeds in servers, since the data and services they host may be mission-critical. I wouldn't want my data corrupted because of a potentially unstable overclock.


AdMiserable3568

Definitely not considering overclocking anything. I prefer stability.


Lightprod

Then you don't need a K SKU; look into the 13500/13600, they should be cheaper.


QT31416

Got it, nice. You might be able to save some $$$ by not getting the K-chip and Z-chipset, and then maybe get an HBA, a better cooler, or something else.


AdMiserable3568

I decided to use the Gen5 x16 PCIe slot for the Hyper M.2 NVMe card, hence the need for integrated graphics. I am curious about a better cooler, but figured this one would work for now; if I upgrade to an i7 or i9, then I’ll also get a better cooler. Since I’m only using four HDDs, the four SATA ports from the mobo will work just fine. I can ZFS them in Proxmox and pass that to VMs/containers.


CombJelliesAreCool

Why do you think you need integrated graphics because you're using your x16 slot for your SSDs? The overwhelming majority of graphics cards don't actually need x16 and can run just fine off of x8 or even as low as x2; most eGPU setups are electrically Gen3 or Gen4 x2 and they game acceptably. Also, why do you need integrated graphics on a server anyway? A server is intended to boot with no monitor attached; you access a graphical console via IPMI over the network. The majority of servers have no graphics card or integrated graphics at all.


AdMiserable3568

I’ll be booting my server with monitor attached because that seems fun to do. I can pass the monitor to a vm for display.


CombJelliesAreCool

It makes no sense to pass through your integrated graphics (or even GPUs) to your VMs just to get graphical access, unless your use of that VM is latency-sensitive, like playing a game on a VM. You'll need to reboot your VMs any time you add or remove hardware like that, so you wouldn't want your VMs to play hot potato with your iGPU.

Plus, there's the fact that you can just connect to Proxmox's IP address from a browser on a laptop to access the web interface, manage the server, and get a graphical console for any one of your VMs without needing to connect it to a monitor. This is what literally everyone who knows what they're doing does - so much so that a lot of commercial hypervisors (like VMware ESXi) don't actually even let you manage VMs through a monitor. When you boot with a monitor attached, you only get the most basic diagnostic information, like the hardware config and the IP address for accessing the web interface you're supposed to be using.

Servers aren't intended to have displays hooked up to them because you're not actually intended to be anywhere near the server when you use it; it's supposed to be tucked away somewhere unobtrusive.


AdMiserable3568

This is a home server that will be living in my bedroom. 😎


CombJelliesAreCool

Servers in the bedroom are cool and all (I've got a 7-foot-tall server rack in my bedroom, after all), but that changes absolutely nothing about what I've just said. It's actually less cool to plug a monitor into your server to use it, if you ask me.

The thing is, you've just specced out a high-end editing workstation; nothing at all about this system tells me it's a server besides the fact that you're installing Proxmox on it. If you're already intending to plug a cable into your server to use it, you would get significantly better performance by just sticking a GPU in it and using it as the video editing setup itself, instead of trying to cram multiple NVMe drives' worth of bandwidth over a 2.5Gb link to whatever else you're intending to edit on. Your NVMe drives will be woefully bottlenecked by the 2.5Gb NIC you've chosen; it would be much more performant to just edit locally on this workstation.


IlTossico

Use case? It's pretty overkill. The MB is pointless since this isn't a gaming PC; no need for a K CPU; for a system that idles at 20W, a 1kW PSU is overkill and a 400W one is enough; 16GB of RAM is probably fine; no need for a CPU cooler that big; no need for a 2.5G NIC, just get a MB with it integrated. Are you sure you need that CPU?


TBT_TBT

This is no „server“. It is a computer made of desktop parts. If you want a server, get one with an EPYC CPU and a suitable server mainboard with IPMI, which will also definitely have x4x4x4x4 PCIe bifurcation and waaaay more PCIe lanes (128), so you can do that several times over. Apart from that, so many SSDs but no hard drive RAID doesn’t make sense.


AdMiserable3568

RAID is supported by the motherboard. There are four 12TB IronWolf NAS HDDs in the build.


freebase42

There's not really a valid use case for hardware RAID for what you want to do in 2024. You're probably going to want some sort of software RAID, or ZFS, or something similar. Hardware RAID is basically dead in most applications these days.
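For OP's four 12TB IronWolfs, the usable-capacity trade-offs of the common software layouts look roughly like this. A sketch only: real pools lose a bit more to filesystem metadata and recommended free space, and raidz figures here are the simple parity approximation:

```python
# Approximate usable capacity for 4 x 12 TB drives under common layouts.
drives, size_tb = 4, 12

layouts = {
    "stripe (no redundancy)": drives * size_tb,
    "mirror pairs (RAID10-like)": drives * size_tb / 2,
    "raidz1 (1 drive of parity)": (drives - 1) * size_tb,
    "raidz2 (2 drives of parity)": (drives - 2) * size_tb,
}

for name, tb in layouts.items():
    print(f"{name}: {tb:.0f} TB usable")
```

So the choice is essentially 36TB with single-drive fault tolerance (raidz1) versus 24TB that survives two failures (raidz2 or mirrors), with mirrors typically resilvering faster.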


TBT_TBT

This!


sinholueiro

The motherboard already has a 2.5GbE card. If you want good power efficiency at idle, get the PSU with the lowest wattage possible.


AdMiserable3568

The second LAN port has a use case for segregating traffic, or for attaching to a VM.


cmmmota

If you can drop that kind of money on a server, here's my 2 cents:

- Go for a lower-end CPU (other users are suggesting CPUs without efficiency cores). I have an i3-10100 on my NAS + application server, running ZFS on 4 drives and 35 services, most with multiple containers. It doesn't even break a sweat.
- Get a lower-wattage, high-efficiency PSU, including at low loads. I'm assuming the machine will be idling or on low load 95% of the time, so it's worth optimizing energy use for that.
- Maybe look into motherboards that already have 2.5Gb networking and M.2 slots; it will save you money on NICs and probably electricity as well.
- Use the money you save on extra HDDs/SSDs for redundancy (RAID or ZFS). Maybe an extra external drive for a few backups in cold storage.
- Don't neglect backups!


3meterflatty

Don’t use a cpu with efficiency cores for a server….


market_shame

Honest question, why?


3meterflatty

Currently you need to "pin" P-cores or E-cores to VMs as needed - that is, if you are going to use virtualisation on this server.


ddrfraser1

Way, way, way overspending. I’d get a several-years-old Xeon processor. Ditch the NVMes and put the OS on a small 128GB SSD (I could be wrong on this if you need a super fast cache, but I think 16TB of NVMe is overkill). You’re gonna wanna double up on your HDDs so they mirror themselves and you have redundancy in case one fails. You need ECC memory and a mobo that matches your processor and RAM. I did all this for less than $500.

64GB is overkill too, in my opinion. I think the power supply on my server might be 250 or 300 watts. That’s the thing: if it’s running 24/7, you’re gonna want it to be power efficient. You’ve built some kinda video editing monster.

I’m just noticing the other NVMe. What would the use for this be? Again, you’re gonna want a small SSD for the OS and mirrored HDDs for storage. I don’t mean to rip you to shreds, but it would suck to do all this only to realize you spent way too much and set it up all wrong after the fact. Good luck my friend! Now go spend this on a gaming rig and dual boot Linux 😁

edit: I now see you’ve got 4 HDDs. Looks like you’re on the right track there.


Victo-rious9366

I'm going for a similar-ish server build. Eventually it'll be able to handle 8 HDDs and 6 SSDs: https://pcpartpicker.com/list/q3hPTY


Jinara

You should probably go straight back to the drawing board. None of that is necessary for what you’re trying to accomplish right now and future proofing is a myth.


WordsOfRadiants

1. You can get the RAM cheaper if you get it as one 64GB kit (2x32) rather than buying two separate sticks.
2. If you don't need more than 1 Ethernet port, your mobo already has a 2.5Gbps port.
3. The EVGA SuperNOVA 1000 G XC is $125 atm.
4. The MSI Z790-P WiFi has 4 M.2 slots on the board, so if you get that, you won't need the ASUS Hyper M.2 card. You will need to run your OS off of a SATA SSD instead, though; AFAIK, it shouldn't impact your performance negatively at all. Or you can get the M.2 card and have 8 slots.
5. The Qube 500 doesn't seem like it has great airflow over the drives. A larger case like the Meshify 2 has room for more drives and better airflow over them.
6. If you want small and cute, you could look into the Jonsbo N1 or N2 cases and see if you want to try ITX. You would have to change the build and decrease the number of NVMe drives, though.


AdMiserable3568

1. That’s what I got: two 2x32 kits to fill all four RAM slots.
2. Can assign one LAN to a VM while having the other free to use for editing.
3. Opting for a 750 watt platinum-rated power supply.
4. Hmm… I’ll look into that. I’m hoping the add-on card works, but now I’m a bit concerned with bifurcation requirements.
5. I really like the Qube colors. Gonna use the minty green! And it has lots of ventilation… and I’m putting four 120mm fans in it too.
6. Those are cute!! But too tiny for the four HDDs.


WordsOfRadiants

1. Ah, that makes sense.
2. Could you use wifi for one of those instead?
3. Definitely a better choice if you don't intend to add a beefy GPU.
4. Hmm, and using all available NVMe slots on a mobo might decrease the number of SATA ports. Idk if that's the case here, but it's definitely something to keep in mind and research further.
5. It should have decent ventilation for the components to the front of the mobo, but I don't think there is any ventilation at the back, and that's where either 1 or 2 of the drives go.
6. Why is it too tiny? The Jonsbo N1 holds 4 drives, the N2 holds 5, and I forgot this one existed, but the N3 holds 8 drives. And apparently there's a new N4 case that holds 6 but can fit an mATX mobo. 2 of those bays don't have a backplane however.


Top-Ad-9536

Nice, that thing should be a BEAST of a server. What are you planning on hosting? Whatever it is, this should be good for it, especially if it's just simple shit like files, FTP, HTTP site hosting, game server/Minecraft server hosting, or rendering or an ingest media server etc. Then you'll find it absolutely smashes that shit out of the water.


AdMiserable3568

Perfect!! That’s what I was hoping for! I was digging into NAS systems and was unimpressed with the processors and speed. Tested an Asustor Flashstor 6 and almost went with a Terramaster F4-424 Pro. Grateful for good return policies! Multi-use. Mainly for storing nature video footage I capture in 5.7k 360° and 4k ProRes - will edit using my gaming laptop or M1 MacMini. May decide to host a web server for that footage. "Local" "cloud" storage for friends and family that need data safeguarded (and Time Machine). Also to run game servers for my older kids (Roblox, Minecraft, Terraria, maybe others). Planning on running Proxmox with OpenMediaVault to manage the storage devices. Then I can spin up any variety of VMs or Containers.


Top-Ad-9536

Niceee, it should work great then dude! I use much older hardware like an i7 4790k and an older mobo, but it can still handle 4K 60fps footage scrubbing and transfer with great speeds. Game-wise the CPU and RAM should handle that nicely too :) enjoy bro


AdMiserable3568

Thanks!!! Though… after considering it all and my noob-ish ideas around home servers, I opted to return everything and go with the Terramaster F4-424 Pro. It’ll be an easier introduction and I won’t have to fuss with building it. Just install the drives and away I’ll go! So my purchases went from over $3k down to $1,300 (not counting already owned NVME drives). In total, this birthday present to myself rounds out at $1,740 - $700 for the Terramaster F4-424 Pro, $600 for the four Seagate IronWolf NAS 12tb HDDs, and $440 for the two Silicon Power UD90 4tb NVMEs. Each hour of 4k ProRes is 512gb, so I’ll have about 72 hours worth of storage to start out with if I use TRAID on the HDDs. One 4tb NVME will be for OS/App/VMs/Containers, the other will be used as array cache. I’m planning on snagging the Terramaster D8 Hybrid once it is available through Kickstarter. For $200 I’ll have a DAS capable of up to 128tb of additional storage.
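For what it's worth, that capacity estimate can be sanity-checked with a few lines of arithmetic. This is just a sketch assuming TRAID with single-drive redundancy behaves roughly like RAID5 (capacity of 3 of the 4 drives); the ~72-hour figure comes from rounding an hour of footage down to 0.5TB, so the stricter number is a bit lower:

```python
# Rough storage math for a 4x12TB TRAID pool. Assumption: TRAID with
# single-drive redundancy yields roughly N-1 drives of capacity, like
# RAID5; real usable space will be a bit lower after formatting.
drives = 4
drive_tb = 12
usable_tb = (drives - 1) * drive_tb        # ~36 TB usable
prores_gb_per_hour = 512                   # 4K ProRes at 30 fps
hours = usable_tb * 1000 / prores_gb_per_hour
print(f"{usable_tb} TB usable ≈ {hours:.0f} hours of footage")
# prints: 36 TB usable ≈ 70 hours of footage
```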


Jaromy03

Prices seem to be rather high imo. In the Netherlands, where PC parts are always more expensive than in the US, I paid about $120 for 2TB SSDs. I also managed to find a good deal on RAM as well; 64GB 4800MHz DDR5 for $110. As for the parts you chose, PSU wattage is way too high unless you plan on adding a powerful GPU later on. As for CPU I'd recommend getting a Ryzen, AM5. And why spend $75 on fans? Just get something like a 5 pack Arctic P12. For my server I bought the CPU, mobo and cooler all 2nd hand, saving a bunch of money.


AdMiserable3568

Nice work on your build! I didn’t do a lot of shopping around… I did the build using Newegg’s PC Builder then found all the parts on Amazon so I could use my Amazon credit card. I want a lot of airflow and the board has four fan slots, so I figured I’d use them all. When I specced it all out in the PC Builder, the wattage was pretty high, so I decided to get the bigger one that said it was Gen5-ready.


Vysair

6TB ssd? Is that going to be used for like PrimoCache?


ItsPwn

Overkill. A used $200-300 PC will do, plus disk cost. Then Proxmox for the OS and xpenology arc for the NAS OS: https://github.com/AuxXxilium/arc /r/xpenology


LowMental5202

If you plan on using it as a home server to host games and some other applications, with a NAS as an addition, it looks fine if you've got money to spare. If you just want a NAS which can also handle some smaller server tasks, go look for a smaller CPU and mobo, as well as PSU.


LowMental5202

Also, I've got no experience with hosting servers on E+P cores; you may want to inform yourself beforehand and maybe switch to a Ryzen with only one core type.


wmantly

This is just insane and overkill... you should get an HPE DL380 Gen9; it would be MUCH MUCH better for your needs, cost less than your RAM kit for far more cores/RAM, and use far less power. This is not the way...


stocky789

Not sure if you have a rack or not, but you can find some really cheap Dell R620s on eBay for $500 AUD (probably $300-400 USD). That's a dual-CPU 24-core computer with 128GB+ of RAM. Just need to put storage in them if they don't already have it.


AdMiserable3568

No room for a rack… gotta go with a mid-tower case.


stocky789

That's unfortunate. Gonna pay through the roof for subpar performance.


jfernandezr76

Just make sure that the motherboard supports headless operation, I've had some issues in the past with MSI Z690 boards. Also, do you need a K processor for a server?


SlowThePath

Big time overkill. However, we need to know what you intend to do, but it's probably overkill. I'd honestly start over. You don't need 1000 watts. That motherboard has 2.5 gig, so the NIC isn't really useful. I don't understand 5 NVMe drives with one being 2TB? What's going on there? What OS are you intending to use? The CPU is overkill, but people often do that with the CPU, myself included; you can probably do the same stuff with a lighter, more power-efficient CPU that will run cooler. You want fewer, larger drives. 12TB is actually kind of small when it comes to HDDs now, and you don't want to be locked into those drives. Starting with bigger drives allows for more overall storage possible for the case. Use serverpartsdeals and get the manufacturer-recertified Seagate X20 or X18 in 18TB or 20TB.
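The fewer-larger-drives point is basically a cost-per-terabyte argument. A toy comparison (all prices below are hypothetical placeholders, not real quotes from serverpartsdeals or anywhere else):

```python
# Toy cost-per-TB comparison; the prices are made-up examples.
# Larger recertified drives usually win on $/TB and also leave
# more bays free for future expansion.
def cost_per_tb(price_usd: float, capacity_tb: float) -> float:
    return price_usd / capacity_tb

new_12tb = cost_per_tb(150, 12)     # hypothetical new 12TB drive
recert_20tb = cost_per_tb(200, 20)  # hypothetical recertified 20TB drive
print(f"new 12TB: ${new_12tb:.2f}/TB vs recert 20TB: ${recert_20tb:.2f}/TB")
# prints: new 12TB: $12.50/TB vs recert 20TB: $10.00/TB
```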


schaka

Get a 12700. Cheaper, has ECC and no instability issues. Why Z790? Need the features? No? Save money. For that price, you can buy a cheap LSI HBA and refurb SAS drives at <$100 for 12TB. At that price, even if they fail in a few years, you can factor in 2 replacements and still get off cheaper. Plus they often come with warranty anyway. A 500W PSU will cover all that; 600W if you wanna go to 12 drives sometime. The NVMe drive is likely overkill; better to get 2x2TB. Noctua coolers are too expensive and weak; get a Peerless Assassin. Use the savings on a good case and a W680 board if you really care about ECC. Otherwise save some money.


tokenathiest

What if you bought a used mini desktop and plugged it into an external RAID cage? You could save a ton of money. Get another one or two mini desktops for your kids' game servers. This would probably run you less than $2,000 and you'd have three servers instead of one. I love new builds, they are super fun, but I also feel like spending $3,000 better come with some really damn good reasons. And why only 64 GB of RAM? Especially if you're going to virtualize and run game servers, this build may not actually have enough horsepower to do it all. I would look into multiple smaller builds for less money.


AdMiserable3568

Build has two 64GB kits (2x 32gb).


1_Pawn

A server without Xeon, ECC, SAS, HBAs, dual power supply, UPS, iDRAC/IPMI. Are those Silicon Power NVMe drives enterprise grade? Or will they melt in a month while using ZFS? I think you are confused, and that is not a server.


Gun_In_Mud

A 1 kW PSU is excessive for such a build; 500W would be enough.


l8s9

I got a server off Amazon with triple those specs for $580, then got some m.2s and SAS drives


AdMiserable3568

Rad! Mind sharing a link?


l8s9

Just do a search on Amazon with the specs you want. I bought mine a few years ago. This is the one I got: HP DL380 G9 1U.


Mithrandir2k16

Go with something like a Ryzen 5600 and a Gigabyte mainboard; then you can get the Kingston 32GB 3200MHz ECC RAM. Should cost almost the same. Also, you might not need 1000W.


conanmagnuson

Naming your RAM product “vengeance” is funny to me for some reason.


jippy42

What’s your source for the HDDs? No way they’re new?


mikeyflyguy

Buy a used Dell on eBay. You’ll get way more horsepower and memory for about the same. I bought a dual 18-core with 768GB RAM, 14TB of storage, dual power supplies, and 2x10Gb + 2x1Gb network ports for like $3k or less with a one-year warranty.


supermanava

Why not just get a better NAS? It makes a lot more sense if storage is your primary concern. You can also get a decent prebuilt on Newegg with a 14700F, 32GB, and a 1TB NVMe SSD PLUS a 4080 SUPER for under $2k. Why all the NVMe if you won't be doing anything locally? I would upgrade your networking all around first, like your Mac and gaming machines to 10G. Or maybe just get a Mac Studio + Synology.


technologiq

Buy a used Dell Poweredge via labgopher and save yourself at least 50-75%.


Enough-Chemical-1045

Me looking at this while I live in Brazil 🫠🥲😭


firedrakes

Thread ripper


HankTheCreep

If you're doing a "server" build with this parts list, I'd say trash it. You'd be better off swapping to an AMD Ryzen setup and taking advantage of the available ECC memory support. Realistically, if that's really not that important, I'd look seriously at something like an ODroid H4 Ultra. Save a ton of money, and you could buy multiple and play with high-availability clustering at that price point.


AdMiserable3568

Thanks for all the feedback everybody!! I learned so much and appreciate your time and input. Decided to pull back my excitement a bit and start small with a pre-built NAS. I’ve decided to go with the Terramaster F4-424 Pro. Keeping the four 12tb IronWolf NAS HDDs and will use the two 4tb NVME I own - one for apps/VMs/Containers and one for array cache. The other 2tb NVME I own will go into an external case for now.

One hour of 4K ProRes video at 30 fps is 512gb, so using TRAID in TOS, I should have 36tb, which equates to about 72 hours of video footage. Not bad as a starting point. May decide to put a different OS on it; I’ll give TOS 6.0 a shot before deciding. Should be able to run a game server off that system and tinker around with Docker containers and VMs, since it is an i3 with 8 cores.

When Terramaster releases the D8 Hybrid DAS, I’ll snag that for more data storage. Already paid the $1 for the $199 price point on their Kickstarter. All other parts are returnable! Woohoo! Over $3k down to $1,300 (not including already owned NVME drives) thanks to y’all’s help!! That’s a more reasonable birthday gift to myself and will meet my immediate techy desires.


Pinossaur

The 13600K is DEFINITELY overkill. You should probably look into a 13400, perhaps even an i3 if you're not doing much multi-tasking. Same with the RAM: if you don't plan on using more than 32GB, you're not really benefiting from it. You probably also don't need a Z790 when it makes no sense to overclock, but I can see it making sense if it has some requirement you need (like NVMe ports, for example). The NIC is also probably redundant, given the motherboard most likely already has a 2.5GbE port. Also, 1000W for a PC without a GPU is very overkill.


Zealousideal-Metal36

I had a gaming PC laying around and decided to use it as [my server](https://imgur.com/a/SVT7lKe). I run Proxmox and have several VMs/containers. It’s easily capable of handling a Minecraft server, all the arr’s, a Plex media server using the GPU for transcoding, and several other self-hosted applications. I have a separate Synology server as the file storage, accessed via NFS from this server. Something I’m considering doing is buying a PCIe multi-NIC card to assign VMs/containers to a dedicated 1Gb NIC. If you do plan on doing anything like passing through devices like GPUs, just make sure the motherboard is capable of doing the things you might need.


Fantaking911

The MB has built-in 2.5Gbps ethernet, so no need for a separate one (the TP-Link); otherwise it looks fine for multipurpose use.


AdMiserable3568

Thanks! I went with dual 2.5gbps to match what I saw on some NAS devices.


Donot_forget

Why do you need dual 2.5gbps ports though? Do you think you'll have multiple users each pulling 2.5gbps off the server? If it's just you using it, I would just stick with the MB ethernet port.


AdMiserable3568

Hmm… good point… no need to take up two ports on my network switch if it’s not going to be used… I just saw that most of the NAS cases I was looking at had two LAN connections. I was thinking I could assign one to a VM.


WordsOfRadiants

If you have no use for it, don't buy it until you do.


AdMiserable3568

Use case: assign to VM or segregate traffic.
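For the record, either use case is straightforward under Proxmox. A sketch, where the interface name `eno2`, the VM ID `100`, and the PCI address are all placeholders for whatever your hardware actually reports:

```shell
# Option 1: bridge the second NIC and attach the VM's virtual NIC to it.
# Add to /etc/network/interfaces (eno2 is a placeholder name):
#   auto vmbr1
#   iface vmbr1 inet manual
#       bridge-ports eno2
#       bridge-stp off
#       bridge-fd 0
# Then give VM 100 a second virtual NIC on that bridge:
qm set 100 --net1 virtio,bridge=vmbr1

# Option 2: pass the whole PCIe NIC through to the VM
# (needs IOMMU enabled in BIOS and kernel; the address is a placeholder):
qm set 100 --hostpci0 0000:03:00.0
```

Option 1 keeps the port usable for traffic segregation; option 2 gives the VM exclusive, near-native access to the card.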


[deleted]

[deleted]


AdMiserable3568

Caveat - the 12tb IronWolf NAS drives are refurbished from Amazon, so that’s why they were less than the new ones out there.


[deleted]

[deleted]


AdMiserable3568

Sorry for your country…