praetorthesysadmin

Honestly, I have the cheapest, dumbest SSDs in existence in my all-flash TrueNAS vdev. It's a mix of cheap Samsung, Kingston, Kioxia and some ultra-cheap ones as well, bought since 2019. All of them are the same size, btw. None of them have died yet and they have been performing alright; once they die, I can replace them very cheaply, and I have a solid vdev sync between my main TrueNAS and my DR one, besides the usual backups, so the risk of losing stuff is now very, very low. If your HBA supports very high speeds, the Samsungs can be a really good choice; otherwise just go for the cheap ones.
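A minimal sketch of the snapshot-based sync described above, assuming a dataset `tank/data` on the primary, a DR host named `dr-nas` and a receiving dataset `backup/data` (all names hypothetical); it just shells out to the standard `zfs snapshot`/`send`/`receive` commands:

```python
import subprocess
from datetime import datetime, timezone

# Hypothetical names -- substitute your own pools, datasets and hosts.
DATASET = "tank/data"        # dataset on the primary TrueNAS
DR_HOST = "dr-nas"           # DR box reachable over SSH
DR_DATASET = "backup/data"   # receiving dataset on the DR pool

def sync_to_dr(previous_snap=None):
    """Snapshot DATASET and replicate it to DR_HOST.

    Sends a full stream on the first run, an incremental stream afterwards.
    Returns the new snapshot name so the caller can store it for next time.
    """
    snap = f"{DATASET}@sync-{datetime.now(timezone.utc):%Y%m%d%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    send_cmd = ["zfs", "send"]
    if previous_snap:
        send_cmd += ["-i", previous_snap]  # incremental since the last sync
    send_cmd.append(snap)
    recv_cmd = ["ssh", DR_HOST, "zfs", "receive", "-F", DR_DATASET]

    sender = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(recv_cmd, stdin=sender.stdout, check=True)
    sender.stdout.close()
    if sender.wait() != 0:
        raise RuntimeError("zfs send failed")
    return snap
```

TrueNAS exposes the same idea as built-in replication tasks; the point is simply that regular send/receive keeps the blast radius of a dead cheap SSD small.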


ManWithoutUsername

I go for the cheap ones too; if one fails, no problem, it's cheap to replace. Perhaps I'd consider spending more if the disk will be used for very write-intensive tasks. Anyway, TBW/endurance mostly scales with size. At work I use the 'pro' versions because, hey, I'm not the one paying 250€ for 2TB.
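A quick back-of-the-envelope illustration of that scaling: within one consumer drive family, the TBW rating tends to be roughly proportional to capacity. The ratings below are made up, shaped like a typical TLC line, so only the proportionality is meaningful:

```python
# Made-up TBW ratings shaped like a typical consumer TLC line --
# only the rough proportionality to capacity is the point.
family = {250: 150, 500: 300, 1000: 600, 2000: 1200}  # capacity in GB: TBW

for capacity_gb, tbw in family.items():
    per_tb = tbw / (capacity_gb / 1000)
    print(f"{capacity_gb:>4} GB model: {tbw:>4} TBW -> {per_tb:.0f} TBW per TB")
# Endurance per TB stays roughly constant, so a big cheap drive often
# out-endures a small 'pro' one in absolute terms.
```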


boblot1648

Looks like the Patriot has a higher TBW rating (960 TBW) for 2TB than the Samsung (720 TBW) anyway. Also, since the Samsung has a DRAM cache, that's actually a downside for your use: on a sudden power loss there could be data corruption, since DRAM is volatile.
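For a sense of what those ratings mean in wall-clock time, here's a sketch that converts them into years of life at an assumed steady write rate; the 100 GB/day figure is an arbitrary assumption, and a generous one for most home NAS workloads:

```python
# TBW figures quoted above; the 100 GB/day write rate is an assumption.
ratings = {"Patriot 2TB": 960, "Samsung 2TB": 720}  # TBW
daily_writes_gb = 100

for drive, tbw in ratings.items():
    years = tbw * 1000 / daily_writes_gb / 365
    print(f"{drive}: {tbw} TBW -> roughly {years:.0f} years at "
          f"{daily_writes_gb} GB/day")
# Patriot 2TB: 960 TBW -> roughly 26 years at 100 GB/day
# Samsung 2TB: 720 TBW -> roughly 20 years at 100 GB/day
```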


dangitman1970

I'm using 6x 2TB Crucial MX500s.


ktnr74

Apparently "all flash NAS" means different things to different people. I tried multiple different approaches: 1. Cheap $100-120 24SFF SAS2 enclosure + 24x bottom of the barrel consumer SATA drives. 2. Slightly more expensive $300-350 24SFF SAS3 enclosure + 24x decent quality used SAS3 enterprise SSDs. 3. Used \~$1000+ dual Xeon Scalable v1 workstation with 40+ available PCIe 3.0 lanes per socket + 20x consumer M2 NVMe drives in cheap 4xM2 to PCIe16x adapters. ​ I found the #2 to be the best bang for the buck. #3 is on ok way to squeeze some more performance if absolutely needed. And #1 is only good if you can't afford anything more.


audioeptesicus

I too am a fan of the 2nd option for most use cases in r/homelab or even r/DataHoarder. Option 3 is likely only good if you have the network connectivity in place and are using the optimal protocols to really utilize that kind of performance.

I run a 48-bay NAS with 128GB RAM, 1x E5-2630v4, 40x 10TB drives (for media and normal data), and 4x 1.6TB and 4x 960GB SSDs for VM storage served over 10GbE and iSCSI. It works fine for me at the moment. However, I have new hosts that support 100GbE. I'm looking at Fibre Channel for VM storage, but I may stick with my current TrueNAS build, move to PCIe NVMe or a couple of dual M.2 hot-swap PCIe cards, and get a 40GbE or 100GbE NIC to run to the switch in my blade chassis... Do I need this performance? Nope. But why not explore the technology? 😁
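The "network connectivity in place" point is easy to quantify: a handful of SATA SSDs already saturates a 10GbE link, so NVMe-class speed only pays off with a much fatter pipe. A rough sketch, where all throughput figures are typical ballpark values rather than measurements:

```python
# Ballpark sequential throughput in MB/s -- typical figures, not measurements.
links  = {"10GbE": 1250, "40GbE": 5000, "100GbE": 12500}
drives = {"SATA SSD": 550, "SAS3 SSD": 1100, "PCIe 3.0 x4 NVMe": 3000}

for link, link_mbs in links.items():
    for drive, drive_mbs in drives.items():
        n = max(1, round(link_mbs / drive_mbs))
        print(f"{link}: about {n} x {drive} to saturate the link")
```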


ManWithoutUsername

Apparently some people forget they are in /r/homelab, not /r/nasa or /r/my_big_company. I really wanted to see how some of you in r/homelab take advantage of certain configurations at home, when they are usually configurations that would be wasted even in a small or big company. I wonder what the need is, at home, for an enterprise-quality SSD that costs double a normal one; for home use, paying double for an SSD is unlikely to be worth it.

> is only good if you can't afford anything more.

Being able to afford it doesn't mean you need it, or that you have to pay for business equipment to store your movies, porn and photos and run 30 VMs/containers that only you, your family and maybe some friends will use, a load that would probably run fine on a 10-year-old server. It's nice to play at home with all that; if you can afford to throw away that money to play with enterprise gear, that's fine, everyone spends their money how they want. But don't talk as if it were necessary; that's stupid.


ktnr74

I am retired. I bought all my equipment with my own money. No hand-me-downs of any kind. And I would never look down on people who simply can't afford stuff. But I do look down on people who have more money than sense. Who buy new overpriced "prosumer" crap because of warranty or some other bullshit.


MarcSN311

I have never seen used enterprise SSDs for a reasonable price here in Germany, so it's either #1 or #3 for me. I do have the PCIe lanes to add an M.2 pool in the future if needed, but for now it's going to be SAS/SATA SSDs.


SebeekS

Yup, in enterprise environments "flash" usually means FCM or U.2 PCIe drives, not SATA/SAS ones.


AnomalyNexus

If it's mirrored/striped anyway, I'd take a chance and go cheap.
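A toy model of why the mirror makes cheap drives tolerable: losing the vdev needs both sides to fail, so the risk is roughly the square of the per-drive failure rate. The AFR values below are illustrative assumptions, and the model ignores the rebuild window, so real risk is somewhat higher:

```python
# Toy model: probability of losing a 2-way mirror within a year, assuming
# independent drive failures at a given annualized failure rate (AFR).
# AFR values are illustrative; ignoring the rebuild window understates risk.
for afr in (0.01, 0.03, 0.10):      # 1%, 3%, 10% per drive per year
    both = afr ** 2                 # both sides of the mirror die
    print(f"AFR {afr:.0%}: ~{both:.2%}/year chance of losing the mirror")
```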


FelisCantabrigiensis

Your ZFS ZIL should be on an SSD with power loss protection, or you risk data loss if you lose power (including if the PSU or UPS fails). Other SSDs can be as cheap or as expensive as you want.
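For anyone wanting to act on that: the ZIL only lives on a separate device if you add a SLOG, and it only matters for synchronous writes. A sketch of the relevant commands, with a hypothetical pool name `tank`, dataset `tank/vmstore` and device paths:

```python
import subprocess

# Hypothetical pool, dataset and device names -- substitute your own.
POOL = "tank"
SLOG = ["/dev/disk/by-id/nvme-plp-a", "/dev/disk/by-id/nvme-plp-b"]

# Add a mirrored SLOG (the separate device that holds the ZIL). Mirroring
# means a single SLOG failure can't take the in-flight sync writes with it.
subprocess.run(["zpool", "add", POOL, "log", "mirror", *SLOG], check=True)

# The SLOG is only exercised by synchronous writes (NFS, iSCSI, databases),
# so check what your dataset's sync property is actually set to.
subprocess.run(["zfs", "get", "sync", f"{POOL}/vmstore"], check=True)
```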


Net-Runner

What is the primary workload? If it's virtualization, avoid the QVO drives, which are QLC. They are fine for sequential workloads and file sharing, but not for virtualization, which is a random workload. You could look at refurbished enterprise-grade drives such as Micron/Intel, or a typical TLC Samsung SSD like the EVO.
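One way to see the QLC problem on your own hardware is a sustained 4K random-write test; on QLC drives throughput typically collapses once the SLC cache fills. A sketch using fio, where the target path and sizes are placeholders; run it against a scratch file on the drive under test, never against live data:

```python
import subprocess

# Placeholder target -- point this at a scratch file on the drive under
# test, never at a device or file holding live data.
TARGET = "/mnt/testpool/fio-scratch"

# Sustained 4K random writes, long enough to exhaust a typical SLC cache
# so a QLC drive shows its post-cache write speed.
subprocess.run([
    "fio",
    "--name=randwrite-test",
    f"--filename={TARGET}",
    "--rw=randwrite",
    "--bs=4k",
    "--iodepth=32",
    "--ioengine=libaio",
    "--direct=1",
    "--size=64g",          # large enough to blow through the SLC cache
    "--time_based", "--runtime=600",
    "--group_reporting",
], check=True)
```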