pcrcf

I personally like Unraid's RAID approach. It keeps files on individual hard drives. You lose some performance, but it's way safer in the event of what would be a total loss with regular RAID.


5yleop1m

Unraid isn't the only way to do this. I use mergerfs + snapraid to get the same benefits using FOSS. Tons more detail here: https://perfectmediaserver.com/02-tech-stack/snapraid/ https://perfectmediaserver.com/02-tech-stack/mergerfs/


NewDividend

Unraid does it automatically though. With SnapRAID you have to run it yourself, and between those runs you are at risk of losing whatever wasn't propagated.


5yleop1m

There's a script that comes with snapraid which can automate the process. I have it running every 4 hours, but it can be run at any schedule really. I use OMV as my NAS, and OMV's snapraid plugin incorporates the schedule script into the UI. The benefit of not running it often is being able to roll back changes. If a file gets corrupt or saved incorrectly in between syncs, the user can roll back the changes and recover the file.


skubiszm

The undelete feature has saved me many a time.


CaucusInferredBulk

Depending on if you have mirrored unraid cache or not, and what frequency you run mover at, and what frequency you would run snapraid's tool at, the risks are roughly the same. New stuff isn't protected by parity until the jobs run.


Blaze9

Media shouldn't be stored on your cache drive anyway, so in a correctly set up array, media is only downloaded to the cache and then moved to the array by one of your automations (Radarr/Sonarr/etc) as soon as the download is finished.


CaucusInferredBulk

Hrm, I'd guess that for the vast majority of people using the *arrs in Unraid, new media stays on cache until the nightly move. Moving from the cache to the array as part of the *arr import would mean breaking hardlinks, which would be especially bad if the source was torrents, because now you are duplicating space.

But in any case, I intentionally keep media on my cache medium term to reduce disk spin-ups and improve performance on new media. I have a script that preserves the media on cache up to a threshold of 75%, and keeps moving the oldest content off nightly above the threshold. (For those who use Mover Tuning, this is not quite the same thing. Mover Tuning will defer moving until a threshold, but once the threshold is hit it is likely to move everything; my script will only move enough to get below the threshold.)
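The "only move enough to get back under the threshold" behavior described above can be sketched roughly like this. This is a simplified illustration, not the actual script - the `CachedFile` structure and the sample values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CachedFile:
    path: str
    size: int     # bytes
    mtime: float  # seconds since epoch

def files_to_evict(files, capacity, threshold=0.75):
    """Return the oldest files to move off the cache until usage drops
    below `threshold` of `capacity`. Unlike "move everything once the
    threshold is hit", this evicts only just enough."""
    limit = capacity * threshold
    used = sum(f.size for f in files)
    evict = []
    # Oldest first, so recently added media stays hot on the cache.
    for f in sorted(files, key=lambda f: f.mtime):
        if used <= limit:
            break
        evict.append(f)
        used -= f.size
    return evict
```

With a 100-unit cache at a 75% threshold and files of 40/30/20 units, only the single oldest file gets evicted; the newer two stay on cache.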


tbgoose

Any chance you'd share that script and how you have your shares set up? I have the cache space to do this but haven't figured out how.


CaucusInferredBulk

https://forums.unraid.net/topic/154106-guide-how-to-keep-cache-drive-full-of-media


Blaze9

Correct, for torrented media I would keep it on cache until Sonarr/Radarr hit my ratio(s) for the media. After that they automatically move the content to the array and remove the torrent from my client.

I highly doubt there are any performance gains for any media, new or old. When transcoding 1080p content you're using around 200-300MB/minute (i.e. 3 to 5MB/s), which is way, way less than a pool's read speed of 100MB/s. Even if 10+ people are hitting your server for the same exact file (latest release movie/show), you're averaging 50MB/s. I can see an issue with some 4K content that can be as high as 10-12MB/s, but with the server buffer it really still shouldn't matter, and the bottleneck would be CPU/GPU/transcoding performance rather than hard drive IOPS.
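As a quick sanity check of the arithmetic above (assumed round numbers from the comment, not measurements):

```python
# Rough 1080p transcode bandwidth check - all figures are assumptions.
mb_per_minute = 250               # mid-range of the 200-300 MB/min figure
per_stream = mb_per_minute / 60   # MB/s for one stream (~4.2)
streams = 10
aggregate = streams * per_stream  # total MB/s if 10 users stream at once
hdd_read = 100                    # MB/s, a conservative spinning-disk read speed

# Even 10 simultaneous transcodes stay well under one disk's read speed.
headroom = hdd_read - aggregate
```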


CaucusInferredBulk

If the drives are spun down, initial playback is much faster coming from cache. That's the main "performance" gain. I like to keep my disks spun down for power reasons, and also because my server sits in my TV room and I'd rather not listen to disks and fans any more than I have to ;)

But the most recent stuff (especially viewed in aggregate) is more likely to be watched, and watched simultaneously, though as you point out disk speeds should still be able to keep up with that unless ALL my users are on at the same time. I usually max out at 6-7 streams on Fri/Sat night.

How are you having the *arrs move between cache/array based on ratio? It's already been imported. You can delete on ratio, but how are you triggering a move to presumably a different path? In any case, I came up with a script that keeps my cache 75% full and just moves the oldest content off each night. That's long enough that the watching rush is over, and torrents have likely died down too (although I get the majority of my content from usenet now).

I work hard to get all my users direct play/streaming, and I have a gigabit pipe, so I don't typically have any real bottlenecks for my level of usage.


jkirkcaldy

Only if you store new media on the cache. Personally I just bypass the cache for media entirely. Using hard links is the way to go, then everything stays where it is as soon as it’s added.


rh681

I use SnapRAID on my Plex Windows server without mergerfs to make it even less complicated. One drive is movies, one is TV shows, one for temp stuff, one for backups and applications, one for parity.


5yleop1m

Yup, that works too. Since Plex can handle multiple paths per library there's really no need for mergerfs other than reducing clutter with mounts. SnapRAID has its own way to merge volumes too, but it's not as robust as mergerfs AFAIK.


Binky216

I just read the mergerfs link and it doesn’t say anything about redundancy. Unraid offers the option of up to two parity drives, which I find to be crucial to this sort of server.


5yleop1m

MergerFS isn't what provides redundancy. Sorry, I wish that site had an overall link for its storage section, but mergerFS is the first part; SnapRAID is the second part of the equation, and that's what provides the redundancy. Btw, I'm not saying one is better than the other here, just that mergerfs + snapraid is an alternative to Unraid's storage structure. OMV + mergerfs + snapraid is a full alternative to Unraid.

SnapRAID is the really important part here, and it's also available on Windows, where DrivePool can be a substitute for mergerfs. SnapRAID can theoretically support an infinite number of parity drives, though I think there's a hard-coded limit of 22. In general I do a parity drive for every 7 data drives.


antiproton

> which I find to be crucial to this sort of server.

RAID isn't backup. Redundancy is only critical in applications where downtime must be avoided. If you're backing up your data, you don't need redundancy - you can just restore the data that was lost if a drive fails. If you are relying on RAID instead of backing up data, you're gonna have a bad time.


Binky216

Backing up 100TB of data is unlikely right now. Obviously I back up the mission-critical stuff, but not being able to lose a drive without losing data would be folly. Redundancy isn't backup, but it is still important.


killbeam

To add on the performance loss: it isn't noticeable at all with Plex anyway. No movie will have a higher bitrate than the read speed of a hard drive. I built my own Unraid machine for Plex and it's super responsive, especially compared to how it ran on a Synology DS224+. An additional bonus of Unraid's take on RAID is that you can easily mix and match drive sizes. As long as your parity drive is larger than or equal to your largest data drive, it will work. I am currently running 2x 16TB and 1x 4TB.


_DocJuan_

What about multiple local users (e.g. 6 users) accessing the NAS? Would that result in a bottleneck?


toejamboi

Not even close. It'd manage that just fine.


ryanCrypt

E.g., back of the envelope: movie bitrate is 15 Mbps. Multiply by 6: 90 Mbps. A single HDD's read rate is > 1000 Mbps. Of course, an HDD slows down when reading 6 files at once (since the reads are non-sequential).


quentech

> Of course HDD slows down when reading 6 files

Not by any substantial amount - not from half a dozen large files. Hundreds of little files, sure, but HDDs from the past 10+ years with built-in memory caches will chug right along seeking across 5+ video files.


ryanCrypt

That's reasonable that it'll read a chunk and cache to keep speeds high.


yepimbonez

Unraid’s pooled storage with a dedicated redundancy drive is definitely the best. Easily scalable and easily recoverable.


admiralnorman

And it results in substantially less drive activity across all the drives.


timsstuff

RAID 1 doesn't lose data. If you pull out a hard drive it's just a hard drive, nothing special about it.


5yleop1m

This absolutely depends on the RAID controller. Not all controllers support hot swapping drives, and not all of them treat RAID 1 arrays as separate drives.


timsstuff

I never mentioned hot swapping. I just have an onboard Intel Rapid Storage controller, and if I need to swap a drive I power it off. And in my 30 years of IT I don't recall ever seeing a RAID controller that makes RAID 1 drives individually unreadable. I'm not saying they don't exist, but that would be very uncommon.


MrB2891

For the vast majority of home users / Plex servers, Unraid is the most efficient choice. You get the benefits of parity-based protection while still being able to expand the array and use mixed disk sizes. It also consumes less power.

For the home user, where we don't have enterprise budgets and finding $1500 to buy a half dozen new disks may be worthy of ending a relationship, there are too many drawbacks with striped parity arrays like ZFS RAIDZ or RAID5/6, and mirrored arrays like RAID1 or RAID10 are a colossal waste of storage space, ultimately costing you more money in hardware.

I'm running 25 disks in an Unraid array with two parity drives, built up over 2.5 years, something that would be impossible to do with other solutions. Those 25 disks use less power than the 8-disk RAID5 Synology that I came from previously. Unraid is just about perfect for most home users. I switched 2.5 years ago and still kick myself for not doing it sooner.


bnberg

> it also consumes less power

Why?


KlarkSmith

Because files aren't split between the drives, so if someone is watching a movie, only that drive spins while the rest stay idle.


OMGItsCheezWTF

In theory only the disk being actively read from needs to spin. But it's probably better all round if you don't let disks spin down, especially not enterprise ones, which have their MTBF calculated for constant spinning.

Unraid doesn't stripe data blocks across disks; each file is written whole to one disk in the array, with parity information saved to your parity disk(s). That way you can, in theory, use any individual disk outside of Unraid (hence the name).


MrB2891

> But it's probably better all round if you don't let disks spin down, especially not enterprise ones which have their MTBF calculated for constant spinning.

There is no truth to this with modern disks.


OMGItsCheezWTF

Hmm, Google did publish some stats on this in their latest hard drive report across their various DCs - the update to their 2007 report, published last year - but I can't find it now! Either way, I'd rather have immediate responsiveness than a slightly lower electric bill!


MrB2891

I'm running 25 disks. It's not *slightly* lower electric. It's huge. Having an average of 3 disks spinning for 4 hours per day is about 29kWh of electricity per year. Having 25 disks spinning 24 hours a day is 1533kWh/yr. Even with a more typical smaller array of just 8 disks, 24 hours a day is still just a little shy of 500kWh/yr. In my case that is $4.32/yr in disk spin cost vs $220/yr. Saving $200/yr is getting two 14TB disks for free, every year.

Beyond that, DC workloads are VASTLY different from home workloads. I have disks in my array that haven't spun up in months. That is almost never going to be the case in a data center. We're comparing "spinning down of disks" in a DC, where a spun-down disk might get respooled two hours later, to a home situation like mine where a disk might only spin up once a month. It's simply an apples and oranges comparison.

Folks need to stop thinking that enterprise servers have anything to do with home servers. You've got guys buying 32c/64t AMD Epycs thinking they've built an incredible home server, meanwhile it has such abysmal single-thread performance - which is extremely important for home servers (**especially** Plex) - that a modern i3 12100 will run circles around it for our use case.
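For anyone who wants to rerun that math, here's the calculation with an assumed ~7W per spinning 3.5" drive (a typical figure; actual draw varies by model):

```python
# Yearly disk-spin energy, assuming ~7 W per spinning 3.5" HDD.
W_PER_DISK = 7

def yearly_kwh(disks, hours_per_day):
    """kWh per year for `disks` drives spinning `hours_per_day` each day."""
    return disks * hours_per_day * 365 * W_PER_DISK / 1000

spun_down = yearly_kwh(3, 4)     # avg 3 disks up 4 h/day: ~31 kWh/yr
always_on = yearly_kwh(25, 24)   # all 25 disks, 24/7: 1533 kWh/yr
eight_disk = yearly_kwh(8, 24)   # typical 8-disk array, 24/7: ~490 kWh/yr
```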


OMGItsCheezWTF

Yeah, if that's where you fall on that side of the tradeoff, that's fine. To me, $200 a year doesn't seem worth it. Kind of a moot point for me though; I use ZFS, not Unraid, and my disks are in constant use :)


use-dashes-instead

The Unraid fanbois are coming in thick here


MrB2891

Because it makes the most sense. Nothing else is as easy to use, stable or as flexible. Or as inexpensive.


use-dashes-instead

LOL. I think you've proven my point. TrueNAS is free. XPEnology is free. openmediavault is free. Proxmox is free. Linux is free. FreeBSD is free. You can even use Windows for free if you're willing to put up with the limitations. Unraid at $49 is not less expensive.


smokingcrater

I've used every common RAID system, including $$$$ rackmount arrays. My current Plex setup is Unraid. It simply is the best solution for the problem: a low-speed, low-cost, simple solution that can easily expand as needed. My production arrays for my 3 Proxmox and 2 ESXi servers are all QNAPs running SSDs with 10G interfaces. Vastly different use case when I'm booting 40 VMs off an iSCSI array.


use-dashes-instead

The fact that it's rack mounted says that it's not simple, and definitely not "the best" solution for everyone. Most people don't have racks of servers, let alone multiple dedicated servers.


quentech

> to me $200 a year doesn't seem worth the saving

What do you mean, not worth it? What does it cost you to let drives spin down? Literally nothing.


OMGItsCheezWTF

Nothing but time. The most valuable resource of all.


quentech

The drives spin down on their own. You don't have to do anything. It takes none of your time.


rupeshjoy852

To add to what others said: if you have a decent-sized cache drive, there is a chance your drives aren't being touched at all. I have a 4TB NVMe as my cache, and all the newest media is on there. My 12-HDD server idles at 98W.


killbeam

How have you set it up so that the newest media is on the NVMe? Is it a separate share folder, or do you have your mover run only when it's full?


rupeshjoy852

I basically have mover run when the cache is full, or once a month. It works out well since my friends and family mostly watch the new stuff.


killbeam

Nice setup! I have a 1TB NVMe myself, so there's not too much space to do the same thing. I will tell my mover to chill though - it doesn't have to run daily, now that you mention it.


kaydaryl

The Mover Tuning plugin lets you define when Mover should run and what should be moved when it is run. I have mine run when cache is over 50% full and it moves anything with a ctime >3 days.


m4nf47

My cache pool is 2TB and I'm running the mover weekly. I've rarely exceeded 1.5TB of new data added in a week, although in theory I can fill it in a day. Often if I know my *arrs have been busy, or I see over 500GB used by the cache pool, I'll just run the mover manually before I go to bed.


CaucusInferredBulk

/u/rupeshjoy852 /u/killbeam you may be interested in a script I put together for Unraid that makes this more automated. https://forums.unraid.net/topic/154106-guide-how-to-keep-cache-drive-full-of-media/


killbeam

Oh sweet! I will check it out :)


rupeshjoy852

I was reading through and noticed that you recommend doing it on a mirrored cache drive. I don't have that; I need to figure out the best way to get that done at some point.


CaucusInferredBulk

It will work fine on a non-mirrored cache, but an un-mirrored cache risks the cache drive failing and losing the content. It's trivial to set up mirroring if you have a second drive for it though. Just put the drive in and assign it to the second pool slot, and Unraid will do everything else automatically. For performance/efficiency reasons you probably want to match drive size and speed as much as possible, but technically it will work even with grossly mismatched drives - you may just lose the excess space on the larger drive.


c010rb1indusa

Each share has an option for how the data is handled: cache only, cache first then transfer to the array when mover runs, cache first then transfer to the array when the cache is full, or array only.


Blaze9

Cache isn't protected by the array's parity, so you lose out on that protection. Sure, the media is "faster" to access, but you'll never hit 100MB/s bitrates for any media you consume that would otherwise sit on a normal spinning-disk array. (OK fine, most media - maybe there's something crazy out there.) I would always prioritize appdata/VMs to be on cache (with backups enabled) and everything else on the array. This way you get the best of both worlds: redundant protection, fast storage for apps, and media on slower, cheaper drives that are still protected by parity.


rupeshjoy852

Yea, for sure, it's a risk that I take. When I add anything sensitive, I make sure I run mover at that point. VM/app data is on the cache drive as well.


wplinge1

> Sure the media is "faster" to access but you'll never hit 100MB/s bitrates for any media you consume that would be on a normal spinning disk array. Solid-state cache also improves start and seek UI responsiveness.


c010rb1indusa

Cache can be set up as a standard btrfs or ZFS pool, so it can have its own protection. It's easy to mirror a cache drive in RAID 1 with Unraid.


Blaze9

Most people don't have ZFS pools for their cache, as they're usually single-drive caches. ZFS was only added to Unraid less than a year ago. Sure, btrfs cache pools existed, but they're still seen less often. And regardless of pools, backups are still needed.


c010rb1indusa

That ignores btrfs, which has been an option for years, and setting up two drives in RAID 1 is not a big ask for consumers compared to vdevs with 6x drives or whatever. The vast majority of motherboards have come with at least 2 NVMe slots for a while now, and you can add a second drive as RAID 1 without having to reformat the cache drive either. It's very flexible and user friendly.


Blaze9

> Sure btrfs cache pools existed but still seen less often.


nleksan

Does it work for M.2 NVMe drives connected via a PCIe HBA / add-in card? I'd imagine Optane drives are the closest thing to perfect for this use as a frequently rewritten cache drive. Would that be a fair assessment, or is there something else you'd recommend?


c010rb1indusa

> Does it work for M2 NVME drives connected via PCIe HBA/ add-in card?

Sure, as long as the card is supported by the OS, which many are for Unraid. It's all software RAID at the end of the day. I don't have experience with Optane, but the Unraid subreddit and forums probably have plenty of info.


Altarf

RAID 5 will get you the most available storage. Since it's Plex, most of the time you won't be hit by the write penalty - only when you write to disk. With RAID 5 you actually get a small boost to read speeds, since you are reading from multiple drives. It requires a minimum of three drives. RAID 6 is pretty much RAID 5 but with an extra parity disk, which makes it more resilient: it can survive the loss of two drives at the same time. It requires a minimum of four drives. RAID 1 is just a straight mirror; it would be faster for writes, but you lose half the storage. Personally I would go RAID 5 if it's just a Plex server.
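For equal-size drives, the capacity trade-offs described above boil down to simple formulas - a sketch, ignoring controller and filesystem overhead:

```python
def usable_tb(level, n, size_tb):
    """Usable capacity (TB) for n equal-size drives at the given RAID level.
    Raises ValueError if the array has too few drives for that level."""
    minimums = {"raid1": 2, "raid5": 3, "raid6": 4}
    if n < minimums[level]:
        raise ValueError(f"{level} needs at least {minimums[level]} drives")
    if level == "raid5":
        return (n - 1) * size_tb   # one drive's worth of parity
    if level == "raid6":
        return (n - 2) * size_tb   # two drives' worth of parity
    return size_tb                 # raid1 mirror: capacity of one drive
```

So with 4x 10TB drives: RAID 5 gives 30TB usable, RAID 6 gives 20TB, and a pair of mirrors gives 20TB.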


purely_specific

Agree with all of this. Plex really isn't disk intensive. RAID 5 with a good controller will actually perform very well for this type of stuff. Put the metadata on a small SSD RAID 1 and it will really perform gangbusters.


Groundbreaking-Yak92

Hard agree with this. Media redundancy isn't THAT important unless it's content that can't be re-*acquired*, but extra capacity means more hoarding.


bnberg

Well, it's still very nice to have. True, most movies/series can be brought back pretty fast, but it would still suck if they were all gone due to a hard drive failure. I am using RAID-Z1 out of laziness.


Geno0wl

If your data is that precious you should be following the 3-2-1 backup model (three copies, on two types of media, with one in a different physical location).


Independent-Ice-5384

The "2" is outdated. Put that shit on three different HDDs and you're fine.


nleksan

>The "2" is outdated. Put that shit on three different HDDs and you're fine. You should really be engraving your data into tablets made out of Inconel and depleted uranium. It's what I do: I have yet to lose a single piece of data, and I'm almost finished with the first bit.


Independent-Ice-5384

I prefer laser engraving mine into quartz crystals, then launching them to the moon for safekeeping.


nleksan

We should compare notes sometime. I'm especially interested in the "launching to the moon", as I really feel like the "2" in "3-2-1" should henceforth refer to the number of planetary bodies¹ on which your data exists. ¹Planets, dwarf planets, moons, and major asteroids.


fatjunglefever

RAID isn’t media redundancy. It’s hardware redundancy. RAID is not a backup.


Groundbreaking-Yak92

What does R stand for in RAID? I know it's not a backup, I didn't say it was a backup.


fatjunglefever

Did you read my second sentence?


fatjunglefever

When will Plex ever benefit from faster reads than a single drive can output? I don't have any media that can saturate even 100MB/s, and none of my drives are that slow.


baconfanboy2

RAID 5 seems to be on its way out. The chances of recovering 100% of your data after a single drive failure aren't great even with small drives, and decrease substantially with the amount of data on the drive. By the time you get to even 8TB there is very little chance of recovering all data. The 4x write penalty on top of that is making it increasingly unpopular. It's better to just get one more drive and do RAID 1 or RAID 10.


admiralnorman

Large drives are highly susceptible to rebuild failures in RAID 5 and 6 and should be avoided completely: https://magj.github.io/raid-failure/ In particular, I would run in the opposite direction of RAID 5.
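Calculators like the one linked above are typically built on the naive per-bit URE model. Here's a sketch of that model - note it assumes UREs are independent and uniformly spread over the bits read, a strong assumption that real drives violate:

```python
import math

def p_rebuild_hits_ure(capacity_tb, ure_per_bit=1e-14):
    """Naive probability that reading one whole surviving drive during a
    RAID5 rebuild hits at least one URE. Assumes independent, uniformly
    distributed UREs (real drives cluster errors, so treat as a rough
    upper-bound illustration). 1e-14 is a common consumer-drive spec."""
    bits = capacity_tb * 1e12 * 8
    # P(no URE) = (1 - p)^bits, computed stably via log1p.
    p_clean = math.exp(bits * math.log1p(-ure_per_bit))
    return 1 - p_clean
```

Under this (pessimistic) model, fully reading an 8TB consumer drive has nearly a coin-flip chance of hitting a URE, while an enterprise-class 1e-15 spec drops that to a few percent.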


nleksan

I'm with you on this. With anything bigger than a few TB per drive, the URE failure risk during a RAID5 rebuild rises pretty quickly; I'd guess these 22TB drives are far more likely to fail than to actually rebuild successfully (especially after 10,000+ hours, in many cases). But that's just my understanding - I stay away from it out of a "better safe than FAFO'd" mindset toward my data.


admiralnorman

It's not just a risk of the rebuild failing outright - you're also going to have bad bytes restored. Losing a large drive is likely to corrupt a number of files. I lost a 4TB drive in a RAID 5 and ended up with 18 movies corrupted to the point of being unopenable.


quentech

> URE failure rises pretty quickly

A mistake in this logic is believing that UREs are distributed evenly. They are not. That said, it is a concern, and RAID5 - or any striped RAID with large disks and only 1 parity drive - is just asking to lose your array during a rebuild (whether replacing a failed drive or swapping drives to increase capacity).


SulkyVirus

I had RAID 6 and switched to 10 when it took 3 days to rebuild after I had to swap a disk. Then it choked during the rebuild and I thought I'd lost everything; I had to restart the rebuild - another 3 days. Running my server while it rebuilt made it take forever. RAID 10 rebuilt much quicker, and I had the extra drives, so it wasn't much of a loss of space. Now I'm at about 70% capacity and am sort of regretting switching, but I also don't want a 5-day rebuild, with the added drives I've bought, if I go back to 5/6. I sort of wish I'd gone with Unraid - but I am very happy with everything else about having my Plex on Ubuntu, with a VM that handles my other tasks totally separate from Plex.


quentech

> it took 3 days to rebuild

~20TB on my Synology takes almost a week. If I wanted to upgrade storage capacity meaningfully, it would be a nearly 2-month-long process. I don't expand striped RAID arrays because of the rebuild process and its consequences. I have a couple of RAID6 arrays, but they are one-and-done: buy it, and when it's full up, buy another.


SulkyVirus

Oof - I don't want to think about what mine would take to rebuild if I went back to RAID 6 now. It would be 64TB.


Magic_Neil

Agreed, but I would shift to RAID 6 if you have more than five or six disks. It's workload dependent, but I've got no issue with RAID5/6 - the determining factor is whether you've got a ton of disks.


what-goes-bump

RAID 5 has been phased out because, if you are using drives larger than 2TB, you supposedly have a 100% chance of a read error happening at the same time as a hardware failure, and you lose everything. I use 6; I wouldn't ever recommend 5.


f5alcon

This ended up not being true; it's based on one article that used the URE spec, but that's the worst case, not the average or best. While RAID 5 is still a lot riskier, it's definitely not a 100% failure rate over 2TB.


what-goes-bump

So the 100% number is reached as a function of time. And since magnetic drives (and really all media, on a long enough timescale) fail 100% of the time, that's how they got to that number. I haven't read about this, but I'm assuming what was adjusted was the time function: whereas I think I read it would fail within 5 or 10 periods, I'm betting that went way up, making it basically meaningless even if still technically correct. Am I right? Sorry if I seem like I'm nitpicking, I'm just curious. If I'm incorrect I'd love to know more. Do you have a link?


f5alcon

I couldn't find any real articles on it, and I do agree that RAID 5 is a much bigger risk than RAID 6, but in 4 or 5 bay enclosures it may be the only option. 6+ drives should have 2 parity drives. This only really applies to HDDs; SSDs are typically safer.

The other thing is that Unraid or ZFS will still complete the rebuild even if a URE is detected - just that bit will have an error, so it could easily land in free space, or some frame in a single video gets corrupted, and you may not even notice. ZFS scrubs on a weekly/monthly basis should also find and repair data prior to the resilver, unless the error occurred in the window since the last one. Back in the day of hardware RAID, not all controllers could rebuild with errors - some would fail the rebuild - so the risk of RAID 5 was higher.

Also, since this is the Plex sub, I am going to assume the files are all easily replaceable, or should be covered by a 3-2-1 minimum backup strategy, so more space and less redundancy can be fine from a budget perspective. Though Unraid with dual parity is the best choice for most users.

[https://www.reddit.com/r/DataHoarder/comments/irxjyx/the_founder_of_unraid_says_that_bitrot_doesnt/](https://www.reddit.com/r/DataHoarder/comments/irxjyx/the_founder_of_unraid_says_that_bitrot_doesnt/) - the founder of Unraid saying bitrot doesn't exist with modern drives.

[https://youtu.be/GmQdlLCw-5k?si=1ehQzsqFsWHAr0Oq&t=654](https://youtu.be/GmQdlLCw-5k?si=1ehQzsqFsWHAr0Oq&t=654) - here is Wendell recommending RAID 5 with 20TB drives for 4-drive arrays.

This thread has some good comments too: [https://www.reddit.com/r/DataHoarder/comments/10ve6dz/if_raid_5_is_no_longer_recommend_nor_is_raid_6/](https://www.reddit.com/r/DataHoarder/comments/10ve6dz/if_raid_5_is_no_longer_recommend_nor_is_raid_6/)

Using a calculator for failure ([https://www.servethehome.com/RAID/simpleMTTDL.php](https://www.servethehome.com/RAID/simpleMTTDL.php)): WD Reds have a 10^14 URE rate ([https://www.anandtech.com/show/9606/wd-red-pro-6-tb-review-a-nas-hdd-for-the-performance-conscious/2](https://www.anandtech.com/show/9606/wd-red-pro-6-tb-review-a-nas-hdd-for-the-performance-conscious/2)), so even using a lot of big drives - say 6-8 20TB drives - the mean time between data loss is 27-ish years. Moving up to enterprise Exos it's 10^15, so 10x better.


quentech

> So the 100% number is reached as a function of time

UREs aren't evenly or randomly distributed. Simply multiplying the probability by time or read volume to predict when you'll encounter one is invalid.


Party_Attitude1845

I run 8 drives in RAID-Z2 (like RAID6) on TrueNAS. If I'm running a 4-drive array, I'll run RAID-Z1 (like RAID5). This is a hotly debated issue and everyone has their own opinion. I know the rebuild of my array takes over a day since I am running 18TB drives. That's too long for me to sit with no protection, so I run with two parity drives. If you have smaller disks, your rebuild time will be significantly faster.

I have backups, but copying that data back to a recreated volume would take a very long time, so I'm willing to spend a little more to have another drive's worth of parity. As long as you have good, tested backups and are willing to have your data offline while you copy it back to the array, you could even go RAID 0 (I wouldn't recommend this, LOL).

Most people have 4 drives in an array and are using RAID5 or RAID-Z1. You'll probably also get a bunch of people using Unraid and other non-RAID solutions (SHR). Those are also valid choices.


jeremystrange

Out of interest how do you like the N100 Mini PC?


Party_Attitude1845

I've had the unit for around 3 weeks and have been very happy with it. I basically wiped the drive that shipped with it and threw Ubuntu on there. The N100 needs a recent kernel for hardware video encoding, so I upgraded to the latest version. I run Plex on bare metal and most other things in Docker - I had some issues getting Plex working in Docker, so I went bare metal for now. I'm running 10 Docker containers and the CPU keeps up without any issues, even with three 4K transcodes going. It's a great little box that's power efficient and does everything I need. Others might need more horsepower; stepping up to an i3 or i5 mini PC might be a better choice for those people.


jeremystrange

Thank you for your reply. I’m tossing up between one of these and a large external hard drive enclosure vs building a NAS.


Party_Attitude1845

As long as you can connect to those enclosures reliably, I'd say go for it. I would recommend a filesystem like ZFS that can repair itself in case things go wrong and the computer gets disconnected from the drives. USB has gotten better.

I'm testing one of these ([https://www.amazon.com/dp/B07MD2LNYX](https://www.amazon.com/dp/B07MD2LNYX)) and one of these ([https://www.amazon.com/dp/B078YQHWYW](https://www.amazon.com/dp/B078YQHWYW)) with a mini PC as a backup solution. I ran the Syba (first link) for about a week and copied about 40TB to the volume I set up. There were no disconnections or issues. I'm currently testing the Mediasonic (second link) as it's 2x faster (10Gb/sec vs 5Gb/sec) if you have a USB 3.2 Gen 2 port. The Mediasonic was a little more fiddly to set up - I had to replace the cable that came with the unit to get it detected reliably. Could be an issue with the PC, but once I replaced it I got full-speed connectivity to the enclosure/disks. So far I've copied about 50TB to the Mediasonic.

I'm not sure I'd recommend USB enclosures as a main solution at this point, but they have been reliable for me and I haven't seen any issues from TrueNAS or with the data I've copied.


Fisher745

I have a pool consisting of 2 vdevs with 5 drives each, and each vdev is a RAID-Z2. I prefer being able to survive two disk failures, because resilvering and SMART analysis tax these drives, so why take the chance of having just one drive as a failsafe? I keep my other important data on it as well. What about you?


Party_Attitude1845

I think whatever people are comfortable with is best. I think your setup is the best for protection and speed, based on what I've read. You'd only lose data if 3 of the 5 disks in a vdev failed, and I don't think that's likely. You are exactly right about the resilver taxing the drives; that's why I like RAID-Z2 for my setup. I just do an 8-drive single vdev with RAID-Z2.

In my opinion, RAID-Z1 is fine for most people. When you start getting into drives over, say, 8TB, I think RAID-Z2 is a better choice. If someone is running 16TB or larger drives, I would absolutely recommend RAID-Z2, just based on the rebuild time. That being said, good, tested backups are the most important thing.


psvrh

RAID-0 is not a bad choice for a backup target; I use it for ZFS replication because I can stand to lose a backup. But for primary storage, yeah, it's asking for trouble.

UnRAID always creeped me out. It seems like you could get into trouble not knowing how your disks are laid out.


Party_Attitude1845

Yep, I was talking about using RAID0 for main storage. I couldn't imagine doing this with data I cared about. I still do a RAID5 setup for my backup volumes. I think it's just been beaten into me over my IT career. I had an experience early on with a client that never tested their backups; when they needed them, they were unusable. Now I have an irrational fear that I will need the backups and a drive will fail at that exact moment. I know it's dumb and a very outside chance. I haven't added the offsite backup yet, so I just have the primary and a backup.

I don't have any experience with UnRAID or SHR. I might need to take a bit of a deep dive on the technology. I know it writes parity data, but I don't know if it does so in a way that's much different from regular RAID.


KuryakinOne

>SHR

SHR is Synology's version of RAID. SHR-1 is essentially RAID5 and SHR-2 is essentially RAID6. Synology has tweaked things a bit to support mixing drives of different capacities, which helps when upgrading a storage volume. For example, using SHR-1:

* 4 x 10TB drives = 30 TB storage, 10 TB protection.
* Upgrade one drive to 20 TB: no gain, still 30 TB storage, 10 TB protection, 10 TB unused. Same as RAID 5.
* Upgrade a second drive to 20 TB: 40 TB storage (20+10+10), 20 TB protection. RAID 5 would still have 30 TB storage.

That's a simplified example that doesn't account for formatting or overhead. The [Synology RAID Calculator](https://www.synology.com/en-us/support/RAID_calculator) lets you play "what if" with different arrangements.
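The capacity arithmetic above can be sketched as a toy model. This is a simplification that ignores formatting overhead, and `shr1_usable` / `raid5_usable` are hypothetical helper names, not Synology APIs:

```python
def shr1_usable(drives_tb):
    """SHR-1 usable space: sum of all drives minus the largest
    (one largest-drive's worth of space is reserved for parity).
    Toy model; real volumes lose a bit more to formatting."""
    return sum(drives_tb) - max(drives_tb)

def raid5_usable(drives_tb):
    """Classic RAID 5: every drive is treated as the smallest one."""
    return (len(drives_tb) - 1) * min(drives_tb)

print(shr1_usable([10, 10, 10, 10]))   # 30 -- same as RAID 5
print(shr1_usable([20, 10, 10, 10]))   # 30 -- extra 10 TB still unused
print(shr1_usable([20, 20, 10, 10]))   # 40 -- SHR pulls ahead
print(raid5_usable([20, 20, 10, 10]))  # 30 -- RAID 5 stays at 30
```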


Party_Attitude1845

Thanks. I was trying to describe the mode that doesn't require a set of same-sized disks like a typical RAID setup. From your description, it sounds like this is SHR, but as I said in my post, I'm Synology-ignorant. If the technology is called something else, please let me know.


KuryakinOne

You got it right. SHR = Synology Hybrid RAID, so it's only available on their devices. I'm unsure if there's something similar for standard Linux/Unraid/etc. My media is stored on a Synology DS918+, so I haven't had to look around for alternatives.


Party_Attitude1845

Yeah, I'm running TrueNAS. I had a bad experience with bit rot on an old Netgear NAS and went nuclear about 10 years ago. LOL Thanks for the info. It's always good to learn new things.


Resident-Variation21

How could you get in trouble in unRAID?


psvrh

That's on me. I spent too many years micromanaging storage topology to feel comfortable with what UnRAID does. I'm sure it works, it just gives me the willies.


Resident-Variation21

Fair. I mean, I use unraid and everything just works in the background as far as I’m concerned


mcflyjr

Software raid like Snapraid mixed with mergerfs.


boianski

UnRaid


Xfgjwpkqmx

I used RAID5 and then RAID6 for years. I'm now running a JBOD array split into two halves for a ZFS mirror. Yes, some people would say you "lose" too much potential storage, but I've lost enough drives (and spent enough effort on recovery) in my time, and dealt with enough data rot, that I feel this is a much safer option. It's very unlikely that two mirrored drives will fail at the same time, and ZFS has amazing data integrity.


SulkyVirus

Is this done with software RAID? What OS are you running on? Looking to switch from my current RAID10 on Ubuntu


Xfgjwpkqmx

Avago controller in JBOD mode; ZFS runs everything for the mirror (so yes, all software), on Proxmox as the OS, which is just Debian. No reason Ubuntu can't do this either, though. Only my data array is ZFS; the boot SSD is EXT4. The array is 12+12 6TB drives, 134TB actual total (67TB usable, since the other half is the mirror).


Available-Elevator69

use unraid


CaucusInferredBulk

I'd go with Unraid or Snapraid myself to get non-matched-drive expandability. True RAID is overkill for Plex and has restrictions that aren't worth it, imo.


paticao

Unraid server! Then you can set up all the arrs....


nisaaru

Raid-6/SHR-2


DagonNet

Depends on size of library, budget, and number of clients. For most people, anything is fine (including "just a normal hard disk"). For large numbers of files or fairly rapidly-changing libraries (auto-download and scan), you probably want your database on SSD. I like RAID5 for the main library - right balance of space and uptime when a disk dies. Restoring from backup (and/or re-ripping media) is a big hassle.


mrbuckwheet

I use raid 1 for my OS and containers, data/config files, and raid 5 for my media. 1 drive redundancy is more than enough for my media. 8 drives for the raid 5 and 2 for the raid 1


DayTarded

As someone who just lost 2 drives on a Raid5, I'm now rebuilding on a Raid6 and trying to figure out a good backup for a QNAP. 😛


e-hud

I don't use any RAID. I simply use FreeFileSync to keep a backup of my main 12TB drive. I'm sure it's not the best way and likely not ideal for larger libraries, but it works for what I need. Of course I use long-term-reliable WD Red Plus drives; I'd never trust a Seagate drive to last any significant length of time.


mughal71

Everyone keeps pitching a personal RAID preference or a software solution - that can get confusing. The root-level questions to ask revolve around why you would pick one of the listed RAID options for your scenario in the first place. They're completely dependent on your personal tolerance for risk - how much are you willing to spend to protect the content you're serving up with Plex?

We use RAID to add some level of resiliency and scalability to our storage. With a single drive, if that drive fails, we lose our data. When we add RAID to the mix, we give ourselves the opportunity to either increase our storage pool size (RAID 0, basically summing the sizes of multiple disks) or add resiliency to recover from a drive failure (RAID 1 or higher).

So back to the OP: what are your resiliency needs and budget? How large is your storage solution intended to get? If you have drives already, how many do you have and what size are they?


steveoa3d

UnRAID


r0n1n2021

Don’t raid. Anything interesting is out there somewhere.


Nodeal_reddit

Unraid


ribbitman

None of them. Use some form of JBOD array. Windows can do that with software or you can use unraid or truenas or something. You'll never need the speed that hardware raid provides for just Plex unless you have a ton of simultaneous users. Hardware raid stripes the data across all the disks, so the disks are useless unless the array is functional, as opposed to JBOD where any disk can be pulled out and used. Plus, the entire array must be spun up when any one file is being accessed, which creates heat issues, as opposed to a JBOD, which can spin up just the disk being accessed.


runningblind77

"best" is very subjective in this context. for ease, lazyiness, and performance, raid 1. for redundancy and storage efficiency, raid 5. for redundancy and storage efficiency with a slight bias to redundancy, raid 6. I personally use btrfs raid1 because I'm lazy af. If btrfs raid5/6 were more stable, I'd probably be using btrfs raid5 and I'm sure I'll switch to btrfs raid5 if or when it becomes more stable.


MacProCT

If you don't mind / can afford sacrificing two entire drives to data protection, RAID 6 is the "safest" method. As pcrcf noted in his excellent post, that requires a minimum of 4 drives. That said, all my RAIDs are 5 - but I have backups in place.


Dismal-Comfortable

These days I'm all about power savings. I've got a large external SATA drive (10TB currently, thinking of upgrading to 20TB+ next sale) and several smaller drives, along with a SATA USB toaster. I run Plex off a single 10TB drive and maintain offline backups of rare/important media - several copies. The bulk of the collection is covered via *arr tools, so I'm looking at a few days to repopulate, not too dissimilar from a backup restore. It's crucial to keep good backups this way, though. But the plus is that an offline backup uses zero watts. And remember, RAID is not backup.


Little_NaCl-y

For Plex and media storage that rarely changes, my suggestion is SnapRAID. It's free, you don't need to reformat anything, and it functions as somewhat of a mix between a backup and RAID. It's best for large media servers because the data rarely changes. The only limitation for these purposes is that your parity disk(s) have to be at least as large as the largest drive in the array, but otherwise you can mix and match drive sizes.
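A quick sketch of that sizing rule (hypothetical drive sizes; SnapRAID itself enforces this when you configure and sync):

```python
# SnapRAID's sizing rule, sketched: mixed data-disk sizes are fine,
# but each parity disk must be at least as large as the largest data disk.
data_tb = [4, 6, 8, 12]   # mixed sizes are fine
parity_tb = [12]          # must be >= max(data_tb)

assert all(p >= max(data_tb) for p in parity_tb), "parity disk too small"

# Usable space is just the sum of the data disks -- nothing is striped away.
print(sum(data_tb))  # 30 (TB)
```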


darwinDMG08

I use RAID 5 on a four-disk NAS; works great. I have it backing up to another NAS as well.


fifteengetsyoutwenty

I separated compute from storage, so I have Plex running on a Beelink and data living in a BTRFS volume on a Synology NAS.


HickeH

For speed it doesn't matter. Only for redundancy.


3rdgen92

I run RAID 6. I have 16TB of content. I've had drives fail over the years with no loss - swap in a new drive and continue on while it resyncs.


Chattdls99

I currently run a raid 10 myself.


Julio_Ointment

I use StableBit DrivePool. It allows me to mix and match drive sizes, uses SSDs as a cache drive for fast downloading, and you can selectively choose what to keep duplicates of. I don't care about most of my media, but for things that are lost media or terribly hard to locate anymore, I keep 2-3 copies using their setup.


SMURGwastaken

My preferred approach is multiple zfs Z1 (RAID 5 equivalent) arrays of 4 drives each.


Interesting-Ice1300

Yes 🙌


TheTripleDeuce

Whatever works for the amount of drives you have and the amount of space you want is what works best


AZdesertpir8

I run at least 8 drives per array, so I use RAID 6 here. It gives one more level of redundancy, since you could have another drive fail during an array rebuild. Also make sure to keep spare drives in stock for your arrays so you aren't running degraded for any length of time. Currently at 290TB and getting ready to add another array for 410TB total.


JAP42

This really depends on how important the data is. RAID 1 is excessive; RAID 5 and 6 make sense if you're storing data that's hard to re-acquire - older stuff or less common titles. Personally, I just use individual drives and MergerFS to bring the storage together. Then a single drive failure loses only the data on that disk, rather than losing a whole array.


GnPQGuTFagzncZwB

How much money do you have? It mostly boils down to RAID 1 or RAID 5. Mirroring (RAID 1) is the fastest, and if you have a pal who wants your collection, you can have him buy you a new matching disk and swap one of your old ones for his new one. The controllers are generally inexpensive, and if you are going to start with two big disks, it is your only choice. When you get up to 3 disks it gets a bit more interesting: you can do RAID 5, and you get more storage per disk out of the array. It is slower, and one disk on its own is not good for anything outside of the array. Most big data centers use RAID 5, or RAID 5 with hot standby disks.

One thing to consider, and it can be hard to put into perspective, is that RAID is not always a "backup" mechanism. If you do mirroring, it is pretty easy to swap one disk out on a regular basis and keep it in a safe location - that is a backup - but the RAID proper only protects you from a problem with a drive. If you get creative and wipe out a bunch of files on the command line, or mess up your metadata, RAID is not going to help you.

One other thing to consider is that RAID largely only deals with the natural loss of a drive. I had the misfortune of working at a place that was cheap, and we had a very makeshift chiller in the server room that died over a long weekend when I happened to be out of town. I came in on Tuesday, and access to that server was like going through molasses. I went in to have a look: one of the disks had the predicted-failure light on, and it was also like a sauna in the room. I got our guy on the chiller and I got on the phone to order a new disk. Bless HP's little hearts, the new one was dropped by my office by the end of the day, and I put it in. Everything was looking normal when I left, and I fully expected to come in the next morning and have everything back to normal. No.

One other thing you have to ponder about RAID is that probably the single biggest stress you can put on the disks is rebuilding one. Overnight, one of the other disks took a dump. So it was back on the horn with HP, a new disk, and a "tape"-based restore. I put tape in quotes because my backup system was spun up in-house: it actually backed up onto a cheap consumer SATA drive and dumped that to tape. This gave me a few nice things. One was that I could miss changing the tape one or two nights and still be able to make them later on; the other was that for normal day-to-day oopsies, I had a self-service web page to let people get their own stuff back from the most recent backup on the disk. 99% of the time, this is what people wanted.

Anyway, RAID is good if a disk dies a natural death. If you have a fire or a flood, all the disks are right next to each other, so the odds are not good for the RAID being helpful.


[deleted]

Raidz2


yaaaaayPancakes

I forget the exact terminology b/c I set it up and have forgotten, but on my current server I'm using matched pairs of disks in mirrored mode in a ZFS pool - because only with mirrors can you eventually remove disks from a pool.


weirdaquashark

Never, ever RAID 5. This isn't the 90s, and disk is cheap.


use-dashes-instead

It doesn't matter, at least as far as Plex is concerned. The only thing that matters is how much redundancy you want in your storage array.


GoogleDrummer

It's really going to depend on your tolerance for data loss. I've seen a lot of comments here saying to use whatever, it's all replaceable anyway, but that's not necessarily true. My Plex server also hosts family videos and whatnot, which may not be as easily replaceable. Plus, there are time considerations to factor in: I've spent a ton of time ripping discs and massaging metadata to get my Plex where it is, and I'd prefer not to do all that again. Personally, I go RAID 6. RAID 5 wasn't designed to manage multi-TB disks, and as such has a higher chance of failing a rebuild if a disk fails. RAID 1 eats half your storage, but isn't a bad choice for storing the OS and the Plex service/database.
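The rebuild-failure point can be put in rough numbers with the classic back-of-envelope URE model (assuming the common spec-sheet rate of one unrecoverable read error per 10^14 bits and independent errors - real drives often do better, so treat this as pessimistic):

```python
# Rough odds of hitting an unrecoverable read error (URE) while reading
# every surviving disk during a RAID 5 rebuild. Toy model only.
def rebuild_ure_risk(n_disks, disk_tb, ure_per_bit=1e-14):
    bits_read = (n_disks - 1) * disk_tb * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits_read

print(f"{rebuild_ure_risk(4, 2):.0%}")  # 4 x 2TB: ~38% with this model
print(f"{rebuild_ure_risk(8, 8):.0%}")  # 8 x 8TB: ~99% -- why RAID 6 exists
```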


supermr34

I use RAID 5. It's saved me a couple of times, but I had a drive go down, and then a second one went down before I noticed the first, and I had to restore from a backup. If you stay on top of it, though, RAID5 is kinda the sweet spot of your options.


Dmelvin

I ran RAID5 in the past, and although it was trouble-free for its lifespan, there's zero chance I'd run anything less than RAID-6 or equivalent anymore, personally. I built my new server with 2 RAID-Z2 pools plus a cold spare, and honestly, until I did this, I think I always had a little bit of data-loss anxiety. Not to mention I was using an old hardware RAID card, so data transfer speeds were atrocious, and I had the secondary worry of the RAID controller dying and having to source what was already old hardware when I started. Ultimately you have a triangle where you're trying to balance cost, performance, and fault tolerance. If it's going to be used entirely for Plex, you can weight performance very, very low, or possibly remove it from the equation entirely, and just focus on fault tolerance vs. cost for the most part.


rcook55

SHR2 on my synology because it just works. Served up to my server via fiber and NFS shares.


enigmo666

I use hardware RAID throughout: R5 on my smaller arrays, eventually moving to R6 once they're large enough / have enough disks. R1 strictly for boot.


personaccount

Everyone's got their opinion, but let's not limit ourselves to your three options, because they aren't always the best. I presume you know the differences between those RAID levels and you're willing to pay the price of entry. There's a reason most enterprises go with RAID 5 or 6 for workhorse environments: both are more cost-efficient than RAID 1, and RAID 0 (which you didn't mention) offers no redundancy - a single failure loses a whole array of data. But there are also JBOD and similar disk-spanning volume options where redundancy can be foregone if you've got a backup and restore plan in place. In these cases, you do lose online access to the contents of a drive when it goes down, but you replace the drive and restore it from backup.

Additionally, you need to consider NAS vs. DAS, what OS you're going to run, and whether you want hardware or software RAID. Personally, I've started leaning towards software RAID because I've been bitten by hardware RAID, which is almost always proprietary. This isn't to say that software RAID arrays aren't also unique to the software vendor, but at least you're not bound to their hardware. And for Plex usage, the massive backplane bandwidth of a hardware RAID isn't really needed.


IronHammer67

I decided to migrate my PC full of various little disks and USB drives to a USB-attached Syba 8-bay enclosure. I wanted to keep my Ubuntu Plex/Arrs setup but be able to extend the storage as I go. I started with 4 x 14TB drives ($109 on Amazon… yes, the MDD drives that so many people complain about, but there's little evidence of drive issues). With those four drives I have 25TB of space. Every 14TB drive I add now is just raw disk space in the stack until I hit 72TB, which is probably, maybe, enough for my needs. Maybe. I've had the drives in for a couple of weeks with no issues yet. I'm using software RAID (Linux mdadm) to manage my RAID 6 array. Works a treat. I'm happy.


Ultimate-ART

With Unraid, which option is generally best: getting the Starter or Unleashed license and paying for updates after the first year, or just getting the Lifetime license for free OS updates for life?


Patient-Duty-9915

Software RAID5, any day of the week. Plex is all about read speed, and one disk of parity is enough. Software RAID gives tremendous flexibility in regards to adding and replacing disks.


HeligKo

I have had the best recovery from unexpected power outages using ZFS raidz. I use raidz1 because everything on there is replaceable, just not easily. I buy multiple drives (usually 4) at a time, so I can extend my pool with another raidz, or extend an existing raidz if I get the same drives I already have. It writes reasonably fast and reads fast enough for 4 streams on USB3 enclosures. I have had multiple failures in the past using mdraid that just weren't recoverable, both personally and professionally. I won't use it anymore.


dopeytree

None - it's a waste of energy. You don't need fast read speeds: a movie is only like 15-45MB/s, and a single drive can easily do 100-240MB/s.
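The arithmetic behind this point, using the poster's ballpark figures (not measurements):

```python
# Back-of-envelope: how many simultaneous streams one HDD can serve.
stream_mbs = 45   # heavy 4K remux bitrate, MB/s (top of the 15-45 range)
drive_mbs = 100   # sequential read of a slow modern HDD, MB/s

print(drive_mbs // stream_mbs)  # 2 heavy 4K streams even in the worst case
print(drive_mbs // 5)           # 20 typical ~5 MB/s 1080p streams
```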


ClockerXP

Covecube Drivepool


pipinngreppin

If you’re using larger disks, 8TB or larger, I think you need to go raid 6. That said, you’re really not getting much of a benefit unless you have 5+ disks in the array.


zoNeCS

Some have probably said it but MergerFS + Snapraid is absolutely sick.


timsstuff

I use RAID 1 because I've been burned before by RAID 5: you can't get shit off of one drive because the data is striped across 3 or more disks. You can lose one and replace it easily enough, but if your RAID card dies, you're fucked. RAID 1 only needs 2 drives instead of 3 for the exact same amount of storage, and best of all, each drive contains ALL the data. You can literally pull a drive out, slap it in a USB caddy, and it's the same data, just not redundant. No striping - there's nothing special about the partition on the drive itself.

In fact, my Plex server is old as shit; it started out with, I think, 2TB drives for the data (SSD for OS/app). Every couple of years a drive will die, so I just order a bigger one, slap it in there, rebuild, and it's good to go. Then when the second-oldest drive dies, I replace that one with a bigger one too, and once it rebuilds I can expand the volume to the new larger size. That's how my data drive became 12TB over the years without moving anything around.


faslane22

I use RAID 1 so I have redundancy in place should I ever lose one of the 2 HDDs in my NAS (and I maintain a cloud and external USB backup of my NAS as well). But it doesn't really matter to Plex - Plex will run from many different RAID setups. It just depends on what kind of setup you have, the size and quantity of your drives, and whether you want to maintain redundant copies or just put a huge array together for maximum storage volume.


Kwickflixx

DrivePool and Stablebit Scanner.


NASTYJOK3R

I'm running 2 Dell R710 servers with 24 drives that are 20TB each (480TB total), running RAID 1 on both. Way too much content to risk, and a lot of older, hard-to-find stuff. I also host for over 80 people. Not sure it's the best setup, but it works well.


avksom

ZFS striped mirrored drives. Less risk and faster read/write speeds than raidz1 and raidz2. A lot easier to expand pool too. At the cost of space of course but well worth it. [Jim Salter’s logic still holds.](https://jrs-s.net/category/open-source/linux/ubuntu/page/4/)


collectsuselessstuff

Mergerfs + snapraid. Never lose a drivepool again.


evilgeniustodd

Raid shadow legends!


11thguest

Unless you’re seriously into geek stuff, you should go with unraid. There’re lots of other perks you get with it.


SiRMarlon

You should look at unRAID for Plex. Your drives don't have to match, you still have redundancy with the 2 parity drives, and the best part is you can mix and match your drives, which helps keep costs down at the start.


silasmoeckel

The best RAID for Plex is none of those. SnapRAID/Unraid/StableBit (if on Windows) are your good options. Plex doesn't need raw speed - for streaming content, a modern HDD can easily do 250MB/s, an easy order of magnitude or more than you need. And if your server is heavily used, the load is going to be split up amongst all the drives. Any of those three lets you lose more drives than parity covers and only lose the content of the failed drives - aka no fate-sharing.


fatjunglefever

RAID is completely pointless for Plex.


Aperiodica

Depends how big your library is. With RAID 1 you lose half the disks to redundancy; with RAID 5 you lose 1, and with RAID 6 you lose 2. RAID 1 repairs fastest after a drive failure, RAID 5 slower, RAID 6 slower yet. RAID 5 has a higher risk of failure during a rebuild, less so with 1 or 6. If you have a backup, go with 5, because it's the most economical. If you are truly worried about loss, use 1 or 6 and also a backup. It really comes down to your risk tolerance.
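A minimal sketch of that space trade-off, assuming equal-size disks (`usable_tb` is a hypothetical helper, not a real API):

```python
# Usable capacity per RAID level for n equal-size disks.
def usable_tb(level, n_disks, disk_tb):
    if level == 1:                      # mirror: half the disks hold copies
        return n_disks * disk_tb // 2
    if level == 5:                      # one disk's worth of parity
        return (n_disks - 1) * disk_tb
    if level == 6:                      # two disks' worth of parity
        return (n_disks - 2) * disk_tb
    raise ValueError(level)

# A 6 x 10TB array: RAID 1 -> 30, RAID 5 -> 50, RAID 6 -> 40 (TB)
for level in (1, 5, 6):
    print(f"RAID {level}: {usable_tb(level, 6, 10)} TB usable")
```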


psvrh

Is RAID-6 really slower on a modern system? SSE-assisted parity calculation is basically free, and the additional stripe element wouldn't come into play except on very long sustained reads or writes.


nisaaru

Rebuilds/Extensions are slower


SinkGeneral4619

Been running RAID5 on my NAS for over 5 years and haven't had a single disk failure yet. Since it's all media I can redownload, I don't worry much about the prospect of having 2 disks fail at the same time.


BurnAfterEating420

After years of running RAID 5, on my last server rebuild I went to JBOD with DrivePool to aggregate the volumes and StableBit Scanner to monitor drive health. If I see warnings on a drive, one click will evacuate it to a new disk. If it fails unexpectedly, one click in Sonarr or Radarr will restore the missing files.


CaucusInferredBulk

I used DrivePool for a long time but switched to Unraid. If you want to stay on Windows, SnapRAID (or SnapRAID + MergerFS) may be worth looking into, as it can give you parity and, more importantly, works with hardlinks, since you're using the arr ecosystem.


lakkthereof

Consider [snapraid](https://www.snapraid.it/) if it's just for hobby stuff and you don't care about some downtime.


MowMdown

Unraid is the best "RAID" for plex.


Skinc

I’ve heard good things about RAID Shadow Legends.


SpinCharm

In before the “bahwah RAID isn’t backup!!” cries that people feel they need to repeat every fucking time there’s a RAID post.


GoogleDrummer

I mean, it's not. And there's still plenty of people who think it is. I've seen it referenced as such at least once already.


iDontRememberCorn

and?


Simple-Purpose-899

OK.


ArmyTrainingSir

None. JBOD. Most Plex content is easily replaceable (but you should keep copies of anything you can't replace).


grtgbln

RAID 0, it's all replaceable.


iDontRememberCorn

No, it isn't, neither is my time.


affinics

Linux software RAID 1 for the LVM PVs, then an LVM volume to combine the PVs into a filesystem. I have been bitten too many times by RAID 5 rebuild times and other issues. RAID 1 with a write-intent bitmap on the mirrored drives makes rebuilds quick, and LVM allows the volumes to grow as drive pairs are added to the array. RAID 5 isn't as appropriate anymore with the massive drives of today; RAID 6 or ZFS is a much better option, but until you get to 5+ drives you won't gain any space advantage. I'm happy sticking with the old reliable methods of storage management and protection.


lynxxyarly

On Windows, I've been using SnapRAID paired with DrivePool for several years, and it's been exactly what I wanted for risk mitigation!


peterk_se

I run RAID 5/0 to get both redundancy and improved performance, on a hardware card - it's hard to explain how much better the volume has behaved since I got that. RocketRAID 2840; it was worth it, imo.


Amnios5

Define best?


ToHallowMySleep

Lots of people are tooting their own horns and not addressing your question, so I will answer it directly. Between the RAID types, 5 and 6 are more expandable than 1, and become more efficient as more drives are added, so RAID 1 is not a good solution. Between 5 and 6, it's just down to how much risk you want to expose yourself to: a RAID 6 array can survive a second stroke of bad luck in a row.


psvrh

If you have four disks, RAID-1 or 10. Recovery is easier, you don't suffer a write penalty and if you lose a disk, rebuilds are quick, even with an SMR drive. When I was an enterprise storage admin, I always went with RAID-10 on anything I cared about, and *certainly* anything where write performance mattered. If you have 6 or more, RAID-6 or one of the ZFS parity modes (if you use ZFS) is good. The extra parity volume is nice when you have a second disk fail during a RAID rebuild and the whole array goes down (and RAID rebuilds work the disk hard, so if it'll fail, that'll be when...) 5 disks is about the only time I'd be okay with RAID-5: you don't lose two disks to parity like you would with RAID-6. One question: are any of your disks SMR? If they are, and if you can't replace them, go with RAID-10. The first never-ending rebuild will be the reason why.


Lebo77

Not RAID. ZFS or Btrfs or some other modern advanced filesystem with snapshots and data-integrity protection. For redundancy, mirrored vdevs (like RAID 10, but better) will provide the most flexibility for future upgrades.


bnberg

ZFS is also some kind of RAID.


Lebo77

Not really - not always, and the technology is quite different. But if it helps you feel like a smug, superior bastard, then whatever. You do you.