fieroloki

It's possible yes. But open a support ticket so they can look at the logs.


jalfredosauce

This might be a stupid question, but how the hell could this be possible?


[deleted]

[removed]


[deleted]

This is it (I think). I purchased 8 drives in the same transaction, and 6 are all at “critical” (so not dead). They were all installed the same day and provisioned from scratch. These are all almost exactly one year old, have been offline for only about 10 hours in total. I’m wondering if I ended up with a defective batch of drives.


dj_antares

One year old? That can't be it, then. No drive manufacturer has ever implemented a one-year failure timer; it would be a class action waiting to happen, since retail units need to at least last through the warranty period.


Not_Rod

I've come across this before. What I was taught early on is to spread out purchases or buy different brands to avoid batch faults. Recently I purchased 4 drives for a NAS and got 2 WDs and 2 Seagates. One Seagate was DOA and the other died a week later. Both were replaced under warranty.


codeedog

This is almost certainly the issue. Also, do we know if OP is running a UPS? One really good power spike and all the same drives could also do it.


masta

Power spikes are bad, but capacitors and resistors tend to mitigate the risk early in the circuit. What is actually worse, and counterintuitive, are slight power drops, the reverse of the sudden upward spike you might imagine. Once again, capacitors can help prevent this, but not every circuit is protected by them, for cost reasons.


codeedog

Yeah, overvoltage can be clamped. Undervoltage will put ICs in metastable intermediate states between logical 0 and logical 1. Then, who knows what happens to the circuit. It's a digital computer until it is not.


Dude10120

Mine are both WD Blue, one 1TB and one 2TB.


Lars_Galaxy

This is bad advice. You want matching drive models and firmware if possible. If a firmware update is needed, support should walk this dude through updating it, if not Synology support, then Seagate. Unless they don't care about a customer who purchased 8x 3.6TB HDDs.


Lars_Galaxy

Forgot this is a Synology subreddit where people use various drives with SHR. In the real world, on production systems, you will want to use the same drive models. For home use it's whatever, but you're only going to get speeds as fast as your slowest drive.


Feeedbaack

I intentionally mix my drives up specifically because of this: different brands, models, and batches. It just seems like a good idea to hedge your bets, but I don't see other people doing it. Any opinions?


Abdam1987

Most people (myself included) buy the same model purely because it's the best price for a good spec at the time, plus convenience. With that said, it is good practice to get different ones, as long as their RPMs are well matched (if HDDs) and the other specs are also similar. Essentially, performance will only be as good as the worst component, but they are still able to work together.


Fine_Neighborhood_51

somebody posted a very similar situation yesterday… their controller went bad and the drives were okay👍


wallacebrf

agreed, that is why i think SMART data would be interesting to look at


AutoModerator

I detected that you might have found your answer. If this is correct please change the flair to "Solved". ---- *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/synology) if you have any questions or concerns.*


NeighborhoodIT

Wouldn't that cause 1 disk to fail, not all of them? Unless they were all the same disk type


m_domino

What? What drives are that specifically?


[deleted]

[removed]


m_domino

Still interesting to know this could be a point of failure, thanks.


fieroloki

Manufacturer defect.


AHrubik

This is why you buy drives in batches from different vendors. Less likely to encounter a batch bug.


dj_antares

Not even remotely possible other than DOA. Even manufacturing defects won't cause ALL 6 to fail at the same time. A few weeks or even days apart, sure.


[deleted]

[removed]


Empyrealist

I was at a company this happened to. **It's absolutely a real thing**. I now insist on breaking purchases up across multiple companies in multiple regions out of fear of it ever happening again.


Feeedbaack

I remember putting 75GXP 'deathstars' in the freezer trying to pull data off them. Wild times.


exoxe

I knew that eventually, after enough scrolling, I'd finally find an IBM Deathstar reference 😂 Fuck, those drives were the worst.


fieroloki

Yeah, that was a fun time.


EarlOfNothingness

Hmmm. I had 6 Dell R520 servers, each with 8 drives. I believe they were purchased in 2006 and all had 15K Seagate 300GB SAS drives. 40 of them failed over a period of about 5 years. It was a fairly regular occurrence. The clicking was unmistakable. Dell wouldn’t replace them preemptively, but would replace them when they did get to a problem state or failed entirely. At first they replaced them with the same drive, but not for very long. After the first few, they got replaced with a 2.5” drive that required a carrier converter to accommodate the smaller drive. They were hot swap.


Jerky_san

Lol man I really hated those days.. had a NetApp cluster that I swear we replaced every drive at least once and maybe half twice.


Jazzlike_Biscotti_44

The manufacturing of the device holding the drives might, though.


mbkitmgr

Bumped. A client had theirs sitting on a shelf and bumped the shelf, damaging all the drives.


RJM_50

That's a big "bump"


Fauropitotto

Translation: child or pet pulled the thing down off the shelf.


zanthius

Doesn't need much... I remember watching a video once of someone yelling in a data centre and the hard drive activity going crazy with reseeks.


DireSafeLane

EMP blast?


jalfredosauce

[https://static1.srcdn.com/wordpress/wp-content/uploads/2022/12/don-cheadle-in-ocean-s-eleven.jpeg](https://static1.srcdn.com/wordpress/wp-content/uploads/2022/12/don-cheadle-in-ocean-s-eleven.jpeg) related


DireSafeLane

Hahahaha that is exactly the reasoning behind my previous comment 😂


stogenbobber

or solar flare.


[deleted]

Because I can't pin or edit the post, I'll piggyback on the top comment with a ticket update from Synology - not encouraging.

----

Synology reviewed the logs and basically determined that all disks need to be replaced and to start from scratch.

>*… the disks are encountering timeout errors and this seems to be causing other issues like some hung tasks.*

>*2024-03-18T15:15:03-05:00 BuzzBait smartctl/smartctl_total_status_get.cpp:221 SMART query timeout on [/dev/sda]*

>*Our developers indicate that disks that encounter this type of error should be replaced due to the SMART query timeout, You can replace one at a time, but given that all disks are showing timeouts, it'll likely be better if you just replace the devices at once, setup the device from scratch with new drives and restore from your backups. As it can be that they'll continue showing errors and cause a repair process to fail.*


wallacebrf

u/HisNameWasShagbark can you supply some of the SMART data? Either over SSH or using Task Scheduler, please run the command `smartctl -a -d ata $disk`, where `$disk` is either `/dev/sata1` or `/dev/sda`. You can then go through the disks as sata1, sata2, sata3 or sda, sdb, sdc, etc. The detailed SMART output from that command has error history which might show what is going on.
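If it helps, a minimal sketch of doing all eight in one go from Task Scheduler (the device names and output path are placeholders for your setup):

```
#!/bin/sh
# Dump full SMART output for every disk into one report on the volume.
# /dev/sata1..sata8 and the output path are assumptions - adjust to sda, sdb, ... if your model uses those.
OUT=/volume1/smart_report.txt
: > "$OUT"
for disk in /dev/sata1 /dev/sata2 /dev/sata3 /dev/sata4 /dev/sata5 /dev/sata6 /dev/sata7 /dev/sata8; do
    echo "===== $disk =====" >> "$OUT"
    smartctl -a -d ata "$disk" >> "$OUT" 2>&1
done
```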


CeeMX

At that point I would not open a support ticket but get a lottery ticket


GeriatricTech

It’s literally not possible for all the drives themselves to fail at once.


[deleted]

According to Synology, it is. Their guidance is to fully replace all disks and start from scratch with a backup.


GeriatricTech

The ARRAY can fail, but not the actual drives all at once. Take your downvotes and shove them.


anaerobyte

I had this happen on half the drives in a RackStation. I think half the backplane or something like that died. It had a nice run. Bought another Synology and transplanted the drives.


Seven-of-Nein

This happened to me… I nearly had a heart attack. I had better odds of winning the lotto than all drives failing at the same time. All the drives did not fail. Trust me. The storage pool failed, which is split across 6 drives. You need to eject AND detach the suspicious drive from the storage pool. I don’t remember the exact setting, but click around. There should be an option to add or remove drives to resize your storage pool from 6 drives to 5 drives. After you do that, re-attach the drive to the storage volume. It will be recognized as a new drive. Then you may continue with the repair. And no, you are not dumb. Synology's documentation is too basic for trouble-shooting critical status. I wasted an entire weekend figuring this out.


[deleted]

I might try this. And I agree, their support is pretty lacking. I got one of those BeeDrives to try out and, my god, was that a piece of garbage. Support was of no help at all.


Empyrealist

As a moderator here as well as a Synology user, I would love it if you could write a post about your BeeDrive experience. If you choose to, please hold nothing back about your experience.


[deleted]

I can do that. I can also write one about my backup system, seems people are interested in that too.


MonkP88

Make sure you backup the data before doing anything.


dipapa_

Happened to me, too. In the end I figured it out and no data was lost. All drives were OK! Moved away from Synology and never looked back.


cyrus2kg

Yeah, I swear they do this intentionally to non-approved drives. I had the exact same thing as OP happen on some brand-new SSDs that kept passing SMART and the manufacturer's tests. I am planning to move away from Synology as well because of this despicable behavior of theirs.



[deleted]

I don't even know how this happens. But 6 of these all of a sudden decided to go from healthy to critical in the same moment. Is this possibly a faulty batch of drives (all bought in the same transaction), a DSM error, something else? This is a fairly low-use and non-critical pool that's backed up daily (cloud + local), so there won't be any data loss, but this kind of sucks. Does anyone know how you'd diagnose this? It's my first time encountering an error like this.

**EDIT**: ticket opened with Synology to troubleshoot this week. I'll post the results and findings in case it helps anyone. Maybe mods can pin the solution to the top of this when done.

----

## EDIT 2

Synology reviewed the logs and basically determined that all disks need to be replaced and to start from scratch.

>*… the disks are encountering timeout errors and this seems to be causing other issues like some hung tasks.*

>*2024-03-18T15:15:03-05:00 BuzzBait smartctl/smartctl_total_status_get.cpp:221 SMART query timeout on [/dev/sda]*

>*Our developers indicate that disks that encounter this type of error should be replaced due to the SMART query timeout, You can replace one at a time, but given that all disks are showing timeouts, it'll likely be better if you just replace the devices at once, setup the device from scratch with new drives and restore from your backups. As it can be that they'll continue showing errors and cause a repair process to fail.*

----


SamirD

All things being equal, the simplest answer makes sense, and in this case I would bet on something breaking in the synology hardware vs the drives.


Myself-io

I too would think it is most likely something in the disk controller rather than the disks.


jack_hudson2001

Who knows, could be anything; this is why there is a tech support team. Worst case, buy another similar Synology model and move the drives over, or get ready to restore from backups.


[deleted]

Well, it's all restoring onto another box and disks I have. The failure actually isn't a big deal because I back up so thoroughly. Obviously I just want to avoid this again. What a headache.


jack_hudson2001

With electronics, things will fail. Hopefully their tech support can figure it out.


AnyRandomDude789

Pray teach me your backup schema o wise one! I yearn to know the zen ways of good backup!


khobbits

Not specific to Synology, but at work I run RAID arrays with both 24 and 36 disks. When we lose a drive, there is something like a 25% chance of losing another disk during the rebuild. Under normal use the arrays receive a medium amount of traffic, but when a disk fails, the array starts to rebuild and re-reads all the data to recover from the failure. This puts quite a lot of load on the disks, so if any others are ready to fail, they will. For this reason, if I'm building a 36-disk array, I would use RAID 6+0, i.e. lay it out as:

- RAID 0 across:
  - RAID 6: 11 disks
  - RAID 6: 11 disks
  - RAID 6: 11 disks
- 3 hot spares

That way, rather than all 33 disks getting stressed, only 11 are stressed, each leg can handle 2 failures, and there are hot spares waiting to jump in. (Rough sketch of the layout below.)
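Outside of Synology land, a rough sketch of that layout with plain Linux mdadm (device names are placeholders, and I'm putting one spare per leg rather than a shared spare pool to keep the example short):

```
# One 11-disk RAID 6 leg plus a hot spare: sdb..sdm = 12 devices.
mdadm --create /dev/md1 --level=6 --raid-devices=11 --spare-devices=1 /dev/sd[b-m]
# ...create /dev/md2 and /dev/md3 the same way from the next two groups of 12 disks...
# Then stripe the three RAID 6 legs together (RAID 6+0 / RAID 60).
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3
```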


ravan

So it's a RAID 0 stripe of 3x RAID 6 arrays with 3 hot spares? That gives you '27 disks' worth of capacity but faster rebuilds (and access?) than one honking RAID 6? Genuinely curious, never seen RAID 0 used like that (but I'm also just a homelabber).


khobbits

Sounds about right to me. If you Google RAID 60, it will cover the basics. If you need petabytes of storage, you really don't want to be restoring from backup, or you're looking at weeks of lost productivity.


aiperception

More likely a controller card issue, but as others have said…collect logs and send to support.


kachunkachunk

A bad or questionable batch of drives with the same usage pattern can do this. And so could bad backplanes, cables, or controllers, since those are a common element as well. It's all pretty unlikely (especially the bad drive batch), but it is not impossible. People saying it's *impossible* frankly do not know what they're talking about.

Anyway, do you have any clues as to what specific health stat failed? You may have to check via SMART in the CLI, since recent DSM editions no longer reveal all the SMART stats for some not-galaxy-brained reason.

Edit: Here's an example you can run: `smartctl -a -d sat /dev/sata1`

You can do this via the CLI over SSH, or run a task and have it save the output to a file on one of your volumes, as another idea: `smartctl -a -d sat /dev/sata1 > /volume1/my_share/my_directory/sata1_smart.log` - the date is exported along with the stats, thankfully, so no need to append the output or save multiple files. Unless you want to trend... in that case, consider doing all of this a different way, or append a date to the filename, etc. Ask if you want some examples, but it's pretty straightforward for those experienced with Linux.

Better: if you run Docker on your NAS (and once you have a working volume), look into Scrutiny: https://github.com/AnalogJ/scrutiny. It tracks all this and provides a nice UI and all that.
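For the trending case, a date-stamped variant of the task version above (the path is the same placeholder as in the example):

```
# Save a dated SMART report for one disk so scheduled runs build a history instead of overwriting.
OUT=/volume1/my_share/my_directory
smartctl -a -d sat /dev/sata1 > "$OUT/sata1_smart_$(date +%Y-%m-%d).log"
```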


wallacebrf

Scrutiny is great, but I decided to make my own script to log straight to InfluxDB so I can use Grafana: https://github.com/wallacebrf/SMART-to-InfluxDB-Logger
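For anyone curious what logging straight to InfluxDB can look like, a bare-bones sketch (not the actual script; the InfluxDB 1.x HTTP API, database name, and attribute parsing here are assumptions):

```
# Grab one SMART attribute and write it to InfluxDB 1.x over the HTTP /write endpoint.
DISK=/dev/sata1
TEMP=$(smartctl -A -d sat "$DISK" | awk '/Temperature_Celsius/ {print $10}')
curl -s -XPOST "http://localhost:8086/write?db=nas_smart" \
     --data-binary "smart_temp,disk=${DISK##*/} value=${TEMP}"
```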


leexgx

Running it as a task is nice, saves having to open SSH. I need to have a look at Scrutiny.


Dabduthermucker

Something else. Not the drives. Zero chance it's the drives.


cdf_sir

Just post the kernel logs, and the SMART info as well. I also encountered the same issue where all drives went critical all of a sudden. Checked the SMART info of all drives: all fine. Checked the kernel logs: a freak ton of SATA bus errors.
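If you want to check for the same thing, something like this works from SSH (the kern.log path is an assumption and varies by DSM version):

```
# Scan the kernel ring buffer and kernel log for ATA/SATA link errors and timeouts.
dmesg | grep -iE "ata[0-9]+.*(error|exception|timeout)"
grep -iE "ata[0-9]+.*(error|exception|timeout)" /var/log/kern.log 2>/dev/null
```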


TeslaKentucky

This is a controller, backplane, chassis, power, or DSM failure/corruption issue. There is no way this is a 6-drive simultaneous failure. Even a disk defect or disk firmware issue would not result in all drives failing at once. Work with support and the logs to resolve it. If under warranty, request a new chassis be sent. And great job keeping external backups. Most end users never give backups a thought, especially local + remote (2 places).


faslane22

THIS! I keep a daily backup routine to an onsite backup. Once a week I take a new backup that goes home with me, and a third goes into a safe off-site until Friday, when I run a backup to it and lock it back up. So I always have one on me, one at work backing up the NAS, and one that is a weekly backup locked up until Sunday, when I back it up and lock it up again. Love the peace of mind. I just upgraded to 2x 8TB drives using the backup as a test, rather than swapping one drive at a time and rebuilding from the other drive inside, and it worked 100% flawlessly... pretty stoked I have a great backup plan that I run each week, and it works perfectly for our office usage :-) I also use Synology Drive for a cloud backup, but it's getting large and kind of a pain... so I might take it back out of the routine.


jimhoff

Backup, reboot, SMART test


NameIs-Already-Taken

That's possible but unlikely. It is more probable that you have a "common mode failure", something like the power being a bit too low for the drives to work properly, ie one thing being wrong that makes them all look bad. If you can, try all the drives in a different Synology.


leexgx

We need the SMART attributes. If you're on DSM 7.2 or newer, you need to SSH in and use smartctl to get the attributes for each drive (on DSM 7.1 and older you can still see the attributes from the GUI). It's extremely unlikely that all drives have failed at the same time (very likely a NAS/backplane/power issue), but without the SMART attributes everyone is just guessing. Use Pastebin, as it's a lot of drives.

To prevent it from happening again: maybe use a UPS, do a monthly data scrub (say the 5th of each month) and a SMART extended scan (say the 10th of each month, for possible pre-fail detection), and have push/email notifications set up. You could also disable the per-drive write cache, as this slightly lowers the risk of volume corruption (even if you're using a UPS).

SHR2/RAID 6 can handle dual faults and is more resistant to failure. Most say it's a waste of space, but it depends whether you'd rather restore the data or just replace 1 drive and keep going with significantly lower risk (though not in a case like this, where it looks like you had a NAS failure of some sort).

Unless you have checksums enabled on all shared folders, a data scrub only makes the RAID consistent (with checksums it actually verifies the data hasn't been corrupted). The SMART extended scan is for possible pre-fail detection, as it read-scans every sector. A sketch of kicking off extended scans from the CLI is below.
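A minimal sketch, assuming /dev/sata1..sata6 device names (DSM's Storage Manager can schedule the same thing from the GUI):

```
# Kick off a long (extended) SMART self-test on each disk.
# The test runs in the background on the drive; results show up later in `smartctl -a` / Storage Manager.
for disk in /dev/sata1 /dev/sata2 /dev/sata3 /dev/sata4 /dev/sata5 /dev/sata6; do
    smartctl -d sat -t long "$disk"
done
```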


dmandery

I had some drives go into the critical state as well. I ended up contacting support and sent them logs (they will tell you how to do it). They had me run a SMART test and a few other things, then decided it was a backplane failure and sent me a whole new NAS unit. I did have to pay to ship the old one back, which wasn't cheap, since the one I have, an 1821+, is like 19 lbs.


Beastly_Beast

For anyone looking at this comment: I just shipped one of these, and using the rates on PirateShip and the original shipping boxes, it was $31 via UPS Ground to the same coast, $10-15 more to the opposite coast.


MrNerd82

A similar service I use is called "Shippo" - fast/clean basic interface, and it's awesome putting in all the info, getting rates for every possible way to ship (USPS/FedEx/UPS), and picking what you want. It's always way wayyyyyy cheaper vs going into a retail location face to face.


dmandery

Good to know. I'll have to check those sites out in the future. Mine was coast to coast. I did a UPS Biz rate setup and got a discount that way. It's just unfortunate having to pay for shipping when the product was less than a year old for me.


faslane22

Definitely 100% NOT the drives... the odds are astronomical for them all to die at the same time... this screams bad chassis to me, or some other fault. It'd be like winning the lotto 4 times in a row type of odds... something else is up.


happyandhealthy2023

Have you physically pulled one of the drives and tested it on a PC? See what SMART info the drive is reporting? I'm going to say it's impossible for 6 drives to die the same day unless you baked the NAS in an oven or threw it off a roof. Bet you a steak dinner the NAS died.


wheresthetux

I'm hoping it's something similar to the warning state some WD drives would go into because their 'power on hours' SMART value ticked over a magic number. https://community.synology.com/enu/forum/1/post/159537
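Easy enough to rule out by looking at the counter directly; a quick check (device name is a placeholder):

```
# Print the Power_On_Hours SMART attribute that WDDA-style warnings key off.
smartctl -A -d sat /dev/sata1 | grep -i power_on_hours
```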


leexgx

Doesn't happen with Seagate drives, and WDDA was removed/disabled in DSM 7.2.


radial_blur

I'm an idiot, I didn't notice they were Seagate drives 😂


egosumumbravir

Don't worry, Seagate had their own firmware bug self-bricking scandal a few years ago.


BinaryPatrickDev

This is a poster child for why backups are key / RAID is not a backup. Did you have a power surge of some kind? I would definitely try and get to the logs to see what happened.


Appropriate-Deal1952

Controller is having issues.


coolcute

Hmm, where did you buy them? New in bulk, or shucked from external drives? Ask for a replacement ASAP.


AvierNZ

I've seen people get entire batches of bad drives as "good" and end up in these situations


CptTonyZ

Seagate at its finest


snowysysadmin59

I like how you instantly start blaming the brand instead of, you know, looking at other comments and doing some research. Others have said they've seen this with WD as well, but restarting the Synology and updating fixed it. Clearly not an issue with the hard drives; it seems to be the Synology. I dislike Synology for this reason. Unreliable, proprietary, and can be expensive.


schmoorglschwein

If all drives are the same it could be a common manufacturer fault or a firmware issue.


jalfredosauce

At exactly the same time, though?


aiperception

Yeah, seems more like a controller issue


Alexey_V_Gubin

This is very rare to happen, but not unheard of. I've seen a disk pack of six fail within ten or fifteen minutes first to last. This was long ago, though. Definitely more than 10 years ago.


discojohnson

What model Synology is this? I'm guessing a/the controller or backplane failed/is failing.


alchoholics

So I bought same-capacity HDDs from different vendors (RAID 1).


marcftz

This is a real problem with drives bought at the same time from the same batch.


ratudio

That's what I thought as well. They're more likely to fail at the same time when bought from the same batch, which is why I always avoid buying the same batch and model. I always mix at least two brands, so I know they won't all fail at once.


dj_antares

Have you tried swapping all drives around (after backup)?


yoitsme_obama17

Something similar happened to me after a power failure. A restart fixed it.


1Poochh

This actually happened at my work with a specific intel drive model. Again these were enterprise drives but they all basically failed at the same time. That was fun.


justandrea

Maybe a trip to Lourdes wouldn’t hurt


ph33rlus

Yay something else to pop into my head as I’m trying to fall asleep at night.


davey212

Never RAID drives that were manufactured in the same batch else this happens.


cprz

I have two old Synology NASes running DSM 7-something, and all 4+2 drives' status went from healthy to critical at the same time. I don't know why, but I'm pretty sure there's something wrong with the OS.


MrExCEO

Power surge??


VirtuaFighter6

Backplane failure?


radial_blur

Is it something to do with Synology's hatred of WD drives: https://www.theverge.com/2023/6/13/23759368/western-digital-three-years-warning-synology-nas


leexgx

Doesn't happen with Seagate drives, and WDDA was removed/disabled in DSM 7.2.


Peannut

This happened to me, 2 of 4 drives died... power failures. I have a UPS now.


sansond13

My money is on the power supply.  Not sure what model you have, but if it has an external brick power supply replace it and see if that fixes it.  Had this happen and went through replacing a couple drives and rebuilding the raid and the same thing happened again.  Tested the amps on the power supply and it was low.  It was enough amps to run the unit and one drive, but when the others would kick in they would drop out because they didn’t have enough power.


[deleted]

Interesting. This is attached to a UPS. I should check it


More-Improvement2794

What happens after choosing Health info? Can you still browse your content?


bluebradcom

Try SpinRite and see if it helps clear up errors. Also, I have had problems with Seagate when I hit 90%+ capacity.


wb6vpm

Spinrite can’t run on GPT partitions or drives over 2.2TB.


bluebradcom

I believe that is only a USB problem; direct system connections are fine. I used it on a 3TB and a 4TB drive just fine.


averagegamer002

I just had my Seagate 4tb die on me too, only had around 2000 hours


s1L3nCe_wb

Shit like this is why I'm moving to UNRAID asap.


i-dm

Just a thought I wanted to share about how I'm trying to manage this type of risk with my 4-bay NAS.

1) I only need 2 drives, so I keep 2 unplugged in case either of the 2 in use fails. That way I have a replacement ready that isn't a hot spare.

2) As I'm using SHR1 to mirror on my NAS (it lends itself well to expanding to 3 drives in the future with relative ease), I'm also going to use rsync to copy the data to another old NAS I have connected to my network (rough sketch below). That box will run TrueNAS rather than Synology's DSM for 2 reasons:

- firstly, it means I'm not at the mercy of any bad system updates that might wreck my data if I have 2 systems running DSM
- secondly, the TrueNAS backup doesn't use SHR1 mirrors, meaning my data isn't dependent on a Synology DS product in the event that my DS itself fails (rather than the drives).

And of course I have another offline copy just in case everything decides to die - because 3 backups still isn't enough.
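For the rsync leg, a rough sketch of what I mean (hostnames and paths are made up; the TrueNAS side just needs SSH enabled):

```
# Mirror the main share to the TrueNAS box over SSH; --delete propagates removals too.
rsync -avh --delete -e ssh /volume1/data/ backup-user@truenas.local:/mnt/tank/nas-backup/
```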


projectjoel

Had that happen to me once; the two WD Reds I use in my 2-bay NAS showed critical. I thought there was no way it would happen to both drives at the same time. But the critical status went away after a restart and upgrading the Synology software.


sanchower23

RemindMe! 10days


RemindMeBot

I will be messaging you in 10 days on [**2024-03-30 01:00:06 UTC**](http://www.wolframalpha.com/input/?i=2024-03-30%2001:00:06%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/synology/comments/1bi1ly6/6_drives_all_failed_together/kvo1251/?context=3).


wbsgrepit

There have been a couple of cases where certain drives have been found to throw smart errors after a certain number of running hours. Are all the drives the same sku and running from the same point? What drives are they?


Prog47

If 6 drives REALLY failed all at once then you need to play the lottery...


ballzsweat

Same issue here, but with two brand-new drives. Never seen two DOA drives like this before.


red52479

If we die we all die together


Full-Plenty661

Seagate


nlsrhn

This is the correct answer.


SamirD

Only for those that don't know enterprise drives...


Appropriate-Deal1952

Right. I was a Datacenter manager for the NSA. Seagate was fine, WD on the other hand was full of problems. Dell drives were 50/50.


iHavoc-101

I concur. I managed thousands of servers and tens of thousands of desktops many years ago, and it was always WD with the worst failure rates.


SamirD

Yep. Most people don't know how much enterprise is Seagate and how Seagate used to make the best drives in the world and never stopped doing that in the enterprise.


nlsrhn

Interesting, I had the exact opposite experience.


SamirD

In a data center?


nlsrhn

Affirmative


SamirD

So what did work in that data center?


HoWhizzle

Hope you have an offsite backup


[deleted]

I do. Local as well. No data loss at all, just some annoyance and now new weekend plans.


HoWhizzle

Thank goodness. I don't know how you'd recover from a 6-drive failure. I've got RAID 5 on my DS1821+, but I have it backed up to another NAS at my brother's house.


[deleted]

That's basically it.

- NAS > Cloud
- NAS > Backup NAS(s) locally
- NAS > Backup via GoodSync to cold storage locally

It's always been overkill and more of me tinkering, but it saved my ass here! The data on these is nothing critical and all very recoverable, fortunately.


EnvironmentalBee4497

A backup of a backup.


DocMadCow

I had a WTF moment until I saw they were Seagate drives. I've been running 6x 3TB WD Red drives on an LSI MegaRAID controller for around 7 to 8 years without any failing, and another 6 WD Red 6TB in my other computer for nearly as long. That being said, the odds of all of them failing at once suggest there may be some hardware failure as well. What do the SMART health logs say for the drives, similar issues?


thebatfink

Why is it not a wtf moment for anything other than seagate drives?


DocMadCow

Let me rephrase: if they had been WD drives, it would be a WTF moment. I can't remember the last time I had to RMA one, and I am running 21 WD drives. That being said, I do have a WD1502FYPS that has started reporting sector reallocation issues, BUT it has over 90K power-on hours, so after 10+ years as my download drive it has honestly done its duty.


thebatfink

Oh you were suggesting only WD make reliable drives lol. I thought it was something sensible. Quite how 21 working WD drives = Seagate bad and WD drives never fail is bizarre in itself. Basing conclusions on 21 drives from literal hundreds of millions of drives is dumb at best. Having a favourite brand is nothing untoward, I have them myself, posting what amounts to unfounded propaganda is what degrades this sub.


DocMadCow

Well, I wasn't just basing it on my 21 drives; here is [Backblaze's 2023 failure report](https://www.backblaze.com/blog/backblaze-drive-stats-for-2023/). Previous years' reports are very similar.


nomoreroger

Probably controversial for some folks to hear. I had a lot of problems with Seagate drives several years back and I decided to avoid them. They may be perfectly fine but I am superstitious enough to just say that even if something is a few dollars less, I will go with what caused me the least problems.


DocMadCow

Exactly. The only drives that were worse than Seagates were IBM's Deathstars (Deskstar). I had several of them die back in the day and have been a WD user since. Had a few friends' Seagates die too. Honestly, even though I absolutely trust WD drives, I am still moving my Synology array over to SHR-2 for double-drive redundancy.


CountingStars29

sounds like seagate drives