I guess so! I was speaking with someone on Mastodon today who has some WD Reds, and theirs failed at almost exactly the same number of power-on hours as mine (~43k hours).
Given mine were in a ZFS mirror, they would have had almost identical wear, so I suppose we could say that they're predictable (though mine are the CMR drives, which you don't get in the 4 TB models these days, and SMR drives are bad for ZFS).
If I were to replace them with the same drives I'd have a rough idea of when to replace them before they start failing, but I'll be buying larger capacities these days, so those numbers are probably out the window.
I trust the Reds. Although, as someone else just said, it's an anecdote, not real data like the Backblaze report.
I started my NAS in 2012 with two 2 TB Red drives. Later I added two 6 TB Red drives. Sometime around 2016, I think, one of the 2 TB Reds failed and I replaced it with another 6 TB. Then the other 2 TB Red failed about a year later, and I put in another 6 TB so they all matched. I did not claim a replacement even though one failed during its warranty period (I am pretty sure the 2 TB Reds still had a 5-year warranty at that time), because I wanted to replace it with a 6 TB anyway.
Currently all four 6 TB drives are still running, plus a couple of 4 TB Toshibas I grabbed at some point.
So, I don't think that drive failure after 4 or 5 years is so bad. I got my money's worth anyway, and that's why a NAS has redundant drives.
You are lucky, or bought the right brand of drives at the right time.
I had five new Seagate 7200.11s which had an easy life but didn't make it past 22 to 25k hours before they started failing en masse (and that's not even counting the firmware bug that got me). They were in a ZFS RAIDZ1 (RAID5) pool, which survived a second drive starting to fail while rebuilding from the first failure. It was that event that made me love ZFS forever.
Contrast that with my WD Reds, which are at 52k+ hours without errors or issues (no failures in a set of 8). And some HGST refurb drives that are at 70k+ hours (some failures in a batch of 14, but they were refurbs with wiped SMART data, so the failures weren't unexpected).
On the other hand, I've got a WD Caviar Black that's been running strong for more than five years. None of my WD drives have ever failed (knock on wood!). I don't run 100+ drives though, so I'm probably just hiding in the statistical margin.
FWIW (anecdotal), I have 8 shucked WD drives in my NAS approaching 5-6 years' worth of power-on hours. Moderate, read-mostly use case. No data integrity issues (according to ZFS).
Certainly getting near the age where I wouldn't be shocked if failures started to crop up, but I've more than gotten my money's worth out of the shucking savings.
I have some ~50 TB total in a NAS, all WD drives. I bought some 3 & 4 TB WD Red drives around 2014, and they have all been going 24/7 with absolutely no problems at all.
I recently needed to expand (to the point I am at now) and bought & shucked 14 TB WDs, so I'm curious to see whether there will be any long term difference in terms of reliability between the "official" red drives and the shucked whitelabel drives.
To counter that, I've been running 11x 4TB WD Red (WD40EFRX) since 2013 and zero failures, with 10 drives in RAIDZ2 and 1 warm spare.
I have them in a 4U chassis in the attic, with 5 case fans blowing through; two on the rear and three across all the bays. It's a 24 bay box, but I've not yet had pressing need to fill the rest of the bays.
I've had 2/8 Seagate IronWolf drives fail on me slightly outside of the first year I had them. I'm using them in a Synology NAS. I've never had such problems with the WD Reds in my last Synology.
They don't fail that reliably, unfortunately. It's not a light bulb with a certain number of hours on it. See Backblaze's drive statistics.
Edit: I just saw it's about SSDs and not HDDs, but while Backblaze might not have stats on those, I'm fairly sure my comment still applies. Not 100%, but I assume they account for failures due to predictable issues like write wear.
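For what it's worth, the write-wear side is at least easy to keep an eye on yourself; on an NVMe SSD the SMART output includes a "Percentage Used" endurance estimate (device path below is just a placeholder):

    # Endurance check on an NVMe SSD; look for "Percentage Used" in the
    # SMART/Health section. SATA SSDs report a vendor-specific wear
    # attribute instead, so the field name varies there.
    smartctl -a /dev/nvme0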
I also generally tend to keep drives powered up, and they indeed tend to live long. Still, I have to be on alert for failing drives. I had four of the WD30EFRX together in a 2x2 pool (roughly equivalent to RAID10). One day, one of them began showing checksum errors after a scrub, and I replaced it with a Toshiba DT01ACA300.
Once the bad drive was out of the pool, I ran a full self-test on it, which it failed. I did at least get five years of service out of it, but in any case, it's time for me to think about some newer hardware.
For my next NAS, I'm kind of leaning towards using 2x8TB mirrored, with a third drive to be rotated into the mirror, splitting off the rotated-out drive as an offline backup.
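In case it's useful, here's a rough sketch of how I expect that rotation to work in ZFS -- pool and disk names below are placeholders, and I'd want to rehearse the split on a scratch pool first:

    # Attach the third disk to the existing mirror (making it a 3-way mirror),
    # then wait for the resilver to finish before doing anything else.
    zpool attach tank ata-DISK_A ata-DISK_C
    zpool status tank

    # Split that disk off into its own single-disk pool and export it,
    # so the drive can be pulled and kept as an offline copy.
    zpool split tank tank-offline ata-DISK_C
    zpool export tank-offline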
Yeah, agreed 100%. For my RAID mirror setup I use drives from distinct manufacturers for that exact reason -- I can presumably expect them to fail at different rates (hopefully). :)
> I bought some 3 & 4 TB WD Red drives around 2014, and they have all been going 24/7 with absolutely no problems at all.
It may seem like they are running fine, but are they actually? Have you run a zpool scrub or equivalent?
What happens with these old drives is that one dies and then you have to replace it, which is very hard on the other drives as the array is rebuilt. Then while the array is being rebuilt, another drive dies. It's better to replace drives when they are EOL (usually 4 years if running 24/7) rather than waiting until there is a problem.
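For anyone who hasn't done it, the check itself is cheap to kick off (pool name below is a placeholder); the scrub runs in the background, and the status output shows any read/write/checksum errors it turned up:

    # Start a scrub, then check back later; non-zero READ/WRITE/CKSUM counts
    # or repaired bytes on a device are the early warning signs.
    zpool scrub tank
    zpool status -v tank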
Before my current crop of 8 TB WD Reds, I ran Toshiba enterprise drives and they were extremely reliable for me. None of them failed in a 24x7 hardware RAID6 environment after a few years; I only replaced them to upgrade capacity.
Yev from Backblaze here -> Not necessarily, it just means that if we had a set of those drives and replaced them multiple times over a given period, that number can be quite high. We're comparing the drives that failed against the number we have spinning, not the total we've had in service.
Yeah, but it certainly feels in the right ballpark from my personal experience. I've normally got 20 or so drives in use at any time (perhaps fewer spinning-rust drives now that SSDs are more common), and I have a drive go bad occasionally - not every year or two, but certainly every 5 years. So I reckon they're at least in the right order of magnitude.
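As a rough sanity check, annualizing the way Backblaze describes (failures against drive days in service), with made-up numbers in the ballpark of my setup:

    # Toy numbers: ~20 drives spinning for 5 years, 2 failures in that window.
    awk 'BEGIN {
        drive_days = 20 * 5 * 365
        failures   = 2
        printf "annualized failure rate: %.2f%%\n", failures / drive_days * 365 * 100
    }'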
I've had highly divergent experience with WD drives.
My home system using WD Reds has been great. It is larger than a normal home storage box (24 drives), and it gets substantially more use than normal (backing store for a bunch of VMs, mostly). I've replaced one drive in five and a half years.
Contrast with storage servers built for work with WD Golds. Those are 45 drives each and used for database backup workloads. We replace about a drive a quarter (of the 90 total), and this has been pretty consistent for a bit over two years.
Not sure if I got great Reds or if Golds are just terrible, but that's what I've seen.
I can confirm that there has definitely been at least one batch of enterprise SSDs from Intel, a couple of years ago, which failed en masse after a certain amount of powered-on time.