I even used PCPartPicker for this PC build, but for some reason it didn't occur to me to search for candidate drives there to see which other stores might carry them besides my usual ones (I also avoid Amazon for anything computer-related).
For a desktop drive, it shouldn’t matter since you’re probably not going to RAID it, right?
(I guess it might be slower for big transfers, but since you’re probably going to keep frequently-accessed files on an SSD, that shouldn’t be a huge problem either)
I do have them in a ZFS mirror, actually. And I noticed the performance is abysmal, both read and write. So the warning is a good one.
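If you want to quantify just how abysmal it is, `zpool iostat` shows per-vdev throughput and latency. A quick sketch ("tank" is a placeholder pool name, substitute your own):

```shell
# Per-vdev bandwidth and ops, sampled every 5 seconds:
zpool iostat -v tank 5

# Latency histograms (OpenZFS 0.8+); long tails on the write side
# are a telltale sign of SMR drives doing internal rewrites:
zpool iostat -w tank 5
```

On SMR drives you'd typically see write throughput collapse once the drive's CMR-staged cache region fills and it starts rewriting shingled zones.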
For my use case it's just Yet Another Different Place (tm) to keep another copy of some files, so I can tolerate low performance if it means it's cheaper.
Since I'm still learning and don't have enough confidence in my abilities when it comes to storage reliability, I know it's just a matter of time until I mess up and lose a few drives due to a silly mistake on my part.
Hmm, I’ve read that the combination of SMR drives and ZFS is a surefire way to lose data though.
Apparently, if you need to rebuild the array due to losing a drive, it will take so long (and stress the remaining drives so much) that the chances of losing another drive in the process are non-negligible, thereby losing the data on the entire array.
Therefore, it might be wise to consider CMR drives for your use case.
As a general tip, all three manufacturers have specific NAS-class drives, which cost a bit more but are more reliable and usually CMR:
- WD has their Red Plus/Pro line (the non-Plus/Pro Reds are SMR, so avoid those)
- Seagate has their Ironwolf (Pro) line
- Toshiba has their N300 line
edit: ah, you just have them in a mirror. In that case, the magnitude of the risk may be less as you only need to copy data once to rebuild. I’m not sure though as I’m not an expert on the topic.
Related question since we’re on the topic of cache: does it matter how much cache your drive has? I’ve seen drives with various cache sizes for sale, but I’ve never really looked into the difference.
You are technically correct, best kind of correct. But you missed this:
> It certainly seems that way. Earlier this year I was looking for internal 2.5" HDDs with more than 1TB of capacity, and I could only find Seagate drives.
I didn't miss it; I was under the impression that the discussion had steered toward all kinds of hard disks. I don't think you can install the previously mentioned WD Red and Seagate IronWolf into a laptop :-)
By the way, I'm shocked that someone would put an SMR HDD into a laptop. Most laptops already had slow disks back when CMR was the norm; with SMR, I don't even want to think about the results.
> I was under the impression that the discussion steered to all kinds of hard-disks.
Fair assumption.
> [...] laptop [...]
If you said this because of any message from me, then I probably should have given more context.
This was for a desktop PC upgrade, in a Corsair 5000D case. The OS (FreeBSD), user files, and almost everything else live on M.2 drives (two of them, in a ZFS mirror). I just wanted a mechanical drive where I could send ZFS snapshots, local git mirrors, and some other rarely-read, rarely-overwritten files (stuff like videos/epubs/etc. that I occasionally serve over HTTP). The purpose is just having as much storage as cheaply as possible, and more conveniently than an external USB HDD (which I also have and use for backups, but don't want to keep connected to the PC).
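For anyone unfamiliar, the snapshot-shipping part is just `zfs send` piped into `zfs recv`. A minimal sketch, assuming hypothetical pool/dataset names (`zroot/home` on the mirror, `hddpool/backup` on the mechanical drive):

```shell
# One-off full send of a snapshot to the HDD pool:
zfs snapshot zroot/home@2024-01-01
zfs send zroot/home@2024-01-01 | zfs recv -u hddpool/backup/home

# Later, send only the incremental delta since the last snapshot
# (-u keeps the received dataset unmounted):
zfs snapshot zroot/home@2024-02-01
zfs send -i @2024-01-01 zroot/home@2024-02-01 | zfs recv -u hddpool/backup/home
```

Incremental sends keep the write volume low, which also happens to be about the friendliest workload you can give an SMR drive.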
The initial plan was to use 3.5" HDDs because the 5000D case technically supports them, but while actually building it and seeing the PSU cables, my lazy noob self realized what a PITA it would be to actually use 3.5" HDDs (and how ugly it would look if I put them somewhere other than intended).
So the plan changed to using the 2.5" mounts, and here I am now.
I'm currently working on building a NAS (separate from this Corsair 5000D desktop build), so soon-ish I should be able to just use that NAS for these files, and use the free 2.5" mounts for SSDs to increase the system ZFS pool.
Hopefully this additional context clarifies things.
> edit: ah, you just have them in a mirror. In that case, the magnitude of the risk may be less as you only need to copy data once to rebuild. I’m not sure though as I’m not an expert on the topic.
Sadly, the problem still exists even for a simple mirror.
Though it can be mitigated by configuring a slow rebuild rate, giving the SMR drive time to perform its internal housekeeping between writes.
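For reference, OpenZFS exposes tunables that throttle resilver I/O. A sketch using the FreeBSD sysctl spellings (names and defaults vary by OpenZFS version; on Linux the same knobs live under `/sys/module/zfs/parameters/`), so treat these as a starting point rather than recommended values:

```shell
# Spend less time per txg issuing resilver I/O (default 3000 ms):
sysctl vfs.zfs.resilver_min_time_ms=500

# Cap the outstanding scan I/O queued per vdev, in bytes (default ~4 MB):
sysctl vfs.zfs.scan_vdev_limit=1048576
```

Slowing the resilver trades a longer rebuild window for giving the drive idle time to flush its shingled-zone rewrites, which is the failure mode that bites SMR drives during rebuilds.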
>Why would you use an HDD as a boot drive in this day and age anyway?
I didn't purchase it for that, but I thought it would have boot times similar to a regular HDD, which is only a few seconds slower than a SATA SSD.
Nope, this was one of the 8TB WD SMRs, and it takes about 10 times as long to boot as a regular HDD. Still, that's only a few minutes total, and it's easy to just boot it a few minutes before I want to use that particular PC. Once Windows is initially loaded into memory (while the backlog of data is still slowly being written back to the SMR drive), it mostly works like normal after that.
Plus, when you've got terabytes, there's plenty of room for a number of 64GB partitions to install various bootable OSes in, without significantly compromising the free space you have left over for bulk storage in the large partition(s).
https://tweakers.net/interne-harde-schijven/vergelijken/#fil...