Personally, I'd be fine with using Sid as a desktop as long as all data was backed up - but that should be a given for any OS.
I'm a bit of a hypocrite, though, as I use a Mac for daily use. I do run Debian Stable on my servers! With Bullseye nearing completion, it looks like it's about time to bake some new VMs.
Lol, I was going to make a btrfs quip and say I'd rather trust my data to Sid on xfs or jfs than to an alternative stable distro that uses btrfs, cough OpenSuSE cough.
There are features of btrfs that are still considered experimental or unsafe to use, like its RAID-5 implementation.
I personally use btrfs RAID-1 setups and have survived actual device failures without data loss. However, I also perform regular backups, so I'm not overly concerned about "eat my data" bugs in a filesystem either.
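In case it helps picture it, a btrfs RAID-1 pool is just a couple of commands; here's a rough sketch in Python that wraps the usual CLI calls (the device names /dev/sdb, /dev/sdc and the mount point /mnt/pool are placeholders, so treat it as an illustration rather than a tested tool):

    # Illustration only: create a two-disk btrfs RAID-1 pool and scrub it.
    # Device names and mount point are placeholders -- adjust before running.
    import subprocess

    DEVICES = ["/dev/sdb", "/dev/sdc"]   # hypothetical pair of disks
    MOUNTPOINT = "/mnt/pool"             # hypothetical mount point

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Mirror both data and metadata across the two devices.
    run(["mkfs.btrfs", "-m", "raid1", "-d", "raid1"] + DEVICES)
    run(["mount", DEVICES[0], MOUNTPOINT])

    # A scrub re-reads everything and repairs bad copies from the good mirror;
    # running it regularly (say, monthly) is what catches silent corruption.
    run(["btrfs", "scrub", "start", "-B", MOUNTPOINT])
    run(["btrfs", "device", "stats", MOUNTPOINT])

The scrub-plus-device-stats habit is the part I'd stress: RAID-1 only helps if mismatches actually get noticed and repaired before the second copy goes bad too.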
I was under the impression that the data-eating RAID5/6 issues were patched "long ago", except for the write hole, which isn't likely to get fixed anytime soon. That means your data is (probably) safe apart from whatever was being written when the write hole was hit, though your array may still crash and become read-only.
The kernel wiki says the following:

RAID56
Some fixes went to 4.12, namely scrub and auto-repair fixes. Feature marked as mostly OK for now. Further fixes to raid56 related code are applied each release. The write hole is the last missing part, preliminary patches have been posted but needed to be reworked. The parity not checksummed note has been removed.
Plenty of people, including me, have never lost a single bit to btrfs.
Now, is it super fast? (No.) Is it (or any other CoW FS) the right choice for an SSD or a database? (Probably not.) Is it the right choice for data that gets read much more often than it's written, where you'd like to be sure 10 years from now that you haven't lost any of it? (There's a pretty good argument to be made there, I think.)
I've lost* data with just RAID1, thanks to btrfs bugs post-1.0 that corrupted the entire metadata on both disks. There was no good place to get support, nor any instructions on attempting reconstruction of corrupted metadata at the time, and I haven't bothered with it since. Apparently I wasn't the only one who suffered such a loss; as I recall, it was blamed on OS-integrated CoW under certain circumstances. But it happened quite shortly after I adopted btrfs, and it wasn't a particularly weird configuration, so I swore it off and have been happily btrfs-free ever since.
I should have known better, since during my initial evaluation, in search of a better LVM for Linux, I set up a non-RAID root btrfs volume spanning multiple dissimilar disks and lost all the data after an unsafe shutdown (a kernel panic that may have been caused by btrfs in the first place), even though all the disks were still functioning fine. I was an early adopter of ZFS - first under OpenSolaris, then under OpenIndiana, then (and now) under FreeBSD - so I thought I understood what "initial stable release" meant, but it's clear that what ZFS devs consider stable and what btrfs devs consider stable are leagues apart.
* I was able to use forensic tools and low-level fs-agnostic recovery methodologies to get some of the important stuff back, but the btrfs volumes were completely lost.
CoW filesystems will have some write amplification at the filesystem level, but should cause less amplification at the drive level.
Either way, very few workloads get anywhere near wearing out an SSD, and the upside of CoW features almost always outweighs the risk of wearing out a drive. I'd say they fit just fine on an SSD.
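To put rough numbers on "very few workloads get anywhere near wearing out an SSD", here's a back-of-the-envelope sketch (the endurance rating, daily write volume, and amplification factor are all made-up but plausible figures, not specs for any particular drive):

    # Back-of-the-envelope SSD endurance estimate; all figures are illustrative.
    rated_endurance_tbw = 600     # e.g. a 1 TB consumer SSD rated around 600 TBW
    host_writes_gb_per_day = 50   # a fairly write-heavy desktop workload
    write_amplification = 2.0     # rough allowance for filesystem + drive overhead

    effective_tb_per_day = host_writes_gb_per_day * write_amplification / 1000
    years_to_rated_endurance = rated_endurance_tbw / effective_tb_per_day / 365

    print(f"~{years_to_rated_endurance:.0f} years to reach the rated endurance")
    # With these numbers it works out to roughly 16 years, far longer than
    # most drives stay in service.

Even if you double the write volume or the amplification factor, you're still looking at close to a decade before hitting the rated endurance, which is why I don't think CoW overhead is a practical concern for most desktops.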
I tend to agree with this. The times I've gone back to Debian and run testing or unstable, I still found it to be too slow for me. There are certain things where I want to closely track the latest upstream.
I also really found myself missing the Arch wiki, and ended up back there anyway. And I was customizing Debian so much that I might as well have just run Arch.