
> The fact that not all metadata is encrypted

This was one of my concerns when I first encountered this trade-off in Solaris ZFS. That is, you claim that Solaris ZFS encryption is better in this regard, but it's not.




> The hard case is databases.

I can kind of see your point, but I trust ZFS to never lose data, and I trust (in my case) Postgres to never lose data, so the only issue is performance. That varies immensely, but I mostly work on data that compresses well, so I can barely afford not to use ZFS with compression: it saves a ton of space and actually improves I/O performance (if you're I/O bound, compressing your data lets you read and write faster than the physical disks can handle, which is still wild to me). Of course, all of that depends on trusting every part of the system; if I thought ZFS plus Postgres could ever lose data, or that there was a real risk of it causing an outage (say, memory exhaustion), it would be a harder trade to make.
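For anyone curious, this is roughly what that setup looks like. A minimal sketch, assuming an OpenZFS pool "tank" and a dataset "tank/pgdata" (both placeholder names); the recordsize tuning is a common suggestion for Postgres but optional:

    # Enable lz4 compression on the dataset holding the Postgres data directory
    zfs set compression=lz4 tank/pgdata
    # Postgres uses 8K pages; a smaller recordsize than the 128K default is often suggested
    zfs set recordsize=16K tank/pgdata
    # Check how well the data actually compresses
    zfs get compressratio tank/pgdata

The compressratio property is what tells you whether the "read and write faster than the physical disks" effect is plausible for your data.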


> I have some archive data stored on GELI drives; this is a welcome development.

Sure, I have a few LUKS-encrypted drives (having run Linux rather than FreeBSD).

But I'm inclined to move them over to ZFS.

> GELI is a much more straightforward way to deliver an encrypted block device compared to an encrypted Zvol.

Maybe if you need an encrypted block device - but with ZFS standard on Linux and FreeBSD, and experimental support on Windows and macOS, I'm not sure I agree that GELI/cryptsetup is an easier way to deliver an encrypted (at-rest) filesystem.

More to the point - between experimental GELI support on Linux and stable ZFS on Linux and FreeBSD - I would strongly prefer ZFS (or FreeBSD and GELI for accessing archives that are tricky to move over to ZFS for some reason).
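For reference, native encryption in OpenZFS 0.8+ is a per-dataset property rather than a separate block layer. A minimal sketch, assuming a pool "tank" and dataset "tank/archive" (placeholder names):

    # Create an encrypted dataset with a passphrase-derived key
    zfs create -o encryption=on -o keyformat=passphrase tank/archive
    # After a reboot or pool import, load the key and mount
    zfs load-key tank/archive
    zfs mount tank/archive

Compare that with managing a GELI or LUKS container underneath a filesystem; whether it's simpler is exactly the judgment call being argued here.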


> The benchmarks I've seen do not make ZFS look all that great.

The thing about ZFS that actually appeals to me is how much error-checking it does. Checksums/hashes are kept of both data and metadata, and those checksums are regularly verified to detect and fix corruption. As far as I know, it and filesystems with similar architectures are the only ones that can actually protect against bit rot.

https://github.com/zfsonlinux/zfs/wiki/Checksums
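Concretely, the corruption check is driven by scrubs. A minimal sketch, assuming a pool named "tank" (placeholder); scrubs are usually scheduled via cron or a systemd timer:

    # Walk every allocated block, verify checksums, and repair from redundancy where possible
    zpool scrub tank
    # Report any read/write/checksum errors found (and what was repaired)
    zpool status -v tank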

> And as far as I can tell, it has no real maintenance behind it either any more, so from a long-term stability standpoint, why would you ever want to use it in the first place?"

It has as much maintenance as any open source project: http://open-zfs.org/. IIRC, it has more development momentum behind it than the competing btrfs project.


> I did not specifically mention ZFS anywhere in my comment

Wow. This is an exceptionally weak argument. But eye roll, okay, whatever.

> bad memory corrupting data being actively changed remains a problem for any filesystem. If the filesystem is actively changing data it can not rely on anything it is writing to disk being correct if the in memory buffers and other data structures are themselves corrupted.

And yet, that's not the argument you made. This is what makes me think this is bad faith. Just take your lumps.


> ZFS is a categorically different concept than said examples, in that it's distributed.

What? No, ZFS is not a distributed filesystem. It never has been, it almost certainly never will be, and it has little in common with distributed filesystems.

What makes ZFS different is that it is a production-grade, copy-on-write, self-validating Merkle tree. Most of its properties fall out from that. There's nothing distributed there.

I'm saying this in the kindest way possible: please don't write about things that you have zero idea about. You cannot possibly be more fundamentally wrong about ZFS, and nothing you wrote makes any sense. :(


> 1. Author dislikes ZFS because you can't grow and shrink a pool however you want.

I believe this was on the future timeline for ZFS; it required something like the ability to rewrite metadata in place.

The problem is that nobody really cares about this outside of a very few individual users. Anyone at enterprise scale just buys more disks or systems. Anyone actually living in the cloud has to deal with entire systems/disks/etc. falling over, so ZFS isn't sufficiently distributed/fault-tolerant for that anyway.

So you have to be an individual user, running ZFS, in a multiple-drive configuration to care. That's a really narrow subset of people, and the developers probably give that feature the time they think it deserves (i.e., none).
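For context, growing has always been the easy direction, and a limited form of shrinking has since landed. A rough sketch, assuming a pool "tank" and placeholder device names; check your OpenZFS version before relying on removal:

    # Grow: add another top-level vdev to the pool
    zpool add tank mirror sdc sdd
    # Shrink (OpenZFS 0.8+): top-level device removal works for mirror/stripe vdevs,
    # though historically not for raidz vdevs
    zpool remove tank mirror-1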


> Other than its native raid 5/6 story what major features is it lacking in comparison to zfs?

For example, native at-rest encryption. dm-crypt/LUKS is adequate, but it has significant performance problems[0] on many-core hardware with highly concurrent storage (NVMe) due to excessive queuing and serialization. You can work around these things by editing /etc/crypttab and maybe sysctl.conf, but the default is pretty broken.

[0]: https://blog.cloudflare.com/speeding-up-linux-disk-encryptio...
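For reference, the workaround from that post boils down to disabling dm-crypt's internal workqueues. A sketch, assuming a recent cryptsetup (>= 2.3.4) and systemd (>= 248), with a placeholder mapping name:

    # /etc/crypttab - bypass the read/write workqueues so I/O completes in the caller's context
    # <name>      <device>         <keyfile>  <options>
    nvme_crypt    /dev/nvme0n1p2   none       luks,discard,no-read-workqueue,no-write-workqueue

    # Or apply to an already-open mapping without rebooting
    cryptsetup refresh --perf-no_read_workqueue --perf-no_write_workqueue nvme_crypt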


>Did you miss the part where I mentioned "for personal use"?

Since ZFS is simpler to use than your setup and has been used to store 55PB of data without a single bit error since 2012, I don't see why someone should use inferior stuff, even when it's "personal use".

>But many small tools focused on just the functionality I need allows me to build a simpler system overall.

Sometimes monoliths are better, for example the network stack and storage... maybe kernels (big maybe here).


> We tend to think that in a digital world bits are just bits and do not get corrupted — which is decidedly untrue.

That it's not true is pretty much the reason ZFS was created, though lots of people still don't want to hear it, including companies (APFS only does copy-on-write and checksumming for metadata, for instance).


> well the commands are terrible compared to ZFS

Really? I don't think so; I find btrfs usage extremely straightforward and easy to grok. ZFS, on the other hand, has all that confusing lingo about vdevs, etc.

I get that this is subjective but I disagree.
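For anyone weighing the two, the day-one commands side by side (device names and mount points are placeholders):

    # btrfs: make a mirrored filesystem, then carve out subvolumes
    mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
    btrfs subvolume create /mnt/data

    # ZFS: make a pool out of vdevs, then carve out datasets
    zpool create tank mirror sda sdb
    zfs create tank/data

The vdev/pool/dataset layering is the extra vocabulary being complained about; whether it's "confusing" or "explicit" is the subjective part.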


> Not really. I've tried it, and it still has pain points I'd not like to have in my filesystem. It's like ZFS almost a decade ago (and I'm not talking about features)... although ZFS on Linux vs. btrfs on Linux... right now I'd still go with btrfs.

Care to elaborate? I've never tried ZFS but have been very happy with btrfs for my small-time personal usage; I'm wondering why people find it so painful in comparison.


> I've been using ZFS since circa 2008, but it's a tradeoff.

Indeed. I was disappointed by the low quality of the article. A good article on why not to use ZFS would have been an interesting addition, to help users decide.

I've been using ZFS on my home NAS for over a decade and overall it's been a great experience, but as you say, ZFS does have some limitations that make it a poor fit for certain use cases.


> What’s surprising to me is that no file system (even ZFS) or database utilize error correction controls.

Maybe I misunderstand your statement, but ZFS has had error correction from the very first versions.


> What is the purpose of ZFS in 2021 if we have hardware RAID

Hardware RAID controllers predate ZFS by a long time. ZFS is a much more modern design, and because it integrates the whole storage layer it can offer all the features it does, which a RAID controller hiding behind a disk interface cannot.

When ZFS came out, many people (me included) considered that the end of relevance for hardware RAID controllers. I used hardware RAID pre-ZFS but never again after switching to ZFS when Solaris 10 first included it.
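For comparison, the rough ZFS equivalent of a hardware RAID6 array is a raidz2 vdev, except the checksumming and repair happen end to end rather than behind a disk interface. A sketch with placeholder device names:

    # Double-parity pool across six disks (ashift=12 for 4K-sector drives)
    zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf
    zpool status tank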


> Anyone using ZFS in a serious capacity would have both dedicated ARC and ZIL.

I contend that most people using ZFS in a serious capacity do not have a dedicated ZIL.
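For the record, a "dedicated ZIL" usually means a separate log device (SLOG), and a "dedicated ARC" presumably means an L2ARC cache device; the ARC itself always lives in RAM. Adding them looks like this (pool and device names are placeholders):

    # Separate log device for synchronous writes (the SLOG)
    zpool add tank log nvme0n1
    # Second-level read cache (L2ARC)
    zpool add tank cache nvme1n1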


> was his dismissive remarks regarding copy on write within a filesystem

? ZFS is copy-on-write.


> The post was literally about how ZFS compression saves them millions of dollars.

... relative to their previous ZFS configuration.

They didn't evaluate alternatives to ZFS, did they? They're still incurring copy-on-write FS overhead, and the compression is just helping reduce the pain there, no?


> * The proliferation of Linux filesystems such as ext4, XFS, ZFS, btrfs, reiserfs, JFS, JFFS, bcachefs, etc. If any of those filesystems were truly adequate there wouldn't have to be so many.

While I get your point, I would like to point out that ZFS was developed by Sun (now part of Oracle). I've used ZFS for years from a data-integrity and array-mirroring perspective and love it. No other filesystem you mentioned gives me the confidence that ZFS does (maturity, stability, etc.).


> relying on unstable software

Where the hell are you getting this from? What instability has existed in ZFS on Linux? Are you confusing it with BTRFS or something? I think your fears of ZFS are seriously misplaced.

