> You're running linux (with nix installed) on the bare metal of a mac, and then a VM on top of that running macos (with VFIO passthrough so it's not dog-slow?), is that right?
You got it.
> Performance is pretty reasonable? APFS works fine in this setup?
Works well enough for my use cases. It's certainly less slow and fragile than running Linux in a VM via macOS.
APFS works fine in the VM, but I use HFS+ just because there are more mature tools to poke at HFS+ images than APFS, right now. The APFS FUSE driver works well for read access[1]. There's a closed-source driver that supports writing and encryption, though[2].
> No gotchas with drive encryption? ... actually, if your machine does use drive encryption, which is doing it, linux or mac? I don't know enough about mac hardware to know if there is some mac boot environment that handles encryption and then hands off to the OS (be it macos or linux).
I don't use T2 disk encryption, I just let Linux handle partition encryption. Apparently there's support here[3] for it, though.
> compatible with macOS filesystems (HFS+ and APFS)
How far along is this? I think she's underestimating how hard it is to implement a modern filesystem that won't eat users' data. I've been working on a Linux APFS driver[0] for several years, and it's not fully functional yet. It's a pity that she is working with FreeBSD, or it could have been of use to her.
"When you upgrade to macOS High Sierra, systems with all flash storage configurations are converted automatically. Systems with hard disk drives (HDD) and Fusion drives won’t be converted to APFS. You can’t opt-out of the transition to APFS."
I personally suspect there will be a hidden option to skip the automatic conversion (which I plan to use), but still--this is an aggressive rollout. Very impressive, but also a bit scary.
> The takeaway for me is that I'm OK with what's currently in Linux for the HDDs I use for my backups but I'd probably lose out if I encrypted my main SSD with LUKS.
Yep, when building my latest workstation, I went with a pair of ("regular") SSDs (RAID1) for my data. Later, I decided to add an NVMe for the OS for the additional speed.
I then went and encrypted all of the drives via LUKS, however, which basically killed any additional performance I would've gotten from the NVMe drive. I would have been just as well off with only the SSDs, without the NVMe drive.
Quote: "The SSD is more than fast enough for my needs. Funny thing is that it’s even faster under Windows (3.5 GBps read and 3.3 GBps write)." Also, OP is showing ~2.5 GBps in an image.
Funny thing, I run my Macs in virtual machines (VMware), and the speed is identical to that of Windows (the host OS), while all other OSes (Linux / BSD / other Windows installs) show slower speeds. I wonder why.
> How cool would it be if we had a great GUI for ZFS (snapshots, volume management, etc.). I could buy a new external disk, add it to a pool, have seamless storage expansion.
VxFS, which (according to Wikipedia) is supported on Linux. The HDD is partitioned into several logical volumes. Interestingly, the LVM utilities have the same command names as Linux LVM2, but I don't know if it follows the same on-disk structure.
It would be really handy if Linux supported something like a loopback virtual disk drive into which you could bind whole disk images (instead of just loop block devices to which you can bind only partition images). Maybe there's some (3rd party) kernel module I don't know about for this.
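For what it's worth, modern util-linux `losetup` can already attach a whole-disk image (partition table and all) and expose its partitions, via `--partscan`/`-P`. A minimal sketch, assuming root and a scratch directory:

```shell
# Create an empty whole-disk image and give it a one-partition MBR table
truncate -s 64M disk.img
echo 'type=83' | sfdisk disk.img
# Attach the whole image; -P tells the kernel to scan its partition table
sudo losetup -P --find --show disk.img   # prints e.g. /dev/loop0
# The partition then appears as /dev/loop0p1 and can be mkfs'ed and mounted
```

`kpartx` (from multipath-tools) does much the same via device-mapper on older systems.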
> Have you got a write up somewhere?
Right now I'm just looking around to see what's in the system. There's an ancient C compiler on the system, but it only does K&R syntax. All the system utilities have apparently been compiled on an external build system. There's an ancient version of Samba there. For simple file transfers I made myself a netcat binary by pasting the source code into the terminal and compiling:
cat > nc.c
^D
cc -o nc nc.c
Used that to
cd /
tar cf - . | nc ... ...
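For completeness, the receiving end of that pipe could look something like this (host address and port are hypothetical, and the flags of the hand-compiled netcat may differ from GNU/OpenBSD netcat):

```shell
# Receiver (a modern Linux box), listening on a hypothetical port 9000:
nc -l -p 9000 > hp-rootfs.tar
# Sender (the HP instrument), streaming the whole root filesystem:
cd /
tar cf - . | nc 192.168.0.2 9000   # hypothetical receiver address
```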
HP was nice and left behind source code for the really interesting stuff, i.e. the sources for the userland programs and kernel drivers of the nonstandard, proprietary devices that make this thing a measurement instrument. This seems to have been completely unintentional, but this is what you get if your build system cannot do out-of-tree builds and the developer just slaps the build directory into an installation tarball. I mean: who except for the maintenance guy who's updating the box is going to be able to access that directory? m(
Oh, that covers the source code of several revisions of the software, by which I mean the kernel drivers and the user interface.
Thanks HP! :D
This will of course not compile for Linux. But hey, I don't care. The kernel drivers are easy enough to re-write from scratch over the course of a few weekends. The userland stuff takes much more work though.
> In High Sierra they painlessly replaced the filesystem.
Except that a colleague of mine who was adventurous enough to try it lost all his data. That stopped the rest of us from upgrading (fortunately, if I may say so).
(it might be due to encryption, but it doesn't matter - the "painless" part is clearly not for everybody)
> Rather than saying the article is wrong, can you demonstrate /why/ it is wrong?
I think you might want to re-read my entire comment: note that I'm not arguing that the technical details are wrong, only that they're insufficient to support the sweeping “APFS is unusable” conclusion.
As previously noted, Windows and Linux work the same way and they are used by more people in individual non-English locales than the total number of Mac users. Would you say “NTFS is unusable by non-English users” is a useful statement?
There's plenty of room to say that a particular tool needs improvement, or that people making systems which copy or archive files should check for pathological cases, but it doesn't help anything to overstate the case so broadly.
> To be fair, Apple’s relatively new APFS file system is designed to speed up file copies using a technology Apple calls Instant Cloning. But a win is a win.
Um, no.
While snapshots on ZFS or APFS are wonderful, they don’t help me when I am modifying large files, or compiling a big project.
So I don’t know what to take away from this article.
> People still use hardware RAID? Windows exempted, as there's unfortunately nothing better available for Windows.
I've always wondered whether anyone has gotten fed up with this, and decided to just virtualize Windows Server under Linux in order to feed the Windows Server VM a virtual disk that sits atop all the Linux storage-layer tech (but where otherwise the Windows VM gets all the rest of the computer's resources passed through.)
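People do build exactly this. As a sketch, the Linux side could carve out a ZFS zvol and hand it to the Windows guest as a raw virtio block device; in libvirt terms (pool and volume names are hypothetical) the disk stanza might look like:

```
<!-- The guest sees a plain block device; ZFS handles checksums,
     snapshots and send/receive underneath (zvol path hypothetical) -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/zvol/tank/winsrv-disk0'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The Windows guest needs the virtio-blk driver from the virtio-win package to see the disk.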
(Come to think of it, I've considered doing the same for a Hackintosh at some point. It got too contrived, but I'd like to come back to it one day.)
> Why not just setup backups from inside the VM's, while having the base VM image backed up somewhere (once) as well?
That would be quite a project, compared with backing up everything on the host system as I do now.
I have all sorts of VMs. Some of them are extremely minimal OSes (think router/firewall distros). I have no idea how I would be able to back these up from inside the VM. And even for the VMs where I could do that, why bother? It seems like a lot of work.
By having an extremely fast sector backup running on my host system, I can be sure that all of my VMs are backed up, with no extra effort when I install a new one. I don't have to worry about how I would do a "restore" in any of those specific VMs, I can just restore files on the host OS and know that it will work perfectly.
> Other than its native raid 5/6 story what major features is it lacking in comparison to zfs?
For example, native at-rest encryption. dm-crypt/LUKS is adequate, but has significant performance problems[0] on many-core hardware with highly concurrent storage (NVMe) due to excessive queueing and serialization. You can work around these things by editing /etc/crypttab and maybe /etc/sysctl.conf, but the defaults are pretty broken.
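For the queueing part specifically, cryptsetup 2.3.4+ on kernel 5.9+ exposes flags to bypass dm-crypt's internal read/write workqueues. A hedged /etc/crypttab sketch (device name and UUID are placeholders; benchmark before and after, since the win is workload-dependent):

```
# /etc/crypttab -- skip dm-crypt's workqueues so requests aren't
# re-queued and serialized on a separate kernel thread; mainly
# helps highly concurrent I/O on fast NVMe devices
nvme_crypt  UUID=<your-luks-uuid>  none  luks,no-read-workqueue,no-write-workqueue
```

The same toggles are available on the command line as `cryptsetup open --perf-no_read_workqueue --perf-no_write_workqueue`, or persistently via `cryptsetup --persistent refresh`.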
> I’m curious about the use case for plugging one external drive into multiple computers in 2022, when storage is super cheap and networking is super fast.
I have an external drive for a media library, storing videos I'd like to watch and audiobooks I'd like to listen to. Over the years, it's grown pretty large. I'd like to have read/write access to the library from either a MacBook or a Linux laptop. The Linux laptop is old and might die soon, and I'm not sure whether I'll replace it with another Linux laptop or switch to the MacBook entirely. I know some people recommend getting a NAS drive, but I don't have an appropriate setup for it.
> OS will not actually remove the pages from swap despite them now being in RAM as well
Now the hard part: I don't know for certain, because I'm not that versed in OS virtual memory management, but it does make sense even if it is 1993. This is pretty basic, though, and I doubt it works some other way. I'd be happy if someone more familiar would chime in (with the reason why or why not), but in my experience this is what's happening.
> Well, that's desktop-grade hardware in a server. Might work, but if it doesn't, you get what you asked for.
Extremely often, and it works not just 'quite well' but well enough. Similar servers (literally, just without LUKS) were fine, with > 90% 'SSD health'.
> How does LUKS affect that
Every write is different, because the bytes on the storage are already encrypted. I.e. you write the same bytes to the same block on the FS, but the underlying, encrypted bytes are not the same => new write.
This was one of my concerns when I first encountered this trade-off in Solaris ZFS. That is, you claim that the Solaris ZFS encryption is better in this regard, but it's not.
[1] https://github.com/sgan81/apfs-fuse
[2] https://www.paragon-software.com/us/business/apfs-linux/
[3] https://wiki.t2linux.org/