Obviously a few of these need work, but I've been in the unfortunate position of recovering a Linux system where a busted .so broke almost all of coreutils; cargo still worked, and so did the coreutils alternatives. Static linking is an absolute godsend.
When I started programming, static linking was the only way of packaging software in mainstream computing; if it were so superior, we wouldn't have moved away from it.
Also, although we are talking about open source here, in many OSes and enterprises the distribution of binary libraries is a very important subject, and one that cargo doesn't allow for.
Loved his rant on glibc, etc. breaking stuff. I've been saying this myself for years. Really, static linking is the only way to make a binary for Linux that will work on any distribution.
Dynamic linking is not the only way to deliver security fixes, just one particular, limited, C-oriented solution. Note that dynamic linking is also not sufficient to change C++ templates or C header-only libraries. It shifts all of the code replacement complexity from the OS (just replace a file) onto the language and library authors (ABI compat, fragile base classes, no inlining or generics).
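To make the generics point concrete, here's a minimal self-contained sketch (my own example, not from the original comment): the generic function below is monomorphized, so its machine code ends up inside every calling binary, and swapping out a shared library that supposedly "exported" it would not fix the copies already compiled into downstream programs.

    pub fn clamp_to<T: PartialOrd>(value: T, min: T, max: T) -> T {
        // Each instantiation of this function is compiled into the crate
        // that calls it (the same situation as C++ templates and C
        // header-only libraries), so a .so update cannot patch it later.
        if value < min {
            min
        } else if value > max {
            max
        } else {
            value
        }
    }

    fn main() {
        // Two separate instantiations: clamp_to::<i32> and clamp_to::<f64>
        // are generated right here in the caller, not looked up at run time.
        println!("{}", clamp_to(42, 0, 10));
        println!("{}", clamp_to(3.7, 0.0, 1.0));
    }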
Rust/Cargo gives you Cargo.lock (and more precise build plans if you want), which can be used to build a database of every binary that needs to be rebuilt to update a certain dependency. Distros could use it to identify and update all affected packages. There are also caching solutions to avoid rebuilding everything. It's just something they didn't have to implement before, so naturally distros prefer all new languages to fit the old unbundled-C approach instead.
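As a rough sketch of what querying that kind of database could look like (a hypothetical tool of my own, assuming the `toml` crate; not something distros actually ship today): scan a set of Cargo.lock files and report which ones pin a given crate, i.e. which statically linked binaries need a rebuild when that crate ships a security fix.

    use std::{env, fs};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Hypothetical usage: lockscan <crate-name> <Cargo.lock> [<Cargo.lock> ...]
        let mut args = env::args().skip(1);
        let target = args.next().expect("crate name to look for");

        for lock_path in args {
            let text = fs::read_to_string(&lock_path)?;
            // Cargo.lock is plain TOML with one [[package]] entry per resolved dependency.
            let doc: toml::Table = toml::from_str(&text)?;

            for pkg in doc.get("package").and_then(|v| v.as_array()).into_iter().flatten() {
                let Some(tbl) = pkg.as_table() else { continue };
                if tbl.get("name").and_then(|n| n.as_str()) == Some(target.as_str()) {
                    let version = tbl.get("version").and_then(|v| v.as_str()).unwrap_or("?");
                    // This lockfile embeds the crate, so its binary needs a rebuild.
                    println!("{lock_path}: {target} {version}");
                }
            }
        }
        Ok(())
    }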
With closed-source binaries the argument for static linking is the strongest, IMHO, or at least for shipping all the required shared libraries and pointing LD_LIBRARY_PATH at them. The number of times I've had to muck about to make a five-year-old game work on a current system is somewhat bonkers, and I would never expect someone unfamiliar with shared library details (i.e. a normal non-tech person, or even a tech person not familiar with C/Linux) to figure all of that out.
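For the "ship the shared libraries next to the binary" route, the launcher can stay tiny; here's a sketch under made-up assumptions (the lib/ layout and the game.bin name are invented for illustration):

    use std::os::unix::process::CommandExt;
    use std::{env, process::Command};

    fn main() {
        // Directory the launcher itself lives in.
        let here = env::current_exe()
            .expect("cannot locate launcher")
            .parent()
            .expect("launcher has no parent directory")
            .to_path_buf();

        // Put the bundled libraries first so they win over the system copies,
        // but keep whatever LD_LIBRARY_PATH the user already set.
        let mut ld_path = here.join("lib").display().to_string();
        if let Ok(existing) = env::var("LD_LIBRARY_PATH") {
            if !existing.is_empty() {
                ld_path.push(':');
                ld_path.push_str(&existing);
            }
        }

        // exec() only returns if launching failed.
        let err = Command::new(here.join("game.bin"))
            .env("LD_LIBRARY_PATH", ld_path)
            .exec();
        eprintln!("failed to launch game: {err}");
        std::process::exit(1);
    }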
Yup, and it seems we forgot about static linking, then broke it, and now are reinventing it in new and convoluted ways.
I do think Linux could make smarter use of static linking in places. Weird, esoteric and tiny libraries with unstable APIs make terrible shared libraries and are a constant source of irritation.
Static linking has its place, and so do shared libraries.
It also doesn't help that Rust keeps pushing the idea that "static linking is the only way to go". This is another cargo-cult which I wish hadn't ended up so deeply ingrained in the toolchain, because while it has some merits, it also has significant drawbacks on a typical unix distribution.
Static linking might be good for folks distributing a single server binary over a fleet of machines (pretty much like Go's primary use case), or a company that only cares about a single massive binary containing the OS itself (the browser), but it stops "being cool" very quickly on a typical *nix system if you plan to have hundreds of rust tools lying around.
Size _matters_ once you have a few thousand binaries on your system (mine has just short of 6k): think about the memory requirements of caching vs cold-starting each one, about running them all, and about how patching for bugs or vulnerabilities will pan out. There's a reason dynamic linking was introduced, and it's _still_ worth it today.
And rebuilding isn't cheap in storage either: a typical cargo package will require a few hundred megabytes of disk, likely for _each_ rebuild.
Two days ago I rebuilt the seemingly innocuous weechat-discord: the build tree takes about 850mb on disk, with a resulting binary of 30mb. By comparison, the latest emacs binary from git rebuilds in 1/10 of the time with all features enabled, and its build tree weighs 150mb (most of which is lisp sources+elc) and results in a 9mb binary, not stripped.
It's the distribution and packaging parts that I love about it. It might help with cross-platform distribution as well; while dynamic linking does have advantages, I get really frustrated when an old application won't run on a new kernel due to requiring old libraries that simply can't be installed. Static linking fixes that, and it's why anything I try and write is statically linked for the most part!
Having a stable syscall interface is one thing... not supporting static linking is another... Even on linux, statically linking libc, GTK, ... is not a great idea.
Unfortunately, outside of linux distros static linking seems to be becoming more and more the new norm, especially if you include what is effectively "soft" static linking, i.e. bundling all dependencies, which suffers from the same problems. Rust is only suitable for static linking, .net core statically links and most .net projects will have all sorts of out-of-date dependencies, and npm bundles huge dependency chains that may not be upgraded. Even in the linux distro world there is movement towards tools like docker, where dependencies will often not be patched.
Even the idea of stable versions of libraries with security patches seems to be a dying one.
Correct me if I'm wrong, but they do solve the build problem -- you can build static libraries way easier if you replace all the system stuff libc was doing for you with approaches like unikraft[0] or rumprun[1].