
Now try building a "portable" binary that runs on a version of Linux older than yours.



Easy: statically link everything.

I think you meant a GUI application that runs on a version of Linux older than yours.


Statically link everything? Including glibc?

> Including glibc?

Any reasonable person doing this would use Musl of course.


And then you discover your program can't resolve hostnames because "DNS responses shouldn't be that big".

Can you explain how musl is related to this DNS resolving problem?


But honestly, they really shouldn't.

AppImage is sufficient.

Nope, it still requires at least the glibc version that the AppImage was compiled against. Neither snaps, flatpaks, nor AppImage solves the long-standing glibc versioning issue that plagues portable Linux binaries. The closest I've seen to fixing this issue is https://github.com/wheybags/glibc_version_header

> the closest I've seen to fixing this issue is https://github.com/wheybags/glibc_version_header

Or just compile against an older glibc version. There are plenty of tools to set up sysroots for that, e.g. https://wiki.gentoo.org/wiki/Crossdev
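Roughly, with an illustrative sysroot path (one built with Crossdev, or just an old distro image extracted to disk):

    # link against the older glibc living in the sysroot instead of the host's
    gcc --sysroot=/opt/sysroots/debian-9-amd64 -o myapp main.c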


Trick question: glibc doesn't support that.

Trying to statically link with glibc is a fool's errand.


glibc doesn't support static linking, but you can fully statically link with another libc if that is what you want to do.

Not that it's needed for forward-compatible Linux binaries: newer glibc versions are backwards compatible, so dynamically linking against the oldest one you want to support works fine. It would be nice if glibc/gcc supported targeting older versions directly without having to have an older copy installed, but that's a convenience issue.
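In the meantime, you can at least check what a build ended up requiring, e.g. (myapp standing in for your binary):

    # list the glibc symbol versions the binary's dynamic symbols reference
    objdump -T myapp | grep GLIBC_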


When trying to statically link everything, glibc is not as much of a problem as GPU drivers are, if you want binaries that work on more than one system.

I tried that with Rust, actually. My Manjaro was running a much newer version of glibc than my server, so I had to deal with that.

First I tried to get it to link to a different system library but no package manager is happy with multiple major versions of glibc.

Then I tried musl, but it turns out that the moment you enable musl, several OpenSSL packages used in very common dependencies don't compile anymore. There was something about a custom OpenSSL path I would need to specify and a cert file I'd need to package, but I gave up at that point.
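(For the record, the sort of incantation that seemed to be involved, sketched from memory and never verified, assuming the openssl-sys crate and a musl-built OpenSSL at some path:)

    # rough sketch; paths are illustrative and I never got this working
    rustup target add x86_64-unknown-linux-musl
    OPENSSL_STATIC=1 OPENSSL_DIR=/path/to/musl-openssl \
        cargo build --release --target x86_64-unknown-linux-musl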

The solution was to use a Docker image of an ancient version of Debian with an old version of glibc to build the binary. I have no idea how you'd deal with this crap if your version of glibc is even older, or if you don't have glibc at all; my conclusion was "I guess you can't use the usual HTTP crates then".

Oh, and the "just statically link everything" approach is also terrible for security patches, because most developers don't release security patches with the same urgency as library developers do. GnuTLS had some pretty terrible problems a while back that were quickly resolved with an update, but the most recent versions of some binaries online are still vulnerable because the devs chose to statically link GnuTLS and abandoned the project a while back.

Libraries are an enormous pain point for Linux development and even static linking won't always be there to save you. This is one of the reasons some game developers choose to release a "Linux version" of their game by just packaging Proton/Wine with their executable and making sure their game performs well under the compatibility layer. All the different versions of all the different distributions and all the different flavours are impossible to keep up with.

Linux devs have chosen to ship entire Linux core libraries with their applications in the form of AppImage/Flatpak/Snap/Docker to solve this problem. If static linking solved all problems, Docker wouldn't have taken off like it did.


"Statically link or bundle everything" is how most Windows apps deal with this tho. So if we're comparing Windows and Linux, and saying that Linux packaging is worse, this method can't just be dismissed on security grounds.

Windows developers tend to stuff directories full of DLLs if they need to ship dependencies, they're not statically linked per se.

Regardless, it's incredibly complicated to compare linking behaviour between Windows and Linux. Windows has tons of components built into the API (and maintained by Microsoft) which you'd need an external dependency for on Linux. Microsoft provides interfaces for things like XML processing, TLS connections, file system management, sound libraries, video libraries, rendering engines, scripting engines and more. If there's a vulnerability in WinHTTP, Microsoft will patch it; if you statically link curl, you'll have to fix it yourself.

Of course, many open source developers will statically link binaries because that way they don't have to write platform-specific code themselves, but they only need all those dependencies because your average distro is pretty bare bones if you strip it to its core components. If you write code for Linux, you're not even getting a GUI from your base platform; you'll have to provide your own bindings for either X11 or Wayland!

Most third party Windows software I run is some application code and maybe a few proprietary DLLs that the software authors bought. Sometimes those DLLs are even just reusable components from the vendor themselves. Only when I install cross-compiled Linux software do I really see crazy stuff like a full copy of a Linux file system hidden somewhere in a Program Files folder (GTK+ really likes to do that) or a copy of a modified dotnet runtime (some Unity games).

The big exception to the rule, of course, is video games, but even those seem to include fewer and fewer external components.

Development becomes a lot easier when you can just assume that Microsoft will maintain and fix giant frameworks like the .NET Framework and the Visual C++ runtime (basically libc for Windows) for you. Microsoft even solved the problem of running multiple versions of their C runtime on the same machine through some tricky hard linking to get the right dependencies in the right place. As a result, most Windows executables I find in the wild are actually linked dynamically despite the lack of a library management system.


Hence "... or bundle everything". But what difference does it make from the security perspective? If the app ships its own copy of the DLL, it still needs to do security updates for it - the OS won't.

As far as the OS offering more - it's true, but not to the extent you describe. For example, Windows does offer UI (Win32), but most apps use a third-party toolkit on top of that. Windows does offer MSXML, but it's so dated you'd be hard-pressed to find anything written in the past decade that uses it. And so on, and so forth. I just went and did a search for Qt*.DLL in my Program Files, and it looks like there's a dozen apps that bundle it, including even Microsoft's own OneDrive (which has a full set including QML!).

Even with .NET, the most recent version bundled with Windows is .NET Framework 4.8, which is already legacy by this point - if you want .NET 5+, the standard approach is to package it with your app.
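(E.g. something along the lines of a self-contained publish, which drops the runtime next to your binaries:

    dotnet publish -c Release -r win-x64 --self-contained true

)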

And then there's Electron, which is probably the most popular framework for new desktop apps on any platform including Windows - and it is, of course, bundled.

"Some application code and maybe a few proprietary DLLs" is how things were back in 00s, but it hasn't been true for a while now.


.NET Framework 4.8 will start being legacy when Forms and WPF finally work end to end on the Core runtime.

Heck, the designer still has issues rendering most stuff on .NET 6.


> Windows developers tend to stuff directories full of DLLs if they need to ship dependencies, they're not statically linked per se.

I assure you, there is plenty of static linking going on in Windows land.


Isn't that simply a matter of targeting an older glibc? I am probably missing something though.

Yes, "simply". It's a very fun process. Around 100 times more fun than the onerous

  git config --global core.symlinks true

Anyone who talks nonchalantly about glibc hasn't had their time eaten up by glibc.

Like, it's absolutely a nightmare, but you can eliminate a lot of problems by building on an ancient version of RHEL.
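For example (a sketch; centos:7 standing in for whichever old base you pick, and build.sh for your build script):

    docker run --rm -v "$PWD:/src" -w /src centos:7 /src/build.sh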

It says a lot about Linux development that for cases like these "just install the Linux equivalent of Windows XP in a container and run the tools inside that" is an accepted solution.

It's a solution that works well and is used by loads of developers, but it's still comically silly.


There are other approaches, like https://github.com/wheybags/glibc_version_header or sysroots with an older glibc, e.g. https://wiki.gentoo.org/wiki/Crossdev - you don't need your whole XP, just the system libs to link against.

Sure, it would be nice to have an SDK where you can just specify the minimum version you want to support, but who do you expect to develop such an SDK? The GNU/glibc maintainers? They would rather you ship as source. Red Hat / SUSE / Canonical? They want you to target only their distro. Valve? They decided it's easier to just provide an unchanging set of libraries, since they need to support existing games that got things wrong anyway, and they already have a distribution platform to ship such a base system along with the games without bundling it into every single one.


You can also just cross-compile targeting whatever you want, including RHEL.

I wrote a tool [0] which will take any system and create a cross-compiler toolchain for it; this is what I use to compile for Linux, HP-UX, Solaris, BSD, etc. on Linux.

[0] https://build-cc.rkeene.org/


I actually haven’t heard of this approach. Could you explain more or point me to some further reading if you read this and have a moment?

Use ‘zig cc’ and target the appropriate glibc.
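Something like this, if I have the triple syntax right (the trailing number being the oldest glibc version you want to support):

    zig cc -target x86_64-linux-gnu.2.17 -o myapp main.c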

I was sad to find out "Zig supports only the last three macOS versions". No ncdu2 for me :(

https://github.com/ziglang/zig/pull/10232#issue-1064864004


Apple only supports the last three macOS versions too, so if you're on an older one than that you are not getting security vulnerability patches.

One of Zig's claims to fame is how widely supported/cross-platform their build tools are, and I had high hopes. But they don't publicize this limit of their macOS support -- I found out the hard way. I really appreciate how far MacPorts bends over backwards to keep things working.

I love my ancient machine and have a few 32-bit apps I need, though I guess old hardware isn't quite the excuse it used to be.

https://dortania.github.io/OpenCore-Legacy-Patcher/MODELS.ht...


I'm interested in re-evaluating this policy once we get a bit further along in the project. It could work nicely given that we have OS version min/max as part of the target, available to inspect with conditional compilation. We get a lot of requests to support Windows all the way back to XP too, despite Microsoft's long-expired support.

All this scope creep takes development efforts away from reaching 1.0 however. If we had more labor, then we could potentially take on more scope. And if we had more money then we could hire more labor.

So if y'all want Zig to support a wider OS version range, we need some more funding.


Go tell that to the schools I have to ship software to, which can't upgrade past 10.13 because their Macs are too old, but which don't get enough budget to buy new ones.

Maybe they shouldn't have bought Macs in the first place.

Anything else is a hard sell for art schools, sadly, especially in 2010.

Compile in an old Debian version docker container.

If you want to build binaries for a distro, build in that distro. If that distro has a Docker image, it's as simple as:

    docker run -v "$PWD:/src" olddistro:version /src/build.sh

$dayjob supports distros as old as CentOS 7 and as new as Ubuntu 22.04 this way.

Compiling on one distro and then expecting it to work on another distro is a foolhardy errand. Libraries have different versions, different paths (e.g. /usr/lib vs /usr/lib64 vs /usr/lib/x86_64-linux-gnu/ vs...), and different compile-time configuration (e.g. the openssl engines directory) across distros.


> If you want to build binaries for a distro, build in that distro.

> Compiling on one distro and then expecting it to work on another distro is a foolhardy errand.

This is why Windows, with all its issues, is still relevant in 2022. I got tired of updating my distro and having software stop working. If you are in the happy walled-garden land of the main repository, you're fine. When you need some specialized software, or something that is not being maintained anymore, good luck. And at the end of the day, people just want to get their work done.

This is why Windows, with all its bloat, advertising, tracking, and security issues (no-click RCE in 2022, wtf), is STILL going strong.


CentOS 7 is from 2014. That isn’t very old.

It has a few old things unique to it in our support matrix. It's the only distro with openssl 1.0 so a bunch of API and related things are different. It's also the only distro where systemd socket activation doesn't work if your service has more than one socket, because its version of systemd predates the LISTEN_FDNAMES env var.

Also, it is old enough that it's going out of LTS soon.


CentOS 7 may have been released in 2014, but the software it shipped was already quite old then.

As a datapoint, CentOS Stream 9 [0], which was released in 2021, and which RHEL 9 (released in May 2022) is based on, is already ~60% out of date according to repology: https://repology.org/repository/centos_stream_9.

Also: In computer time, 8 years is "very old". That's longer than the "mainstream support" window for Windows 7 was (from 2009 to 2015), and about as long as the mainstream support window for Windows XP (from 2001 to 2009).

[0]: CentOS "Stream" has a different release model and appears to be a bit of a rolling release as I understand it? But that would cause it to be more up-to-date, not less.


> Compiling on one distro and then expecting it to work on another distro is a foolhardy errand.

Only if you link random libs you find on the system. The base system libs making up a Linux desktop (glibc, Xlib, libasound, libpulse*, libGL, etc.) are all pretty good about maintaining backwards compatibility, so you only need to build against the oldest version you want to support and it will run on all of them. Other libraries you should distribute yourself, or even statically link with their symbols hidden. This approach works for many projects.
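A sketch of that last part (libfoo.a standing in for a bundled dependency you build yourself):

    # statically link the bundled dep without re-exporting its symbols
    gcc -o myapp main.c vendored/libfoo.a -Wl,--exclude-libs,ALL -lX11 -lGL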


Compile with musl inside an Alpine container.

I totally agree; it's practically impossible. Is it a philosophical thing or a technological thing? The GPL is about source code, and going against that will never make much progress.

It is almost always possible to do some relatively simple hacks to make old stuff work, though (LD_PRELOAD, binfmt_misc/qemu, chroot/docker).


ok: `zig build -Dtarget=native-native-musl -Drelease-fast=true`

This is portable to CPUs with the same features running older linuxes. To be portable to CPUs with fewer features you should specify the CPU family with `-Dcpu=foo_bar` which seems to be the equivalent of `-march=foo-bar`.


The most stable Linux ABI is Win32 =)

Not that hard. Since it's not a use case that most GNU/Linux software needs to be concerned about, it's easy to make mistakes and resources are scarce, but once you know what you're doing it's usually not a big deal (except maybe for some specific edge cases). There's lots of old games on Steam that still work on modern distros and new games that work on older ones (and, of course, there's a lot of them that are broken too - but these days it takes only a few clicks to simply run them in a container and call it a day).

It's very hard. Incompatible glibc ABIs make this nigh impossible; there's a reason Steam installs a VC++ redistributable for pretty much every game on Windows. On Linux, Steam distributes an entire runtime environment based on an ancient Ubuntu release exactly to circumvent this problem.

Look no further than the hoops you need to jump through to distribute a Linux binary on PyPI [1]. Despite tons of engineering effort, and lots of hoop jumping from packagers, getting a non-trivial binary to run across all distros is still considered functionally impossible.

[1]: https://github.com/pypa/manylinux


Steam distributes an entire runtime environment because it's a platform that's being targeted by third-party developers, who often make mistakes that the Steam Runtime tries quite hard to reconcile. When all you care about is your own app, you're in a position to compile things however you want and bundle whatever you want with it, at which point it's honestly not that hard. Building in a container is a breeze, tooling is top notch, images with old distros are one command away, testing is easy; in practice I had much more trouble compiling stuff for Windows than for old GNU/Linux distros.

It's easy to compile an entirely new binary for every platform/distro, and it's easy to bundle an entire execution environment along with a single binary using docker, what's hard is compiling a single binary and have it run across all distros and execution environments.

"Across all distros" - sure, that's outright impossible, and for a good reason. "Across all distros that matter" - no, it's not. How do you think Electron apps distributed in binary form work fine across various distributions?

It really isn't that hard to get a single binary that works across glibc-based distros. Just compile against the oldest version of glibc/Xlib/etc. you want to support and bundle all non-system libraries, statically linking them with symbols hidden if possible.

We are not talking out of our ass here - I do this myself for all the software I maintain and it just works. Tons of programs are released this way.


Does having a sysroot with an older glibc count?

https://wiki.gentoo.org/wiki/Crossdev

