Hacker News

> You can typically deploy a Windows binary to targets that use much older versions of Windows, and you cannot say the same for Linux distributions.

This has nothing to do with the OS. A statically linked Linux executable will likely run on any kernel from version 1 to version 4. The issue here is dynamic linking, and Windows DLL hell has a name for a reason.




> Try copying some Linux binaries between a few distros, a few release years apart and tell me again about stability.

The issue is the dynamic libraries (which are handled by distributions in different ways). Try statically compiling a binary like that and you'll find that it will work between distributions and kernel versions.

> Try the same with some Windows apps and compare and contrast.

Because they include their dynamic libraries with their programs, which has its own tradeoffs (patching security issues in shared libraries is much harder, even if you had a package manager on Windows -- which you don't).

> It's all the auxiliary libraries, much larger surface area.

Right, but GP's point was that Linux doesn't have userland-to-kernel stability, and I was saying that's simply not true. It's true that there is no stable upstream driver API, but there are solutions to that problem as well.


> Did you try it? Often, you cannot run even programs built in 2009 on Windows 10.

Yeah, I actually do it all the time. There are some programs that don't work, particularly games, but for the most part it works fine.

> You can run 1995 Linux binaries in Linux, provided a) you have the libraries they were linked against

Including glibc, and you have to jump through the appropriate hoops to ensure the linker can find them, because chances are they aren't in your distro's repositories.


> While the kernel per se is solid, the userland has zero backwards compatibility

That is entirely wrong.

Just because your latest distribution does not provide an old/antique glibc by default does not mean you cannot run it.

It is currently pretty trivial to get a 20-year-old glibc (or whatever library) recompiled under Linux to run your old game and call it a day.

Software like Nix nowadays even makes that surprisingly easy with sandboxing.

That is currently the strength of OSS and something you will never be able to do on Windows. As long as the version used is known and the source is not lost, software lives forever.


> I can use Windows drivers from Windows XP era. Try that on Linux.

Linux has a model where all drivers should live in-tree; if we account for that, then yes, most devices that worked on Linux in 2001 will work on Linux today.

> And I can run any win32 binary, regardless how old is it. Try that on Linux.

Yes, Linux also has excellent compatibility with old win32 binaries. This is partially a joke and partially not.


> If they can make Linux run on a Windows kernel, they can also make Windows run on a Linux kernel.

How are you so sure? What if Windows has a more modular/flexible kernel?


> It does not apply in a scenario where you would statically link to the kernel.

That's not the interpretation everyone else uses.


> In fact in the free and open world of Linux, backwards compatibility is even worse…

I have linux binaries from 2001 that still work fine on recent Linux kernels.


> What? That makes no sense. The question here was why static linking is relevant

No it is not.

1. ninkendo asked how syscalls worked

2. detaro replies that NT software uses ntdll and never does raw syscalls

3. alxlaz non-sequiturs that "the kernel community is big on interface stability and compatibility"

4. quotemstr points out that backwards compatibility does not in any way, shape or form require a stable kernel ABI, that's just a weird linuxism (because it ships as a kernel alone rather than a system)

5. you apparently decide to take the entire thing off on an irrelevant tangent, completely missing the point


> In the specific case of Windows the kernel interface is pretty fiddly (the syscall ids vary between Windows releases)

Interesting - I wouldn't have expected that. Anyone know why that's the case?


> but because the interoperability is missing because it is in the interest of the gatekeeper

A similar issue exists with running Windows binaries on Linux, and worse, hardware vendors limit compatibility to Microsoft's operating systems only.

This needs to change: vendors must be forced to publish specs and the source of any software required to use their product, while Microsoft should be forced to publish any documentation needed to make their stuff run smoothly on other OSes.

Linux, for instance, already does that, and Linux-specific binaries can be run on Windows.


> Fourth, they claim Ubuntu runs on it, but they took the typical Shenzhen approach of putting a modified binary on Mega -- in this case an Ubuntu ISO -- rather than upstreaming their changes or publishing source code.

Well, that makes this a no-go for me. Thanks for the info, seriously!

> No way in hell am I installing a binary download like that as my OS

Um, if you run Windows 10 and install drivers from them, then you're running binary crap. (Not to mention Windows 10 is binary crap, but I digress.)


> If it would run on Linux, you could at least use a newer Linux and it still should work

No, because if the same guys were coding for GNU/Linux, it would mean a closed-source binary compiled against a very old C library that most likely wouldn't even start on modern GNU/Linux.

Or it would try to access kernel features, drivers, or pathnames that no longer exist in modern distributions.

Using *BSD or GNU/Linux for laboratory hardware doesn't mean that the code is made available, or, even if it is, that it is cost-effective to pay someone to port it.

Many of those XP systems actually have code available, at least in the life sciences; the labs just don't want to pay to port the code to new systems.


> It's not a problem in the Windows world that drivers are not open source.

It absolutely is a problem. There is plenty of not-that-old hardware that will only work on previous versions of Windows.

There's also _mountains_ of _terrible_ binary drivers that cause serious stability concerns.


> Except there is a widely accepted standard for binary application distribution: AppImage (https://appimage.org/)

Oh, if only AppImage was widely accepted. Very few developers distribute via AppImage and basically no DEs understand what they are. Not that AppImages need them to, but it'd be nice.

> AppImage was created for precisely this scenario: When you want to distribute a proprietary application without worrying about packaging it for different Linux distributions.

Yes, and it does about as well as could be expected. Which is to say, it kinda mostly works for most distributions that have some level of sanity.

> And for the driver issues: This is being brought up now and then, but comparing the amount of issues I've had with my AMD graphics card on Linux (essentially none) [1] with the amount of anecdotes of Windows driver issues I regularly see in certain forums, I'd say driver issues are a myth by now [...]

Consider that there are an order of magnitude more Windows Desktop users than Linux Desktop users. And I know plenty of people experiencing frequent driver regressions who'd disagree with you.

> On Windows I'd have to look for whatever piece of ad-laden "graphics management suite" I need and then install it as well as regularly update it.

Incorrect. Windows quite often has a driver available via Windows Update, sans overengineered bullshit control panel. It is usually a bit behind the latest-greatest though.


> It probably takes a lot of effort to develop Windows driver, libraries, support for many kernel version etc.

That's easy to solve: "We only support Linux. Testing has only been done on kernel 4.13."


> And as a result, its drivers will be far lower quality and much less stable

I don't buy it. Windows provides stable driver ABIs and works just fine. A lot of the changes to the Linux driver API are gratuitous, and isolating driver development from the whims of the core won't actually hurt.

If anything, a stable ABI will help, since a smaller, well-defined boundary between the kernel core and drivers allows for things like Vulkan-like validation layers, better debugging, better sandboxing, and even transparent thunking to userspace.

Will some drivers be shitty? Of course. But many drivers won't be, and drivers will be less shitty overall when they can spend more time on being good drivers and less time dealing with GFP_FOOBAR becoming GFP_BARFOO.

Conventional wisdom is that backward binary compatibility is some huge unreasonable burden. This conventional wisdom is wrong; if anything, maintaining a stable ABI imposes some discipline that focuses development effort and forces you to modularize your system.


> at the binary driver level there is no stability at all

You are evading the argument; what we were talking about was the stability of APIs with respect to userland code. Linux never promised binary compatibility at the driver level.

Also, Linux has far better stability in hardware support than competing systems. If a relative tells me they have a photographic slide scanner or printer that stopped working with some Windows x.y release, chances are quite good that it continues to be supported by Linux.


> Windows has backwards and forwards compatibility guarantees that Linux userland simply does not have,

Not really. Linux base system libraries are quite good about compatibility. And it's not like Windows is 100% perfect here either.

The main thing making this better on Windows is having an official SDK that allows targeting older versions from current compilers (within limits, rip XP support) while for Linux you get to put that together yourself.

> But that same simplicity, combined with the ludicrously fragmentary nature of Linux Desktop, has meant that I've had a lot more problems getting AppImages that work than I have getting FlatPaks that work.

The problem here is not the fragmentation (which mostly boils down to not everyone being on the same library versions, rather than distros having completely different solutions) but rather broken AppImages or non-portable binaries that don't include things they should, or were compiled against a newer glibc, etc.

The source of that is again that there is no readily available SDK to create non-broken AppImages or portable binaries for Linux, so not all developers will get it right. But instead of working on such an SDK, the big players focus on the overkill solution of shipping essentially an entire userland (except when they can't, as with those pesky graphics drivers).


> > Proprietary Linux drivers have the added downside of not supporting later kernel versions.

> And most likely not supporting future kernel versions.

Huh?

