You bring up valid points about engineering practices and backwards compatibility in general. I do think you're nitpicking a bit when it comes to the issue at hand, though.
For example, for your grievance: is there a way to provide a Docker solution to run the working code under 'emulation'? Is there a way to compile an older version of the kernel to run these programs the way you want? I don't know, but my bet is that the answer is 'yes' to both.
My point is that this wouldn't even really be possible without a FOSS ecosystem. It requires labor from someone who cares enough to fix it, but at least the avenue is open, in the form of accessible standards and code. Under a proprietary standard or closed source, these would have to be reverse engineered, or, in an extreme case, doing so might even be illegal.
I agree. The problem isn't with how the Linux kernel brings up new devices so much as with the fact that adding support for unsupported hardware is a long and difficult process. These kernel maintainers have more important, official contributions to concern themselves with. Reverse-engineered hardware support is tangential to the work that the Linux maintainers do, which is why I'm all for Asahi unshackling themselves from the Linux project. Hell, they might even find sympathizers in the NetBSD/OpenBSD communities, which have always welcomed unsupported systems with open arms.
The issue isn't effort here, but that each backported fix risks breaking something else. People who want all the latest fixes can choose the latest kernel, instead of relying on backports to stable kernels.
100% agree with this. I started my career backporting fixes to old kernels, and it is error-prone, time-intensive, and certainly doesn't catch every bug with security impact.
The focus of the industry should move to shipping up-to-date software in all devices, particularly for infrastructure-level software like this.
The direction people are moving towards is re-hosting and directed emulation, where you run the code in an environment where the analysis tools are better and emulate just enough of the platform that the code thinks it's executing natively. Some of the published work even looks at the expectations encoded in the source to emulate what it expects from the "hardware". Since the HW is fake, testing new configurations is just a config change away. Take a look at [1], [2], [3] for published work. I also have a tool, used at a couple of companies, that can rehost simpler Linux device drivers and embedded firmware without source code modifications. Everything's encoded as DT files, so you can just use the configs already in the kernel tree and even do fun things like simulating physical hardware in a game engine.
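To make the "emulate just enough" idea concrete, here is a toy C sketch (not taken from the tool or papers mentioned above; the register offsets and values are hypothetical): the harness answers the code's MMIO reads with whatever it expects to see, so the code under analysis behaves as if it were running on real hardware.

  /* Toy sketch of "emulate just enough": answer the code's MMIO reads with
     whatever it expects to see. All offsets/values here are hypothetical. */
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define REG_STATUS 0x00u  /* hypothetical "device ready" flag */
  #define REG_DATA   0x04u  /* hypothetical data register */

  /* In a real re-hosting tool these handlers would be wired into the
     emulator's load/store path; here they are just called directly. */
  static uint32_t mmio_read(uint32_t off)
  {
      switch (off) {
      case REG_STATUS: return 0x1;         /* pretend the device is ready */
      case REG_DATA:   return 0xdeadbeef;  /* canned payload for the code under test */
      default:         return 0;           /* unmodeled registers read as zero */
      }
  }

  static void mmio_write(uint32_t off, uint32_t val)
  {
      printf("guest wrote 0x%08" PRIx32 " at offset 0x%02" PRIx32 "\n", val, off);
  }

  int main(void)
  {
      /* A re-hosted, driver-style poll loop: it never notices the HW is fake. */
      while (!(mmio_read(REG_STATUS) & 0x1))
          ;
      mmio_write(REG_DATA, mmio_read(REG_DATA) ^ 0xffffffffu);
      return 0;
  }

Real tools hook handlers like these into an emulator's load/store path instead of calling them directly, but the principle is the same: the fake hardware only has to be as complete as the expectations of the code you're analyzing.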
Thanks to the GPL this is already often the case (at least for the kernel). But vendor code is so abhorrent in quality that upstream efforts are few and far between.
OK then, replace "kernel" in my previous comment with "kernel plus a bunch of lower-level userspace". If anything, this makes things worse not better. It means we're re-introducing the possibility of duplicated effort and gratuitous breaks of compatibility in that lower-level userspace, which is exactly what we were trying to get away from via AOSP.
This makes it more important rather than less to work on an AOSP alternative that's far more in line with existing Linux practices in the mainstream, non-embedded ecosystem. And let's face reality, the chip vendors aren't doing it.
The sad part is the kernel devs can't tell the difference. It doesn't matter if something is open source or proprietary: if they can't take it and relicense it under their GPLv2 license, then it deserves no comfort.
That's not what my original argument was about. So not sure what you're debunking.
I have a problem with backpatching tens of thousands of changes to years-old kernels like 5.4, 5.15, 6.1 or whatever, done by people who don't really understand the changes or their consequences. Not with patching up a 6.6 or 6.7 kernel with a few out-of-tree patches, where they mostly understand what they're doing and can test the limited set of changes they're applying.
I strongly disagree on this. If you really think nobody working on these problems imagines anything better, I'd assert you're being highly insulting to the people working on them, both in the Linux kernel and otherwise.
Okay, but if we're looking at Linux kernel workarounds, then we have a much bigger system than that, so this isn't really something that should ever exist, right?
I work on an embedded Linux device. Sometimes, we do whatever nasty hack we need to make this kernel work on that device -- but those changes aren't portable, aren't maintainable, and don't meet the quality requirements to be in the upstream tree.
There can also be a perception that it's "a waste of time" since "nobody will run another kernel on this anyway".
Maintaining out of tree kernel modules and guaranteeing compatibility is a nightmare.
Partially because, in contrast to the strict backward compatibility guarantees of the Linux userspace interface, there are no such (strict) guarantees for the in-kernel interface.
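As an illustration of what that means for out-of-tree code, here is a minimal sketch of the usual coping mechanism: version-conditional compilation. LINUX_VERSION_CODE and KERNEL_VERSION are the real, standard macros for this; the 6.1 cutoff and the messages are purely hypothetical stand-ins for whatever in-kernel API change actually bit you.

  /* Minimal out-of-tree module showing the usual coping mechanism: version-
     conditional compilation. The 6.1 cutoff below is arbitrary/illustrative. */
  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/printk.h>
  #include <linux/version.h>

  static int __init compat_demo_init(void)
  {
  #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 1, 0)
      /* Here you'd call whatever the in-kernel API looks like on >= 6.1... */
      pr_info("compat_demo: built against a >= 6.1 in-kernel API\n");
  #else
      /* ...and here the older spelling of the same thing. */
      pr_info("compat_demo: built against an older in-kernel API\n");
  #endif
      return 0;
  }

  static void __exit compat_demo_exit(void)
  {
  }

  module_init(compat_demo_init);
  module_exit(compat_demo_exit);
  MODULE_LICENSE("GPL");
  MODULE_DESCRIPTION("Illustration of version-conditional out-of-tree code");

Multiply that pattern by every internal API a real driver touches and by every kernel release you want to support, and the maintenance nightmare follows directly.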
In the end this is great for both Linux and Intel (and Nvidia, AMD, etc.):
- kernel developers can make sure their code doesn't break vendor code; this isn't just about being nice to vendors but about being able to do large internal refactorings which affect many/all kernel modules in a small way (good for both; for not-yet-released and, in rare cases, internal-only hardware, mainly for the vendor)
- kernel developers can see how the kernel is used and where there might be problems with the current kernel interface and design (good for Linux)
- kernel developers can enforce a certain minimal level of code quality and disallow hacks which would lead to severe security issues or incompatibilities (good for Linux)
- it's much easier to make sure that different drivers/kernel modules don't conflict in subtle ways (good for both Linux and vendors)
- customers buying the hardware can often use it from the day of release without needing custom kernels (good for everyone)
- abandoned drivers/modules can be picked up and maintained by 3rd parties allowing longer usage of otherwise no longer supported hardware
Lastly, while companies like Intel do not own Linux, they are some of the main contributors of financial resources, often more than covering any additional costs caused by this (for server hardware and more powerful embedded hardware, Linux strongly dominates the market; it's also one of the main markets for Intel, AMD, and Nvidia, and one of the main building blocks for companies like Google).
Anyway this is good for everyone and in _total_ reduces the amount of work involved in making Linux work with all kinds of hardware.
I think something that's been overlooked is that no one really cares enough to get Linux kernel patches upstream-compatible.
Even if the sources are available, it'd take many man-months of engineering work to get them compatible with mainline HEAD, and even more effort to reach the coding standards required by the kernel maintainers. Manufacturers/OEMs/ODMs don't care, because it won't improve their bottom line to have a current kernel (at least not until a customer wants to run the latest Debian with systemd and udev, which carry certain minimum requirements on the kernel). The Linux kernel community already has too much work on their hands, and I don't see any major company sponsoring the couple million dollars that'd be required for the integration work.
Just look at the myriad of linux-2.6.24 forks. Android handsets, SOHO el-cheapo routers (people still ship 2.4.x kernels for these, LOL), gambling machines (no joke, I actually own a real, licensed slot machine running 2.6.24!)...
As someone who has actually built binaries to run across multiple distros, this doesn't address even half the issues. Very few people are concerned about binaries that run on more than one architecture when distributing binaries that run on more than one distro is a harder problem.
This solution doesn't solve the hard problems, but solves an easy one in an uninspired fashion. Rejecting these patches wasn't a case of holding back innovation, but merely holding back solutions that more experienced developers felt were not appropriate for the platform.
There was a good reason for that, and playing armchair critic after reading a sympathetic story from a developer who is understandably hurt after getting his feature rejected just doesn't help anyone.
The kernel is better managed than most people give it credit for, and it's exactly because of this that hacky incremental solutions get rejected, even when they might legitimately produce a small benefit. Eventually the overall solution to the problem FatELF is trying to solve is going to have to be something different.
I feel sorry for the developer, no one likes getting a feature rejected. But it was the wrong solution and the technical merits must trump other concerns.