What is terrible about explicitly specifying the source of the package definition? Seems like a dramatic conclusion to make based on a minor detail of the command line syntax.
Is there a more verbose explanation about why they’re moving away from compiling packages from source? All the explanations I’ve seen make it sound like that’s the obvious direction it should go, but it’s not obvious to me.
Having to use these automatic packaging tools is exactly my gripe. I don’t want to have to rely on whatever scripts someone conjured up if it’s not supported by the distro itself.
I agree it always makes sense to have sources involved for packages that are getting distributed as part of the distro, but what if I want to just deploy my product on a server without the sources?
I love that the download and install section doesn't assume that someone looking to install this specific package already has gcc/git/make available. "Hi, I'm a brand new *nix user on a fresh install of my distro, and I want to install a command to recreate the Sneakers decryption effect" says no one, ever. =)
So many packages make these kinds of assumptions. I tend to find that in those situations, some other salient bit of information is likely to get left out as well. Also, bonus points for not being an npm-install type of thing.
Not the GP, but I note it's difficult to, say, take some code which uses package xyz and substitute the source-compatible package abc without editing every file. For example, try compiling a large project with a custom version of 'os' swapped in (say, to add tracing, or to simulate random I/O failures).
Some package management systems can decouple what packages are called inside the code from what they're called outside.
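Not every ecosystem offers it, but as one concrete sketch of that decoupling: Go modules can redirect an import path to a different, source-compatible implementation from a single place, so the "swap in a patched xyz" case above doesn't require touching every importing file. The module paths below are made up for illustration only.

    // go.mod -- hypothetical module paths, for illustration only
    module example.com/bigproject

    go 1.21

    require example.com/xyz v1.4.0

    // Every `import "example.com/xyz"` in the project now resolves to a
    // patched local copy (e.g. with tracing or injected I/O failures added),
    // without editing any of the importing source files.
    replace example.com/xyz => ../xyz-patched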
Hey, that's totally fine if that's what you want to do, especially if it's for your own convenience. What I'm trying to communicate really has nothing to do with whether anyone is being forced to install anything. My point is that there are easily avoidable problems inherent to pulling in packages hosted elsewhere, and that programmers should consider whether to suggest a third-party package at all for something that can be written by hand in a few minutes. That's all I'm saying. For your own use, this makes a lot of sense. If it were me, I would avoid sharing it, and I hope more programmers move away from relying heavily on other people's packages for tiny units of functionality. But I probably wouldn't have been vocal about that here if I knew your intent with that package (or that you even wrote it, which perhaps I missed somewhere).
Maybe I'm misunderstanding, but isn't part of the point that the packages themselves are declarative and you're not beholden to some maintainer to get stuff? Wouldn't that mean writing your own... whatever it is you write to get software installed?
I tend to agree with this article in some ways .. I've been using Linux since the days of the minix-list, and have over the years been bothered by exactly the sorts of things that are described here .. it seems every year/new release of my preferred distro (Ubuntu Studio) I have to re-learn things that are not well explained or documented.
So I've just kind of gotten used to using the source. Seriously! If I find something I don't get, I build the package from source, and debug it. This is really the only way I've been able to survive as long without tearing my hair out in frustration over the years.
It's a glib response to the problem, but actually it really works.
It's because individual tools usually don't want to be tied to assumptions made by one particular distro. I actively avoid using distro packages for third-party development libraries and such, especially when a good tool for accessing upstream sources (e.g. pip) is available.
I use packages for certain tools and platforms, and libraries if I feel the library is really something I want to be a standard part of the system environment. For example, I am more likely to use the distro package of a python library (if available) if I'm planning to use the library for a system administration task than if I am planning to use it for application development. I'm also likely to use distro packages for things like apache, nginx, postfix, unless I have some case-specific reason not to.
"PPAs are Ubuntu specific both in the sense that nobody else is using them and in the sense that if someone makes a PPA that package depends on a specific Ubuntu release and won't work on any distribution."
Well, you can use them on Debian; other distros (Fedora etc.) have a different model with their own equivalents.
"There needs to be a way for developers to provide distribution agnostic binaries."
Why? With source, the distro maintainers can choose to build it as they like, apply patches if they feel the need, tweak it to run against the library versions they've decided on, and so on. This is what makes Linux distros great: the binaries are built with, and as part of, the system.
"Distros can choose to provide whatever package management tools that consume the developer-produced raw data."
The only times I've ever needed to compile a package from its sources were when I was trying to hack away at it.
I've not had to compile a Linux kernel in almost 8 years.
Your argument is specious. You should seriously stop using it. As for the LSB, that hasn't stopped distributions in any way, shape or form. Debian got rid of their LSB meta-package because they didn't need it.
I think that's because on most Linux-like systems, the distribution's built in package manager fills this role sufficiently. It's hardly perfect, but for many popular libraries you can just install the -dev package or equivalent, then include the headers from the system path and code away.
I wouldn't mind a more dedicated tool though, especially because relying on distribution-specific packages tends to make cross platform support messy (see: heavy reliance on automake tools to correct for platform differences) and because most distributions only include the most popular, widely used libraries, and don't try to support newer or more niche tools.
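To make the parent's "-dev package plus system headers" point concrete (a sketch under stated assumptions, not anything described upthread): assuming the distro's zlib development package is installed (zlib1g-dev on Debian/Ubuntu, zlib-devel on Fedora) and a C toolchain is present, a cgo program picks up the header and library straight from the system paths via pkg-config.

    // demo.go -- assumes the distro's zlib development package is installed
    // (zlib1g-dev on Debian/Ubuntu, zlib-devel on Fedora); run with `go run demo.go`.
    package main

    // #cgo pkg-config: zlib
    // #include <zlib.h>
    import "C"

    import "fmt"

    func main() {
        // zlibVersion() comes straight from the distro-provided header/library.
        fmt.Println("system zlib:", C.GoString(C.zlibVersion()))
    }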
Different distros require different metadata for their packages, but in general, amongst the top 10 distros, almost all metadata in a package is optional. You don't have to specify the URL where it came from or who the author was in order to get a package accepted. Some distros do require a URL, mainly because they build from source to install on a system. Other distros merely accept a source package that bundles the source code, which is built on their build servers and released after being peer-reviewed. But the peer review is a manual process, so it's human-fallible.
As an example, let's compare the way two distros (Fedora and Debian) package an old piece of software: aumix.
Taking a look at this spec file [1] for Fedora, we see two pieces of metadata: a URL to a homepage, and a URL to the software. The homepage URL is not used for packaging at all; it's merely a reference. The URL to the file can be used to download the software, but if the file is found locally, it is not downloaded. And guess what? That source file is provided locally along with the other source files and patches in a source package. So whatever source file we have is what we're building. The spec file doesn't contain a reference to any hashes of the source code, but the sources file [2] in Fedora's repo does.
With Debian we have a control file [3] that defines most of the metadata for the package. Here you'll find a homepage link, which again isn't used for builds. The path to a download is contained in a 'watch' file [4], which is again not referenced if source is provided, and generally only used to find updated versions of the software. There are no checksums anywhere of the source used.
The aumix source actually ships its own packaging file [5], written by the authors. Apparently the URL used there is an FTP mirror, not the HTTP mirror used by the packagers above. Could that be intentional, or a mistake? And could they possibly be providing different source code, especially considering the hosts themselves are different?
It's clear that there's a lack of any defined standard of securely downloading the source used in packages, much less a way of determining if the original author's source checksum is the same as the packager's source checksum. There are several points where the source could be modified and nobody would know about it, before the distro signs it as 'official'.
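None of which would be hard to check mechanically. Assuming upstream published a SHA-256 for the tarball, the packager-side verification is only a few lines; the filename and expected hash below are placeholders, not aumix's real values.

    // verify.go -- sketch of checking a downloaded source tarball against an
    // upstream-published SHA-256; filename and expected hash are placeholders.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        const expected = "0000000000000000000000000000000000000000000000000000000000000000" // placeholder

        f, err := os.Open("aumix.tar.bz2") // placeholder filename
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            log.Fatal(err)
        }

        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
            log.Fatalf("checksum mismatch: got %s, want %s", got, expected)
        }
        fmt.Println("source tarball matches the upstream checksum")
    }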
That's suboptimal for many reasons. Package names are not consistent. Installing multiple versions is usually impossible. Huge pain for library authors. Doesn't integrate with build systems usually (even basic pkg-config support is iffy). The command is OS-specific. You can't usually choose the linking method. Difficult to bundle dependencies. Usually out of date. Etc. etc.
Who is the user? Most "users" never build 99% of what they use from source. Sure, there are a few things they do build and would like control over. But randomly linking against things is the default because it makes the package maintainer's job easier, and it is hard enough as it is to get people to maintain and update packages for distributions.
You should be really careful about using libraries from distribution package managers, because none of them can resist screwing with the source, which oftentimes makes them unusable. Even the vendors themselves say not to screw with the packages, and maintainers do it anyway.