
How does that solve the issue here of new broken versions of packages being published?



Packages breaking is a real problem. I’ve found that explicitly versioning them (e.g. committing their source) has made it much less of one.

I don't understand why publishing a new version of a package would break millions of existing apps.

Do you not depend on a specific version? Do you not use checksums for dependencies?
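The combination of pinning plus checksums can be sketched in a few lines. The lockfile format and the package name here are invented for illustration; real tools (npm, pip, cargo) record the same kind of data.

```python
import hashlib

# Hypothetical lockfile: an exact version plus the sha256 of its artifact.
LOCKFILE = {
    "leftpad": {
        "version": "1.3.0",
        "sha256": hashlib.sha256(b"leftpad-1.3.0 contents").hexdigest(),
    },
}

def verify(name: str, artifact: bytes) -> bool:
    """Reject an artifact whose checksum differs from the pinned one,
    even if it claims to be the same version."""
    return hashlib.sha256(artifact).hexdigest() == LOCKFILE[name]["sha256"]

print(verify("leftpad", b"leftpad-1.3.0 contents"))   # True
print(verify("leftpad", b"something else entirely"))  # False
```

With this in place, a newly published (or silently replaced) upstream artifact simply fails verification instead of breaking your build.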


It seems to me like the better way to solve this would be for distributions to publish which versions of dependencies each package uses, and provide an audit tool that can analyze this list and notify of any vulnerabilities.
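A minimal sketch of such an audit tool, assuming an invented advisory feed and made-up version numbers:

```python
# Hypothetical advisory feed: package name -> versions with known vulnerabilities.
ADVISORIES = {
    "log4j": {"2.14.1", "2.15.0"},
}

# What a distribution might publish for one application: its pinned dependencies.
app_deps = {"log4j": "2.14.1", "slf4j": "1.7.32"}

def audit(deps: dict) -> list:
    """Return the (package, version) pairs that match a known advisory."""
    return [(name, ver) for name, ver in deps.items()
            if ver in ADVISORIES.get(name, set())]

print(audit(app_deps))  # [('log4j', '2.14.1')]
```

The hard part in practice is keeping the published dependency lists accurate, not the comparison itself.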

There are two actions that need to be taken:

1) fix the bug in the vulnerable package (cut a new version)

2) make sure the new, bug-free version is actually in use

1 is the responsibility of upstream (though sometimes distros will do this too, and release it themselves while they try to upstream the patch).

2 has to be done by somebody for every application using the vulnerable package, either a) automatically (for example, by Debian releasing the new log4j version and all dependent applications picking it up at runtime) or b) individually (by each dependent project's upstream updating the dependency, cutting a new Flatpak or container image, and getting users to update it).

a is much more pleasant than b, at least in the universe as currently constituted. We can imagine a world where b is mostly automated so it is about the same or less work, but we don't live in that world yet.


OP here. You're totally right; the end goal is that a new version of the package gets rebuilt automatically whenever a buildpack is updated with a security fix.

Yes, but a sanely designed package manager will at least require that to be a new, versioned release. _Already published_ versions of a package should be immutable.
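The immutability rule is simple to enforce at the registry: refuse any publish that reuses an existing version. A toy sketch (names invented):

```python
# Registry keyed by (name, version); once published, an entry never changes.
registry: dict = {}

def publish(name: str, version: str, artifact: bytes) -> None:
    if (name, version) in registry:
        raise ValueError(f"{name} {version} is already published and immutable")
    registry[(name, version)] = artifact

publish("libfoo", "1.0.0", b"original build")
try:
    publish("libfoo", "1.0.0", b"sneaky replacement")
except ValueError as e:
    print(e)  # libfoo 1.0.0 is already published and immutable
```

This is roughly what the major registries do today: fixing anything requires cutting a new version number.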

What they should do is have a centralized repository of packages but allow multiple versions that function independently. That way nothing breaks when one program installs a newer version.
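The coexistence idea can be sketched as a store keyed by (name, version), so installing a newer version never clobbers what another program depends on. All names here are illustrative:

```python
# Versions live side by side; each program resolves the exact one it asked for.
store: dict = {}

def install(name: str, version: str, code: str) -> None:
    store[(name, version)] = code  # never overwrites a different version

def resolve(name: str, version: str) -> str:
    return store[(name, version)]

install("libfoo", "1.0", "old behaviour")
install("libfoo", "2.0", "new behaviour")
print(resolve("libfoo", "1.0"))  # old behaviour
print(resolve("libfoo", "2.0"))  # new behaviour
```

This is essentially how content-addressed package stores keep multiple versions installed at once.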

That may end up being a net win for all involved: distribution users will get an up to date package direct from the source instead of a 5 year old version (or whatever) patched to hell, and the project will stop getting reports about versions modified by packagers.

So if they need to apply a small patch to a given package they'll sometimes discover that it was built 3 versions ago and no longer compiles? Yuck.

Either I'm misunderstanding or this is a non-problem. You can specify older versions of a package when you install it. You can also manage them with packrat. As long as researchers share their language and package versions, you can fully reproduce their environment. (And the base language is really stable, almost to a fault.)

This is just a bad way for the author to promote their own library for dealing with this. The way their library seems to approach this (using dates instead of versions) seems horrible too - on any given date I can have a random selection of packages in my environment, some of them up-to-date, some of them not. So unless all researchers start using the author's library (and update to the latest versions of everything just before they publish), it's only making things worse and not really solving the problem it claims to solve.


Out of curiosity, is patching and updating all packages also a commonly recurring issue? Are there regularly breaking changes?

I feel like the correct move here is to push the library maintainers to update to the new version. Distribution package modifications should really be seen as temporary.

It's important for package developers to be aware of other software that depends on their interfaces or functionality.

Some cases will slip through, but over a couple of releases these should be gone.

>> Where do new bugs and vulnerabilities come from? When the main developers add features or make changes to existing features that go beyond fixing bugs.

Do you have stats for that?

Semantic versioning was supposed to be the fix for that, but as Rich Hickey has pointed out, that is also broken.
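The core of the critique: a semver range only compares version numbers, not behaviour. A sketch of a caret-style check (illustrative, not any particular tool's implementation):

```python
def semver_allows(installed: str, candidate: str) -> bool:
    """Caret-style rule: same major version, candidate no older.
    Note this only compares the numbers; nothing checks that the new
    code actually behaves compatibly, which is the complaint."""
    imaj, imin, ipat = map(int, installed.split("."))
    cmaj, cmin, cpat = map(int, candidate.split("."))
    return cmaj == imaj and (cmin, cpat) >= (imin, ipat)

print(semver_allows("1.4.2", "1.5.0"))  # True: accepted, yet may still break
print(semver_allows("1.4.2", "2.0.0"))  # False: major bump is rejected
```

The contract is purely a promise by the publisher; when the promise is wrong, the range check happily pulls in the breaking release.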

Everyone is their own server admin these days. We all want "a stable, solid version that has all the latest security fixes", but it's difficult to accept that that might be impossible.


I think this is a key point whose importance many people overlook when pressuring, say, Debian or Ubuntu maintainers for new package versions within a single release.

If your package updates, users of your package update, and their code ceases to compile, that seems... fine? It's the system working as intended. They can just downgrade back to the previous known-good version. It would be much worse if you made a breaking change and consumers' code continued to compile but no longer functioned as expected.

Maybe published package versions should be immutable.

I get the malware concerns but in practice I don't think they are such a big blocker.


The context of the original comment is that you're updating that package on a test server and then testing it.

But sure, just yoloing a single package upgrade can break things, obviously.


If everything goes well, this change will not have been particularly noteworthy... it is going into Unstable now, and if it causes problems there, that is exactly how the process is supposed to work.

Packages are not promoted from unstable into testing unless they have no open issues reported against them. If this concerns you, the right thing to do is to file an issue. Keep it open long enough and this won't make it into the next stable release.


- bundles let you ship less code (tree shaking)

- dependency versions being fixed is a feature. I've had package installs fail because a dependency published a semver-compatible update that broke something. In a web context this would break the app until the dependency pulled the update or the dependent pushed an update disallowing that version.
