I'm guessing they're referring to conflicting transitive dependencies. Everything works fine when you use version 1 on date A. A couple of years down the road, all those dependencies have matured at different rates, and the top-level packages all specify different versions of the same transitive packages, so you have to try to line them all up.
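As a toy illustration (package names made up): treat each top-level package's requirement on a shared transitive dep as a version range; the install only works while the ranges still overlap.

```python
# Toy illustration with made-up packages: each top-level dependency
# constrains a shared transitive dependency to a version range, and an
# install only works if the ranges still overlap.

def compatible(constraints, available):
    """constraints: {package: (min_incl, max_excl)}; returns versions OK for all."""
    return [v for v in available
            if all(lo <= v < hi for lo, hi in constraints.values())]

# On "date A" everything lines up:
print(compatible({"app": (1, 3), "tool": (2, 4)}, [1, 2, 3]))  # -> [2]
# Two years later the ranges have drifted apart:
print(compatible({"app": (1, 2), "tool": (3, 4)}, [1, 2, 3]))  # -> [] (conflict)
```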
On the other hand, the longer you wait to update, the worse it gets. By updating more frequently, you can suss out which packages are moving quickly, slowly, or not at all, and address the problem before you're in update paralysis.
You don't have to update, it's just that developers seem to like to run the latest stuff, and many appear to build production binaries on their laptops (rather than in a controlled build environment, where dependencies are tightly managed, and you could deliberately stick to an older distribution easily enough).
The dependency errors indicate real issues because of the way most distributions handle backwards compatibility: you have to build on the oldest version you want to support. Those errors happen if this rule is violated. For glibc-based systems, the effect is amplified because package managers have become quite good at modeling glibc run-time requirements in the package-level dependencies, and mismatches result in install-time dependency errors. Admittedly, I'm biased, but I strongly suspect that if we magically waved away the glibc dependency issues, most applications still wouldn't work because they depend on other distribution components, something that's just not visible today.
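Tangent, but if you're ever unsure which glibc your build environment actually has, Python's standard library can tell you (just an example of a quick sanity check; it reports whatever the running interpreter was linked against):

```python
# Prints the C library the current executable is linked against,
# e.g. ('glibc', '2.31') on a glibc system; empty strings elsewhere.
import platform
print(platform.libc_ver())
```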
Multiple dependency versions cause problems, but the alternative is pretty bad too. You can get stuck on older versions of a dependency because newer versions have a transitive dependency that conflicts with a different direct dependency.
Rather, their dependencies seem to have had breaking changes for some reason when updating those versions. I don't know, that's what the article claimed.
Because it’s now down to you to manage which versions of which packages go with what versions of what other packages. When you really need to upgrade one and it fails for version compatibility reasons, it can be a genuinely difficult problem.
Either I'm misunderstanding or this is a non-problem. You can specify older versions of a package when you install it. You can also manage them with packrat. As long as researchers share their language and package versions, you can fully reproduce their environment. (And the base language is really stable, almost to a fault.)
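The underlying idea is tiny; here's a sketch in Python just for illustration (packrat does the real work on the R side): write down the exact version of everything installed so the environment can be recreated later.

```python
# Minimal sketch of a "lockfile": record the exact version of every
# installed package so the environment can be recreated later.
from importlib.metadata import distributions

def snapshot(path="versions.lock"):
    with open(path, "w") as f:
        for d in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
            f.write(f"{d.metadata['Name']}=={d.version}\n")

snapshot()
```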
This is just a bad way for the author to promote their own library for dealing with this. The way their library seems to approach this (using dates instead of versions) seems horrible too - on any given date I can have a random selection of packages in my environment, some of them up-to-date, some of them not. So unless all researchers start using the author's library (and update to the latest versions of everything just before they publish), it's only making things worse and not really solving the problem it claims to solve.
If I understand right, they support multiple incompatible versions being used at once via major version upgrades.
What they aren't supporting is incompatible releases being incorrectly advertised as compatible, and then worked around by other packages that test against those versions and explicitly mark them as incompatible, in cases where dependencies share a dependency with different requirements.
Instead, it looks like they are trying to solve this problem outside of the dependency manager, by putting a lot of emphasis on preserving backwards compatibility within major versions as the correct Go practice (and probably also hoping that edge cases get fixed quickly in a collaborative way, since everyone is on the same system).
How well this approach works in practice seems to depend more on the behavior of library authors than on any technical factor; without knowing that, it seems hard to say that either approach is obviously superior.
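If you're curious how Go picks versions without ranges: the algorithm (minimal version selection) is small enough to sketch. This is a toy with made-up module names, not Go's actual code; the key trick is that a new major version is a new module path (foo vs foo/v2), so incompatible majors can be selected side by side.

```python
# Toy sketch of Go-style minimal version selection (MVS): walk the
# requirement graph from the root and keep, per module path, the highest
# *minimum* version anyone asks for. Module names are made up; a major
# bump is a new path ("baz/v2"), so v1 and v2 could coexist.

def ver(s):
    return tuple(int(x) for x in s.split("."))

def mvs(root, graph):
    selected = {}
    work = list(graph.get(root, []))
    while work:
        mod, minv = work.pop()
        if ver(minv) > ver(selected.get(mod, "0.0.0")):
            selected[mod] = minv
            work.extend(graph.get((mod, minv), []))
    return selected

graph = {
    "main": [("foo", "1.2.0"), ("bar", "1.0.0")],
    ("foo", "1.2.0"): [("baz/v2", "2.1.0")],
    ("bar", "1.0.0"): [("foo", "1.1.0"), ("baz/v2", "2.0.0")],
}
print(mvs("main", graph))
# {'bar': '1.0.0', 'baz/v2': '2.1.0', 'foo': '1.2.0'}
```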
With pip, for instance, it often happens that a transitive dependency gets updated and inadvertently breaks your code. That risk follows from the assumption that all packages follow semantic versioning perfectly and keep backward compatibility where they should. This is not the case in practice, and experience has shown that the assumption is unrealistic. A better way is to rely on exact versions of packages (pinned down to the last bit) rather than on semantic versioning.
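If you do pin exact versions, it's also cheap to fail fast when the installed environment drifts from the pins. A sketch, assuming a requirements.txt-style file of exact name==version lines:

```python
# Sketch: refuse to start if any installed package differs from its pin.
# Assumes a requirements.txt-style file of exact "name==version" lines.
from importlib.metadata import version, PackageNotFoundError

def check_pins(path="requirements.txt"):
    for line in open(path):
        name, sep, pinned = line.strip().partition("==")
        if not sep:
            continue  # skip blank/comment/non-exact lines
        try:
            installed = version(name)
        except PackageNotFoundError:
            raise RuntimeError(f"{name} is pinned to {pinned} but not installed")
        if installed != pinned:
            raise RuntimeError(f"{name}: pinned {pinned}, installed {installed}")

check_pins()
```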
One reason is that many developers work with package managers / languages that are INCAPABLE of handling multiple versions of the same dependency. Without that ability, exact pins quickly lead to version deadlocks, which is what semver ranges are supposed to avoid.
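Toy contrast with made-up data: a resolver that allows one copy per package deadlocks on conflicting exact pins, while an npm-style resolver that tolerates duplicate copies just installs both.

```python
# Made-up example: single-version resolution deadlocks where
# multiple-version resolution (npm-style nested copies) does not.
def resolve(requirements, allow_multiple=False):
    if allow_multiple:
        return [(who, dep, v) for who, (dep, v) in requirements.items()]
    chosen = {}
    for who, (dep, v) in requirements.items():
        if chosen.setdefault(dep, v) != v:
            raise RuntimeError(f"deadlock: {dep} pinned to {chosen[dep]} and {v}")
    return chosen

reqs = {"A": ("foo", "1.2"), "B": ("foo", "1.3")}
print(resolve(reqs, allow_multiple=True))  # both copies coexist
resolve(reqs)  # raises RuntimeError: version deadlock
```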
Another reason is that many developers obsess over getting the latest version of their dependencies, whether for fear of security issues or of missing out on the latest and greatest - and they often completely skip retesting the application, since they now have someone to blame if it fails (that other developer should not have pushed a breaking change with a minor version bump!)
I agree with you that it should be the standard to have fixed versions and update your dependencies at a time of your choosing so that everything can get tested properly - but it seems to be an uphill battle.
Huh, in that case I have the converse question -- when specifying dependencies for X, can you not say A 1.x but not 2.x, because 2.x has or is expected to have backwards breaking changes? Otherwise, when A releases 2.0 with some backwards incompatibilities, and all packages that use A as a dependency are rebuilt, don't they break?
Handling these issues (allowing updates within limits) is what I understand to be the point of dependency resolution; I'm trying to understand how you do without it.
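In other ecosystems that's exactly what a range specifier expresses. For example, in Python, checked with the packaging library (which pip itself uses for specifier matching):

```python
# "A 1.x but not 2.x" as a version range, checked with the `packaging`
# library that pip uses for specifier matching.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=1.0,<2.0")
print("1.7" in spec)  # True  -> 1.x updates are allowed
print("2.0" in spec)  # False -> a breaking 2.0 release is excluded
```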
Because dependencies are not perfectly managed / versioned. When you release the end product app, you have 3 options:
- assume that your published version compatibility is correct and let the app break on some future library upgrade
- publish a strict list of dependency versions and keep updating it
- publish a strict list of dependency versions and rely on old versions as long as possible
The first one results in unhappy users; the second in a lack of distro packages, because you rely on deps that are too recent; the third either in a lack of distro packages, because your deps are too old, or in unhappy users, because the deps no longer work on their newer systems. There's no way to satisfy everyone unless distributions themselves become aware of multiple levels of package management.
Because people using those hate updating and sharing their dependencies. A depends on foo version 1.2 and B depends on foo version 1.3 despite foo version 1.4 being available. So naturally A and B both bring their own copy of foo.
> Biggest problem was if there were packages that wanted different versions of the same dependency.
How often and in what contexts would you encounter this version dependency issue? Was there a good solution or would you work around the issue by finding compatible versions?
Older dependencies, of course, might contain security vulnerabilities that have been patched in more recent updates. It's a frustrating choice: stay on old versions and risk exposure to old vulns, or stay up-to-date and risk supply chain attacks.
A consequence of the complexity is that some devs stay below v2 forever because it’s easier.
Backwards incompatible changes pile up in v0.x’s.
It’s then really difficult for consumers who need to install two versions of the same package at once, which can happen with diamond dependencies.
Again, it's not just malicious updates. Normal updates can also introduce security vulnerabilities. For example, say I have a dependency at v1.0, v1.0.1 introduces a security bug unintentionally, and it is eventually fixed in v1.1. If I wait to update until v1.1, I am never vulnerable to that bug, whereas an automatic update to v1.0.1 would have left me exposed. My point is that, in expectation, updating your dependency could be just as likely to remove a security vulnerability as it is to add one.
The problem is that in the real world, despite the promises of semantic versioning, breakage often occurs, which makes it impossible for two packages to depend on the same version of another library even though in theory they should be able to.
If package A depends on package C at version 1.0 but package B depends on C at version 1.1, what version of C will be pulled in?
Dependency management is not as simple as only upgrading one direct dependency at a time after careful review.
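For the single-copy case (pip-style), the usual answer is: the newest version of C that satisfies both A's and B's constraints, or a resolution error if none does. A sketch, where the constraints are assumptions for illustration:

```python
# Sketch of single-copy diamond resolution: pick the newest C acceptable
# to both A and B, or fail. The example constraints are assumptions.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

available = ["1.0", "1.1"]
a_needs = SpecifierSet(">=1.0")   # what A declares for C
b_needs = SpecifierSet(">=1.1")   # what B declares for C

candidates = [v for v in available if v in a_needs and v in b_needs]
print(max(candidates, key=Version) if candidates else "conflict")  # -> 1.1
```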
The NPM ecosystem is particularly difficult to work with as it has deep and broad transitive dependency trees, many small packages, and a very high rate of change.
You either freeze everything and hope you don't have an unpatched vulnerability somewhere or update everything and hope you don't introduce a vulnerability somewhere.