Having a proper test suite and updating while the changes are still small usually leads to better overall product maintenance.
I don't get these claims that you shouldn't upgrade, unless you're also just as worried about changing a line of code and having it all break. Make the changes, test the changes, and deploy carefully, just as you would for anything else.
Yes, an automatic update can break things. Personally, I am happy to have minor version updates be applied automatically if my test suite passes. For anything larger, I at least review the changelog to make sure there aren't any obvious breaking changes and then if the tests pass, I go ahead and deploy.
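A minimal sketch of that policy, assuming a Python project where pytest is the gate; `decide` and `run_tests` are just illustrative names, not any particular tool's API:

```python
# Sketch: auto-apply patch/minor bumps when the test suite passes,
# flag major bumps for a human changelog review.
import subprocess
from packaging.version import Version

def run_tests() -> bool:
    """Placeholder: run whatever test suite gates your deploys."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

def decide(package: str, current: str, candidate: str) -> str:
    cur, new = Version(current), Version(candidate)
    if new.major > cur.major:
        return f"REVIEW {package}: {current} -> {candidate} (major bump, read the changelog)"
    if run_tests():
        return f"AUTO-APPLY {package}: {current} -> {candidate} (tests green)"
    return f"HOLD {package}: {current} -> {candidate} (tests failed)"

if __name__ == "__main__":
    # Hypothetical example bump for illustration only.
    print(decide("requests", "2.31.0", "2.32.3"))
```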
Isn't that the way it should be? Especially in a production environment - not upgrading unless you're sure your core packages don't break your product?
I'm not a fan of blindly updating; I'd rather read the changelogs and decide if I need to update or not, but I sort of understand the utility of it. Most testing happens against the latest release version or HEAD in version control. If the version you're running has different behavior than those two versions and the change wasn't well documented, and the behavior in your version is problematic, it's not likely that other people will discuss it and call your attention to it. Even if people run into that bug, they may just update and move on, rather than post about it being broken.
Depending on the project and the changes involved, sometimes it's beneficial to upgrade frequently and make the required integration changes in small increments rather than, say, once a year in one big change. Sometimes, though, it's much worse to follow every change: some projects churn a lot, and the integration cost is roughly fixed per release, so letting releases pile up means less overall work in that case.
Also, if you get support from upstream (which IMHO is a big if), they'll tend to prefer that you're running a current version. You tend not to get much support when you're on older versions. OTOH, I'm not used to having a support workflow. It's often too hard to engage with external developers, so either the external software works, or you make changes so it works, without discussing it with upstream.
Not just from a vulnerability perspective but also from a dev and testing perspective. Constant updates make testing far more involved for every release, since you can't just focus on the parts the dev work would have affected; you also need to thoroughly test everything that calls into the updated dependency. But if you don't update enough, once you really do need to update for security or feature reasons, you may have a nightmarish process ahead of you, dealing with breaking changes and transitive dependency incompatibilities.
Usually, if I have a production project, I avoid jumping across major releases.
As you have seen, they backport bug fixes to previous stable releases, which is awesome, and IIRC the security fixes go even deeper down the tree.
No one forces us to upgrade, but I do love upgrades; when they're well done like this one, they're usually full of things that help our coding life and improve performance. They're always welcome!
You should definitely tug on things when you have a controlled test environment and the time to explore what-ifs.
Much like companies should try to replace their own products (before a competitor does), infrastructure teams need to force “predictable” upgrades in a controlled environment on a regular basis. For example: look at your dependencies, imagine what upgrades are likely to be required in the near future, and try making those upgrades on test systems to see what could go wrong.
That approach achieves three things. One, since you're not in emergency mode and you've used a test environment, any problems you do uncover are not going to cause a crisis. Two, if you do this semi-regularly, you're likely to see only minor issues each time. Three, exploratory upgrades give you a lot of time to fix problems (whether that's time for your own developers to make changes, or time to wait for an external open-source project or vendor to make changes for you).
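A rough sketch of that kind of scheduled dry run, assuming a Python project pinned via requirements.txt: build a throwaway virtualenv, install whatever upstream currently ships, and run the tests there. The helper name and file path are placeholders.

```python
# Sketch: try "the upgrades that are likely coming" in a scratch environment
# so any breakage is a heads-up, not a production incident.
import re
import subprocess
import sys
import tempfile
from pathlib import Path

def exploratory_upgrade(requirements: str = "requirements.txt") -> int:
    with tempfile.TemporaryDirectory() as tmp:
        venv = Path(tmp) / "venv"
        subprocess.run([sys.executable, "-m", "venv", str(venv)], check=True)
        pip = str(venv / "bin" / "pip")
        # Strip version pins so pip resolves whatever upstream currently ships.
        names = []
        for line in Path(requirements).read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            names.append(re.split(r"[<>=!~;\[ ]", line, maxsplit=1)[0])
        subprocess.run([pip, "install", "--upgrade", "pytest", *names], check=True)
        # Run the tests inside the scratch environment.
        return subprocess.run(
            [str(venv / "bin" / "python"), "-m", "pytest", "-q"]
        ).returncode

if __name__ == "__main__":
    sys.exit(exploratory_upgrade())
```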
I agree, at least for anything that's hard to test and has some inertia, like a larger code base.
For example, most of our configuration management is set up to just pull in the latest cookbooks from upstream during tests, and as long as all integration tests across all projects succeed, they get uploaded to our chef-server.
People argued that it would be annoying because things would break all the time. And yeah, things break with updates, though the opscode community is remarkably disciplined about semver. But that's what we have tests for.
And honestly: I'd rather deal with one broken update per day than 300 broken updates once per year. One bad update usually requires some nudging and that's it. 300 bad updates at once are a fully blown nightmare and you'll need days just to figure out what is even going on.
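For the curious, roughly what that pipeline looks like, assuming Berkshelf and Test Kitchen are the tools in play (the exact commands, flags, and server setup will vary by environment):

```python
# Sketch: pull latest upstream cookbooks, run integration tests,
# upload to the chef-server only if everything is green.
import subprocess
import sys

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> int:
    run(["berks", "install"])   # resolve upstream cookbooks
    run(["berks", "update"])    # bump to the newest versions the constraints allow
    try:
        run(["kitchen", "test"])  # converge + verify in throwaway instances
    except subprocess.CalledProcessError:
        print("integration tests failed; not uploading to the chef-server")
        return 1
    run(["berks", "upload"])    # only green cookbook sets reach the chef-server
    return 0

if __name__ == "__main__":
    sys.exit(main())
```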
Incremental updates may require more time in total, since an API may be refactored multiple times over many versions. However, the confidence you get from moving incrementally is well worth it IMHO. If you don't have an extensive enough test suite, or your QA process is poor or missing (or both!), a big-bang upgrade is going to be both extremely painful and very error-prone.
It's worthwhile to keep up to date. It's probably not worthwhile to upgrade ASAP after a release, but you don't want to wait too long.
This is just as true for client side upgrades, for example in the App Store. As an app developer you're upgrading every client out there, and it's much harder to revert.
Just as you test on the client side, you should have a staging cycle which tests releases before you deploy them.
Wouldn't you test before upgrading packages in production? And usually you'd want to schedule any upgrades so that the next day or two has coverage from someone who can deal with any issues that arise.
Usually you aren't updating production servers unless it's a security patch, fixes a problem you have, or adds a feature you want/need. Even then, usually you have a test environment to verify the upgrade won't bork the system
You're conflating unattended-upgrades (server mutability, hard to roll back) with automated patching in general. Do automated patching, but also run the changes through your CI so you can catch breaking changes and roll them out in a way that's easy to debug (you can diff images) and easy to revert.
I bet when you update your software dependencies you run those changes through your tests, but your OS is a giant pile of code that usually gets updated differently and independently, for mostly historical reasons.
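A toy illustration of the "diff images" point: if each build captures a package manifest (say from `dpkg-query -W`), the delta between two releases becomes an explicit, reviewable artifact. The file names below are made up.

```python
# Sketch: diff the installed-package sets of two image builds.
from pathlib import Path

def load(manifest: str) -> dict[str, str]:
    """Parse 'name version' lines, e.g. as emitted by `dpkg-query -W`."""
    pkgs = {}
    for line in Path(manifest).read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2:
            pkgs[parts[0]] = parts[1]
    return pkgs

def diff(old: str, new: str) -> None:
    a, b = load(old), load(new)
    for name in sorted(a.keys() | b.keys()):
        if a.get(name) != b.get(name):
            print(f"{name}: {a.get(name, '(absent)')} -> {b.get(name, '(absent)')}")

if __name__ == "__main__":
    # Hypothetical manifest files captured from two image builds.
    diff("image-2024-05.manifest", "image-2024-06.manifest")
```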
If you upgrade regularly, maintain sufficient business-logic testing to catch breakage, and regularly audit your dependencies for those which are no longer maintained so that they can be ripped out and replaced with maintained alternatives, then you're fine.
If you skip any one of those, then your full-time job is no longer about shipping features/value; it's just trying to get the damned thing to work.
I'm always surprised at how often developers freeze a set of versions and leave them for a lifetime. In my past three companies I've been the only one interested in pushing dependency updates on a regular basis. I always start with a massive backlog and end up having to incrementally update from the Stone Age to $today. Once that's done and the tests pass, it's easy to keep things up to date: you end up with single-digit changes every month instead of ~100 every year.
Do it this way and save yourself the pain of zillions of updates when you HAVE to bump a package for a CVE.
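A small sketch of that monthly audit for a Python project: `pip list --outdated --format=json` reports each package's current and latest version, which you can group by bump size to see how big the backlog is getting.

```python
# Sketch: list outdated packages and group them by how big the jump is.
import json
import subprocess
from collections import Counter
from packaging.version import Version

def outdated() -> list[dict]:
    out = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def bump_size(current: str, latest: str) -> str:
    cur, new = Version(current), Version(latest)
    if new.major > cur.major:
        return "major"
    if new.minor > cur.minor:
        return "minor"
    return "patch"

if __name__ == "__main__":
    counts = Counter()
    for pkg in outdated():
        size = bump_size(pkg["version"], pkg["latest_version"])
        counts[size] += 1
        print(f'{pkg["name"]}: {pkg["version"]} -> {pkg["latest_version"]} ({size})')
    print(dict(counts))
```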
It is not hard to keep up with changes, and it's easier to do them incrementally than to face one big jump that you never find time for, ending up versions behind with huge technical debt.
Once your project is deployed, then think about sticking with a specific version. Development should be on the bleeding edge!
Have you never worked on a legacy product with lots of paying customers? The not-so-hard-to-imagine case is where updating your code to be Pure and Wonderful would have a negative impact on thousands or millions of existing deployments.
Still doesn't fix the fundamental problem though.
Let's say you automatically install updates on a staging server and automatically deploy to production if all tests pass.
What do you do when you're faced with a choice of deploying an app with a few failed tests (for perhaps not totally clear reasons) or leaving an old version up with a vulnerability?
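There's no automating that judgment call away, but you can at least make the gate explicit. A hypothetical sketch: normal updates stay blocked on any test failure, while security-flagged updates with failures get escalated to a human rather than silently left on the vulnerable version.

```python
# Sketch: an explicit deploy gate for the "failed tests vs. known vulnerability" dilemma.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    tests_passed: bool
    fixes_known_cve: bool

def gate(c: Candidate) -> str:
    if c.tests_passed:
        return "deploy"
    if c.fixes_known_cve:
        return "escalate: vulnerable in prod vs. failing tests -- needs a human decision"
    return "hold: fix the tests first"

if __name__ == "__main__":
    # Hypothetical candidate update for illustration.
    print(gate(Candidate("libfoo 2.4.1", tests_passed=False, fixes_known_cve=True)))
```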