
Oh, I really like those comments. People think that keeping auto-upgrades on without any control keeps them secure. Right... And then you keep hearing stories about a server that won't come back up because of an update, or stuff breaking horribly. Not to mention updates bringing new bugs to the table.

It's not that simple. I care about what I install on my servers. Everything is carefully selected. Additionally, I follow the KISS principle, so I run simple things that are manageable.




So you're saying there will never be a breaking change or bugs when the software updates? Ever? That seems unlikely. I've managed my own servers many times before, and that's never been the case.

Disagree - you have to continuously update production servers (not daily, but ~weekly/monthly). The more often you do it, the more automated and risk-free it becomes, and the smaller the version gap when that critical security vulnerability hits that needs to be patched ASAP.

I can understand the security argument if you're running a server, but I'm not, and what I care about is predictability: I want the apps that have been working to keep on working in exactly the same way until such a day as I decide to take the risk of upgrading them.

I don't care how foolproof semantic versioning might be in theory: in practice, sometimes you upgrade one thing and that leaves another thing broken, and this is not something I am willing to accept in a personal workstation - especially not as the result of an automated self-update process, introducing churn without bothering to get my permission!

I don't care about storage efficiency. It's been years since I've had to pay any attention to drive capacity. 4GB is nothing. 76 GB is also nothing, when you have multiple terabytes of storage space sitting completely idle.

As far as memory efficiency goes, I'm not running pipelines, so I don't care. On a server with 1GB RAM, yes, I can see that becoming a big deal; but even my laptops all have 8GB RAM now. On a personal workstation, this doesn't matter, and that's where I'd like to see the "link everything dynamically" meme die. I'm not a system administrator and I don't have system-administrator concerns. I want to get work done and not futz with it. That means I want everything statically linked, independent from each other, churn-free, and predictable.


I don't agree - I try to do security patches every few days, and major upgrades when I have time to test them. The hard part is keeping tabs on which servers need them. I have 7 or 8 VMs, and when they're not involved in a project they're easy to forget about.

I prefer to be actively administering my server, so that I know if something does go awry. If I'm asleep, it may be a few hours before I realize that an update broke something. On top of that, I clone my servers and test upgrades in a development environment as much as possible before allowing an update to go live. As long as you're on top of the updates, a few days between automatic and manual shouldn't make much difference.

When you do a server side upgrade you are essentially doing an upgrade on behalf of every person who uses your product and it's sometimes difficult to roll-back. This is a really good reason to treat server side upgrades differently.

Once you are bit by a bad update you will be very shy about doing it again without a good cause. As an example, we updated to Recent Version - 1 of critical server software and had the thing crashing constantly thereafter. So then we upgraded to Recent Version and had a new bug that was introduced that was causing equal troubles. But now we were stuck and had to wait for next version to fix the bug. Made a really good case for not changing what works.


You’re conflating unattended-upgrades (server mutability, hard to roll back) with automated patching in general. Do automated patching, but also run the changes through your CI so you can catch breaking changes and roll them out in a way that’s easy to debug (you can diff images) and revert.

I bet when you update your software dependencies you run those changes through your tests, but your OS is a giant pile of code that usually gets updated differently and independently, mostly for historical reasons.
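The "diff images" idea above can be made concrete. A minimal sketch, assuming you have already extracted a package manifest (name-to-version mapping, e.g. from `dpkg-query -W`) out of each built image - the manifests and versions here are hypothetical:

```python
def diff_packages(old, new):
    """Compare two package manifests (name -> version dicts), e.g. taken
    from the image before and after patching, and report what changed."""
    added = {p: v for p, v in new.items() if p not in old}
    removed = {p: v for p, v in old.items() if p not in new}
    changed = {p: (old[p], new[p]) for p in old.keys() & new.keys()
               if old[p] != new[p]}
    return added, removed, changed

# Hypothetical manifests from two image builds.
base = {"openssl": "3.0.2-0ubuntu1.14", "nginx": "1.18.0-6ubuntu14"}
patched = {"openssl": "3.0.2-0ubuntu1.15", "nginx": "1.18.0-6ubuntu14",
           "libfoo": "1.0-1"}
added, removed, changed = diff_packages(base, patched)
```

A CI job could fail or page a human when `changed` touches a package on a watch list, which is what makes the rollout debuggable and revertible.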


Usually you aren't updating production servers unless it's a security patch, fixes a problem you have, or adds a feature you want/need. Even then, you usually have a test environment to verify the upgrade won't bork the system.

Why do people feel the need to update especially on production servers? Shouldn’t production servers be updated only when necessary?

Yeah, it's not just desktops either; having stable releases for server workloads is pretty much essential. If there's no feature in a later version that you actually need, doing updates for the sake of it just creates churn and actually increases the risk of outages. A stable release with backported security fixes means that only security updates need to be actioned.

That is not nearly enough. You need to make sure the system still works after the update, so you need to carefully control all the versions that go in and test them in lower environments. Also, for any serious application, you will need to do this many times, across hundreds or thousands of machines even at small companies.

I am on your side, actually, I think managing machines is better than serverless, but it's not that easy.


I maintain the server versions of most of the self-hosted applications they're killing off.

One thing I like about the server versions is the ability to spin up development versions of these applications so I can test out new plugins, version updates, or complicated configuration changes without affecting our production applications.

So much for that. I guess I can skip that step and when it goes wrong in production I can be the old man who yells at "the cloud".


> Seems like you need to default to auto update but have an opt out.

We have that. Our software auto-updates, but when you start our program you get a launcher where you can select one of the last five minor versions for each available major version.

Typically the next major version is made immediately available and active in the test environment, while a "boss user" gates them in prod.

When setting a new major version as active, the previous ones are still available for a long time, along with any potential new minor versions for those.

This has made our life so much easier, since any critical issues can almost always be worked around by the user simply launching a previous version, either minor or possibly major. So we can be much more aggressive with pushing out changes, which our customers also appreciate.
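The version-selection scheme described above could be sketched roughly as follows - a simplified model, assuming plain "major.minor" version strings and a keep-the-last-five policy (both assumptions, not details from the post):

```python
from collections import defaultdict

def launcher_choices(published, keep=5):
    """Group published "major.minor" versions by major release and keep
    only the newest `keep` minor versions of each, mirroring a launcher
    that offers the last five minors per available major."""
    by_major = defaultdict(list)
    for v in published:
        major, minor = (int(x) for x in v.split("."))
        by_major[major].append(minor)
    return {major: sorted(minors)[-keep:]
            for major, minors in by_major.items()}

# Hypothetical release history.
versions = ["1.7", "1.8", "1.9", "1.10", "1.11", "1.12", "2.0", "2.1"]
choices = launcher_choices(versions)
# choices[1] -> [8, 9, 10, 11, 12]; choices[2] -> [0, 1]
```

The point of keeping older minors around is exactly the workaround described: a user who hits a regression can self-serve a rollback from the launcher instead of waiting on a hotfix.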

It's not perfect, but it works very well for us, and our customers seem happy.

It does require us being careful when making database schema changes, or similar potentially breaking changes. But a lot can be handled quite transparently in the database (using views typically) or through code, and our database upgrade tool can also migrate data as a last resort.
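The "handled transparently using views" trick can be shown in miniature. A hedged sketch with SQLite (the table and column names are invented for illustration): after a column rename in the new schema, a compatibility view keeps queries from an older client version working unchanged.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# New schema: the column was renamed from `name` to `full_name`.
con.execute(
    "CREATE TABLE customers_v2 (id INTEGER PRIMARY KEY, full_name TEXT)")
con.execute("INSERT INTO customers_v2 (full_name) VALUES ('Ada')")
# Compatibility view: an older app version can keep selecting `name`.
con.execute(
    "CREATE VIEW customers AS SELECT id, full_name AS name FROM customers_v2")
row = con.execute("SELECT name FROM customers").fetchone()
# row == ('Ada',)
```

Reads through a view like this are transparent; writes and genuinely incompatible changes are where a migration tool has to step in, as the comment notes.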


Yes. My point is doing this all the time is, generally, a waste of time. I recommend turning off automatic updates, do it on a schedule. Weekly is too frequent. I have some servers I update yearly. That may be too long. At a previous startup, we had boxes that hadn't seen an update in 3+ years.

My read on that is that you should be treating your servers as disposable and ephemeral as possible. Long uptimes mean configuration drift, general snowflakery, difficulties patching, patches getting delayed/not done, and so forth.

Ideally you'd never upgrade your software in the usual way. You'd simply deploy the new version with automated tooling and tear down the older.
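That deploy-and-tear-down pattern is essentially a blue-green switch. A toy sketch of the control flow (state shape, version numbers, and the health-check placeholder are all assumptions, not any particular tool's API):

```python
def blue_green_switch(state, new_version):
    """Immutable rollout sketch: stand up a fresh environment running the
    new version, flip traffic to it if healthy, then tear down the old
    one - rather than upgrading servers in place."""
    idle = "green" if state["live"] == "blue" else "blue"
    # Deploy the new version into the idle slot; in reality a health
    # check would run here instead of hard-coding True.
    state[idle] = {"version": new_version, "healthy": True}
    if state[idle]["healthy"]:
        old = state["live"]
        state["live"] = idle
        state[old] = None  # tear down the previous environment
    return state

state = {"live": "blue",
         "blue": {"version": "1.4", "healthy": True},
         "green": None}
state = blue_green_switch(state, "1.5")
```

Rollback is the same move in reverse, which is the whole appeal over in-place upgrades.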


Every time this topic comes up, we have people talking about really different things without clearly explaining what they're referring to, which causes confusion and leaves people not understanding each other's reasoning.

I've read the following arguments already, but with portions of them left implicit. See if you can work out why these people are talking at cross purposes:

- "I generate VM server images for all my needs, and deploy to virtualization infrastructure behind load balancers to handle services. I treat the OS like an application like the rest of the stack, and all my data is abstracted to a data layer. I just generate a new image with patches and test it, then deploy if it passes my test suite."

- "I have hundreds of servers in dozens of roles with different software needs, and I need them to be secure and stable in a timely fashion, and I can't spend multiple weeks achieving that. Long term support and back-patching allows me the time to plan needed large changes in infrastructure without having to spend all my time managing updates, software changes and configuration changes multiple steps down the dependency graph."

- "I have a few servers with a few roles, or many servers with one or two roles, and I can manage frequent updates just fine, and it allows me to take advantage of the newest features, get security updates immediately as the software authors fix it, and I don't have to worry about end of life of software."

- "I have an application stack with multiple dependencies, and I just make sure to update my stack as I make changes to the software. I would love/have set up continuous integration software to build and test everything as I go, so I know if it works or not before taking it live."

As someone who's been in all of these situations at different points, often multiple at once, let me be clear: Unless you have an argument that addresses ALL of these situations, you really haven't thought through the issue.


Isn't a lot of this wisdom encapsulated in the (old, uncool) configuration management stuff like Chef and Puppet? The main difference being that you still need to rebuild your systems regularly to keep yourself honest (no lazy one-off changes, everything goes into the CM codebase).

I mean, I get that NixOS can pin versions of everything, and we've all been bitten by a new server build on identical CM code failing because of an upstream version change. But it's an eternal problem: pinning all the versions means that you are now micromanaging a zillion different versions of shit that you don't want to care about.


Take, for example, OS and app upgrades. Quite often you have a few requirements, for example: 1) all updates should first be tested in a test env, 2) updates need to be installed in a timely manner, 3) critical updates need to be installed quickly (30 days is too slow).

When you start thinking these through, they are not so easy. (1) means you can't just run "apt upgrade" on every server - you need to manage the updates to make sure they get tested first. (2) is kind of OK, but requires some work on a regular basis (at least checking things). (3) means you need to monitor the updates for your stack and classify them. You can get feeds, for example for Ubuntu, but does that cover the whole stack? And checking these weekly (or daily) actually gets boring.
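The classification part of requirement (3) is essentially a severity-to-deadline mapping. A minimal sketch, assuming a made-up internal policy (the SLA windows below are illustrative, not from any standard):

```python
from datetime import date, timedelta

# Hypothetical policy: critical patches well inside the 30-day mark,
# routine updates on a slower cadence.
SLA_DAYS = {"critical": 7, "security": 30, "routine": 90}

def patch_deadline(published, severity):
    """Return the date by which an update must be installed under the
    assumed policy above; feed entries (e.g. Ubuntu security notices)
    would be classified into one of these severities first."""
    return published + timedelta(days=SLA_DAYS[severity])

deadline = patch_deadline(date(2024, 3, 1), "critical")
# deadline == date(2024, 3, 8)
```

Even this tiny amount of automation helps with the "boring weekly checking" problem: overdue items can be computed and surfaced instead of eyeballed.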

All this stuff can be done, but IMHO it is time consuming and takes time from more important things. The weekly/monthly JIRA tickets for checking x, y and z get quite annoying when you also have n other things to finish. Then you start slacking on them and feel the pain when trying to collect evidence for the next procurement process check.

If you have tens or hundreds of servers and can have a separate infra team with professional sysadmins then this is all fine. My rant is mainly for small teams, where same guys are supposed to develop and run things.


That's still just one runtime needing support at a time. OS upgrades happen in waves, and at a cadence you don't choose. With hosted server products, you choose when to update the server stack, and you can stop supporting older versions.
