Woohoo! Congratulations to everyone involved on the release! I've tried almost every distribution under the sun and my heart always goes back to Debian. I've been running Bullseye for a few weeks (the RCs) and it has been a pleasure.
PHP 8.0 is 10 months old, and Debian's upcoming release will be upgrading from 7.3 to 7.4, which will make 7.4 the standard for the next ~3 years (even though it only gets upstream support for one more year)…
I am starting to reconsider my personal policy of “use Debian stable as a benchmark for which language runtimes I should build on top of”, now tending towards “use Debian stable as the bare-metal OS, and build all my projects inside Docker, using each language's most recent stable release”
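For PHP specifically, that Docker workflow can be as small as running the official image against a project directory. A minimal sketch, assuming a script at ./app/index.php (image tag and paths are illustrative):

# run a one-off script on a current PHP, independent of whatever the host packages
docker run --rm -v "$PWD/app:/app" -w /app php:8.0-cli php index.php

# or serve it with PHP's built-in dev server for local testing
docker run --rm -p 8080:8080 -v "$PWD/app:/app" -w /app php:8.0-cli php -S 0.0.0.0:8080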
For those unaware, these packages are built using the same packaging repo as the Debian “official” packages, by a member of the “official” Debian PHP team.
I can’t speak highly enough of Ondrej’s work on this.
I'll opine, happily, that nothing showcases the gap between the OS people and the Web people as well as someone who considers 10 months to be a significant length of time.
There is a happy day coming for the web developers, sometime this decade hopefully, when skills learned as long as 12 months ago will still be up to date.
Did you notice OpenJDK has been upgraded from v11 to v11? That sort of radical change is why I run Debian. They don't break stuff.
OpenJDK is still at 11 because that’s the latest LTS version. Subsequent versions have significantly shorter upstream support lifetimes (IIRC, six months). Until another LTS is released, it’ll continue to be 11.
It doesn't matter that it will be released in a month, because new software goes into unstable first, is then promoted to testing, and only then to stable. So even if the next LTS had been released a month ago, it certainly wouldn't have made it into Debian stable.
LTS is strictly a JDK/JRE vendor concept. Oracle's distributions have 11 as an LTS and will have 17 as an LTS. That, however, only matters if you are paying Oracle for support.
In context, I'm talking about the OpenJDK project which has no concept of LTS.
LTS is strictly a JDK vendor concept. (AdoptOpenJDK is a JDK vendor). How long a version is supported depends entirely on the vendor.
A lot of vendors have adopted the Oracle LTS strategy, some offering even longer support. However, it's vendor specific.
Debian is itself a JDK vendor, since it builds OpenJDK itself. So what LTS means for Debian is entirely up to what Debian is willing to do to support it.
Although technically true, your statement misses a fact: JDK 11 is still an LTS version for Oracle, which means they are committed to maintaining that version with (security) fixes. These fixes will also be(come) available to the main OpenJDK distributions.
Therefore, as a user of any JDK distribution you can depend a bit on the LTS versions, in addition to what your distribution promises.
> These fixes will also be(come) available to the main OpenJDK distributions.
This isn't true! Oracle is under no obligation to mainline fixes from their JDK into OpenJDK.
That work is happening to some extent via IBM and Red Hat; however, Oracle is not doing it.
Further, new builds of OpenJDK are not being cut for those old versions. You MUST get the fixes via a build of OpenJDK that isn't the Oracle build (like AdoptOpenJDK).
You can only depend on the "LTS" if the vendor of your JDK supports LTS. There's no technically about it.
Exactly, so officially you can indeed only rely on what your distribution promises you. In practice you may, with varying success, assume that if you are using a version that is an LTS for one of the big distributions, some fixes will land in mainline as well and therefore also in your distribution, particularly for really critical fixes.
Of course, only officially rely on whatever agreement you have with your distribution/distributor.
You don't seem to know anything about the "web people" you're criticizing here. PHP skills learned two decades ago are still very relevant today. It's the Java of the web world.
Hmm no, not at all. My first job in 2002-2004 was using PHP 4. I barely used classes (and if I did they were just my code, not from a library, because I didn't need anything from PEAR) and used the original MySQL client API. Almost nothing of what I used back then would be useful today. The difference between PHP 4 and PHP 8 is almost bigger than between Python 2 and Python 3.
I initially learned PHP 4 when PHP 3 was still in use. Later I learned PHP 5, which IIRC was the first version that had classes. I also learned Drupal 6 and then 7.
Modern PHP code looks nothing like PHP code back then. Of course, the core syntax is mostly the same, but if you look at a framework like D6 or D7 no one would do anything even remotely like this today. PHP security has completely changed since then as well.
I don't understand the downvotes at all. Yes, PHP 8 is different from, let's say PHP 5, but that doesn't mean knowing PHP 5 doesn't help with writing good PHP 8. Same with Python 2 vs 3.
Very fair point :) For all my other dependencies, using 5 year old versions is fine; it's mostly just PHP that gets an order of magnitude less painful to work with in each release, so being stuck a few versions behind is extra frustrating...
Is it that difficult to build and/or install a newer version that better fits your needs? I tend to stick to Debian stable, but in the few cases where I need newer software I just build it myself. Using `apt build-dep [package]` and then something along the lines of `configure && make && make install` basically always works.
edit: I guess if you work somewhere with a policy of requiring you to use system packages, you're a little screwed, but that seems more of an organizational problem than a Debian problem.
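A minimal sketch of that build-it-yourself flow, using nginx purely as a placeholder package name (version and paths are illustrative):

# pull in the build dependencies Debian already knows about (needs deb-src lines in sources.list)
sudo apt build-dep nginx
# build the newer upstream release into /usr/local so it stays out of dpkg's way
tar xf nginx-1.21.1.tar.gz && cd nginx-1.21.1
./configure --prefix=/usr/local
make && sudo make install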
In this case I’m building software whose primary selling point is “trivial to install on practically any cheap-ass $1/mo shared web host, just unzip and FTP the files to your website folder” - which is the whole reason it’s written in PHP in the first place.
(The more I think about it, the more sad I am that it’s 2021 and still no other language has come close to PHP’s low barrier to entry...)
Yes. Just as an example, I use VSCode. VSCode no longer works on CentOS 7. It needs a newer glibc, so I'm going to have to recompile a lot of stuff and do some PATH fiddling to get it to work.
It's usually not that it's hard to do that for your packages. It's when their dependencies go out of date and I'm looking at having to build a Frankenstein system, running one up-to-date system on top of a really out-of-date one, and trying to make the two co-exist in a single OS.
It's not insurmountable, but it's enough effort that dealing with occasional Arch breakage becomes easier. Or even something in between like Ubuntu.
Well I was referring specifically to Debian and to PHP. Does VSCode really not compile on Debian 10 or 11? I find that a little surprising. But yeah, I agree that installing newer dependencies like glibc for VSCode would certainly be a hassle. I can't really speak much to that though, since I've never used it.
In other settings I can add a new feature to one of my dependencies and start using it within days or weeks. You're telling me you prefer having to wait years to get any newly implemented feature?
The OS should not ship your language runtime. The OS is a giant cargo ship. It does not turn on a dime, and it is not piloted by a "let's rewrite it in Rust!" crew.
Sure. In the Rust ecosystem, if you submit a pull request to a library it is very common for the maintainer to release a new version soon after. Even patches to the Rust compiler repository are released to stable in a matter of a few months (far less for the nightly channel that many people use).
There are always workarounds and so forth, but it is really empowering to be able to have an impact over such a short timespan.
When I was an intern there was this old dude who had an ancient computer with the same Debian on it for like 7 years. He never got excited about new stuff, and just didn't want to break anything. I thought that was kind of funny and un-hip, back then, but now I'm like that. It's like Dad OS. :)
I'm a Dad and I use Arch. I can't stand non-rolling release Linux distros. They are constantly getting in my way because they don't track upstream.
I have multiple machines running Arch (some over a decade) with no problems. In contrast, doing dist upgrades on Ubuntu has put me into states that I could not figure out how to get out of, and thus had to do a clean install. (Granted, it's been a long time since this has happened, but mostly because I very specifically avoid non-rolling release distros.)
Ubuntu upgrades are extremely messy compared to Debian. If you want a rolling-like experience, the testing channel of Debian is pretty good for that - though it does get frozen when approaching a stable release.
The dist upgrade is more of an example meant to dispel the myth that non-rolling release distros break less frequently than rolling release. Or at least, an anecdote anyway.
Jumping down from the meta level of rolling vs non-rolling, I personally find Arch Linux's style of packaging much, much simpler than Debian's. I've written numerous PKGBUILD files over the years and it's been dead simple. But my eyes glaze over whenever I look at Debian packages.
I'm sure the complexity in Debian is warranted for one reason or another. But it ain't for me.
Note that Ubuntu had some rough times and did some odd things (I distinctly remember it changing the gid of system groups, for example), but it has gotten a lot better. I don't think I can remember Debian ever having any serious issues on dist upgrade. Well, there was a bit of hair-pulling with the change from lilo to grub, but I think that was because I tried it early, before grub became the new standard.
But of course, there are no perfect tradeoffs. I'm inclined to believe GNU Guix (or NixOS) might be the next best thing(tm), but I've yet to put that to the test.
I've been using Debian full-time for decades, but recently ended up switching to a spare laptop I'd installed Arch on long ago just for poking at.
As something to make it easier to install latest-and-greatest junk, it's certainly better than rolling your own like an LFS, and it feels a bit less annoying than Gentoo so far.
But it's nowhere near as comfortable to use as-is as Debian, and it definitely requires more time-wasting to figure out why things aren't configured correctly after simply installing a package w/`pacman -Su $foo`, or why some dependency is wrong or missing.
Hell, just the other day I thought to try building something with clang instead of gcc, after not using clang in a while, and this is what (still) happens:
$ clang
clang: error while loading shared libraries: libffi.so.6: cannot open shared object file: No such file or directory
$
This kind of garbage just doesn't happen on Debian. At this moment I have the impression that the average Arch install is always at least partially broken in some way.
Furthermore, the tooling in Arch isn't particularly great either.
On my first week using this Arch laptop full-time, I tripped over a `pacman -Sc` bug where it was opening every single file in /var/cache/pacman/pkg via libarchive, even .part files which hadn't been completely downloaded (which means their signatures weren't yet verified), and spitting out liblzma library errors like "no progress is possible" because it was attempting to open an .xz.part file. This is arguably a security risk (in a root process, no less) in addition to being a stupid bug producing a confusing error message. And I haven't even started going deep into this distro yet; frankly, it's already left me with a bad enough taste that I'm not interested in wasting more time on it.
^ This is a perfect statement of why Arch will never take over workstations or server machines. A lot of people feel it's cool to fiddle around with the filesystem/configuration to make things work (and they think that's what Linux is about); it's not! You don't want to fiddle around with the OS to make things work; it should just work out of the box. Debian is 100% suited to a dev-centric/workstation-centric environment: once you configure it, you can use it for years without worrying about what might break tomorrow.
I love NixOS but, the way Flakes are written, it's not stealing mindshare from the mainstream distros anytime soon. Flakes could have been a chance to write Nix comprehensibly, but it reads like a layer of complication on top of an already complicated language model.
Your clang error can probably be fixed by a pacman -Syu. Usually that sort of error is related to some libraries on your system being old and other programs being new (and compiled against new libraries), so the packages can't load the libraries properly when you execute them on your system. Doing a full update brings the libraries up to date so the programs can load them and run properly.
Oh I'm aware of this, and I'll get around to doing just that as soon as I'm in the mood to waste more of my life chasing other breakages after pacman turns my entire world upside down just so clang can run again.
I plan on abandoning this experiment as soon as I have the time and interest.
In my comment above I mentioned this was a spare laptop; my primary laptop, which ran Debian, abruptly failed, pressing this thing into regular daily use, so I used the Arch install I originally put on it for an eGPU experiment.
Honestly, running pacman -Syu on an Arch machine that sat untouched for four years was a much better experience than any of the Ubuntu and Debian dist-upgrades I've had to suffer through. I did have to go and merge the .pacnews, but that's it. (Granted, for Ubuntu the last one was something like 9.10 to 10.04, things might have improved since then.) So having an infrequently-updated machine is entirely feasible, as long as you don't try partial upgrades. (Except that one time when the StrongSwan upstream decided it was a good idea to rename their units in such a way that an old configuration ended up running the wrong IPsec daemon after an upgrade. That was a frustrating couple of hours.)
The thing is, it’s not that this problem can be fixed by running pacman -Syu, it is that, in half a dozen years of running Arch on several machines, the only way I could get into this particular failure state was when I ignored every bit of documentation in the name of laziness and did pacman -S thing instead of pacman -Su thing. (Or when I built things from outside the official repos and failed to keep them up to date, but I’m going to guess you’re not running a custom build of Clang, because that is the kind of pain you don’t forget.) Theoretically you might have caught a short window of inconsistent state on a package mirror, but again, from my experience, it’s always just a partial upgrade I did with my own hands. (Compared to apt, pacman is awesome, but its willingness to let you do stupid things could use some adjustment.)
On to more constructive advice—if your latest update was in the last couple of weeks, just
pacman -Su
may work to pull your machine forward to a consistent state corresponding to the most recent installed package. If not, you can use the <https://wiki.archlinux.org/title/Arch_Linux_Archive>: get the last update date by an incantation such as
pacman -Qi | sed -n 's/Install Date *: //p' | xargs -d '\n' -n 1 date -I -d | sort -nu | tail -1
(which I just cooked up, so there are surely better ones, or just look at the tail of /var/log/pacman.log), and temporarily point your /etc/pacman.d/mirrorlist at the archive snapshot for that date.
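Something along these lines, if memory serves (the date is illustrative; use your own last-update date):

# temporary /etc/pacman.d/mirrorlist entry pointing at the Arch Linux Archive
Server = https://archive.archlinux.org/repos/2021/07/01/$repo/os/$arch

# then refresh the databases and sync to that snapshot (the extra -u allows downgrades)
pacman -Syyuu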
Either way, you may see a little bit of breakage (though, in my experience, it’s unlikely), but nothing you wouldn’t have had to deal with when you properly installed your current set of packages in the first place.
I had been a Debian-based distro user forever (Ubuntu, sidux, Debian unstable/testing...); things were breaking all the time. Sure, if you stick to stable, things only break once every couple of years when you upgrade to the next version. However, stable is really not suitable for desktop work; I'd have to install almost all my usual software through other means, and that's not what I use a distro for.
I switched to Tumbleweed 2 years back and really liked it, but zypper/rpm is awfully slow and I was missing some packages (although OBS is awesome). So I tried EndeavourOS 6 months ago (Arch with a graphical installer) and I have to say I really like it. Not one break so far.
Actually, the breakage is often in configuration updates. LibreOffice is particularly bad: often it doesn't start, without any message, after an upgrade, which is fixed by wiping ~/.config/libreoffice.
> At this moment I have the impression that the average Arch install is always at least partially broken in some way.
Mine aren't.
I've never seen that kind of "garbage" either on Arch.
This back and forth is just dumb. I'm trying to push back against dumb crap about how Dads don't have time for shenanigans by pointing out that, hey, maybe rolling release doesn't have as many shenanigans as you think. And of course, people like you come out of the woodwork and can't help but whine about how your very first experience wasn't perfect. Mine wasn't either. Yet here I am, more than a decade later, and I never have any major issues. I always get the latest software and I don't have to bother with dist upgrades.
Getting the latest software is super important to me. It is the number 1 frustration I have with non-rolling release.
> I have multiple machines running Arch (some over a decade) with no problems.
This sounds extremely unlikely. Rolling vs versioned release both have breakage, but with rolling it's in constant tiny ways, so it's less memorable. It's also a point Arch evangelists consistently fail to bring up.
I feel fortunate to have learned this lesson right before grad school… I used to run Gentoo with all kinds of optimization flags. The day of a conference deadline, I mucked up my /etc/ pretty bad. Switched to Debian and never looked back - sometimes you just need something that works.
I have the same story. I ran Gentoo on all my machines for 5 years (2002-2007) and after the umpteenth update which broke printing, I switched to Debian stable and never looked back.
I disagree; using a filesystem with snapshot support is hardly adding complexity (to the problem, that is; sure, the kernel code for btrfs might be more complex, but it's been mainline for years and won't eat your data). I interpret your comment to suggest unwise practices for data management. Being able to do snapshots and send them off to wherever is important. This fills the void on your system that git fills for your source code.
Same. And I loved Gentoo so much. Made me feel like a car mechanic, upgrading his machine with all the custom parts.
I would go back to running Gentoo if I had the time to mess around, it was a ton of fun and it really did give you a system that was truly yours. But it also needed a lot of love to keep running smoothly. And I don't miss those compile times.
For servers, Debian makes a lot of sense. For desktop Linux, I can't imagine living with outdated packages, kernels and applications and trying to hack my way around with PPAs or other systems like Flatpaks or AppImages. Rolling distro all the way.
Personally I like Gentoo on the desktop especially in relation to when I code (e.g. very handy to be able to easily switch SW-versions of some packages while always using the same repository). I use it as well on some servers as root OS (I mean the one that runs mainly just the hypervisor), if they have special needs (e.g. if for some reason I want/have to use a recent version of some SW, e.g. ZFS, Kernel, firewall, QEMU, etc...).
On the other hand for VMs I usually just use Debian or Mint, as maintenance/upgrade effort is a lot lower & quicker. In some cases I still have to use PPAs but they're usually exceptions (e.g. Postgres 13 & kernel 5.10 & Clickhouse for Debian 10).
I have a non-ancient computer with the same Debian (testing with a bit of unstable mixed in) on it for over 16 years now. I still get excited about new stuff, I just don't want that new stuff to break anything. :-)
I think the web people have the right idea. If it hurts, do it more often [1], right?
I've used Debian for a long time (edit: on servers), but the short (and in some cases getting shorter) release cycles of popular projects like Chromium and Rust have got me wondering if Debian just moves too slow. I'm tempted to try using Fedora as a container base image and see if it's practical to keep up. Edit: I guess Fedora is no worse than Alpine if you actually update to each new major release quickly, since they both release every 6 months.
It is possible that if you build up a routine to always stick to the latest, it becomes automatic. This opens the door to several types of risks.
Static/bundled binaries like Snap, macOS app bundles, etc. solve the "moving slowly" issue. There should be little need to update the core OS (kernel, init, /etc stuff) often.
PHP was a no-good, terrible, poorly-made, horrific original design for something like powering Facebook. As a result, it needed a lot of good and important improvements.
Or to be more specific, it was a great design for what it was intended to be: a pet hobby project to make it easier to make Personal Home Pages, and was never intended to be a programming language.
The concept of putting out something this bad, and then refactoring for over a quarter-century to make it almost-usable, is exactly the definition of the webdev way.
Sure, I won't disagree with that; my point was more that the updates are usually more than 10% useful features. You can definitely make a point about how long it took PHP to get its type system into shape, but PHP is not exactly just some trendy framework of the week, and there are valid reasons to pick PHP as your language of choice (namely that it's the lowest-barrier language, especially for having non-technical people do their own installation on some random server/shared hosting).
Seen locally, all improvements are useful. PHP is better each year.
Seen globally, it's 10% improvement and 90% churn. One "improvement" is a half-fix, another is another half-fix, and so on, where after 5 improvements you're 97% of the way there.
In the meantime, you've broken backwards-compatibility five times, perhaps not always in the literal sense (old code still runs), but in the practical sense (old code has to be rewritten to follow modern conventions and be compatible with new code).
You've also had five learning curves.
And you have a ton of legacy stuff to support, from each of those five changes, leading to an almost unbounded explosion of complexity.
And we've been making the same "mistakes" with JS too!
But if PHP and JS are so shit and full of churn, why are they so popular and widely used? And why have they been used to build large, globally successful projects?
Because they were in the right place at the right time. JS just happened to be the first web scripting language. And PHP competed against gems such as ASP (pre-.NET) and ColdFusion at the time when server-side web development was exploding.
With JS, the answer is because Netscape adopted it, and it had market share. If you wanted code that ran in web browsers, you needed to use it. Netscape, the company, no longer exists, so I wouldn't call it all too successful an outcome. From there, momentum.
The early history: in '95, Netscape hired Brendan Eich to embed Lisp in Netscape, while simultaneously pursuing a partnership with Sun to embed Java. Sometime that year, the Lisp plan changed into a new language, which was created and shipped in beta by September 1995. We're talking a few months.
If we had designed a sane framework upfront -- even 6 more months design time -- the web would likely be a decade or two ahead right now.
Most of the early "improvements" to JavaScript were stop-gaps created under similar cultures, which would need to be replaced again and again. It's not that we didn't know how to make a decent programming language in 1995; it's that we didn't.
Today, I do about half of my coding in JavaScript since it runs in the browser, and therefore on all platforms. It has nothing to do with the painfully bad language design.
> the web would likely be a decade or two ahead right now
Seeing how the trajectory of web development seems to be "pile on more and more complexity, most of it unnecessary", I'm not sure if I'm supposed to be sad or happy about this.
PHP and PostgreSQL get a big release every year with good improvements. That you have to wait for the next stable release of Debian (or in this case the one after this, because they did not upgrade to the newest one) is not a failure of the web people! We are talking about one release every year, not every few weeks! Getting them years later just shows how unrealistic sometimes these stable distros are.
> We are talking about one release every year, not every few weeks! Getting them years later just shows how unrealistic sometimes these stable distros are.
Sure, you can use external repositories. But why bundle old packages when everyone then has to use external repositories to avoid getting really old versions?
* QA to make sure all the interdependencies align correctly
* making sure the upgrade process runs smoothly, and that going from version X to version Y of some package is well-tested
It also depends on the upstream release cycle of the software package. The distro may wish to use the LTS version software (e.g., OpenJDK, OpenZFS, Django, Zabbix) but developers may wish to be more bleeding edge for new functionality.
OpenJDK 17 is in Debian 11 as well [0]. A decision was made to include it before its official release since the release is so close (September). An updated package will be available when it's released.
From the release notes [1]:
Debian bullseye comes with an early access version of OpenJDK 17 (the next expected OpenJDK LTS version after OpenJDK 11), to avoid the rather tedious bootstrap process. The plan is for OpenJDK 17 to receive an update in bullseye to the final upstream release announced for October 2021, followed by security updates on a best effort basis, but users should not expect to see updates for every quarterly upstream security update.
There is a fine line where you need to see which weighs more: using a stable version or using an up-to-date, security-supported version. I fully support the latter. PHP 7.4 security support will end in November 2022; will Debian upgrade to 8.0 once this happens?
But it doesn't make much sense to run an outdated PHP. I agree with the point of the top level comment, that is to put your dependencies inside docker and completely uncouple from OS dependencies.
PHP 8.0 was not shipped in bullseye specifically because it could not be supported by Debian. A semi-official backport will most likely be available shortly after bullseye is released and testing/sid are reopened for new updates.
> The timing of this request makes me uneasy: php8.0 has been in Debian for less than a week, and we are a month away from the transition freeze.
> My point of view after actually working on those issues is that there is a significant number of packages that are not working just fine with PHP 8.0, and require a fair amount of work before that. Some of them being security sensitive, so just working around visible issues may not be of the best interest of anyone…
And by the time the maintainers decided to go with 8.0, they missed the deadline:
> The transition freeze has started, so we'll not transition to php8.0 in bullseye.
Honestly, I'd rather just use RHEL 8 (or a clone like Rocky or Alma) with EPEL. You have all the stability advantages, plus an additional repository with bleeding-edge software that's not in the official distro.
(I worked at a RHEL shop for five years, and from my experience there I'd honestly rather not go back to having to deal with Debian-based distros ever again)
Unlike previous PHP releases, 8.0 broke backwards compatibility pretty hard. Some old PHP projects that I have to support are completely unusable on 8.0. This gives me a few more years to port them, so I am all for it.
The same happened with Python 3 and it took a decade to catch up. To be fair, it was mostly about Unicode support and applications doing stupid stuff with strings.
I'm an occasional Debian contributor and to be honest I think that's the right policy these days. It is effectively what we do at my employer.
At my last employer, we ran Debian stable, and we told people to use versions of software (notably Python and Python packages) from the OS. One of my teammates was a Debian Developer, meaning he had full access to upload packages, and the two of us would package things up for the OS as we needed and get fixes into Debian as appropriate. In many cases we needed newer versions of packages than were appropriate for a stable release and (for whatever reason) weren't appropriate for backports either, so we ended up with a noticeable delta - it wasn't obviously less work than just building things ourselves independent of Debian. But because the packages were systemwide and you needed someone on the systems team to install them, what ended up happening is that my team found ourselves in the middle of artificial development conflicts between other groups in the company, where one wanted a really old version and one wanted a really new version of the same package.
At my current employer, we also run Debian stable, but our VCS repository ("monorepo," as people seem to call it these days) includes both internal and third-party code. Our third-party directory includes GCC, Python, a JDK, etc. (sometimes compiled from source, sometimes just pre-built binaries from the upstream). A particular revision / commit hash of our repo describes not just a particular version of our code, but a particular version of GCC and Python and Python libraries and so forth. Our deployment tool, in turn, bundles up all your runtime dependencies (like the Python interpreter) when you make a release. In effect, it's exactly the software release cycle properties you get from something like Docker.
The practical effect is that upgrades are so much easier because they're decoupled. Upgrading the OS is hard enough - you have some script that's calling curl, and /usr/bin/curl is using the OS's version of OpenSSL, which has deprecated all the ciphers that some internal service that nobody wants to touch uses. Or whatever. Testing this is particularly hard if it affects your bare-metal OS, because you have a fairly slow process of upgrading/reimaging a specific machine, and then you run all your code on the new OS, and you see if it works. If this change also includes upgrading your GCC and Python and JDK versions, it becomes extremely cumbersome. If you can deploy your language runtime, library, etc. upgrades separately (via Docker, via LXC, via something like our monorepo, whatever), then you've decoupled all of your potentially-breaking changes, and you can also revert changes for a single application. If the latest GCC mis-compiles some piece of software, it's a lot nicer, operationally, to say that this one application gets deployed to prod from an old branch until you figure it out than to prevent anyone from upgrading. And if a particular team insists on staying on an old version of some library, they don't hold up the rest of the company.
And in turn that's why Debian has such a long freeze cycle and doesn't release the latest version of things. There's a whole lot of PHP software in Debian. All of it has to work with the exact same version of PHP (or you have to go to lengths to find every place that calls /usr/bin/php and stick a version number on it). If something is incompatible with PHP 8, the whole OS gets held back. That's exactly the policy you want for things that are unavoidably system-wide (init, bash, libc, PAM, etc. etc.), but you don't need that policy for application development.
Meanwhile, over here in healthcare, I'm currently carefully evaluating whether to update some software packages from their 2011-release version to their 2015-release version (which are 99.9% compatible but add one or two new features).
And many of our critical systems (mostly MRIs) are on RHEL 5. (Not connected to any network, don't worry!)
Expect all healthcare machines to be networked, even if only internally. They need Active Directory to log in and need to upload the image/test sample somewhere else after it's taken. That's sadly one reason why ransomware can propagate so easily and stop hospitals. :(
In theory there can be computers with no network, typically an isolated computer attached to heavy test machinery, but it's becoming impossibly rare in all industries.
And they should be easy to recognize: look for the post-it nearby with the shared username/password to unlock them, and notice the doctors leaving with the X-ray printed on paper or on a USB key so they can transfer it elsewhere.
This, if done, is usually against the manufacturer's directives (if it's even possible to put the machine online).
The equipment that has networking capabilities is designed for internal, physically isolated networks, and there's a big emphasis on this coming from the OEMs.
Changing/maintaining SW in medical devices is not easy (FDA, CE...) and as many mentioned, not everything is designed to move like the web...
As user5994461 wrote, that's a bit too extreme in the opposite direction.
However, most people don't realize that the majority of the software deployed in the world is meant to run for years without getting feature updates.
Most production code is running on servers, vehicles, industrial systems, infrastructure, military stuff and so on. Planes and power plants can have an expected lifetime of 40 years.
Case in point: the civil infrastructure project https://www.cip-project.org/ plans to maintain a Debian-derived kernel and basic OS and backport security updates for 25 years.
> Meanwhile, over here in healthcare, I'm currently carefully evaluating whether to update some software packages from their 2011-release version to their 2015-release version
This could partially explain why healthcare IT is some of the worst in the world. Can't imagine the number of vulnerabilities you have from using such outdated tech.
From EHR systems, to basic appointment scheduling, to billing and insurance, everything is so poorly built and tedious for the users. Not to mention, healthcare is always falling victim to ransomware, shutting down entire hospital systems.
I know you'll say you need to move slow, lives are at risk, etc. But what healthcare IT is doing now certainly doesn't work. So maybe push on the gas a little bit.
Bullshit. When your "minor issue" becomes something that kills people, you test the living bejesus out of it before you think of deploying. When even the slightest change in UI becomes a deadly operator error, you make damned good and sure that those errors won't happen. There are more things that there is no "undo" (or antidote) for than you can imagine. Having things not isolated is the only real problem.
> Bullshit. When your "minor issue" becomes something that kills people, you test the living bejesus out of it before you think of deploying.
At no point did I say not to test before rollouts.
> When even the slightest change in UI becomes a deadly operator error
Stop being dramatic, because it's part of the problem. Yes, UI's are critical, and these UI's are absolute garbage. You need to improve rapidly, not agonize over bullshit changes in Microsoft Office.
I don't think that works anymore with the speed that languages are moving and breaking these days.
If you tried to use Python 3.9.0, for example (released October 2020), it had changes in the C bindings and broke many libraries (like numpy), which was not noticed until after the stable release.
That took months to fix and I'm not even sure if all packages have been fixed as of today (9 months later).
> I don't think that works anymore with the speed that languages are moving and breaking these days.
The beauty of this approach is you don't have to use the latest version if you don't want to, but the option is there if your app is compatible. You can still use Python 3.7.x or whatever you want. Docker has official Python images going way back.
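To make that concrete, a tiny sketch of pinning the interpreter via the image tag rather than the OS package (the script name is just an example):

# run against 3.7 regardless of what the host OS ships
docker run --rm -v "$PWD:/src" -w /src python:3.7-slim python app.py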
This also has the opposite effect - if there is any old or outdated component or piece of software that you still need to be running for unfortunate reasons, there's always the possibility of putting it in a container with all of the packages and other environment configuration that it requires.
That way you don't have to risk compromising the entire system and don't even have to run VMs with old OS releases, while at the same time only need to expose the ports that the software uses, if even those at all. In my eyes, access to devices and such could still use a few improvements in Docker and other competing container runtimes, but for the most part, the technology is a lifesaver!
While I do think it's nice if the same packages are also available outside of containers for the OS of your choice (which I'm sure will also be the case for the latest versions of PHP and other software, even if done by a third party shortly after Debian's new release), that's sadly not always the case. In those situations, containers indeed seem like a good choice.
From a sysadmin perspective, I'm afraid this would lead to containers lying around with random unsupported versions of various software -- you need to rebuild the container after every bug found, and upgrade the runtime (if it isn't an LTS from the upstream) periodically, right? (However, I quit my sysadmin job some time ago, so I'm not sure what the current trend is.)
Sure, if your tool is aware of its own versions and can do that, that should be the preference over most systems, Nix/NixOS/GUIX excepted. But it's nice for traditional systems like Debian to be able to do it too, for tools that can't.
That is the correct approach. Docker optional, but language interpreters included with the OS are for the OS. The distribution is there to provide a mutually compatible set of working applications for end users so they don't have to care about matching package versions by hand.
Unless what you're doing is writing an application to be packaged with Debian, the version shouldn't matter to you: /usr/local is there, use it.
(note: none of this is "official" in any way, just what I've learned over 15 years or so of deploying on Debian-derived distros)
IMHO, that is the correct approach no matter the distro. The OS is a platform, the software included with it should be what works best to support that platform during a given release cycle. Anything beyond that can be addressed by containers, or third-party package repositories (including your own personal repo). It may be a biased perspective from the server side, but give it credit for being based on long, hard experience with frightening permutations (as in ugly, prone to misbehaving, many-headed beasts supporting critical services).
Personally, I moved to CentOS from Debian (and then to AlmaLinux OS) because of DNF modules. You get multiple versions of different language runtimes and you can pick the ones you want to use - so I stick to the LTS versions. You aren't necessarily stuck to just one packaged version of a language runtime now.
The only time it took 3 years between Debian stable releases was between Woody and Sarge, which happened over 15 years ago. All releases since have been on a 2 year cadence.
On the other end of the spectrum you have a distribution like Arch which gives you most recent releases, but then you also have to deal with breaking packages / package dependencies much more often.
But I guess this comes down to "there is no free lunch".
Is that your experience with Arch? I've hardly ever had anything break over many years of using it. I've even committed the sin of waiting many months between upgrades, like on a laptop that sat in a closet. Upgrading usually just works, or there is just a minor issue that can be easily resolved. Usually documented on the arch website.
It's been some time since I've used it, but Arch was more solid for me than any non-LTS distro. It definitely helps to stay up to date though. I remember the moves to PulseAudio and systemd... not fun.
Of course, you also have the advantages of being up to date and never needing to do a apt-get dist-upgrade.
Given the inclusion of the Guix and Nix package managers into Debian stable, installing newer versions of software is an option if you need it. Perhaps even a better alternative to installing and using Docker.
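A rough sketch of what that looks like in practice; the commands are standard Guix, but treat the package name as illustrative:

# Guix is packaged in bullseye; newer userland software lands in ~/.guix-profile, leaving dpkg alone
sudo apt install guix
guix pull          # fetch the current package set
guix install openjdk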
I'm similar, except that I use Debian as the underlying system, and install with Guix. I imagine it's the same with Nix folks.
I'm still unsure whether Docker injects that much more complexity and runtime burdens, but it sure feels a little more messy after installing a few tools.
AFAIK, Debian's conservatism was the reason why Ubuntu got so popular. So complaining that Debian's packages are too old is somewhat of an old story itself ;-)
Debian has the "non-free" repo specifically for stuff like this. E.g. the NVidia non-free GPU drivers have been available there for well over 15 years now.
Years ago I'd be reluctant to use Debian stable on my desktop because it mostly meant old packages. Now, with AppImages, Snaps and Flatpaks, I can finally have a rock-solid stable system combined with just-released new software.
Debian releases are also very important events because they influence all of its descendants like Ubuntu, Armbian and Raspbian.
Personally, I'd be fine with using Sid as a desktop as long as all data was backed up - but that should be a given for any OS.
I'm a bit of a hypocrite, though, as I use a Mac for daily use. I do run Debian Stable for my servers, though! With Bullseye nearing completion, it looks like it's about time to bake some new VMs.
Lol, I was going to make a btrfs quip and say I'd rather trust my data to Sid on XFS or JFS than to an alternative stable distro that uses btrfs (cough openSUSE cough).
There are features of btrfs that are currently considered experimental/unsafe to use like their raid-5 implementation.
I personally use btrfs raid-1 setups and have survived actual device failures without data loss. However, I also perform regular backups so I'm not overly concerned about "eat my data" bugs in a filesystem either.
I was under the impression that the data-eating RAID5/6 issues were patched "long ago", apart from the write-hole issue, which isn't likely to get fixed anytime soon. That means your data is (probably) safe, except for whatever was being written during the "write hole", though your array may crash and become read-only.
The kernel wiki says the following :
RAID56: Some fixes went to 4.12, namely scrub and auto-repair fixes. Feature marked as mostly OK for now. Further fixes to raid56 related code are applied each release. The write hole is the last missing part, preliminary patches have been posted but needed to be reworked. The parity not checksummed note has been removed.
Plenty of people, including me, have never lost a single bit to btrfs.
Now, is it super fast? (No.) Is it (or any other COW FS) the right choice for an SSD or database? (Probably not.) Is it the right choice for data that get read much more often than written and you'd like to be sure 10 years from now that you haven't lost any of it? (There's a pretty good argument to be made there, I think.)
I've lost* data with just RAID1 thanks to btrfs bugs post-1.0 that corrupted the entire metadata on both disks. There was no good place to get support nor any instructions on attempting reconstruction of corrupted metadata at the time, and I haven't bothered with it since. Apparently I wasn't the only one who suffered such a loss; as I recall, it was blamed on OS-integrated CoW under certain circumstances. But it was quite shortly after adopting it and not a particularly weird configuration, so I swore it off and have been happily btrfs-free since.
I should have known better, since during my initial evaluation in search of a better LVM for Linux, I set up a root non-RAID btrfs volume comprised of multiple dissimilar disks and lost all the data after an unsafe shutdown (a kernel panic that may have been caused by btrfs in the first place) even though all the disks were still functioning fine. I was an early adopter of ZFS - first under OpenSolaris, then under OpenIndiana, then (and now) under FreeBSD, so I thought I understood what "initial stable release" meant, but it is clear that what ZFS devs consider to be stable and what btrfs devs consider to be stable are leagues apart.
* I was able to use forensic tools and low-level fs-agnostic recovery methodologies to get some of the important stuff back, but the btrfs volumes were completely lost.
They'll have some write amplification at the filesystem level but should cause less amplification at the drive level.
Either way very few workloads get anywhere near wearing out an SSD, and the upsides of CoW features are almost always much higher than the risk of wearing out a drive. I'd say they fit just fine on an SSD.
I tend to agree with this. The times I've gone back to Debian, and ran testing or unstable, I still found it to be too slow for me. There are certain things where I want to closely track the latest upstream.
I also really found myself missing the Arch wiki, and ending up back there anyway. And I was customizing Debian so much, I might as well have just run Arch.
I was testing out MicroK8s on Ubuntu, as a Snap. Regarding stability, I can't say; I didn't use it long enough. What really annoyed me is that it's confusing as hell. Files are nowhere near where you'd expect them to be. When things break and you need to go look for configuration files, you'll find that they are hidden deep down in /var. Snaps add a level of complexity I'm not comfortable with, while I gain very little.
I am pleasantly surprised by how cleanly Snaps uninstall though.
Not at all. OS packages run scripts as root during installation, and this is why you should install software only from trusted Linux distros.
Sandboxing user applications is a completely orthogonal topic.
You can sandbox OS-installed applications or other applications. With the same tools.
Both options equally require custom configuration depending on what files and directories you want to allow or restrict access to (and syscalls and so on).
Fine grained permission are a personal choice and both firejail and nspawn make it quite easy to configure.
I hate snap (no experience with Flatpak). I have set up my Ubuntu to not use snap, and have snapd removed. I install Chrome from Debian Buster because that way it is not "snap packaged".
Since I use nearly exclusively open source software I tend to trust it. I see the extra security benefits of snap and want that for sure, but all the trouble I had using it was simply not worth it for me.
I'll come back in 10 years when snap and Wayland have actually matured :)
Gotten used to it (first Debian); it has a decent desktop and server game, kinda "just works", and if not, more people are stuck on the same problems and solutions get published.
Yep, I just moved to Pop from macOS after many years, and while I would rephrase it "just... 'kinda works'" (but I would say that about macOS also) it is a pretty great distro from the perspective of the "I do not wish to spend more than 5 minutes tweaking my operating system" user.
If using Gnome, using Sid is worth it. A lot of hate directed at Gnome is because of old versions, or distribution-specific hacks. I use the latest beta of Firefox for the same reason (as a webdev, worth it).
Edit: the main risk with Sid is with proprietary software, such as Zoom. There's always a fix somewhere, but fiddling around can be distracting.
In theory it's supported; in practice you might lose the machine.
I tried it on 3 test VMs last month and nuked 1 of them: it failed somewhere mid-process after removing half of the system packages. That machine never rebooted.
As an anecdote, it's possible to upgrade a Windows machine in place from XP to Windows 7 to Windows 10.
I've done it at home for multiple computers and it works. I don't do that sort of thing at work, however.
There are obviously some limitations on the hardware if you really want to go all the way from XP (32-bit), like it cannot take an SSD or a modern high-core-count CPU.
P.S. Debian doesn't run anywhere near the number of desktops and servers that Windows runs. Debian has a tremendous number of embedded devices, but they never get upgraded, so they're not really relevant to OS upgrades.
There have been about 300 million Windows devices shipped a year for the past half decade. They must be mostly desktops, unless Windows Mobile made a return that eluded everyone :D
Funny enough, for servers, both Windows and Linux (all distributions) have 27% market share.
I am not sure what you are trying to argue here. The topic was the upgrade process. Even Windows 11's hardware requirement, which is sad but too recent to be relevant, does not stop it from upgrading from Windows 10 on unsupported machines.
And even if Debian runs on more machines than Windows (which I doubt), most of them are probably VMs, that have only a few apps installed, and just get rebuilt instead of being upgraded. Which means that fact is also irrelevant for dist upgrades.
It's not a like-for-like comparison; Debian runs on many more platforms and supports far more exotic hardware configurations than Windows.
I've also never had a Debian upgrade trash a filesystem entirely either, but have had that happen on a Windows 10 "upgrade", when it decided to overwrite the MS dynamic disk software RAID superblock with garbage
(not to mention the random registry corruptions that seemed to plague Windows installations in the not-too-distant past)
Actually, I was setting up a new laptop two weeks ago and installed Debian 10 because I had it on USB. When I booted I couldn't get the WiFi to work, because the kernel in Debian 10 was too old. So, using the ethernet port, I updated to Debian testing (which had frozen for 11) and installed the wifi firmware. Boom, I had my wifi working.
Today I apt-get updated again and the same machine now reports it's on Debian 11. Upgrading is pretty painless, to be honest. Just follow the basic apt-get update && apt-get dist-upgrade.
Latest stable macOS release is in the 11.x range. But it's likely that macOS 12 will be released before Windows 11, so GP's magical moment might not come to be.
Sad to see support for QNAP dropped in the kernel packages for armel, presumably because the vmlinuz kernel is 500kb too large for the default flash partition.
This issue will also hit many mobile/embedded devices in the future where the kernel has to be flashed separately from the main installed system. (Debian does provide a flash-kernel workflow for this purpose.) The general solution is to use a separate bootloader (even Linux itself can be used as a bootloader via kexec), though this can involve a bit of wasted space.
Fingers crossed a solution can be found during the lifetime of Debian 11. The entire distro is ready for armel; such a shame that 500kb of missing flash partition space holds it back :-/
I saw one of the bug reports had a message talking about using a different flash partition, but it involved a lot of scary U-Boot hex memory addresses, which without a serial console could be a quick way to make a brick...
I don't know anything specific about your situation, so I'm sorry for sticking my nose in here, but isn't this a case where you just need to compile your own kernel with a bare minimum of modules for your hardware? Surely that would save 500kb.
I'll have to look into it. I was hoping to ride the official Marvell kernel binaries, because building from custom sources on armel would be a great inconvenience. I'm also not sure I'll be able to find 500kb to cut that the Debian people couldn't carve out. I would have thought these QNAP devices would make up a large percentage of the armel install base. Grateful that armel is still officially supported; just crazy that 500kb of missing default flash partition space is blocking access to the dozens of gigabytes of official Debian armel goodness.
I never managed to install it on the desktop though.
How do you install Debian on a desktop these days?
Usually, I download it, dd it onto a USB stick, boot from it, and then I'm stuck because it does not have my wifi drivers and I don't have a cable to connect my desktop to my router.
These images are only "unofficial" due to the aforementioned non-free firmware; everything else in them is the same as in official images. Debian cannot guarantee that this firmware will be supported in any real sense, being proprietary; they're simply making it available for the user's convenience.
Debian will not ship non-free software in its main repos / official installers. The unofficial images are exactly that, created outside the rules of the project to avoid this prohibition. But, you don't have to trust them.
You can download and place the non-free firmware files on a thumbdrive yourself, and the official installer can use them. It will stop and prompt you to insert the thumb drive if it is necessary to complete the installation.
If you e.g., netinstall over a wired ethernet connection, it is rarely a problem. But, all 802.11ac and above wifi chipsets require a non-free firmware. You can always add non-free to your sources.list and install nonfree firmware after installation for any devices other than those required for installation, i.e., your network adapter (and in maybe a super rare case for a desktop install, your disk controller).
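For reference, "adding non-free" just means extending the components on the existing lines and then pulling in the firmware package for your chipset; a sketch along these lines (mirror and package name are examples):

# /etc/apt/sources.list
deb http://deb.debian.org/debian bullseye main contrib non-free
deb http://security.debian.org/debian-security bullseye-security main contrib non-free

# e.g. for Intel wifi chipsets
sudo apt update && sudo apt install firmware-iwlwifi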
My first problem with this is the Firmware link you posted. I don't know what to download from it.
My second is that I would like to boot from an USB drive first to try Debian 11 for a while without installing it. So there is no installer asking me to insert a thumb drive. I just dd the downloaded ISO to the USB drive.
That's pretty much the same for the desktop, except you are maybe more often in need of some non-free driver that you have to keep available at hand (eg. on another USB stick) for when the installer asks for it. I find it simpler to do the initial install with an ethernet cable, and only then install the Wi-Fi drivers (or the Nvidia proprietary driver) using the non-free repository.
After installing your desktop environment of choice (eg. KDE), the user experience is very similar to that of Ubuntu, except a bit more barebone (or more minimal, or less bloated, however you say it) and more stable (in the Debian sense of stable; eg. you get Firefox ESR instead of the latest Firefox).
I like the stability of it: there are very few updates, and even when there are, I've yet to see one that breaks something.
That being said, I wouldn't use Debian on all my desktops: having few updates is not always desirable.
If you run Debian stable (or oldstable), you only get back-ported bug fixes, and no new features-- the upstream versions of the packaged software are "frozen" before each new major Debian release.
So, you don't get a million updates caused by exactly tracking the upstream. You only get the updates necessary to address bugs.
A huge upside of this is that you don't have to worry about breaking changes until a major version upgrade. E.g., some years ago upstream TCL changed its default from blocking to non-blocking I/O. I had to change nearly every TCL script I had to account for this change after dist-upgrading to the new Debian version. It was nice to not have a breaking change like this occur with just some random automated update.
If you want Debian + rolling release, there is testing and unstable. But, packages in testing only get security patches when they trickle down-- there is no explicit security patching for testing. E.g., I wouldn't run a web browser or other security critical software from the testing release where it is exposed to the outside world. Probably safer to either run sid/unstable for a desktop, or make careful use of pinning. For my own desktop, I've been running Bullseye testing, but pin Firefox back to Buster since it doesn't have any library conflicts.
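For anyone curious what that pinning can look like, here's a rough sketch, assuming the buster repos are still present in sources.list (package name and priority are illustrative):

# keep firefox-esr coming from buster while the rest of the system tracks testing
cat <<'EOF' | sudo tee /etc/apt/preferences.d/firefox-buster
Package: firefox-esr
Pin: release n=buster
Pin-Priority: 1001
EOF
# 1001 is high enough to prefer (and even downgrade to) the buster build
sudo apt update && sudo apt install firefox-esr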
Firefox and Thunderbird are handled differently. These packages track upstream ESR versions in Debian "stable", and the version can change in the course of a stable release.
Well, yes; you don't get much to update with Debian, whereas using Arch, for example, you have many updates, very often. Sometimes having many updates is what you want (therefore use Arch) sometimes it's not (therefore use Debian).
You can grab the unofficial images as others say. Or, simpler still, just USB tether your phone to your desktop and use that internet connection to install any necessary wifi drivers from the non-free repositories.
Personally, I use the non-free netinst CD. When I get to tasksel I only check "Standard system utilities". After rebooting (into a system without even X) I install wifi and GPU drivers, microcode, "kde-plasma-desktop" and "plasma-nm". This way I get a very minimal and lean KDE desktop, and after rebooting I install other programs as I see fit.
I use one of these two, whichever is better for a given case:
- install it using debootstrap from some live system (https://www.system-rescue.org/, or Debian on a usb stick). Internet works in this live system, so I can install wifi firmware and other required packages
- install it into a VM (I have a VM with prepared Debian system -- e.g. with my SSH keys, with my .bashrc and .vimrc), boot live system on the target machine, rsync the prepared system there
> then Im stuck because it does not have my wifi drivers and I don't have a cable to connect my desktop to my router.
I'd assume you have more than 1 computer at home. So use a USB stick to transfer the missing files. Not convenient, but you don't install a new Debian that often.
How do you not end up with piles of ethernet cables? They seem to throw them in the box with all kinds of stuff you wouldn't expect including blu ray players.
If you're running Debian Stable, using the backports repo is generally preferable to installing packages from the testing or unstable channels. The latter will give you an unsupported configuration that might be broken to begin with, or break without warning in the future as stuff gets upgraded further.
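For anyone who hasn't used it, a minimal sketch of the backports route on bullseye (the package name is just a placeholder):

# enable the backports suite
echo 'deb http://deb.debian.org/debian bullseye-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
# backports are never installed by default; you have to ask for them explicitly
sudo apt install -t bullseye-backports somepackage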
They included a JDK 17 pre-release, so an upgrade through backports in a month will be easier then. (It's in the release notes linked here.)
Same reason why v15 and v16 are only in Unstable right now: they were not meant to be released with Debian 11, so they were removed from Testing.
Usually Testing and Unstable are mostly in sync, apart from the packages in Unstable that are causing serious trouble, of course.
But right now Testing is mostly identical to the (still upcoming) stable Debian 11. So it makes no sense to judge availability of packages in Testing until the release is out.
Not a bad idea! Guix has openjdk, gcc, clang, cmake, etc. It seems a lot more mature than when I last checked. How easily does this play with a dev environment? Am I gonna have to do weird things to make tools find their toolchains?
I tried to grok Nix a few times and never understood how it works (since it seems like a rolling release where every package can build against any version of its dependencies).
I think the most significant feature in Debian 11 for me is WireGuard. Debian is the only distribution that I could install on a 256 MB VPS to use as a VPN.
Probably futile, but...
Please put wicd-curses back in the repos. Rumors of its deprecation have been greatly exaggerated. I find it continuing to work as well as ever, enduring as the only acceptable means of clean network management available on Linux.
-Debian Testing user
More practically, I beseech the compassionate and unglabrous-necked wizards of yore to impart their glorious mana to this dear and ailing friend, wicd. Please resurrect it. I beg.
I have no familiarity with the internals of wicd, but from a glance at the most recent commits it looks like wicd-curses has already been migrated to Python 3: https://git.launchpad.net/wicd/log/curses
I know it may sound silly but I really like that they’re still sticking with the Toy Story characters!
Although I cannot say I even remember Bookworm (Debian 12).
Congrats on the release! I wish the "(almost) always releasable testing" proposal would get more traction though. These long freeze periods aren't very good if you are using testing as a rolling distro.
I am happy for Debian even though I am on the Fedora family side (since I worked at Red Hat as a packager and it stayed with me). Congrats on the release! It's good we have options and what would Ubuntu do without Debian anyway? :)
Some people in the discussion mentioned old versions of packages. I would like to mention that in the Fedora/CentOS/RHEL we now have modular packages and therefore various versions of Ruby, Python, PostgreSQL at once. Maybe something Debian can steal.
If modular packages or a good SELinux policy sounds like something you might like I am putting together a book on deploying web applications to get people started with Fedora/Rocky/CentOS defaults: https://deploymentfromscratch.com/
I take real offense to this politically incorrect descriptor used in the documentation.