Why is Debian the way it is? (blog.liw.fi)
312 points by brycewray | 2023-10-08 | 198 comments




> what was “free software” was defined by the Free Software Foundation, but in a way that left much to be interpreted

I don't understand what the author means here. What is unclear about the four freedoms? To me, Debian's definition looks redundant.


You would have to compare them as they stood at the time. Things have changed. Back then, Ian felt the FSF's definition didn't do enough. Revisions and decades later, they're basically at parity.

It's like the US constitution and the bill of rights. Everything in the bill of rights is technically redundant and covered by principles already expressed in general form in the main constitution.

And yet that isn't actually good enough. Those general principles, being general rather than specific, require the key word, interpretation, in each new specific context. And different people with different goals can and do always, ALWAYS, warp interpretation in infinite ways that all sound perfectly reasonable on their face, while someone else can always produce a totally different interpretation that also holds together.


Can you give any examples of software projects or licenses in which there's room for genuine disagreement in interpretation of these rules? I'm having trouble imagining it.

Does a licence that says "you must distribute the source unmodified, but you're allowed to distribute patches with it and a build system that applies them" count as free?

The DFSG said yes.

Does a licence that says "you may modify this and redistribute it as much as you like, but if you put it on a CD then everything else on there must also be free" count as free?

The DFSG said no.


Considering how the FSF's own AGPL conflicts with freedom #1, they clearly can't be that clear.

Do you have any link to read more about it?

Well, certainly part of it is that much of the .deb ecosystem still feels like it's stuck in the late 90s Linux era to me. But maybe I'm alone on that gut feel.

"stuck in the late 90s" in what sense? Building them? Distributing them? (Dependencies?)

I would say the process of getting "maintainer" status.

That they work reliably?

They work reliably with a lot of unseen work. Debian is a great distro as a user. The .deb packaging tools I found to be a huge hassle and made packaging up missing or updated or custom software a pain in the ass.

Exactly, this is why it takes literally years of human effort to get a new version released

Have you built a .deb with debuild and debhelpers? Can you recall all the goofy rules about what needs to go in each file and what each debhelper does? How do you keep all of it in version control and build downstream packages that depend on the changed one? Have you ever accidentally snuck a local build-machine dependency or piece of information into the final .deb, only to realize it much later?

I found it to be a big, easily forgettable ball of twine of rules, tools, and oddities, such that every time I needed to redo something it was a hassle, with many footguns that lead to odd package issues.
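
For anyone who hasn't been through it, a sketch of the minimal layout debuild/dh expect (assuming a modern debhelper-compat level; names are placeholders):

  # a sketch of the minimum a dh-based package needs (debhelper-compat 13 assumed)
  debian/changelog        # maintained with dch; sets the version and target distribution
  debian/control          # Source/Package stanzas; Build-Depends: debhelper-compat (= 13)
  debian/copyright        # machine-readable licensing info
  debian/rules            # executable makefile; usually just "%:" followed by a "dh $@" recipe
  debian/source/format    # "3.0 (quilt)" or "3.0 (native)"
  $ debuild -us -uc       # build an unsigned .deb into the parent directory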


Completely agree with your assessment of the building process. The majority of the complexity involved in the process is not inherent.

Better being in the 90s than using snappacks or whatever.

Yes, but at least I don't see ads on my operating system for Viagra. I'll take 90s boring Linux over windoze every day of the week.

Deb packages just need reinventing by some younger people.

I’m thinking: implemented in nodejs with a cli with emojis and animations.

The GitHub read me should have a minimum of 80 emojis and a meme or two.

The core dependency of the cli should be a 6 month old framework with 92 commits. When that library reaches 1 year old, it should be swapped out for something newer as the sole maintainer will have left their job and/or got bored with the project having spent so long on it


Excellent straw man takedown

> the .deb ecosystem still feels like it's stuck in the late 90s Linux era to me.

Can you elaborate, please? What do you mean by this?


Quirky, clunky tooling with a bunch of arbitrary shit you have to memorize to use correctly. People enjoy that kind of thing after they learn it.

Enjoy? No. But yeah, it gets easier once you've done it a few times.

Not really enjoy doing, but enjoy knowing arcane shit even if it doesn't make sense.

Have you anything to point to? I don't remember any nonsense in packaging a deb.

I'd like you to explain this a bit deeper. As a user from the '90s I am quite happy with it, but I might be stuck in my ways, who knows? So not saying you're wrong, but what are we missing exactly?

what deb and rpm and any similar packaging are missing is true version control of the whole packaging ecosystem. at present it is difficult to track which combination of package versions has been tested, and you can't easily roll back to a tested combination. the current systems assume that versioning is linear and that a newer version of any package is always better than an older version. downgrading any package so that every user of the distribution can benefit from the downgrade is difficult and confusing.

there are distributions that provide rollback. but as far as i can tell they require you to keep the old version to roll back to, stored on your computer. you can't roll back otherwise.

i really want this to work like revision systems for code, where i can just checkout any old version that was committed.

another feature that i would like to see everywhere is stickiness of the packaging source. currently, if i include additional repos they override the main repo, such that the newest version is always picked from whichever repo has it.

this makes it difficult to include less trusted 3rd party repos. i would like to be able to add 3rd party repos such that only the packages that i explicitly install from that repo will also be updated from that repo, while any other packages in that repo will be ignored unless no other repo has them.

in debian it is possible to set priorities for different repos, but that is not easy to manage. the priorities to have each package update stick to the original repo should be default.

guix and nix do provide some of this as far as i can tell, but i am not a fan of keeping every package self contained with massive link trees. (i may change my mind on that some time maybe, but that's what i feel for now)

conary was/(is?) a packaging system that did have both of these features, although, according to some of the developers, the repository was a bit clunky and could have been better. but that was under the hood, not noticeable to users and packagers. i loved working with it and i wish foresight, the distribution using it, had become more popular so that it would have had the manpower to keep going.


With snapshot.debian.org you can roll back to any version of any package that has been saved. Using that you can even bisect where issues started.

https://manpages.debian.org/testing/devscripts/debbisect.1.e... https://wiki.debian.org/BisectDebian
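
As a sketch, pointing apt at a snapshot looks roughly like this (the date and package/version are placeholders):

  # /etc/apt/sources.list.d/snapshot.list -- pin apt to the archive state at a given timestamp
  deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20231001T000000Z/ bookworm main

  # then downgrade to whatever version that snapshot carried
  apt update
  apt install somepackage=1.2.3-1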


You can use pinning to achieve what you want with 3rd-party repos, using the origin feature.

https://wiki.debian.org/AptConfiguration#apt_preferences_.28...
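
A rough sketch of that, assuming a hypothetical third-party origin and package name:

  # /etc/apt/preferences.d/thirdparty -- keep the whole repo at low priority...
  Package: *
  Pin: origin "packages.example.com"
  Pin-Priority: 100

  # ...but let one explicitly chosen package track that repo
  Package: example-tool
  Pin: origin "packages.example.com"
  Pin-Priority: 500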


A few of my gripes:

* creating Debian packages is a difficult process to learn. There’s a bunch of tools and wrappers around those tools to handle inadequacies and it’s not clear how to do it “right”. For example, if I just have a binary and want to put it into a Debian package without pulling in some random shell scripts that aren’t part of Debian proper, it’s not immediately clear how to do that.

* Non-atomic and imperative install/remove/etc. hooks make it difficult to robustly handle failures. Since there is no file system manifest or sandboxing, there’s no real guarantees that removing a package will actually remove it.

* there’s no spec for a .deb package. This means the only way to correctly generate a Debian package is through the difficult tooling. I understand why this choice is made, and it has a lot of benefits for the continued evolution of the debian project. However, it makes it difficult to build better tooling around deb packages.


The spec for .deb packages is in the deb(5) manual page:

https://manpages.debian.org/bookworm/dpkg-dev/deb.5.en.html


That's not a 'spec' - it's a vague description of the archive names.

It and the other manual pages linked from it contain everything you need to make a .deb from scratch.
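
For what it's worth, the format is simple enough that a from-scratch build is roughly this (a sketch; dpkg-deb normally does all of it for you):

  # a .deb is an ar archive: debian-binary first, then the control and data tarballs
  echo "2.0" > debian-binary
  tar -cJf control.tar.xz ./control     # package metadata (the control file, maintainer scripts, ...)
  tar -cJf data.tar.xz ./usr            # the files to install, rooted at /
  ar rc hello_1.0-1_amd64.deb debian-binary control.tar.xz data.tar.xz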

I can’t find it right now, but the Debian docs explicitly indicate that you shouldn’t do that because the internal format is not stable. It has changed significantly in the past, and the only “supported” way to do it is with the official Debian tools

The .deb format doesn't change significantly at all, the only changes in recent years have been changes to default compression choices (old ones are retained forever though) and some additional fields in the control file.

The only significant future change I can think of is the mtree stuff, which would add an additional metadata file to .deb files.

There are several warts in the .deb file format that the dpkg folks aren't fixing because of compatibility concerns.

https://wiki.debian.org/Teams/Dpkg/RoadMap https://wiki.debian.org/Teams/Dpkg/Spec/MetadataTracking https://wiki.debian.org/Teams/Dpkg/TimeTravelFixes


If you "just have a binary" then you aren't compiling from source, which means you can't make modifications to that binary easily, which Debian wants to be able to do to fix security issues and other bugs.

There is a large range of documentation, everything from the simple to the more advanced to different packaging niches. The large amount of docs makes that harder to navigate though, and a lot of stuff is moving towards automated packaging these days anyway.

https://wiki.debian.org/AutomaticPackagingTools


Having to use these automatic packaging tools is exactly my gripe. I don’t want to have to rely on whatever scripts someone conjured up if it’s not supported by the distro itself.

I agree it always makes sense to have sources involved for packages that are getting distributed as part of the distro, but what if I want to just deploy my product on a server without the sources?


The tools I am talking about are integrated into, supported by, and used by packages within, the distro itself. I'm not talking about external things that make low quality packages.

If you want to make external packages (such as those that don't come with source), then you can just use `dpkg-deb --build` or one of its many wrappers to shove the binary into a .deb.
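
A minimal sketch of that (package name, version and paths are placeholders):

  # wrap a prebuilt binary into a .deb with dpkg-deb --build
  # layout:
  #   mytool_1.0-1/DEBIAN/control         <- the metadata below
  #   mytool_1.0-1/usr/local/bin/mytool   <- the binary to ship
  #
  # DEBIAN/control:
  #   Package: mytool
  #   Version: 1.0-1
  #   Architecture: amd64
  #   Maintainer: You <you@example.com>
  #   Description: locally built tool, no source package
  #
  dpkg-deb --build mytool_1.0-1          # produces mytool_1.0-1.deb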


LIW left out one major chunk: because Debian is a volunteer organization, and nobody can make a volunteer do anything that they don't want to do.

Left out another major chunk: the conflict that arose over the eventual systemd adoption. To me that conflict altered the concept of ''What is Debian'' permanently, for better or worse (depending on who you listen to).

Regardless of whether one likes or dislikes systemd itself, I think that unfortunate debacle can only be seen as causing harm to the entire Debian project.

The politics of it certainly generated a lot of distrust and resentment among the users and contributors. The project's reputation was undoubtedly hurt.

Perhaps most important were the technological impacts. It's one thing when a user can generally ignore the politics surrounding a Linux distro, and the software still does what it needs to do. It's another matter when one routine update after another causes their computer(s) to no longer boot, among other serious problems, all thanks to systemd. Users definitely notice incidents like that, and it decreases, or even eliminates, their trust.

So much hard-earned and invaluable goodwill was unnecessarily lost during and after that period of time.

If any good did arise from that situation, it was that more people became aware of the BSDs, or tried them again if they'd used them in the past. FreeBSD and OpenBSD saved users who needed the reliability and trustworthiness that Debian used to offer, before systemd negatively affected the quality of Debian.


> before systemd negatively affected the quality of Debian

Is there any publication quantifying this?

I followed the whole debacle with interest, and my personal experience with my servers was the exact opposite: adopting systemd improved reliability and made administration significantly easier. It's sad that this was politicized by a small part of the community, but the end result was worth it.


When I was trying to resolve the numerous problems that systemd was causing for me on multiple computers that Debian had previously worked perfectly fine on, I certainly ran across a lot of bug reports, mailing list postings, forum postings, IRC logs, blog articles, and other online communications from people who were also having problems with systemd.

Beyond that, I've heard of enough problems involving systemd from my Debian-using colleagues and acquaintances, too.

I don't know if it's been formally studied in any way, but it was clear to me that I definitely wasn't alone in experiencing problems involving systemd.

The widespread negative sentiment that exists toward systemd, including from well beyond the Debian community, didn't just come out of nowhere.

From what I can see, it was generated thanks to a lot of people directly experiencing a lot of unnecessary problems caused by systemd.


Yeah, I agree.

Systemd is incredibly controversial among a niche group of people who have strong opinions about how init & core system functionality should work, and then there's an outer ring of people who focus on one or two relatively minor problems that systemd caused, for which there are viable workarounds. Like how systemd terminates processes that belong to your session when you log out.

I remember writing SysV style init scripts or rc.d / BSD style init scripts. It was awful. You had all these copy-pasted shell scripts with various gaps in functionality depending on who wrote them. Getting a service to run in Systemd feels like heaven by comparison. I don’t even care about, like, Docker.

I think the reports of problems (like background processes getting terminated on logout, or how any security problem in systemd tends to be severe by nature) were just so numerous compared to the reports of the benefits (like the boot time improvements and the massive improvements to running daemons). It was some bad decisions and a lot of bad PR, but the overall impact IMO is very positive.


Have you looked at OpenBSD?

Here's the /etc/rc.d/sshd:

  #!/bin/ksh
  #
  # $OpenBSD: sshd,v 1.7 2022/08/29 19:14:25 ajacoutot Exp $

  daemon="/usr/sbin/sshd"

  . /etc/rc.d/rc.subr

  pexp="sshd: ${daemon}${daemon_flags:+ ${daemon_flags}} \[listener\].*"

  rc_configtest() {
          ${daemon} ${daemon_flags} -t
  }

  rc_cmd $1
Pretty much any service/daemon is similar. You define a few things and you're done.

A declarative unit file is still more readable for me.

Why in the world korn shell?

I stumbled upon an auditor with programming skills, who did all his scripting in Korn shell. To me it was like he was from another planet.


It's the standard shell on OpenBSD.

Here is the systemd unit for comparison:

  [Unit]
  Description=OpenBSD Secure Shell server per-connection daemon
  After=auditd.service

  [Service]
  EnvironmentFile=/etc/default/ssh
  ExecStart=/usr/sbin/sshd -i $SSHD_OPTS
  StandardInput=socket
I leave it to the reader to decide which of those two is easier to understand and maintain.

Debatable, but I was mainly reacting to "... rc.d / BSD style init scripts. It was awful. You had all these copy-pasted shell scripts with various gaps in functionality depending on who wrote them."

Simply not true in (modern) BSDs, well at least OpenBSD; I'm not familiar with the others.


Yeah, maybe it’s not inconsistent any more. But meanwhile, systemd has gotten really nice, even a good version of rc.d seems awful by comparison.

I had the same feeling when Apple came out with launchd in 2005. It felt like such a massive improvement over the existing state of things. Systemd also feels like a massive improvement.


Sigh.

I’m pretty convinced it was net positive in the long term. If Debian hadn’t gone with systemd, then where would it be today? Stuck in shell script hell?

We would have been forced to bite the bullet and implement a packaging extension that allowed debs to describe what to start at boot time, when, and how. The compatibility layer would have allowed any init system to be used.

Instead we welded the systemd engine into the chassis and pray that when it comes to replacing it we aren’t the ones on the hook.


> The compatibility layer would have allowed any init system to be used.

I don’t see how such a compatibility layer would work in a way that doesn’t suck horribly.


Yeah, it was annoying for a bit of time, but not anymore! I haven't had systemd installed in quite some time. sysvinit works without a hitch on bullseye and bookworm.

added: Debian's (to me) about (among other things) technical superiority, a robust packaging system, as well as user freedom and choice. It's super easy to not use systemd these days, "what is debian" didn't change, there was just a slight delay in reality catching up to principles. :)


For anyone who hasn't yet seen it, there was an interesting adaptation of the Debian systemd discussion into an Ace Attorney-style format: <https://aaonline.fr/search.php?search&criteria[sequenceId-is...>

Your comment makes it sound like volunteers have a great degree of freedom and that's not the case because, obviously, in these organisations you will be shown the door if you don't do what others tell you to do.

From the description, the Debian organization seems to be an anarchist one. A bunch of people, not coerced to be there, have created a diffuse rotating democracy for making decisions. Self-sufficiency is key to the organization, and it emerges from thoughtful usage of resources.

Why would you consider an organization with a constitution anarchist?

Contrary to popular beliefs, anarchists have no problem with constitutions or laws or governments. Anarchy is ANti-hierARCHY, which in the extreme case extends to not accepting the hierarchy of the State (i.e. an organization with a license to exert force over all others in society). But,

1. Many anarchist movements do not demand this much change. They only demand removal of the specific forms of hierarchy they think most problematic. The Occupy movement, for instance, was demanding the curtailment of the political power of the 1% over the 99%. It's always better to think of political movements as directions of evolution in political space, rather than specific destinations.

2. Then how do anarchists propose that laws/constitutions be imposed? By consensus and discussion. By making sure everyone is on board. Or by temporarily giving someone conflict resolving power (as in the Debian case). Plenty of societies and organizations operate this way, and work fine. Read The Dawn of Everything for some historical examples. See a region in Syria [1] as a modern example.

[1] https://en.wikipedia.org/wiki/Autonomous_Administration_of_N...


> From the description, the Debian organization seems to be an anarchist one.

Then it misled you. Anarchist organisations aren't typically characterised by large, long and complex sets of policies, constantly evolving, that are strongly policed. Often by bots.

This is the reaction from one software engineer that stumbled into Debian infrastructure: https://lists.debian.org/debian-devel/2023/09/msg00334.html

To quote one part of that email:

    I've been maintaining free software for 30 years so I've got a lot of experience with a lot of different tools, and I've rarely encountered anything that is as comprehensive and well-documented as all this stuff is.
This style of organisation is characteristic of engineering organisations trying to deliver high-quality products, not of anarchist organisations.

And while it's a flat(ish) style hierarchy, it has leaders (the DPL), a judiciary (the technical committee) and even behaviour police (whoever polices the conduct - it is policed).


Debian is Toyota. Reliable but boring. Except it’s also built by volunteers.

> The historic background for this is that the first Debian project leaders were implicitly all-powerful dictators until they chose to step down

What was this about ?


I assume the Ian Murdock (founder and Ian in DebIAN) transition to Bruce Perens, then Ian Jackson then annual project managers.

It simply means what it says. The first few leaders were simply in charge of everything like an owner, they made all major decisions themselves and told everyone else what the plan was, and each one did that job until they decided they didn't want to do it any longer and handed it off to the next leader. They were a dictator only in the literal sense that they dictated, not that they were tyrants.

They weren't literally an owner. That's why the "implicitly". Everyone was still only a volunteer. But everyone voluntarily let them call all the shots.

Then later they developed a formal democratic structure and the leader is more of a coordinator than boss.


Benevolent dictators

I worked with Ian Murdock at Purdue in the days of the very first release. He was a sysadmin and developer while I was a web designer for the libraries.

The guy truly believed in the GNU/Linux 'way' and 'free as in speech' software. His initial drive came from the difficulty of packaging and package management, and that is probably his biggest contribution. Network-of-Workstations (NOW... think peer-to-peer infrastructure) was his passion that he really never quite got going.

Bruce Perens, the guy he handed control over to, is the authoritarian leader being referred to. I like the guy. He's definitely in the old-guard, Linus Torvalds style of management. In big, complex projects with volunteers that style works.

Anyways, the old days of Linux and Debian were a blast. I never quite got into it like all these other people, but I miss those old days.

There are way too many money people involved today. So it goes.

Ian's manifesto explains it all, anyways.

https://www.debian.org/doc/manuals/project-history/manifesto...


Thank you for sharing this, it all rings so true. Love Debian and still use it.

What the hell happened with Murdock after Debian? His trajectory from when he stepped down until his death seems quite erratic.



I’m aware of the circumstances of his death. That doesn’t say anything about his trajectory since leaving Debian.

He started his own company in Indiana called Progeny (https://en.wikipedia.org/wiki/Progeny_Linux_Systems). My old roommate ended up working there. That's where they tried to get network-of-workstations (NOW) going, but it never really took off. That company sort of just ran out of steam (many Linux companies did at that time).

After that, I heard he was CTO of Sun. At that point his marriage had fallen apart and his drinking had become a problem (I got all this second hand through mutual friends).

Everyone was shocked by the suicide and the events leading up to it. It seemed like he spiraled at the end.


It's rare that we get to read a piece of content on the internet from 1994! Thanks for sharing this; it's older than me.

I still have fond memories of flame wars on alt.devil.bunnies and alt.pave.the.earth

It was a different world. Full of hope and wonder at this new thing. Remember the first major browser wasn't out until '94. We were all playing with Mosaic from the NCSA (which us Purdue kids got to have a small hand in).


With many other non-copyleft alternatives shaping up, and systems like ChromeOS and Android pairing the Linux kernel with a completely unrelated userspace, I firmly believe that when our generation is gone, Linux won't stay around in its present form for much longer.

I wonder if the end-game is a BSD, or some sort of hard fork of the Linux ecosystem.

Ubuntu and RedHat basically don’t work by any of my definitions of “work”.

They’re both enterprisey and bloated and flaky in all the ways Windows was in the 90’s, except they add flatpack/snap, letting each program be its own flaky OS install, compounding the problem. Want to save a file to ~? Read this 1000 page tome on the 21 successors to SEL first.

Anyway, my current heuristic is that if it defaults to systemd or wayland, then I don’t want to use it.

Debian was never the default for big sprawling corporations, so it’s not clear to me that just staying on the “suckless ethos” side of such an ecosystem fork would be that bad vs. Linux in its previous heyday.


[dead]

Perhaps that is the end game for you. My intent isn’t to say this to stir heated discussion but people have different opinions on flatpak and snap, in fact I even like snap besides the fact that you have to use snap’s store (you can’t just start your own snap repository and configure your local snapd to install from it.) No complaints with systemd, and you can still easily swap to Xorg. No idea what you’re referring to with the trouble of saving a file to ~, perhaps that’s a RHEL thing (it’s been so long since I’ve used RHEL.)

All in all, my intent is mostly just to say we’re not in consensus. I doubt a massive revolution is coming if your assumption is that it is because there’s some overwhelming majority with your opinions.


Personally, I like the ideas of Fuchsia OS, but it's a long way off from being a general-purpose OS. Still usable today, though, if you are willing to build the userland stuff you need yourself. Not a fan of Google, though. meh. https://fuchsia.dev/

But neither Android nor ChromeOS is self-hosting, in the sense that you can't use Android to build Android nor ChromeOS to build ChromeOS (well, for the latter maybe you could with a Debian container...).

So I think traditional Linux distributions will remain, at least as development tools.

Of course it is true that many end users these days get by with just a phone or a tablet, but this is a general trend and also results in fewer Windows users too.


A matter of improving existing toolchains.

Also, my point is about the Linux kernel being about as relevant as AT&T UNIX once the generation that created it is no longer among us.


If you want to get in on the ground floor of something today... Forth on RISC-V. Go nuts. https://hackaday.com/2023/01/08/forth-cracks-risc-v/

Now that is quite cool!

Can someone explain the controversy surrounding Bruce Perens? I never heard the story and Google isn't being helpful.

I’m also interested in any source about that. I’m reading a lot about open source history and can’t find anything about that story (which seems quite important for those who want to understand Debian history).

Perens was fairly dictatorial. He also viewed the open source thing as more of a marketing tool than an ethos. That rubbed a lot of people in the groups the wrong way. He also was an effective project manager. Little column a, little column b.

Several open source software organizations are remarkable. Not a little remarkable, but big remarkable. As in they show us how alternative models to the typical corporate business model may well be far superior as a way for people to collaborate. For the most part these remarkable organizations are unknown. I have used Debian for (ahem) a very long time without knowing much about the organization, and this article was a very good introduction.

I have been aware of the IETF for quite a while. What is most amazing is that the internet today was built (more-or-less) by the IETF. See The Tao of IETF (https://www.ietf.org/about/participate/tao/). This is an organization with no members. It just works. Hardly anyone really knows about it.

Just as interesting is what happened when the corporate world decided to compete with the IETF for control of how the internet worked. Some people call this the protocol wars. (https://en.wikipedia.org/wiki/Protocol_Wars). For a while it seemed like each month the OSI would announce a project to replace parts of the internet, like TCP, with an X.protocol. Of these efforts very few survived and thrived - like X.509.

The question that comes to my mind is whether these kind of democratic type collaborative organizations are in fact superior (far superior?) to the traditional corporate model. I personally have watched many corporations act with obvious stupidity. Doing things that can only be described as severely fight-their-way-out-of-a-paper-bag challenged. To put it kindly.

Certainly these other-style organizations do not really stack up on an economic basis. The income of most corporations dwarfs that of both the IETF and Debian. And yet as a contributor and creator, I can ask: cui bono? Certainly not the contributors; they subsist.

And perhaps most interesting to me, and perhaps worth an experiment, is whether it is possible to use an IETF or Debian style model that competes with the corporate model. It did work once with the Protocol Wars, so maybe.

(edit to remove markdown syntax, sigh)


> Just as interesting is what happened when the corporate world decided to compete with the IETF for control of how the internet worked. Some people call this the protocol wars. (https://en.wikipedia.org/wiki/Protocol_Wars). For a while it seemed like each month the OSI would announce a project to replace parts of the internet, like TCP, with an X.protocol. Of these efforts very few survived and thrived - like X.509.

I would not describe it as 'the corporate world decided to compete with the IETF' so much as 'governments tried to enforce their power'. IETF working groups are often full of engineers from corporate vendors trying to collaborate to ensure interoperability, while ISO is a traditional, top-down, government-led organization.


You are correct. A better way to put it. And it is true that the IETF working groups are often from corporate vendors.

So perhaps that part of my argument is completely wrong. And distracts from the main question: could other fundamental models of collaboration be significantly more productive (efficient) than corporate models?


not entirely. I was involved in the IETF in the early 90s. At that time the old guard were the sort of second wave of internet designers (Clark, Estrin, Zhang, Cerf, Deering, Jacobsen... not going to pretend to list everyone). They primarily worked off of (D)ARPA grants, although some of them did work at places like Parc, and certainly places like Cisco.

during that time, a lot more money was being dumped into this internet thing, and companies realized that if they could get their widget written into internet standards, it would be really good for business.

partially due to that, and partially due to a largely ineffective focus on multicast protocols (PIM, RSVP, etc.), these people became less central over time, and a lot of the formative protocol design activity stopped.

just my perspective, but it seems odd that we're still largely stuck in the early 90s protocol-wise. clearly there have been some changes (http3, bar), but not really much considering the relative timespans.

in any case, the point being that corporate involvement in the IETF wasn't a given in the early days, and it hasn't been an unqualified win.


proto design has been killed by incompetent enterprise firewall vendors and administrators (or, well, their managers).

the stupid dance tls1.3 has to do is the best case in point.


yeah. there are plenty of reasons. I do think the shift to the client(nat)/server(default-free) model did a lot of damage. ATM was also a huge suckhole that killed momentum. and I think ISPs just stopped listening to what the ietf had to say for the most part.

[flagged]

Paint yourself purple and join Debian - easy...

[flagged]

Because real diversity comes from people who look different.

[flagged]

There's an "anti-sysv-init train"?

`sysvinit` is still available in the current version of Debian, Bookworm, released less than 4 months ago:

https://tracker.debian.org/pkg/sysvinit

Note that `runit` is also available:

https://tracker.debian.org/pkg/runit

Also, while it's not the default init system, there are instructions for setting it as the active init system, on the Debian wiki:

https://wiki.debian.org/Init

Note that while packages aren't forced to provide native sysv init scripts (or native runit init scripts either), I hear that sysv init scripts are just shell scripts that are incredibly easy to write even for inexperienced sysadmins (much easier than writing systemd unit files apparently?) so cobbling together any that are missing for a sysv-based system shouldn't be much work.
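
For reference, the skeleton of a Debian-style sysv init script looks roughly like this (a sketch; "mydaemon" is a placeholder):

  #!/bin/sh
  ### BEGIN INIT INFO
  # Provides:          mydaemon
  # Required-Start:    $remote_fs $syslog
  # Required-Stop:     $remote_fs $syslog
  # Default-Start:     2 3 4 5
  # Default-Stop:      0 1 6
  ### END INIT INFO

  DAEMON=/usr/sbin/mydaemon

  case "$1" in
    start)   start-stop-daemon --start --quiet --exec "$DAEMON" ;;
    stop)    start-stop-daemon --stop --quiet --exec "$DAEMON" ;;
    restart) "$0" stop; "$0" start ;;
    *)       echo "Usage: $0 {start|stop|restart}" >&2; exit 1 ;;
  esac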


A .deb package being available is not the same as the init scripts being maintained, or being allowed to select an init system during install.

Sysv init scripts for every package already existed, since that was the only init system in use by Debian until 2014. People volunteered to keep maintaining those, but they were overridden in favour of a systemd-only approach.

It seems you didn't use pre-systemd Linux. It felt like you controlled the whole OS. Linux has always been about giving control to users. Systemd works great, but it's geared towards corporate users who want better manageability. So in a way, by violating the unix modularity philosophy, major distros sold out to corporations. Now systemd manages not just init but dns, logging, cron-type scheduled jobs, fs mounting, system time, etc... It forced architectural changes where you either accepted the systemd way or the unix way. And a lot of projects caved in.

Openrc and sysvinit manage init scripts/services. That's it. You know how they work, and they are designed to be compatible with anything else. If you need scheduled jobs managed, for example, sysvinit or openrc have no opinions on how to do that; you can use cron or your own thing, you have full control. Systemd on the other hand has timers; they work great, but guess what they don't play nice with? Openrc and sysv init. Guess what plays nice with all 3? Cron and its many implementations.
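
To make that concrete, here's the same "run a script nightly" job both ways (a sketch; the paths and names are placeholders):

  # cron: one line in /etc/cron.d/cleanup, works under any init system
  0 3 * * * root /usr/local/bin/cleanup.sh

  # systemd: a timer plus a service, only usable when systemd is PID 1
  # /etc/systemd/system/cleanup.timer
  [Timer]
  OnCalendar=*-*-* 03:00:00
  [Install]
  WantedBy=timers.target

  # /etc/systemd/system/cleanup.service
  [Service]
  ExecStart=/usr/local/bin/cleanup.sh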

The modular design of Linux gave users power. The centralized opinionated systemd design gives the few "elite" influential people who work for big corporations power and control because their design does not take into account interoperability with arbitrary services.


[flagged]

I think it was why they got on the systemD train. It was the init system they thought they could maintain well.

Sounds like there were lots of hurt feelings in the whole situation. Did some major Debian maintainers actually leave Debian for Devuan? If Devuan's contributors were originally just Debian users, it seems like no loss for Debian and a win for the community (more developers).


They could have permitted volunteers to maintain sysv or openrc versions of init scripts and made it an installer option. I believe devs and users alike were part of the devuan fork: https://www.devuan.org/os/announce/

> the leadership team

There's no such thing in Debian.


Whether it was formal or not, since the decision was not made by a vote of contributors, there is such a thing, even if it is informal.

There was a vote about systemd in 2019:

https://www.debian.org/vote/2019/vote_002


TIL Debian is mostly packaging.

Love it, Debian is amazing.


Packaging, and more importantly: package dependencies, are (imho) the essence of building a successful distro.

Get this wrong, stuff breaks regularly, and one ad-hoc fix follows another, forever.

Get this right, and everything Just Works™ (generally).

Debian is very good in this regard (along with the BSDs, I'd say).


Debian is very much one ad hoc fix after another, to an incredible degree really. Go dive into the source and patches and you’ll see how much duct tape there really is holding things together. What’s so great about Debian though is it effectively hides all that pain from you (most of the time).

Yeah, that's 90+% of what a Linux distro is, packaging software from a wide variety of projects and trying to make a coherent whole. In Windows/MacOS we have one company making the kernel, basic libraries, desktop manager and some applications, and separate independent software developers making and packaging their own applications (although that changed a bit in the last decade or so).

I just switched to Debian this year after ~13 years of Ubuntu, and I really appreciate it

It grew on me after a long time. I always thought it was not the most "technically sound" way of doing things

i.e. I don't really like the packaging model of global updates where you don't know what's going on, and sometimes there are version conflicts

But I have come to appreciate the stability and good intentions of the Debian project

Sometimes it's not technical excellence that matters the most, but the purpose and goals of the project


Yeah. I keep coming back to Debian after trying out another distro for a while.

There are some specific complaints I have about technical choices for Debian, like the way daemons autostart post install. But these complaints are outweighed by the benefits of using a distro with coherence across packages and upgrades.

Apt is also just such a phenomenal package manager. It is fast out of the box, and supports some relatively tricky scenarios—like using stable for your system, but a newer Nginx from backports. Feels like I can get the newer features for the one or two packages that I really care about, and then use something stable and boring for everything else.
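
As a sketch, that setup is roughly this (bookworm and nginx as the example):

  # /etc/apt/sources.list.d/backports.list
  deb http://deb.debian.org/debian bookworm-backports main

  # backports sit at a low pin priority, so only packages you explicitly ask for come from there
  apt update
  apt install -t bookworm-backports nginx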


Yup exactly, I hated the daemon autostart thing.

But it's a small issue compared to the mess I see in the rest of software these days ...

Alpine Linux seems interesting too, although right now Debian suits me well. I guess the problem is that I still don't make Debian packages myself, while Alpine's APKBUILD seems more approachable -- pure shell, while Debian has an array of tools and formats.

But Debian "lagging" a bit can be a feature, not necessarily a bug.


I'm a Debian user since 1998 and have had it as my personal desktop since that time. I'd think that counts for something.

> Apt is also just such a phenomenal package manager. It is fast out of the box,

This wasn't always the case. There's a good reason almost all guides first written before 2015 specifically instructed everyone to use 'apt-get' directly. For quite some time the more uniform 'apt' frontend really wasn't intuitive or helpful. (Just to be clear: these days it is phenomenal in its simplicity and clarity.)

And as someone who has had to dive into the package managers' code bases, the overall quality of libapt used to be .. questionable. Figuring out code and control flows back in 2010 was like trying to rub chili out of your eyes with an unsanded wooden spoon.

But the sheer bullheaded stubbornness Debian imposes on their package universe and its architecture means it's an absolute joy to work with if you're doing any kind of distro customisation work.


FWIW the current recommendation is still to use apt-get in your scripts. It's less about stability and more about the intention of backwards compatibility. Perfectly fine to use apt interactively; it is, as you say, phenomenal, simple and clear.

from apt(8):

     SCRIPT USAGE AND DIFFERENCES FROM OTHER APT TOOLS
       The apt(8) commandline is designed as an end-user tool and it may change behavior between versions. While it tries not to break backward compatibility this is not guaranteed
       either if a change seems beneficial for interactive use.

       All features of apt(8) are available in dedicated APT tools like apt-get(8) and apt-cache(8) as well.  apt(8) just changes the default value of some options (see apt.conf(5) and
       specifically the Binary scope). So you should prefer using these commands (potentially with some additional options enabled) in your scripts as they keep backward compatibility
       as much as possible.
https://manpages.debian.org/bookworm/apt/apt.8.en.html#SCRIP...

For quite some time the more uniform 'apt' frontend really didn't exist at all.

Indeed. The "apt" frontend was added in 2014 and included in a stable release in 2015.

So I don't know what tool other than apt-get the guides first written before 2015 could have used.


aptitude. And before that, dselect (which I believe predated APT entirely).

Oh my, I don't miss the dselect days.

OTOH aptitude is, to this day, a joy to use.


> For quite some time the more uniform 'apt' frontend really wasn't intuitive or helpful.

What do you mean?


> like the way daemons autostart post install

This can be configured with service-policy.d(5), see https://packages.debian.org/bookworm/policy-rcd-declarative-... for example.
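
The older, non-declarative mechanism still works too: invoke-rc.d consults /usr/sbin/policy-rc.d if it exists, and exit code 101 means "don't start". A sketch:

  # globally suppress daemon autostart on package install
  printf '#!/bin/sh\nexit 101\n' > /usr/sbin/policy-rc.d   # 101 = "action forbidden by policy"
  chmod +x /usr/sbin/policy-rc.d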


Same here, switched my servers from Ubuntu to Debian. Main reason: it uses (boring & old) working technology. No more netplan, snapd, systemd-resolved.

Note: the Debian cloud images use netplan.io

> version conflicts

What are you getting conflicts on? Unless you’re pulling from Sid, and did something fun like upgrading libc6, you shouldn’t see version conflicts if everything was installed via apt.


It hasn't happened to me in a long time, but when I first started using Debian/Ubuntu I ran into it and was confused

Debian does have Conflicts package metadata - https://www.debian.org/doc/debian-policy/ch-relationships.ht...

So in theory I don't like it, but I now better understand the possible reasons for it, and I haven't run into it recently

I think there is room for other systems that don't have this problem, but Debian is good at what it does, and you can build other things on top of it


>"Self-contained"

I love that whole paragraph. And in general prefer their philosophy.


Debian doesn't go far enough on that point, the folks at GNU Guix and Bootstrappable Builds are getting to the point where they can build the entire distro from source starting with only ~500B of manually written machine code.

https://bootstrappable.org/ https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-...


Debian is enough for me in this regard. I am not chasing absolute perfection.

Sometimes I daydream about getting a fuck-it amount of money.

During this thought process I always make a plan of what open source software project I should donate, and debian is always one of the first several candidates.

Now I just need the money! (meanwhile I donate to debian anyway)


This article from 2020 says Debian doesn't need money: https://www.theregister.com/2020/09/10/debian_project_addres...

Debian can't really directly pay contributors (there are some rare few cases like lawyers, etc....), so that would be one of the reasons for what the article is talking about.

The best thing someone could do in this scenario would to be hire someone to work on/improve Debian directly.


Someone could form a nonprofit org that funds packaging work in Debian. Or maybe even a for-profit one. I'm pretty sure a lot of big consumers would rather pay for expertise instead of having an in-house Debian "upstream" team.

https://www.debian.org/donations

> The easiest method of donating to Debian is via PayPal to Software in the Public Interest, a non-profit organization that holds assets in trust for Debian.

https://www.spi-inc.org/

> Software in the Public Interest (SPI) is a non-profit corporation registered in the state of New York founded to act as a fiscal sponsor for organizations that develop open source software and hardware. Our mission is to help substantial and significant open source projects

Edit: But also, freexian

https://www.freexian.com/lts/debian/

> To achieve the 5 years of support, and properly cover all Debian packages, Freexian organizes a corporate sponsorship campaign with the goal of funding the work of multiple Debian contributors who are established as independent workers.

> If you are not yet convinced, here are seven reasons why you should help fund the Debian Long Term Support initiative (LTS):


But what improvement does Debian really need, beyond the capabilities it already has?

Linux in general could use various improvements, if some billionaire decided to fund people to work on it. But I don't really see how any distro could use the lion's share of that funding. Instead, much of it probably needs to go to infrastructure things like drivers, Wayland, etc., so that Linux works better on people's computers. Other things that could use development funding are various applications. But these are things that all distros share.


Me too, but I'd do my own projects rather than donate. Much more fun!

Debian could be great except for driver support which they only tacitly acknowledge:

https://www.reddit.com/r/debian/comments/paxj85/why_debian_w...

"We acknowledge that some of our users require the use of programs that don't conform to the Debian Free Software Guidelines. We have created "contrib" and "non-free" areas in our FTP archive for this software."

I had it running on a couple of my machines about 1 or 2 years ago, and an update came in for WiFi that bricked them. I started looking into rolling back or whatever and just decided to switch those to Ubuntu (or Kubuntu actually), and they work great and have had no issues.


They've now relaxed their (stupid) policy so at least the default ISO includes non-free drivers.

When it comes to an already installed system, enabling the non-free repos and installing linux-firmware (or more specific firmware-* package for your hardware) should fix it.
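
On bookworm that's roughly this (a sketch; firmware-linux is the non-free metapackage in Debian, or pick the specific firmware-* package for your device):

  # /etc/apt/sources.list -- add the non-free components
  deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware

  apt update
  apt install firmware-linux        # or a more specific firmware-* package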


If the default ISO already included non-free drivers, why would you have to separately enable the non-free repos to get firmware?

My Debian 12 install didn't come with proprietary Nvidia drivers, nor did it ask me if I wanted them during installation. I had to enable the non-free-firmware repo to get them.


> If the default ISO already included non-free drivers, why would you have to separately enable the non-free repos to get firmware?

I'm not 100% sure about this but I believe it may enable it for you automatically if non-free firmware was used during the install. I mentioned it just in case.

> My Debian 12 install didn't come with proprietary Nvidia drivers

The primary problem with the previous non-free driver policy was the lack of network drivers, which made it impossible to install or download the drivers even if you somehow managed to install the OS. This is now resolved.

It's not a big deal if the ISO doesn't include every non-free driver out there as long as you can manually install it after the fact.


The default ISO didn't come with non-free drivers until the latest release (12, bookworm). Unless you used the "unofficial" image which did include them. Now the situation is different.

Here is the relevant part from the release notes:

> In most cases firmware is non-free according to the criteria used by the Debian GNU/Linux project and thus cannot be included in the main distribution. If the device driver itself is included in the distribution and if Debian GNU/Linux legally can distribute the firmware, it will often be available as a separate package from the non-free-firmware section of the archive (prior to Debian GNU/Linux 12.0: from the non-free section).

> However, this does not mean that such hardware cannot be used during installation. Starting with Debian GNU/Linux 12.0, following the 2022 General Resolution about non-free firmware, official installation images can include non-free firmware packages. By default, debian-installer will detect required firmware (based on kernel logs and modalias information), and install the relevant packages if they are found on an installation medium (e.g. on the netinst). The package manager gets automatically configured with the matching components so that those packages get security updates. This usually means that the non-free-firmware component gets enabled, in addition to main.

https://www.debian.org/releases/bookworm/amd64/ch02s02.en.ht...


How are they "only tacitly" acknowledging it? Looks like they have put in very tangible solutions in place already.

Debian 12 even made a dedicated non-free-firmware repo for free software purists who would like to concede having non-free drivers just so they can use their hardware.


I personally love and use Debian exactly for its principles and stability.

I have heard users of other distros and a few upstream complaint that Debian "modifies" their packages?

Is it so? If yes, there surely must be a good reason. Can someone tell me about it?


Applying patches increases the cost of making changes.

Debian has a user-first philosophy as well as a focus on integration between packages. When required, that means patching upstreams that don't meet that expectation for whatever reason. The ability to do this is precisely the point of Free Software.

There are mainly three kinds of patches:

* make the software behave like Debian needs it to (configuration is stored somewhere in /etc/, no additional downloads at runtime, use the system libraries instead of vendored ones)

* security backports. Debian freezes the functionality at release and only provides security updates. Much software nowadays just includes security fixes in new releases, bundled with new functionality.

In combination these two kinds of patches lead to growing differences between a Debian released version 1.2 and the "real" 1.2, making it harder to handle bug reports (e.g. you get a bug report for version 1.2-Debian, but only support the "real" version 1.4 with a whole set of updated libraries).

The third kind of patch has mostly gone out of style; it's when Debian thinks they can improve the software. That lead to things like removing randomness from SSH keys: https://github.com/g0tmi1k/debian-ssh


Another kind of patch is when a common dependency library is being updated, and laggard upstreams need patching to make their current releases work against the newer library; it's either that or a Debian release with those packages missing.

This type is actually really common. Debian packages something like 30k upstreams, and so some are always behind.


Do they just patch the package in Debian or do they submit them upstream?

Usually both, submit upstream and add the patch to Debian while waiting for upstream to accept the patch.
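
For context, in a "3.0 (quilt)" source package the carried patches live under debian/patches (a sketch; the patch name is a placeholder):

  debian/patches/series                # ordered list of patches applied at build time
  debian/patches/fix-foo-crash.patch   # DEP-3 headers record the origin and whether it was forwarded upstream
  $ quilt push -a                      # apply the whole stack; dpkg-source does this during the build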


There are two main ways of developing your software.

1. Incremental versions. Think Chrome: there are no bug-fix releases, just new versions that may contain bug fixes.

2. Major versions, where you'll end up with semver style versioning. You'll have version 1 and version 2, but you'll also get version 1.1 released after 2 as it is the same as version 1, but with just a bugfix applied.

Debian essentially will only work with the second methodology, as they are API stable, which makes software developed by the first method incompatible.

To work around this Debian will backport "fixes" from version 3 to version 1, and create their own version 1.debian-2. The problem here is that people will now raise bugs with upstream on behaviour that was never released.

Yes Debian is allowed to do this, but upstream is also allowed to be unhappy with the additional workload that Debian puts on them.


[dead]

Encourage those sane companies to donate to Debian as well. The more financial stability Debian has the better off we'll all be.

https://www.debian.org/donations.html


I agree it’s probably the most reasonable choice for most, but I’m hoping that changes. I want more immutable infrastructure than what Debian can provide

Debian's policies also led to a heavily restricted version of RetroArch being available instead of the real version. Specifically, RetroArch has its own package management functionality built-in through its "Core Updater" feature, which downloads and installs emulators in the form of library files. This is banned by Debian because it sidesteps the whole package manager system.

Meanwhile, you can still build the full version of RetroArch from source code by installing the dependencies of Debian's source package, but building the original source code instead.


Debian is somewhat of a bad fit for a media center PC. I've learned that the hard way trying to get Kodi and retro arch working.

It's otherwise a great os.


RetroArch provides a Flatpak, making this mostly a non-issue.

no, from a distribution standpoint it's a pretty grave issue. As the name suggests distributions distribute, and if we start to unbundle the OS from the application layer, Debian loses the very thing people chose it for.

It's in a sense like a legacy carmaker or newsroom being more concerned with its own control than with the product. Doesn't end well over the long term.


> Debian loses the very thing people chose it for

I don't buy this. People don't choose Debian for the third party software in the repo. If they did, it's a bad choice.

People choose Debian as a rock solid and stable base OS, and it's perfectly rational to use it as a base for third party software on top from other sources.


>If they did, it's a bad choice.

It's an excellent choice. Debian provides about 60k software packages compared to say 15k in the Fedora repos. Debian and derivatives vastly outnumber other distributions in terms of available software, it's what drove Ubuntu's popularity, everything's available on it.

You can use it as a stable base OS, but it doesn't have any particular advantage, and even some disadvantages compared to RedHat or Suse distributions which offer you many more enterprise tools like Yast out of the box, package managers that are able to do atomic and reversible transactions, which apt still does not do, and so on.


I don't think I want Debian changing their packaging policies so some random Retroarch contributor can break my app (or worse, losing my data) by pushing a bad update. I want Debian to be a rock-solid OS where nothing breaks even if it means old software versions, because adults are doing QC.

If I need a specific app to always be up to date, I can get it from external sources on a case-by-case basis, such as downloading an AppImage/Flatpack from the developer's website or using Docker.

Why wouldn't you want a rolling distro if you want everything updated all the time?


Weird that that was considered an issue while KDE Discover and snapd will both happily install stuff from third-party "stores" by default on Debian.

I think the next generation of immutable distros is going to eventually make Debian effectively obsolete for many service workloads.

"Self-contained" and "No bundled libraries" are two very important concepts that a subset of our ecosystem decided was too much work. Then they re-discovered all the problems that result, and have now coined terms like "software supply chain" to describe them.

Meanwhile Debian doesn't suffer from any of this because it's been doing things so as to avoid these issues all along.


I think either approach makes sense depending on who you are.

If your goal is to distribute software across multiple distros and operating systems, bundling dependencies makes sense.

If your goal is to maintain a distro, shared libraries that you can apply a security patch to exactly once is obviously better.

But these are two different people with either goal.


> If your goal is to distribute software across multiple distros and operating systems, bundling dependencies makes sense.

Of course, an important "exception to the exception" is when you're making software that can easily be distributed by distributions, e.g. because it's end user software and open source.

I think the optimal cases for bundled dependencies are (a) large closed source binaries that never change, like games, and (b) self-deployed software, e.g. something like a server written in Go that is compiled and maintained in its running environment by a single developer or company.


The endgame of dependency management is the Nix model. I believe the era of manually packaged software repositories is coming to an end.

> is when you're making software that can easily be distributed by distributions, e.g. because it's end user software and open source.

I have trouble understanding why this is desirable for either authors or end users. Even for open source end user applications, I want the software that I'm running to be reflective of the software that was authored and not the software that some distro maintainers think it should be.


> I want the software that I'm running to be reflective of the software that was authored

I don't. As an end-user, I couldn't care less about what the author wanted, I want to run the best possible version of the software. Often that's the version maintained by my distro, as they've put in the effort to make sure all the different software on my system works well together.


What about the user?

If your goal is to consume software for which you need long term reliability, accepting software that bundles an unmaintainable (to you) set of dependencies does not make sense. Unless you have no better option [edit: or if you're paying to delegate your problems to someone else I suppose].

As a user, using software sources that make the same choices Debian makes is always preferable for you if that alternative is available.


Engineering tradeoffs are always tradeoffs. One is not strictly better than the other from the user's perspective.

Mandating shared dependencies means that Debian is often running software against a dependency version that the original author did not develop against or test against. Sometimes the Debian package is effectively a fork. This results in Debian-specific bugs which get reported upstream. Distribution-specific bugs are a crappy experience for upstream developers because they waste their time, and it's a crappy experience for users to be told that their software cannot be supported upstream because it's a fork.

Maintaining a huge repository of forked software is also an enormous undertaking. It's common for Debian users to be running fairly old versions of software. This is also not ideal, particularly for desktop users who read upstream documentation and require support when entire features are missing from their antique Debian version.


You seem to assume that Debian users are hapless and are ending up in these situations by accident. That's not true. Most users choose Debian's model because they want to use software maintained by people who care about their use cases. They use old versions of software by choice, because they want a platform that doesn't change under their feet. Others use Debian or something Debian-based because it is popular, but it is popular because of its quality as a direct result of making these choices, not despite them.

If you're an upstream who gets frustrated by Debian users, then it's worth considering why they're using Debian in the first place.

There are some users who don't want this, and they tend to be the vocal minority. Debian is not the right distribution for them!


I was a contributor to a small Linux desktop application ages ago. We absolutely had a regular flow of hapless users who installed the Debian-provided package and reported bugs that either never existed in upstream builds or had been fixed months before. I believe the situation was that Debian had packaged an obsolete version of the software for stable because of a misunderstanding of the versioning system. When upstream discovered the mistake and contacted them, they refused to update it to a modern version. Instead, they requested that we maintain their fork and backport literal years of fixes.

This situation resulted in Debian distributing a broken version of our software for several years. I did not come away with positive impressions of their packaging processes.


> because of a misunderstanding of the versioning system

> ...

> I did not come away with positive impressions of their packaging processes

I am not coming away with positive impressions of your upstream versioning or release processes :)


This was the core of JWZ's famous complaint about Debian shipping an out-of-date xscreensaver.

I won't link to it, as he blocks links coming from here...


This is how I ended up with three separate GIMP installs on my system: a deb, a snap, and a Flatpak. And I still can’t get plugins to work.

> A deb, a snap, and a Flatpak.

You gave up one package system too early. I managed to get the Python 2 plugins working with an AppImage bundle someone linked to on some forum.


Usually the developer of an application tests it only against a specific version of a library. If you use another version of the library, you need to carefully test it and fix all the bugs you find, and I am not sure Debian has the resources to do that. So we can assume that they simply use untested combinations of libraries and hope that everything will be ok (it won't).

As if this isn't also an issue with fast-moving "let's bundle everything" upstream code drops?

Distribution releases have the advantage that they have a large number of followers who share the same set of versions, and so can shake out the issues and fix the bugs together. In practice I think this beats what most upstreams that each pick their own sets of versions can achieve on their own.

It only takes one skilled engineer to fix any given issue in a given distribution release, even at today's scale. That's not a big burden, and the fix is available even to those without the skills, via a relatively inexpensive support contract.

Corporate upstreams additionally tend to focus on what matters to paying customers; other use cases can often receive a "not supported" answer. A community of followers operating on the same set of versions can address these use cases more easily, too.


Not to mention that if you run 10 apps daily and have 50 more installed, you really don't want to be running 40 different versions of GTK/QT/whatever.

> Meanwhile Debian doesn't suffer from any of this because it's been doing things so as to avoid these issues all along.

Except that it's clear that Debian is suffering from a manpower problem and has run into fundamental scaling limits with its current architecture.

Thus, we're seeing things like Nix and Silverblue at the OS level with Snaps and Flatpak at the application level.

I don't know what the solution is, but it seems to me like Debian is going to need to do something shortly.


Debian has a ton of embedded code copies, inherited from all our upstreams who bundle libraries for Windows/macOS/etc and also sometimes fork them etc.

https://wiki.debian.org/EmbeddedCopies


The self-contained thing isn't always true either: for a long time the open firmware inherited from the linux-firmware repo wasn't built from source; we just shipped the binaries. I expect there are other cases in the archive too, since Debian doesn't systematically strip generated files from all tarballs and regenerate them. Especially with more AI/ML stuff, where we probably can't even get the training data, let alone afford to train the models.

I used to choose other distros for more up-to-date dependencies, but now I appreciate the stability at the OS level a lot more. Containers also solve the problem for running services.

What I don't like in Debian:

- 3rd-party software is not welcome; there is no mechanism for installing it securely, because you are supposed to either install software from the official repository or compile what you have written yourself. For example, if you want to install Sublime Text or VS Code, there is no way to do it securely, without giving untrusted software access to your browser history and SSH keys (one sandboxed workaround is sketched after this list). Of course, you can ignore security and pipe curl http://script into a root shell, but that doesn't guarantee the installer won't break something. It is like we are back in '95, when every second program would replace system DLLs in the Windows folder and break other software.

- there are third-party repositories, but they can cause conflicts and you'd better not use them, yet there is no other way to install third-party software.

Third-party software is very important; I install an OS to run it. It surprises me that Linux is so unfriendly to third-party software, including closed-source software, and doesn't provide a means to install and run it securely and reliably, without making developers adapt it to every existing distribution.

- their bug tracker is email-based, and as I don't use email it is completely alien to me. But maybe this is not bad, because it stops most people from posting bugs and saves the time of replying to them.
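
The sandboxed workaround mentioned above, sketched with Flatpak (assuming the Flathub build of VS Code; the granted folder is just an example):

    # install the sandboxed build instead of the vendor .deb
    flatpak install flathub com.visualstudio.code
    # revoke blanket home-directory access, then grant only a projects folder
    flatpak override --user --nofilesystem=home com.visualstudio.code
    flatpak override --user --filesystem=~/projects com.visualstudio.code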

I also tried Fedora, and here is what I don't like:

- they release a new version every 6 or 12 months and it is incompatible with the older version, and you have to use a very weird way to upgrade: first you install a non-standard plugin (dnf-plugin-system-upgrade), then you download the packages, then you reboot into a temporary OS, and then, if everything is ok, it creates the new OS and reboots into it (the commands are sketched after this list). It looks complicated, easy to break, and probably requires a lot of disk space, while Debian can upgrade everything in place.

- if a system component like GNOME is crashing, there will be neither log records nor crash dumps, and you will never figure out why it has crashed
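
For reference, the whole upgrade dance mentioned above boils down to a few commands (the release number is just an example):

    sudo dnf upgrade --refresh
    sudo dnf install dnf-plugin-system-upgrade
    sudo dnf system-upgrade download --releasever=39
    sudo dnf system-upgrade reboot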

Also, APT is buggy when dealing with mixed 32-bit/64-bit packages: I once wanted to install a package and it suggested deleting half of the system to do it; luckily I noticed that the package list was too long before agreeing. Why a package manager would delete packages when I ask to install something, I don't understand. As the bug tracker requires using email, I didn't report it, and it would be difficult to reproduce anyway.
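
One partial mitigation (it doesn't change the resolver's choices, but it avoids surprises) is to simulate the transaction first; the package name here is just a placeholder:

    # print what apt would install and remove without touching anything
    apt-get install -s some-package:i386
    # or, equivalently, with the newer apt front end
    apt install --dry-run some-package:i386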


> For example, if you want to install Sublime Text, or VS Code, there is no way to do it securely, without giving untrusted software access to your browser history and SSH keys.

First, if you don't trust a bit of software, why are you installing it?

But more importantly - you don't want your text editor to be able to open and edit your browser history files, or your ssh key files?

If my text editor wasn't able to open and edit those files, I'd consider it extremely broken!


Is VS Code a text editor? I wouldn't consider anything that has internet access permissions or a suite of 3rd party user plugins/extensions a text editor but that's just me.

Giving VS Code express permission to your home/filesystem (the default if you install it traditionally) is a security risk [0] [1] most people rarely think about.

[0] https://blog.aquasec.com/can-you-trust-your-vscode-extension...

[1] https://www.techradar.com/news/hackers-are-using-malicious-m...


> Is VS Code a text editor?

Um, I thought it was?

I've not used it, because I'm happy with (neo)vim for my dev needs, but I thought that's what it did?

If VS Code isn't used to edit text, what is it for?

Edit: (neo)vim and emacs both have 3rd party extension ecosystems, with extensions written in languages that can access the internet, so I'm not sure how that affects your test?


Nano is a good example of a text editor.

> First, if you don't trust a bit of software, why are you installing it?

The more people you trust, the larger the chance that you get deceived.

> But more importantly - you don't want your text editor to be able to open and edit your browser history files, or your ssh key files?

Only with my permission.


Even if you trust the developer (I don't), there is a chance that there is a bug or vulnerability in the software. Also you need to install different plugins from random anonymous guys from Github, and it is difficult to trust an anonymous person.

> Also you need to install different plugins from random anonymous guys from Github

Do I? Strange, I've never noticed needing to do that myself.


Almost all software that Debian packages is 3rd-party. The issue is usually that software like Sublime Text or VS Code is non-free. That is not in itself an impediment for being packaged; after all there is the "non-free" section of the archive. However, often non-free software is also not free to be distributed by third parties. Thus, Debian would break the law if they did.

You don't need to adapt your software that much to have it run on Linux distributions; there are standards that the distributions implement that you can rely on. Often software that claims to only support one particular distribution will run perfectly fine on others. Linux distributions are not unfriendly towards third party software, but they have no obligation at all to spend effort to make that software work, it's the third parties that should do that work.

The bug tracker being email based is because when Debian started, that was the normal way to communicate on the Internet (besides IRC). A lot of tools were built on it, and the Debian developers themselves are used to it, so there is little incentive to change this.

The Debian developers would say that apt is not buggy; it's just that if there are conflicts, they have to be resolved in some way, which means deleting some of the conflicting packages. It also does ask you to confirm in this case, although it would indeed be better if it could detect that this is a very unsatisfying resolution.


> if a system component like GNOME is crashing, there will be neither log records nor crash dumps and you will never figure out why it crashed

On Fedora you can use ABRT (AKA Problem Reporting) to view logs and tracebacks of a component that has crashed, and report the problem via Bugzilla. Also, GNOME isn't a system component; Fedora would still work without it, just using a TTY terminal instead.


Abrt (problem reporting window) shows no information about that type of crash.
