I'm not against Nix or any package manager. What I'm against is the general fragmentation of Linux that makes interoperability between distros and window managers problematic. If there's ever going to be a Year Of The Linux Desktop (TM), then there must be a way for a user to fulfill the expectation that they can select "Linux" instead of "Mac" or "Windows" when downloading software, and when they double-click or drag it or whatever, it just installs without a fuss about which distribution they're using or which UI libraries they need or which package manager they use.
I agree with your point about fragmentation. But isn't the "Year Of The Linux Desktop" just a joke at this point? Even more so now that desktop computing is becoming less and less important to most people.
Well, we can't look forward to "Year Of The Linux Phone," because that's been around for a while now in the form of Android. Maybe "Year Of The Actual, Like, Real Linux Phone, Ugh, You Know What I Mean," though?
There's no fragmentation here. Nix, apart from many other things, is first of all a build system, and it can build Debian and/or RPM packages for you from the same sources it uses for NixOS [1].
The problem as I see it is that the "solutions" don't really work either. Here are your options, in order of viability:
- AppImage. Basically the idea of "Apps" ripped straight out of MacOS and injected right into Linux. Works fine, but clutters your home directory with Another Folder and doesn't integrate with your desktop like a native package.
- Snap. Works on any system that can install snapd, is owned by Canonical (a pro or a con depending on who you ask), but also auto-updates and has occasional performance problems. Plus, snapd is generally kinda fussy.
- Flatpak. Sandboxing taken to its (il)logical extremes. If you like random data loss, confusing package manifests, low performance, and generally broken software, this is the way to go. Of course, most people like it when their compositor doesn't break when installing third-party programs, so they avoid it like the plague. It's a uniquely bad format, almost worth spinning up a Linux VM to try it for a laugh. Just make sure to use GNOME, it's useless anywhere else. That's the true hallmark of a cross-distro packaging solution!
So yeah, we're probably due for Another Competing Standard. At this point though, I think most developers are content to just ship an RPM and an Arch package and let everyone else figure stuff out for themselves. Honestly, I prefer that approach when compared to the blasted wasteland that is cross-distro package management.
Ah, because of a bug in an app (or its packaging), the whole Flatpak ecosystem is corrupt and causes data loss? How is that different from someone mistakenly causing an rm -rf /home in an RPM post-install script?
So now we're blaming developers for messing up with sandboxed systems? I really hope this isn't the rhetoric you use to convince people to start packaging with Flatpak...
You're the only one assigning blame to anyone here. Not all problems need a villain.
Both of these things can be true at the same time: The library authors ultimately messed it up and could have prevented it by reading documentation better. Flatpak has a footgun and could have prevented it by better dev UX and/or structuring docs differently.
Hopefully this incident is leading to improvements in both.
Flatpak. Sandboxing taken to its (il)logical extremes. If you like random data loss, confusing package manifests, low performance, and generally broken software, this is the way to go.
Reality check: since the Steam package is broken so often on NixOS, I ended up using the Flatpak on NixOS instead (which works like a charm).
It's a uniquely bad format, almost worth spinning up a Linux VM to try it for a laugh.
I am not sure what you are talking about. It's actually a really nice format that uses OSTree.
We're comparing it to Snap and AppImage. Flatpak doesn't have enough meat on its bones (or goodwill in its tank) to compete with the other cross-distro distribution options, and certainly not with native packages. I'm frankly not surprised that Steam is constantly broken on NixOS; Valve doesn't maintain it (and won't, due to how insanely different its concept of package management is).
> Flatpak. Sandboxing taken to its (il)logical extremes. If you like random data loss, confusing package manifests, low performance, and generally broken software, this is the way to go.
Talk about FUD and taking to logical extremes. Your constant trolling on Linux threads is not appreciated.
Constant trolling? I use Linux on every device I own. If you're interested in stifling democratic development and open-source discourse, there are plenty of other operating systems that can accommodate that mindset.
So do I. Using Linux on every device you own doesn't automatically make any statement you utter true. I've already replied in the other comment about why your opinion spreads FUD (especially coming from a self-described "provocateur"), so I have no interest in engaging with you any further. Good day.
> What I'm against is the general fragmentation of Linux that makes interoperability between distro's and window managers problematic.
I disagree that downloading software being hard is a major impediment. (though I realize the parent did continue to claim that.) How many people download executables on the regular anyways?
It positively surprised me when I did `sudo winget upgrade` (I first did `winget install gerardog.gsudo` in an admin prompt) and it detected and upgraded the majority of the already installed OSS on my Windows workstation.
Aren't we past that? The vast majority of Linux desktops are Ubuntu/Debian or Red Hat/CentOS. You target those two and you're set. Add SUSE if you have a large European user base.
Nix doesn't have anything to do with that problem. If anything it helps as it lets multiple even conflicting versions of libraries and software coexist and run together.
What you want is flatpak or appimage (or even snap). These are self contained distributed binaries that contain and isolate all their dependencies. It is simply a one file download for any linux distro. They work great--I use a ton of flatpak apps like VS Code all the time.
You phrase this as if it's getting continually worse, but I'm not sure if distros were ever that interoperable (unless you deliberately restrict yourself to a point in time where there was exactly 1 distro). If anything it seems to me that you now have a decent chance to get software to run on any recent distro, which wasn't always the case.
Though that's just my impression, I'd be happy to be proven wrong by someone better informed.
I am not so sure that this is really as big of a problem as it's made out to be. To me it seems like people trying to make Linux software management work like Windows because that's what they are used to, so if the experience is different than what they are used to they assume it doesn't work well.
Going to some project's website and trying to find a download to install is not typically how you install software on Linux, and people distributing software like that for Linux probably don't know much about Linux software distribution themselves.
If you want a good experience you should be installing software from the system package manager instead of searching for deb files that are probably out of date and won't work on your system due to shared library compatibility issues. Using the system package manager is much different than what many are used to but it actually works very well. The other option is to use flatpak or similar software, which is designed to sanely handle out of repo software installation.
I think people distributing their own deb/rpm/etc files from their project's website are causing active harm and confusing people who are new to Linux. The best option is to provide a link to the distros package if it exists (https://archlinux.org/packages/foo), instructions on how to build from source and a flatpak image (or similar). When a new user downloads a stray deb and has a bad experience with it (broken software, system breakage, etc) it really leaves a poor impression on them that could have easily been avoided.
I don't disagree with what you're saying, but it's interesting that some people might want to make Linux software distribution more like Windows or macOS because of unfamiliarity.
The way you've described Linux software distribution sounds an awful lot like software distribution on mobile devices. I think it should be pretty familiar to everyone by now. If anything, I think Windows and macOS want to move closer to being like Linux in this regard.
Maybe the experience of using cli tools like `apt` or `yum` has made people overlook the ease of installing software on Linux. Anyone that has used openSUSE or Ubuntu should know better.
> there must be a way for a user to fulfill the expectation that they can select "Linux" instead of "Mac" and "Windows" when downloading software
There's already established ways to do that such as AppImage, which has bonus points (compared to other solutions) for PGP signatures and differential (delta) upgrades. What's lacking is proper desktop environment integration for an ~/Apps folder like you have on MacOS where this pattern of one-file-per-app has been established for longer.
If you want to go one step further, 0install would make it possible to not even have to choose between "Mac", "Windows" and "Linux" on the download page, which is kind of amazing in a way.
> Nix often scares newcomers and experienced devs alike, because it proposes a fairly radical overhaul to how we think about package management and how we run software in general.
I am not sure this is the main put-off of nix.
Ideologically, I love nix. The syntax/language itself is a massive barrier IMO.
I use Nix for personal projects, but I prefer the experience of Devcontainers. They're easier to understand and debug (for me).
I'm eagerly watching the work being done by the ostree/rpm-ostree folks, who are also trying to solve a similar problem. Hoping to see something like "Nix but with YAML/JSON" come out of that.
... kinda like people who pretend that Zig is a competitor to Rust, and conveniently forget to mention that it does not promise memory safety.
I've been seeing a lot of "steal their thunder" type project positioning lately when something is a radical leap forward but requires major effort to learn -- like nix or rust.
People seem to have picked up on a strategy: if you can put together something inferior but easier to understand (Zig doesn't need a borrow checker because it's memory-unsafe; ostree doesn't need a language because it ignores dependencies) and market it as an alternative to the radical thing, a much larger group of people will rally around the simpler thing instead, because they feel threatened by the new thing that can't be learned in a weekend or even a week, and that on "20% time" alone would take six months to master...
I'm going to get downvoted to hades for this, but it's true.
I don't see ostree promote themselves as an alternative to Nix. The GP linked to an explainer of how OStree differs from a bunch of systems, not really a promotional page. And in general OStree's use case is different to Nix so I don't think it competes.
You may be right that this is a pattern, but I don't see it in this case.
... kinda like people who pretend that Zig is a competitor to Rust, and conveniently forget to mention that it does not promise memory safety.
I think that this comparison is unfair. First of all, the OSTree documentation doesn't pretend that it is something like Nix. It's an object storage for operating system images. Secondly, in contrast to Zig, OSTree most likely has wider use/deployment than Nix, since it is used by Fedora CoreOS and Fedora Silverblue.
That said, even though Nix and OSTree are not very similar, the user-facing benefits are similar: an immutable system, atomic upgrades/rollbacks, etc.
I think it is also much more likely that OSTree gets into the hands of millions than Nix.
(Disclaimer: I like Nix a lot and am a contributor to nixpkgs.)
> I'm going to get downvoted to hades for this, but it's true.
Yes, because it has nothing to do with Nix, but it's just your own off-topic crusade, and being concerned that when someone tries something different, they're "steal[ing] the thunder" from your own favourite pet project. Talk about feeling threatened.
We can do better than forming cults around our little language or package manager; instead we can learn to appreciate the little differences between each project. Perhaps the solution is in the intersection between the two.
The language is small enough that there's no way it's a problem. It seems less complicated than YAML, a couple hours to pick up at most.
I have a NixOS setup on my desktop and use Nix for some projects, but only maybe a week or two of experience. And the language was never an issue.
The bigger problem is the standard library is a sprawling mess: Understanding what options are available or how to get your thing to build by analogy with someone else's, requires digging into nixpkgs with no good tool support.
If you did a line-by-line port from Nix to YAML or Python but kept the standard library exactly the same, it would be even harder to understand.
Yes, nixpkgs desperately needs to be split, at least into two packages:
- stdlib
- packages
The stdlib could also include builders and language frameworks, or they could be split even further.
Then treat the stdlib the same way as it is treated in any other language (make it stable, not changed frequently and make sure everything is documented with the release).
That would reduce the need to read source code in order to use Nix.
I am not so sure about that. Nixpkgs is a big ball because it is the final integration point. And I do in fact find myself refactoring individual packages and the infra at the same time.
I wrote https://github.com/NixOS/rfcs/pull/109/ hoping to make the lang2nix packages work with Nixpkgs. I think this would achieve most of the benefits of what you propose: cleaner separation of infra vs ad-hoc per-package stuff, while still retaining the flexibility of the monorepo.
The goal is to work with upstream developers so less and less ad-hoc crap is needed. Then the one repo ends up "all stdlib no packages", because the packages are basically autogenerated!
I have mixed opinions about this. I mean, I understand that there are increasingly applications that are getting harder and harder to package, but if there are tools that can just parse lock files and build a project without any work, why even package it at all? One could just have a function that points at an SCM repo and fetches and installs the application.
I'm still not fully understanding what import-from-derivation really means (technically). Is it the ability to fetch sources from repos during evaluation?
If so, that is an issue. If one tries to use Nix in an environment with repos that require authentication, it is a huge pain. There was a workaround once with passing SSH configuration, but it was removed.
I also think your RFC is orthogonal to what I suggested. I think the #1 issue with Nix is the steep learning curve, and a huge part of it is documentation that is not up to date. The thing is, as others said, Nix is great when everything works; as soon as you step into something less common, things suddenly become really complex, because the documentation is not enough and one has to dive into nixpkgs, which is very complex.
> Nixpkgs is a big ball because it is the final integration point. And I do in fact find myself refactoring individual packages and the infra at the same time.
That's actually the problem I'm talking about. Because this infrastructure code is constantly changing, the documentation goes out of date, which makes Nix more difficult for others to start using. You don't have difficulty with it because you know it inside and out. I know enough to usually find what I'm looking for, but from my own experience I remember it was really hard to get to the point where I am, and I'm still not that great at it.
Normally, if that were happening in any other language, it would be considered not ready yet for wide use, but Nix does want to be used, and I think this is one of the things it needs. Even with up-to-date documentation there would still be difficult things, but at least it would be a huge improvement. Imagine if the Go or Python standard library documentation were not up to date and the way to learn it was by studying the SDK's source code. It would be a drastically different programming experience, and I doubt people would call those languages beginner friendly.
I don’t want to be dismissive. “nix but with yaml” is kind of like saying “rust but without borrow checking”. if you think you can approximate what Nix is capable of with YAML and be less complicated, I think you’ve missed the point. what makes Nix powerful is that it is built on the back of an actual programming language. it’s fundamental to the whole thing.
the language itself is surprisingly simple. the difficulty people have with it is that it’s functional, not that it’s nix. I don’t think most sys admins are fluent in functional programming, so it’s a huge turn off and learning curve.
if you approach it as three problems: learn the basics of functional programming, learn the syntax of the Nix language, and only then learn what a derivation is and how it works, it will be a lot easier to approach. at that point the nixpkgs library starts to make a LOT more sense.
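For what it's worth, a raw derivation (the primitive everything in nixpkgs eventually reduces to) is tiny; a toy sketch, not how you'd package real software:

  derivation {
    name = "hello-txt";
    system = builtins.currentSystem;
    builder = "/bin/sh";   # the Linux build sandbox provides a shell at this path
    args = [ "-c" "echo hello > $out" ];
  }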
I understand that’s a lot to ask when you just want to install some packages, though. It’s a tricky problem.
> what makes Nix powerful is that it is built on the back of an actual programming language. it’s fundamental to the whole thing.
I think I agree with this.
> the difficulty people have with it is that it’s functional, not that it’s nix.
No I don't think that's the problem people have with it. Speaking for myself the thing that's hard about Nix is first and foremost the bad error messages and secondly the complexity of idioms used in nixpkgs. If you stray just a little bit from the happy path, you're liable to get a ton of very arcane problems, and the error message often has nothing to do with the actual problem because a lot of the idioms used in nixpkgs end up doing clever things with Nix that work well when everything lines up just right, but blow up when just a little bit is wrong.
The functional parts of Nix can be obstacles for someone new to functional programming, but when something goes wrong the developer has a clear sense of what is wrong ("ugh why is this immutable?!") even if it's not clear how to fix it. The sheer complexity of nixpkgs though can cause breakages in ways that don't make it clear at all what went wrong.
There are a variety of ways that this could be fixed either in concert with changes to the language itself, or whole-scale changes to the interpreter/compiler.
you’re right. I guess I should qualify that with: what makes the language more difficult to approach as a newcomer
once you’re in there, there’s a ton of warts, of course. I just hear complaints about the syntax so much that it really seems like people are more caught up learning functional programming than Nix. which makes sense, since most people who would need Nix spend their time writing Bash and Python
I think the problem is that the bad error messages make it unclear whether something is a syntax issue or a nixpkgs issue (this is somewhat worse with the dynamic nature of Nix, although I personally don't actually think Nix really needs static typing all that much, but that would be one viable way of fixing this).
One reason is that Scheme has Emacs modes that connect to a live system for precise jump-to-definition, etc. This works with Guix and is extremely convenient. Scheme's metaprogramming also enables Guix's beautiful system of code staging, G-expressions.
> One reason is that Scheme has Emacs modes that connect to a live system for precise jump-to-definition, etc. This works with Guix and is extremely convenient.
Those are all valuable things, but I don't think that would actually be hard to implement for the Nix language either. Just no one bothered
> Scheme's metaprogramming also enables Guix's beautiful system of code staging, G-expressions.
Yes not pasting together strings for bash would be nice, but with all due respect to scheme's metaprogramming, which I do indeed highly respect, I view this as fairly orthogonal.
From a quick glance, g-exps don't have much binding structure? That means hygiene and other things are not as useful, and just doing
["see" "I" "can" "sexp" "too"]
in the Nix expression language isn't actually so bad.
With G-expressions you can splice in computed store values, e.g. the computed output location of a package. I recommend the paper[1] explaining why they exist and what alternatives they replace.
It's not purely a language thing per se, i.e. there are ways the language could change minimally but error messages could have a huge leap in improvement. However, certain idioms in the standard library might need to change, and certainly changing the language at the same time can improve the error messages even more.
I have only a little experience with Guix (replacing Nix with Scheme), so I'm not very confident in this, but the shallow impression I have is that Guix's error messages are not much better.
For me it's a total lack of any description of the nix language. I just have no idea what I'm looking at, and nowhere (last time I looked, which I admit was some time ago) explained what tokens make up the language. Some showed that parts of the syntax were optional, but not what they meant.
Occasionally I'd find a guide that describes in English what a given example file is doing, and it kind-of made sense, but wasn't clear how it mapped to the file shown. There was a lot of magic going on with no explanation how it was getting there, so I could only assume there were semantics tied to some of the syntax that the writer assumed you'd eventually figure out... But they were just abstract enough that it didn't make sense in a way I could duplicate.
It's not perfect and I raised some issues about fragments being written from the perspective of someone who knows the system - but it's definitely good enough for learning the language.
The problem is not the language but the “standard library” used by it, which is quite ad hoc and organically built.
The language may not be the most intuitive in parts, but I feel that it is not the actual problem people have - I think what trips up people (me included) is that one pretty much has to grep for a similar package to be able to start doing the work. Why does a python derivation require this function instead of the generic one, what are the package names, etc. Some stricter structure would be welcome, in my opinion.
> For me it's a total lack of any description of the nix language. I just have no idea what I'm looking at, and nowhere (last time I looked, which I admit was some time ago) explained what tokens make up the language. Some showed that parts of the syntax were optional, but not what they meant.
I don't think the Nix language requires understanding of functional programming. Overall, it's not so different from JSON. What is different is that it provides a few programming language constructs on top like variables, functions, and conditionals. This enables developers using the Nix language to properly abstract complex configurations in a comprehensible way, thus avoiding the pitfalls of writing JSON and YAML by hand.
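A minimal sketch of those constructs (made-up values), which covers most of what the core language offers:

  let
    greet = name: if name == "" then "hello, world" else "hello, ${name}";
  in {
    message = greet "nix";   # functions are just name: body
    enabled = true;
  }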
idk, as a part-time Haskeller, I think Nix-the-language kinda sucks (tho not enough for me to not use nix). IMO the big huge thing is really inconsistent nixpkgs docs; compare linkFarm and symlinkJoin's docs, for example. I wish there were some way of getting nice autogenerated docs, but I don't think we're gonna get there without either adding some kind of type system or having a pretty strong review requirement that's actually enforced, and doing that for a year or more...
Plus the nixpkgs and nixos docs are generated from some weird hybrid docbook-markdown that mere mortals (well me at least) can't get to build. So we are stuck using the makes-my-eyes-bleed font on nixos.org, and the gigantic mile-long single-page manual format (which is nice sometimes, like for searching, but other times you want a TOC sidebar).
The documentation for the nix language is all done in mdbook, and is beautiful and easy to read. I wish the nixpkgs/nixos manuals were like that.
AKA the curse of macOS designers who have that setting that makes all fonts a little bolder, so they think 300 weight is perfectly acceptable for a documentation site, while everybody else has to strain their eyes reading too thin a black font on eye-watering white for hours.
Just go on nixos.org, remove the font-weight: 300 and see how much more readable it is.
> the difficulty people have with it is that it’s functional, not that it’s nix.
My experience is that the difficulty people have is with Nixpkgs as a library/codebase rather than the language itself. Poor error messages, unclear abstractions, lots of things managed through implicit conventions rather than in the language itself... it's a massive legacy codebase with a lot of internal inconsistency and some questionable practices. I'm convinced that if you had a magic codebase-cleaning wand and kept literally everything else about Nix (the language, the model, derivations) the same, you'd fix like 90% of the problems people have with the system.
maybe that’s it; maybe it kind of sits in the middle where it tries to please everyone and ends up pleasing no one. not a great functional language, but functional enough to confuse you if you’ve never done functional and are expecting JSON.
I've written this before. The issue is not because it's functional at all. That's a proximate explanation that's just plain false.
Everyone I know that has tried Nix was a functional programmer first. Nobody had a good experience with Nix though.
If I could describe my issues with Nix it's that:
1. The syntax is odd. It's usually better not to stray too far from common programming languages if you want your language to be accepted.
2. You often find two examples which work individually, but they use a different way to do things, so it's hard to get them to work together in the same configuration.
3. A lot of examples depend on some specific functions, which are hardly explained at all. There seem to be a lot of these special functions that make it very hard to make a mental model you can work from.
4. The error messages are worse than C++ templating errors.
5. At least last I checked the documentation was really lengthy, but never gave a good overview of the language, only gave some inconsistent examples instead. In the end you don't feel like you learned something.
It's a higher level issue, declarative vs. programmatic environment control. It isn't that sysadmins don't understand programming, functional programming, etc.--everyone is devops these days. It's that in the world of system administration we've gone through many waves and trends with significant backlash against complete programmatic definition of environments.
In the old days you'd exec a pile of shell scripts to go out and put your servers into the state you wanted. It was brittle, painful to understand, etc. Then things evolved into declarative definition of environments with tools like ansible. Now you just define the state of the world you want and ansible or other tools do the work to get the world into that state. This has evolved even further into declarative definition of entire distributed systems with kubernetes--you write out a pile of YAML that defines the state you want and kubectl and k8s controllers take care of making it happen.
So there are definitely people that understand the value of functional programming, but don't want it to define their environments. I don't want to have to grok someone's 'clever' solutions to figure out that ultimately they just want package x, y, z and software foo installed. I just want to list what I want and let the tooling make it happen.
> I just want to list what I want and let the tooling make it happen.
That's just what Nix does?
It's very much more declarative than any system I've seen. Yes, k8s turns your declarative manifest into running processes, but how was the image built? There's a huge gap there: you refer to container images by name, not by a description of the software you want them to contain, and then hope that some other process has built the image correctly. Nix can close that gap and take declarative all the way from the service manifest down to the git commit of the compiler used to build your binary.
The comment you're replying to describes concerns about maintenance burden from the programmatic nature of Nix.
I think "declared in YAML, pointing to a Docker image", and "declared in Nix, pointing to some more Nix elsewhere in the codebase" is relatively moot to this point. (If you wrote the image, you'll know what it's expected to do; if you don't know the Nix you're pointing to, it's as opaque as a Docker image).
Configuring with a program and using a plain structured file have different trade-offs.
Does anyone write k8s manifests in raw yaml? They're so complicated that they're generated using some templating language (helm or kustomize or whatever is popular these days, I'm a little out of the loop). So you still have something programmatic. Think of Nix as just a better and more principled templating language. (I mean, look at NixOS configs: they're much closer to "plain structured files" than "programs".)
The promise that I see (I haven't seen this fully realized but it should be possible) is that instead of yaml plus some ad-hoc templating language plus dockerfile plus makefile, everything is in a single language top to bottom.
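For illustration, a NixOS module mostly reads like structured data; a hypothetical host config using standard option names:

  { pkgs, ... }: {
    services.openssh.enable = true;
    environment.systemPackages = [ pkgs.ripgrep pkgs.git ];
    users.users.alice.isNormalUser = true;
  }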
> the language itself is surprisingly simple. the difficulty people have with it is that it’s functional, not that it’s nix.
I wrote down "what I wish I knew when learning Nix", and realised what had initially confused me:
Some people are confused by immutability when learning functional programming. e.g. Programs involve changing things, but then immutable stuff doesn't change.
With Nix, packages get built and installed to the store. In other solutions, this is done by running a command like "install package", or by having some file describing the package and running "install this"... but with Nix, packages are so integral to the language that they're built as part of just evaluating the language expressions, and (before flakes) the language didn't really have a "main file format" the same way that programming languages do, or that Terraform does.
With functional programming, pure functions have no side effects. With Nix, an implicit side effect of evaluating code is that packages are built.
Sort of. There are no real side effects, just lazy evaluation. Every entry point into `nix build` evaluates one expression to produce one output, but in the process, ends up evaluating the outputs of all the dependent expressions as well. The Nix store essentially caches the results of these evaluations, which looks very much like a traditional package manager "installing" packages to the file system.
It's kind of like how an implicit side effect in "normal" FP languages is that memory is being allocated.
The nix garbage collector will even free up disk space by deleting unreferenced packages just like a garbage collector freeing memory by deleting unreferenced values.
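Aside: the one case where evaluating Nix code really does force a build is import-from-derivation. A tiny sketch, assuming <nixpkgs> is available:

  # building "two.nix" is forced during evaluation; the expression then evaluates to 2
  import ((import <nixpkgs> { }).writeText "two.nix" "1 + 1")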
A bigger problem is that the command-line utilities (what git calls "porcelain") are full of jillions of inconsistencies. Some tools (nix-env) read ~/.nix-defexpr, others (nix-store) ignore it. Sometimes arguments are interpreted as filenames, sometimes as attrnames, and other times as a search query that nix will use to grind through some big list of package names.
The new "nix" command with subcommands ("nix env" instead of "nix-env") is much better, but has been stuck in "experimental status, breaking changes may occur at any time". Stabilizing the nix command syntax and deprecating nix-env/nix-store/nix-instantiate would go a long way.
> the difficulty people have with it is that it’s functional, not that it’s nix
Not really - functional languages are ok. The problems I have are nix-specific:
- for documentation you need to read the source
- there's no debugger
- there are no sanity-checks in the system which means that some internal function deep down the stack says "attribute never_heard_about_it missing" and you have no context how it got there
- there's no type checking which results in the previous error on attribute foobar while you set attribute fobar and have no idea why things aren't working (or set {system}.{package} instead of {package}.{system})
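A tiny illustration of that last point (made-up attribute names): a one-letter typo leaves the expected attribute undefined, and the failure surfaces elsewhere with no hint about where the typo was made:

  let cfg = { fobar = true; };   # typo: the author meant "foobar"
  in cfg.foobar                  # fails with: attribute 'foobar' missing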
> I don’t want to be dismissive. “nix but with yaml” is kind of like saying “rust but without borrow checking”.
I mean... your metaphor cuts both ways? A lot of work went into making Rust's borrow-checking non-intrusive, or at least softening its edges.
The big improvements were lifetime elision and non-lexical lifetimes, but there's also a ton of compiler errors for common cases that explain what went wrong, and often suggest an easy fix when there is one.
Last time I checked, Nix didn't really have this. If you get the syntax wrong, it doesn't tell you "oh, you tried to get 'ripgrep', you probably meant 'nixpkgs.ripgrep'".
> Last time I checked, Nix didn't really have this. If you get the syntax wrong, it doesn't tell you "oh, you tried to get 'ripgrep', you probably meant 'nixpkgs.ripgrep'".
I get where you're coming from but as it stands there is no universal description of what "nixpkgs" means in this context. It's a channel, defined only in the context of your environment. Your nixpkgs isn't my nixpkgs. It's just a channel label.
Imagine you typed
curl "PoignardAzur"
compared with entering the same value into your browser's address bar. Would you expect to end up on your HN profile page in either case?
I think what you're missing is that instead of providing the convenience behavior you're looking for, a more powerful primitive is available. Any other package manager has a bloated song and dance where if you want to install something not in the official repo, you either:
- download and install it manually
- add a new third-party repository and install it from there.
Then, whenever you perform a distro upgrade, you have to repeat the entire process because the repo has changed, subtly.
Instead, nix eschews this concept by allowing you to install whatever you want. Imagine running Ubuntu 18.04 and wanting to install a package from Ubuntu 20.10. Have fun with that! Nix not only makes it possible, it makes it as easy as installing anything else.
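A minimal sketch of what that looks like (hypothetical pin; running nix-build on this expression fetches the pinned nixpkgs revision and builds the package from it, alongside whatever your system otherwise uses):

  let
    unstable = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/nixos-unstable.tar.gz") { };
  in
    unstable.ripgrep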
I now consider "foo install ripgrep" a mistake and an anti-pattern. I now find those tools annoying to use. Could nix be better? Certainly, but you're really giving me an "it's different so I hate it" vibe. Give it a chance.
> I think what you're missing is that instead of providing the convenience behavior you're looking for, a more powerful primitive is available.
That's great, but I still want the convenience behavior.
> Give it a chance.
I tried. I kept getting papercuts, and things that were easy on PopOS were annoying on NixOS. That wouldn't deter me if I needed to use Nix, but as it is PopOS works perfectly for me and I have no reason to switch. I might still try out of curiosity, but I'm honestly not feeling that enthusiastic.
But that kind of response, the kind that boils down to "you just think that way because you haven't tried it" or "the reason the UX is so incredibly obtuse is because of the underlying model", seriously annoys me.
You can have plumbing and a porcelain. Nix currently is lacking on the porcelain side. The idea that you have to study the entire programming model of an OS (not just vaguely understand it, which I do, but study it) to use it, and if you think it's not worth the effort, it's because you haven't tried it yet, is inane.
I'll be honest: I don't understand why you wanted to try NixOS out in the first place. It sounds like Pop!_OS works for you? Great! Enjoy. It doesn't work for me.
Not that you're asking, but I found your review pretty uncharitable. You pretty clearly don't want any of the benefits the Nix ecosystem provides. NixOS is not competing with Pop!_OS, and I wouldn't recommend it to non-technical users. It really sounds like you want to use snap or flatpak, so I'd recommend those -- they also work well on NixOS.
As for your review, in particular:
- you changed the desktop environment from the default, which is fine, but a lot of your problems are the result of that choice. The default is KDE, and I find it works perfectly well. How does Pop!_OS fare if I install XFCE?
- lengthy rant about /bin/bash vs /usr/bin/env bash, which I found boring since the latter has already won as far as I can tell, and rightfully so. The whole point of NixOS is to deprecate a traditional FHS, and then you complain about the lack of an FHS. Okay.
- You give up on Steam immediately. I run Steam rather successfully and I've managed to play more games on Linux than ever before, I can count on one hand the number of games I've been unable to start, and it's usually because of shitty DRM.
- you don't understand the difference between nix-env and configuration.nix, so I don't think you read the manual.
- you just want to use existing dotfiles/configuration instead of learning how home-manager works
- you describe difficulties getting the VLC developer environment set up, but the instructions were not written for NixOS. What are you hoping to accomplish? The Nix way would be to open a shell for the VLC derivation, which works fine without installing Docker. You didn't even try this.
- you somehow don't mention the most obvious pain point everyone runs into, which is that you can't usually trivially download and run binaries as there is no FHS, so binaries don't know where to find runtime dependencies.
> The idea that you have to study the entire programming model of an OS (not just vaguely understand it, which I do, but study it) to use it, and if you think it's not worth the effort, it's because you haven't tried it yet, is inane.
Have you used MacOS before? How did you get started with Linux? Why is Pop!_OS intuitive to you?
You don't need to study, but you need to develop and foster an understanding of how the systems operate. This has always been true. This system is different, but you want it not to be. You're installing your own operating system but then expect to be pandered to. You're seriously exaggerating the effort involved.
Ultimately this isn't a productive conversation. You're not interested in change without a seamless transition, which NixOS does not purport to offer. It's been around for a while, sure, but it's been a pet project for most of its life. Debian is nearly twice as old, and Ubuntu built upon Debian, and Pop!_OS built upon Ubuntu. They're all pretty much the same. NixOS is quite different. In time, we'll see the same polish along with spin-off distros.
I recently briefly tried Nix and yeah the language seems a bit tough with not too many resources floating around. I really don't want to learn a whole language just for my package manager. But then I discovered Guix which is inspired by Nix but uses Scheme instead of a custom thing. Seems more sensible and easier to understand as I know a little bit of lisp already.
You need a strictly pure functional language to do what Nix does. It's able to determine if something is already installed (in a specific state) because it calculates a hash of the output. Side effects throw a wrench in all of that.
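A small sketch of why purity matters here, assuming <nixpkgs> is available: the same inputs always hash to the same store path, so Nix can tell "already built" without re-running anything.

  # building/evaluating this twice yields the identical /nix/store/<hash>-demo path
  (import <nixpkgs> { }).runCommand "demo" { } "echo hi > $out"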
The language doesn't need to be lazy. Strict languages model laziness via explicit lazy values. For Nix, this would be some kind of explicit package type. Side effects are problematic though.
I feel like a logic programming language (e.g. Prolog) would be a better fit for what Nix is trying to do than a functional one like Nix. Of course those are even less familiar to most devs than functional languages, so it wouldn't solve the "weird new language" issue.
Guix was an unmitigated disaster for me: after at least an hour of tinkering I got the installer ISO to boot. Not long after, linux-libre managed to install itself again, and my machine stopped booting. Not practical, at all.
> The syntax/language itself is a massive barrier IMO.
But oh boy was this a joy. The Nix language isn't that bad (Dhall has the potential to surpass it), but Guix gave me seriously rich error messages, and lisp works great with almost any editor.
Overlays and home-manager just grew out of Nix and either lack ergonomics or decent error messages (compared to Guix). Flakes improve the situation, but Nix still feels "all over the place."
I run Nix on my desktop, but I'm actively working on a Dhall-based replacement.
To be clear: I have extremely high praise for nix, and I'm getting nix-darwin going at work for engineer onboarding. That being said, it feels more like a research project. It pioneers some extremely important ideas and, by necessity, some of those ideas are "bolt-ons" and truly just feel like a proof of concept. Guix, for example, has home-manager built right in and doesn't suffer from the error message issues that home-manager does.
I used Arch, btw, and AUR was truly a thing of beauty. PPAs (of Ubuntu) are, in theory, even better. Have you seen the sheer amount of effort it takes to keep nixpkgs going?[1]
Home-manager, amazing thing that it is, has several issues. From their own documentation:
> Unfortunately, it is quite possible to get difficult to understand errors when working with Home Manager, such as infinite loops with no clear source reference. You should therefore be comfortable using the Nix language and the various tools in the Nix ecosystem.
Flakes? Experimental:
> Nix Flakes are an upcoming feature of the Nix package manager.
I also think it should be possible for the mainline (e.g. nixpkgs) to import a flake: the nixpkgs maintenance nightmare has to end. Dhall can do that, and will even verify it with a hash[2].
They came up with extremely important ideas, but the same ideas taken as a whole (instead of iteratively) could be combined into something far greater.
I guess that the biggest challenge such a project would face is providing a smooth transition for package maintainers: if they have to cut-over or maintain two configs it will be much more difficult to retain them.
So my thought is that maybe you should use a staged approach where your Dhall packaging config design can be compiled down to Nix, making it easy for existing Nix packages to convert and stay backwards compatible with Nix, then develop a Dhall-only implementation that consumes the same config but eliminates the intermediate compile-to-Nix and evaluate-Nix steps. I don't know the design, so this approach may or may not be reasonable.
Regardless, I'm quite interested in a nix-like solution with configs written in Dhall.
By "rest of us" do you mean developers? I find Nix especially useful on that front by being able to provide a common dev environment and also when used for CI it can speed up builds through its caching.
The project does have an unfortunate name, though: it is harder to find articles about just Nix, while NixOS is more unique.
I'm not sure why you're downvoted so much -- I am all in on Nix. I use it professionally. I agree with you 100%. I'm convinced whoever solves the UX issue of Nix will have the next "big thing" on their hands. I hope it's the OSS community and not a repeat of Docker, Inc.
Nix is indeed a great idea. I do hope that idea’s time has come. But, oh, man, that config syntax. I’d love to see some alternative, human-friendlier interface; am thinking here of how Elixir really induced a significant pickup in usage of the BEAM owing largely to being perceived as friendlier to newcomers than Erlang.
Nix (the language, not to be confused with NixOS, the operating system) is not that hard to pick up if you already know a programming language or two. Read up on the syntax and you'll get the gist in a couple of hours, and you basically know it inside out if you spend a day or two reading and experimenting with it. You just need to spend the time to understand it, and it's no longer a hurdle.
What I think really stops NixOS from gaining mainstream popularity (eh, mainstream like in the Linux ecosystem, not mainstream-mainstream) is the inability to just download binaries and run them when you have to. It's a huge hurdle to try to figure out the right incantation to set the lib paths correctly for the binaries, but whoops, it's actually using something else and now you need to do some other hacky thing. nix-autobahn et al. exist but are also hacky and sometimes outright don't work, and then you're basically lost, unless you can rebuild the binary yourself, which, unless it's free/open source software, you're unlikely to be able to do.
This, I think, is the real reason people avoid adopting it: there is no easy solution to this problem, unlike the problem of syntax, which can be solved by simply learning it.
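The usual manual workaround is to wrap the binary in an FHS-style environment, which is exactly the kind of incantation being described. A minimal sketch with made-up names and dependencies, using nixpkgs' buildFHSUserEnv:

  (import <nixpkgs> { }).buildFHSUserEnv {
    name = "vendor-tool-env";
    targetPkgs = pkgs: [ pkgs.zlib pkgs.openssl ];   # whatever libraries the binary expects to find
    runScript = "./vendor-tool";
  }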
> So in some sense the language is a decent filter for people who won't like the inevitable rest anyways.
I'm not saying i'd like to package complex systems with nix/guix, but i certainly would use them more to package my own projects if the ecosystem had better docs and error messages in regards to typing/syntax.
I'm ready to carefully read through a 20 pages manual if that's what it takes, but i personally find the documentation and diagnostics of both projects to be sub-par compared to my expectations (looks really clear on paper, now why doesn't my declaration work?). Granted, i'm not a skilled engineer so maybe i'm not your target audience, but i believe as an amateur programmer/sysadmin i'm part of the target audience to package anything i find useful and/or my own programs.
Yes, we should have better docs! (I think we have a bad mix of authoritative reference and tutorial that fails on both counts.)
But if you want types, I daresay you are already understanding the basics of how the functional language works :). I imagine your problems are less:
wtf is: ((x: x x) (y: y y))
and more:
wtf is stdenv, wtf is mkDerivation, wtf is a setup hook, why are other build systems hacks on top of the autotools defaults, and not clean alternatives? etc. etc.
Well to be fair i don't understand this syntax. But i guess i should be able to learn it in under one page of docs.
> wtf is stdenv, wtf is mkDerivation, wtf is a setup hook, why are othre build systems hacks on top of the autotools defaults, and not clean alternatives? etc. etc.
But yes my problem is that i don't have a list of standard options and their types so i end up doing copy-pasta of what works in another package and adjust until it works. On a higher level, i've found it harder than expected to turn an existing package declaration from nixOS into a source-provided build file to try and build a specific branch. Or to use an existing NixOS declaration to build a reproducible (and installable) liveUSB as there seems to be various tools doing this job but i couldn't get any of them to build successfully.
I'm interested if you have good learning resources to share!
Yeah, sometimes you have to rely a lot on context to understand if they're talking about Nix the package manager or Nix the language, but the documentation does mostly do it well by strictly saying things like "Nix package manager", "Nix Expression Language", "Nix Packages collection (Nixpkgs)" and "NixOS Linux distribution" so it's mostly clear, I think.
> Packages are built from Nix expressions, which is a simple functional language
> We provide a large set of Nix expressions containing thousands of existing Unix packages, the Nix Packages collection (Nixpkgs).
> NixOS is based on Nix, a purely functional package management system. Nix stores all packages in isolation from each other under paths
> In NixOS, the entire operating system — the kernel, applications, system packages, configuration files, and so on — is built by the Nix package manager from a description in a purely functional build language
Lastly, the learn page (https://nixos.org/learn.html) links to three different manuals in the bottom, titled "Nix Manual", "Nixpkgs Manual" and "NixOS Manual" respectively.
I'm sure if you find specific places in the manuals/guides/wikis where it's unclear which one is being referred to, they would really appreciate your help in fixing it, or at least pointing out it's not clear. They tend to be really responsive in the IRC channel the times I've talked with Nix people.
I would be surprised if there are any binaries that are outright impossible to run under Nix. I spent a weekend getting Xilinx ISE to run in Nix [0], that is a binary behemoth. It uses bubblewrap (already used in Nixpkgs) to setup a bunch of bind mounts so that the binaries see a standard FHS Linux layout.
Yeah, nothing is impossible if you have the time/energy to spend the effort :) I'm not saying there are binaries that are impossible to run, just that some of them are very hard to get running correctly if they have some special setup where the automated tools don't work very well with it. Things like Java programs that are wrapped in a shell-file that gets called from a binary tend to be a huge hassle to get setup correctly, while on other OSes like Arch Linux, you simply run the binary and it works.
Again, not impossible to get them running, just a huge timesink.
> What I think really stops NixOS from gaining mainstream popularity ... is the inability of just downloading binaries and running them
This is THE reason I switched from NixOS back to a mainstream distro (Fedora). However, I still use Nix for my development environments and builds for a lot of projects. I think adoption of Nix itself is more important than NixOS.
It is funny that many people are saying that, to which the reply is always "...it is not hard..."
I am going with "configuration syntax is complex and difficult..." rather than denying the reality of actual people who have set out to use this system.
Would not be the first time developers and fans have disagreed with users. Side with the users every time (except when that developer is me).
> I am going with the "configuration syntax is complex and difficult..."
I guess it's by comparison.
Whenever some NodeJS or Python project is mentioned, I've never seen the complaint "NodeJS syntax is too hard" or "Python syntax is too hard".
Nix's syntax itself is not difficult. It's slightly different than other languages, but developers are used to same-thing-different-syntax in different languages.
e.g. in NodeJS, you might have a lambda like: `({ x, y }) => x + y`. In Python, this is `lambda x, y: x + y`. In Nix, this is `{ x, y }: x + y`.
> Whenever some NodeJS or Python project is mentioned, I've never seen the complaint "NodeJS syntax is too hard" or "Python syntax is too hard".
Because their syntax is used for more than just configuration (if it's used for configuration). These are full programming languages that a lot of people know and understand.
Nix on the other hand is a DSL that is used in exactly one place: nix configuration. So I think it's fair to complain about nix-as-a-language and its syntax when the only place you will ever encounter it is nix configuration.
And especially with gems like
--- start quote ---
Attempting to perform division in Nix can lead to some surprises.
nix-repl> 6/3
/home/nix/6/3
What happened? Recall that Nix is not a general purpose language, it's a domain-specific language for writing packages. Integer division isn't actually that useful when writing package expressions. Nix parsed 6/3 as a relative path to the current directory. To get Nix to perform division instead, leave a space after the /. Alternatively, you can use builtins.div.
--- end quote ---
The programming languages developers use have surprises and inconsistencies like this when you compare between them. It seems awfully pessimistic to think Nix is too hard to use for things like that. (Although I never really understood people who detested Python's whitespace significance, either).
I think it's also a bit impractical. I'd be surprised if anyone refused to use Terraform because they didn't like HCL. (Well, these days there's Pulumi). -- I think Nix's boon to managing packages makes it worth the minor unfamiliarity of a very simple programming language.
I refuse to use Terraform, and Pulumi, and many other things :)
One-off, poorly specified DSLs with no tool support, no debuggability, bad error reporting etc. are a good thing only in theory.
In practice, yes, we often have to use them because there's no other choice. It doesn't mean "oh just get over it, and reap the benefits" is a good argument.
Only Terraform uses a one-off poorly specified DSL.
Pulumi is based on real programming languages. The criticism might be that their libraries/API is poorly specified. But the languages it's based upon are standard with all the tooling one's used to.
In my (rather limited) experience, the poor specification of Pulumi's libraries is almost entirely down to the fact that they are exposing equally poorly specified functionality from cloud providers.
I tried nix (the pm not the os) for a while, and the reward did not match the cost for me. The docs are horrendously confusing to a newcomer, the stack overflow answers make assumptions about concepts you've never heard of, and generally things are never really explained to a degree you feel confident. Nix feels like one of those things where a lot of people copy and paste snippets, because learning the underlying language/ecosystem/OS takes an extraordinary amount of effort.
Moreso than the syntax, the language semantics are a problem.
There seems to be no value in a dynamically typed system-description language, only a practical burden and a degree of unnecessary risk.
Transitioning to a more consciously-designed, statically-typed, still-declarative language should put the project on sounder footing. The existing language interpreter should be easy enough to instrument to annotate all the existing specs with the actual types seen when building the corresponding packages, enabling automatic translation and transition to a better language for almost all packages, excepting only those (probably?) few that actually rely on dynamic-type follies. Those last could stay on the old language, be hand-translated to the new one, or be abandoned, case by case.
4-5 years ago I was first in line to claim that the Nix language needs a static type system. Now that we've seen what happens with languages like Cue I'm not so sure anymore.
In a language like Nix there isn't really a distinction between compile time and runtime anyways and for cases where you want to check the shape of some data you can use e.g. yants[0].
The only difference would probably be slightly better error messages for incorrect type applications on builtins.
I read lots of complaints about how hard it is to discover what types of arguments are needed and what types result for any particular function. If they were declared, there would be no need to dig and guess.
> Now that we've seen what happens with languages like Cue I'm not so sure anymore.
Could you maybe elaborate on this point? Or point me to other resources detailing this argument? I'm unaware of details about the cue project: i found it interesting on paper, but i never tried it, as i personally consider it useless as long as you can't validate data against an external schema (like JSON-LD or XML schemas).
> The only difference would probably be slightly better error messages for incorrect type applications on builtins.
Why only on built-ins though? If nix were statically typed, every package could define its own types for configuration/overrides and the entire documentation could be auto-generated. I was not aware of yants; it looks great, but it sounds like something that would be very useful as part of nix itself (like a nix check command). I love nix/guix as a concept, but every time i've tried, i've been put off by severely unintelligible errors due to bad typing or simple syntax errors (due to my lack of experience of what to use where, and lack of documentation on expected types).
>* The nixstore treats a filesystem as an object store.
That's a weird thing to list as a shortcoming, since that's the main innovation of Nix. ("package management as memory management" as the paper and thesis said)
Because you can encode the same information within files and some directories while keeping the benefits of a filesystem. Heck, you could encode what Nix uses SQLite for using symlinks, files, and directories.
Third point isn't a shortcoming, it's a design choice.
Fourth point is objectively false: there are no setuid binaries anywhere in /nix/store. No setcaps either. One unprivileged user owns everything under /nix/store. I absolutely love this. Installing a new package cannot create security vulnerabilities, because it cannot affect existing packages or their dependencies, no new processes are spawned, and no setuid/setcap binaries are created. Note I am talking about nixpkgs here, not nixos. If you trust sandboxes (like chrome does) then compiling a new package cannot create security vulnerabilities either.
Second point is just an annoyance. It bothers me too, but we NFS-users lost the mindshare battle long ago. It's all HTTP and SSH now.
I dove in recently and am enjoying it on a home server. The only trouble I had was with setting up fuse filesystems correctly. A couple times it completely refused to boot because I got some fuse option wrong and a filesystem failed to mount. These were options that worked correctly when started dynamically through "nixos-rebuild switch" but not on boot through /etc/fstab. I've written my own overlays and some derivations for my own tools and genuinely enjoy the language.
If you want to deploy and configure something, nix does that exceptionally well.
If you want to install up to date software on Debian, RHEL, or other distribution but not negatively impact the system, nix does that exceptionally well.
If you want to build containers, nix does that decently (a rough sketch follows at the end of this comment).
Nix solves a lot of problems beyond reproducible environments. And it solves them permanently for nearly all use cases.
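For the container case, here's a rough sketch of what that can look like with nixpkgs' dockerTools (the image name is made up, and exact attribute names can vary a bit between nixpkgs versions):

  { pkgs ? import <nixpkgs> {} }:
  pkgs.dockerTools.buildImage {
    name = "hello-image";                       # hypothetical image name
    config.Cmd = [ "${pkgs.hello}/bin/hello" ]; # entry point taken from the nix store
  }

Saved as image.nix, something like `nix-build image.nix && docker load < result` should give you a runnable image.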
But it sounds like you have to get deep into this programming language, and figure it out yourself for the stuff that's not already there?
Containers and software installs etc. already have solutions that are much simpler to use and don't require me to learn a new language or get into the weeds of how each package works.
That's kinda fair, but I think it's worth keeping an eye on.
Nix is really good at "programmatic environment for managing packages". Which is excellent for devops. There are advantages for developers, too.. but, developers can get away with quicker + dirtier solutions a lot of the time.
e.g. Maybe your project assumes a specific version of Ruby or whatever. Nix can solve this in a nice way, but you can use Ruby version manager (or the general asdf). Want to install Python packages, without conflicting? Nix can solve this in a nice way, but you can use virtualenv. Build a container image? etc.
An example of "practically useful, which Nix can do, others can't" I've seen is "just run the code from some repository". The non-Nix equivalent I've seen is `docker run <some image for a CLI tool>`. Nix allows the same UX, but without relying on containers. -- I think that's something everyone could benefit from.
I think the benefits for developers are definitely there; the main downside is that compared to those "quick + dirty" solutions, anything off the happy-path with Nix is quite difficult.
I can see how it would be a good system for ruby or python packages, or a docker replacement.
But it seems it isn't able to blindly mirror stuff from pip, for example? Or to just package any linux under the sun, like Docker can? You'd have to have a nix-specific version of every package you want to use.
And then, the first time you hit upon a package it doesn't have, you have to develop and maintain your own version of it (as far as I understand?). This sounds like a lot of work for someone who wants packages to just work, so they can focus on other things.
I remember using FreeBSD at some point. The annoyance of not being able to just grab a deb from random sites and install it far outweighed any benefits I got from a cleaner OS organization and some extra features. Nix feels the same way.
I am not saying nix is a bad idea or that it'd have no benefit. Just that, for non-enthusiasts and non-early adopters like myself, it's dead in the water until there is some major support behind it.
> Just that, for non-enthusiasts and non-early adopters like myself, it's dead in the water until there is some major support behind it.
Sure, this is pragmatic.
> But it seems it isn't able to blindly mirror stuff from pip, for example? Or to just package any linux under the sun, like Docker can? You'd have to have a nix-specific version of every package you want to use.
I think the effort of "I have to package this" varies from "automatable" to "PITA".
My experience has been that writing package expressions for Python packages or Golang applications was more or less straightforward copy-pasting that required little Nix understanding. (The effort being: what are the sha256 sums, and what are the dependencies.) -- I think to an extent this can even be done by "<lang>2nix".
An example of difficulty I've run into was trying to package a repo where I was unfamiliar with the language, and where the build process assumed it could modify $HOME. I noticed that e.g. the anki package had to add a workaround package, anki-bin, in order to get up-to-date versions because anki's build process changed.
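To give a feel for the copy-paste nature of the easy case described above, a Python package expression often isn't much more than this (package name, version and hash are placeholders):

  { python3Packages, fetchPypi }:
  python3Packages.buildPythonPackage rec {
    pname = "some-tool";                  # hypothetical PyPI package
    version = "1.2.3";
    src = fetchPypi {
      inherit pname version;
      sha256 = "<sha256 of the sdist>";   # placeholder, filled in from the real tarball
    };
    propagatedBuildInputs = with python3Packages; [ requests ];
  }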
I've mostly used NixOS, but I've eyed Guix from time to time. I haven't really dipped my toes into the Guix water yet, but I'm not completely unwilling either; the syntax and cohesion are really interesting and alluring compared to NixOS.
I have two favorited threads (sub-threads in a large conversation) that might be of interest to you:
I've briefly used both so don't expect my comparison to be exhaustive:
- nix uses a centralized non-free repo (Github) ; guix relies on mailing lists for patches
- nix invents an entire language ecosystem that's based on JSON + functions ; guix reuses an existing (and complete) programming language (GNU/Guile, a dialect of Scheme lisp)
- nix packages proprietary software as long as they can provide a checksum for their executables/packages ; guix has a dedicated repo (non-guix) for that ; in both cases it's opt-in
- nix has differences in behavior when run as a distro (nixos) or on a foreign system ; guix has unified naming conventions for both cases (although some details of setting up on a foreign distro for eg. GUI applications can be a bit tricky)
- nix CLI is... confusing, to say the least ; guix CLI UX could be improved but is rather easy
- nix packages tend to be updated more quickly compared to guix packages
- guix maintainers are very picky about package descriptions: I tried to package a Rust program along with ~20 dependencies, and the maintainer said I should come up with different short summaries vs long descriptions for those 20 dependencies... which are software I've never used, so I'm not the right person to do just that
- guix has a strong emphasis on security R&D: bootstrappability is making huge progress alongside the guix project (see blog posts), which to my knowledge isn't being researched on the nix side ; "guix challenge" and "guix git-authenticate" are very important/cool things to read about if you're curious
I have been running NixOS and using Nix for almost everything since I switched to it, and my experience has been great so far. It is one versatile solution to a lot of applications:
1. I run NixOS on my laptop and workstation, for work and for personal use. Any change to the configuration can be immediately transferred to other machines. And given a new machine, I can quickly bring it up with the configuration that I am always using.
2. I run NixOS on my home server and machine learning cluster. Deploying experiments and services on them is as simple as updating the git-managed configurations. Meanwhile, if I want to use more power to run an experiment, NixOps can quickly bring up a set of machines on AWS with a configuration identical to my local experiment. You do not need to worry about setting up CUDA/PyTorch and other dependencies every time.
3. In my start-up, it definitely took more time to train the developers to learn Nix, but once they get familiar with it, everyone becomes a good DevOps engineer, which solves development environments (involving C++/Python), CI/CD and releases really well.
If there is something like heartbleed that needs to update a common dependency, how does NixOS deal with that? Do you need to rebuild everything that uses that dependency, instead of just changing the shared library on a more traditional os?
Yes, but that's not a problem. In traditional systems the reason to avoid rebuilding everything isn't because it takes time (it does, but not that much time); the reason to avoid rebuilding everything is the fear that, halfway through rebuilding everything, you'll discover that some shared lib is missing, or some crucial toolchain has been uninstalled, or that some updated packages rely on incompatible versions of the same dependency, and it turns into a nightmare of toolchain and dependency resolution, leaving your system in a half-upgraded mess. But if the promise of Nix is true, then that's no longer possible; you rebuild everything, all the dependencies work out fine, and you go on with your day.
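To make that concrete, a fix to a common dependency is typically applied with an overlay; a rough sketch (the patch file name is hypothetical):

  # Everything that depends on openssl gets a new store path and is
  # rebuilt (or fetched from the binary cache) on the next rebuild/switch.
  self: super: {
    openssl = super.openssl.overrideAttrs (old: {
      patches = (old.patches or []) ++ [ ./fix-heartbleed.patch ];
    });
  }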
It sounds like this is similar in concept to Bazel, except the output of Bazel is a statically linked binary instead of a binary that dynamically links .so's with known SHAs.
In both cases you have a language that fully specifies how to hermetically and reproducibly build some output artifact(s) based on dependencies that are fixed down to the SHA.
Dynamic linking is the default in nixpkgs, but Nix itself does not care what you build. So you can use the infrastructure in `pkgs.static` to build statically-linked binaries if you like.
Nix is repeatable, but not upper-case Reproducible i.e. you don’t necessarily always get the same hash from the same build. Order of operations or concurrency or something like that.
> Whatever package is thrown your way, you can rebuild it locally and check whether you have the expected build output. This is analogous to verifying a binary downloaded from the Web by computing its sha256 hash, and comparing it to the one provided by the publisher of the binary.
Can someone explain this to me?
There is an experimental feature to add content-addressable packages that will give you this, but Nix package hashes are hashes of a package description, not the package, so you can't use them like a checksum. You can say "Hey, do you have this package I want?" and a server can use the input hash to find what you're talking about, but there's no way you can verify that what they give you represents a good-faith attempt to build the package.
I would also be very surprised to learn that a significant portion of Nixpkgs is "bit-for-bit" reproducible. That's hard! It's hard enough to get packages to work with Nix at all, and very few compilers/build systems/etc are actually concerned with literal reproducibility. Nix doesn't magically provide that.
(There are also "fixed-output" package descriptions which are mostly used when you're downloading sources from the internet -- those actually are checksummed and verified, in case the remote source changed. I guess you could use that same mechanism for a build output, but you... don't?)
Anyway, maybe this is a reference to the (experimental! bleeding edge!) content-addressable store stuff, but I don't think it's fair to call that a selling point of Nix yet. (Or I am way out of line. I don't really know what I'm talking about.)
> Why is this important? Because it removes the need to trust the provider and makes the cost of detecting a failure or an attack much lower for the developer. Reproducibility means security for your systems and applications, and peace of mind for your developers.
There is a large amount of trust when you download a binary Nix file, which is why you have to configure only trusted cache servers, and why cache servers will only accept submissions from trusted sources.
Reproducibility is really important, and a great feature of Nix -- you can, given a package description, build it yourself instead of downloading it -- but emphasizing security and trust seems very misleading to me. If someone can clarify these points I would appreciate learning something new about Nix.
> Nix undoubtedly has a steep learning curve, and there are two reasons for that. First, using Nix entails rethinking a lot of accepted wisdom and practices that have become second nature for a developer. Unlearning is hard. Second, Nix surfaces all the complexity involved in building software and forces you to deal with it. In order to make builds truly reproducible, this is largely inevitable.
I disagree very strongly with these two statements. This reads like "It's not Nix's fault that it's hard to learn; it's your fault for not trying hard enough."
I think that's absolutely not true. Nix is hard to learn, and it's mostly Nix's own fault -- which is a good thing! Because it means that it's fixable. Nix's approach to package management isn't some fundamentally difficult concept to wrap your head around. The concepts and ideas are pretty straightforward, once you understand them -- but how are you supposed to understand them?
The Nix manual is sort of infamous, and not without reason. Nowhere in the manual will it teach you how to write a standalone package description -- although it will show you how to contribute one to the Nixpkgs repo, which is a pretty different flow. It doesn't really explain the high-level mental model I have of the Nix store as a simple graph database. There is almost no coverage of nix-shell, or the things it can do (which include some really cool things! Shebangs that explicitly list all of your script's dependencies!). The quick start guide, until quite recently, explained how to install packages using the "wrong" command -- `nix-env -i git` -- which takes about a minute to run on my machine. (The "right" command -- `nix-env -iA nixpkgs.git` -- is instantaneous.)
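For what it's worth, the shebang trick mentioned above looks roughly like this (curl and jq are arbitrary examples):

  #! /usr/bin/env nix-shell
  #! nix-shell -i bash -p curl jq
  # nix-shell fetches curl and jq into the environment before running the script
  curl -s https://example.com/data.json | jq .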
But it's not just the documentation's fault. Why is there a wrong way to install packages in the first place? It's one of many weird historical mistakes, which are all forgivable independently, but which add up to one of the most confusing and inconsistent command-line interfaces I have ever used. `nix-env -e` takes glob-looking things, but `nix-env -q` takes regular expressions... I dunno. There are tons of things like that. (There is a new command-line interface that you can opt into by editing a Nix config file, which is much better, but not without its own fun new set of problems.)
In my experience the incidental complexity of Nix was a much larger barrier to learning about Nix than the "actual" complexity of bundling and building software that the article places the blame on.
In case this wall of text should come across the wrong way: I love Nix, and I wouldn't want to give it up for anything. I switched away from Homebrew about a year ago and haven't looked back. I am someone who writes a lot of random scattered projects in lots of different languages, and being able to document "system" dependencies on things like pango or imagemagick or pcre or whatever is really useful for me, as a way to augment my cabal files or package.json files or whatever else.
But it was pretty hard to learn how to wield it effectively. And I wish that others could experience the benefits without having to go through what I did (although if you want to go through what I did, I wrote about the experience here: https://ianthehenry.com/posts/how-to-learn-nix/).
Someone already said this but yeah, Nix the idea: long overdue. Nix the implementation: maybe check back in a couple years.
That's not "a huge number of packages", that's just the 1500 from the minimal installation. nixpkgs has over 80000. And amusingly enough, Nix itself is one of the (two) non-reproducible packages from the minimal install.
That’s great! This is only testing about 2% of the packages in Nixpkgs, but I bet if you weighted by package use that number would look a lot more impressive. Good work, maintainers!
But I stand by: this is the product of a lot of hard work by dedicated people working towards perfect reproducibility and not (as I thought the original article implied) something that Nix gives you for free.
Guix is communicably slightly better but ideologically slightly worse. Shepherd is lovely.
How about someone creates a dsl in Rust that manages declarative system configurations at a _very_ high level of abstraction? We need the declarative purity, the categorical proofiness, the simplistic easiness of nix/guix, but we also need to make it actually usable.
> How about someone creates a dsl in Rust that manages declarative system configurations at a _very_ high level of abstraction?
Did you read my mind? I'm personally more interested in a "rustible" kind of tooling, but I can definitely see myself writing packages/abstractions for a "rustix" too. As nixops and guix deploy have shown, the two concepts are very related.
Where do other likeminded people meet and talk about this? Should we create a chan on a friendly IRC/MUC server? (please not on a platform which requires web clients, or where the bridges to IRC/XMPP are bad)
Well, package definitions are what I was referring to as "rustix", while the grandparent comment was more about configuration management, which I referred to as "rustible".
I'm more interested in the latter, but I can see the appeal of both. Being able to generate docs (cargo doc) for the entire project and run doctests across all packages to ensure all provided examples actually work sounds very useful.
I love the idea of nix, but the inconsistency and developer experience are terrible. I want to suggest people use it, but there are too many rough edges currently.
For example, if you want to install a package the old way, you'll install it including the channel:
nix-env -iA nixpkgs.ripgrep
but then if you want to remove one, you don't reference the channel:
nix-env -e ripgrep
You have a similar issue if you want to use the new `nix` command. To install a package you'll do:
nix profile install nixpkgs#ripgrep
but running:
nix profile remove nixpkgs#ripgrep
will do nothing. It won't say "I didn't remove the package" or "package not found". It just returns silently. The only way to remove it is to point to the number from `nix profile history` or the actual path.
It is unbearably slow:
> time nix-env -qaP ripgrep
nixpkgs.ripgrep ripgrep-13.0.0
11.17s user 2.70s system 73% cpu 18.970 total
Overall I love the idea but it has a long way to go in developer experience and quality before it is ready for any mainstream adoption.
It's the "-A" that requires "nixpkgs." for installing. It's slower and probably there's other details I'm missing (the complexity rarely makes me want to dig deeper than these bare-basic nix-env commands since I only use it for a couple of things), but you can do "nix-env -i ripgrep".
Actually, -A makes it faster and more precise about what you want to install. The Nixpkgs repository is defined as a large associative array containing packages. When "-A nixpkgs.ripgrep" is specified, nix-env looks up the associative array using the key "ripgrep". That in theory should take constant time regardless of the size of Nixpkgs. When -A isn't specified, nix-env searches the entire associative array for a package whose "name" attribute looks similar to "ripgrep". That takes more time as the size of Nixpkgs grows. It's also less precise, because the "name" attribute for packages isn't guaranteed to be unique, unlike a key for an associative array. It doesn't do an exact match either, because the "name" attribute also includes version numbers by convention.
For uninstalling, nix-env doesn't have the -A flag, because it has to operate on the set of installed packages, not Nixpkgs.
This has been my experience as well. I have wanted so badly to love it and try and introduce it into many different systems to provide consistent tooling but its just too slow. I also found that many people had a hard time picking it up myself included.
Thank you sincerely for this public service announcement. We might joke about getting "nerd-sniped", but it really can be difficult to predict ROI when choosing which new things merit attention. This glimpse into real DX issues probably saved me (and many others) some real time and frustration.
OTOH I want to encourage those who do have the time and inclination to get involved and improve those things and help reify the good ideas / fulfill their potential.
I really wanted to love Nix and tried my best to ease into it, e.g. using it as a "pip" replacement for Python, then managing dotfiles, etc. Maybe I chose the wrong side of the "flakes schism", but it was so foggy and unclear how one went from "lazily evaluated attribute set" to "an actual system".
I really wish this was written in something even like Haskell or Ocaml, that has actual real language support and tooling. Troubleshooting attribute errors was just not possible and I gave up (GUIX looks awesome, but I can't really use shepherd unfortunately).
It really says something when I found myself looking to Gentoo as an "easier, well-thought-out alternative"...
>I really wish this was written in something even like Haskell or Ocaml, that has actual real language support and tooling. Troubleshooting attribute errors was just not possible and I gave up (GUIX looks awesome, but I can't really use shepherd unfortunately).
The 3 times I've tried to use Nix (and invariably gave up), I always came to this conclusion.
A language like F#/OCaml with great typing and type inference and good auto completion could get us half the way to usability.
Basing their killer app, the package manager, on a weird Haskell-like semi-functional undocumented language was a major faux pas of the project in my opinion. Guix is certainly easier to understand and hack, but sadly too "GNU" (i.e. orthodox) to be as useful in the real world as NixOS.
Adopting NixOS means learning how everything fits together as well as learning the quirks of a not very intuitive language. A way to make adoption even harder for everyone on the fence.
For installing a package, you have to specify the package repository. For uninstalling, you don't because you're operating against a separate repository containing locally installed packages. This should be the same for the new CLI too.
Looking at the manual[1], instead of:
nix profile remove nixpkgs#ripgrep
You'd probably need to run:
nix profile remove packages.x86_64-linux.ripgrep
For searching packages,
nix search nixpkgs ripgrep
is faster than
nix-env -qaP ripgrep
But I find the online search UI[2] to be far superior.
Is there any data on how many people care about said promises as opposed to caring about not having an actively hostile and prickly interface? Because I enjoyed tinkering with this kind of thing in my 20s, but in my 40s I just want something that I can use without having to memorize a ton of arcana, and that lets me solve my problem and get back to my actual goals.
I never wake up saying "today I want to install a package on my computer and revel in my extreme knowledge of how my package manager works". I wake up with plans, and installing a package might be necessary to achieve them.
To be clear, Nix's CLI syntax is what it is because of the following requirements:
* Provide the ability to create a new package by extending an existing package definition programmatically
* Provide the ability to have multiple variations of similar packages
To satisfy those requirements, you need a way to uniquely specify packages by the precise location of their definition. It does not interact well with apt's model of having a central package repository assign names for every known package.
> "today I want to install a package on my computer and revel in my extreme knowledge of how my package manager works"
Nix devs seem focused on improving the ergonomics of the new command line, but it’d probably take time to flesh out the details. In the meantime, I think specifying the package source is a reasonable tradeoff given the features it enables.
I also suspect that keeping the package specification the same between installation and uninstallation time is harder than it might look, though. By its nature, Nix identifies packages by the contents of their definition. “nixpkgs#ripgrep” is merely a pointer to the current ripgrep package definition in the Nixpkgs repository. Therefore, “nixpkgs#ripgrep” will likely not point to the same package at the time of uninstallation.
Not strictly necessary, but neither is the dismissal of serious usability issues as "well actually, it's this way because of reasons" which basically boil down to "not designed with the kind of use-cases in mind that many users here are saying cause them pain". The idea that you can't remove a package by the same name you added it by is just obnoxious.
Sure there are reasons for it, but the reasons boil down to "screw you users" if you're just trying to use it. And a project which is actively hostile to usability feedback doesn't fill me with excitement to start depending on it.
`nix install foobar`:
There are N versions of foobar. Choose one of:
* nix install foobar.gnu
* nix install foobar.lfs
* nix install foobar.experimental20220104
(pre-formatted for copy-paste)
`nix remove foobar`
You have two versions of foobar installed. Choose one of:
If you pick on every little detail that differs from an existing tool, call it a serious usability issue, and accuse others of “actively being hostile,” “screwing users,” and “being obnoxious” for the sake of it, that’s the end of any reasonable discussion.
Different tools have different use cases, design goals, and priorities in mind. That is in no way a justification for name calling and mockery.
In practice, you will either write `nix shell nixpkgs#package` for one-off packages, or will add one single line into your configuration.nix/home-env config file and issue a rebuild.
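For reference, that single line in configuration.nix is roughly (package names arbitrary):

  environment.systemPackages = with pkgs; [ ripgrep jq ];

followed by `sudo nixos-rebuild switch`.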
However: I believe that what the Nix crowd is doing is exploring new ways of packaging and deploying software.
It’s quite different from e.g. APT and Yum. The upside is quite big. But one cannot expect them to get every little detail right on the first try. I think it’s impossible to do something like this without experimenting. Or doing a series of Versuchs, if you will.
Therefore I do not view Nix and NixOS as a polished consumer-grade product. I would not use it in prod as of now. IMHO it’s too early for that.
I won’t even try it on my laptop. I already have homebrew, which is much simpler and works well for my purposes.
I still find Nix very interesting. I do believe it has potential.
But what will it lead to? Will Nix be able to deliver on its grand promises?
Maybe! Or maybe not. Time will tell. In the meantime I’m here on the sidelines, watching with great curiosity.
Nix itself might never become a big player. But even if it doesn’t, I’m certain that many great ideas and practices will evolve from the Nix Versuchs.
I think it's good to consider ways in which Nix's UX could be improved.
But I'd draw an analogy to vim or emacs. These have steep difficulty curves to them, but provide a power-user experience. Roughly, Nix is to apt-get what vim is to notepad. -- It'd be weird to try vim but live in insert mode the whole time.
It's not a matter of "no-the-ux-doesn't-suck", but more "if you can get past the rough edges, the things which Nix allows are wonderful".
Guix greatly simplifies things. Like others said for Nix, I really would recommend managing things declaratively in your system config and a manifest file, but the basic cli isn't bad.
guix install ranger
guix remove ranger
guix pull
guix upgrade
guix search ranger
These basics just work how you'd expect last I checked. There is some `guix shell` stuff as well, but I don't use it, so won't be able to say much about it.
What is terrible about explicitly specifying the source of the package definition? Seems like a dramatic conclusion to make based on a minor detail of the command line syntax.
I'm going to preempt this with "anything not explicitly forbidden is allowed", lest this come off as "you're holding it wrong".
Using the CLI to add/remove packages is considered kind of an antipattern in nix. The correct way is to write a fair bit of Nix (the language) to install and configure things. That can then be versioned, forked, etc.
There are a lot of really cool parts of Nix, but they're under a ball of rough edges.
Yeah, the hard part is that maintaining a file just to have the tools you want is a major hurdle that is not reasonable for most people to jump over. If I'm working I might do something like this:
curl example.com/data.json | jq ...
and I'll realize I don't have `jq` installed. I do not want to open vim, edit a configuration file, and then re-run a command to rebuild my system. I just want jq. I can't change context like that just to get a package.
I understand this isn't aligned with the purity stance that nix has but if they were able to allow this use case, it would most likely get more people doing it the right way eventually.
I can't drink all the kool-aid at once, I really need to replace bits of my workflow at a time and I can't do that with nix currently.
The approach I've laid out in my original comment (using `nix-env`) provides a bad developer experience and makes me unable (unwilling?) to move forward with adopting more of the practices. If the initial experience was better, I'd invest more time learning more.
Oh I don't disagree. The onboarding experience is terrible.
* You have to learn a new and kind of knobbly single use language.
* If you are going the NixOS route there's the non-trival learning curve for that as well.
* And the package manager itself is more than a little confusing.
As for the workflow: I use NixOS on my personal machine. I'm fine with it because it's typically pretty quick and happens infrequently. Buuuuuuut I wouldn't really advocate for NixOS. It's cool. I like it. I'm never going to recoup the time and effort I spent learning how to use it.
> Yeah, the hard part is maintaining a file just to have the tools you want
I'd be okay doing that to install packages, but what I don't like is the N number of ways to install packages. Do I use nix-env? Do I use the new nix CLI? Do I make an entry in configuration.nix? Do I use this thing called flakes, which is marked as experimental but which almost everyone seems to be on board with?
There's no central and definitive source of documentation either. There's the official NixOS website, nix pills, nixos.wiki, blog posts from prominent developers, and other sources I don't remember.
I once spent 2 hours trying to get a newer version of nodejs on my system. Turned out that I had used nix-env a while back to install it, and that had priority over the rebuild-switch :| So, I agree. Very risky way of managing nix
nix-env has been easier to use for a new user compared to having a configuration.nix, using home-manager etc., but once you start using any of the latter then just remove everything installed with nix-env (and use nix-shell for one-offs).
for those one-off times I want to just jump into a shell with, say, ranger available, it's as simple as
$ nix shell nixpkgs#ranger
if I am starting up a project that needs a tool, I just throw together a simple shell.nix in a folder for the project and run
$ nix-shell
from that folder. There are ways to set it up so you automatically switch to a shell with the specified packages (iirc with direnv, just like you'd do it for python venvs) but I haven't felt the need yet.
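For reference, such a shell.nix can be as small as this (the tools listed are just an example):

  { pkgs ? import <nixpkgs> {} }:
  pkgs.mkShell {
    buildInputs = [ pkgs.ranger pkgs.jq ];  # tools available inside the shell
  }

Running `nix-shell` in that folder drops you into a shell with those tools on PATH.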
Yes. (Docs at https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3....) Although it won't "throw it away" in the sense of uninstalling it; it's just that when you exit the shell, you'll be back in an environment which doesn't have the symlink to `ranger` on the PATH.
It basically garbage collects it (you can run it through a periodic service, or call it manually). You also have older versions intact, so you can simply change back to a previous version instantly.
Also, this is the only package manager where I could reliably install GNOME and Plasma, and have them removed down to the last package - pacman and everything else will leave behind random files all around the place.
Well, it can’t help with random spamming, but home-manager actually can help here! You can have a declarative config for your home dir, and it will symlink config files into your .config folder. Also, it will by default never overwrite any file; instead it will ask you to remove spammed configs that conflict with the ones you want to use. And since these are symlinks, they are read-only, so no program will change them (which is good and bad, because it means programs can’t be configured through their GUI settings, for example).
it's really useful if you work on projects that need different versions of dependencies, as you can define on a per-folder basis which jdk or python versions you want to have. You can even commit this to git so that everyone has the same env locally
For non-nix usage, I use a docker container with debian in such cases. Just docker run, apt install, keep using while needed, then stop and rm the container. Works great to keep the main OS clean.
It'll still be there, but the symlink will go from your path. The nice thing is that if you open another shell and use it again, it's already there, and the symlink is reestablished. If you want it installed globally then it's still there and it just gets added to your path.
If you start to run out of disk space, then a quick garbage collection will remove all unused packages and builds.
The other nice thing about the nix-shell is I can incorporate pip into it (and probably other package managers as well, though I rarely use anything other than Python). So I can open a shell, add various lib type dependencies and any Python packages I want. All of it is only available in the shell, and when I exit they're gone as far as my OS is concerned.
You could clone a repo that contained my default.nix file, jump into a shell, and be working with an identical environment to mine.
I tried. I didn't feel like spending days learning their custom config language just to install a program I want to use. Ease of use is not a label I would apply to NixOS.
Try NixOS with the new redesigned command line. It's much simpler. No idea why this is not getting advertised more widely, as it's been worked on for a few years already: https://github.com/NixOS/nix/issues/779
For example, opening a temporary shell with, say, GIMP, is as simple as: nix run nixpkgs.gimp
If you want to check what packages are available for GIMP, because you are not sure what to run: nix search gimp
I tend to forget very convoluted commands, and I find the new nix much simpler than pacman or apt.
It's kind of nice. Everything in nix revolves around /nix/store.
This is where each package is stored, in the form <hash>-<package name and version>.
The second part is only useful for humans and is not really necessary. The hash is computed from the package source, compilation flags, dependencies, architecture etc. (there's also an experimental mechanism where you can use a hash computed from the contents of the package, which solves some important problems).
The reason a hash is used is so that you can have not only multiple versions of the same package, but even the same version built against different dependencies.
For example, in a traditional system you could have a situation where you need to run two apps, but each of them depends on a different version of openssl. That situation is generally not possible without some kind of trick: you either have to recompile one application to use the same library, or resort to workarounds like compiling statically or placing openssl in some other location. In Nix you can have two openssls side by side, and each package references the one it was compiled with.
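Schematically, the store can hold something like this (hashes elided, package names illustrative):

  /nix/store/<hash1>-openssl-1.1.1w
  /nix/store/<hash2>-openssl-3.0.7
  /nix/store/<hash3>-app-a   (references <hash1>-openssl-1.1.1w)
  /nix/store/<hash4>-app-b   (references <hash2>-openssl-3.0.7)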
The store is also designed so that packages are only ever added or removed: no renames, and definitely no changes to the files there. (Well, it's Linux, so you can change permissions and do that if you really want to, but you're asking for problems.)
Now this makes caching easy. When you need an application you not only know its name, you know down to the compilation flags what it needs to be, so you can simply compute its hash. When it is needed, Nix will first check /nix/store. If the package is there, success, you use it; if the package is missing, Nix can contact a configured cache (which can even be an S3 bucket) and check for the package. If the package is there, it'll download and extract it into your store. If the object is missing from the remote cache, then Nix will begin downloading source code and compiling, because that's how every package is defined (how to build it).
Now that the basics are explained: when you invoke nix-shell, Nix will perform these steps. It'll check if you have the package in your store; if it is missing it'll download it from the cache; if it is missing in the cache it will begin building it; and if another dependency is needed it will apply the same steps to it recursively.
Ultimately, when the package is present, nix-shell will enter a shell with a modified PATH that includes the package(s) you referenced. When you exit the shell you land back in the old shell with the old PATH. If you use Nix for development you can also combine it with a tool like direnv, which can make it so that when you cd into a project directory your dev tools are automatically available. And they will disappear when you exit it.
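The direnv part mentioned above is a one-line .envrc in the project directory, which picks up the shell.nix there:

  # .envrc
  use nix

After a one-time `direnv allow`, entering the directory loads the environment and leaving it unloads it.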
If you were paying attention you might realize that this approach will generally make /nix/store just continue to grow, as new packages are added all the time. This is especially true when you use it for development and you're changing your application; you will get tons of versions of a single app. That is indeed happening and is kind of a drawback, but there's a garbage collection mechanism which you can invoke manually, or you can schedule it to run periodically. There is also an option (I think more useful for a build machine) where you can specify how much disk space you want it to use, and it will just clean old stuff as you hit the disk usage limit.
Anyway, changing PATH is what nix-shell does (originally the author said it was meant for debugging (you could enter the shell that was used for building the application, so you could troubleshoot why a build is failing), but it turned out to be a great feature of its own). NixOS, on the other hand, works in a way that you have a central configuration system, with a configuration.nix file containing all system configuration. When you invoke the nixos-rebuild command (typically called with the switch parameter to immediately switch to the new config), NixOS actually rebuilds the entire system from scratch; the only reason it does not take long is that all needed files are mostly already in /nix/store, and all NixOS needs to do is recreate symlinks in the root to point to the proper packages. This is great, because:
1. You get a system exactly as described. Unlike SCM tools like SaltStack, Ansible, Chef, or Puppet, the Nix configuration is truly declarative, not just an imitation of it. For example, if you define a package in the configuration file and build the system, then remove that package, after a rebuild that package won't be referenced or available anymore (it will still be in the store, but just taking up space that you can reclaim with GC)
2. Because symlinks are used and packages aren't replaced in place, you can actually roll back. Let's say you have your system running and you want to try Wayland. You make changes to the configuration to install it; maybe the process fails for some reason, or you realize you want to restore things back to where they were. You can just revert to the previous version. This will also be a very quick process, since you likely still have your old packages installed and nothing new needs to be downloaded.
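The rollback itself is a single command, or a pick from the boot menu:

  $ sudo nixos-rebuild switch --rollback
  $ sudo nix-env --list-generations --profile /nix/var/nix/profiles/system   # see what you can roll back to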
It's really a new, very different way of thinking about packages and about the OS, and this approach solves so many problems with the status quo we have right now. It also removes the need for the SCM tools mentioned above.
I think this might be a broader Linux thing: we could probably simplify a lot of things by going back to the assumption that computers will have a single user.
> I've never really understood the use case for nix-env.
If that's the case, perhaps `nix-env` shouldn't be mentioned as the command to use to install packages on search.nixos.org?
I checked the NixOS manual right now but the last time I did, it instructed to use nix-env to install packages. The declarative package management using configuration.nix wasn't highlighted before as it is now.
This is exactly the kind of thing the parent comment mentioned. I'm not willing to go through these kinds of papercuts to get whatever NixOS has to offer with its radical changes.
Oh, and the nix package manager is still unbearably slow. It reminds me of my (unpleasant) time using dnf. Once you're used to package managers like apk and pacman, it's an extremely jarring experience to stare at your screen and wait for a package to get installed.
> If that's the case, perhaps `nix-env` shouldn't be mentioned as the command to use to install packages on search.nixos.org?
Yes, you should pretend this command doesn't exist.
There is no need to be using NixOS to do so.
> Oh, and the nix package manager is still unbearably slow
This doesn't match my experience so I wonder what you're doing differently. It's only slow if you're missing the cache, either local or remote, as then you need to build things locally, unlike apk and pacman which install only prebuilt packages from repositories. Nix is fundamentally different to those package managers. I find it very snappy.
My experience is also that it is quick as long as the package is cached. And it usually is unless you are using master, which is definitely not recommended.
Yeah, this could be a NixOS vs every other OS problem. Even when packages are declared, it was my understanding that `nix-env` was the way to install them?
Yeah, I think it's things like this that make it hard to adopt nix. If you search for a package on search.nixos.org it tells you to install it using `nix-env`.
All I want is a way to say "I want jq, kubectl, and terraform installed" and have it available globally. Not for specific projects or anything like that.
Right now I maintain a makefile that installs everything for me using `nix profile`:
Which works almost exactly like I want. The only issue is that sometimes a new hash is generated (which I don't understand... maybe a config update in the repos?) and the makefile can't run anymore:
error: packages '/nix/store/y65pp5hipid0fzxl1z7xjxdk4h9jwfw7-exa-0.10.1/bin/exa' and '/nix/store/gy0bqcs9mcan8af47wakdylhal67dpy4-exa-0.10.1/bin/exa' have the same priority 5; use 'nix-env --set-flag priority NUMBER INSTALLED_PKGNAME' to change the priority of one of the conflicting packages (0 being the highest priority)
I've avoided home-manager because it says:
Unfortunately, it is quite possible to get difficult to understand errors when working with Home Manager, such as infinite loops with no clear source reference. You should therefore be comfortable using the Nix language and the various tools in the Nix ecosystem. Reading through the Nix Pills document is a good way to familiarize yourself with them.
If it's common enough to warn about, it's not quite the tool I'm looking to pull into my environment.
Wow, I have never got that error but I haven't used nix-env much.
Yeah, to be honest familiarity with the NixOS module system doesn't hurt for home-manager. For me it was worth it but I was also just interested in Nix enough to read through the Nix-pills, etc.
Maybe given some time Nix will be polished enough that a smaller time investment will be enough.
Another thing: yes, even if customizing your home-manager config could lead to hard-to-debug errors, once it works it is hard to brick, and it will most likely work on new laptops, colleagues' machines, ...
That's why, for my small team at work, my colleagues don't have to be as deep into Nix as I am, but they can still benefit from the changes I am introducing in a pretty "safe" way.
It's unfortunate that this is the top comment, as it might unfairly turn potential users off to Nix. Imperative command-line installation of packages with nix is contrary to the model of how it is supposed to work, and is deprecated. I've never used it once.
I'm a much newer user and admittedly the documentation is not great, so I've used forums and IRC a lot, and from what I've gathered, the movement is towards declarative reproducibility. This is the reason for the development of flakes. I saw installation of packages using nix-env in many search results and tutorial pages, but newer comments seem to indicate that this is no longer recommended, and it fully makes sense to me. I want to get my machine back to the same state using my config checked into git, and installing imperatively using nix-env clearly goes against this.
The thing is, it might be contrary to the underlying model of the OS, but it's how a lot of people want to do it.
To give a similar example: in the Rust programming language, dependencies are declared in a TOML file. While the dependencies are declarative, there's also a popular tool (that's being added to the main project) that lets you do
cargo add awesome-thing-rs
and add the dependency to your project, and edits the config file for you. That's it: no combinations to remember, no weird use cases, no unintuitive error messages, you just say "I want to add X" and then X is added.
Is it really that hard to add a word to a list in a config file? If this is what is stopping you, then you haven't yet understood how nix works or the value of nix. You could write a shell script in 5 minutes that would add and remove package names from a nix config file, but that's fundamentally no different from writing a tool to add #define lines in a C header file rather than just manually editing it. It's missing the point.
For me it is a context switching issue. If I'm reaching for a package it is because I was in the middle of something that needed it. I'm not sitting there thinking "What packages might I need some day?".
So if nix could allow me to get the tools I need without context switching that would be a great addition. I don't want to have to go edit configuration files just because I haven't installed some tool yet.
But I don't want it temporarily, I want it all the time. So I want to be able to make it available for future me too, that way next time I try to use it I don't have to run `nix-shell -p <package>` again
Then I would look into home-manager and/or nix-darwin if you are on macOS.
I think under the hood home-manager basically declaratively manages a Nix profile for you. You (or even better a home-manager managed .bash_profile) can then put that profile into your PATH.
It will still be on your system until garbage collected, so if you decide to install it permanently, it will be instantaneous. "Permanent" is a fuzzy concept with Nix in any case, as all it means here is that it's in your path and won't be garbage collected. In this case you just add the package name to an array in your config. It's extremely simple to do, just editing a list in a text file. I understand you want to just run a command, but perhaps Nix is not for you if easy command-line usage is more important than an automated, reproducible OS config.
That being said, I think there are indeed ways to partially manage your config using simple command line commands but I haven't found the need myself.
The way I work is I have in my configuration a set of packages that I want to always have access to. Things like git, vim, gpg and so on.
Then for every dev project I have a shell.nix defined which lists the project's dependencies. I also have direnv installed, so when I enter that directory, the project's tools are automatically available for me. I rarely change the main configuration and more often the project's shell.nix. And for one-offs I use nix-shell -p <package>
I just started to install nix and my first impression is the opposite of terrible. The way it describes what it will do in detail, before doing it, I think is _excellent_. Of course, I have not started using nix yet but to me this shows some attention to detail that developers appreciate and that most other tools don't bother with. I am hopeful I will like it.
The installer is absolutely first rate, but I would strongly recommend asking for help at discourse.nixos.org as soon as you start getting stuck (which will definitely happen, probably rather early).
Funny you say that, because the installer is about the only thing that is helpful like that. Everything else from there is basically "what do you mean you didn't read the entirety of the official manual, and the deprecated wiki, and the unofficial wiki, and the other wiki, and Discourse, and IRC, and random blog posts, and just learn to read undocumented inconsistent idiosyncratic source code already!?"
$ time nix search nixpkgs ripgrep
* legacyPackages.x86_64-linux.ripgrep (13.0.0)
A utility that combines the usability of The Silver Searcher with the raw speed of grep
* legacyPackages.x86_64-linux.ripgrep-all (0.9.6)
Ripgrep, but also search in PDFs, E-Books, Office documents, zip, tar.gz, and more
* legacyPackages.x86_64-linux.vgrep (2.6.0)
User-friendly pager for grep/git-grep/ripgrep
real 0m0.514s
user 0m0.494s
sys 0m0.019s
We're trying to improve the UX with the new CLI. Precisely to improve the developer experience and quality. Any other major gripes?
Yeah, the UX for the new search, although faster, is more of a problem for me. When it returns `legacyPackages.x86_64-linux.ripgrep` that makes me think "oh no, I don't want the legacy package. I want the good one", but no non-legacy package is returned.
Also the name is confusing: it returns `x86_64-linux` on Linux and `x86_64-darwin` on macOS, but why does that information need to be included? It then raises the question of whether I need to install with the whole long string or if I can just install with `ripgrep`.
The other big gripe I have with the new CLI is listing installed packages. If you run:
> nix profile list
You get a gob of mostly unreadable text scrolling through your screen:
Okay. Looks like there is an issue for duplicate profile entries already; just hasn’t gotten attention. https://github.com/NixOS/nix/issues/5587 I’ll take a look.
The verbosity of “legacyPackages.x86_64-linux.ripgrep” in the search results may be too much. We can omit the default portions or provide the full detail somewhere less prominent.
I wanted to like nix, but its just too complicated and inconsistent for me to ever recommend it. If my team adopted it, there would be one nix guy/gal that did everything because no one else would bother to learn it.
Also they seem to have a lot of breaking changes in the CLI. It's basically unlearnable at this point because none of the tutorials or examples work, since they were written for a previous version.
Nix seems like a good idea, but the implementation is just a mess. It reminds me of the messy inconsistent AI/ML code I see coming out of academia.
I'll revisit when they have a single executable that makes the simple things simple like every other package manager. `nix search mypackage` and `nix install mypackage` shouldn't be so difficult.
I really like the idea of nix, especially for my servers, in order to do provisioning and configuration.
Problems
1. Not working by default
2. Documentation is ...not ready
3. There are some versioning issues
[1] A week ago I tried to install nix (the package manager) on my Arch box using the recommended multi-user script, and it screwed up the permissions and ended up not working.
CMake is aimed at describing how a C/C++ project is built, and competes with autotools. Nix is aimed at describing packages installed to a system, and competes with (say) apt-get or Dockerfiles or Ansible.
When you list the sources for a Nix package you also have to specify the hash expected. If it pulls the sources and the hash doesn't match then the package will fail to build.
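For example, a typical source stanza in a package expression looks roughly like this (the hash is left as a placeholder):

  src = fetchFromGitHub {
    owner = "BurntSushi";
    repo = "ripgrep";
    rev = "13.0.0";
    sha256 = "<expected hash>";   # placeholder; the build aborts if the fetched source doesn't match
  };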
I'm not really anti-nix, but never really understood the appeal.
Linux is already a niche market, and this is some subset of that. I've been running a home grown Arch variant for years without a single issue. Why someone like me would want to type everything out as some config language is beyond me.
I could likely completely nuke my setup and redo it all in less time and effort than learning Nix. In fact, I can say it takes about an hour - I usually do it annually.
For me the appeal of NixOS is that I never have to nuke my setup. I probably used to do that about three or four times a year when I was running a Debian variant. I'd get lazy and install a bunch of stuff globally with apt and pip and then not really know what state my machine was in, then nuke it and start afresh. With NixOS I know what state my machine is in all the time, can transfer the configs to another machine, and never have to nuke it. I don't know the Nix language, and haven't bothered to learn it. I do understand my own config files though and steal from others if I need to.
To be clear, I never reset because I have to, just to clear things out. Despite what anyone says, things do get funky.
For example, I noticed different behaviors and tools between wpa_supplicant and iwd. Deleting iwd left me in a working but half state, as the ips stayed the iwd way.
That times hundreds over a year really makes things weird.
Except with nix it doesn’t make it weird. If you delete iwd and something stops working, then you can just roll back to the previous generation of the nix store that included iwd.
Linux, niche?! Idk where you’ve been, but Linux is ubiquitous and runs everywhere.
It’s about scale mainly.
A nixos host can be viewed as a solid state environment, in some respects. At least from a systems management perspective.
If you scale up to more than one host that you want to keep in a completely managed state, nothing really beats a model like nixos (or guix, although I lack experience there).
When it comes to having the complete system state of n machines, where n > 1, NixOS is hard to beat.
I have extensive experience with tools such as ansible and salt, but the state management of nix is just unbeatable. It’s, IMO, “how it’s supposed to work”.
And with nixos you usually don’t have to nuke anything - a new system generation is enough.
Having managed many thousands of hosts over the years, I’ve longed for something like nixos.
Managing largeish scale infra is not what I do these days, but nixos still appeals to me, and I use it for a few pi’s, a nas and my workstation.
I’ve never tinkered less with my machines, and I love it.
With nixos ad-hoc tinkering is not “the way”, and using a shell.nix for different project folders is just awesome.
I imagine this could help scale dev teams as well.
> I could likely completely nuke my setup and redo it all in less time and effort than learning Nix. In fact, I can say it takes about an hour - I usually do it annually.
Right, this is pragmatic.
I think most of the people on NixOS at least tried using Nix on Linux or macOS first, liked the taste of it, and then wanted to try at the OS level.
The most accessible benefit of Nix is if you're using multiple computers, Nix makes it easy to get the same versions of the software you use on each. I thought that was useful for vim/tmux/fish.
Where I'm optimistic for Nix in the future, for developers, is that Nix is great at describing the packages for a development environment. It's neat to just have a single command to run & have everything ready, compared to hoping that the README contains the right package dependencies for your distribution.
I used Arch for a long time but I always find that I couldn't keep a consistent configuration across multiple machines. The machine I used most often would quickly grow numerous augmentations and changes that I would be missing on my laptop and desktop. There are tools like etckeeper and numerous dotfile managers but most lacked the capability to recognize the machine's state/hardware (e.g. laptop, desktop, server, etc.) or only managed dotfiles but not /etc.
Over time the best configuration tended to be just using the defaults, because getting used to a tmux.conf on one machine would just become an annoyance once I was on a machine without it. (Plus now you need to use about 12 different configuration languages :))
A huge advantage Nix has over any other configuration tool is that it is a single source of truth. A normal dotfiles repo doesn't know anything about the actual programs or OS, so it's very easy for it to become incompatible with a machine's actual state.
On Nix, tools like home-manager [1], especially when used in a flake template like devos [2], solve this problem really well. It's very easy to make a change, switch to the new configuration, and then commit the change. Managing bespoke services and scripts is very easy because one file in Nix can create a service with a script's contents, create a user, install necessary packages, etc.
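For a flavor of what that single file can look like, here is a hedged sketch of a NixOS module; every name in it (the service, the user, the package choice) is made up for illustration.

    # backup.nix: one hypothetical module installing a package, creating a
    # user, and wiring a script into a systemd service with a daily timer.
    { pkgs, ... }:

    {
      environment.systemPackages = [ pkgs.restic ];

      users.users.backup = {
        isSystemUser = true;
        group = "backup";
      };
      users.groups.backup = { };

      systemd.services.nightly-backup = {
        description = "Hypothetical nightly backup job";
        startAt = "daily";  # also generates the matching systemd timer
        serviceConfig = {
          User = "backup";
          ExecStart = pkgs.writeShellScript "run-backup" ''
            ${pkgs.restic}/bin/restic backup /var/lib/important-data
          '';
        };
      };
    }

home-manager modules look much the same, just scoped to a user's environment instead of the whole system.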
I think (desktop) Linux is partly a niche market because the management can be very complex. Nix is certainly very complex itself, but it's structured enough that it can create very simple systems. There are already several NixOS-based OSes that are basically just configuration templates. I could certainly see a more user-friendly distro being released using Nix as the foundation.
Nix is like vim. If you base your opinion on the first month of use you are gonna have a bad one. The learning curve can be a bit steep, but once you get past it there's no going back.
I don't get it. So with Nix, every package I install will possibly come with its own dependency tree, completely independent from other packages. So instead of relying on distro maintainers who, say, check one version of OpenSSL, I'll be relying on the devs of each package. I doubt they will do the work, so either they will never update their deps, or update without checking them.
It feels like Nix is like npm for my system, and distributes the trust I put in maintainers between all the devs of all the dependencies. That's not what I want...
> Instead of relying on distro maintainers who, say, check one version of OpenSSL, I'll be relying on the devs of each package.
This actually is not the case (although you could do it). While Nix has the capability for every package to have its own dependency tree, the actual channels/repositories don't keep track of more versions of packages than necessary.
For example, in nixpkgs OpenSSL has exactly three versions: the 3.0.x branch, the 1.1.1x branch, and the 1.0.1x branch. Some applications need the old 1.0.1x, most are probably on 1.1.1x, and others are moving towards 3.0.x.
Nix lets you ship packages and their dependencies as quickly as upstream allows, but it still defaults to letting dependencies update normally and rebuilds dependent packages automatically. For comparison, I checked Arch Linux's repositories: they also have two versions of OpenSSL, but they have to make sure the shared libraries don't conflict on the filesystem, while Nix handles this automatically.
For saner examples, Nix can easily provide five different Python versions, five PostgreSQL versions, and so on. But if a package doesn't care what the exact Python 3 version is, it can just take "python3" as an input, which will upgrade automatically as you would expect. If a package works with 3.8 but not 3.9, it can be pinned to 3.8.
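A hedged sketch of the difference (the tools here are made up, and whether a python38 attribute still exists depends on the nixpkgs revision you are on):

    # toolA follows whatever nixpkgs currently calls "python3";
    # toolB is pinned to the 3.8 interpreter because it breaks on newer ones.
    { pkgs ? import <nixpkgs> {} }:

    {
      toolA = pkgs.python3.withPackages (ps: [ ps.requests ]);
      toolB = pkgs.python38.withPackages (ps: [ ps.requests ]);
    }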
Additionally, while the NixOS nixpkgs repository has a lot of maintainers, there are far fewer members with commit privileges, so it's not the npm wild west. With extensive use of GitHub Actions and tags, they can quickly merge simple updates while making sure package definitions are of high quality.
I think that Nix's time will come in some years. Without features like flakes and command line tools with better UX, it's still too hard. These things are in the pipeline, but at least for me, Nix doesn't feel quite polished yet.
I'm using it on my servers though, and the experience is great when you don't have to do anything that differs too much from the "standard setup".
> As an example of where this central point of failure becomes problematic, attackers can modify a package to include malware in what is known as a digital supply chain attack. What we really want, and what no package manager can give us today, is reproducibility.
That article is merely making the argument that reproducibility on its own does not improve security. This is clearly true. Anyone can also claim that code-signing does not improve security because "well, you have to trust either this blob of software, or this certificate blob, or this signature, what's the difference?" The difference shows up when you start to compose these parts into a larger system. Dependable qualities of a system (e.g. that a specific hash function is one-way, or that specific problems are hard/easy to solve) can be used to create the security properties you want.
I doubt Tavis is saying that builds should not be reproducible, just that reproducibility is not going to provide security on its own in a trivial manner. I'd argue that reproducibility can help create secure systems with properties otherwise unobtainable (or difficult to obtain).
I'm sure reproducible builds have some small part to play in supply chain security, but I think they are massively over-emphasised.
My own view is that all of supply chain security is somewhat of a red herring anyway. I don’t want to have to trust software vendors at all, whether I think they may have been hacked or not. I shouldn’t have to trust log4j or any other legitimate dependency. And I shouldn’t have to audit the source code (or delegate that) to find out if I should trust it. We run all software with far too many privileges by default. Kate Sills at Agoric had a great article about this a few years back (Medium, sorry): https://medium.com/agoric/pola-would-have-prevented-the-even...
Using a distro like NixOS raises the bar for attackers, since more eyes might be looking at upstream updates for malicious code (not that anyone is obligated to, though).
Moreover, I don't know about Nix, but in the Guix ecosystem there's the guix challenge command, which lets you query different build servers for the same build hash, so you don't have to build everything yourself but can split the trust among several trustworthy actors. A single compromised build server would then raise alarms almost instantly.
Is there a system for running a package cache for nix? I love how Nix stops dependency churn from breaking my builds. What worries me is that it introduces quite a few dependencies on remote servers, both for any global repository, and any tarballs referenced in the nix file.
With golang, you can set up your own module proxy, point the GOPROXY environment variable at it, and then be guaranteed that even if the original host is no longer available, you can still build your code. Is there an equivalent for nix?
I'm unaware of the details (I haven't done that myself), but you can run your own build server and repository. The problem is more about archiving upstream sources, but if your own build server runs behind a caching proxy it should not be a problem once the initial build of all your packages is done.
Yes, the nix store essentially acts as a cache. Nix can pull from "substituters" (remote Nix stores/caches) over HTTP or SSH; cache.nixos.org is simply a Nix store served over HTTP, which is what nix-serve gives you if you run it yourself. [1]
If you enable the "keep-outputs" and "keep-derivations" options in your Nix config, then your local nix store should keep everything it used until you manually clean out old derivations (assuming you aren't accessing the tarballs via some --impure method).
An easier option is to use a tool/service like cachix [2] to run your own repository which can be used just like the normal cache.nixos.org.
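For reference, a sketch of what the NixOS side of this can look like. Option names shift a bit between releases (older ones use nix.binaryCaches / nix.binaryCachePublicKeys instead of nix.settings), and the cache URL and key below are placeholders.

    {
      # Serve this machine's /nix/store over HTTP, like cache.nixos.org does
      services.nix-serve = {
        enable = true;
        secretKeyFile = "/var/cache-priv-key.pem";
      };

      # On client machines: add the internal cache next to the default one.
      # Overriding trusted-public-keys may drop the default cache.nixos.org
      # key, so re-add it if you still rely on the public cache.
      nix.settings = {
        substituters = [ "http://cache.internal.example:5000" "https://cache.nixos.org" ];
        trusted-public-keys = [ "cache.internal.example:REPLACE_WITH_PUBLIC_KEY" ];
      };

      # Keep build-time dependencies so garbage collection can't force
      # re-downloading sources that upstream may have deleted.
      nix.extraOptions = ''
        keep-outputs = true
        keep-derivations = true
      '';
    }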
Is Nix commonly used only for compiled languages? For example, is there anyone who creates and manages separate packages for the frontend and backend of a web project? I would like to hear what benefits there are, if any.
The ability to compile everything from just one file is amazingly powerful though, and makes sure it's all isolated. With NixOS, being able to define the whole system with one file is also extremely powerful. I bet Valve's SteamOS would've been better with NixOS as the base and Flatpak for users to install apps with, and I say this as someone who has used Arch for over 10 years. Nix makes development of complex systems easier.
Since the derivations can be cached, your builds will also be faster.
Perhaps it is just me, but the provided example makes me shudder:
> For a made-up example, let’s imagine your company maintains a tool built 25 years ago with the C compiler from that time. It can’t be upgraded safely because the entire world’s banking system would fall apart if it did. Maybe you keep a dedicated laptop as a digital time capsule to maintain this tool. Instead, you can create a Nix shell with whatever version of the tools you need, pinned and never upgraded.
Yikes! If your company depends on a tool it can never upgrade because the business would collapse if you did, you should be embarking on mitigation projects and/or partial rewrites right now! Don't enable this abusive situation for longer with "Oh but Nix makes it much safer!", because that is going to make the problem that much worse when it finally does collapse.
Look at it like this: Nix will also make it safer to upgrade. If it goes wrong you can easily roll back and try again with some extra patches. The fact that I _know_ I can go back to a safe state makes me far less scared of applying updates, so I delay them less. These situations will happen less often when working with Nix, IMO.
Maybe I'm projecting my own experiences onto this, but the few times I've encountered "critical 25 year old code that can't be upgraded" situations they have always been badly neglected and knowledge about their workings is either already gone or concentrated in a few people about to retire.
No codebase has eternal life: sooner or later you need to upgrade the code anyway, because some upstream vendor finally retired an API, or the CPU vendor finally ran out of spare parts for the obscure architecture it was running on, or regulation like GDPR means big changes are needed, and so on. If you do the upgrades sooner, more of the domain knowledge that the original devs built up will still be around and upgrading will be relatively painless. If you delay until the last possible moment, you virtually guarantee that the upgrade will be as painful as possible. In the example, Nix being used as a band-aid to further delay upgrades is very likely going to lead to long-term problems.
IMO, any codebase that is critical to a business should always be in a state of partial rewriting/refactoring/upgrading, to keep it up-to-date but also just to keep the current devs familiar with how it all works. Institutional knowledge degrades over time otherwise and regular maintenance is way cheaper than the occasional massive blowup.
That said, nix has other benefits and it's merely this example that I have a problem with. Please don't use nix to excuse poor software maintenance practices, but use it for things it was meant for.
I suppose it depends on the application. NASA is still communicating with Voyager, and being able to set up a build environment on a modern computer for protocols written 40 years ago would be quite valuable to them.
I imagine there are also still microcontroller systems that communicate via standardized protocols that don't need APIs of the kind you're thinking of. Say, like older nuclear power plants.
Everything has a shelf life, but a long lifetime doesn't necessarily mean a system has to be constantly evolved and kept up to date in the manner you're describing. Being able to install an environment that's guaranteed to reproduce the original system as deployed would be valuable though, and Nix seems like it could fit the bill.
Fair enough, there are definitely some applications where old does not mean out of date (mostly non-internet-connected applications I think). The original example in the article contained "It can’t be upgraded safely because the entire world’s banking system would fall apart if it did.", which definitely does seem like a disaster waiting to happen.
Nix seems like it could fill a niche, but I do wonder if it will be as ubiquitous as some articles proclaim. Certainly with the apps I currently work on (Ruby/Rails mostly) we already experience almost none of the pain that Nix claims to solve, so I doubt many will switch.
Source code and abstract virtual machines make many things easier. However, I expect Rails is typically deployed via Docker or something to capture dependencies. That part could be replaced by Nix, in principle.
People also use Nix in combination with Docker quite a lot, so clearly Docker doesn't go far enough on the developer-experience side, and Nix doesn't provide isolation, so Docker provides advantages from that side.
I feel like there should be a more integrated, comprehensive solution for this sort of thing, but these are on the right track.
Is anyone using Nix to standardize local development tool versions?
In our company — as in most companies, I suppose — apps rely on a specific toolchain, such as the Go compiler, Protobuf, linters, PostgreSQL, and so on.
Coordinating these is difficult. We have to maintain documents saying "install Go 1.17" and so on. Some people are on different architectures, though we are all on either macOS or Linux.
The other challenge is coordinating these versions with what's used to build and test. A developer might accidentally develop using different versions than what we build and run with.
Our solution for a subset of these tools is to use Docker. We have helper scripts to let you run some standard versions of some tools. But there is a lot of friction inherent to using Docker, especially on a Mac.
My idea would be to use Nix to specify the complete environment, and this could also be leveraged to run exactly the same environment in automated builds and tests.
That would require running Nix locally as well as inside our Docker builds and CI runs (CircleCI, mostly). How difficult is this?
My concern is that it could be difficult to mix Nix with the parent OS if you don't adopt NixOS wholesale, which I think would be going too far. For example, imagine we want Go and the Protobuf compiler in our toolchain. If we used official Nix packages for these, they probably declare dependencies on things like libc and other libraries that Nix also provides. But we'd like to continue using our Debian Docker images. Maybe instead of using the NixOS packages, we could write our own package tree from scratch for just the limited subset of tools we need?
My employer does this. They (the team that maintains this) use Nix & direnv, so local developers get nix-based tool versions when they `cd` into the company repo directories, and whatever their native machine provides elsewhere. They support NixOS, Ubuntu LTS, Pop!OS, and Mac OS for this setup. Anything else and you're on your own, but I use Debian with it and just occasionally submit patches.
I don't maintain it in general. It's not much work on my end; about the only thing that has failed is when someone whitelists only the supported platforms somewhere.
At work we switched from a big list in our wiki + some shell scripts + brew to a smaller list in our wiki + nix-darwin and I really like the result.
Everything which isn't in nixpkgs I prefer to install manually (such as IDEs, VirtualBox, Docker, ...) because the built-in installer/updater is more robust than brew and most of the time brew just invokes the native installer anyways.
If something is in nixpkgs though, you can be pretty certain that it will work on your colleagues' machines just like it does on yours, which is great.
So let's say the developer has a Mac and wants to have Go and Node on their machine; how do you specify these versions in a central way so that all team members have the same versions of tools?
Also, we want to make sure that any Docker image we build uses the same packages. Do we have to migrate our images to be built on NixOS, or can we continue to use Debian base images and simply insert Nix into them?
I'd also be interested in learning how much actual Nix packaging is needed. If the packages we need are all in Nix already (like, again, Go and Node and so on), we shouldn't need to actually write any Nix code, I think?
For simple devshell environments, you probably don't need to write much Nix at all. You can use a tool like numtide/devshell [1] to create a flake from a template which lets you define packages and commands you want to expose via a TOML config file.
The reason to use the flake-based version is that it will automatically create a flake.lock file, which should be added to version control so every user gets the same nixpkgs version.
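For reference, a plain flake (not using the numtide tool) that achieves the same pinning might look roughly like this; the packages are only examples, and older Nix versions expose the shell as devShell.<system> rather than devShells.<system>.default.

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

      outputs = { self, nixpkgs }:
        let
          system = "x86_64-linux";  # add other systems as needed
          pkgs = nixpkgs.legacyPackages.${system};
        in {
          devShells.${system}.default = pkgs.mkShell {
            buildInputs = [ pkgs.go pkgs.protobuf pkgs.postgresql ];
          };
        };
    }

Running nix develop then uses exactly the nixpkgs revision recorded in flake.lock, which is where the "same versions for every user" guarantee comes from.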
While you can build normal Dockerfiles using some Nix tools, it would be the same as building the image with Docker directly. If you want the same package-pinning guarantees in Docker images, then you need to build a Nix-based image, which lets you add Nix packages. The pkgs.dockerTools.buildLayeredImage function and its counterpart pkgs.dockerTools.streamLayeredImage are interesting in that they build a layered Docker image so that packages can be shared between containers, which makes rebuilding images much faster. [2]
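A minimal sketch of that (the image name and contents are made up):

    # image.nix
    { pkgs ? import <nixpkgs> {} }:

    pkgs.dockerTools.buildLayeredImage {
      name = "my-service";
      tag = "latest";
      contents = [ pkgs.bash pkgs.coreutils ];
      config = {
        Cmd = [ "${pkgs.bash}/bin/bash" "-c" "echo hello from nix" ];
      };
    }

nix-build image.nix && docker load < result gets the image into a local Docker daemon; store paths map roughly onto layers, which is what makes incremental rebuilds and pushes cheap.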
At some point you will probably want to manage the project's Go and Node dependencies themselves from Nix, and that involves writing Nix. Essentially the process is to use a tool that converts the go.mod and node/yarn lock files into a .nix file [3,4]. This can be a bit more involved, as some dependencies, especially Node.js ones, want to download additional resources at build time, which breaks the Nix model.
The best way to use Nix is gradually; just start with a simple devshell. There are some oddities when using Nix on Mac so it's best to get those figured out before trying to do too much.
I was disappointed to see that the official macOS install is multiuser, and you have to apparently work around that if you don't want the full daemon/root-level install. I'd want the install to be really simple and unintrusive.
> So let's say the developer has a Mac and wants to have Go and Node on their machine; how do you specify these versions in a central way so that all team members have the same versions of tools?
The easiest setup would be to use `nix-shell`. This would also have the benefit that you could have different versions for different projects.
If you also install `direnv` on the machines you can use `use nix` in the `.envrc` file to automatically load the packages that are declared in `shell.nix` into the PATH once a developer enters the project directory in the terminal.
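So a concrete, hypothetical setup for the Go/Node question above is a shell.nix checked into the repo plus a one-line .envrc containing use nix. The attribute names (go_1_17, nodejs-16_x) depend on the nixpkgs revision, so treat them as examples; for full reproducibility you would also pin nixpkgs itself (via a flake or a fetchTarball of a specific revision) rather than importing <nixpkgs>.

    # shell.nix: pins the team toolchain for this project
    { pkgs ? import <nixpkgs> {} }:

    pkgs.mkShell {
      buildInputs = [
        pkgs.go_1_17
        pkgs.nodejs-16_x
      ];
    }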
Containerization is also one thing that's awesome about Nix: Instead of saying: "start with image xyz, then run apt-get to install further software, then do some cleanup" you can just say "please build me an image with e.g. jdk11 and openssl".
Nix will put everything into the image that's required for those packages, not more, not less. And for Nix that's easy to do: it just has to put all the packages in a tar file and call it a day (more or less).
At work we use this for Java Spring Boot projects and we started using Nix container images in production as well.
For development we still use a JDK installed through IntelliJ as it is not that easy to teach IntelliJ about a JDK installed through Nix.
But on CI and when running the project from terminal we can use exactly the same JDK as in the container images.