
Lol, you're joking right?

With respect, the Rust community is extremely under-exposed to the majority of actual enterprise environments, and that's ok.

Honestly the best path forward is to just acknowledge it, say "we are working on it" and you'll get your point across better.

Again with respect, because you all do really amazing work, fighting it honestly makes you and the whole community come across as ignorant.




I'm not joking. Did you miss the part where I said the situation is worse for gcc? I do understand that this is a problem. I also do understand that it is not specific to Rust. Besides, convincing IT to allow rustup with an access-controlled toolchain directory shouldn't be a big deal in many cases. If they allowed one compiler, why not two? (Rustup can also be installed without sudo/admin, but I assume you are talking about situations where they want to restrict the executables running, not just globally installed executables). Yes, I know that in some cases they may not want two, but again, in those cases the GCC solution won't work either.
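
For reference, a per-user install without admin rights looks roughly like this (the exact installer URL and flags may vary between rustup versions, so treat this as a sketch):

    # per-user install into $HOME; no sudo/admin needed
    export RUSTUP_HOME="$HOME/.rustup"     # older rustup versions used ~/.multirust
    export CARGO_HOME="$HOME/.cargo"
    curl https://sh.rustup.rs -sSf | sh -s -- -y --no-modify-path
    "$CARGO_HOME/bin/rustup" toolchain install stable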

Note that tools like nvm etc are also quite necessary in their respective spheres for serious development. I was once in a situation where I couldn't even install node at a modern version without nvm.

For clippy, we are working on it, yes. For afl a simple fix exists (write the analog of -Xclang), but that fix will only work nicely if we ship the LLVM headers too, as Clang does (which we can, but it is a decision that can't be taken lightly). So no, we are not working on it. It is an interesting idea, and I will ping the relevant people (there are plans to ship "optional components" that you can obtain with rustup or download directly), but there are no concrete plans.

The general evolution of Rust is towards everything being done through rustup. If you want to install a cross compile toolchain, clippy, get stdlib sources, or a different compiler version, use rustup. All the solutions to the nightly problem involve rustup. I.e. instead of installing nightly and using clippy, you just `rustup install clippy` or something and the relevant clippy binary (or compatible source?) will be downloaded, one which will work even if you are on stable. Basically, rustup will be the thing you install when you want to use rust.
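
As a rough sketch of what that ends up looking like in practice (the exact subcommand names have shifted over time, so treat these as illustrative):

    rustup toolchain install stable                  # pinned compiler + cargo
    rustup target add arm-unknown-linux-gnueabihf    # cross-compilation target
    rustup component add rust-src                    # standard library sources
    rustup run nightly cargo clippy                  # run a tool against another toolchain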

This is good, but as far as enterprise is concerned, you are still installing rustup, not rust. I'm not really sure what's wrong with this, but I'm sure some enterprise shops will have issues with it. For those that don't, the solution of having nightly installed via rustup in a frozen directory is not that much different, just unwieldy.

I have worked in places where the computer is locked down for many teams. But not in a place where it would be locked down in a way that rustc would be okay but rustup would not. I'd love to hear more about such cases and discuss how it can be addressed, but I'd prefer email (username@gmail or @mozilla.com). Thanks :)


> The general evolution of Rust is towards everything being done through rustup.

The general evolution of Perl was towards everything being done through cpan (the client). Decades ago. That doesn't stop companies from rolling their own Perl packages (when not just using what the system shipped with) and CPAN module packages (when it doesn't already exist as a package for the distro, or in a trusted third party repo, or is not the right version your code needs).

In many enterprise environments, when one or more production servers running your code fails, you want clueless intern #23 to be able to spin up a new box, running the default configuration for your distro, and pop the required packages on for your program, copy the relevant config or two that are needed for it to even start, start it up and expect it to "just work". Running rustup and remembering it needs to install a specific version to ensure everything turns out right is not conducive to that.

Now, this is less of a problem since rust is used to build other stand-alone programs, not necessarily to run itself on as many servers, but if we're talking about a QA or testing box that dies, and there's a critical fix waiting to go out, I think it's obvious where this still has some importance.


This is local developer tooling we're talking about, however. It doesn't need to run on those machines.

You want your devs all running the exact same version of the compiler. The last thing you want is someone with a slightly different version than used by everyone else (and production!) who writes some code that compiles and runs fine without problems, but crashes or exhibits a bug in testing, QA, or worst case, production. Having something like that occasionally gum up your development pipeline is a good reason for pegging tool versions (depending on the type of tool).

Having things use rustup is great. Don't assume that's sufficient for certain populations of users (and populations you really may want, such as enterprise). At best, they'll use rustup as a basis to start some other packaging, but even that's risky because rustup is complex, and it's hard to contain all the variables unless you're very careful about exactly which version you use with it, so then you're packaging rustup, and then packaging what it created. Easier at that point to just skip rustup.

It's all about reproducibility, which is king in enterprise. Rustup is about ease of use and easy iteration. Enterprises already sacrifice those on the altar of reproducibility, so it's really not a selling point for them. It's great for everyone else; it just has less to offer for this subgroup.


Right, you can pin to a nightly for tooling.

The solution I proposed does not affect reproducibility, since you would only use the nightly when running local tools.


My point is that it's not always a local tool. Sometimes what you consider a local tool might be something run on a QA or testing box, which needs to be tightly controlled to make sure that the difference between an installed tooling item and a required library that is in production is closely tracked, and the tooling needs to match exactly what devs actually have installed locally. Think CI tool chains, automated testing, etc. A certain percentage of organizations will see "let's use this program that dynamically downloads and builds a program, which we can ask for a specific version if we want" as unacceptable, because either it requires remote sources, it requires specific configuration which isn't tracked automatically or enforced, or both.

It's not about whether it's possible, it's about whether you're meeting the needs of the target audience. I don't believe rustup is meeting those needs in some cases. The important distinction is that possible does not imply acceptable.


I get that, but the solution is still possible and should be acceptable -- keep a One True Copy of a .multirust directory which you install on all testing servers and dev setups. No network necessary. Enterprise setups already do far more complicated things with pypi clones and whatnot.

Again, having rustup installed with two preinstalled and pinned compilers should not be much different, acceptability-wise, from having one preinstalled and pinned compiler.
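
A minimal sketch of that, assuming the preapproved toolchain directory is distributed as an internal archive (archive name and paths are hypothetical):

    # unpack the One True Copy onto a dev or QA box; no network access involved
    sudo tar -C /opt -xf rust-toolchains-approved.tar   # hypothetical archive
    export RUSTUP_HOME=/opt/rust-toolchains              # older rustup used a .multirust dir
    rustup default stable              # pinned stable for real builds
    rustup run nightly cargo afl       # nightly only for peripheral tooling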


I humbly suggest that saying something "should be acceptable" when it's been explained at length that, regardless of whether you think it's acceptable, other people do not think so, is exactly the problem yazaddaruvala was describing above. If you are trying to market to a population, you need to take their needs seriously with a good faith effort. Not understanding is fine. Doing a cost-benefit analysis and determining there's just no resources for what's needed is fine. Telling people that their needs are unjustified doesn't come across very well to those that still feel those needs.

I'm saying it should be acceptable in the parameters presented to me so far. Please explain why two compilers with a switching program is bad, whereas one compiler isn't. I am trying to understand the needs and come up with a solution, but so far they have been presented in bits and pieces. I am working off what has been told to me so far and my own experiences in locked down environments, which admittedly weren't as locked down as the ones you describe. I am also comparing with what other "enterprise-ready" languages require for tooling to try and understand what is "acceptable". You keep telling me that I am not aware of the needs of enterprise users. Fine, I concede that. Please explain them to me. My intention in providing possible "acceptable" solutions is to find out what use case prevents them from working and see how Rust can address that. I am not telling people their needs are unjustified.

...this is also why I implored that it be discussed further over email; while the medium is similar, I am able to discuss more clearly there.


> Please explain why two compilers with a switching program is bad, whereas one compiler isn't.

Having any output that's based on a person remembering to set a configuration is less useful in these situations than having the configuration hard coded. (you don't want someone hunting down the correct config in some company wiki, much less working from memory. At most you want a config copied from a repository of configs).

Having a utility download a binary or source from the internet is less useful in these situations than serving it locally. (You can't control that the remote resource is still there, is the same, is secure).

Knowing that the exact same stuff that works in whatever development environment(s) you have (local, shared, both) works the same as when it's pushed to some further resource (automated build server, automated test harness, etc) is important in preventing bugs and problems.

In the environment I had experience with, we had a mainly Perl ecosystem running on CentOS servers. We created RPM packages for every perl module we pulled in as a dependency for our system or back-end RPC servers. Everything we installed on these boxes got an RPM built if at all possible. We maintained our own yum repository that we installed these packages from. While trivial to run cpan to install a module on these systems, that was not deemed acceptable for anything going into production. Rustup would not have been allowed into production here, nor on some of the shared dev and testing boxes we had, since that wouldn't lead to being able to build the exact same binaries easily and definitively. The absolute last thing you want is a problem that loses data/config, and to find that you're not sure how to reproduce the last build environment exactly.

Does Rust give assurances that things should build correctly with later versions? Yes. Does that really matter when you're talking about hundreds of thousands to millions of dollars? Not without a contract. That's one reason why enterprises use distributions that provide this feature, and build their own packages and deploy them where those distributions fall short.


> Having a utility download a binary or source from the internet is less useful in these situations than serving it locally

I already addressed that; I'm using rustup as a toolchain manager, not for downloading. You can have a .multirust directory that everyone uses and use rustup to switch. You could alternatively set rustup to download from an internal server that contains a reduced set of preapproved binaries only. I'm assuming that the external network is turned off here; it usually is in these situations. If you want a tool that manages toolchains but does not contain code that accesses the internet, that's a more stringent requirement that rustup doesn't satisfy; though I do hope Rust's distro packages work with update-alternatives.
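
For the internal-server variant, rustup can be pointed at a mirror through an environment variable; a sketch, assuming the mirror mimics the official dist layout (hostname is hypothetical):

    # only a vetted subset of toolchains lives on the internal mirror
    export RUSTUP_DIST_SERVER=https://rust-mirror.internal.example   # hypothetical host
    rustup toolchain install stable    # fetched from the mirror, never the public network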

You can also create an rpm for your specific two-compiler setup, of course. That's annoying though.

> Having any output that's based on a person remembering to set a configuration is less useful in these situations than having the configuration hard coded

Set default to stable, and `alias fuzz='rustup run nightly cargo afl'` :)

Again, this is for tooling, you can easily paper over the fact that the tool uses a different compiler.

Rust does have a tool (crater) that runs prospective releases against the entire public ecosystem before releasing. There is an intention to make it possible for private code to take part in this too and provide feedback. But you're right, it is still unreasonable to expect enterprise users to trust that updates will work :) Any stability guarantee isn't enough, since bugs happen.


> I already addressed that; I'm using rustup as a toolchain manager, not for downloading.

I missed that, but see no problem with that.

> You could alternatively set rustup to download from an internal server that contains a reduced set of preapproved binaries only.

It's better, but less ideal from a system maintainer's point of view than distro packages, because multiple package systems (which is essentially what rustup is in that use) are more work. It may be the best overall solution depending on how well multiple rust distro packages can be made to play together. Distro packages to provide the binaries and rustup (or something) to just switch between local binaries would be ideal for the system maintainer that wants control. I understand it's less ideal for the rust developer (as in someone that works on the rust ecosystem, as opposed to someone that works in the rust ecosystem), because the goals are different.

> If you want a tool that manages toolchains but does not contain code that accesses the internet, that's a more stringent requirement that rustup doesn't satisfy

Less that it can't (but that is a reality some places), more that it definitely won't, and someone exploring in it won't make it do so accidentally. Don't let the new dev accidentally muck up the toolchain.

> You can also create an rpm for your specific two-compiler setup, of course. That's annoying though.

Annoying for those wanting to get new rust versions out to people, annoying for devs that want the latest and greatest right away, but only slightly annoying, and easily amortized, by those that need to support the environment (ops, devops, sysadmin, whatever you want to name them).

> Set default to stable, and `alias fuzz='rustup run nightly cargo afl'` :)

If I was tasked with making sure we had some testing system in place that ran some extensive code tests in QA or something, or for a CI system, I wouldn't rely on that. It works, it's simple, but when it breaks, the parts I have to look at to figure out why and how are all over the place, and deep.

Did someone change rustup on the system?

Did someone change the nightly that's used on the system?

Did someone muck up the .multirust?

If any of those happened, what was the state of the prior version? What nightly was used, what did the .multirust look like, did a new nightly catch a whole bunch more stuff that we care about, but aren't ready to deal with right now and is causing our CI system problems?

Theoretically I would build a $orgname-rust-afl RPM, and it would have a $orgname-rust-nightly RPM dependency. $orgname-rust-afl would provide a script that's in the path, called rust-afl-fuzz, which runs against the rust-nightly compiler (directly, without rustup if possible. less components to break), to do the fuzzing. RPM installs are logged, the RPM itself is fully documented on exactly what it is and does, and after doing so once, all devs can easily add the repo to their own dev boxes and get the same setup definitively, and changing the RPM is fairly easy after it's been created. DEB packages shouldn't be much different, and I don't expect other distros to be either.
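
The script shipped in such an RPM could be tiny; a sketch, with hypothetical install paths:

    #!/bin/sh
    # rust-afl-fuzz: run fuzzing against the pinned nightly directly,
    # without going through rustup (less components to break)
    PINNED=/opt/orgname/rust-nightly   # hypothetical prefix installed by the RPM
    PATH="$PINNED/bin:$PATH"
    export PATH
    exec cargo afl "$@"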

What did I get out of this? Peace of mind that almost no matter what happened if my pager went off at 9 PM (or 3 AM!), weird system interactions, automatic updating, stupid config changes, etc weren't likely the cause of the problem, and if worse came to worst, I could redeploy a physical box using kickstart and our repos within an hour or two. When you have a pager strapped to you for a week or two at a time, that stuff matters a lot.

To achieve this we went so far as to develop a list of packages that yum wasn't allowed to automatically update (any service that box was responsible for) while everything else auto-updated. Pending updates for those packages were automatically reported to a mailing list so someone could handle them manually: remove one of the redundant servers from the load balancing system at a time, update and restart the service (if not the server), re-join it to the load balancer, and then move on to the next server, for zero-downtime updates.

The yum stuff was handled through a yum-autoupdate postrun script (a cron job script provided by a CentOS-specific package). yum-autoupdate didn't support this feature, so we added it as a feature of that configuration, made our own RPM to supersede CentOS's version of the package, and then created another RPM for the actual postrun script to be dropped in place, and added them to our default install script (kickstart). We were able to drop support for our version of yum-autoupdate when CentOS accepted our patch.
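
A rough sketch of that kind of postrun hook (package names and addresses are hypothetical):

    #!/bin/sh
    # postrun hook: report, but never apply, updates for the hold list of
    # service-critical packages (names and address are hypothetical)
    HOLD="orgname-rpc-server orgname-api-frontend"
    PENDING=$(yum -q check-update $HOLD 2>/dev/null)
    if [ -n "$PENDING" ]; then
        echo "$PENDING" | mail -s "Held package updates pending" ops@example.com
    fi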

All that's really just a long winded story to illustrate that sysadmins like their sleep. If your tool reduces the perceived reliability of our systems, expect some pushback. If your tooling works well with our engineering practices, or at least we can adapt it easily enough, expect us to fight for you. :)

Rustup is great, but when I was at this job, I would have had little-to-no use for it (besides maybe looking at how it works to figure out how to make an RPM, if one didn't exist to use or use as a base for our own). I know, because that's the situation perlbrew was in with us.

> Again, this is for tooling, you can easily paper over the fact that the tool uses a different compiler.

Sure, but in this scenario papering over is less important than easily discoverable and extremely stable.


> Distro packages to provide the binaries and rustup (or something) to just switch between local binaries would be ideal for the system maintainer that wants control

I think in this case update-alternatives or some such would be better? Not sure if it can be made to work with a complicated setup that includes cross toolchains. But I agree, in essence.
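
For reference, the switching part would look roughly like this with update-alternatives (install prefixes are hypothetical):

    # register two packaged compilers under one /usr/bin/rustc symlink
    sudo update-alternatives --install /usr/bin/rustc rustc /opt/rust-stable/bin/rustc 20
    sudo update-alternatives --install /usr/bin/rustc rustc /opt/rust-nightly/bin/rustc 10
    sudo update-alternatives --config rustc    # choose which one /usr/bin/rustc points at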

But anyway the local rustup repo thing was just an alternate solution, I prefer distributing the .multirust directory.

> Don't let the new dev accidentally muck up the toolchain.

> ...

> If I was tasked with making sure we had some testing system in place that ran some extensive code tests in QA or something, or for a CI system, I wouldn't rely on that. It works, it's simple, but when it breaks, the parts I have to look at to figure out why and how are all over the place, and deep.

Abstracting over rustup fixes this too. Keep .multirust sudo-controlled and readonly, don't make rustup directly usable, and just allow the tooling aliases.
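
Something like this, with illustrative paths:

    # toolchains owned by root, read-only for developers
    sudo chown -R root:root /opt/rust-toolchains
    sudo chmod -R a-w,a+rX /opt/rust-toolchains
    export RUSTUP_HOME=/opt/rust-toolchains
    # expose only the approved entry points, not rustup itself
    alias fuzz='rustup run nightly cargo afl'
    alias lint='rustup run nightly cargo clippy'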

> directly, without rustup if possible. less components to break

Yeah, this is possible. It's pretty trivial to integrate afl-rs directly into a compiler build, perhaps as an option. You can then just build the stable branch of Rust (so that you get backported patches) and use it.

> Rustup is great, but when I was at this job, I would have had little-to-no use for it

Right, as you have said there are other options out there to handle this issue :) Which is good enough.

When you do care about reproducibility but don't want to repackage Rust, rustup with a shared readonly multirust dir can be good enough. Otherwise, if you're willing to repackage rust, that works too :)


Sure, and to be clear, rustup works perfectly fine for my current needs. I just play around with it a bit when I have time, and even if I was to use rust in production, I would use rustup in my current environment (where the dev team consists of me, myself, and I ;) ). Almost all the benefits of controlling the packaging go right out the window when there's very few devs involved and they aren't expected to change.

First, I do want to say, I don't really care about the gcc alternative. I'm not even particularly interested in this tool. I've just found in the past, folks in the Rust community are quite dismissive about "other perspectives", particularly when they differ from "the open development way". I was just trying to urge you not to encourage that dismissive attitude.

Meanwhile, to potentially help you understand how, at least the company I work at functions:

Any time I need a library or tool, I need to:

1. Get approval from the open source team (for each minor version).

2. Start a security approval process (for each minor version).

3. Download the source, successfully modify the build scripts to use the internal build system and dependency management solution (for each minor version).

4. Upload the modified source to the internal dependency management solution (for each minor version).

5. Finish the security approval: Hopefully with a "Successful" ending (for each minor version).

Something like Rustup, in its current state, will not be usable (also it's a bit redundant - we pin compiler/runtime versions in our BuildSystem.config). FYI, even a version of Cargo that can pull dependencies from a non-controlled repo will not get approved (we would need to modify it to panic instead of calling the network). Only a modified version, which allows the internal build system to pull libraries from our internal dependency management solution and put the files in the correct dirs, will be allowed (fortunately I think Cargo is already able to do this). FWIW, this process would be almost identical to what we currently have to do for NodeJS projects and NPM packages.

Hopefully this gives more insight into why I thought it was funny, because:

1. Literally any "uncontrolled" tool that can talk to the network, download/build an executable, is entirely disallowed.

2. Getting approval for a single minor version is a pain. Requesting approval for a "nightly", i.e. an unversioned build, is unimaginable. This would literally be the conversation:

What is the level of support on that version? Well, it's not really versioned, so there is no level of support apart from upgrading to the latest nightly.

How would you update for a security fix? Update to the latest nightly.

Wouldn't that be an arbitrary number of commits, not just the security update, i.e. potentially the difference between minor/major versions? Yes.

So, we cannot minimally update the nightly for a security fix unless we cherry-pick the commit ourselves? Correct.

Also there is no support on the version of nightly we are pinned to? No.

So there may not even be a commit we can cherry-pick, we might have to investigate and make the change ourselves? Correct.

Also, there could potentially be a bug in our pinned version which only our company is exposed to? Well, it's unlikely any other team/company will pin to that exact version, so yes, it's unlikely we would even find out about a potential security vulnerability.

... let's wait for a version with more support.


Ah, I see where you are coming from now. Thanks :)

> First, I do want to say, I don't really care about the gcc alternative. I'm not even particularly interested in this tool. I've just found in the past, folks in the Rust community are quite dismissive about "other perspectives", particularly when they differ from "the open development way". I was just trying to urge you not to encourage that dismissive attitude.

I'm usually not dismissive, since I've felt these pains too, working in a (less locked down, but locked down nevertheless) similar environment. In this case I was dismissive because it is just as bad for GCC, which makes singling out Rust rather unfair. Plugin tools like this always require a bunch of extra work, up to and including recompiling the compiler, which creates additional barriers to adoption.

--------

So, firstly, I wasn't advocating rustup as a tool which downloads the toolchains. For a reproducible build, that's not good. I was advocating it as a tool which manages toolchains and lets you switch with ease (you load it up with the toolchains you need beforehand, and distribute a .multirust directory to your build machines). This lets you use the supported stable version for actual builds, and nightly for peripheral tooling like linting and fuzzing, which doesn't affect the actual build -- just diagnostics. Note that your concerns about bugs don't apply to tooling, since it doesn't touch the outputted binaries. This may lessen the pain of getting approval, but still, a better solution should exist :)

If you are going to modify and build Rust sources, this isn't a problem anyway. You can get the sources for the latest stable (they're on a branch somewhere) and tweak them so that this tool works (one line change to code), or so that the compiler allows you to use unstable features (make/configure option). This is effectively a "nightly" build on stable sources, sources that will be getting backport fixes for bugs. Note that the APIs used by this tool will not be stable and may change (most likely they won't) if you update to a new compiler.

-----

The above solution is less viable for clippy, since clippy depends on tons of extremely unstable compiler internals (whereas this tool depends on one API call which could change but probably won't) which will break every compiler upgrade. But as I said, there is a fix being planned for clippy. For most users the fix will just involve using rustup. For enterprise users I imagine the possible fixes will either involve downloading the official clippy for their version of the stable compiler once, or, if you are compiling from sources, compiling Rust with clippy included (which probably won't be hard since building clippy will be part of Rust's rustup toolchain packaging anyway, so the steps will be the same and probably just a make/configure option).

> fortunately I think Cargo is already able to do this

Not exactly. Cargo so far is fine with local path dependencies, but it doesn't yet have support for a locked-down internal mirror repo with the relevant sources, like many people have for PyPI or npm. That's something enterprise users do want, since vendoring in-tree is not always the solution. I don't know if it's being actively worked on, but it is on the list of things that Rust definitely wants.
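
What works already is plain path dependencies, which an internal build system could rewrite to point at vetted local copies; a sketch with hypothetical crate names and paths:

    # point a dependency at an internally approved copy instead of crates.io
    # (crate name and path are hypothetical)
    cat >> Cargo.toml <<'EOF'
    [dependencies]
    serde = { path = "/opt/orgname/approved-crates/serde" }
    EOF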


> 1. Get approval from the open source team (for each minor version).

> 2. Start a security approval process (for each minor version).

> [etc...]

Out of curiosity, do you think having a laborious process to adopt even minor versions actually improves security?

Because to me, since in most software exploitable bugs are often fixed without being explicitly marked as such or CVE-ed, this seems like basically a recipe for insecurity.

I could be wrong. And I know that even if you disagreed with the process, that wouldn't mean that you didn't have to follow it, or that many, many developers in a multitude of environments similar to yours don't have to follow similar processes. But I'm curious what you think.

