I don't want rust in the kernel. As a small time developer making a hardware product, the investment to understand the kernel and how to build drivers is already huge. I have no interest in learning rust, and it will make it harder for people with limited resources to use linux in their projects.
To be honest, I don't think Linux being the "One True Kernel" is really worthwhile, and the benefits of adding Rust to the kernel outweigh the downsides.
There are plenty of other kernels available for embedded work, and my understanding of Rust in the Linux kernel is that it's mostly drivers, which seems like it would be optional for your use case.
That said, I've not done much embedded work, so I may be misunderstanding something.
And specifically, one of the ways in which it is complicated is that it's mostly done in a language that doesn't natively enforce memory-safe constructs, forcing developers to walk a best-practices and tooling tightrope to avoid writing code that breaks.
The OP might be OK with a kernel that was just in Rust, but having a kernel with a mix of Rust and C adds complexity, both in having to master two languages to understand it well and in having a more complex build system.
What's so bad in having to know two languages for developing drivers? Especially when knowing the second one will rid you of all memory safety bugs just because your code compiles.
And calling Rust's build system complex is laughable. Issuing `cargo build` could not be simpler (and it handles automatic downloading of dependencies, too).
There is also so much about Rust that makes it nearly ideal for writing a driver.
For example, how many bytes are in an `int`? What is the endianness of your 32-bit `int`?
Rust puts up front a lot of the information crucial for cross platform development of hardware drivers.
Rust has few undefined edges, which is exactly where C gets into trouble: C, by design, has a bunch of them.
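To make that concrete, here's a minimal sketch in plain Rust (the device register layout is made up): the integer widths are fixed by the type names, and the byte order is spelled out at every conversion instead of depending on the host CPU.

```rust
// Hypothetical device registers: widths live in the type names, and
// endianness is explicit at each conversion rather than platform-defined.
fn encode_le_register(value: u32) -> [u8; 4] {
    value.to_le_bytes() // always little-endian, whatever the host is
}

fn decode_le_register(raw: [u8; 4]) -> u32 {
    u32::from_le_bytes(raw)
}

fn decode_be_register(raw: [u8; 4]) -> u32 {
    u32::from_be_bytes(raw) // a big-endian device gets said out loud in the code
}

fn main() {
    assert_eq!(encode_le_register(0x1122_3344), [0x44, 0x33, 0x22, 0x11]);
    assert_eq!(decode_le_register([0x44, 0x33, 0x22, 0x11]), 0x1122_3344);
    assert_eq!(decode_be_register([0x11, 0x22, 0x33, 0x44]), 0x1122_3344);
}
```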
Honestly, the biggest downside to Rust in the kernel is that Rust is backed by the LLVM, which doesn't have as many supported targets as GCC does. (And, AFAIK, GCC is generally the preferred compiler for the kernel).
> Honestly, the biggest downside to Rust in the kernel is that Rust is backed by the LLVM, which doesn't have as many supported targets as GCC does. (And, AFAIK, GCC is generally the preferred compiler for the kernel).
Fortunately, there are now two competing approaches to making GCC a viable compiler for Rust[1], and they're moving really fast. I think the LLVM-only situation won't get in the way for more than a few more years.
Rust and Linux both use the same name for, say, the 32-bit unsigned integer type: they both call it u32. This is of course a coincidence (and not an unlikely one), but it probably doesn't hurt for Linux people getting familiar with Rust.
it is already complicated. so i don’t want to learn another language. i want to become BETTER at using what is there, not have a language forced onto me. i want to make my tool set smaller, not learn more tools.
also, being able to use linux in an embedded system as a resource constrained developer is hugely important today. adding more tools means more dev cost.
Sounds like it's time to exit programming, buddy, because the world has some very bad news for you. Yeah, you'll have to keep learning tools. It's included in the big paycheck.
If everyone had your attitude, humanity would have never left the trees. There are all sorts of legitimate reasons to take it slow on adding Rust to the kernel and adopting Rust generally. The language is far from perfect.
But "I don't want to learn something new"? No, that's not a legitimate reason.
But Christ almighty, you don't get to block progress of all humanity (and Linux is a humanity scale project now) because you're too lazy to learn some new syntax. Memory safety is important. Memory safety bugs cause billions of dollars of damage every single year. The project of solving them is too important to get tangled up in the weeds of people who don't have "resources" to RTFM.
If Rust would have just used a C-style syntax instead of this wacky quirky stuff that's needlessly hard to read, it would probably be much more widely adopted. I gave it a shot myself but I was too turned off by the friction of getting used to the syntax.
why `fn`? why switch the order around? `fn` and `u32` and all that as if keystrokes are such a scarce resource, yet `u32 f(u32 a)` saves so much more than `fn f(a: u32) -> u32`.
Closures use pipes instead of any of the existing syntaxes we're used to.
Why couldn't they just use the familiar concept of classes instead of whatever is going on with their strange NIH traits? NIH describes the whole language, every little thing has to be different somehow.
C with Rust's safety guarantees, OOP style classes, and a few more bells and whistles could have taken over the world a lot faster, but instead we have something with a high friction to learning that will be adopted much more slowly by either programmers with a fresh start to whom every language is equally weird, or the relative minority of experienced programmers with the will and free time to push past the friction.
> There are good reasons for it, and I'm certain you will gain a better understanding as you gain more experience as a developer.
The people pointing out problems with Rust have more experience as developers in their little fingers than Rust's internet fan club has in their whole bodies combined.
In a mature, intelligent discussion, if you have a point, you make it. On HN, if you have a point, you say "there are reasons; I'm not going to give them; also, you're a newbie".
It's comments like yours that make HN unbearable nowadays.
Interesting perspective. FWIW a lot of modern languages now put the type after the name for a good reason: it's a lot easier to parse. Plus, for a reader, at least in Rust, wherever you see a `:` you know that a type is coming after. For the other stuff, I don't know, I got used to it pretty quickly. Not that much of a big deal in my opinion.
The "symbol: Type" notation is the "scientific" one for programming languages since "forever".
It's not that "modern" languages are doing something new. Everybody was doing it like that, besides the "C language family". They're the outlier, not the other way around.
To answer the first paragraph (as I understand it; I'm no expert here):
Parsing! It is faster to parse and probably uses less memory too. This is not only important for compile times but also for editors with code suggestions.
C has a relatively long compile time. If you look at golang, which has a goal of a fast compile time, you will see similar syntax.
PS.
Note that Ken is one of the fathers of both C and golang.
Parsing is ridiculously fast nowadays. Compilers spend far more time in phases after AST construction (like register allocation, SSA transforms, various vectorization passes) than they do constructing their ASTs. Parsing speed is a nonsense argument these days. It's the same level of rigor as painting a car red to go faster.
> If you look at golang, which has a goal of a fast compile time, you will see similar syntax.
Rust critics: "Some of Rust's language choices are clearly inspired by Go and may not have been good ideas"
Rust people: "We didn't copy Go! We arrived at our syntax independently, through rigorous analysis!"
Also Rust people: "If you look at golang, which has a goal of a fast compile time, you will see similar syntax."
> Note that Ken is one of the fathers of both C and golang.
Creating one of these things makes Ken imperfect. Creating both of them makes Ken a menace.
I don't think Rust is particularly weird. If you want, you can pretty much write in an imperative style, with objects, etc. It's just got some practical thinking thrown on top of that: i.e. make everything immutable by default, use tooling to ensure people aren't holding on to dangling pointers, etc.
Some of these things have tradeoffs, but I think once you get to grips with what the borrow checker is asking of you (doesn't take long), it's a pretty easy language - certainly simpler than C[0], far simpler than C++.
[0]: Non-buggy C, that is. I wrote a pretty basic program in C the other day (I'm not a C programmer) and I literally spent 80% of the time looking at output from -fsanitize=address to catch stupid off-by-one errors.
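As a rough illustration of what the borrow checker is asking (a toy sketch, not kernel code), here's the kind of thing it rejects before the program ever runs:

```rust
fn main() {
    let x = vec![1, 2, 3];
    // x.push(4);            // error: `x` is not declared `mut`

    let mut v = vec![1, 2, 3];
    let first = &v[0];       // immutable borrow of `v`
    // v.push(4);            // error: can't mutate `v` while `first` borrows it
                             // (a push may reallocate and leave `first` dangling)
    println!("{first}");
}
```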
> Closures use pipes instead of any of the existing syntaxes we're used to.
Who is “we”? Pipes are one of the existing closure syntaxes I was used to pre-Rust. And, I mean, I’m probably not alone: lots of current and ex-Rubyists around.
Rust didn't invent any of these syntaxes. Pascal was invented around the same time as C and had functions that start with "function" and types that use the "binding: type" notation. And lots of languages (Ada and ML come to mind) followed in Pascal's footsteps that predate Rust.
It has benefits. It's easier to parse for both humans and machines and it allows for easier type inference.
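A small sketch of how uniform that ends up being in Rust: the `name: Type` shape is the same in bindings, parameters, and struct fields, and the annotation can be dropped wherever the compiler can infer it.

```rust
struct Point {
    x: f64, // struct field: name: Type
    y: f64,
}

fn length(p: &Point) -> f64 { // parameter: name: Type
    (p.x * p.x + p.y * p.y).sqrt()
}

fn main() {
    let p: Point = Point { x: 3.0, y: 4.0 }; // binding: name: Type
    let l = length(&p);                      // annotation omitted, inferred as f64
    println!("{l}");
}
```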
>Closures use pipes instead of any of the existing syntaxes we're used to.
Closures in Ruby use pipes. It's a common syntax.
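For anyone who hasn't seen it, the Rust spelling looks like this (toy example):

```rust
fn main() {
    let double = |n: u32| n * 2;              // parameters go between the pipes
    let sum: u32 = (1..=4).map(double).sum(); // closures pass around like any other value
    assert_eq!(sum, 20);                      // 2 + 4 + 6 + 8
}
```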
>Why couldn't they just use the familiar concept of classes instead of whatever is going on with their strange NIH traits? NIH describes the whole language, every little thing has to be different somehow.
Needless to say, traits aren't NIH either, and there are good reasons for avoiding class-style polymorphism in a language like Rust.
>I once attended a Java user group meeting where James Gosling (Java's inventor) was the featured speaker. During the memorable Q&A session, someone asked him: "If you could do Java over again, what would you change?" "I'd leave out classes," he replied. After the laughter died down, he explained that the real problem wasn't classes per se, but rather implementation inheritance (the extends relationship). Interface inheritance (the implements relationship) is preferable. You should avoid implementation inheritance whenever possible
And clever use of blanket implementations makes traits even nicer for this purpose.
For example, suppose you just invented the type Qux and you always know how to turn any Baz into a Qux.
Rust's standard library provides four interesting conversion traits that may be applicable for different purposes, From, Into, TryFrom, and TryInto. Oh no, implementing all of them sounds like a lot of work and who knows which anybody would need?
Just implement From<Baz> for Qux
A blanket implementation of Into<U> for T when From<T> for U gets you Into; then a blanket implementation of TryFrom<T> for U when Into<U> for T gives you TryFrom; and finally a blanket implementation of TryInto<U> for T when TryFrom<T> for U gets you TryInto as well.
So you only wrote one implementation but anybody who needs any of these four conversions gets the one they needed.
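A minimal sketch of that chain, using the hypothetical Baz and Qux types from above:

```rust
use std::convert::{TryFrom, TryInto};

struct Baz(u32);
struct Qux(u64);

// The only implementation we write by hand.
impl From<Baz> for Qux {
    fn from(b: Baz) -> Qux {
        Qux(u64::from(b.0))
    }
}

fn main() {
    let _q = Qux::from(Baz(1));               // From: what we implemented
    let _q: Qux = Baz(2).into();              // Into: blanket impl over From
    let _q = Qux::try_from(Baz(3)).unwrap();  // TryFrom: blanket impl over Into
    let _q: Qux = Baz(4).try_into().unwrap(); // TryInto: blanket impl over TryFrom
}
```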
Another more obvious example is the blanket implementation of IntoIterator for any Iterator. This makes it easy when you want to be passed a bunch of Stuff you're going to iterate over: just give your parameter a trait bound saying it has to be IntoIterator with Item = Stuff. You needn't care whether the user of your function passes you an array, a container type, or an iterator, since all of them are IntoIterator.
It would have been so easy to overlook this, and end up with a language where people find themselves collecting up iterators into containers over and over to make more iterators, or else constantly turning containers into iterators to call functions.
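A short sketch of that bound (the function name is made up):

```rust
// Accept anything that can be turned into an iterator of u32s:
// a Vec, an array, a range, a map adapter, whatever.
fn total(stuff: impl IntoIterator<Item = u32>) -> u32 {
    stuff.into_iter().sum()
}

fn main() {
    assert_eq!(total(vec![1, 2, 3]), 6);            // a container
    assert_eq!(total([4, 5, 6]), 15);               // an array
    assert_eq!(total((1..=3).map(|n| n * 10)), 60); // an iterator, via the blanket impl
}
```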
First up let's be absolutely clear - there aren't many (any?) "amazing new Rust inventions". Rust isn't a proof of concept, it's the mass market product and so it just stole clever ideas wholesale from languages we don't care about, and in some cases haven't even heard of. The various papers saying C++ should do things that Rust did aren't because Rust invented those things but because it has popularized them and now people writing C++ papers want them.
It is true that SFINAE means you can write very general templates in C++ and just not worry about whether they compile with any particular parameters, which is superficially similar to a blanket implementation of a Rust trait. But, because Rust's traits have semantics, not just syntax, the actual effect is different.
And that makes the blanket implementations very comfortable in Rust, whereas the equivalent pile of enable_if templates in C++ would be trouble.
Remember that C++'s template requirements are explicitly labelled as having unenforced semantic value. They're for documentation: a human reading them can see, for example, that you expect this parameter type to have full equivalence. The compiler, however, only cares about syntax, and the syntax just says the type has an equals operator.
This is a great contrast with Rust, whose traits have semantics and so core::cmp::Eq and core::cmp::PartialEq are different traits and the compiler cares which one you asked for even though the syntax is the same.
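A toy illustration of that distinction: f64 has the syntax of equality but not the semantics of a full equivalence relation (NaN != NaN), so it implements PartialEq but not Eq, and the compiler holds you to whichever bound you asked for.

```rust
// Requires only PartialEq.
fn same<T: PartialEq>(a: &T, b: &T) -> bool {
    a == b
}

// Requires Eq: the type must have explicitly claimed full equivalence.
fn definitely_same<T: Eq>(a: &T, b: &T) -> bool {
    a == b
}

fn main() {
    assert!(!same(&f64::NAN, &f64::NAN));     // fine: f64 is PartialEq
    assert!(definitely_same(&1_u32, &1_u32)); // fine: u32 is Eq

    // This would not compile: f64 deliberately does not implement Eq,
    // even though `==` exists for it, because NaN breaks reflexivity.
    // definitely_same(&1.0_f64, &1.0_f64);
}
```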
> But, because Rust's traits have semantics, not just syntax, the actual effect is different.
Don't concepts give C++ the same semantic power? What am I missing? Concepts give C++ generic code exactly the same nominative constraints you're describing, yes?
Besides: even without concepts, you can use tag types and traits to similar effect. Not everything needs to be duck-typed, even in C++17!
One of my main problems with using Rust instead of C++ is losing metaprogramming power. I really like the template system.
> Don't concepts give C++ the same semantic power?
Not really. But, it's on purpose, and, in their context it makes sense. These aren't idiots, they know there are however many bajillion lines of C++ code and if Concepts doesn't work with that code it's useless.
In Rust, you have to explicitly write implementations of traits. If the author of stupid::Bicycle doesn't implement nasty::Smells and the author of nasty::Smells didn't implement it for stupid::Bicycle either, then a stupid::Bicycle just isn't nasty::Smells.
Even "marker traits" where the implementation is empty, are still explicit. If you have a type that implements Clone but you didn't say it is Copy, then it isn't Copy, even though maybe the type would qualify as Copy if you had asked. Rust will always move this type, because you didn't say copying it was OK.
This is where the semantic strength comes from, and it was fine in Rust because Rust "always" did this. There isn't a bajillion lines of Rust code that doesn't know about traits, and so it's OK to require this.
In C++ they realistically couldn't do that. So, next best thing is to say that a Concept is modelled on syntax. You mentioned duck-typing, and that's what concepts are assumed to be for.
Now, of course you can use tag types. For a home-grown Concept it might actually be viable: if you've only got two users and they both know you personally, you can just tell them to use the tag type and they'll both add it. But for the C++ standard library Concepts that was not practical. So that's not how Concepts are being taught or used in practice, unlike Rust's traits.
I agree that Rust differs needlessly from previous curly-brace languages. Rust could have achieved its core goals --- memory safety and robust concurrency --- as a dialect of C++ instead of as a whole new language with the latest fashion sense (*cough* Go *cough*) thrown in.
It's also a shame that your comment is being downvoted into oblivion. I remember when HN was for discussion, not for karma-enforced orthodoxy.
> could have achieved its core goals --- memory safety and robust concurrency --- as a dialect of C++ instead
The C++ standard committee disagrees with you - and not for lack of trying, either. The C++ Core Guidelines are the best they could come up with, and unlike the safe subset of Rust, they are not rigorously enforced.
Are you talking about [1]? All that paper says is that you can't add a borrow checker to C++ as a library feature. You can add such a thing as a language feature.
I hate that paper. People misrepresent it all the time.
Nevertheless, the Core Guidelines were not an attempt to add lifetimes to C++ at a syntactic level. AFAIK, that's never been tried, and I think the effort would be a valuable complement to Rust.
> If Rust would have just used a C-style syntax instead of this wacky quirky stuff that's needlessly hard to read
A lot of the "wacky, quirky" stuff is to avoid the clusterfuck that is C and C++ parsing.
People want IDEs. They also want their IDE to do smart things. Having to compile the universe to get the IDE to do something smart doesn't work very well.
Take a "simple" task like "Is this is a variable name? That is a type definition?". C and C++ have to practically do a full compile to figure that out (see: typedef and templates). Rust, on the other hand, just has to read a few tokens and it knows definitively the answer to that question.
And, you may call this stuff "quirky", but practically all the languages designed in the last 10 years have converged on goals and syntaxes similar to Rust's. Language designers know that people don't like "different from C", but they do it anyway.
Think about whether your arguments are stronger than those of the people designing new programming languages that they want adopted.
> Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans
> I don't want rust in the kernel. As a small time developer making a hardware product, the investment to understand the kernel and how to build drivers is already huge.
There's a crucial part missing: the investment [...] to build (memory) unsafe drivers.
I find it very troubling when memory unsafe language programmers don't weigh memory safety at all in their views.
(note that this doesn't necessarily refer to Rust; there are people who swear that one can also build drivers in Golang, so assuming that is realistic, the same reasoning applies)
Ada maybe isn't very well adapted to kernel code? From what I know of Ada, doesn't it get its memory safety from either a GC or not freeing memory entirely? That is -- ownership rules are pretty new to Ada/SPARK.
C-style syntax and community interest also favor Rust.
This spec argument has always seemed like a red-herring to me. Can you explain why the Ada spec would be a significant factor in this instance?
>Ada maybe isn't very well adapted to kernel code? From what I know of Ada, doesn't it get its memory safety from either a GC or not freeing memory entirely? That is -- ownership rules are pretty new to Ada/SPARK.
I think GC is optional with Ada; as far as I know, the memory safety comes from raising exceptions (or refusing to compile) when it detects memory-unsafe operations (array bounds checking, etc.).
>C-style syntax and community interest also favor Rust.
That's fair, C-like languages are instantly familiar to software people, and Rust seems to have a more hip image than old-man fuddy-duddy Ada.
>This spec argument has always seemed like a red-herring to me. Can you explain why the Ada spec would be a significant factor in this instance?
I'm not arguing for Ada over Rust (I'm not a software guy) so I don't mean it as a red herring, but wouldn't a suite of static verification tools and a formally verified compiler require a spec to be tested against?
Re: spec, not if the spec is implementation (and project's values) defined. I mean -- the Rust project has a ridiculous # of tests which define language behavior without having an ISO standard. Yes, implementation defined behavior in C is usually a place where C compiler engineers trade safety for speed. Yes, that's usually a bad trade. However, I'd look at this situation re: Rust vs. C in a different way though -- the Rust project's values are why defining a standard is less important.
I think a spec is often a red herring because a bunch of folks living in the slum of C, when asked if they would all like to move into Rust's nice 3 bedroom by the park, instead always seem to ask: Wouldn't it be better to form a committee about building us a cathedral? Ada might be better. Someone should try it, but until then I'll take Rust.
It's always fun to read people's drive-by opinions about the only decent language, IMO. Ada is highly compatible with C code. For dynamic allocations with "ownership" there is the concept of memory pools, which CAN use a GC if that's how you choose to implement the pool allocator.
Haha, I'm sorry if it seemed like I ever knew what I was talking about. I thought asking questions would make it clear that I don't.
I remain interested in the potential advantages of Ada compared to Rust for Linux kernel development, if you would care to point me in the right direction.
Honestly, if Ada hasn't caught on in the intervening 30 years (despite a lengthy government-issued monopoly) then it probably isn't a good choice. A more mature Rust would be nice, but it's gravy at this point.
(caveat: I've read a bit about Ada but don't have any deep familiarity with it)
Ada is safe compared to the other languages of its day, but I'm not sure it compares favorably with Rust. IIRC it does have more of a focus on safe arithmetic rather than memory safety.
The focus of Ada is not on safe arithmetic only, it's on functional safety at large: the code does what it is specified to do and nothing else.
Ada shines in its specification power, in how developers can express what the code is supposed to do (strong typing, ranges, contracts, invariants, generics, etc.). You can then either check your code at "compile time" with SPARK [1], which provides a mathematical proof that your code follows the specification. SPARK also proves that you don't have buffer overflows or division by zero, for instance. Or you can have checks inserted in the run-time code, which greatly improves the benefits of testing, as every deviation from the specification will be detected, not only the ones you decided to check in your tests.
In terms of memory safety, Ada always had an edge on C/C++ because of the lower usage of pointers (see parameter modes [2]) and the emphasis on stack allocation. Now with the introduction of ownership in SPARK it's getting on par with Rust on that topic.
Note that Rust's safety proposition is different than Go's in a way that's significant for Linux beyond the obvious.
(Safe) Rust is Data Race Free. You can mistakenly write a data race in Go, just as you could in C, and, even though under normal circumstances Go feels like it's memory safe, once you've got a data race all bets are off and you potentially no longer have memory safety. In (safe) Rust you can't write a data race, so, this can't happen.
If you write simple serial programs, you don't care, your programs never have data races because they have no concurrency and therefore can't possibly have concurrent memory writes. But obviously Linux is not a serial program.
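As a rough sketch of what gets enforced (toy code, not kernel code): unsynchronized concurrent mutation simply doesn't compile, and the version that does compile has to state the sharing and locking in its types.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The racy version is rejected outright: two spawned threads cannot both
    // capture `counter` mutably, so the data race never gets past the compiler.
    //
    // let mut counter = 0;
    // thread::spawn(|| counter += 1);
    // thread::spawn(|| counter += 1);

    // The compiling version makes shared ownership and locking explicit.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 2);
}
```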
It's also important to note that for most systems security-critical memory safety is a REALLY big deal, but safety from crashes due to data races is generally not a big deal. Rust has trade-offs to accomplish this.
We've proven time and time again that we can't write safe C/C++ code from a security perspective. I don't think we've proven that crashing now and then to fix our data races is a bad outcome. There are always bugs to be fixed.
The consequence of a data race isn't (necessarily) limited to "crashing now and then".
The big problem is SC/DRF (sequential consistency for data-race-free programs). Most languages promise (at best) that your program is sequentially consistent if it is data race free. And it turns out we can't debug non-trivial programs unless they're sequentially consistent, because it hurts our brains too much.
Causality is something we're all used to in the real world. The man hits the ball with the bat, then the ball's trajectory changes, because it struck the bat. Sequentially consistent programs are the same. X was five, then we doubled it, now it's ten.
But without sequential consistency that's not how your program works at all. The ball goes flying off over the wall around the field, then the pitcher throws the ball, the ball lands in the catcher's glove, the umpire calls it, the batter hits the ball, and yet at the same time misses the ball... We doubled X, now it's five, but before we doubled it that was nine or sixteen? What?
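Here's the classic store-buffering example as a sketch under Rust's (C++-derived) memory model; the surprising outcome is permitted by the model, though any particular run on any particular machine may never show it.

```rust
use std::sync::atomic::{AtomicU32, Ordering::Relaxed};
use std::thread;

static X: AtomicU32 = AtomicU32::new(0);
static Y: AtomicU32 = AtomicU32::new(0);

fn main() {
    let a = thread::spawn(|| {
        X.store(1, Relaxed);
        Y.load(Relaxed)
    });
    let b = thread::spawn(|| {
        Y.store(1, Relaxed);
        X.load(Relaxed)
    });
    let (ra, rb) = (a.join().unwrap(), b.join().unwrap());
    // Under sequential consistency, one of the stores happens first, so at
    // least one thread must observe the other's write and (0, 0) is
    // impossible. With Relaxed ordering the model allows both loads to
    // return 0 -- the "effect before cause" feeling described above.
    println!("({ra}, {rb})");
}
```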
> It's also important to note that for most systems security-critical memory safety is a REALLY big deal, but safety from crashes due to data races is generally not a big deal.
It must be noted, though, that data races can lead to memory safety issues in languages which are otherwise free of them, Go being a prime example of that: the built-in hashmap is not thread-safe, and does exhibit memory corruption issues under unprotected concurrent writes and accesses (only concurrent unprotected reads are OK).
There is a “best effort detection” of concurrent misuse, but it’s just that.
read the kernel docs and follow best practices. the code is reviewed by thousands of developers, so chances are what makes it into the kernel tree is pretty good. i don’t need rust. thanks!
Anecdotes from kernel maintainers say that memory-unsafe code gets past reviewers all the time. There was even that ethically questionable study that tried, and succeeded, in getting vulnerabilities inserted into the kernel.
First, I've generalized to memory-safe languages, not necessarily Rust.
Second, this is exactly the overconfidence that I find troubling. Vulnerabilities caused by memory unsafety are real, pervasive, and inevitable.
A few resources you should look at:
- the official reports/post by Mozilla and Microsoft about the percentage of vulnerabilities caused by memory unsafety
- the mailing lists with the Linux kernel issues (I don't imply any severity; just the amount)
There are many summarized posts for the lazier, and I'm talking about high-profile authors (or authors writing on behalf of high-profile companies). Here's a semi-random one:
I was emphatically not suggesting that the needs of hardware devs should be ignored, just that those needs should not constrict kernel development to the detriment of other concerns.
The problem is, it isn't "needs of hardware devs" vs "rust". The kernel exists to abstract over hardware. Almost everything in the kernel* is written by "hardware devs". So any benefit Rust brings will be FOR the benefit of hardware devs.
* - There is some cryptography stuff in the kernel (that probably doesn't need to be in there anyway) and that isn't written by hardware devs I suppose.
Rust in itself is easier to understand and reason about than the equivalent kernel C code in my experience.
The actual cognitive cost does not come from Rust itself but from mixing the two memory models and the interop between them, which in my experience is probably a bad idea in general.
Should just boil the ocean and write the entire Linux kernel in Rust only.