The problem I have with Nim is that it misses both the simplicity of Go and the extra safety at high performance of Rust. At least I personally don't see the need for a middle-of-the-road language.
I wish them however the best of luck. Compiling down to C means being able to leverage a lot of mature tools, and if they manage to simplify a lot of the language they may rival Go for projects when you need high performance but desire to retain high productivity regardless of how big and diverse your dev team is.
If Nim continues at the same pace, it may displace Python.
It is "as fun as" Python to start with, it has the knobs that made python popular in many niches. It has official JS, C, C++ and Objective-C backends (with perfectly compatible and natively fast FFI in each case). It compiles down to small, efficient executables like Go does.
It has some of the best metaprogramming facilities (in a non-lisp language), which makes it possible to introduce "source native" notation for e.g. XML and JSON.
It also has an LLVM backend, though it is not yet complete.
Nim is a big and complicated language, but so is Python. The size of a language itself is not a problem, as Python shows. Unlike C++, you don't have to be familiar with every small metaclass/descriptor/base-class-lookup issue to use the language. I believe Nim shares Python's "it's ok to ignore the details until you need them" feature.
Perhaps it would be more informative to speculate on the reasons why Python would be displaced by any new language, rather than why any given language might displace Python.
Because although it got a lot better with time, it's still a lot slower than C for medium to big projects.
Displace is probably not the right term, as Python, like many other languages, has its strengths. I would rather think of using the right tool for the job, and to me, for medium to big projects where performance is important, Python doesn't seem like the best choice.
> If Nim continues at the same pace, it may displace Python.
Um, no.
Look at how long it took Python to displace Perl. And Perl was actively shooting itself in the brain for years, or that probably wouldn't have happened.
Python is far more embedded in certain areas now than Perl ever was.
I can't imagine a language even starting to displace Python without a good REPL. Not just a hacked-together "lol does this work" REPL, something you can rely on, and preferably something you can use as a Jupyter backend.
>The problem I have with Nim is that it misses both the simplicity of Go and the extra safety at high performance of Rust. At least I personally don't see the need for a middle-of-the-road language.
I'd very much like a language that is more expressive and better designed than Go, without the ceremony and straitjacket of Rust.
That could be D or Nim or some schemes, for example, but I also want a community and tooling.
Yeah, OCaml is as close to the ideal language for me as I've found so far (it's still pretty far from my ideal, but it's closer than everything else I've tried). But it definitely has tooling issues. Tooling issues have pushed me away from it for more than one project.
By far the biggest issue is dependency management/build tooling. Opam was a huge improvement in package management, so things are better than they used to be. But OCaml is missing something like Rust's Cargo: a tool that makes creating and building projects simple and easy, and that lets devs avoid wasting time on build systems. OCaml also has issues with Windows support. OCaml works on Windows, but currently getting Opam to work on Windows requires Cygwin.
>I'd very much like a language that is more expressive and better designed than Go, without the ceremony and straitjacket of Rust.
The problem is this mentality is one that produces unmaintainable software in the long run.
In the case of Go, the simplicity of the language can be frustrating when it comes to expressing one's ideas in "clever" ways; but the code you write may be easily grokked by a junior developer that started with Go last week. On the other hand, things like metaprogramming allow you to build a castle of abstractions which you yourself will have trouble navigating after six months without touching the codebase, and which will be painful legacy code for any other engineer. Sure, you don't have to use all of it at first, or ever, but that's what I mean by "in the long run": either you cave in or someone else in your team does.
It's hard to say Go is poorly designed, because it does what it was meant to do extraordinarily well, even if there are a few things that could be further simplified and, yes, things like generics (where the complexity tradeoff is not that high given their ubiquitous nature) that could be added (and this discussion seems to be back on the table according to Russ Cox).
As for Rust, without all the ceremony you cannot summon the demons that grant you perfect memory safety without paying for GC as well as absence of data races. Nim simply cannot provide this despite all the complexity it comes with, and really, if you're okay with that much complexity you might as well just stick with C++ and the gargantuan momentum behind it. The committee is making strides towards making the language more palatable, so the appeal of Nim diminishes with each new release.
>The problem is this mentality is one that produces unmaintainable software in the long run.
What mentality? Wanting more expressive power than Go?
Then Haskell for one, or even OCaml and Ada, must produce some very unmaintainable code...
>In the case of Go, the simplicity of the language can be frustrating when it comes to expressing one's ideas in "clever" ways; but the code you write may be easily grokked by a junior developer that started with Go last week.
Only at the micro level (directly checking some lines of code). Grokking what 10 lines do easily doesn't mean you'll more easily grok the overall design, especially if lack of expressiveness has you writing 1000 lines (copy-pasting, e.g., for "generic" code) of what could take 100 in a more expressive language (and still be perfectly readable).
In fact, the main, surefire correlation that studies of coding have found for the number of bugs is, first and foremost, lines of code (regardless of language).
>It's hard to say Go is poorly designed, because it does what it was meant to do extraordinarily well
It also violates the law of least surprise and orthogonality of concepts several times (e.g. special casing a generic "make", etc).
>As for Rust, without all the ceremony you cannot summon the demons that grant you perfect memory safety without paying for GC as well as absence of data races.
Sure, but as I don't care for GC pauses or perfect memory safety, I don't have much of a use case for it, hence wanting a middle ground between Go and Rust.
> but as I don't care for GC pauses or perfect memory safety
I personally don't need to care about either of these things to find Rust a nice programming language to use for general purpose stuff. Its type system and tooling are quite pleasant.
f# is probably your best bet. ocaml and d are the closest-to-mainstream ones that don't involve a vm, but neither of them has spectacular tooling just yet; f# (on windows) at the least inherits some of the .net tooling and is getting an increasing amount of support from microsoft.
i'm personally enjoying ocaml a lot, but that's because i'm on linux; under windows i'd probably have gone with f# instead.
Unfortunately, the problem with Nim (as well as D) is the lack of big player support jump starting public opinion.
Languages are inherently vendor-locked* (there isn't any foolproof way to translate code from language A to language B), so I won't write code in a language which may disappear tomorrow or drop support for my system.
* It's true that there's no real vendor lock-in with open-source code, but unless there's a large community around the language which can pick up where the inventor dropped off (like C), you're on your own, and unlike small projects, languages have huge codebases which also require specialized knowledge to maintain.
Nim's compile and execution times are closer to C/C++'s than most languages', maybe in the top 10 percent.
Its expressiveness is, arguably, close to Python.
It compiles to C, Objective-C, and JavaScript, which many other languages do not.
The C target runs even on Arduino boards.
I don't see the appeal of Nim. We have the choice of C++14, Rust and Go for well-supported, robust languages with different focuses. C++14 has the best execution performance, Rust is the safest, and Go seems to aim to be the simplest. What does Nim do better than any of these?
Presuming it does do something better than those, learning Nim is hard because the examples are stale and the documentation is lacking. I don't just mean the official docs, which might be great, but where are all the answers on SO and the community-made books or tutorials?
The thing that can't be overstated with Nim is how fast it is. We're talking C/C++ level performance - and that's on "normal" code, not some abomination with tons of annotations or other cheats.
It's also very unlike Rust in that it's a very low ceremony language. Simple scripts are no more verbose than they would be in Python or Ruby.
C++ and Rust don't have garbage collection, Go's type system is very simplistic. Obviously, depending on what you do, this can be either an advantage or a disadvantage. My point is that there are clear differentiating factors.
The closest competitor for Nim is probably D, not any of the languages you name.
The article seemed to describe a high speed native language with garbage collection. I guess that does seem similar to D, but D is much closer to any of the languages I listed with more than 15 years of development and a much smaller list of breaking changes. It also isn't anywhere near as popular as the three I mentioned.
Why does Nim get the attention that D does not? I think Nim only gets attention around here because it's new and not because it brings anything directly useful. As long as it is still "new" there is still so much potential.
As for garbage collection, I think you are overestimating its value. There are great resource management schemes in both Rust and C++, and they can each manage much more than just memory. I am only qualified to speak in detail on C++, but if you simply avoid the "new" operator and automatically allocate everything (some say stack allocate), which is the default, then everything is cleaned up for you automatically.
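To illustrate, a minimal sketch of what I mean (the file name and types are just placeholders): with automatic storage the destructor does the cleanup, no "new" or "delete" in sight.

    #include <fstream>
    #include <string>
    #include <vector>

    void write_lines(const std::vector<std::string>& lines) {
        std::ofstream out("log.txt");   // automatic storage, no new/delete
        for (const auto& line : lines)
            out << line << '\n';
    }   // out's destructor flushes and closes the file here, even if an exception is thrown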
Further down the garbage collection route, if your app doesn't need performance then why not stick with Ruby, Javascript or Python, they are all well supported and provide tons of flexibility and libraries. If you need performance, why settle for half measures: why not go for C++, Rust or one of the blazing fast fully native languages without GC (or even one of the really fast ones with GC, like Haskell or Java)? I see the space between them filling up with a ton of languages that seem hard to distinguish. Nim looks like just another one, piling into this gap that swallows otherwise decent languages like D, Cyclone, DarkBASIC, Pascal, every .NET language that isn't C#, and so many other languages that I cannot count them.
> The article seemed to describe a high speed native language with garbage collection. I guess that does seem similar to D, but D is much closer to any of the languages I listed with more than 15 years of development and a much smaller list of breaking changes. It also isn't anywhere near as popular as the three I mentioned.
Well, when you compare languages, you usually do it by feature set. Both Nim and D are statically typed, garbage collected, imperative languages with first-class metaprogramming features.
I'll also add that Nim has about 9 years of development behind it. It's older than you think.
> Why does Nim get the attention that D does not? I think Nim only gets attention around here because it's new and not because it brings anything directly useful. As long as it is still "new" there is still so much potential.
I see both Nim and D getting comparable attention. They both aren't mainstream languages, so coverage is mostly by enthusiasts and intermittent. People are interested because both are sufficiently different from mainstream languages, not because they suddenly plan to bet their careers on them.
> As for garbage collection, I think you are overestimating its value. There are great resource management schemes in both Rust and C++, and they can each manage much more than just memory. I am only qualified to speak in detail on C++, but if you simply avoid the "new" operator and automatically allocate everything (some say stack allocate), which is the default, then everything is cleaned up for you automatically.
This would be a long debate. Suffice it to say that for many people the benefit of avoiding GC is unclear. Both C++ and Rust manage shared ownership poorly (i.e. it is inefficient, leads to lots of syntactic noise, or both). Note also that I'm not saying that GC is always better, but that it's a differentiating feature.
As for resource management, RAII in my experience is primarily the consequence of poor support for higher order functions or equivalent functionality in a language. (The type of resource management that RAII facilitates is usually better done through higher order functions or a sufficiently expressive macro [1] system.)
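For concreteness, here's roughly what the higher-order-function style looks like, sketched in C++ terms (with_file is a hypothetical helper, not a standard facility, and it assumes the callback returns a value):

    #include <cstdio>
    #include <stdexcept>

    // The helper owns acquisition and release; the caller only supplies the body.
    template <typename F>
    auto with_file(const char* path, const char* mode, F body) {
        std::FILE* f = std::fopen(path, mode);
        if (!f) throw std::runtime_error("cannot open file");
        try {
            auto result = body(f);
            std::fclose(f);
            return result;
        } catch (...) {
            std::fclose(f);
            throw;
        }
    }

    // usage: int c = with_file("data.txt", "r", [](std::FILE* f) { return std::fgetc(f); });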
> Further down the garbage collection route, if your app doesn't need performance then why not stick with Ruby, Javascript or Python, they are all well supported and provide tons of flexibility and libraries.
You seem to have some serious misunderstandings about garbage collection. No, GC is not per se a performance drain, and the two orders of magnitude of overhead that a typical bytecode interpreter for a dynamically typed language adds is not even in the same ballpark as even a poorly implemented GC.
[1] And by macros I don't mean C/C++ preprocessor macros, but macros that are properly integrated in the language and its type system, such as in LISP, Nim, D, or Rust.
Both C++ and Rust encourage exclusive ownership. And that's a good thing.
> RAII in my experience is primarily the consequence of poor support for higher order functions or equivalent functionality in a language.
HOFs tie resources to lexical scope; RAII ties them to object lifetimes, recursively. In turn, lifetimes may be bound to a lexical scope, but that is often not the case.
> Both C++ and Rust encourage exclusive ownership. And that's a good thing.
It's not. It makes lots of important things unnecessarily hard and often leads to unnecessary copying [1]. It makes it hard to even do something like OCaml's List.filter and Array.filter properly. It gets in the way of doing functional data structures (ex: binary decision diagrams). Lots of common design patterns also require shared ownership.
> HOFs tie resources to lexical scope; RAII ties them to object lifetimes, recursively. In turn, lifetimes may be bound to a lexical scope, but that is often not the case.
No, higher order functions don't per se tie resource management to lexical scope (though that's the easiest application). You can build an entire transactional model on top of higher order functions.
Second, tying resource management to object lifetime is dangerous, as object lifetime can exceed the intended life of a resource (ex: closures, storing debugging information on the heap).
Third, RAII is in practice little more powerful than lexical scoping. RAII works poorly for global variables (problems with initialization/destruction order) and thus is in practice limited to automatic and heap storage. Using heap storage leads to the aforementioned problems, where object lifetime can become unpredictable.
>It makes lots of important things unnecessarily hard and often leads to unnecessary copying [1]
If you read the article referenced by that thread you'll see that a major issue with string copying is due to having a lot of bad interfaces taking raw C pointers, so code on both sides of the interface needs to make copies exactly because the raw C pointer doesn't guarantee exclusive ownership. The remaining issues are due to a badly optimized string builder in Chrome and a failure to pre-reserve vector memory, which led to many copies on resize. This last issue is fixed with move semantics in C++11.
>It makes it hard to even do something like OCaml's List.filter and Array.filter
std::remove_if works generically on any range. Boost (and the range TR) provides iterator views when you need lazy evaluation.
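For reference, the in-place version is the usual erase-remove idiom (container and predicate are just placeholders):

    #include <algorithm>
    #include <string>
    #include <vector>

    void drop_empty(std::vector<std::string>& v) {
        // remove_if shuffles the kept elements to the front and returns the new end;
        // erase then trims the tail. The filtering happens in place, destructively.
        v.erase(std::remove_if(v.begin(), v.end(),
                               [](const std::string& s) { return s.empty(); }),
                v.end());
    }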
> You can build an entire transactional model on top of higher order functions.
I'm sure you can, in the end you can implement anything manually. With RAII propagation of lifetimes is done automatically by the compiler.
>Using heap storage leads to the aforementioned problems, where object lifetime can become unpredictable.
Only if you use shared ownership. Otherwise it's completely predictable.
> If you read the article referenced by that thread you'll see that a major issue with string copying is due to having a lot of bad interfaces taking raw C pointers, so code on both sides of the interface needs to make copies exactly because the raw C pointer doesn't guarantee exclusive ownership. The remaining issues are due to a badly optimized string builder in Chrome and a failure to pre-reserve vector memory, which led to many copies on resize. This last issue is fixed with move semantics in C++11.
The point is that something like:
List.filter (fun s -> String.length s > 0) list
simply cannot be done efficiently, because you require either copying or shared ownership for the strings. This also occurs naturally in a number of other situations, such as storing strings in objects.
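Spelled out in C++ terms (a sketch, not a claim about any particular codebase): keeping the original container intact means either deep-copying every kept string, or switching the whole API over to shared_ptr.

    #include <algorithm>
    #include <iterator>
    #include <memory>
    #include <string>
    #include <vector>

    // Exclusive ownership: every kept string is copied into the result.
    std::vector<std::string> non_empty_copy(const std::vector<std::string>& xs) {
        std::vector<std::string> out;
        std::copy_if(xs.begin(), xs.end(), std::back_inserter(out),
                     [](const std::string& s) { return !s.empty(); });
        return out;
    }

    // Shared ownership: no string copies, but now everything traffics in shared_ptr.
    std::vector<std::shared_ptr<const std::string>>
    non_empty_shared(const std::vector<std::shared_ptr<const std::string>>& xs) {
        std::vector<std::shared_ptr<const std::string>> out;
        std::copy_if(xs.begin(), xs.end(), std::back_inserter(out),
                     [](const auto& s) { return !s->empty(); });
        return out;
    }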
> std::remove_if works generically on any range. Boost (and the range TR) provides iterator views when you need lazy evaluation.
std::remove_if is destructive. Iterators are not the same thing as a functional filter operation.
> I'm sure you can, in the end you can implement anything manually.
The point here is that a transactional system is more powerful than RAII.
> Only if you use shared ownership. Otherwise it's completely predictable.
If you don't use shared ownership, then you're basically limited to lexical scoping.
> The point here is that a transactional system is more powerful than RAII
my point was that RAII trivially maps to transactions (destructors do rollback and commit is explicit). Transactional RAII objects can be composed to make more complex transactions.
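Something like this (a rough sketch, names made up): the destructor rolls back unless commit() was called, and several guards compose by simple nesting.

    #include <functional>

    class TxGuard {
        std::function<void()> rollback_;
        bool committed_ = false;
    public:
        explicit TxGuard(std::function<void()> rollback) : rollback_(std::move(rollback)) {}
        void commit() { committed_ = true; }             // explicit commit
        ~TxGuard() { if (!committed_) rollback_(); }     // implicit rollback on scope exit
    };

    // usage:
    //   TxGuard g([&] { undo_work(); });   // undo_work is whatever rollback action you have
    //   ... do work ...
    //   g.commit();   // leaving scope without this rolls back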
> If you don't use shared ownership, then you're basically limited to lexical scoping.
why would you say that? A common use case is having objects manually removed from collections triggering cleanup actions, like closing sockets, automatically de-registering from event notifications, sending shutdown events, rolling back transactions. That has nothing to do with lexical scoping.
> my point was that RAII trivially maps to transactions (destructors do rollback and commit is explicit). Transactional RAII objects can be composed to make more complex transactions.
RAII is limited in that it's tied to object lifetime, whereas more general transactional semantics can be linked to more general semantic conditions. Also, RAII does not have a good way to distinguish between commits and aborts.
> why would you say that? A common use case is having objects manually removed from collections triggering cleanup actions, like closing sockets, automatically de-registering from event notifications, sending shutdown events, rolling back transactions. That has nothing to do with lexical scoping.
If you use an explicit action to trigger destruction, then this doesn't have to be a deletion. In fact, having it tied to object destruction is unnecessarily limiting. The general rationale for having RAII is that it happens automagically; if explicit disposal is needed, then much of that rationale goes away.
In general, it seems to me that you don't have actual experience with resource management outside of C++, so you're mostly speculating about what it's like and trying to force your thinking about it into a C++-like model.
> in general, it seems to me that you don't have actual experience with resource management outside of C++, so you're mostly speculating about what it's like and trying to force your thinking about it into a C++-like model.
yes, I'm a C++ programmer. I have experience with resource management in C# and Python, for example, which are a pale shadow of what is possible in C++.
I know nothing of resource management in functional languages, especially regarding transactions and I would love to read more about it if you have some pointers (ah!)
There's an interesting example of resource management in Haskell with monads [1], but it's probably not easy to follow if you aren't already steeped in Haskell lore, so let me be a bit more basic.
First, note also that it would not be particularly hard to add RAII on top of a garbage-collected language to coexist with GC for resource management; it's just not done in practice. And it's not because language designers are ignorant of it (Bjarne Stroustrup's Design and Evolution of C++ is part of the standard recommended reading list in the field).
Generally, you want resource usage to be a provable property of a program. Not that you'd actually write a formal proof, but you generally want to be able to explain at least informally why resource usage follows certain constraints (e.g. having certain upper bounds).
The basic insight that you need is that resource lifetime is just another semantic property that you can handle with basically the same techniques as other properties of programs; you do not need special language support for it (though, obviously, it helps if your language is a bit more expressive than a Turing machine :) ).
This means that you'll generally tie resource usage to program state and program behavior that you can reason about. The incidental semantics of object lifetime can be dangerous, especially in a functional language, as object lifetime can sometimes be unpredictable.
One of the major hiccups is closures. Closures capture their environment (including local variables) and if they survive the stack frame that created them (because they are returned or stored on the heap), then the lifetime of any captured object can be extended in a fairly unpredictable fashion. Obviously, that is not a good thing, as you have a hard time proving lifetime properties, but few functional programmers would limit themselves to a more trivialized use of closures just for the sake of RAII.
Instead, as I said, you tie resource management to program behavior or state. In the most simple case, that can be scoped resource management. But it can also be an LRU cache, a system based on transactions, or something else entirely. Here's a simple example of a library I sometimes use in OCaml:
    class example = object
      inherit Tx.resource
      initializer print_endline "create"
      method release = print_endline "close"
    end

    let _ = Tx.scoped (fun () -> new example)
This is a simple lexically scoped transaction, but the library also allows for chained, nested, etc. transactions that don't begin and end based on lexical scope, but (say) program events (e.g. terminate a transaction when a socket is closed from the outside and release resources that are associated with that connection). It can also distinguish between commit and abort behavior (similar to the Haskell example above), will properly error if resource creation is not done within the context of a transaction, plus a few other bells and whistles.
15 years as a C++ dev and I agree with gpderetta: cleaning up all manner of resources is awesome. Even Bjarne, the creator of C++, agrees cleaning up resources in destructors is good. Then the standard committee agrees it is good, because things like std::lock_guard and the custom "deleters" on shared_ptr and unique_ptr exist and work with many resources.
If you have issues managing lifetimes I can see why you might think explicit resource cleanup is better, but with so many scopes (even thread-local scope), and with move semantics' ability to move an object into new scopes, there really is no limitation imposed by tying resource cleanup to object lifetime. If you don't like that, then make your own classes to do it explicitly.
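For reference, the standard pieces mentioned above look roughly like this (the mutex and file are just placeholders):

    #include <cstdio>
    #include <memory>
    #include <mutex>

    std::mutex m;

    void append_line(const char* path, const char* line) {
        std::lock_guard<std::mutex> lock(m);   // mutex released automatically at scope exit

        // unique_ptr with a custom deleter manages a non-memory resource (a FILE*).
        std::unique_ptr<std::FILE, int (*)(std::FILE*)> f(std::fopen(path, "a"), &std::fclose);
        if (f)
            std::fputs(line, f.get());
    }   // fclose runs here via the deleter, then the lock_guard unlocks the mutex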
And as I said before, if your entire perspective comes from C++, it may be too narrow. I'll give you two examples:
1. Modern functional programming languages generally come with compacting, generational garbage collectors that have bump allocators. This means in particular that the cost of heap allocations for temporaries is only marginally higher than that of alloca() and has good locality even for pointered structures (to the point where linked lists can outperform dynamically resized arrays such as std::vector, which is basically unheard of in C++). When heap and stack allocations are that competitive, that opens up a whole new set of techniques that aren't normally used in C++ and lifetime considerations become a lot more complex.
2. Functional programming languages use closures extensively, and closures can have effects on object lifetimes that are difficult to predict. The reason is that closures capture their environment – in particular local variables – and if they survive the stack frame that generated them, this can lead to objects living much longer than you think. It's a major reason why closures and RAII don't get along well (note that C++ didn't have closures until recently and in practice their use is much more constrained than in functional or multi-paradigm languages).
This does not mean that you do not want to have sane resource handling. But in general, you want resource usage to be a provable property of a program, so you will generally tie resource management to program state or program behavior rather than incidental language semantics.
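A C++-flavoured sketch of the closure issue (hypothetical types, with shared_ptr standing in for GC-managed memory): the returned closure keeps the Session alive for as long as the closure itself lives, which the caller can't easily predict.

    #include <cstdio>
    #include <functional>
    #include <memory>
    #include <string>

    struct Session {
        std::string name;
        // imagine this also holds a socket or some other resource
    };

    std::function<void()> make_logger(std::shared_ptr<Session> s) {
        // The lambda captures the shared_ptr by value, so the Session now lives
        // as long as the closure does -- possibly far longer than the caller expects.
        return [s] { std::puts(s->name.c_str()); };
    }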
First, allocation is only so cheap if it's temporary. If objects survive minor collections, then there's additional cost, as they get promoted to the major heap. The key idea that I'm getting at is that with temporary objects being cheap, you have more flexibility in creating temporary data structures: you do not have to fit them within the constraints of a stack frame, you do not have to worry about stack overflow (unlike with alloca()), and they can be returned from a function without copying (unlike stack frame contents).
Temporary data structures will still be small and generally fit in the L1 cache of any reasonably modern processor. And using pointers does not mean that everything is a pointer, or that you're necessarily sacrificing ILP.
I don't discount the power of being able to cheaply create (short lived) highly dynamic data structures. I do miss it in C++ and alloca never feels right.
> Modern functional programming languages generally come with compacting, generational garbage collectors that have bump allocators. This means in particular that the cost of heap allocations for temporaries is only marginally higher than that of alloca() and has good locality even for pointered structures
Interesting. Would you mind naming a few such languages? I'm guessing Haskell. What about OCaml? Any others?
I know that OCaml, Haskell, the JVM and Microsoft .NET do it (I think Mono does, too, but am not positive). And I know for a fact that OCaml and the JVM inline allocations and optimize multiple allocations that are close together (e.g. increasing the allocation pointer only once even if you allocate a pair of objects).
It's fairly common and needed for modern functional languages, as they can go through a lot of temporary objects when programming in a purely functional style.
>> The point is that something like:
List.filter (fun s -> String.length s > 0) list
simply cannot be done efficiently, because you require either copying or shared ownership for the strings. This also occurs naturally in a number of other situations, such as storing strings in objects.
This is a very salient point. It bounced around in my brain a couple of hours before I came back to comment.
How often do you need to control memory layout and management so you get the absolute best performance? Compare that to how often you need to express filters.
For me, there's no doubt that expressing functional logic and having it be decently efficient is the most important need.
This little example of yours illustrates that a well-designed garbage-collected language has a HUGE advantage over RAII. I may be slow on the uptake, but this is the first time I've seen it that way.
* If the filtered result is used locally in a function (and then thrown away), a filtered view works just fine; it is very cheap, efficient, and lazy (nice if you only consume a subset of it). See the sketch at the end of this comment.
* Often only the filtered result is used (and the original is not needed afterwards), so you can destructively modify the original list: no copies.
* If you need both the original list and the filtered list, in a functional language you need to allocate new cons cells anyway. The cost of allocating, modifying and touching the new memory is going to dominate except for very long strings, so copying is not an issue.
Of course in C++ lists are frowned upon in the first place (as many algorithms can handle any data structure transparently), while they are kind of central in many functional languages.
There are cases of course where frictionless shared ownership is nice and GC shines. Filter is not one of them.
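Roughly what the lazy, no-copy view looks like with Boost range adaptors (assuming Boost is available; C++20's std::views::filter is the modern equivalent):

    #include <boost/range/adaptor/filtered.hpp>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> xs = {"", "a", "", "bc"};
        // Lazy view: nothing is copied; elements are filtered as they are iterated.
        for (const auto& s : xs | boost::adaptors::filtered(
                                      [](const std::string& x) { return !x.empty(); }))
            std::cout << s << '\n';
    }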
The problem is that you need a vastly more complex machinery to cover all the various ways to avoid copying/reference counting and then still don't have a general solution when you can't avoid multiple ownership (unless you count std::shared_ptr with its very high overhead).
As I said before, it's not that you can't do it, it's that there are costs associated with it.
> If you need both the original list and the filtered list, in a functional language you need to allocate new cons cells anyway.
You can filter arrays also and allocating cons cells for a list is pretty cheap with a modern GC, as discussed before.
That's not the point that I'm getting at. Exclusive ownership invariably mandates copying for certain use cases. Rust allows you to obviate this in a few more cases through borrowing, but as a general rule, exclusive ownership and having multiple references to the same object do not mesh. You need to get rid of either one or the other. That Rust forces you to be explicit about copying in those cases does not make the underlying problem go away.
This is especially noticeable and constraining when you come from a functional programming background and not C++ or when you're doing stuff that's more complicated than shuffling bytes (I've mentioned binary decision diagrams as an example).
The bigger point is: as it is extremely rare for me to write one of the few niche applications that are actively GC-hostile (such as web browsers, AAA video games, or OS kernels), I don't see the point of jumping through all the extra hoops that avoiding GC brings with it.
No, my point is not that it cannot be done (both Rust and C++ are Turing-complete, so "cannot be done" does not make sense for anything that's a computable property), but that it comes with a cost. See my other response for the details.
> Hm, why would you not use multiple ownership then, instead?
That's exactly what I want. The problem is that (1) it comes with significant runtime overhead and/or syntactic noise in Rust/C++ (or alternatively, lack of memory safety); and (2), it becomes difficult to write code that works equally well for multiple and exclusive ownership (module APIs often become burdened with implicit or explicit ownership assumptions).
Well, let me also add that I completely understand that there are use cases for Rust, where a GCed language would be a poor fit. An obvious example is a web browser ( :) ), where the x% memory overhead that comes with a garbage collector is just a price that you may not be able to afford to pay.
In other words, don't read this as "Rust sucks" (I actually rather admire Rust's design), read it as: for my purposes, the practical use cases of Rust are generally too niche to justify the software engineering tradeoffs.
Yeah, I hear you. I think this is fair. And at least some of it comes down to preference, that is, I don't think that the syntactic noise is very much, but others can certainly disagree. The others would mostly be a "nuh uh" since I don't have numbers anyway :)
At the end of the day, Rust can never be great for every single last programmer, and this is entirely okay.
> Third, RAII is in practice little more powerful than lexical scoping.
I'm beginning to think that we're all going about things wrong :) . We like and rely on things like RAII, macros, etc. But these are nothing more than specific compiler features.
If we were to take control of generating our own code, we could have RAII, macros, and whatever else we dreamed up, easily. And the generated code could be in some readable, debuggable language, so much more visible and obvious than the results of a macro expansion.
For so long we've relied on an amazing black box called a "compiler", when perhaps we should take on the responsibility ("power") of implementing a compiler ourselves. (We could still, of course, generate some intermediate mainstream language, and then those amazing black boxes could take that as input and apply all their optimizations.)
Seems to yield clearer code, less magic, more power, and more portability.
I was unaware it was 9 years old. You have made me reconsider whether or not my exposure to D is in a bubble. I actually have the D episode of CppCast sitting in my playlist right now. Not because I searched it out, but because Alexandrescu pushes D so hard. I have also seen a Google tech talk, and several talks at the level of BoostCon or CppCon. I just searched for "Nim Tech Talks" and I couldn't find anything recorded, just one programming-language enthusiast group on meetup.
After reconsidering, I think D, while not mainstream, has a much larger mindshare than Nim. Even if it were the larger one, I don't see the need for another language in exactly this space, where several other languages have failed, demonstrating the difficulty here.
As for garbage collectors, I still hold the opinion that they leave all the other resources unmanaged, that all but the best have large performance costs, and that Rust and C++ solve these problems with RAII.
I apologize if you thought that I meant Ruby, Python or other newer scripting languages were a drop-in replacement for a strong systems language. I see a typical lifecycle for software starting off as a rapidly cobbled-together pile of some scripting language. Perhaps someone makes a thing in Ruby on Rails and they assemble some product in a week or two. Then as they grow, product owners often choose to rewrite (sometimes just partially) in something faster, commonly Java, Go or C. So why settle for something slow here, when the problem is well defined and it can be done with tools that produce incredible performance? People jump this gap because it is worth the time and effort to maximize the ability to scale a proven product. Why leave half or more of the performance on the table even when compared to Java?
I appreciate your input, but you seem more intent on attacking other languages than promoting Nim. I am still unsure why it's good. Even if the others are bad, that doesn't make Nim good.
Just to add, JavaScript is not slow and hard to use anymore. With the V8 engine, JavaScript can be an order of magnitude faster than Ruby or Python (almost as fast as Java in most cases). With ES6 (ES2015) features it becomes a pleasure to use.
It should get more attention as a Ruby/Python alternative.
JavaScript does not need any more attention drawn to it. Regardless of its merits as a language and how much developer time it has swallowed, it remains the only language in the browser and therefore is forced into the hands of most developers.
Saying JavaScript needs more attention is like saying the level of nitrogen in the Earth's atmosphere needs more attention paid to it.
>I don't see the appeal of Nim. We have the choice of C++14, Rust and Go for well-supported, robust languages with different focuses. C++14 has the best execution performance, Rust is the safest, and Go seems to aim to be the simplest. What does Nim do better than any of these?
Well, C++ is bloated and full of tricky parts, Rust has lots of ceremony and head-scratching to fit your programs into its "lifetimes" model, and Go is lacking expressive power and has several bad decisions baked in (possibly forever).
None of these things say why Nim is good. All of these languages, despite all these failings, have been used to good effect. Why is Nim good, what does it do that makes it better where these fail?
This isn't a real response, nor is it accurate. C++ is used in countless video games, science simulations, applications and operating systems. Rust is new (only 7 years old), but is being integrated into the renderer of a major browser and has a library ecosystem that is growing. Go is about as old as Rust and is used to make a ton of real-world services, including a large chunk of the services at Google.
Nim is older than Go and Rust, and has what to show for it?
It sounds like a negative and leading question, but I really don't mean it that way. What are some interesting projects made in Nim? Even hobbyist project can be meaningful.
Languages and other projects backed by large companies are known for growing fast. Java, Visual Basic, C#, Go, Rust... Community-driven projects, generally speaking, tend to have slower growth, e.g. Python, Linux.
TL;DR: 'Both the C and Swift are now the same. I don’t just mean “they take the same time”, I mean they are compiled to literally the same instructions.'
Initially the Swift Mersenne Twister code was faster, but he was able to apply an optimization to the C code that made it identical.
Metaprogramming, and it's not just about how powerful it is. Metaprogramming in Nim is easy, safe, and powerful.
You mention a lot of extremes: Go is the simplest, Rust is the safest, C++14 has the best execution performance. These extremes have trade-offs: Go's lack of generics means that you end up writing a ton of unsafe boilerplate code, Rust's safety means that you have to spend extra time wrapping your head around lifetimes, and C++'s best execution performance means that you have to deal with memory manually and in an unsafe manner.
These trade-offs are debatable, but my point is this: what if I want a language that is well balanced? Nim may not be as simple as Go, but it's definitely simpler than C++ and Rust. Nim may not be as safe as Rust, but only when you decide to manage memory manually, which is rarely necessary; when using Nim's GC it is just as safe as Rust. Nim compiles to C/C++, so its performance is identical in many cases.
The initial idea behind Nim was to have a small core language with a powerful metaprogramming system for expansion. Perhaps we have strayed from this vision a little bit, but it is still true. And we do aim to refocus towards it.
Nim always looked to me like a modern version of OCaml made by someone who has no idea how to do static type systems properly. The language as a whole might have a nice feel and convenient features, but the type system is really not good ...
D's type system, at least, is better designed.
On the other hand, I always considered Rust as the result of an ML programmer looking at C++ and thinking "this is bullshit, I can do it cleaner".
I might have a very ML view of the programming world, but that's ok, most other people have an ALGOL/C view of it. ;)
some others worth checking out (all largely single-developer efforts):
http://www.ats-lang.org/ [ml + c + everything dialled up to eleven. linear and dependent types. comes out of academia, but under active development and slowly beginning to gather a community.]
https://eigenstate.org/myrddin/ [not really familiar with this one but it looks interesting. close to the metal, has pattern matching over algebraic datatypes but seems more towards the c than the ml end of the spectrum from the brief glance i took.]
http://felix-lang.org/ [spawn of ml + c++ rather than ml + c. under enthusiastic if haphazard development.]
there have been a few others i've seen pop up over the years; i keep meaning to make a website or at least a github awesome-style list cataloguing and tracking them.
For all those propositions, I have only one question: Why would I use that instead of OCaml?
OCaml is far from being perfect, but it has lots of features and a decent community (especially compared to all those hobby/research languages).
OCaml can do pretty much anything except things like image processing and video game programming. I mostly don't care about those things, and if I did, I would use Rust.
Now, from a research point of view, sure, those languages are interesting.
i'm in the same boat - i keep an interested eye on developments in the ML-cross-C space, but so far none of them seems compelling to use over ocaml. however, if one of them had a cross-compiler as good as go's, that would be a killer feature. i would really like an ml-like language i could use to develop desktop apps and trivially deliver binaries for multiple platforms.
Maybe in the past, but not today. Today the perspective is from the very high level dynamically typed languages. Encoding things into types is out of the question.
The Felix language may fit your taste then: http://felix-lang.org/ It's related to C++ somewhat in the way, say, Scala is related to Java.
It's been around for a while. Felix has an ML-like type system (stronger, in fact), has pattern matching, has coroutines/fibers, and was designed for FFI-less interop with C++ libraries. Very fast too.
This is more an expression of confusion than an evaluation of type-system quality, but why does the JSON parser's implementation return its own types like JsonNode? Why doesn't it just return core types like:
There's also a lot of Modula-3 in there. Actually, the whole language is full of Pascal-isms, but I'm not sure how many come directly from Pascal and how many come via Modula-3.