Yes, this seems like an excellent performance boost for development. Parallelize type checking and compilation. We all have these many-core machines now, and so many tools still single-thread the world unnecessarily.
This is unproven (and not a toy problem), but I imagine it's going to do pretty well at compilers. The amount of time I spend waiting at work, hypnotizing the tsc process that sits at 100% CPU, wishing it were parallel...
I call bullshit. Researchers have been trying to make auto-parallelizing compilers for decades without anything substantive making it to industry.
Even in functional languages, auto-parallelization hasn't worked well because of the coarseness issue: it's difficult for compilers to figure out what to run in multiple threads and what to run single-threaded because of tradeoffs with inter-thread communication.
Having a layer of parallelisation on top of good old sequential code seems like a very neat idea. It resolves headaches of learning how to run code in parallel in languages that aren’t necessarily my primary language (e.g. short, one-off scripts). Thanks for sharing!!
I wouldn't put my hopes very high on this. They tried doing this kind of ubiquitous automatic parallelization in Haskell, but they ran into parallelization-overhead issues, because it's very hard to have the computer figure out the correct parallelization granularity all by itself.
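The granularity problem is easy to reproduce by hand; here's a sketch in Python (rather than Haskell) running the same parallel map with one task per item versus batched tasks. The function and sizes are just illustrative numbers:

```python
# Sketch of the granularity problem: the identical parallel map run with
# per-item tasks (scheduling overhead dominates) vs. batched tasks.
import time
from concurrent.futures import ProcessPoolExecutor

def tiny_work(n):
    return n * n          # far too little work to be worth its own task

def timed_parallel_map(chunksize, items=5_000):
    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        result = list(pool.map(tiny_work, range(items), chunksize=chunksize))
    return time.perf_counter() - start, result

if __name__ == "__main__":
    fine, _ = timed_parallel_map(chunksize=1)      # one task per item
    coarse, _ = timed_parallel_map(chunksize=500)  # overhead amortized
    print(f"chunksize=1: {fine:.2f}s  chunksize=500: {coarse:.2f}s")
```

Picking that chunksize by hand is trivial for a human with a profiler; picking it automatically, for arbitrary code, is exactly what the compiler couldn't do.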
And of the major stuff that's heavily parallelized, basically all of what I've seen is written in static languages.
With good reason, too. I submit that when you're writing code like that, it's generally because you're trying to do a CPU-bound operation as fast as you can. Which means that unnecessarily spending cycles on runtime type checks that could have been settled at compile time is probably not your cup of tea.
I've given a number of talks on the parallelism TS at C++ conferences and fully believe it can revolutionize the C++ libraries. I really hope it makes it into C++17.
Having such easy parallelism techniques within the language would be huge; I can think of so many scenarios where stupid-parallelism can be applied using the STL. The only setback is the additional room to shoot yourself in the foot: people who don't understand the overhead of parallelism could easily blow up a program in terms of efficiency.
I doubt that outside of special cases this would actually provide significant benefits.
Suitable languages already exist (see Erlang), but the fact of the matter is that a vast array of problems don't benefit from parallelism or aren't parallelisable at all. Not to mention that a good chunk of the benefits would be negated by the increase in coordination and synchronisation between parallel tasks.
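A quick way to see why the sequential, coordination-heavy part of a program caps the benefit is Amdahl's law; a small sketch (the 50% parallel fraction is just an illustrative number):

```python
# Amdahl's law: if only a fraction p of a program can run in parallel,
# the best possible speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with half the program parallelisable, 32 cores buy less than 2x:
print(amdahl_speedup(0.5, 32))   # ~1.94
```

And that bound ignores synchronisation costs entirely; real coordination overhead only pushes the number down further.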
Well, I expect that most programming language research in the next 10 years will focus on exploiting multicore programming. If we're lucky, we'll all be using something like Haskell or Erlang, and all apps will be multithreaded. Languages that choose to ignore multicore will slowly suffocate. Writing multithreaded apps will become much easier than it is now, if not trivial.
I love it. A suggestion is to have something for parallel computations because personally I know I won't even try a language that doesn't have something for that. I think the way the language handles parallelism is key to its design and syntax. Other than that, it's really cool.
I use parallelization all the time. It's easily added later thanks to this feature, unlike, say, rewriting a code base out of a single-threaded language because you thought async constructs would be enough. Never making that mistake again.
Not a stupid question. I think adding parallelism to the compiler might be possible, though there are certainly parts that use a lot of mutable state that could prove tricky.
Thanks for the explanation. Popular "modern" languages like Javascript, Ruby or Python already have many of the "FP" features mentioned in the OP and comments: first-class functions (though not so simply in Ruby) and high-level data processing functions (e.g. map, filter, select).
As best I can tell the big new benefit of FP is the potential, generally not currently realized, for automatic parallelization. The cost is having to think in a new way that is, at least initially, somewhat confusing.
Alternatively, non-FP languages could simply import ideas from FP and gain the benefits in an evolutionary way. For example, if Matz wanted to, he could rewrite the Ruby interpreter to have map, filter and many other methods automatically use multiple cores. Eventually programmers using the new multi-core version would learn which structures (such as map) were going to be parallelized and which (such as for loops) weren't.
There would be many language specific details to work out. For example determining if there are truly no side effects in a particular situation. This is not always so simple in non-FP languages but still not as complex as analyzing every possible for..end loop.
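The idea in the last two paragraphs can be sketched in a few lines — here in Python rather than Ruby, and with a purely hypothetical `@pure`/`pmap` API where the caller, not the interpreter, vouches that there are no side effects (dodging the hard analysis problem entirely):

```python
# Hypothetical sketch: a map that runs across cores only for functions
# the caller has declared side-effect free. @pure and pmap are made-up
# names for illustration, not a real library API.
from concurrent.futures import ProcessPoolExecutor

def pure(fn):
    fn._pure = True               # the caller's promise: no side effects
    return fn

def pmap(fn, items):
    if getattr(fn, "_pure", False):
        with ProcessPoolExecutor() as pool:
            return list(pool.map(fn, items, chunksize=64))
    return [fn(x) for x in items]  # unmarked functions stay sequential

@pure
def square(n):
    return n * n
```

A real interpreter change would have to prove purity itself, which is the "many language specific details" part; this sketch just shows where the fork between parallel and sequential paths would sit.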
As processors with 4, 8, 16, 32 and more cores become more and more common, the pressure to either change to FP languages, incorporate FP features into non-FP languages, or find some other parallelization methodology (?) will become irresistible.
And luckily, code compilation tends to be one of those things that can actually be effectively parallelized.
Another thing useful for a programmer is being able to benchmark things - with a multicore machine you can tie off one core (or multiple cores!) exclusively for a benchmark, which can really help reproducibility.
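On Linux, tying the benchmark process to one core is a one-liner; a sketch (`sched_setaffinity` is Linux-specific, and the workload is just a placeholder):

```python
# Pin this process to a single CPU so the benchmark isn't perturbed by
# migrations across cores (Linux-only: os.sched_setaffinity).
import os
import timeit

def pin_to_core(core=0):
    os.sched_setaffinity(0, {core})   # pid 0 = the current process

if __name__ == "__main__":
    pin_to_core(0)
    t = timeit.timeit("sum(range(10_000))", number=1_000)
    print(f"ran on CPUs {sorted(os.sched_getaffinity(0))} in {t:.3f}s")
```

For full isolation you'd also keep other processes off that core (e.g. the kernel's `isolcpus` boot option), but even plain pinning noticeably tightens run-to-run variance.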