Could everyone on HN just take a course in programming language theory so we can all stop with these silly "best language" trolls that have been cropping up for a week?
Hopefully it would allow everyone to realize that a language is just some syntax and semantics, and that a compiler is just a program like any other. Nothing sacred here. Hell, you can even do imperative programming in Haskell if you wish. Coding an interpreter to do so is not particularly difficult.
With a bit of luck, everyone would understand at the same time that the compiler builds what is actually executed, so the expression "speed of a language" is utter nonsense. You can only speak of the speed of an implementation, which, yes, varies widely amongst compilers for the same language.
So, yes, Haskell semantics encourage functional programming, C semantics encourage imperative programming, both are Turing complete, and yes, GCC is currently better at optimising C code than any Haskell compiler is at optimising Haskell. Nothing new under the sun. Can we all go back to a normal activity now?
I'm curious how a language with no compiler is "faster than C", since the speed of C has nothing to do with the language and everything to do with the compiler implementations, which have been so heavily worked on over the years.
I can believe it - I've made my own language and compiler whose output, in the niche area it targets, is faster than that of any C compiler I've ever seen - I'm just curious how this assertion is backed up.
I thought this was the most telling line of the article: "Ultimately the first factor of performance is the maturity of the implementation."
That supports a common conviction held by fans of functional programming: if all of the years of arduous optimization that have been poured into GCC had instead been poured into (say) GHC, then Haskell would be even faster today than C is.
That is, to many people functional programming languages seem to have more potential for performance than lower-level procedural languages, since they give the compiler so much more to work with, and in the long run a compiler can optimize much better than a programmer. But so much more work has been put into the C-style compilers that it's hard to make a fair comparison. It's still hard, but this experiment seems to give some solace to the FP camp.
I think many people fully understand this. When people rave about, say, Haskell being nearly as fast as C, what they mean is that you can write much higher-level, and thus shorter and more concise, code in a way that the compiler can optimize automatically.
In other words, he's correct, but he's also assuming incorrectly that because others don't spell it out, they don't know it.
I don’t really care which functional programming advocate you’re quoting. They’re all liars when they make these claims.
You can say SSA, static guarantees, internal mutability, blah blah blah all you want. When third-party measurements, not anecdotes specifically chosen to make FP look good, stack up to the claims, we can have a better conversation.
It’s not looking good though, because these claims of “actually, Haskell is faster than C because compiler magic” have been going around for some 15 years, since well before Haskell was stable, and they’ve been lies for just as long.
I won't rehearse my "large system" credentials here, but I have them, and this meme that C++ and Java are somehow safer for large teams to work in is a joke. I've done large projects in Tcl (!) and in C++ (!$&#) and there are more ways to shoot your teammates in the feet with C++ than in Tcl; they just aren't first-class elements of the language. And C++ is universally regarded as a language for large system development.
I won't rehearse my "high performance systems" credentials here, but I have them, and this meme that the speed penalty for dynamic languages is a problem is a joke. Reread my previous comment: I'm getting in between the processor and the native instruction pipeline in Ruby, and it is, I believe I said, the greatest thing ever.
Belaboring the sentiment: the problem with the "fast systems language meme" is that it leaves you with the impression that the language is where speed comes from. No. It is trivially easy to write slow systems code in C: do everything off an interrupt, do 3 system calls per byte, demand-spawn a thread.
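As a minimal sketch of the "3 system calls per byte" anti-pattern (the function names and the /dev/null target are mine, chosen for illustration): the language is C either way, but one design makes a kernel round-trip per byte while the other batches the same data into a single call.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Slow path: one write(2) syscall per byte, i.e. one kernel
 * round-trip per byte of output. */
ssize_t write_bytewise(int fd, const char *buf, size_t n) {
    size_t done = 0;
    while (done < n) {
        ssize_t r = write(fd, buf + done, 1);
        if (r != 1)
            return -1;
        done++;
    }
    return (ssize_t)done;
}

/* Fast path: the same data, one syscall for the whole buffer. */
ssize_t write_batched(int fd, const char *buf, size_t n) {
    return write(fd, buf, n);
}
```

Both functions produce identical output; only the number of kernel crossings differs, which is exactly the kind of design decision no compiler can fix for you.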
The answer to fast systems is to find the bottlenecks and fastpath them. It is much easier to fastpath a bottleneck when a hash table of digraph structures is 2 lines of code instead of a 1.5mb compiled library with template headers that take 3 minutes just to parse.
None of this is heresy. You can read it in "Expert C Programming: Deep C Secrets", which is the first C book lots of people ever read. Your design is where speed comes from. If method dispatch is a bottleneck, please don't waste time moving to C++: your design is fucked, and you are doing it wrong. Do it right.
My heresy is that static type checking is a white elephant. I've been doing this kind of work for going on 15 years now, and I can count on 1 hand the number of head-scratching bugs I've had due to type safety, and that includes 100kloc+ components built entirely around void* closures. The things that really fuck you in large systems are object lifecycle and concurrency.
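For readers who haven't seen the idiom: a "void* closure" in C is a function pointer paired with an untyped context pointer. A minimal sketch (the names here are mine, not from the comment above):

```c
#include <assert.h>
#include <stddef.h>

/* A function pointer plus an untyped context pointer. The
 * compiler cannot verify that ctx points at what the callback
 * expects; the cast inside add_ctx is an unchecked promise. */
typedef struct {
    int (*fn)(void *ctx, int x);
    void *ctx;
} closure_t;

static int add_ctx(void *ctx, int x) {
    return x + *(int *)ctx;  /* trusts ctx to really be an int* */
}

int closure_call(closure_t c, int x) {
    return c.fn(c.ctx, x);
}

closure_t make_adder(int *k) {
    closure_t c = { add_ctx, k };
    return c;
}
```

Note that the type system is silent here, but the lifetime of whatever `ctx` points at is not tracked at all, which is the object-lifecycle hazard the comment is talking about.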
That's actually something I've been saying for quite a while when people bring up microbenchmarks complaining about how a language like Haskell is slow.
Like, yes, no question, if you try to implement a tight loop in Haskell vs C, C is going to come out a lot faster.
However, no one writes Haskell like that! People rely pretty heavily on standard libraries (or even third party libraries), and on the optimizations afforded to them by the "rich" language of Haskell.
C might be capable of being faster, but that doesn't imply that a C program will inherently be faster than a similar Haskell program, especially if the programs are relatively large.
> This article is in response to an earlier one comparing Haskell and C, which made the claim that Haskell beats out C when it comes to speed.
Perhaps my reading comprehension of the original post is different from Jacques' (or I'm just wrong), but I don't think that the original article made such a claim. Here's the TL;DR of the original article:
> TL;DR: Conventional wisdom is wrong. Nothing can beat highly micro-optimised C, but real everyday C code is not like that, and is often several times slower than the micro-optimised version would be. Meanwhile the high level of Haskell means that the compiler has lots of scope for doing micro-optimisation of its own. As a result it is quite common for everyday Haskell code to run faster than everyday C. Not always, of course, but enough to make the speed difference moot unless you actually plan on doing lots of micro-optimisation.
From this, I understood that in a larger program, most programmers wouldn't be doing the kind of micro-optimizations that they do for the Benchmarks Game. I figure that most code would be written following this pattern:
* Write code to follow the spec (usually without thinking too much about performance)
* Run the code, and evaluate its performance
* If the performance is good enough, you're done
* If the performance needs to be better, run a profiler
* Find the biggest bottleneck and optimize it
* Re-run the code and re-evaluate its performance
The original article took a micro-benchmark (a mistake in my opinion, because it's easy to micro-optimize every single detail of such code) and showed that in Haskell, the first version was too slow, but with the help of a profiler the cause was easy to find and the fix was trivial. In C, the profiler showed no problem with the user's code, so the problem had to be in the standard library, and fixing it required changing the design of the program and moving it further from the spec. And I felt this was the author's real point: to get the maximum performance from a C program, you need to write it not the way a human would imagine it, but the way a computer would, and that makes the code harder to maintain later on.
These "faster than C" claims are almost always embarrassing (usually involving C code that would easily win if it were as aggressively optimized as the high-level language) but that's almost not the point.
The real point is the larger narrative. The subtext of these posts is what we are really arguing about. So let's just duke that out directly.
High-level language fans have a point, which is that high-level languages sometimes offer a better overall "bang for the buck" in developer time, and that sometimes they can be pretty fast (possibly even out-performing an un-optimized C program). Reasonable C guys aren't arguing against this. We're certainly not arguing that people should use C for everything.
But here's what high-level language fans have to understand. First of all, you depend on us. Your language runtime is (very likely) implemented in our language (possibly with a little assembly thrown in). So as much as you may like your language, it certainly does not obsolete C. C guys like me get cranky when high-level language fans imply that it does.
Second of all, a C+ASM approach will always win eventually, given enough time invested. That is because a C+ASM programmer has at his/her disposal literally every possible optimization technique that is implementable on that CPU, with no language-imposed overhead. What this means is that a higher-level language being "faster than C" is just a local maximum; the global maximum is that C is faster.
Yes, it's absolutely true that in limited development timeframes a higher-level language might still be the right choice, and in rare cases might even have better performance. But for long-term projects that want the absolute best performance, C (or C++) are still the only choice. (But maybe Rust someday).
It seems to me that this is more of a complaint leveled at C-ish compilers. One could envision a language that makes the sort of structures that baffle a normal compiler more visible and amenable to optimization. I'm not sure if some syntax and some clever semantics could address all these problems, but...
When I was first learning C formally, I had this idea that C was the "fastest language". My instructor said, day 1, that the problem with C was that it was not amenable to optimization. It was pretty surprising, but as time goes on and languages, compilers, and interpreters get better I think we'll see this sort of problem more and more.
I'm surprised by how dramatic the difference is between the speed of C and Haskell. One of my professors (at The University of Glasgow, so appropriately a Haskell fan) once claimed that it had "C-like performance".
I suppose that's the point of this paper though: "C-like performance" is a terribly vague term, meaningless without knowledge of the specific comparisons being made.
It's not necessarily which language is faster, but which algorithm is faster.
He said naively written C, which means the algorithm may be entirely different: it might run in O(n²) and be much slower than a Haskell version which uses a different algorithm and runs in O(n log n).
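To make the asymptotic point concrete, here is an illustrative example of my own choosing (duplicate detection, not from the thread): the naive pairwise version is O(n²), while sorting first brings it down to O(n log n), in the same language with the same compiler.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Naive O(n^2): compare every pair of elements. */
bool has_dup_naive(const int *a, size_t n) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return true;
    return false;
}

static int cmp_int(const void *p, const void *q) {
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

/* O(n log n): sort a copy, then scan adjacent elements. */
bool has_dup_sorted(const int *a, size_t n) {
    if (n < 2)
        return false;
    int *tmp = malloc(n * sizeof *tmp);
    if (!tmp)
        return has_dup_naive(a, n);  /* fall back if allocation fails */
    for (size_t i = 0; i < n; i++)
        tmp[i] = a[i];
    qsort(tmp, n, sizeof *tmp, cmp_int);
    bool dup = false;
    for (size_t i = 1; i < n && !dup; i++)
        dup = (tmp[i - 1] == tmp[i]);
    free(tmp);
    return dup;
}
```

For large n the second version wins regardless of how well the compiler optimizes the first; no amount of micro-optimization rescues the wrong algorithm.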
I don't think it was an assumption; I think Steve was speaking from experience. As someone working extensively in C and doing a fair bit of Haskell, I'm finding my C to be more verbose and less safe. The C runs faster, in my application; sometimes I write it faster and other times slower, depending on a bunch of factors.
C is not the fastest language. C++ is faster than C. For example, the only way you can hope to match C++ inline template algorithms in C is with a horrific macro scheme.
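A (mild) sketch of what that macro scheme looks like, using a max-of-array function of my own invention rather than a full sort: the macro stamps out a type-specialized function per element type, roughly the way a C++ template like `std::max_element` gets instantiated and inlined per type, whereas a qsort-style `void*` callback would leave the comparison behind an opaque indirect call.

```c
#include <assert.h>
#include <stddef.h>

/* Generate a specialized max function for element type T.
 * Each expansion gives the optimizer a fully concrete loop to
 * inline and vectorize, with no function-pointer indirection. */
#define DEFINE_MAX_FN(T)                          \
    static T max_##T(const T *a, size_t n) {      \
        T m = a[0];                               \
        for (size_t i = 1; i < n; i++)            \
            if (a[i] > m)                         \
                m = a[i];                         \
        return m;                                 \
    }

DEFINE_MAX_FN(int)    /* defines max_int    */
DEFINE_MAX_FN(double) /* defines max_double */
```

This works, but it is exactly the "horrific macro scheme" being described: no type checking on the macro body until expansion, mangled names by convention, and unreadable compiler errors, where C++ gets the same specialization from a one-line template.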
"So here is the bottom line. If you really need your code to run as fast as possible, and you are planning to spend half your development time doing profiling and optimisation, then go ahead and write it in C because no other language (except possibly Assembler) will give you the level of control you need. But if you aren't planning to do a significant amount of micro-optimisation then you don't need C. Haskell will give you the same performance, or better, and cut your development and maintenance costs by 75 to 90 percent as well."
Note the 'same performance or better' in there.
Maybe you missed that bit in the original article?
This wasn't a large effort by any stretch of the imagination, and a factor of 5 difference compared to the Haskell code isn't even in the same ballpark as "the same performance", and it's about a factor of 10 difference with the C code listed in the original article. You'll notice no micro-optimizations were done in the rewrite, just some global re-arranging of the way data flows through the program.
The rest is in response to the misleading title, that Haskell is 'faster' than C, faster to program, faster to debug, easier to maintain and so on, while making claims about performance that are not borne out by real-world comparisons.
Speed was the wrong thing to compare on here, and should not have been part of the original article without a serious effort to become equally proficient in both languages.
I think it's worth pointing out, though, that idiomatic C is probably going to be more consistently performant. It seems common to run into situations in Haskell where one change can cause a 10X speedup, but I don't see that nearly as often with C code. I don't have a lot of evidence on hand to support this, just what I've observed personally. Does that seem fair? Relevant?