
I don’t really care which functional programming advocate you’re quoting. They’re all liars when they make these claims.

You can say SSA, static guarantees, internal mutability, blah blah blah all you want. When third-party measurements (not anecdotes specifically chosen to make FP look good) stack up to the claims, we can have a better conversation.

It’s not looking good though, because these claims of “actually, Haskell is faster than C because compiler magic” have been circulating since before Haskell was even stable, some 15 years ago, and they’ve been lies for just as long.




This is an objection that people seem to raise disproportionately often - particularly when they have no performance tests and never profile their code. The one time I've seen a real-world head-to-head comparison between C and Haskell, the Haskell version performed 5x better than the C version. In fact I've literally never known a real-world Haskell program to have serious performance problems (I've known some that needed a small amount of time to be spent on profiling).

I realise this is much less satisfactory than a convincing theoretical solution; all I can say is that it just doesn't come up in practice.


I thought this was the most telling line of the article: "Ultimately the first factor of performance is the maturity of the implementation."

That supports a common conviction held by fans of functional programming: if all of the years of arduous optimization that have been poured into GCC had instead been poured into (say) GHC, then Haskell would be even faster today than C is.

That is, to many people functional programming languages seem to have more potential for performance than lower-level procedural languages, since they give the compiler so much more to work with, and in the long run a compiler can optimize much better than a programmer. But so much more work has been put into the C-style compilers that it's hard to make a fair comparison. It's still hard, but this experiment seems to give some solace to the FP camp.


That's actually something I've been saying for quite a while when people bring up microbenchmarks complaining about how a language like Haskell is slow.

Like, yes, no question, if you try to implement a tight loop in Haskell vs C, C is going to come out a lot faster.

However, no one writes Haskell like that! People rely pretty heavily on standard libraries (or even third party libraries), and on the optimizations afforded to them by the "rich" language of Haskell.

C might be capable of being faster, but that doesn't imply that a C program will inherently be faster than a similar Haskell program, especially if the programs are relatively large.
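For what it's worth, the kind of tight loop being discussed is easy to pin down. A minimal sketch (the function and data are made up for illustration) of the sort of numeric kernel where C reliably wins:

```c
#include <stddef.h>
#include <stdint.h>

/* Sum of squares over a buffer: a tight numeric loop of the kind
 * a C compiler reliably turns into near-optimal machine code. */
uint64_t sum_squares(const uint32_t *xs, size_t n) {
    uint64_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (uint64_t)xs[i] * xs[i];
    return acc;
}
```

An idiomatic Haskell version would instead lean on a library fold and trust the compiler; the point above is that the gap only matters when code like this dominates the runtime.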


> This article is in response to an earlier one comparing Haskell and C, which made the claim that Haskell beats out C when it comes to speed.

Perhaps my reading comprehension of the original post is different from Jacques' (or I'm just wrong), but I don't think that the original article made such a claim. Here's the TL;DR of the original article:

> TL;DR: Conventional wisdom is wrong. Nothing can beat highly micro-optimised C, but real everyday C code is not like that, and is often several times slower than the micro-optimised version would be. Meanwhile the high level of Haskell means that the compiler has lots of scope for doing micro-optimisation of its own. As a result it is quite common for everyday Haskell code to run faster than everyday C. Not always, of course, but enough to make the speed difference moot unless you actually plan on doing lots of micro-optimisation.

From this, I understood that in a larger program, most programmers wouldn't be doing the kind of micro-optimizations that they do for the Benchmarks Game. I figure that most code would be written following this pattern:

* Write code to follow the spec (usually without thinking too much about performance)

* Run the code, and evaluate its performance

* If the performance is good enough, you're done

* If the performance needs to be better, run a profiler

* Find the biggest bottleneck and optimize it

* Re-run the code and re-evaluate its performance
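The kind of fix that workflow tends to surface can be sketched in C (a hypothetical example, not from either article): the first version follows the spec naively, and a profiler points at a hot spot with a mechanical remedy:

```c
#include <string.h>

/* Naive first pass: re-computes strlen on every iteration,
 * turning a linear scan into O(n^2). */
size_t count_vowels_naive(const char *s) {
    size_t count = 0;
    for (size_t i = 0; i < strlen(s); i++)
        if (strchr("aeiou", s[i]))
            count++;
    return count;
}

/* After profiling: hoist the length computation, restoring O(n). */
size_t count_vowels_fast(const char *s) {
    size_t count = 0, n = strlen(s);
    for (size_t i = 0; i < n; i++)
        if (strchr("aeiou", s[i]))
            count++;
    return count;
}
```

Both versions satisfy the spec; only measurement tells you the first one needs the change.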

The original article took a micro-benchmark (a mistake in my opinion, because it's easy to micro-optimize every single detail of such code) and showed that in Haskell the first version was too slow, but the profiler made the cause easy to find and the fix trivial. In C, by contrast, the profiler showed no problem in the user's code, so the bottleneck had to be in the standard library, and fixing it required changing the design of the program, moving it further from the spec. I felt this was the author's real point: to get maximum performance from a C program, you have to structure it the way a computer would, not the way a human would imagine it, and that makes the code harder to maintain later on.


I'm surprised by how dramatic the difference is between the speed of C and Haskell. One of my professors (at The University of Glasgow, so appropriately a Haskell fan) once claimed that it had "C-like performance".

I suppose that's the point of this paper though: "C-like performance" is a terribly vague term, meaningless without knowledge of the specific comparisons being made.


I think it's worth pointing out, though, that idiomatic C is probably going to be more consistently performant. It seems common to run into situations in Haskell where one change causes a 10x speedup, but I don't see that nearly as often with C code. I don't have much evidence on hand to support this, just what I've observed personally. Does this seem fair? Relevant?

To be honest, your comment strikes me as far more "religious" than the article, which (since you obviously didn't RTFA) documents optimizing a particular bit of Haskell code to be on par with an equivalent implementation in C. The article is not saying that Haskell (the language) is faster than C.

If all you have is a hammer...


If Haskell is easier to optimize than C, then it could easily be that there's some amount of programmer effort, for which expending that much effort in Haskell yields a faster program than expending that much effort in C. If that amount of effort is in the range of effort most people are able to expend on a class of projects, then Haskell is faster than C for those projects. It may even be that those are most projects.

It is, of course, not the case that Haskell is faster than C with arbitrary effort expended tuning to the specific hardware - no one is claiming that.


From TFA:

"So here is the bottom line. If you really need your code to run as fast as possible, and you are planning to spend half your development time doing profiling and optimisation, then go ahead and write it in C because no other language (except possibly Assembler) will give you the level of control you need. But if you aren't planning to do a significant amount of micro-optimisation then you don't need C. Haskell will give you the same performance, or better, and cut your development and maintenance costs by 75 to 90 percent as well."

Note the 'same performance or better' in there.

Maybe you missed that bit in the original article?

This wasn't a large effort by any stretch of the imagination, and a factor-of-5 difference compared to the Haskell code isn't even in the same ballpark as "the same performance", and about a factor-of-10 difference from the C code listed in the original article. You'll notice no micro-optimizations were done in the rewrite, just some global rearranging of the way data flows through the program.

The rest is in response to the misleading title claiming that Haskell is 'faster' than C, faster to program, faster to debug, easier to maintain and so on, while making claims about performance that are not borne out by real-world comparisons.

Speed was the wrong thing to compare on here, and should not have been part of the original article without a serious effort to become equally proficient in both languages.


"Third, a comparison on speed between Haskell and C is almost meaningless; speed is not Haskell's strongest point and it would be folly to bench it against C for that particular reason."

I disagree wholeheartedly. If I am choosing Haskell over C, I am giving up some (possibility of) performance. The question of how much is an important piece of information, entirely relevant to that decision.

As I observed in a comment on that other post, the actual performance of both the Haskell and the C depends on the amount of effort expended to make them faster (first in general, and then possibly on a particular architecture). At the limit, the C beats the Haskell by some margin - the size of that margin is informative; but that's also not the whole story - what the margin is earlier also matters, and for a particular project, it might matter more.

This is not to say that the particular benchmarks in the earlier article were good choices - I don't have a clear position on that.


So GHC is fast as long as it's only compared to slower languages?

> Granted, functional languages are never going to be as fast as low-level languages like C, but that doesn't mean they're necessarily slow either.

That's the point.


I don't think it was an assumption; I think Steve was speaking from experience. As someone working extensively in C and doing a fair bit of Haskell, I'm finding my C to be more verbose and less safe. The C runs faster, in my application; sometimes I write it faster and other times slower, depending on a bunch of factors.

Having seen hundreds of these types of blog posts touting faster-than-C (superluminal?) benchmarks in arbitrary languages, I'm extremely skeptical. Real-world applications are rarely limited solely by small chunks of code simple enough to be optimized independently of the rest of the program. Accurate comparisons should be done on large, complex code bases that mirror an equivalent C program.

Unfortunately, those don't exist, so in my mind the true performance potential of Haskell is still unknown. I do have high hopes for the language, especially since whole-program optimization and aggressive inlining/code folding should yield very, very efficient code, but as of yet the only large programs in Haskell remain GHC and darcs, and darcs is extremely slow.

Still: a single benchmark showing a good result is better than one showing a bad result.


1) Nowhere in my comment did I say C is closer to the machine.

2) Despite #1 C is still closer to the machine than Haskell, and I'm not sure how you could maintain otherwise

3) Nearly all of the C optimizations will, at best, produce a speedup by a constant factor. Changes that add (or remove) an O(n) factor can and do happen in Haskell.


"but can you show me any demo where a FP language outperforms C for parallel tasks?"

That's not a fair comparison. C proponents should ask, "Is there a demo where (alternative) outperforms C for (application domain) when C has (alternative)'s safety checks enabled, too?" Memory and dataflow safety in C, implemented with checks instead of exotic techniques, easily adds 20-500% overhead depending on the application. Meanwhile, these safer or more functional languages are usually within 200%, with some Common Lisps and Schemes matching or outperforming C in a few benchmarks. Concurrent Haskell wouldn't be giving me a performance reduction: I'd be gaining capabilities in development and reliability in exchange for a performance sacrifice, one that shrinks every year for at least one implementation of each language or style.
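To make "C with safety checks enabled" concrete, here is a rough sketch of a bounds-checked access, the kind of per-access branch those overhead figures come from (the struct and names are illustrative, not from any real library):

```c
#include <stddef.h>
#include <stdint.h>

/* A bounds-checked array view, approximating the checks that safer
 * languages insert automatically on every access. */
typedef struct {
    const int32_t *data;
    size_t len;
} slice;

/* Checked read: one extra compare-and-branch per access, which is
 * where the measured overhead of "safe C" accumulates. */
int32_t slice_get(slice s, size_t i, int *ok) {
    if (i >= s.len) {
        *ok = 0;
        return 0;
    }
    *ok = 1;
    return s.data[i];
}
```

An unchecked `s.data[i]` is one load; the checked version adds a compare and a branch on every read, and in access-heavy code that difference is exactly what the comparison above is about.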


Could everyone on HN just take a course in language theory so we can all stop with these stupid best-language flame wars that have been popping up for a week now?

Hopefully it would allow everyone to realize that a language is just syntax and semantics, and that a compiler is just a program like any other. Nothing sacred here. Hell, you can even do imperative programming in Haskell if you wish; writing an interpreter to do so is not particularly difficult.

With a bit of luck, everyone would also understand that it is the compiler that builds what is actually executed, so the expression "speed of a language" is utter nonsense. You can only speak of the speed of an implementation, which, yes, varies widely among compilers for the same language.

So, yes: Haskell semantics encourage functional programming, C semantics encourage imperative programming, both are Turing complete, and yes, GCC is currently better at optimising C code than any Haskell compiler is at optimising Haskell. Nothing new under the sun. Can we all go back to a normal activity now?


It's not that much of a claim if the implicit parallelization works well. Also, some functional languages are getting very nice optimizations, like deforestation and fusion, that make them really fast.

Again I want to emphasize that you were comparing different algorithms on different data structures. It's like someone made a benchmark using the naive recursive Fibonacci definition and then you implemented the iterative version in another language and concluded from that that the other language must be much faster. The different algorithm is what gave you (most of) the speed up, not the language.

I mean, I don't doubt that C++ is in fact faster than Haskell, just not by that much.
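The Fibonacci analogy spelled out in C (a standard illustration, not code from either benchmark): same language, different algorithm, wildly different cost.

```c
#include <stdint.h>

/* Naive recursive definition: exponential time in n. */
uint64_t fib_naive(unsigned n) {
    return n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2);
}

/* Iterative version: linear time. The speedup comes from the
 * algorithm, not from the language it happens to be written in. */
uint64_t fib_iter(unsigned n) {
    uint64_t a = 0, b = 1;
    while (n--) {
        uint64_t t = a + b;
        a = b;
        b = t;
    }
    return a;
}
```

Port `fib_iter` to any language and it beats `fib_naive` in any other; that tells you nothing about the languages, which is the point being made about the cross-language benchmark above.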


"Haskell is easier to optimize than C" != "Haskell is faster than C".
