Haskell can be compiled to within epsilon of C when using strict evaluation, mutable state, and unboxed types (look at the shootout implementations; the majority are in this style), but lazy evaluation and functional data structures have a real cost.
It's not merely that the ghc developers have been insufficiently clever and diligent.
You are of course entitled to your own opinion, but I would say that’s unfairly harsh towards Haskell. GHC Haskell not only compiles down to a native binary, it can also compile via C--, and both paths can run performantly.
I started off with FP via lodash, then moved to Python and its built-in functions, and I’ve tasted the benefits of applying functional programming in production; that’s what makes learning Haskell interesting and worthwhile to me. I’m looking forward to bringing what I’ve learned from Haskell to the rest of the code I write, even if it’s not in Haskell.
Haskell's perceived as an impractical language because it is one †. There aren't many projects where you'd get any plausible benefit from using Haskell instead of some other reasonable garbage-collected language. Lazy evaluation is a big risk if you aren't writing programs that simply start and finish; it's certainly one I've suffered from. At the line-by-line level, purely functional code also takes more effort to edit once written than imperative code does.
†Edit: Well, it's practical for some stuff, but it's not particularly more practical than other languages, and there's a big spooky bit of danger that comes with it.
I think the reason is that it's easier to detect a lack of data dependencies in C than it is to write an efficient Haskell compiler. Well, that, and the fact that there are at least an order of magnitude more people working on the former than the latter.
I worked professionally in Haskell for a long time and I cannot disagree with you more. This is always the promise of strict functional languages but in a business setting where the very nature of what you build has to be arbitrarily mutable at the whim of competing interests between stakeholders, engineers, customers, etc., it just doesn’t work.
You end up with the same spaghetti-code messes, unexpected runtime errors, indecipherable crashes, etc. no matter what degree of strict verification tooling you use. The introduction of lazy evaluation, the difficult-to-debug implications of tail recursion for memory consumption, elevating IO into the type system, burying details deep in type-class patterns, relying on dozens of special-case compiler directives to enhance Haskell... it becomes the same shit code mess as any other paradigm.
I've been writing a lot of Haskell recently. I had to switch to embedded C and the difference between the two was striking. The C program I was editing compiled, but failed to do anything useful. To fix that, I had to reorder the stateful functions and #define the right things without much help from the compiler.
Had I been using Haskell and a reasonably well designed library, these mistakes could've been caught at compile time, with the compiler telling me exactly what I did wrong. The kind system described in the article is how you would implement such a library, or something even more advanced.
Sure, comparing embedded C to Haskell is disingenuous, but it does show how far programming languages have come. And I think my experience could definitely happen in many other mainstream languages.
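As a sketch of what "caught at compile time" can mean here: with the `DataKinds` extension, a phantom type parameter can track device state, so calling a function before initialisation is rejected by the type checker instead of failing silently at runtime. (`Uart`, `initUart`, and `send` below are hypothetical names, not a real library.)

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- The phantom parameter `s` never exists at runtime; it only records,
-- in the type, whether the peripheral has been initialised.
data State = Uninitialised | Initialised

newtype Uart (s :: State) = Uart Int  -- Int stands in for a register base address

uartHandle :: Uart 'Uninitialised
uartHandle = Uart 0x4000

-- Initialisation is the only way to obtain an 'Initialised handle.
initUart :: Uart 'Uninitialised -> Uart 'Initialised
initUart (Uart addr) = Uart addr  -- real code would poke registers here

send :: Uart 'Initialised -> String -> IO ()
send _ msg = putStrLn ("send: " ++ msg)

main :: IO ()
main = send (initUart uartHandle) "hello"
-- send uartHandle "hello"  -- type error: expected 'Initialised, got 'Uninitialised
```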
Yeah I have no particular axe to grind on Haskell’s behalf. I’m a Lisp guy who has to write C++ all day like everyone else.
Lazy is a ridiculous default, the combinatorial explosion created by GHC extensions makes C compiler quirks look like a gentle massage in comparison, and building Haskell is a friggin migraine on wheels.
But for whatever reason, those folks are, on average in my experience, serious as a heart attack.
Haskell's main problem is that it is over-zealous, and built on axioms that make it a poor fit for real people writing real code. Everything else flows from that core issue. Functional purity and lazy evaluation are interesting, but when you can't toss a printf debug or log statement into a function without changing function signatures all the way up, it's not going to be popular.
Pragmatic, sloppy languages will always be more popular, because they are more forgiving.
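For what it's worth, GHC does ship an escape hatch for exactly the printf-debugging case: `trace` from `Debug.Trace` prints to stderr from pure code, so no signature has to change while you debug. (The `half` function is a made-up example.)

```haskell
import Debug.Trace (trace)

-- A pure function can't call putStrLn without becoming IO and forcing
-- every caller's signature to change. `trace` side-steps that: it
-- prints its message to stderr and returns its second argument, so
-- the function stays pure as far as the types are concerned.
half :: Int -> Int
half n = trace ("half called with " ++ show n) (n `div` 2)

main :: IO ()
main = print (half 10 + half 4)  -- the trace messages appear on stderr
```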
I don't see why Haskell's in an awkward situation. Adding more strictness support to GHC is not particularly hard and it's happening already. It's far easier to add strictness to GHC than to launch a competitor production-strength pure functional language.
(On the long term, probably we'll see a brave new type-theoretical language, when much of the current research crystallizes into something tangible; but until then I don't see GHC being overshadowed by competitors. There is lots of inertia and the extant GHC infrastructure is extremely valuable. Dependent types are very important and adding them to GHC is very hard, but it's still happening, so Idris & co. will have a difficult time trying to fish in that pond)
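Opt-in strictness is indeed already there today; for example, a bang pattern on an accumulator (via the `BangPatterns` extension; `sumStrict` is a hypothetical name) forces evaluation at each step:

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bang on `acc` forces the running total at every step, so this
-- loop runs in constant space instead of building a chain of thunks.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs
```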
This is a terrible and also hilarious argument. Instead of saying Haskell has too many newfangled ideas, you're arguing that it has too few.
So Haskell has newfangled innovations compared to the big players, and it _also_ has a mature ecosystem in widespread use.
This puts it ahead of Purescript and Idris for adoption in large scale projects (although I would readily agree that both those languages are pretty great too).
I maintain that lazy evaluation works great, it just requires optimizing along different paths than devs are used to -- but it doesn't incur more "overhead" to optimize code than is the case in typical strict languages, which _also_ have to pay that cost, just in different sorts of optimization and reasoning.
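A small example of the upside: laziness lets you compose a pipeline over a conceptually infinite producer, and only the demanded elements are ever computed (`firstFiveSquares` is a hypothetical name):

```haskell
-- `[1 ..]` is infinite, but laziness means only the five elements
-- actually demanded by `take` are ever generated and squared.
firstFiveSquares :: [Int]
firstFiveSquares = take 5 (map (^ 2) [1 ..])
```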
The thing is, barring the intervention of marketing departments, compilers don't get more dumb over time. If this is possible now, it ought to be easy soon (for some value of "soon").
Haskell's main benefit, to my mind, is its type safety. That's what saves programmer time in the long run. If that can be fused (hah!) with decent performance - even if it's not quite up to C without contortions - it's a net win.
I’m far from an expert Haskell programmer, but I have written it for money and had the code run on big fleets.
Haskell is really cool, but IMHO is held back from practical adoption by two things, one technical, one cultural:
- Lazy by default maps weirdly onto von Neumann machines. Laziness is cool but doesn’t pull its weight as the default. Performance in time and space becomes even harder to reason about than it already is on modern architectures, and debugging is...different.
- GHC extensions are like crack for the kind of people who overdo it with C++ templates. Pragmatic, legible Haskell in the RWH tradition isn’t safe from the type-theory zealots even in a production system. Cool research goes on in GHC, but it’s hard to keep everyone’s hand out of that cookie jar in prod.
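The canonical example of that space-reasoning difficulty is the lazy left fold, which quietly builds a million-deep thunk before doing any arithmetic; the strict `foldl'` from `Data.List` avoids it:

```haskell
import Data.List (foldl')

-- Lazy foldl suspends every addition: (((0+1)+2)+...) sits in the
-- heap as one huge thunk until the result is demanded, which can
-- exhaust memory or the stack in unoptimised builds. foldl' forces
-- the accumulator as it goes, so it runs in constant space.
leakySum, strictSum :: Int
leakySum  = foldl  (+) 0 [1 .. 1000000]
strictSum = foldl' (+) 0 [1 .. 1000000]
```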
The difference is that with Haskell you can wrap a nice, clean, type-safe interface around all that ugly optimization. With C, you're still dealing with casts, null pointer checks, array bounds checks and all sorts of other tedious bug sources that Haskell users don't have to deal with -- unless there's a library bug, of course.
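A minimal sketch of that wrapping pattern, using bytestring's unchecked indexing (the `byteAt` wrapper itself is hypothetical): the bounds check happens once at the API boundary, and callers only ever see a total, `Maybe`-returning function.

```haskell
import qualified Data.ByteString as B
import qualified Data.ByteString.Unsafe as BU
import Data.Word (Word8)

-- The "ugly" part (unsafeIndex skips bounds checking) stays inside
-- the wrapper; the exported interface can never crash on a bad index.
byteAt :: B.ByteString -> Int -> Maybe Word8
byteAt bs i
  | i >= 0 && i < B.length bs = Just (BU.unsafeIndex bs i)
  | otherwise                 = Nothing
```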
Haskell sucks because mutability and strictness are extremely important for writing large complex programs, and Haskell deliberately makes both of these things inconvenient.
Also, Haskell's ecosystem and tooling are still crappy, probably because writing complex programs in Haskell is needlessly difficult (due to laziness and immutability).
And the "type-safety" guarantees are a red herring. They are not that useful in practice.
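To illustrate the inconvenience being described (with a made-up `countEvens` example): even a trivial mutable counter needs the `ST` monad and explicit references, where C would use a plain local variable.

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Mutation is available, but it's ceremony: allocate a reference,
-- thread it through, and strictly bump it inside the ST monad.
countEvens :: [Int] -> Int
countEvens xs = runST $ do
  ref <- newSTRef (0 :: Int)
  forM_ xs $ \x ->
    if even x then modifySTRef' ref (+ 1) else pure ()
  readSTRef ref
```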
Yes. Haskellers are just too much in love with compilers and parsers. Those are much easier to write in Haskell, but not enough people have grappled with, and suffered through (and failed at), writing one in e.g. C to appreciate that.
No, on the contrary, my experience with Haskell is that my code is mostly bug-free, but it ends up less performant because you can accidentally create huge thunks in the heap, and it consumes too much mental stamina.
However, there exists an intermediate plane of abstraction over C and under Haskell that is absolutely horrendous and results in all sorts of weird bugs and unpredictable situations.
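One common way those heap thunks sneak in: accumulating into a lazy map stores unevaluated `(+)` closures as values, whereas the `Data.Map.Strict` API forces them on insert (the `tally` helper is hypothetical):

```haskell
import qualified Data.Map.Strict as Map

-- With the Strict API the counts are forced as the map is built;
-- the lazy Data.Map version would instead pile up (+1) thunks
-- as values, one per occurrence.
tally :: [String] -> Map.Map String Int
tally ws = Map.fromListWith (+) [(w, 1) | w <- ws]
```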
Haskell has an elegance, and can be written as simply as C. Usually fast enough, and can be optimized as well. The downside is that you will be sucked into a rabbit hole of academic type theory and wonder how best to express your system as a Free Monad instead of bashing it out like any sane C programmer. Just kidding, someone already figured out those hard parts for you, you just forgot to browse for it on Hackage.