
I used to write a lot of Haskell and tbh I find the idea of restricting IO to the edges of an application just as important in traditional imperative languages. It's just good design, not a PITA.

It seems to be a common viewpoint that Haskell makes IO difficult. I personally found that it made IO more powerful, because I was being more explicit about it. It also allowed higher-order functions to abstract common patterns in IO. The real difficulty in Haskell came not from monads or type safety, but from getting my brain to understand lazy-by-default evaluation.




So Haskell makes IO harder by making you learn which modules use lazy I/O and avoid them? I can agree with that. But I don't think that's what people generally try to say when they claim IO in Haskell is hard or bad. People think the IO/effect segregation in Haskell makes IO bad or hard, when it's in fact just the opposite.

How is the imperative way painful in Haskell? Nothing stops you from just writing everything in the IO monad if you feel your algorithm is easier to write in an imperative style.
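For instance, a mutable counter in a do block reads just like the imperative version (a minimal sketch of my own, not anyone's production code):

```haskell
import Data.IORef

-- Plain imperative style inside IO: allocate, mutate in a loop, read back.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  mapM_ (\i -> modifyIORef counter (+ i)) [1 .. 10]
  total <- readIORef counter
  putStrLn ("sum = " ++ show total)  -- prints "sum = 55"
```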

I dunno, so much of the IO monad is specific to Haskell and lazy evaluation. Generally I think of it as Haskell's quirky way of handling the real world in a pure lazy mathematical construct and not something fundamental.

Oh, for sure, I still love monads. I miss Haskell's do syntax in every other language. I think it's a great design pattern that starts popping up constantly once you know where to look. My objection is being forced to use monads by the language's lazy-by-default semantics. You get this problem in Haskell where the IO monad eventually pollutes a huge chunk of your code, because you can't safely sequence side effects without it. Sometimes I just wanted an escape hatch that would let me perform side effects I knew to be safe/harmless. Haskell has unsafePerformIO, but it truly is unsafe: you can't guarantee the execution order of your side effects, which makes it useless as an escape hatch for a lot of purposes.
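To make the ordering hazard concrete, here's a small sketch (the names are mine): GHC is free to evaluate these two bindings in either order, so nothing fixes whether "A" or "B" prints first.

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- Without NOINLINE pragmas, an optimizing GHC may even duplicate
-- or drop these effects; either way, their relative order is
-- determined by evaluation order, which is unspecified here.
logA :: Int
logA = unsafePerformIO (putStrLn "A" >> pure 1)

logB :: Int
logB = unsafePerformIO (putStrLn "B" >> pure 2)

main :: IO ()
main = print (logA + logB)  -- "A"/"B" print order is unspecified
```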

I think the more important part in the case of the IO monad is not that it might fail, but that it’s a side effect in a lazy language. Not super sure though. Haven’t used much Haskell.

Thank you. This was a very informative comment. I knew that Haskell didn't originally have monadic IO but the reasoning behind its adoption never occurred to me.

I like the IO monad in Haskell because it places a tax on mingled code. It's easier to keep IO at the top level and just call into pure code.

Though The Haskell Way includes the IO monad, which is imperative code in practice :)

This is Haskell-specific, it sounds like. I agree, the IO monad is really quite inconvenient sometimes.

I work in OCaml, which is also a functional language, but prints can be added in single lines. I address this point in Lecture 19 (Imperative Programming), actually, but my perspective is: we invented immutability and purity to serve us, and we need not be fanatically beholden to them. In my opinion, Haskell goes in that direction, since every use of IO drags in the IO monad.

A little mutability is OK. Functional programming is about the avoidance of side effects, more than simply forbidding them.


>`IO` is a little bit harder because it wraps up the interpreter pattern (distinction between description and action) in a monad

I've always thought that the trick to explaining the IO monad in Haskell is to realize that IO in Haskell has very little to do with monads. Fundamentally, Haskell values of type (IO a) may have side effects when evaluated, the evaluation semantics of Haskell are such that expressions are (observably) evaluated at most once†, and the >>= operator forces sequential evaluation by threading through a hidden world parameter and doing some special compiler magic. (It's a common myth that sequential evaluation is forced merely by the type of >>=, but this is just false.) Once you've grasped this underlying mechanism, you can then observe that `return` and `(>>=)` form a monad.

† This is subtle and somewhat beyond my pay grade. Strictly speaking the semantics of Haskell only require non-strictness, which doesn't directly entail any limit on the number of times an expression is evaluated. But in practice, it is safe to assume that a Haskell implementation will evaluate any expression with side effects at most once. I'm sure someone else can do a better job of explaining exactly to what extent, if any, this property is entailed by a non-strict semantics.
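For what it's worth, the hidden-parameter idea can be sketched in plain Haskell (a conceptual model only; GHC's real IO uses an unboxed RealWorld token plus the compiler magic mentioned above, and IO'/World here are made-up names):

```haskell
-- A state-threading model of IO: an action is a function from the
-- world to a result plus a new world.
data World = World

newtype IO' a = IO' (World -> (a, World))

returnIO :: a -> IO' a
returnIO a = IO' (\w -> (a, w))

-- (>>=) analogue: the second action can't run until the first has
-- produced the new world token, which is what forces sequencing.
bindIO :: IO' a -> (a -> IO' b) -> IO' b
bindIO (IO' m) k = IO' $ \w ->
  let (a, w') = m w
      IO' m'  = k a
  in m' w'
```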


But it's considered good practice in Haskell to keep IO out of the main logic of the program and basically use it as little as possible. Haskell is certainly not an IO-focused language like Go and Rust. The IO monad is almost more of a deterrent than a tool.

What I'm arguing is that you can still think and write with an imperative style using the IO monad if you want, so I'm not sure what the issue is.

Edit: As an aside, I already know that "Monad" doesn't mean impure (having implemented an IO monad in my spare time to find out) and that was implied when I said "100% pure functional [...] like Haskell".

They are implemented purely; nonetheless, they do allow side-effecting (impure) programming.


I agree, but I think it still takes some coding self-discipline to write code with a functional core and stateful shell in Haskell. There’s little stopping you from having every return type wrapped in the IO monad. It’s not any more unnatural to do that than it is to code in any imperative language.

The bulk of real Haskell programs don't restrict side effects so granularly that it becomes unmaintainable. As always, there are tradeoffs. In most applications, the code at application boundaries lives in some monad based on IO.

For example, Yesod gives you the Handler monad, which is based on IO and really just exists to provide access to runtime/request information, so at the API handler level you can do anything you need to. But what's nice is that not everything has to live in IO, and so in places where it makes sense, such as a parser, we can say the parser doesn't need IO, because IO there wouldn't make sense.

And my point here is that just having the ability to separate IO from non-IO is very useful; we don't have to split every single effect into its own special, separate type; in many cases that would just be overkill.
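Stripped of the Yesod specifics, the separation looks like this (a toy sketch; parsePort and the handler logic are hypothetical names of mine):

```haskell
import Data.Char (isDigit)

-- The parser is pure: no IO in its type, so it can't touch the world.
parsePort :: String -> Maybe Int
parsePort s
  | not (null s) && all isDigit s = Just (read s)
  | otherwise                     = Nothing

-- Only the boundary code lives in IO and calls into the pure part.
main :: IO ()
main = do
  input <- getLine
  case parsePort input of
    Just p  -> putStrLn ("listening on port " ++ show p)
    Nothing -> putStrLn "bad port"
```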


Actually, I think I/O is really clear in Haskell. Haskell is a pure functional language, and in that world side effects are not possible (since a function should always return the same value, given the same input). For this reason, functional languages like Haskell introduce monads, which allow you to model imperative languages (with state and order). I/O is performed in one such monad (the IO monad), and programming in the IO monad looks just like imperative programming. The tricky part is that you cannot get pure values out of the IO monad (since it is impure), which may entice programmers to let the IO monad 'leak' into programs too much, rather than lifting pure functions into the monad.
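Lifting looks like this in practice (a tiny sketch; "input.txt" is just a placeholder file name):

```haskell
import Data.Char (toUpper)

-- The logic stays pure...
shout :: String -> String
shout = map toUpper

-- ...and is applied to the result of an IO action rather than
-- being rewritten inside IO.
main :: IO ()
main = do
  contents <- readFile "input.txt"
  putStr (shout contents)
  -- equivalently: fmap shout (readFile "input.txt") >>= putStr
```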

Yes, monads can be hard to get in the beginning, but they never felt like tricking the compiler to me.


But the point is the language doesn't need monads. You could express the same solution to the problem more verbosely without monads if the language couldn't express monads. Consequently, monads are beside the point.

Haskell needs to constrain the order of evaluation when it's dealing with the real world. There's a solution to that problem (require and yield `World`). Then we choose to use `IO` (i.e. a monad) to implement that solution.

But the abstraction that is a monad is independently justified and worthwhile in languages like Typescript (I wish!) and ML which don't have the same IO issue.

As long as we teach monads as if they have anything to do with IO we're confusing people. IO uses monads because it's convenient and apt. But monads don't control order of evaluation. They've got nothing to do with order of evaluation.
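A quick illustration: the same `(>>=)` pattern in Maybe, where there's no IO and no evaluation-order story at all (safeDiv/calc are my names, not from the thread):

```haskell
-- Chaining computations that may fail; (>>=) here is about
-- propagating Nothing, not about sequencing effects.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = safeDiv a b >>= \q -> safeDiv q c

main :: IO ()
main = print (calc 100 5 2)  -- Just 10
```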


The advantage is that you can reason about the correctness of code. Sure, if you write an entire code base yourself, then depending on its size you may be able to reason about all the side effects in your head. But on a larger project, or one developed by a team, all bets are off. It is really important to have the effects embedded in the type system, even with a large test suite.

On top of this, having professionally written C/C++ for years in the embedded-systems space, I can say that monads for IO are not awkward. In fact, they feel far more powerful, because you are not limited to the awful semicolon operator for chaining effects.
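For example, effects compose like ordinary functions (a sketch; the file names are placeholders):

```haskell
import Control.Monad ((>=>))

-- Kleisli composition: chain effectful steps the way you'd
-- compose pure functions, no semicolons required.
readAndCount :: FilePath -> IO Int
readAndCount = readFile >=> (pure . length)

main :: IO ()
main = mapM_ (readAndCount >=> print) ["a.txt", "b.txt"]
```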

Lastly, FP != slower (even on a single core). I agree that Haskell can be hard to reason about performance-wise at times, due to its choice of lazy-by-default evaluation. But have you ever heard of Sisal? Many strict-by-default FP languages are quite fast, though they sacrifice purity. And to be honest, with experience, Haskell can be damn fast too.


The fact that you have to use the IO monad feels completely ugly and different from what I get from Haskell algorithm-wise, IMHO.

I feel that one of the biggest misconceptions about monads and IO in Haskell is that they are an "escape hatch" in the language that allows you to perform imperative actions. This is plainly false. Monads are just another functional programming idiom, defined by two regular functions, `return` and `(>>=)`.

What monads allow you to do is have a purely functional model to represent IO, among a great many other things. Haskell is, in a sense, the best imperative programming language ever created, because it lets you manipulate imperative statements in a purely functional manner. You can stick statements in data structures, compose them, replicate them, generate them and only evaluate them when you want to.
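Concretely, IO actions are ordinary values until something runs them (a minimal sketch of my own):

```haskell
import Control.Monad (replicateM_)

-- Statements stored in a data structure, like any other values.
greetings :: [IO ()]
greetings = [putStrLn "hi", putStrLn "hello"]

main :: IO ()
main = do
  sequence_ greetings                  -- compose and run them
  replicateM_ 3 (putStrLn "again")     -- replicate a statement
  let _unused = putStrLn "never runs"  -- built, but never executed
  pure ()
```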

