
>It leads to recursion where iteration is more natural

Viewing iteration as more 'natural' than a fold seems to come down mostly to taste. And hell, if you really want iteration, you can easily get that in both effectful and non-effectful variants through monads.

>ask a functional programming zealot to implement an O(1) hash map in a pure way—they will usually stammer, try to move goal posts, before finally admitting it's not possible

Except that no one - not even Haskell zealots - argues that you never need effects; the argument is simply that effects should be encapsulated. In Haskell, nothing prevents you from using mutable state if you really need it, and mutable hash tables can be easily implemented using something called functional state threads [1][2].

[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...

[2] http://hackage.haskell.org/package/hashtables-1.2.4.1
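
(To make the encapsulation point concrete, here's a minimal sketch using the ST monad with a plain STRef accumulator rather than a hash table; the hashtables package in [2] exposes mutable hash tables in the same ST monad. The function name is just for illustration.)

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, modifySTRef', readSTRef)

    -- Mutation happens freely inside the ST computation, but runST
    -- guarantees none of it leaks out, so sumST is pure to its callers.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc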




> In a purely functional language all four are purely functional, but this is (IMO) needlessly restrictive.

I concur. My two favorite languages nowadays (Elixir and Clojure) are far from being purely functional. They are functional enough that mutating state is awkward, but if you do need it, it's there.

I also think having immutable data structures by default is the saner choice.

> It leads to recursion where iteration is more natural, awkward choices of data structures or even plain impossibility of certain algorithms/data structures

Anecdotal, but I rarely use recursion/for loops, opting instead to define auxiliary functions that work on one element and use map/reduce. Really, I don't remember the last time I needed recursion to solve a problem (for example, I know recur exists in Clojure, but I never see it in the code I work with every day).


> > > Traditional hash tables are imperative data structures, and Lisp code (or Scheme code, at least) typically does not use them because of this. Association lists, which can be represented as literals, are persistent and {snip}

> > I don't understand that.

> Introducing state into programs makes them harder to reason about, thus Scheme programmers generally discourage the use of mutable data structures when a persistent data structure would have worked.

Oh, maybe you typoed and meant to say that traditional hashtables are mutable data structures? In that case, I see what you mean; in Clojure, hashmaps, vectors, sets, and lists are all immutable and persistent. In Scheme, which data structures are persistent, or mutable/immutable?

> The way alists are used, only the first pair to contain the desired key is considered.

Ok, I see now. The function creates a hashmap from an alist.

> I learned that people reach for mutable hash tables far too frequently when there are better options available.

{raises hand} They're very easy to work with. In Clojure, I found myself doing extra work to get around the immutability. I'm sure it's a benefit for larger and multithreaded programs, but mine were neither.

> What languages even have literal syntax for sets? I can't think of any, but I'd like to know.

Clojure and Python. I suppose I could live without literal set syntax, but hashmap/hashtable syntax is extremely handy.


> Why? Because you are drawn to the pure FP style. That's great, but that's an aesthetic choice, and certainly not one shared among all programmers.

No, because they're special cases at the language level that don't need to be there. It shouldn't be a language builtin if we can achieve the same thing with a plain old library function. That's not an aesthetic preference, that's language design.

> You could translate any recursive function into one making use of a state monad.

What and how would you translate? I can't see what you mean at all, particularly because it's very normal to write a recursive function that uses the state monad - I do this quite a lot.
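
(A minimal sketch of that, assuming the common mtl/transformers State API: a plain recursive function whose body runs in the State monad, numbering elements with a counter threaded through the state. Names are made up for illustration.)

    import Control.Monad.State (State, get, put, evalState)

    -- Recursion and the State monad coexist happily: the counter is
    -- threaded through the state while the function recurses as usual.
    number :: [a] -> State Int [(Int, a)]
    number []       = pure []
    number (x : xs) = do
      n <- get
      put (n + 1)           -- bump the counter
      rest <- number xs     -- ordinary recursion inside the State monad
      pure ((n, x) : rest)

    -- evalState (number "abc") 0 == [(0,'a'),(1,'b'),(2,'c')]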


>Ah screw it I can’t finish this. I don’t know how to translate the steps without mutable state. I can either lose the ordering or I can say “then mix in the bananas,” thus modifying the existing state. Anyone want to try finishing this translation in the comments? I’d be interested in a version that uses monads and one that doesn’t use monads.

He's wrong and he completely misunderstood functional programming. It's not weird at all. Let's get things straight. First off, there's a difference between functional programming and types. Monads are more of a type-centric thing, and also Haskell-centric. Haskell is a big part of functional programming, but it's only one style. You can do functional programming without monads and also without type checking. Additionally, type checking - even Haskell-like type checking AND monads - can exist in imperative languages as well (like Rust). Does Rust having monads make it functional? No.

Second, functional programs can be very very very very readable. It depends on how you structure it and your naming conventions. See the example below:

   list of data primitives (aka ingredients and tools):
      cake
      grease
      flour
      baking_soda
      bananas
      small_bowl
      large_bowl
      pan
      whisk
      salt
      oven
      cream
      butter_milk
      white_sugar
      brown_sugar
      walnuts


      heatedOven = preHeatOven(oven, 175)
      panWithGreaseAndFlour = putGreaseAndFlourInPan(grease, flour, pan)
      bowlWithWhiskedIngredients = whiskStuffIntoBowl(small_bowl, whisk, salt, flour, baking_soda)
      bowlWithSugars = mixStuffInBowl(large_bowl, white_sugar, brown_sugar)
      bowlWithSugarsAndBananas = mixStuffInBowl(bowlWithSugars, bananas)
      bowlWithAllIngredients = mixStuffInBowl(bowlWithSugarsAndBananas, bowlWithWhiskedIngredients, butter_milk, walnuts)
      panWithAllIngredients = pourContentsOfBowlIntoPan(bowlWithAllIngredients, pan)
      heatedPan = heatPan(panWithAllIngredients, heatedOven, 30)
      finishedProduct = coolPan(heatedPan)
I mean that's so simple I can literally translate that into English:

      Create a heated oven by heating an oven to 175 degrees.
      Create a pan with grease and Flour by putting grease and flour into a pan.
      Create a bowl with whisked ingredients by whisking salt, flour and baking soda into a small bowl with a whisk.
      ....
You get the picture... It's just that real, practical English tends to be less pedantic, that's it.

Think about it this way. Let's say we live in an imperative world where we have an oven. We then heat the oven. Now in the imperative world the oven is destroyed and replaced by a heated oven. We only refer to the "heated oven" as an "oven" for convenience but in fact the original "oven" is actually destroyed as in it no longer exists.

In functional programming THE only difference is that the original oven is NOT DESTROYED. That's it. You have a heated oven, but you still have a reference to the original oven. But you don't ever need to refer to original oven if you don't want to. This makes English describing imperative processes more or less IDENTICAL to functional processes.

Another perspective to think about this is that in the English language we create the heated oven but the original oven is not destroyed but moved! The oven is now moved into a namespace called the past, we can only refer to the original oven by appending or prepending one of the many names and phrases of the "past" namespace to the oven! For example: The oven "before it was heated", the oven "from the past", the "original" oven,... etc.

So from that perspective then the only difference between FP and english is that variables once operated on, are moved to a different namespace. So literally just extra past-tense embellishment on English grammar is the ONLY difference.

It gets crazier than this. Is anything in reality really mutable? What does mutability even mean? We live in a universe where each instance of time is its own namespace. The universe is a function of time. At t1 there is an oven, at t2 there is a heated oven. So really, mutability is just syntactic sugar over immutability, where we just gloss over mentioning which time namespace we're in for convenience. Yes, you heard that right. Everything is actually immutable and mutability doesn't exist. Our concept of "mutability" arises from language shortcuts we take and assumptions of present tense.

Either way from the examples this guy writes it's pretty obvious functional programming didn't click for him. He understands the rules like immutability and he sees all the crazy tricks and nutty types people associate with functional programming but he missed the big picture. Hopefully this explanation will let the concept of functional programming click a bit more with people.

There is in fact a deeper intuition for why functional programming is "better" than imperative programming. But that's another long explanation. Most people don't ever reach this point of realizing how functional programming is better. They could spend years doing functional programming and completely miss it. Instead they reach a state where they think functional programming is sort of a style of programming used to express intellectual arrogance with no practical benefits and they give it up and move back to imperative programming. Well, they are wrong and they missed a deeper understanding of a fundamental concept. If this sounds like you, I'm telling you... you missed the big picture.

Literally the OP is a picture perfect example of this. Completely missed the point of FP but spent enough time with FP to understand what a monad is, even though a monad isn't really technically an FP thing.


> In functional programming, you typically do not manipulate state directly, instead you compose functions that manipulate that state, into more complex functions. (In Haskell, for example, this composition is often done through monads.) Eventually, you get a function that represents the whole program.

A bit more detail. In Haskell this is implemented very elegantly with list fusion. If you write

    map (\x -> x+1) mylist
You'll map over a list and add one to every element. This allocates a whole new list where all the elements are increased by one [0]. Now let's say we take the output of that map and map over it again, but this time we multiply every element by two:

    map (\x -> x*2) (map (+1) mylist)

Naively implemented, that would allocate two lists: one where all the numbers are incremented, and another where all the incremented numbers are multiplied by two. Any good dev will know that's performance poison. So the Haskell compiler implements "list fusion" – it sees that you don't use the intermediate list for anything, and rewrites your code so it's equivalent (in semantics and performance) to:

    map (\x -> (x+1) * 2) mylist
(For the compiler devs in here, this optimization is commonly known as deforestation.) This leads to huge speedups in many cases. But there's a problem. If elsewhere you use the result of `map (\x -> x+1) mylist` for anything besides mapping over it exactly once, this optimization is impossible! So list fusion has a reputation for being fragile – the compiler has to know with 100% certainty that you aren't going to use the list for anything else, and sometimes innocent-seeming changes (like moving the intermediate map into another module) can break it.

The solution the Haskell community finds promising is to be able to give a tag to a value, like a type only not quite, that says "I promise to use this value exactly once". If you use it twice or zero times, the compiler will yell at you. The compiler is still on the hook for doing the analysis of what's used once and what's used multiple times, but now the programmer and the compiler can be guaranteed to be on the same page.
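
(This sounds like GHC's linear types; a minimal sketch, assuming the LinearTypes extension available in GHC 9.0 and later:)

    {-# LANGUAGE LinearTypes #-}

    -- The linear arrow "%1 ->" promises the argument is consumed
    -- exactly once.
    linearId :: a %1 -> a
    linearId x = x          -- fine: x is used exactly once

    -- Rejected by the compiler: x would be used twice.
    -- dup :: a %1 -> (a, a)
    -- dup x = (x, x)

    -- Also rejected: x would never be used.
    -- discard :: a %1 -> ()
    -- discard _ = ()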

As for the other issue mentioned in the original post, of modifying a subelement of a tree: this is a well-known problem and there are many solutions. If the tree is only used once, the same optimization as list fusion can be applied to mutate the structure in place (although the "you must use the value only once" restriction doesn't help quite as much as you'd think it would). The more common solution, that doesn't depend on compiler optimization at all, is to use a data structure that supports this efficiently. For example, if you have a tree and you want to change one leaf on the tree, the only thing you really need to copy is the part of the tree that's actually different now - for the unchanged branches, the new tree just has a pointer to branches of the old tree, so they can be (and are) reused without copying. That's why it's very common to see maps implemented in functional languages using trees, instead of hashmaps. With a hashmap, it's much harder to get around the need to copy the whole thing when you just want to change one part.
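
(A minimal sketch of that structural sharing, with a hypothetical bare-bones tree type:)

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Replace the leftmost value. Only the nodes on the path from the
    -- root to that position are copied; every untouched subtree (each
    -- right child r here) is shared by pointer with the original tree.
    setLeftmost :: a -> Tree a -> Tree a
    setLeftmost _ Leaf            = Leaf
    setLeftmost y (Node Leaf _ r) = Node Leaf y r
    setLeftmost y (Node l x r)    = Node (setLeftmost y l) x r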

[0]: Well, it might do that once it gets around to it, laziness etc., but let's ignore that for now.


> But the second one is more debuggable than the first, which I think is even more important than readability.

The first is less likely to require debugging in the first place.

> There are lots of data structures in this style of programming that don't have any names.

So you can only reason about things that have names? Now we know where idiomatic Java comes from.

> Who knows what kind of data structures map and filter create in order to do their work.

In most reasonable implementations, the only data structure being created is the final result (a functorial value in map's case, a sequence in filter's case). For example, in SML:

    fun map _ nil = nil
      | map f (x :: xs) = f x :: map f xs

    fun filter _ nil = nil
      | filter p (x :: xs) =
        if p x then x :: filter p xs
        else filter p xs
> But the core promise of functional programming—that you can stop thinking about the underlying procedures—never seems to fully pan out.

Functional programming doesn't promise freedom from procedures. It promises (and delivers) freedom from physical object identities when you only care about logical values.

---

@banachtarski:

Code that's likely to require debugging (say, because it implements tricky algorithms) should be isolated from the rest anyway, regardless of whether your program is written in a functional style or not. Say, in Haskell:

Bad:

    filter (\x -> tricky_logic_1) $
    map    (\x -> tricky_logic_2) $ xs
Good:

    -- Now trickyFunction1 and trickyFunction2 can be
    -- tested in isolation. Or whatever.
    trickyFunction1 x = ...
    trickyFunction2 x = ...
    
    filter trickyFunction1 (map trickyFunction2 xs)

> This doesn't have anything to do with recursion.

Recursion is compressing the domain so small it eats itself; crafting a small set of functions mirrors this, kinda like grammars.

> You this doesn't have anything to do with recursion.

> I think you mean, they iterate for you. They don't use recursion.

afaik map and fold were defined recursively

     foldl _ z []     = z
     foldl f z (x:xs) = foldl f (f z x) xs
albeit accumulative recursion (maybe that's what you mean by iterating)
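
(A minimal sketch of that correspondence: the accumulator is the loop variable, and the tail call is the next iteration.)

    sumFold :: [Int] -> Int
    sumFold = foldl (+) 0

    -- The same thing "unrolled" as accumulator-passing recursion.
    sumLoop :: [Int] -> Int
    sumLoop = go 0
      where
        go acc []       = acc              -- loop exit
        go acc (x : xs) = go (acc + x) xs  -- next iteration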

> I really don't understand all this. You know most iteration is done with a for loop right?

That was what I was pointing at: iterators are not encapsulated enough.

> This seems like you're trying to write a satire of someone soaked in haskell. Everyone is else is just writing for loops and moving on.

I'd appreciate it if you didn't make it personal. Also, you forgot OCaml, Lisp, Prolog.


> And, to be brutally honest, as much as I love those functional combinators, first-class functions, streams, etc, they suck to reason about.

> Sometimes loops are better!

That I think is backwards. A loop could be doing literally anything - it probably is futzing with global variables - so there's no way to reason about it except by executing the whole thing in your head. A map (or mapA) or a fold (or foldM) or a filter or a scan is much more amenable to reasoning, since it's so much more specific about what it's doing even without looking at the body.


> Says who? You didn't back this claim up with anything.

Why would I need to back that up?

> No one has ever said it 'only' iterates, you hallucinated this claim. The whole point that I've made is that just because a language like haskell uses recursion for iteration, it doesn't mean iteration and recursion are the same thing or that things that iterate have anything to do with recursion.

Why the mention of Haskell all the time? I'm not even talking about statically typed languages here.

The original conversation was about the intellectual benefits of recursion, not iteration == recursion. Recursion is a way to think about problems that I find more general, precise, creative and economical. I tried to convey why; I'm not the most precise, but this is getting nowhere. Feel free to enjoy your life.


> Just because you could shoehorn reality into a physical model, does not make it equally valid.

Hey, physicists do it all the time. Having mathematical representations of how things work in the real world often turns out to give you complex devices that you wouldn't get by intuition alone. Thanks to doing it, we have Special Relativity and the Schrödinger equation, which bring you advantages such as GPS and insanely-high-speed networks.

There's nothing wrong with modelling interrupts and mutable state as static mathematical entities, a.k.a. functions; and if you think there is, you don't know enough about functional programming - in fact, doing it allows for advanced tactics like automatic verification and large-scale generation of electronic devices.

Those fewer degrees of indirection may make it simpler to build some kinds of programs; but they also introduce complexity in modelling the possible states of the system when you allow arbitrary, unchecked side effects. So no, that representation is not inherently better - only better suited to some tasks than others; and the converse is also true for functional models of computation. If you think it is always better, it's because you're not taking all possible scenarios and uses into account.

(Besides, as I already said elsewhere, "computers are mutable state" is not incompatible with a functional representation, as mutable state can be represented with it.) It seems that you're conflating 'mutable', which is a property of computational devices, with 'imperative', which is a style of writing source code and defining relations between statements in your written program.

> Literally the exact opposite of functional, IO registers are the epitome of global mutable state.

Yup, you definitely have no idea what functional programming is if you think you can't have global mutable state in it. In fact, every Haskell program is defined as a value in the IO monad - global, mutable state and all...


> The fact that recursion can compile down to iteration when written in a tail-call manner, while other languages are just providing iteration as a core part of the system, kinda implies that FP isn't "superior in the vast majority of use-cases".

My first impression when I learned about “infinite recursion via TCO” was that it was so much more beautiful semantically than your typical loop.

> recursion and condescension

I’ve worked in probably 15+ languages of all types throughout my life since Microsoft BASIC in 1984 when I was 12. If this doesn’t give me the right to assert things like this, then what does?

Perhaps the real problem is that whenever someone mentions “functional languages”, the walls go up, the ears go closed and the baseless presumption of hubris skyrockets.

The fact is that you can use a functional style (and gain the advantages) in any language https://towardsdatascience.com/why-developers-are-falling-in... ; you just lose the guarantees as soon as you touch another library.


> I find that decomposing iteration into map and filter expressions typically clarifies my understanding of the problem and makes the resulting code more general too, and I just disagree that for-loops are clearer.

Agree to disagree I guess.

> Combining a map and filter into one loop body tends to create specific, imperative code that doesn’t need to be specific or imperative. For one, loops require mutation to have the same effect as filter and map, and as soon as you’re maintaining state, things get confusing.

I agree that immutable, declarative code is easier to reason about in general than mutable, imperative code, but it's not a binary proposition. I posit that the cost of managing the tiny bit of very-local mutable state in a for loop is lower than the cost of trying to model a problem in terms of map and filter. While I can appreciate the elegance of "immutable always", I think it's too puritanical for real-world software development, especially in these cases which are simply expressed as for loops.

Note that languages like Python have strong support for immutable expressions and `for` loops, and it seems very common (even idiomatic) to use imperative `for` loops for anything more complicated than the simplest list comprehensions.

> In general though I think the functions used in map and filter should be total. Most languages are not expressive enough to handle errors fluidly in long chains like that, you can do it with monads but apparently everyone hates those except me.

I think it's just that monads tend to allow for very abstract code at the expense of understandability, and the abstraction that monads facilitate is often well in excess of what a given problem requires. If you're writing code to maximize abstraction (which is a lot of fun), monads (and map/filter/etc) are great, but if you're trying to work with a team to build software to solve real problems, you probably care a lot more about readability than gratuitous abstraction and the monad (/map/filter/reduce/etc) juice frequently isn't worth the squeeze.


> While I don't mind the toy problems, all those toy problems have something in common...

> They can be easily solved with recursion.

As a sibling mentions, recursion is pretty much required in pure functional programming. There is no notion of time or sequencing: everything we write is a definition. If we don't call another function, then our program will finish; if we don't use recursion (either directly, indirectly via functions like `map`, or via a chain of mutually-recursive definitions) then our program's behaviour will be fixed by the structure of our code (e.g. the number of calls we write). That's fine for simple calculations, but most useful programs depend on the structure of their data (e.g. processing lines in a file, etc.); that requires recursion, since we can't hard-code the right number of calls ahead of time.
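
(A minimal sketch of that last point: the amount of work depends on the input, so somewhere there has to be recursion, whether we write it ourselves or hide it behind a helper.)

    -- Counting the lines of an input of unknown length: one recursive
    -- call per line, something no fixed number of hard-coded calls
    -- could express.
    countLines :: String -> Int
    countLines = go . lines
      where
        go []       = 0
        go (_ : xs) = 1 + go xs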


>It also forced me to use a lot of recursion which is nice as I understood it much better than in that single "Fibonacci number" example.

Back in college I took a programming class that used Haskell, and with no way to perform iteration other than recursion, you're forced to understand recursion. With imperative languages, you can always fall back on the ol' for and while loops, so you get the impression that recursion is simply a waste of time.


> It turns out using declarative style makes things more readable and safer and better optimized by the compiler.

I don't think most of my loops are straightforward maps or filters of a single iterable. I need to reference various data sources and execute actions with side-effects. To do that using a functional framework, something like Haskell's mapM ("monadic map") is required. But that brings quite some mental overhead and I find it way too constraining for the typical case.
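
(For the curious, a minimal sketch of what mapM looks like in use; the function names here are made up for illustration.)

    import Data.Char (toUpper)

    -- mapM runs an effectful action for each element and collects the
    -- results: here the "side effect" is printing a log line.
    processLine :: String -> IO String
    processLine s = do
      putStrLn ("processing " ++ s)
      pure (map toUpper s)

    main :: IO ()
    main = do
      results <- mapM processLine ["foo", "bar"]
      print results   -- ["FOO","BAR"]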

> more readable and safer and better optimized by the compiler.

I doubt these are in general qualities of non-builtin constructs vs builtin ones. But yes, too much builtin and non-orthogonal functionality leads to detrimental complexity.

In any case I don't think it matters much to most compilers whether you write a loop using syntactic sugar or as labels and jumps.


> This probably means that I can encode the Y-Combinator in it, resulting in anonymous recursion, though unless functions are first class, it'd by 500+ LOC.

Try writing that function. It has a type like '(a -> a) -> a', which you need to implement. You should try it, but I'll spoil it for you: Dhall has no recursive types, nor does it have recursive values. The Y combinator cannot be written in Dhall itself; it must be provided by the implementation. You can write this in Haskell using open recursion/recursive data types, but only because Haskell allows recursive types (and, as a logic, it's already inconsistent anyway). Dhall has neither of these properties: no form of recursion exists at all.
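
(For reference, a minimal sketch of roughly what that Haskell trick looks like - a recursive newtype is precisely the ingredient Dhall refuses to provide.)

    -- A recursive newtype; Dhall has no way to write such a type.
    newtype Rec a = Roll { unroll :: Rec a -> a }

    -- The Y combinator, with no recursive definition anywhere: the
    -- self-application is enabled entirely by the recursive type.
    fix :: (a -> a) -> a
    fix f = g (Roll g)
      where
        g x = f (unroll x x)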

This is a property of System F, the core itself, which is quite well established.

That said, some weird bug in the implementation itself may cause some weird behavior (like most software). But System F itself? It's not Turing complete, and this has been well known as a mathematical result for a very long time. And having written my own implementation of the Calculus of Constructions: it's quite easy to get the core logic correctly implemented. In fact, with a few abstractions, you can correctly implement the Calculus of Constructions -- which is even more powerful than Dhall -- in about 50 lines of Haskell code.

> Again, I haven't reviewed the internals enough to see, but if you have functions, that's normally enough to build infinite anonymous recursion.

No, it isn't. Functions have nothing to do with it, at all. In the world of lambda calculus/natural deduction/logic, the relevant bit is not functions, it is the question: can I construct any possible value, out of thin air? Or, in other words, can I create a value of type 'forall a. a', somehow? Can I create a paradox -- a logical inconsistency? In the unified world of programming languages & logic, "general recursion" is actually a form of logical inconsistency inside the system, which allows you to prove anything -- of the kind Gödel described.

This is, roughly speaking, the equivalent process to determining some logical system like ZFC is actually inconsistent: such as proving `True = False` inside ZFC with nothing else. If you can create that proof out of nothing but the basics, you've created something that shouldn't exist. It is a contradiction. If you can do this, you have proven inconsistency in the underlying theory -- because you can prove anything if you already have proven 'True = False' https://en.wikipedia.org/wiki/Principle_of_explosion

And, really, it is not "roughly" the same thing -- it is the exact same thing. To find a value of type 'forall a. a', in fact, is the exact same principle as proving or showing a paradox/inconsistency in an ordinary logic.

If what you said was true, that "functions lead to Turing completeness", then the existence of modus ponens -- modus ponens being the logical equivalent of the "function type" in the lambda calculus -- would imply that systems like ZFC are inconsistent. But modus ponens and inconsistency have nothing to do with each other, and the fact that a theory contains modus ponens as a valid judgement rule does NOT mean it is inconsistent.

For example, the simply typed lambda calculus has functions. But it does not have any form of polymorphism, or type variables. Thus, the general Y combinator is untypeable in this language and cannot be written, and the STLC is not Turing complete: you cannot write a function of type 'forall a. (a -> a) -> a' inside the language. It's not even possible to abstract over types, because there are no type variables! Picking a specific type 'a' means nothing at all: '(Bool -> Bool) -> Bool' is extremely easy to implement and not inconsistent in any way ('fix k = k True', done, and perfectly consistent).

It's the "for every 'a'..." part, along with the ability to deduce 'forall a' (somehow, someway), that leads to general recursion. If you want the STLC to have recursion, your compiler must implement a magic version of 'fix' on its own.

> I'm also not certain that it isn't Turing Complete.

You should be, because it isn't. There's ample research on the design of typed programming languages, the Lambda Cube, System F, and Turing completeness.

A good book to start with, just for the design of these systems, is "Types and Programming Languages" by Benjamin Pierce.

> The C Pre-Processor doesn't supply a way to encode general recursion, but as we can see here [0], it is entirely possible, though to a limited depth. Leading to some debate as to whether or not it is Turing Complete.

No, it doesn't lead to anything. The C preprocessor isn't Turing complete. That's it. There is no scientific debate to be had about this, any more than there is a scientific "debate" about what temperature water boils at. The replies to the post you quoted illustrate this directly and express the difference. Whether or not "it can recurse 10,000,000 times if I allow it to do so by passing a flag, and 10,000,001 times if I change the number 10,000,000 to 10,000,001" is rather beside the point of whether it's Turing complete.

> Reduces to a normal fashion is completely useless as a guarantee if it takes 10 years, and is one of the reasons that simple markup/notation forms like YAML and JSON that are readable in O(1)/O(n) time end up preferred, rather than something that can explode into O(n^2) time.

Correct. This is why being able to ask if any configuration is normalized is important (and asking this can be done quickly by simply seeing if there are any unreduced terms, rather than trying to actively reduce them). But let's not pretend YAML and JSON don't have their deficiencies in similar realms -- O(1) is nice and all, yet YAML is actually relatively complex to the point of introducing security vulnerabilities, and I've encountered my fair share of deficiencies/inconsistencies in JSON/YAML parsers.

You must also consider the usage scenarios. Purely for configuration, this is less likely to matter: you're unlikely to write a purposefully inconsistent/bizarre YAML file that fucks up your parser, much like you're unlikely to write a Dhall configuration which takes 10 years to reduce when evaluated. But when you are accepting arbitrary, untrusted inputs, YAML parsing security matters, and so do Dhall evaluation time and the ability to ask "is this in normal form".

In most cases I don't see the differences or threat vectors being truly, meaningfully distinct, at all. If you trust your input sources, given the general way configuration files are used -- the evaluation time isn't really a problem. YAML O(1) or whatever doesn't really matter much either; it's generally a fixed cost anyway. But if you don't trust your input sources -- YAML should scare you just as much as Dhall.

That said, I don't think Dhall will offer a substantial benefit for most people who are already using YAML. So it's not like you should switch. To me it's sorta neat, but probably something I'd never use extensively just due to adoption factors.

But I'm afraid you're deeply mistaken on the design of the language, and don't seem to know much about programming language theory, at a glance. I don't blame you, since it's a rather dry field. But, to put it in perspective: "System F and the Lambda Cube alone are not Turing complete, and always normalize" is so trivially obvious in the PLT world that if you asked someone to "prove it" to you, they would probably say "It's true by definition of the system itself, what are you even asking" before staring at you blankly (or they'd realize you don't know the field, and that they're going to have to start from square one, as far as computability/language theory goes, to explain why this is the case). It's an old and well-accepted result.

TAPL, the book I mentioned earlier is a good book to introduce yourself to many concepts like this and should help clear up any confusion you might have.


> Aside, I agree on the List monad thing. I never ever use the list monad, why would I.

I use monad library functions on lists happily, and use the do-notation, particularly when I'm thinking in terms of non-deterministic computation and search.
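
(A minimal sketch of what that looks like: the list monad as non-deterministic choice, with guard pruning the branches we don't want.)

    import Control.Monad (guard)

    -- Each bind "chooses" one element from a list of possibilities;
    -- guard discards the combinations whose sum isn't even.
    evenPairs :: [Int] -> [Int] -> [(Int, Int)]
    evenPairs xs ys = do
      x <- xs
      y <- ys
      guard (even (x + y))
      pure (x, y)

    -- evenPairs [1,2] [3,4] == [(1,3),(2,4)]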


> I think more and more programmers are realizing that the functional approach is almost always superior.

Only for a very limited definition of "almost always". It's almost always superior if you don't hit performance constraints, and if your problem isn't dominated by sequencing, and if your team can handle the functional approach, and if...

Why is it superior? It controls the state space explosion, and therefore makes it easier to understand and reason about your program.

Why could performance be a problem? Think about a functional quicksort algorithm. It doesn't do mutation, so at each stage, it has to return a new sorted list rather than sorting the list in place. As the list gets longer, the performance gets worse compared to an imperative quicksort.
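
(The classic textbook version, as a minimal sketch, shows where the allocations come from.)

    -- Elegant, but every level of recursion builds brand-new lists via
    -- the two comprehensions and the appends, instead of partitioning
    -- one array in place.
    qsort :: Ord a => [a] -> [a]
    qsort []       = []
    qsort (p : xs) = qsort [x | x <- xs, x < p]
                  ++ [p]
                  ++ qsort [x | x <- xs, x >= p]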

Now, quicksort would actually be implemented as a library, and would be tuned for speed. But the same issue comes up in other ways. If you have to transform data structures, imperative can be more efficient than functional, and the difference grows as the size of the structure increases. (I could have called this area "scaling" instead of "performance".)

When could your problem be dominated by sequencing? It often is in embedded systems, where you're sequencing external hardware to get it to do real-world things. I mean, yes, you could write all that in a monad, but if that's the dominant thing you need to control, what does it gain you?

All that said, making any individual function pure (in the FP sense) is always at least a local win, because that function becomes simpler to understand, debug, and test.


> ...a monad looks pure, but actually isn't.

This is not true. The arguments for `next` include both:

* The target stream.

* A "state token" or universe.

You get different results with different state tokens.

At a high level, the thing to consider is that we can model all imperative programming with functional programming -- we can model Turing Machines with Lambda Calculus -- as well as go the other way around, because Turing Machines and the Lambda Calculus are, for all intents and purposes, equivalently expressive.

Modeling IO with state monads isn't a hack; it's the point.
