The Case for Controlled Side Effects (two-wrongs.com)
55 points by mightybyte | 2015-08-06 16:33:40+00:00 | 61 comments




That is a symptom of mutable code. In that case, side-effects are nearly a red herring because if the object were immutable, then you would really only ever have to check once.

Mutable state is sometimes the best and simplest solution even in a purely functional setting, so it's good to have an effect system for that.

Mutation is one of many side-effects. Side-effect tracking is a solution to more than just mutation.

Could you give an example of a dangerous side-effect that is not a mutation?

Printing out stuff; playing a song; starting a script.

I totally agree with all of this, but I think the hard part of writing side effect free code is that it often involves more allocations. (Since you're constantly constructing new values and objects, and transferring them around on the stack). I think this is why you see a lot more mutation (and bugs) in C: managing allocations is annoying. Having a garbage collected language mitigates the burden on the programmer, but if you're writing performance sensitive code that means you're putting pressure on the garbage collector so you're going to get more pauses. Not to say you shouldn't write in this style, you should, but there's a pretty big opportunity for languages to get better in this regard.

Some of the languages have gotten better! Clojure, for example, has smart data structures that share elements to speed up operations and minimize copying:

http://hypirion.com/musings/understanding-persistent-vector-...


> I think the hard part of writing side effect free code is that it often involves more allocations

Are you saying this in spite of the existing implementation of decent persistent data structures, at least in some languages? https://en.wikipedia.org/wiki/Persistent_data_structure

I think the cost in extra memory (and with persistent data structures, you only pay for the difference introduced by each change) is more than made up for by fewer bugs, easier concurrency, and a host of other benefits.

In the light of "mutability always produces more bugs," mutation should be seen as an optimization, after tests are written which cover behavior and side effects.


> Are you saying this in spite of the existing implementation of decent persistent data structures, at least in some languages?

No, s/he's saying that because of the existing implementations of decent persistent data structures. Practically all of them allocate way more than mutable data structures - e.g. trie-based vectors/arrays, trie-based hashmaps, tree-based ordered maps, etc. Essentially each new addition, modification or removal results in one or more allocations.


Also, those allocations imply linked, rather than contiguous, data structures, which (often? always?) imply poor cache interaction, which is a major source of inefficiency.

Would love some more perspectives on this, as I've just watched a couple of CppCon talks ([0], [1]) and I'm feeling a bit drunk on the kool-aid.

[0]: https://www.youtube.com/watch?v=fHNmRkzxHWs

[1]: https://www.youtube.com/watch?v=rX0ItVEVjHc


There are data structures that are a combination of linked and contiguous, such as the classic rope data structure. For persistent collections, a common backing data structure is the hash array mapped trie, which is basically a quickly-navigated tree of arrays.

Somewhere in there is the obvious tradeoff you have to make between access time and copying time, with regards to the array length.
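
As a minimal sketch of what that persistence looks like from the outside, in Haskell and assuming the unordered-containers package (whose HashMap is a hash array mapped trie):

    import qualified Data.HashMap.Strict as HM  -- HAMT-backed, from unordered-containers

    main :: IO ()
    main = do
      let m0 = HM.fromList [("a", 1), ("b", 2)] :: HM.HashMap String Int
          m1 = HM.insert "c" 3 m0   -- allocates a new path of trie nodes
      print (HM.lookup "c" m0)      -- Nothing: the old map is untouched
      print (HM.lookup "c" m1)      -- Just 3: the new map shares most structure with m0

Only the nodes along the modified path are copied; the rest of the trie is shared between m0 and m1.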


Thanks for the info; I didn't realize the connection between hash array mapped tries and persistent collections. Are you aware of any open source (i.e. linkable) implementations that work that way?

Clojure [1] - the crucial implementation is really in classes BitmapIndexedNode and ArrayNode.

Scala [2] - Scala's source is an incredibly convoluted spaghetti of abstract classes and traits, but it looks like the main bits of the implementation are in methods get0 and updated0 of HashTrieMap.

There's also a C# implementation somewhere, but it's not a part of a language library, so it's likely less optimized (except for some boxing/reified generics, which make it theoretically more efficient). But all these implementations use the same basic algorithm, so it's irrelevant which one you study.

[1] https://github.com/clojure/clojure/blob/master/src/jvm/cloju...

[2] https://github.com/scala/scala/blob/2.11.x/src/library/scala...


Wow, thanks for doing the leg work for me. This is the type of thing that reminds me why the internet is so wonderful :)

Sure, the [unordered-containers](https://github.com/tibbe/unordered-containers) library is based on that IIRC, as is the [hash map in Clojure](https://github.com/clojure/clojure/blob/master/src/jvm/cloju...).

Modifying an array requires no allocation; persistent vectors always do.

Linear types are another approach where you're essentially passing around exclusive access to a single mutable structure, so all effects are local. See: Rust/Clean.

Python is an awfully weird language to desire this feature in. You don't even statically know what type `profile` is, so checking the class you think it will be doesn't actually answer the question. Also, effects would necessarily be part of the public API of a given class (preventing someone from later changing verified_email to a "property"), which is quite unpythonic.


I imagine you could do something like "const" in c++. If you're not familiar, const tends to have a lot of behaviors for one keyword, but in this context I mean const methods where basically const means "this method can't modify the state of this object." (Unfortunately it doesn't mean the method is pure, but it's better than nothing)

It does seem like a technique a bit better suited for static compilation though, just in that catching this sort of error at runtime would be slower and harder to figure out.

Even if it were a weak/easily circumvented check, it might still be useful, at least as code documentation.


I'm of the opinion that argument values should never be modified by the methods they are passed to, because that would result in what I've been calling "backflow", or premature state propagation backwards up the call stack. I think this not only hurts concurrency but also reasoning about the code and its flow. I think any new state should come via the method's return value and not via its arguments.

It's not just a cost in terms of memory, it's a cost in terms of performance as well. Persistent data structures tend to result in a fair bit of pointer chasing, and the cache miss penalty ain't getting any less expensive.

I find oftentimes when I'm working in F# I do the initial versions of a module using persistent data structures, but eventually the profiler tells me it's time to bite the bullet and switch to mutable structures. The change tends to end up being relatively painless because a huge portion of the time I wasn't relying on persistence anyway, the code does not explode into a confetti of bugs, and run times improve considerably.


I think that's a healthy way to view mutability. It's an amazing performance optimisation, but like any optimisation, it's not to be performed prematurely...

I think, though, that this case really highlights the extent to which Knuth's quote gets used in a much wider context than he intended. In the context he was explicitly talking about micro-optimization, not large-scale performance-impacting decisions.

The use of the word "prematurely" combined with the tendency to forget the context of that quote means it takes on connotations of implying that if you do something for performance from the get-go, that's bad. That just ain't so; sometimes you really do know what you're doing, and know that one option won't be workable and other one will. Choice of data structures is a case where this often happens. If you know up front that you're going to need O(1) random access and replacement, and O(1) typical case and O(N) worst case appending, and that persistence is not necessary, then you already know you want a mutable list. In that case starting with a persistent equivalent and planning to rework things later is likely just a waste of time and, assuming you're doing this at work, money.


The article advocates explicitly typed effects rather than fewer effects in code. Effect systems are also useful for making high-performance code safer (or enforcing considerable static safety at little or no runtime cost). The core feature of Rust is statically safe direct mutation, and in Haskell we also have the ST monad, which lets us do mutation behind safe immutable APIs.
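
For instance, a minimal ST sketch: the mutation inside is real, but it cannot escape, so callers see an ordinary pure function:

    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- Pure from the outside; uses local mutation internally.
    sumList :: [Int] -> Int
    sumList xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc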

This is an objection that people seem to raise disproportionately often - particularly when they have no performance tests and never profile their code. The one time I've seen a real-world head-to-head comparison between C and Haskell, the Haskell version performed 5x better than the C version. In fact I've literally never known a real-world Haskell program to have serious performance problems (I've known some that needed a small amount of time to be spent on profiling).

I realise this is much less satisfactory than a convincing theoretical solution; all I can say is that it just doesn't come up in practice.


> In fact I've literally never known a real-world Haskell program to have serious performance problems... all I can say is that it just doesn't come up in practice.

Considering how few real-world Haskell programs there are, and how small they tend to be, I think it's best to say that we don't yet know how serious the effect is. If anyone ever writes an ERP or an air-traffic control system in Haskell, then we'll have pertinent evidence to support or contradict the claim.


Well, Erlang is also basically immutable at its core, and seems to do reasonably well. It's definitely not up there with C (that'd be ridiculous), but within its domain I haven't heard many performance complaints either.

Most performance-sensitive Erlang programs use C for the performant bits (they use Erlang for the control plane and C for the data plane). So it's fast enough only for things that don't need to be fast.

This isn't real world, but it's useful to see:

http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...

2-6x slower, more memory, roughly same amount of code.


> The one time .... Performed 5x...

Ah! I see that totally seals the deal and is so much better than the claims of the other people you were complaining about. That BLAS author must be an idiot, same for Nginx. 5x, here I come. Heck, why are these JS runtime writers wasting their time? It's 5x, do you hear me.

I am sure comments like these would help Haskell so much


This is essentially the same argument that people made against Java, Python, and Ruby (amongst others). But that didn't prevent them from becoming enormously popular languages used by tons of people to get things done. I don't see any reason to expect that controlled side effects à la Haskell will be any different.

> constantly constructing new values and objects, and transferring them around on the stack

This is not necessary, as long as your type system doesn't allow aliasing of mutable values (such as pointer arithmetic, eww)

https://en.wikipedia.org/wiki/Substructural_type_system#Line...

http://clojure.org/transients


I agree with the need/value of side-effect/pure annotations. From a debugging perspective though, there are 2 alternative solutions: Aspect Oriented Programming (i.e., mutation observations) and Immutable data types (e.g., Object freezing) to cause an exception on mutation.

Depending on the language, you can replace the value with a watcher or proxy and add a breakpoint to know if it was mutated between those lines. Alternatively, you could freeze the object or property, or make it immutable between those lines, to cause an exception to be thrown. Both require triggering the case in question live, so annotations are still superior, especially compared to manually reading code.


I didn't see this addressed in the article:

Do these annotations have force? I've run into plenty of annotations that lie - how can I trust that code that's annotated as being side-effect-free actually is?


> Not only do these type systems allow you to annotate things as "doing side effects"; they specifically disallow doing side effects unless you have annotated the thing as such.

Just below the "Controlled Side Effects" heading.


Any practical language will have escape hatches, but it's the equivalent of doing a cast (indeed it sometimes literally is doing a cast) - it sticks out in code review, compilers have options to warn or ban them altogether.

Maybe the author, presumably well versed in type systems, assumed that the use of effect typing was implied. If you're not familiar, go check it out, but I'll give a synopsis. If you're familiar with `unsafe' in Rust, it's a specific case of effect typing.

You can annotate a function with an effect (useful examples: unsafe, impure, throws exceptions), and then if another function calls that function, it needs to also be annotated with that effect, or else the compiler emits an error. Usually there will also be an escape hatch where you can assert to the compiler that you know, in a way that it can't statically prove, that a certain call to, say, an impure function, still doesn't have side effects. Code that appears within these escape hatches needs to be heavily tested in order for effect typing to be useful.

For the above examples of effect types, the compiler will also consider certain non-function-call constructs to have an effect annotation. Dereferencing an arbitrary integer as a pointer, for example, would effectively have an "unsafe" annotation.

I believe Nim has a generalized implementation of effect typing.
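
(To make the propagation concrete, Haskell's IO type behaves like such an effect annotation, with unsafePerformIO as the escape hatch; a minimal sketch:)

    import System.IO.Unsafe (unsafePerformIO)

    greet :: String -> IO ()   -- carries the IO "effect" annotation
    greet name = putStrLn ("hello " ++ name)

    caller :: IO ()            -- any caller must carry the annotation too,
    caller = greet "world"     -- or the compiler rejects it

    -- The escape hatch: asserting, without proof, that an effectful call is pure.
    -- It compiles, but the burden is entirely on the programmer.
    notReallyPure :: String -> ()
    notReallyPure name = unsafePerformIO (greet name)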


Welcome to Haskell.

No, seriously. We allow side effects. We have the STM monad, the ST (single-threaded) monad, and the IO monad if you really want to go wild. And almost no one seriously argues that the IO monad is "dirty" and should be avoided. In fact, the type signature of a top-level executable's main is IO () by mandate.

I want to make a proposal, which is that we ban the term "side effect". It doesn't mean what people think it does.

For example, the print function puts output on the console but that's not a "side effect"; that's the main effect. I would argue also that non-detectable stateful effects (e.g. building up a large dictionary mutably) shouldn't be called "side effects" (although they are stateful effects) if they're purely implementation details that are hidden behind an interface.

"Side effect", I think, is a case of a term that used to resemble either a design flaw or a non-intuitive behavior (i.e. reading your command-line arguments deletes the strings, which is something that I encountered in one system) that is tolerated for performance's sake; but that has now been expanded to include stateful main (i.e. desired) effects as well.


"Side effect" means something that happens implicitly as a result of evaluation. There's no need to ban the term; we just need to keep in mind how it differs from the more general notion of "effect".

Pretending that modern type systems provide anything near an adequate control of effects is misleading at best. Controlled side effects are an excellent goal; the way Haskell (or the pure-FP approach) goes about achieving that goal not only falls short of the target (which is OK) but may be going off in the wrong direction altogether.

For example, one of the important things about side effects is not only their existence or lack thereof, but their ordering (e.g. you must write to a socket after you open it and before you close it). While enforcing ordering is possible with PFP type systems (to a certain extent), it gets very complicated very fast. On the other hand, other ways of reasoning about effects and their ordering -- e.g. by debugging and profiling -- are made harder, rather than easier, by the PFP approach.
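
(A minimal sketch of how ordering is usually handled instead: by scoping combinators such as bracket/withFile rather than by the type system, here with a file standing in for a socket:)

    import System.IO (IOMode (WriteMode), hPutStrLn, withFile)

    -- Ordering enforced by scope, not by types: the handle only exists
    -- between the open and the close, both performed by withFile.
    main :: IO ()
    main = withFile "out.txt" WriteMode $ \h ->
      hPutStrLn h "this write happens strictly between open and close"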

Personally, I believe a better approach is imperative-functional, namely inline (i.e. non-monadic) side-effects, but made explicit with good effect systems. Sadly, we're not yet at a point where this is possible and easy to do.

I strongly believe monads are not a good way of representing side effects, and to understand why you only need to look at how hard it is to compose monads and how difficult the ways of composing them (monad transformers) are to follow.


> Pretending that modern type systems provide anything near an adequate control of effects is misleading at best. Controlled side effects are an excellent goal

I completely agree. But, as I suspect you’d agree, it’s not an easy problem even to specify the behaviour we would ideally like to see here.

> For example, one of the important things about side effects is not only their existence or lack thereof, but their ordering (e.g. you must write to a socket only after you open it and only before you close it).

When I’m designing software, I tend to think of effects at three distinct levels: can effects happen at all, what relative timing (i.e., ordering) do they need to have, and what absolute timing do they need to have?

The most basic question is whether certain effects or effects related to a certain resource happen at all. This includes distinguishing pure functions from code with side-effects, but it could also be about, say, having a function that appears pure from the outside but is implemented using mutable state internally. If that state is stored in internal memory that is only acquired and released within that function and no reference to it is accessible outside the function, then the scope of the effects is limited and the usual advantages of pure functions still apply viewed from outside. If the state is stored externally, say in a file or database, then potentially it could interact with other parts of the system because they could also acquire a reference to it via another channel, so we might want to know about that.

My current thinking is that in terms of modelling the program behaviour we want to see, the most important thing is probably whether any given resource is transient/unique vs persistent/shared, not whether it's internal (memory) or external. For example, we might also want to model a temporary file with a unique OS-provided name or a temporary database table as being completely private state for programming logic purposes. We still need mechanisms to recover from errors by, say, asking the OS to delete that temporary file if we abort with an exception, but then for true robustness we need to deal with things like failing to allocate dynamic memory as well, and in any case these are ultimately still effects that can be confined to the area of the program where those resources are used without interfering elsewhere. I’d like to have a language where interfaces could indicate how any persistent/shared resources might be affected and pass the references identifying those resources around explicitly so we can’t do things like accidentally duplicating a file handle and having two threads concurrently writing to it without proper synchronisation. I’d also like to have a language where implementations could work with private resources internally without needing to do anything special to protect other parts of the code, but with errors being generated if anything could leak a reference to a private resource that might result in unwanted interactions.

We can also be interested in the relative timing of when effects happen, i.e., the possible orders in which multiple effects happen. I reckon this usually only matters to the extent that the same resource(s) are affected; two sequences of effects that relate to completely independent resources can probably be interspersed arbitrarily without any real harm being done. There are a lot of practical error classes that fall into this category, and this is where I’m with you in not being entirely convinced about the monadic model. If I’m working with two files, I probably care about opening, then reading/writing, and then closing each of them in the correct order, but not about whether I closed the first before I opened the second. If we start needing some sort of IO monad to capture each sequence and then stacking them up, this could easily become a chore. It’s a little like the problem of using too much hierarchy in OO design: you wind up codifying some arbitrary nesting of the things you care about, but the hierarchy is illusory and navigating it becomes a burden when the nesting isn’t convenient (cf. 27,534 variants of lifting in Haskell).

I suspect more effective tools will eventually come from incorporating effects and the resources they affect as true first class citizens in programming languages. I’m intrigued by the kind of ideas we see in substructural type systems, and wonder whether a system built on similar principles might let us constrain the things that actually matter safely but without imposing all kinds of arbitrary prioritisation and/or boilerplate code that is more trouble than it’s worth. Some of the discussions during the development of Rust in recent years have been interesting in this regard[1] as have some of the discussions about the monadic model in Haskell and its limitations.

As a final comment, I think the other downside of the pure functions and monads model is that it tends to put all the emphasis on relative ordering. We might also care about absolute timing of effects if we’re interacting with a real time clock or other physical system. If the only way to get that information into your model is to shoehorn in another level of monad, again with the complications of stacking transformers possibly in arbitrary orders, this isn’t ideal. If a clock were just one more stateful resource we have access to in certain parts of our program, much as it would be in typical imperative languages today but with the improved effect/resource usage safeguards, that would be helpful.

[1] For example, see http://pcwalton.github.io/blog/2012/12/26/typestate-is-dead/


> Pretending that modern type systems provide anything near an adequate control of effects is misleading at best

Adequate for what exactly? Programmers all around the world are using Haskell every day and finding it saves them headaches because it offers them some benefits with regard to control over effects. It's perfectly adequate for us. I've been writing Haskell every working day for over two years and one of the things I most appreciate about it is its effect system.

You keep repeating these kinds of claims but you don't actually offer any alternative, nor have you offered any compelling explanation for why debugging and profiling should be intrinsically harder in a pure functional language.

Maybe you can't write an air traffic control system in Haskell. So what? Who cares? Haskell is a phenomenally useful tool for a growing number of developers working on a wide range of different tasks (those tasks don't include air traffic control systems or missile defense systems, but we'll live with that).

And pray tell, what would an effect system look like that's not monadic? I'm genuinely interested as I benefit every working day from Haskell's effect typing and if there's something even better I'd like to know about it!

I'm keenly waiting for your talk to appear here so I can actually try to understand what you are on about:

https://www.youtube.com/channel/UC-WICcSW1k3HsScuXxDrp0w

Hope it appears at some point!


> I've been writing Haskell every working day for over two years and one of the things I most appreciate about it is its effect system.

Haskell's type system is not an effect system.

> And pray tell, what would an effect system look like that's not monadic?

Something like this -- http://research.microsoft.com/en-us/projects/koka/ -- but with notions of ordering and better composition.

> Hope it appears at some point!

It's up there now.


> Haskell's type system is not an effect system.

Haskell's type system models effects through things like monads, applicatives, categories. If you don't agree with this then you're going to have to tell me what your definition of "effect system" and "type system" are.

> Something like this -- http://research.microsoft.com/en-us/projects/koka/ -- but with notions of ordering and better composition.

I'll give it a look, thanks.

> It's up there now.

Wow, that was good timing!


> Haskell's type system models effects through things like monads, applicatives, categories. If you don't agree with this then you're going to have to tell me what your definition of "effect system" and "type system" are.

Haskell's type system, as you rightly note, models effects through types on return values combined with monads. But Haskell code -- since the language is pure -- by definition has no effects. It triggers effects in the runtime by returning monadic values, but it doesn't actually have effects (if it did, it wouldn't have been pure). An effect system[1] describes the effect impure functions have rather than the types of the values they return (i.e. they don't require effects to be modeled as values). An example of an effect system is Java's checked exceptions, or Koka's effects.

> Wow, that was good timing!

You'll be disappointed if you're expecting a discussion of effect systems...

[1]: https://en.wikipedia.org/wiki/Effect_system


> > Wow, that was good timing!

> You'll be disappointed if you're expecting a discussion of effect systems...

I'm expecting a discussion of why continuations are enough.


That's discussed but without a proof. I'm working on a blog post (to be published today or tomorrow) which links to the proof and provides a more in-depth discussion.

Great, I'm looking forward to it!

OK, I think there's a fundamental misunderstanding at play here, of what purity, effects and monads are.

Koka's effects system is monadic! Crazy huh? Well look:

    function main()
    {
      bind(foo, bar)
    }
    
    function foo()
    {
      println("Returning 3")
      return 3
    }
    
    function bar(x)
    {
      println("Adding 1 to my argument and printing")
      y = x + 1
      println(y)
    }
    
    function bind(x : () -> e a, f : a -> e b) : e b
    {
      y = x()
      return f(y)
    }
    
    function return_(x)
    {
      return x
    }
I can define return and bind generically over effects e just like in Haskell!

Checked exceptions in Java are also monadic! Well, in a sense but I don't know enough about Java to write down the types correctly. Maybe the types can't even be written, but it doesn't stop the nature of checked exceptions being monadic. Specifically, if I have

    A method1()  throws Exception1, Exception2, ...
    B method2(A) throws Exception1, Exception2, ...
then combining them thusly

    method2(method1())
is exactly binding in a monad. It's not a monad defined as a specific datastructure in your language, but it is a monadic operation. It's pretty much the same monadic operation as the bind in the hypothetical "universal monad"[2] you propose in your talk.

The reason effects are monadic is that if you have a computation C1 that does effects of type E and that returns a value of type A, and a computation C2 that does effects of type E and that takes a value of type A as argument and returns a value of type B, then you can combine them to form a function that does effects of type E and returns a value of type B. This is the bind of a monad!
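
In Haskell notation, that description is just the type of monadic bind (a sketch, nothing beyond what's already said):

    -- c1 does effects of type E and produces an A; c2 consumes an A, does
    -- effects of type E, and produces a B. Combining them is bind:
    combine :: Monad m => m a -> (a -> m b) -> m b
    combine c1 c2 = c1 >>= c2      -- i.e. combine = (>>=)

    -- e.g. with m = IO:
    example :: IO ()
    example = getLine >>= putStrLn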

Different languages may have different levels of support for the syntax of effects/monads at the type or value level, for programming generically with effects/monads, and for defining your own and combining them, but the fact remains that effects are monadic.

The misunderstanding is that you seem to think a monad is a value along with some extra stuff. In particular, you think that in a pure language a value of type IO is a value along with some extra stuff. This is wrong! It's understandable that you came to this conclusion given the thousand misleading monad tutorials that say "a monad is a value in a context" or "when you use IO you are building up a computation for an impure runtime to run". But this is wrong!

Haskell is not a "pure language". "Pure language" is a not a well-defined concept. It's simply a neat slogan to summarize one of Haskell's characteristics without getting technical. What is pure in Haskell is let bindings and function application. That's it. (In fact I would classify Koka under the slogan "pure language" since you can tell from its type whether a function performs any effects.)

IO in Haskell is exactly an effect type like 'console' in Koka. It is monadic in Haskell, like in Koka. In Haskell it's slightly more awkward because let bindings and function application are pure so you are forced to use monad combinators (or do notation) to combine them, but this is just a syntactic difference not one of any substance[1]. Koka has better support for combining effects through essentially the "universal monad"[2] you mentioned in your talk.

In the implementation of Haskell[3] a function of type 'a -> IO b' is exactly the same as a function of type 'a -> b'[4]. It isn't a function "which returns an 'IO b' which is an instruction to an impure runtime to run some effectful code". It isn't "a 'b' in a context". It isn't "a 'b' with some other stuff". Haskell does not "trigger effects in the runtime by returning monadic values". Haskell has effects! I'll reiterate because this is important. A function of type 'a -> IO b' is exactly a function 'a -> b'. It's just been tagged with an effect type called 'IO'. We combine these effect types with monadic combinators because the nature of effects is monadic, not for any other reason.
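
(For reference, GHC's definition is roughly the following; the State# token is the "trick" mentioned in [4]: it has no runtime representation and only serves to keep evaluation ordered.)

    {-# LANGUAGE MagicHash, UnboxedTuples #-}
    module IOSketch where

    import GHC.Exts (RealWorld, State#)

    -- Approximately how GHC.Types defines IO (renamed here to avoid clashing
    -- with the Prelude's IO). Operationally it is just a function.
    newtype IO' a = IO' (State# RealWorld -> (# State# RealWorld, a #))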

Some other monads (Either, Maybe, State) are more like what you are thinking of. But they're different monads! They don't somehow subsume the entire nature of monads. Monads, like effect types, can simply be used for tracking effects. Monads will always be part of effect systems.

[1] I am very happy to consider syntactic improvements that make effects easier to work with

[2] I disagree that the "universal monad" is not typeable in Haskell. At least some version of a "universal monad" is typeable. It may look a bit awkward though.

[3] At least in GHC. Other implementations are free to do what they want.

[4] Actually slightly augmented with a trick to stop the optimizer breaking, but that's not of substance.


> It's not a monad defined as a specific datastructure in your language, but it is a monadic operation... IO in Haskell is exactly an effect type like 'console' in Koka. It is monadic in Haskell, like in Koka. In Haskell it's slightly more awkward because let bindings and function application are pure so you are forced to use monad combinators (or do notation) to combine them, but this is just a syntactic difference not one of any substance

I both agree and disagree, because that very much depends on the domain. From a mathematical point of view, whenever two things are isomorphic, then they are the "same". But from a programming point of view, the two can be very different, as abstractions are psychological interactions with the programmer's mind, and two isomorphic mathematical concepts can psychologically interact very differently with the programmer. Therefore, if I don't have to "think in monads" when working with an abstraction, then the abstraction is not a monad, even if an isomorphism exists between them.

As it has been proven (I think) that any effect can be modeled by monads, you could say that all effects are monadic, and just like the mathematician in the famous joke, that's absolutely true yet absolutely useless. In a programming language, abstractions are defined (and measured) both by mathematical as well as psychological properties.

So I think the difference is of substance because it has vastly different psychological properties (more than just thin syntactic sugar).


> So I think the difference is of substance because it has vastly different psychological properties (more than just thin syntactic sugar).

Right, it's not just syntactic sugar, it's also the type system, and it may also touch on the run-time system. I think we've been in violent agreement from the beginning but I just didn't realise it. I interpreted your language as being somewhat combative and that put me on the defensive.


> I interpreted your language as being somewhat combative and that put me on the defensive.

I'm sorry, then, and take full responsibility. Internet culture -- at least in forums such as this -- rewards texts with emotional content more than those devoid of it. As there are many texts breathlessly extolling the virtues of monads, I had to offset that a bit ;)


> As there are many texts breathlessly extolling the virtues of monads, I had to offset that a bit ;)

That is an understandable impetus ;)


I watched your talk and it was very interesting. I'm somewhat confused by the notation for scoped continuations though. Is there a reference you can point me to? In particular, if I have

    ^x {
        ...
        cont c
        ...
    }

    c() {
        ...
        pause(^x = foo)
        ...
    }
then when 'pause' is run control flow jumps back to the cont. How do I then resume execution of 'c'? It seems like I have to somehow encounter that 'cont c' line again either in a loop or by reinvocation of an enclosing function. But can there be two calls to 'cont c'?

    ^x {
        ...
        cont c
        ...
        cont c
        ...
    }
And the second one returns control to the thread that 'paused' when called from the first? If so, how do I start an entirely new thread?

This was all a bit confusing, so if you can point me to a reference I would be grateful. Unfortunately "scoped continuations" has very few hits on Google. In fact by the time you read this, this comment is probably one of the top hits!


He mentions Python; is there a solution to this in Python? Perhaps Python 3.5 with optional type-checking annotations?

Wonder how many klocs are underneath that `profile.email.interactions()` method?

It doesn't matter because you don't have to maintain it.

> I don't like wasting my time on things my computer could do for me.

So don't. Let the interpreter check profile.verified_email twice (horror of horrors!) and get back to your day job.

Even if there are no in-program side effects, removing a 2nd check for verified_email immediately before preparing and sending the email introduces a race condition: the user's email is verified, your first check runs, then you go talking to the DB and who knows what else for the last_interaction fetch/business logic; meanwhile, the user changes their email address, and now profile.verified_email is false. By skipping your 2nd if check, you send your email to an address that is still pending verification. This assumes that .verified_email is a property that talks to the DB, and while it'd be better style in that case to make it an .is_verified_email() method to make that clear, using a property here is plausible and not incorrect.

It's not worth your (employer's) time to even check if that's the case. Leave the non-broken code as-is and return to your regular programming.


Add a comment so the next person reading the code can understand why it's built that way.

I was waiting for the paragraph that read "I'm not talking about pure functional languages because X, nor am I talking about Rust, because Y."

So, yeah, uncontrolled side-effects are a PITA. Solutions exist.


The D programming language has compiler support for this. Unfortunately due to legacy concerns it's opt-in, which makes it much less useful, and the designers have said that they regret not adding it to the language earlier when it could have been mandated.

It has the `pure' keyword, which can be used to annotate functions that don't have any side-effects other than mutating their arguments. If you want to disallow mutating arguments, you can just const them up. In the article's example, you could probably eliminate a lot of digging through code by only looking at the functions without a `pure' annotation and where `profile.verified_email' is accessible in a non-const argument.

The D designers call the property that the `pure' annotation enforces "weak purity" and I think it's a really useful notion for managing side effects in imperative languages. In C, for example, it just means that you can't mutate global (or local static) variables, or mutate anything through them (no mutating the value pointed at by a global).

Shame it didn't make it into Rust. I guess the rationale was that it made lambdas too complicated and didn't really come with much benefit (there aren't many more ways to violate weak purity in Rust than in C, and Rust actually forbids mutable globals anyway).

