I'm not sure that they are. You just need to model the concept of state as the function's input/output, as functional programmers are eager to do :)
If we start in state x, then apply operation f, the resulting state is f(x). If we apply operation f again, we'll be in state f(f(x)). If f is idempotent, then state f(x) and state f(f(x)) are identical.
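A toy example of such an f (invented here, just a sketch): clamping a number to a floor of zero is idempotent in exactly this sense, while incrementing is not:

    -- Clamping to a floor is idempotent: clamp (clamp x) == clamp x.
    clamp :: Int -> Int
    clamp x = max 0 x

    -- Incrementing is not: increment (increment x) /= increment x.
    increment :: Int -> Int
    increment x = x + 1

    main :: IO ()
    main = do
      print (clamp (clamp (-5)) == clamp (-5))        -- True
      print (increment (increment 0) == increment 0)  -- False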
> Well managed accessible state can be greatly simplifying.
Absolutely--and functional programming is one way to achieve that. From the perspective of a given function, all "state" is captured in its inputs and outputs. You know exactly what existing state can be used by the function (inputs) and exactly what state can be generated by the function (outputs) simply by looking at the signature.
Idempotence is usually used when talking about effectful code. Modeling the state as part of the input/output seems to run counter to that purpose, because that's how you create a pure model of an effectful program. It's useful to talk about idempotence when running the function zero times is different from running it once, and that distinction is hard to express in that framing. As you say, "on the same input" is difficult to reconcile if the state is part of the input.
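For a concrete effectful example (using the real System.Directory API; the directory name is made up): createDirectoryIfMissing is idempotent -- one run and two runs leave the filesystem in the same state -- but zero runs is observably different:

    import System.Directory (createDirectoryIfMissing)

    main :: IO ()
    main = do
      -- First call may create the directory; the second observes it
      -- already exists and does nothing. One run and two runs produce
      -- the same final filesystem state; zero runs may not.
      createDirectoryIfMissing False "build-output"
      createDirectoryIfMissing False "build-output"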
If we consider the state to be part of the input, then it's easier to say that when f(x) = y, then f(y) = y as well. In other words, f(f(x)) = f(x), which, as it happens, is literally the mathematical definition of idempotence that the programming concept is based on.
In any case, this still gets to the same overall conclusion: the point is to arrive at a specific destination, not to move towards it. That shift of perspective is helpful in framing problems towards an idempotent solution.
Author here. State is not immutable, it is mutated in place via atomic swap. That makes it somewhat different from both functional programming and imperative programming, but it's not an excessively difficult programming style to use in practice.
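A rough Haskell analogue of that style (not the actual implementation, just a sketch using Data.IORef's atomic swap):

    import Data.IORef (newIORef, atomicModifyIORef', readIORef)

    main :: IO ()
    main = do
      ref <- newIORef (0 :: Int)
      -- Mutate in place, but only via an atomic swap: the update
      -- function is pure, and the reference is atomically swapped
      -- to point at the new value.
      atomicModifyIORef' ref (\s -> (s + 1, ()))
      readIORef ref >>= print  -- 1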
I was trying to respond specifically to didibus's obvious misunderstanding: they thought an increment function might be designed to be idempotent. That doesn't make any sense, whether you know the CS meaning of idempotence or not.
One significant issue here is the question of a priori vs. a posteriori properties. When thinking algebraically about software, it's often a case of defining things in an a priori way in order to get the right laws to apply to operations. Idempotence is one such law.
The other angle on this is the a posteriori angle, where you note that something behaves a certain way and use an appropriate word to describe that behaviour.
My reading of didibus's comment was that it was basically about imperative programming, and that there was some confusion around the situations in which the concept of idempotence applies (incrementing a variable isn't one of them).
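To make the a priori angle concrete: a law like idempotence can be written down up front as an executable property and then checked against observed behaviour. A sketch with QuickCheck (normalize is an invented stand-in for whatever operation you claim is idempotent):

    import Test.QuickCheck (quickCheck)

    -- A hypothetical operation we claim is idempotent.
    normalize :: Int -> Int
    normalize = abs

    -- The idempotence law as an executable property: f (f x) == f x.
    prop_idempotent :: Int -> Bool
    prop_idempotent x = normalize (normalize x) == normalize x

    main :: IO ()
    main = quickCheck prop_idempotent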
I think you are trying to introduce the State monad and the technique of modelling a stateful program by chaining pure functions. That's fine, but it does introduce a level of complication that is not implied by a straightforward comparison of memoized functional vs. idempotent imperative code.
I mean "state" in the sense that we can distinguish time-dependent systems with internal state or memory from time-independent systems, whose outputs are a pure function of their inputs. Combining pure functions to create pure functions does not introduce any state, in this sense.
An entire program may have no observable state, like a command-line tool that reads stdin and writes stdout with no particular timing. It does have state inside it, of course, but only because it's running on a giant state machine. We can fully describe its behavior without making reference to any internal states, using functions or relations to talk about the relationship between its input and its output.
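In Haskell this shape is available directly: interact builds a whole program whose observable behaviour is just a pure function from stdin to stdout. A minimal sketch:

    import Data.Char (toUpper)

    -- The program's observable behaviour is a pure function from its
    -- input text to its output text; no internal state needs to be
    -- mentioned to describe it.
    main :: IO ()
    main = interact (map toUpper)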
> I would say that the main difference between TLA+ and the functional approaches, is that in TLA+ you reason purely mathematically, while in the functional approaches you reason within a (complex) programming language.
This statement puzzles me, because we have great math for reasoning about pure functions (mappings). We have lambda calculus and category theory. You don't need a state machine to prove QuickSort works in a purely mathematical way, either, just statements about the equivalence of functions. The recursion can be handled using the fixed point operator and the algebraic laws it obeys.
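As a sketch of what that looks like, here's the textbook pure quicksort; statements about it (e.g. that sorting a sorted list changes nothing) can be argued by equational reasoning and structural induction, with no machine states in sight:

    -- The textbook pure quicksort: a mapping from lists to lists.
    -- Its correctness can be argued equationally, without modelling
    -- any intermediate machine states.
    quicksort :: Ord a => [a] -> [a]
    quicksort []     = []
    quicksort (p:xs) = quicksort [x | x <- xs, x < p]
                       ++ [p]
                       ++ quicksort [x | x <- xs, x >= p]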
Edit: In fairness, existing "functional approaches" to specification do seem complex and look like programming languages.
I came across an old comment of yours (https://news.ycombinator.com/item?id=11835178) that seemed to support the idea that FP and TLA+ have strengths for different kinds of programs, depending on whether they are more transformative or more reactive. I'm just as happy if learning more about TLA+ turns my idea of this on its head, though!
Pure functional programming's core delusion is pretty clear as well: all state is immutable.
There is no "state" in functional programming, there is function application. Your program is a function from some value (inputs) to some value (the result). You write that program by composing various smaller programs with operators like (.) and (>>=) (or if you like being generic, (>>>)). There is no data, there is just function application.
So then, "state" becomes an imaginary concept that sometimes makes things easier to think about; if there is a sequence of computations that logically operate on the same "thing", then you call that "thing" the state. The State monad is nothing but syntax sugar that lets you very explicitly pretend that there is a such thing as state; something you can "get" at and "put" into as needed. This abstraction, BTW, is no different than state in any other language (which is why it comes up so much in hastily-written Haskell programs; you can just cut-n-paste from C and change the spelling of some keywords), except instead of "get" and "put", you just use variable names and "=". Otherwise, exactly the same concept and semantics, and very similar syntax everywhere.
In procedural languages, state is important because the procedures abstract CPU instructions, which operate on little boxes called "registers". But in functional programming, your program is not a list of things to do to a register, it's just functions. So state is not really meaningful anymore. Don't program the machine, program your algorithm!
I've yet to be convinced that Haskell actually gains very much from its statelessness; I understand that in principle there are nice optimizations it can enable, and parallelization becomes rather simple, but I'm not seeing it beat the pants off of impure functional languages in speed or clarity.
But nothing is ever going to "beat the pants off" hand-optimized assembly in a benchmark; there is only so much work a computer can do, after all. In the real world, however, people don't have time to spend hundreds (or thousands!) of hours optimizing four lines of code. In that case, you'll find that these theoretical optimizations are great -- you write code the natural way, and the optimizer turns it into something closer to the optimized-for-hundreds-of-hours assembly. C and Java compilers just can't do that, because they can't make any assumptions about what you mean; they can only work with what you say (since those languages are basically high-level lists of instructions for the CPU to perform).
It's interesting to compare this with the push for functional + immutable system design, which leads to a similar result: writing programs that operate as "nextstate = f(state, inputs)".
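In Haskell that shape is literally a left fold over the inputs; the Event type below is invented for illustration:

    -- A made-up event type; the point is the shape of the loop.
    data Event = Deposit Int | Withdraw Int

    -- nextstate = f(state, input)
    step :: Int -> Event -> Int
    step balance (Deposit n)  = balance + n
    step balance (Withdraw n) = balance - n

    main :: IO ()
    main = print (foldl step 0 [Deposit 10, Withdraw 3, Deposit 5])  -- 12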
Does it still count as idempotent if you achieve it by adding a timestamp parameter to each function to specify which version of the global state you want to use?
If your functions are "stateless" (by which I assume you mean they don't depend on or modify outside state), then they are pure functions so... that's exactly FP?
I'm sorry but what you're saying is immediately obvious. No one in the functional programming world is laboring under the impression that state doesn't exist.
The real issue is whether you think it's worth separating out code that can be reasoned about in a mathematical fashion from stateful code that has to deal with mutating concerns. Functional programming is where the research on abstractions that better allow this separation of concerns is happening, which is pretty much the opposite of a clumsy hermetically-sealed bubble rolling around and knocking lamps over, etc.
Author here. It's not "pure" functional programming in that state exists and is mutable in place, but only via atomic swap - you can interpret the sequence of states as a sequence of monads, but the evaluation semantics are different.
Not quite. The computer you're referring to has one state before an event, and another state afterwards.
Modeling those as two distinct, immutable states provides enormous benefits over modeling it as a single state that "mutates". Doing this eliminates a large class of serious bugs.
The reason for these benefits is that far from trying to "escape the reality that the known universe and everything that happens within it is stateful," functional programming uses a more rigorous model of state that better reflects the relationships between states over time.
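A small sketch of the idea (the Account type is invented for illustration): the states before and after an event are two separate values, so the old one can still be inspected, compared, or logged:

    data Account = Account { balance :: Int } deriving Show

    -- An event takes the state before and produces the state after;
    -- it never destroys the old state.
    deposit :: Int -> Account -> Account
    deposit n acct = acct { balance = balance acct + n }

    main :: IO ()
    main = do
      let before = Account 100
          after  = deposit 25 before
      -- Both states still exist; nothing was overwritten.
      print (before, after)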
Yes, that was actually my point. State is essentially function composition. Thus I'm arguing for avoiding dependencies between conceptually independent compositions.
Eh, syntactically it hides the state, in the sense that `f x` has the same value every time, but calling functional programming stateless is a stretch; I consider it a half-truth told to newcomers to help them understand the concept. Functional languages still have state, but it's abstracted into function calls. E.g. if you have a recursive descent parser written in Haskell, your program still has state: you won't be in the same conceptual state every time you're in the `expr` function.
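For example, in a hand-rolled toy parser (a sketch, not a real parser library), the "state" -- the unconsumed input -- is threaded through arguments and results rather than mutated, but it is still conceptually state:

    import Data.Char (isDigit)

    -- The unconsumed input is the parser's "state", passed in and
    -- returned rather than mutated in place.
    number :: String -> (Int, String)
    number s = let (digits, rest) = span isDigit s
               in (read digits, rest)

    main :: IO ()
    main = print (number "42+1")  -- (42,"+1")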
I think the lack of mutation is a lot more pronounced, since it has important implications for runtime (a necessary O(log n) slowdown) and requires a more sophisticated type system (e.g. a linear type system) in order to emulate safe mutation.