
> > “In computer science, control flow (or flow of control) is the order in which individual statements, instructions or function calls of an imperative program are executed or evaluated. Within an imperative programming language, a control flow statement is a statement that results in a choice being made as to which of two or more paths to follow.”

> This can be reduced to:

> > “Imperative programs make a choice about what code is to be run.”

No. You left out the part about order, which is the most important part.




> Depends on your definition of "flow control".

Yep. That’s why this is a potentially useful rule of thumb for comparing programming languages, especially ones that skew very far in one direction or the other, but it’s not a rigorous definition of the concepts.

Some incredibly simple algorithms are difficult to explain naturally without it sounding like control flow. How would you describe the absolute value function without saying “if” or “when” or specifying one particular element from a set? Obviously any computer system needs to be able to determine what steps to take depending on specified conditions.
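To make that concrete, here is a branchless absolute value sketched in Python (a common bit-manipulation idiom, not anything specific to this thread): the comparison produces a data value rather than a jump, and arithmetic does the "choosing".

```python
def branchless_abs(x: int) -> int:
    """Absolute value with no if/else: the sign test yields a mask,
    and two's-complement arithmetic selects the result."""
    mask = -(x < 0)           # -1 when x is negative, 0 otherwise
    return (x + mask) ^ mask  # negates x exactly when mask == -1

print(branchless_abs(-7))  # 7
print(branchless_abs(7))   # 7
```

Of course, `x < 0` is still a condition; the point is that it flows as data into the computation instead of steering which statements execute.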


> You can kinda treat branchless code as a form of flow control as well.

That's the common understanding of flow control; the other poster is right that it's not just conditionals. A few posters were under that misapprehension, but that's never been the common understanding of it. It's just that conditionals are where people typically worry about flow control, because otherwise execution is linear.


> I wonder if a language can be Turing-complete without flow control. My guess is no?

I can’t imagine why not. I think you have to make the distinction between explicit control flow in the syntax, which declarative languages don’t have (according to this rule of thumb), and the notion of a computer choosing to do one thing among several options depending on some condition. Of course any Turing-complete system can do the latter, but that’s totally unrelated to the syntax of the programming language.


> The core of Functional Programming is thinking about data-flow rather than control-flow

I've spent a small amount of time trying to convince people that FP is a byproduct of immutability and not much else. But I like this description better than that. I like that it captures the idea of dependencies. Data flow, and FP, are about explicitly passing all the required dependencies into any step in the process, and getting back all the results - as opposed to having any implicit inputs or outputs. Making all dependencies explicit will force you into writing immutable code, and likewise writing exclusively with immutable data and functions that don't have side effects will automatically make you spit out code with explicit dependencies as a byproduct. These are two sides of the data-flow coin.

I also like that the focus on data-flow makes it clear you can do FP in any language. Some languages definitely help you, but you absolutely can practice data-flow over control-flow in C and assembly almost as easily as in Python. It's just tempting not to when you don't have to; being more strict about FP takes more self-control.
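A tiny sketch of that contrast in Python (the function names are illustrative, not from the thread): the first step leans on shared mutable state, i.e. implicit inputs and outputs, while the second passes every dependency in and returns every result back.

```python
# Implicit data-flow: the step reads and writes module-level state.
totals = {"sum": 0}

def add_mutating(x):
    totals["sum"] += x  # hidden input (totals) and hidden output (totals)

# Explicit data-flow: everything the step needs comes in as an argument,
# and everything it produces comes back as the return value.
def add_pure(total, x):
    return total + x

total = 0
for x in [1, 2, 3]:
    total = add_pure(total, x)
print(total)  # 6
```

Nothing stops you from writing the second style in C or assembly; the language just doesn't push you toward it.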


>Those are still executed in some order.

Not really. E.g. there's no ordering in function composition that isn't also in high school math. Also, you shouldn't depend on e.g. maps processing elements in a given order.

My impression is that statement ordering is easier for mediocre-but-competent programmers than it is for excellent ones. The reason is that writing correct imperative code is non-trivial, but writing correct-ish imperative code is pretty easy. So someone who's carefully thinking things through while learning will have a more difficult time with simple assignments, because -- at least while learning -- they're essentially trying to construct the correctness proof in their head. And that's pretty hard for imperative code.


> So, ya, the question is "How do you pass information between sub-routines".

Nitpicking now, but phrasing it that way presupposes the existence of sub-routines. I would prefer to phrase it as just thinking about control-flow and data-flow. Most programming languages have complex control-flow and constrain data-flow to follow along with control-flow. Most successful end-user languages have arbitrary but static data-flow and little to no control-flow.

> Thank you for taking the time to have this discussion.

If nothing else, I find that these sorts of discussions tend to be really useful to clarify my thinking and catch my mistakes. Civil disagreement is much more interesting than ideological agreement or disagreement :)


> It looks like flow based programming is exactly the same thing as functional composition.

You can do flow based programming by composing functions, but - like I mentioned in my previous comment - it is possible to accomplish without any sort of functional programming at all (unless you consider Bash and Perl and C to be functional programming languages, of course!).

> Can you illustrate to me how these two concepts are disparate?

Say you have a connection between three elements in the flow, like so:

    step1 -> step2 -> step3
Each of these steps could be defined functionally (assuming that the maps read lazily):

    step1 = [stream of digits of pi]
    step2 = map { _ * 2 }
    step3 = map { _ / 2 }
    flow = step3 step2 step1  # map [ map [3 1 4 ...] { _ * 2 } ] { _ / 2 }
Or they could be done procedurally:

    step1 = while [read digit from digits of pi] { write [digit] to stdout }
    step2 = while [read input from stdin] { write [input * 2] to stdout }
    step3 = while [read input from stdin] { write [input / 2] to stdout }
    flow = step1 | step2 | step3  # each digit -> digit * 2 -> digit / 2
With the pipe effectively defined as follows:

    | = spawn left; spawn right; while [read input from left] { write input to right }
The key thing here, though, is that whether the underlying language is procedural or functional or whatever has very little relevance; what actually matters is that each of those steps can run concurrently, continuously listening for new inputs and sending new outputs. The functional approach does this by lazily reading values from a sequence. The procedural approach does this by more explicitly running the three loops concurrently (whether with preemptive processes/threads or via coroutines / green threads).
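The lazy-functional variant of the three-step flow above can be sketched with Python generators (the pi stream is stubbed out with a few hard-coded digits): each stage pulls values on demand, so all three stages run interleaved without any explicit threads or processes.

```python
def digits_of_pi():
    # Stand-in source stream: the first few digits of pi.
    yield from [3, 1, 4, 1, 5]

def step2(stream):
    for x in stream:
        yield x * 2

def step3(stream):
    for x in stream:
        yield x / 2

# flow = step3 . step2 . step1, evaluated lazily one element at a time
flow = step3(step2(digits_of_pi()))
result = list(flow)
print(result)  # [3.0, 1.0, 4.0, 1.0, 5.0]
```

Swap the generators for OS processes reading stdin and writing stdout and you get the procedural pipeline version; the flow graph is the same either way.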

This all assumes there ain't any branches in that flow graph, though; it gets a lot harder to represent a branching flow graph through functional means (not impossible, but most functional languages that delve into this sort of thing typically use Erlang-style actors and message passing instead of trying to represent it directly with function composition), while in the procedural case it's just a matter of deciding which output stream to use.


> What does it mean for control flow to be problematic?

The idea is that we're embedding a DSL in Haskell by evaluating Haskell expressions symbolically within the DSL. Instead of doing a computation in Haskell and providing a value, we want to output a program in some other language from Haskell and then run that. This other program can have totally different semantics from normal Haskell: it might be a description of hardware (like Kansas Lava), it might be efficient C code, it might be an SMT formula... etc.

A nice, lightweight way to do this is to define a type where operations produce expressions in your target language. We can overload a bunch of things like + to do this; 1 + 2 would be evaluate to an AST along the lines of `Plus (Literal 1) (Literal 2)` which can then be treated as a circuit or a logic formula or whatever. This is called a "deep embedding".
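The comment's examples are Haskell, but the "overload operators to build an AST" trick can be sketched in Python with `__add__` (the `Literal`/`Plus` names mirror the comment's `Plus (Literal 1) (Literal 2)`; everything else here is illustrative):

```python
from dataclasses import dataclass

class Expr:
    # Overloaded + builds an AST node instead of computing a number.
    def __add__(self, other):
        return Plus(self, _lift(other))
    def __radd__(self, other):
        return Plus(_lift(other), self)

@dataclass
class Literal(Expr):
    value: int

@dataclass
class Plus(Expr):
    left: Expr
    right: Expr

def _lift(x):
    # Wrap plain Python ints so mixed expressions like `Literal(1) + 2` work.
    return x if isinstance(x, Expr) else Literal(x)

ast = Literal(1) + 2
print(ast)  # Plus(left=Literal(value=1), right=Literal(value=2))
```

The control-flow problem described below shows up immediately in this sketch too: a host-language `if` around these values picks one branch eagerly, so only that branch's AST is ever built.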

The problem with control flow is that it happens at the Haskell level, not at your symbolic level. So `if (x > y) then 1 else 2` will either evaluate to 1 or to 2, but we actually want it to evaluate to the AST representing the whole conditional expression. In a sense, we want to "take both branches" symbolically.

A good way to think about this is that actual Haskell computations become compile-time operations for your DSL. Compile-time control flow can change the result of the compilation, but it can't depend on runtime information—and, of course, we definitely want to have control flow at runtime!

In the case of `if` we can still make this work by defining our own symbolic boolean type and overloading the if-then-else syntax, but we simply can't do this for pattern matching within the confines of Haskell. Haskell patterns are not first-class citizens, and the language does not give you any way to talk about what pattern matching means within the language (except maybe something really hideous involving Template Haskell).

As to your broader question: deeply embedding DSLs in Haskell is a really broad and powerful technique, but it also has some real limitations. The control flow thing is one; another is dealing with common subexpression elimination. For example, if x gives you a big AST of some sort, you'd probably like x + x to give you an AST that realizes the two xs are the same and unifies them somehow but actually getting this structure from a Haskell expression is awkward at best[1].

However, the technique has one major advantage: it is really easy to use. It's much less work to build a nice deeply embedded DSL than it is to write a full compiler; if the above problems aren't a big deal, you're getting a lot of structure for free by relying on Haskell—you can rely on Haskell's types and type inference, you don't have to write a parser, you get Haskell as a powerful compile-time metaprogramming language and you can use things like Haskell's rewrite rules for lightweight optimizations.

It's a great tool to have in your arsenal because it really lowers the barrier to writing a DSL, which is an incredibly powerful technique for a whole host of real-world problems.


> I wonder if a language can be Turing-complete without flow control. My guess is no?

I agree, unless something like pattern-matching does not count as control flow. But then isn't a conditional just a basic pattern? Why does it get a pass?


>Most of the non-functional "programming" in Flow based programs actually occurs at the level where you are composing the connections between processes (be that the GUI, or in a hand-written method which makes the connections).

This is false; there are entire languages that do line-by-line compilation via Flow-Based Programming.

>One additional benefit of FBP which it shares with traditional FP programs is that they are capable of exhibiting "lazy" execution

This is blatantly wrong and the main thing I was trying to express.

Flow-Based Programming is based on a topological analysis of the code body, which is an analysis of object dependencies (where a dependency is code that must be executed before the object (code) in question can run).

Meaning objects are executed in order of how few dependencies they have. This means one-off sections of code -- code that does not need external input, or that exists independently of otherwise coherent processes -- gets processed first.

For example (Example 1)

        Path p = resolve_path();
        if(something_happened())
        {
               FP fp = open(p, "rw");
               write(fp, "logloglog");
               close(fp); 
        }
The above code is bad, because path p is always resolved, regardless of error state.

A currying analysis (lazy evaluation) will change this flow, or ideally should:

(example 2)

        if(something_happened())
        {
               Path p = resolve_path();
               FP fp = open(p, "rw");
               write(fp, "logloglog");
               close(fp); 
        }
A topological flow-based compiler will prefer that path p be constant. It will prefer example 1 to example 2 every time (if it's given example 1 as base code), and will likely ensure that path p is resolved absolutely first in the function, as resolve_path() depends on no code being executed before it.

This is the flaw in most flow-based programs: they don't catch errors like this, and when the errors do exist, the optimization makes them more pronounced.

:.:.:

This is what I mean by a topological analysis being the opposite of a currying analysis. They group code differently, in an almost opposite fashion: currying tries to group code by purpose, topological analysis by how much code has to be executed before it.

Both have their uses.


> Any control flow program can be converted into a matrix multiplication.

Any control flow program, on a set of inputs for which it terminates[0], can be. But it's not true that any control flow program with unspecified inputs can be, even with variables standing in for the inputs, since the shape of the multiplication as well as the values of elements in the matrices may depend on the inputs (also, whether the program even terminates can obviously depend on the inputs).

[0] and any program which always terminates can also be, though the general form may be more complex than is necessary for some inputs.


> In most programming languages, the order of the two statements would be well defined, and neither would be dead code.

There are no statements in the above examples, just expressions (composed of other expressions). Evaluation order of expressions is not well defined in "most popular languages", e.g. consider this expression:

  printSecondArgument(
    query(connection, "INSERT INTO t1 VALUES ('abc')"),
    query(connection, "SELECT * FROM t1")
  )
> Trying to make all statements implicitly concurrent unless they have explicit dependencies is a terrible way to complicate your life

I agree, that's one reason why I dislike statements (see my list in a parent comment)

> It is obvious to everyone that distributed code, eventual consistency, and other similar non-totally-ordered examples are much harder to get right than procedural code.

This is a category error. All code is distributed; the world is partially-ordered. "Procedural code" (i.e. serial/sequential execution) is a strategy for dealing with that. It's particularly easy (give each step a single dependency; its "predecessor"), but also maximally inefficient. That's often acceptable when running on a single machine, and sometimes acceptable for globally-distributed systems too (e.g. that's what a blockchain is).

Forcing it by default leads to all sorts of complication (e.g. multithreading, "thread-safety", etc.). Making it opt-in gives us the option of concurrency, even if we write almost everything in some "Serial monad" (more likely, a continuation-passing transformer).


>> rather than just composing your functionality out of direct function and method calls.

It's called imperative programming. Funny how rarely you see the word.


> With that reasoning, all code is declarative because it's "declaring to the interpreter/compiler" how to operate.

Sort of! All code is also imperative, eventually, at the machine code level. This is a perfect example of how useless the whole "imperative vs declarative" distinction is. Nearly everything in a computer is both imperative and declarative, in some fashion, at some point.

These terms were not made to be some concrete and inviolable paradigm of computing. Some academics just wanted to tell people to write programs where you didn't have to spell out every instruction to the compiler, so they made this crude distinction. Things like a function called give_me_a_temporary_file(), rather than writing out an entire function yourself to create and return a temporary file. But both are executing steps imperatively under the hood. So we shouldn't make a big deal about these terms or programs that do more of one or the other.

The differences that I'm pointing out are 1) declarative does not describe a flow, the flow is under the hood; and 2) configuration files do not actually perform steps, they merely describe arbitrary data, and only the program interpreting the configuration can determine if the result is imperative or declarative. For some programs, the exact same configuration file may result in either imperative or declarative execution.


> The programming style is very imperative

Is that supposed to be a meaningful statement?


> What's not so easy to understand for an imperative programmer is this: Why should I jump through those hoops just to have a readonly structure? Why not just, you know, program imperatively?

Because imperative programs are rife with "statements": opaque lines of code that may or may not do something, somewhere, and which cannot be manipulated as first-class objects.

Why would you ever wanna program with that weird restriction?


> Decision making(a.k.a. control structures) in Chaos language is only achievable on function returns for the sake of zero cyclomatic complexity.

This is just moving the control flow to the end of the function right? Switch-case statements also have cyclomatic complexity, right?


> We don't trip over ourselves when writing imperative code

Speak for yourself. Though I guess I'm more concerned with refactoring than with writing. In particular, in imperative code it's very hard to tell the difference between semantically important ordering and accidental ordering.
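A hypothetical Python illustration of that distinction: the first two statements are independent and could be swapped freely; the second two cannot be; and nothing in the syntax tells the refactorer which is which.

```python
raw_name = "  alice  "
items = ["a", "b"]

# Accidental ordering: these two statements have no data dependency,
# so swapping them changes nothing -- but the syntax doesn't say so.
name = raw_name.strip()
count = len(items)

# Semantically important ordering: the append mutates items, so the
# len() must come after it -- yet the code looks just as innocuous.
items.append("c")
count = len(items)

print(name, count)  # alice 3
```

In a data-flow style the dependency (or lack of one) is visible in what each step takes and returns, which is exactly what makes refactoring safer.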

> when we do, it mostly has to do with IO or other OS state, and monads won't help you there.

Yes they will? It's very normal to encapsulate something of that nature with a monad.


> This is exactly the kind of problem that functional programming was supposed to help us avoid!

No, no it isn't. Just like you wouldn't expect 2 * 3 + 8 to suddenly return 22 when you intend it to, you shouldn't expect the order in which you apply functions to not matter.

When you pass around a changing state object to every step of your program, you are effectively doing normal imperative programming with global-ish state.
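That point can be sketched in Python (all names hypothetical): threading one big state value through every step reintroduces exactly the ordering sensitivity of a mutable global, because each step's output is the next step's input.

```python
def deposit(state, amount):
    # A "pure" step, but the entire program state flows through it.
    return {**state, "balance": state["balance"] + amount}

def double_balance(state):
    return {**state, "balance": state["balance"] * 2}

state = {"balance": 10}
# Swapping these two lines yields 25 instead of 30 -- the threaded
# state imposes an order just as a mutable global would.
state = deposit(state, 5)
state = double_balance(state)
print(state["balance"])  # 30
```

The functions are referentially transparent, but the program as a whole is still a sequence of state transitions, so reordering them is just as hazardous as in imperative code.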

