> The point here is that a transactional system is more powerful than RAII
my point was that RAII trivially maps to transactions (destructors do rollback and commit is explicit). Transaction RAII objects can be composed to build more complex transactions.
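Something like this minimal sketch, with placeholder names (no particular library):

  // Scope-guard-style transaction: rollback is the default path,
  // commit must be requested explicitly before destruction.
  class Transaction {
      bool committed_ = false;
  public:
      Transaction()  { /* begin(); */ }
      ~Transaction() { if (!committed_) { /* rollback(); */ } }
      void commit()  { /* do_commit(); */ committed_ = true; }
      Transaction(const Transaction&) = delete;
      Transaction& operator=(const Transaction&) = delete;
  };

  void transfer() {
      Transaction tx;
      // ... mutate state; any exception leaves tx uncommitted ...
      tx.commit();  // reached only on success; otherwise ~Transaction rolls back
  }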
> If you don't use shared ownership, then you're basically limited to lexical scoping.
why would you say that? A common use case is objects that, when manually removed from a collection, trigger cleanup actions: closing sockets, automatically de-registering from event notifications, sending shutdown events, rolling back transactions. That has nothing to do with lexical scoping.
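For instance, a rough sketch with made-up names:

  #include <map>
  #include <memory>

  struct Session {
      ~Session() { /* close socket, de-register callbacks, ... */ }
  };

  std::map<int, std::unique_ptr<Session>> sessions;

  void on_disconnect(int id) {
      sessions.erase(id);  // destructor runs here, triggered by an
                           // event, not by any lexical scope ending
  }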
> my point was that RAII trivially maps to transactions (destructors do rollback and commit is explicit). Transaction RAII objects can be composed to build more complex transactions.
RAII is limited in that it's tied to object lifetime, whereas more general transactional semantics can be linked to more general semantic conditions. Also, RAII does not have a good way to distinguish between commits and aborts.
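For reference, the closest C++ idiom I'm aware of (post-C++17) infers abort vs. commit from whether an exception is in flight, which only underlines how indirect it is. A rough sketch:

  #include <exception>

  // The destructor guesses commit vs. abort by checking whether more
  // exceptions are in flight now than when the guard was created.
  class TxGuard {
      int exceptions_at_entry_ = std::uncaught_exceptions();
  public:
      ~TxGuard() {
          if (std::uncaught_exceptions() > exceptions_at_entry_) {
              // stack unwinding in progress: treat as abort
              /* rollback(); */
          } else {
              // normal exit: treat as commit
              /* commit(); */
          }
      }
  };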
> why would you say that? A common use case is objects that, when manually removed from a collection, trigger cleanup actions: closing sockets, automatically de-registering from event notifications, sending shutdown events, rolling back transactions. That has nothing to do with lexical scoping.
If you use an explicit action to trigger destruction, then this doesn't have to be a deletion. In fact, having it tied to object destruction is unnecessarily limiting. The general rationale for having RAII is that it happens automagically; if explicit disposal is needed, then much of that rationale goes away.
In general, it seems to me that you don't have actual experience with resource management outside of C++, so you're mostly speculating about what it's like and trying to force your thinking about it into a C++-like model.
> In general, it seems to me that you don't have actual experience with resource management outside of C++, so you're mostly speculating about what it's like and trying to force your thinking about it into a C++-like model.
Yes, I'm a C++ programmer. I have experience with resource management in C# and Python, for example, and it is a pale shadow of what is possible in C++.
I know nothing of resource management in functional languages, especially regarding transactions, and I would love to read more about it if you have some pointers (ah!).
There's an interesting example of resource management in Haskell with monads [1], but it's probably not easy to follow if you aren't already steeped in Haskell lore, so let me be a bit more basic.
First, note that it would not be particularly hard to add RAII on top of a garbage-collected language to coexist with GC for resource management; it's just not done in practice. And it's not because language designers are ignorant of it (Bjarne Stroustrup's Design and Evolution of C++ is part of the standard recommended reading list in the field).
Generally, you want resource usage to be a provable property of a program. Not that you'd actually write a formal proof, but you generally want to be able to explain at least informally why resource usage follows certain constraints (e.g. having certain upper bounds).
The basic insight that you need is that resource lifetime is just another semantic property that you can handle with basically the same techniques as other properties of programs; you do not need special language support for it (though, obviously, it helps if your language is a bit more expressive than a Turing machine :) ).
This means that you'll generally tie resource usage to program state and program behavior that you can reason about. The incidental semantics of object lifetime can be dangerous, especially in a functional language, as object lifetime can sometimes be unpredictable.
One of the major hiccups is closures. Closures capture their environment (including local variables), and if they survive the stack frame that created them (because they are returned or stored on the heap), then the lifetime of any captured object can be extended in a fairly unpredictable fashion. Obviously, that is not a good thing, as you have a hard time proving lifetime properties, but few functional programmers would limit themselves to a more trivialized use of closures just for the sake of RAII.
Instead, as I said, you tie resource management to program behavior or state. In the most simple case, that can be scoped resource management. But it can also be an LRU cache, a system based on transactions, or something else entirely. Here's a simple example of a library I sometimes use in OCaml:
  class example = object
    inherit Tx.resource
    initializer print_endline "create"
    method release = print_endline "close"
  end

  let _ = Tx.scoped (fun () -> new example)
This is a simple lexically scoped transaction, but the library also allows for chained, nested, etc. transactions that don't begin and end based on lexical scope, but on (say) program events (e.g. terminating a transaction when a socket is closed from the outside and releasing the resources associated with that connection). It can also distinguish between commit and abort behavior (similar to the Haskell example above), will properly error out if resource creation is not done within the context of a transaction, plus a few other bells and whistles.
15 years as a C++ dev and I agree with gpderetta: cleaning up all manner of resources this way is awesome. Even Bjarne, the creator of C++, agrees that cleaning up resources in destructors is good. The standard committee agrees too, because things like std::lock_guard and the custom deleters on shared_ptr and unique_ptr exist and work with many kinds of resources.
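A minimal sketch of the deleter idiom (file name made up):

  #include <cstdio>
  #include <memory>

  // A deleter that closes a C stdio stream instead of calling delete.
  struct file_closer {
      void operator()(std::FILE* f) const { std::fclose(f); }
  };
  using file_ptr = std::unique_ptr<std::FILE, file_closer>;

  int main() {
      file_ptr f(std::fopen("data.txt", "r"));
      if (!f) return 1;
      // ... read via f.get() ...
      return 0;  // fclose runs automatically, even on early returns
  }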
If you have issues managing lifetimes, I can see why you might think explicit resource cleanup is better, but with so many kinds of scope (even thread-local scope), and with move semantics' ability to move an object into new scopes, there really is no limitation imposed by tying resource cleanup to object lifetime. If you don't like that, then make your own classes to do it explicitly.
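For example, a rough sketch (hypothetical Connection type; assumes a POSIX close()):

  #include <unistd.h>   // POSIX close(); an assumption, not portable
  #include <utility>
  #include <vector>

  struct Connection {
      int fd = -1;
      explicit Connection(int fd) : fd(fd) {}
      ~Connection() { if (fd != -1) ::close(fd); }
      Connection(Connection&& o) noexcept : fd(std::exchange(o.fd, -1)) {}
      Connection& operator=(Connection&& o) noexcept {
          if (this != &o) {
              if (fd != -1) ::close(fd);
              fd = std::exchange(o.fd, -1);
          }
          return *this;
      }
      Connection(const Connection&) = delete;
  };

  std::vector<Connection> pool;

  void accept_one(int fd) {
      Connection c(fd);             // created in this scope...
      pool.push_back(std::move(c)); // ...but ownership moves into the pool
  }   // no cleanup here: c was moved from

  // The socket is closed only when its Connection is destroyed, e.g.:
  //   pool.erase(pool.begin());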
And as I said before, if your entire perspective comes from C++, it may be too narrow. I'll give you two examples:
1. Modern functional programming languages generally come with compacting, generational garbage collectors that have bump allocators. This means in particular that the cost of heap allocations for temporaries is only marginally higher than that of alloca() and has good locality even for pointered structures (to the point where linked lists can outperform dynamically resized arrays such as std::vector, which is basically unheard of in C++). When heap and stack allocations are that competitive, that opens up a whole new set of techniques that aren't normally used in C++ and lifetime considerations become a lot more complex.
2. Functional programming languages use closures extensively, and closures can have effects on object lifetimes that are difficult to predict. The reason is that closures capture their environment – in particular local variables – and if they survive the stack frame that generated them, this can lead to objects living much longer than you think. It's a major reason why closures and RAII don't get along well (note that C++ didn't have closures until recently and in practice their use is much more constrained than in functional or multi-paradigm languages).
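You can see the effect even with C++ lambdas; a minimal sketch:

  #include <functional>
  #include <memory>

  struct BigResource { /* ... */ };

  std::function<void()> make_closure() {
      auto r = std::make_shared<BigResource>();
      // The lambda captures r by value: as long as the closure exists,
      // the BigResource stays alive, regardless of this stack frame.
      return [r] { /* use *r */ };
  }

  // The resource now lives exactly as long as this std::function does,
  // which may be arbitrarily long and hard to predict statically.
  std::function<void()> f = make_closure();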
This does not mean that you do not want to have sane resource handling. But in general, you want resource usage to be a provable property of a program, so you will generally tie resource management to program state or program behavior rather than incidental language semantics.
First, allocation is only that cheap if it's temporary. If objects survive minor collections, then there's additional cost, as they get promoted to the major heap. The key idea I'm getting at is that with temporary objects being cheap, you have more flexibility in creating temporary data structures: you don't have to fit them within the constraints of a stack frame, you don't have to worry about stack overflow (unlike with alloca()), and they can be returned from a function without copying (unlike stack frame contents).
Temporary data structures will still be small and generally fit in the L1 cache of any reasonably modern processor. And using pointers does not mean that everything is a pointer, or that you're necessarily sacrificing ILP.
I don't discount the power of being able to cheaply create (short-lived) highly dynamic data structures. I do miss it in C++, and alloca never feels right.
> Modern functional programming languages generally come with compacting, generational garbage collectors that have bump allocators. This means in particular that the cost of heap allocations for temporaries is only marginally higher than that of alloca() and has good locality even for pointered structures
Interesting. Would you mind naming a few such languages? I'm guessing Haskell. What about OCaml? Any others?
I know that OCaml, Haskell, the JVM and Microsoft .NET do it (I think Mono does, too, but am not positive). And I know for a fact that OCaml and the JVM inline allocations and optimize multiple allocations that are close together (e.g. increasing the allocation pointer only once even if you allocate a pair of objects).
It's fairly common and needed for modern functional languages, as they can go through a lot of temporary objects when programming in a purely functional style.