
Erm, it is perfectly possible to perform dynamic allocation in Ada. You have the 'new' keyword to allocate a new object of a type. You have the 'unchecked deallocation' mechanism. You have controlled types that deallocate when an object goes out of scope. You have all sorts of weak-reference schemes in some libraries. You have storage pools to handle allocation specifics for a type. You have the secondary stack that handles returning objects whose size is unknown at the call site.

Most of those can be disabled, though, through the 'restrictions' mechanism (look up pragma Restrictions, which is very interesting in itself).

SPARK itself can handle and prove some ownership properties, but to the best of my knowledge it isn't at the level of Rust in memory safety for dynamically allocated memory.




Depends on which dynamic memory you are talking about.

Ada can manage dynamic stacks, strings and arrays on its own.

For example, Ada has what one could call type-safe VLAs: instead of corrupting the stack as in C, you get an exception and can redo the call with a smaller size.

As for explicit heap types and Ada.Unchecked_Deallocation, yes if we are speaking about Ada 83.

Ada 95 introduced controlled types, which via Initialize, Adjust, and Finalize, provide the basis of RAII like features in Ada.

Here is an example of how to implement smart pointers with controlled types:

https://www.adacore.com/gems/gem-97-reference-counting-in-ad...

There is also the possibility to wrap heap allocation primitives with safe interfaces exposed via storage pools, as in this tutorial: https://blog.adacore.com/header-storage-pools

Finally thanks to SPARK, nowadays integrated into Ada 2012[0], you can also have formal proofs that it is safe to release heap memory.

On top of all this, Ada is in the process of integrating affine types as well.

[0] - Supported in PTC and GNAT, remaining Ada compilers have a mix of Ada 95 - 2012 features, see https://news.ycombinator.com/item?id=27603292


Ada’s memory management is a mix of memory pools, RAII, and manual memory management.

It often isn’t necessary to explicitly use pointers to dynamically allocated memory. For example, Ada lets you return an array of unknown size by value from a function. These unconstrained objects are stored on what is called a “secondary stack”, which is basically a special allocation pool for holding these objects managed by the language runtime.

When you do use pointers, the compiler is very strict about avoiding dangling pointers. You can also create custom pointer types and assign them to memory pools, so dynamic allocations of that pointer type get made in the pool (and not on the normal heap).

You can also use manual memory management (using the `new` keyword and the `Unchecked_Deallocation` function similar to malloc() and free()).

There is also a finalizer class you can inherit from that adds a destructor to record types, so you can use RAII, too.


Ada allows a function to return a stack-allocated array of runtime-dependent size. This alone removes a vast number of cases where one needs to allocate memory dynamically in C/C++.

I have always been puzzled why even C++ does not provide any facility like that out of the box.


What I meant by dynamic allocation was allocating an array of the necessary size. Last I checked, the Ada solution was to always allocate the largest possible array you would ever need every time. Correct me if I'm wrong, but in Ada if two arrays are different sizes, they are completely different types.

In practice, this means your Ada written image viewer can show you 1000 thumbnails, but no more. Or 10000, or some other hard limit. And you can only view images smaller than 2MP. Unless the author made the fixed sizes really large, but now you're burning 10MB of memory per image even if they're all tiny gifs.


Yes dynamic allocation is exactly what I'm concerned about. Deserializing text for example.

Yes. What you've written is exactly how to allocate an array of dynamic size on the heap. You can also do:

    declare
       My_Array : Foo_Array (X .. Y);
    begin
       ...
    end;

I assume you meant to say dynamic allocation?

It depends on your application.

If your application is fine with returning failed allocations and can handle that case, then it's okay. But if you have a real-time or safety-critical system, for example something running on a vehicle, robot, or flying device, you usually can't risk failed allocations. In that case the better bet is to reserve memory on the heap for your array and ensure that you stay within bounds, or within a certain threshold, by checking on each push to your array or by using a circular buffer.
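A minimal C sketch of the preallocated circular-buffer approach described above (all names here are hypothetical, and the float payload is just an example): memory is reserved once at startup, so a push can never trigger a failing allocation at runtime.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Fixed-capacity ring buffer: all memory is reserved once up front,
   so pushes never allocate and therefore can never fail to allocate. */
typedef struct {
    float *data;
    size_t cap, head, len;
} ring_t;

bool ring_init(ring_t *r, size_t cap) {
    r->data = malloc(cap * sizeof *r->data);  /* the only allocation */
    r->cap = cap; r->head = 0; r->len = 0;
    return r->data != NULL;
}

void ring_push(ring_t *r, float v) {
    r->data[(r->head + r->len) % r->cap] = v;
    if (r->len < r->cap)
        r->len++;
    else                                      /* full: overwrote oldest */
        r->head = (r->head + 1) % r->cap;
}

bool ring_pop(ring_t *r, float *out) {
    if (r->len == 0) return false;
    *out = r->data[r->head];
    r->head = (r->head + 1) % r->cap;
    r->len--;
    return true;
}
```

Overwriting the oldest entry on overflow is one policy; rejecting the push is the other common choice for safety-critical code.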

There are many approaches to solving a problem, and many ways to implement the solution, but the path you take usually depends on your application and your constraints. Think about which set of tradeoffs with respect to performance, safety, and memory usage makes the most sense for what you are trying to do.


Dynamic memory allocation is a runtime operation and can always fail, so this cannot in general be a compile-time operation if you also want to allow dynamic memory allocation.

Maybe there are tools that will do that. My knowledge of the industry is about 5 years out of date, and I was no expert even then. But we had no such solution and, in fact, didn't use dynamic memory allocation at all, never mind newfangled gizmos like garbage collection.

In principle, yes I think it's possible... at least, I don't think it reduces to the halting problem. But it would be tricky. It would be relatively simple to reason statically about the rate of memory allocation (iterate through all paths leading to a 'new' operator), but for this purpose you care about cases where an object becomes garbage and can be deallocated. That occurs when the last reference to the object is overwritten or goes out of scope, which is not so easy to determine in the presence of aliasing.


I slightly disagree. Using heap allocated types is perfectly fine. The biggest thing I have to keep reminding myself coming from higher level languages is to re-use data structures, and to architect things in a way that this is possible.

Allocating a new String/Vec every single time you do something is killer for performance, but if you do it once up front then clear the data structure for the next use it should be performant enough.
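The same allocate-once, clear-and-reuse idea, sketched in C terms (the `buf_t` type and its functions are hypothetical names): growing happens rarely, and "clearing" only resets the length while keeping the capacity.

```c
#include <stdlib.h>
#include <string.h>

/* Growable byte buffer meant to be reused across iterations:
   allocate (and occasionally grow) once, reset between uses. */
typedef struct { char *p; size_t len, cap; } buf_t;

int buf_append(buf_t *b, const char *s, size_t n) {
    if (b->len + n > b->cap) {                 /* grow only when needed */
        size_t cap = b->cap ? b->cap * 2 : 64;
        while (cap < b->len + n) cap *= 2;
        char *q = realloc(b->p, cap);
        if (!q) return -1;
        b->p = q; b->cap = cap;
    }
    memcpy(b->p + b->len, s, n);
    b->len += n;
    return 0;
}

/* "Clear" keeps the capacity, so the next use allocates nothing. */
void buf_clear(buf_t *b) { b->len = 0; }
```

After the first few iterations the buffer reaches its steady-state size and the hot loop does no allocation at all.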


It is possible to heap allocate a VLA too, although the syntax is a bit more cumbersome with pointer access. But it does allow you to pass dynamically sized multi-dimensional arrays to functions and access them with array indexing notation.
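For reference, a heap-allocated VLA in C99 can look like this sketch (`demo` and `sum` are hypothetical names): a single malloc sized by a runtime type, indexed with ordinary two-subscript notation.

```c
#include <stdlib.h>

/* The callee takes a pointer to a VLA, so it can use m[i][j]
   instead of manual row * cols + col arithmetic. */
double sum(size_t rows, size_t cols, double (*m)[cols]) {
    double s = 0.0;
    for (size_t i = 0; i < rows; i++)
        for (size_t j = 0; j < cols; j++)
            s += m[i][j];
    return s;
}

double demo(size_t rows, size_t cols) {
    /* One allocation for the whole matrix; sizes decided at runtime. */
    double (*m)[cols] = malloc(sizeof(double[rows][cols]));
    if (!m) return -1.0;
    for (size_t i = 0; i < rows; i++)
        for (size_t j = 0; j < cols; j++)
            m[i][j] = (double)(i * cols + j);
    double s = sum(rows, cols, m);
    free(m);                        /* single free releases it all */
    return s;
}
```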

All sorts of useful problem domains need dynamic allocation; you can hardly avoid using it at some point.

I'd actually like to see a Modula-2 example of, say, dynamically allocating an array object, which is dynamically sized. Because obviously Modula has better arrays which know their size, which a C program has to express all by itself with size fields and pointers.

I worked with Modula-2 quite a lot around 1990 but didn't do anything of the sort. I didn't know anything about any ISO standard or memory allocation functions provided thereby. A cursory search seems to indicate that it didn't exist until 1996; basically years after "peak Modula".


That's why I tend to always prefer automatic and static storage to dynamic allocation wherever possible, especially in cases where you don't have "N" items or "N" cannot possibly exceed a certain small value. Also, allocation/deallocation of a given object need not be defined within its module. It should be up to the caller to decide whether to allocate the object on the stack, statically, or dynamically, depending on the caller's situation:

    foo f;                          /* automatic storage: lives on the stack */
    foo_init(&f);
    foo_destroy(&f);
    ...
    foo *g = malloc(sizeof(foo));   /* dynamic storage: caller frees it */
    foo_init(g);
    foo_destroy(g);
    free(g);

Yep. In new code you should use smart pointers for allocation as much as you can.

It is quite possible (and it can be reasonable) to deviate from that rule. For example you could write a wrapper for malloc that only allows dynamic allocation in the initialization phase of your program. This can be useful to allocate memory for opaque-pointer-style types.
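Such a wrapper might be sketched like this (all names are hypothetical): a flag marks the end of the start-up phase, and any later allocation trips an assertion instead of failing at an unpredictable moment.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Allocation is only legal during program start-up. Once
   init_done() has been called, any further allocation is a
   programming error and is caught immediately. */
static bool g_init_phase = true;

void init_done(void) { g_init_phase = false; }

void *init_alloc(size_t n) {
    assert(g_init_phase && "dynamic allocation after init phase");
    return malloc(n);
}
```

Opaque-pointer-style types would then allocate their hidden state through init_alloc during start-up and never touch the heap afterwards.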

You can do it in D as well in much the same way as in C#. (You can get manual memory management with heap allocation as well, but you lose the safety then.)

There was an interesting paper I read the other day about a scheme for guaranteed constant time dynamic allocation and deallocation. The idea was you make all objects identical in size (they used 32 bytes). Then, on deallocation of a 32 byte node, you just put the node on a free list. Then, when a new allocation request comes in decrement the ref count of any *direct* reference(s) and move those objects to the free list if their ref count goes to 0. Therefore, even if you have some huge list that goes out of scope, only the 1st node goes on the freelist, but the rest of the list will get freed eventually assuming you continue to allocate new objects.
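A minimal C sketch of just the fixed-size free-list core of such a scheme (the deferred reference-count decrement on allocation is left out, and all names are hypothetical): because every block has the same size, both allocation and deallocation are O(1) list operations.

```c
#include <stddef.h>
#include <string.h>

/* Fixed-size-block pool: every object is BLOCK bytes, so free and
   alloc are O(1) pushes and pops on a singly linked free list. */
#define BLOCK   32
#define NBLOCKS 1024

static unsigned char pool[NBLOCKS][BLOCK];
static void *free_head = NULL;
static size_t next_fresh = 0;      /* blocks never yet handed out */

void *pool_alloc(void) {
    if (free_head) {               /* reuse a freed block */
        void *p = free_head;
        memcpy(&free_head, p, sizeof free_head);  /* pop the list */
        return p;
    }
    if (next_fresh < NBLOCKS)
        return pool[next_fresh++];
    return NULL;                   /* pool exhausted */
}

void pool_free(void *p) {          /* O(1): push onto the free list */
    memcpy(p, &free_head, sizeof free_head);
    free_head = p;
}
```

The scheme in the paper layers deferred reference counting on top of this: freeing a node only pushes that one node, and its children are only processed when the node is later popped for reuse.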

I thought it was a really cool idea and it seems totally viable for things like a game engine. All the pointer chasing would be expensive, but being able to freely allocate memory without much lag would be a really nice property, particularly for games with plugins like Roblox, Factorio, etc. where code quality is often out of your control.

https://lptk.github.io/files/ctrc-2024-05-09.pdf


Yes I was talking about short lived allocations.

I don't discount the power of being able to cheaply create (short lived) highly dynamic data structures. I do miss it in C++ and alloca never feels right.


Ah I was thinking about "keep things allocated where they were before, but try to dynamically dispatch implementation code, to avoid compiling it N different times." It sounds like you're talking about "heap allocate everything by default." If everything was implicitly boxed, no type could be `Copy`, right? Is that strategy possible without breaking existing code?
