There's a huge depth to a good malloc() implementation. While you're wise to avoid it when your interests are focused on kernel development, I'd recommend that every programmer have a go at writing malloc/free one day. Avoiding terrible performance problems is pretty tricky. At the very least, you'll gain a lot of respect for memory allocation and discover that there's a lot of work going on behind the scenes when your code tries to allocate even a few bytes of memory.
Any time you're using malloc() you run the risk of running out of memory, which is a very, very bad thing for a safety-critical embedded system. Statically allocated, fixed-size things are good, as are algorithms that are designed not to require O(n) space to do their processing.
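To make that concrete, here's roughly what a statically allocated, fixed-size pool looks like in that world. The names and sizes are made up for illustration; the point is that all storage exists at compile time and exhaustion is an explicit, testable code path rather than a surprise at runtime.

    #include <stddef.h>

    /* Fixed-size pool of message buffers, sized at compile time.
     * Nothing is allocated at runtime, so there is nothing to run out of
     * beyond the pool itself. */
    #define MSG_POOL_SIZE 16
    #define MSG_BYTES     64

    typedef struct {
        unsigned char data[MSG_BYTES];
        int in_use;
    } msg_t;

    static msg_t msg_pool[MSG_POOL_SIZE];

    /* Returns a free slot, or NULL if the pool is exhausted. No malloc. */
    msg_t *msg_alloc(void)
    {
        for (size_t i = 0; i < MSG_POOL_SIZE; i++) {
            if (!msg_pool[i].in_use) {
                msg_pool[i].in_use = 1;
                return &msg_pool[i];
            }
        }
        return NULL;  /* caller must handle exhaustion explicitly */
    }

    void msg_free(msg_t *m)
    {
        if (m != NULL)
            m->in_use = 0;
    }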
For memory allocation you often don't need to use malloc.
True.
Many (most?) programs can mmap a large region of memory and allocate from that.
You don't even need to do that for many applications. I come from an embedded systems background where dynamic allocation is highly discouraged and almost never necessary, and the amount of superfluous allocation/deallocation I see in a lot of code is astounding. Maybe it's because of that experience, but I can very often find a way to do something without dynamic allocation, and hence without O(n) space, and what comes out is usually simpler code (so less chance of it being buggy) that runs in O(1) space.
But when you do need dynamic allocation, malloc() isn't bad at all.
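As a rough illustration of the allocation-free style described above: process the input in fixed-size chunks from a static buffer instead of malloc'ing a buffer the size of the whole input. The checksum here is just a placeholder for whatever per-byte work you'd actually be doing.

    #include <stdio.h>

    /* O(1) space: one static 4 KB buffer, regardless of input size,
     * instead of an O(n) malloc'd copy of the whole stream. */
    static unsigned char chunk[4096];

    unsigned long checksum_stream(FILE *fp)
    {
        unsigned long sum = 0;
        size_t n;

        while ((n = fread(chunk, 1, sizeof chunk, fp)) > 0) {
            for (size_t i = 0; i < n; i++)
                sum += chunk[i];
        }
        return sum;
    }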
Implementing malloc() was one of the first assignments in my OS class. A linked list with some header information about each allocation will get you a long way before you need a better algorithm.
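For anyone curious, a minimal sketch of that kind of linked-list allocator might look something like this: a static arena, a header in front of every block, and a first-fit free list. Names and sizes are made up, there's no splitting or coalescing, and alignment handling is simplified, so it's purely illustrative.

    #include <stddef.h>
    #include <stdint.h>

    /* Toy first-fit allocator: each block carries a small header recording
     * its size, and freed blocks are chained on a singly linked free list.
     * No splitting or coalescing, so fragmentation is terrible. */
    typedef struct header {
        size_t size;          /* usable bytes after the header */
        struct header *next;  /* next block on the free list (when free) */
    } header_t;

    #define ARENA_BYTES (1 << 20)
    static _Alignas(16) uint8_t arena[ARENA_BYTES];
    static size_t arena_used = 0;
    static header_t *free_list = NULL;

    static size_t align_up(size_t n) { return (n + 15) & ~(size_t)15; }

    void *my_malloc(size_t size)
    {
        size = align_up(size);

        /* First fit: reuse any freed block that's big enough. */
        header_t **prev = &free_list;
        for (header_t *h = free_list; h != NULL; prev = &h->next, h = h->next) {
            if (h->size >= size) {
                *prev = h->next;
                return (void *)(h + 1);
            }
        }

        /* Otherwise carve a fresh block off the end of the arena. */
        size_t total = sizeof(header_t) + size;
        if (arena_used + total > ARENA_BYTES)
            return NULL;
        header_t *h = (header_t *)&arena[arena_used];
        arena_used += total;
        h->size = size;
        return (void *)(h + 1);
    }

    void my_free(void *p)
    {
        if (p == NULL)
            return;
        header_t *h = (header_t *)p - 1;
        h->next = free_list;
        free_list = h;
    }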
malloc is a general-purpose allocator; it has to handle a lot of cases that most programs never hit. A simple allocator will work most of the time, but the devil is in the details.
Yes, this is why malloc() can't actually be implemented in strictly conforming C: the memory it returns has no declared type and can take on any effective type, which you can't express for storage you carve out of an ordinary C array. Actual implementations exist either through special compiler dispensation or, more often, because the callers and the implementation live in separate libraries, so the implementation isn't visible to the caller.
Indeed. The proper thing to do is: write your own malloc, learn an enormous amount from implementing malloc, never ever under any circumstances allow your malloc implementation to enter into production use.
Hopefully the act of writing your own malloc implementation will be enough to convince you not to use it in anger, but it's probably best to be clear on the subject.
That's not a property of malloc()/free()[1]; it's a property of the runtime system, and it can be satisfied both in GC'ed languages and in languages with manual memory management.
[1] To the point that porting code originating from Linux is rife with bugs when you run it on systems where malloc() can actually fail (thanks to overcommit, 99.9% of Linux setups never report a malloc() failure).
For memory allocation you often don't need to use malloc. Many (most?) programs can mmap a large region of memory and allocate from that. Then when you're done you just unmap the region. This is much easier to use since you only have to remember to unmap once as opposed to handling each allocated pointer individually.
Also, to clarify, I'm not saying you should allocate yourself a big piece of memory and then reimplement malloc on top of that. You literally just get a big piece of memory, and every time you want to allocate something new from it you add the current offset to the base address, return that pointer to the user, and bump the offset by the size of the allocation.
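Roughly like this, assuming a POSIX-ish system with anonymous mmap. The names and the 16-byte alignment are arbitrary; it's just a sketch of the bump-the-offset idea.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    /* Bump allocator over one mmap'd region: allocation is "advance an
     * offset", and the whole region is released with a single munmap. */
    typedef struct {
        uint8_t *base;
        size_t   size;
        size_t   offset;
    } arena_t;

    int arena_init(arena_t *a, size_t size)
    {
        a->base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (a->base == MAP_FAILED)
            return -1;
        a->size = size;
        a->offset = 0;
        return 0;
    }

    void *arena_alloc(arena_t *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;     /* keep allocations 16-byte aligned */
        if (a->offset + n > a->size)
            return NULL;
        void *p = a->base + a->offset;  /* base address + current offset */
        a->offset += n;                 /* store the new offset */
        return p;
    }

    void arena_destroy(arena_t *a)
    {
        munmap(a->base, a->size);       /* frees everything at once */
        a->base = NULL;
    }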
A lot of people underestimate the performance impact of malloc(). It is dog slow. In addition, if you use a poor malloc() implementation heavily with varying-sized data, fragmentation can easily leave you using more memory than you would have with a copying GC!