> Believing that all software security problems would be solved if we got rid of C is a bigger disease than C itself.
Getting rid of one class of errors (correctness) lets you focus more on the other class (logic errors). Arguably it even slightly reduces the likelihood of the latter.
Ditching C doesn't solve memory leaks, but it does solve use-after-free. A memory leak is a performance problem that isn't generally exploitable; a use-after-free is almost always a security vulnerability.
You can use resources after releasing them in pretty much any language. That's maybe not as dangerous as reinterpreting arbitrary memory, but then you can also use object pools for everything in C, which is what some very high-performance code I used to work on did.
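To make that concrete, here's a minimal Java sketch of the managed-language version of the bug (the code and names are mine, not from the thread): the object stays reachable, so "use after release" becomes a contained logic error rather than memory corruption.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class UseAfterRelease {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello".getBytes());
        in.close(); // resource released

        // "Use after release": `in` is still a perfectly reachable object,
        // so the GC can't save you from the logic error. ByteArrayInputStream
        // documents close() as a no-op, so this read quietly succeeds and
        // masks the bug; a FileInputStream would throw IOException instead.
        System.out.println(in.read());
    }
}
```

Either way, the failure mode is a wrong value or an exception, not a corrupted heap.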
Circular references and the other things that tend to leak memory in GC'd languages aren't logic flaws in the sense of your business logic being wrong or your algorithms being bad, though. They're things the language allows you to do that can be completely correct at a business level and look correct at a code level, yet still result in fundamental breakage of the VM's operation.
I say this as someone who does high-level languages as a day job, and doesn't find them to be a problem personally, but, yeah, Java as a concept still allows some unintuitive footguns like that.
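One nuance worth adding: Java's tracing collector does reclaim reference cycles, so in Java specifically the footgun that usually bites is an accidentally rooted reference rather than a cycle. A hypothetical sketch (all names are mine) of code that is correct at the business level and looks correct locally, yet leaks without bound:

```java
import java.util.ArrayList;
import java.util.List;

public class CacheLeak {
    // A static field is a GC root: everything reachable from it lives forever.
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest(int id) {
        byte[] buffer = new byte[1024 * 1024]; // 1 MiB of scratch space
        // ... do the actual work ...
        CACHE.add(buffer); // "keep it for later" -- but nothing ever evicts,
                           // so each call permanently roots another MiB
    }

    public static void main(String[] args) {
        for (int i = 0; ; i++) {
            handleRequest(i); // eventually dies with java.lang.OutOfMemoryError
        }
    }
}
```

Every line compiles cleanly and reads sensibly in isolation; the breakage only shows up as the VM's heap filling over time.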
Difference in magnitude of impact, difference in how the standards organizations responded, but yeah, it's certainly the same kind of footgun C lets you do. The difference is that Java has devoted an enormous amount of resources to squashing these errors, and they largely aren't a problem anymore, whereas C is afraid to touch the "undefined behavior" and "implementation-defined" sacred cows.
Is there an unspoken social contract between a language committee and the language's users that aliasing a variable or creating a circular reference should not shoot your dog and burn down the house? That's the fundamental disagreement between Java's and C's standards bodies.
Memory errors are always logic errors in C. Freeing something that was already freed or accessing a buffer out of bounds and so on are pure logic errors.
But that person was trying to say memory errors are some kind of special "magical" error, which is simply wrong. Memory errors are merely logical errors. There is nothing "special" or "mysterious" about them.
Judgment is not the only reason for conversation. I strongly suggest you read back over the conversation from the beginning and endeavor to understand the context in which responses were made.
I also suggest you say 5 years instead of "half-a-decade"; most of us see through that, and it's off-putting.
I have worked with .NET, in a hobby capacity, since the beta. I have worked with it professionally since 2007.
I have seen two instances of rooted memory:
- WPF: the implicit GC root path introduced by events. We were easily able to track this down thanks to the information the GC holds in memory[1], though fixing it was not so easy (a Java sketch of the same pattern follows this list).
- A leak in managed C++ (non-pure, i.e. containing native code). We had to use the normal native tools to track this down; it was not a 5-minute diagnosis.
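WPF is .NET, but the same event-rooting pattern exists in Java, so here's a hedged Java rendition of that first failure mode (names and structure are mine, purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A long-lived publisher, analogous to a static WPF event source.
class EventBus {
    static final List<Consumer<String>> LISTENERS = new ArrayList<>();
    static void subscribe(Consumer<String> listener) { LISTENERS.add(listener); }
}

class Window {
    private final byte[] pixels = new byte[1024 * 1024]; // heavyweight state

    Window() {
        // The lambda captures `this`, so the static listener list now
        // roots the entire Window -- the implicit GC root path via events.
        EventBus.subscribe(msg -> repaint(msg));
    }

    void repaint(String msg) { /* redraw on event */ }
}

public class ListenerLeak {
    public static void main(String[] args) {
        for (int i = 0; i < 1_000; i++) {
            new Window(); // created and discarded -- but never collectible
        }
        System.out.println("windows still rooted: " + EventBus.LISTENERS.size());
    }
}
```

Discarding a Window never unsubscribes it, so every instance stays reachable from a GC root. The upside, as noted above, is that a heap dump shows that exact root path immediately.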
At the end of the day, GC'd languages can have (and usually do have) richer tooling around memory usage. They are better in this regard even in a failure state.
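As a small taste of that built-in visibility, here's a snippet using the standard java.lang.management API; it's just an illustration of the kind of introspection a managed runtime exposes, not a diagnosis tool in itself:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapProbe {
    public static void main(String[] args) {
        // A managed runtime can report on its own memory state at runtime --
        // the kind of bookkeeping that heap-dump analyzers build on to show
        // exactly which root path keeps an object alive.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("heap used=%d committed=%d max=%d%n",
                heap.getUsed(), heap.getCommitted(), heap.getMax());
    }
}
```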
Getting rid of C won't get rid of security problems any more than moving to Java and .NET got rid of needing to think about memory leaks.
Those who think otherwise are ignorant of history, or choosing to ignore it.
And it gets worse when you consider resource exhaustion generally, not just memory. There are entire mechanisms in .NET and Java (using/IDisposable, try-with-resources, finalizers) whose sole purpose in life is to deal with resource exhaustion, because the GC only handles one aspect of it (memory leaks).
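A minimal Java sketch of what that means in practice (the temp file and loop count are mine, purely illustrative): the GC would eventually reclaim each stream object, but the OS file descriptor it wraps can run out long before a collection happens.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class HandleExhaustion {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("demo", ".txt");
        f.deleteOnExit();

        // Leaky version (commented out): each stream holds an OS file
        // descriptor until the GC happens to finalize it, which may be
        // long after the process hits its fd limit ("too many open
        // files") -- resource exhaustion with no memory pressure at all.
        //
        // for (int i = 0; i < 1_000_000; i++) {
        //     new FileInputStream(f).read();
        // }

        // One of the mechanisms grown to cope: try-with-resources
        // releases the descriptor deterministically, independent of GC.
        for (int i = 0; i < 1_000_000; i++) {
            try (FileInputStream in = new FileInputStream(f)) {
                in.read();
            }
        }
    }
}
```

try-with-resources (like C#'s using) exists precisely because the GC's schedule is decoupled from every resource other than memory; finalizers and java.lang.ref.Cleaner are the backstops for when deterministic release is missed.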
---
For those of us who watched that transition, it's obvious it won't work out the way people are claiming.
I had to move part of the code to C for a different reason (an imaging device) and used that code in a .NET application. But allocating unmanaged memory alongside .NET's heap led to memory fragmentation: even though enough memory was available overall, you ran into allocation errors, since the unmanaged memory had to be allocated as one contiguous block. Usually that doesn't happen, but if you have large objects like high-resolution images in memory, you can quickly run into problems. And to my knowledge there was no way to check whether a large enough contiguous region was available.
We had to wait for a .NET patch... it's not a problem anymore today, although I don't know the details of how they solved it.