That's true. I am using memory sanitizers in my workflow, but I haven't been using the `undefined` sanitizer. This could have saved me a day's worth of effort.
Undefined behavior is different from data races and memory safety bugs, though, right? I'm not saying it's more severe, but it's definitely part of the puzzle.
And well, like people keep pointing out on that discussion thread, C and C++ undefined behavior isn't just an operation whose result you don't know; it's a much deeper and crazier thing that no other language has (except when they inherit it by sharing compiler toolchains).
My instinct is to downvote this to keep it at the bottom of the page, but since you are a new user with seemingly good intentions, that seems too rude.
Undefined behavior gets a bad rap, but it's not always evil.
Probably true, but make sure you're distinguishing between undefined and implementation-defined behavior.
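To make the distinction concrete, here's a minimal C sketch (the function names are mine, not from any standard):

```c
#include <limits.h>

/* Implementation-defined: right-shifting a negative signed value.
   The compiler must pick a result and document it (in practice,
   an arithmetic shift on all mainstream compilers). */
int shift_half(int x) { return x >> 1; }

/* Undefined: signed overflow. If x == INT_MAX, x + 1 has no
   meaning at all, and the compiler may assume it never happens. */
int add_one(int x) { return x + 1; }
```

With implementation-defined behavior you get *some* consistent, documented answer; with undefined behavior the standard places no requirements whatsoever on the program.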
Compilers and executables would be a lot slower if they had to account for these cases.
Maybe, but I'd like to see better quantification for "a lot". My instinct (unsourced) is that the cost is usually minimal for most C, and quite significant for templated C++. But I'd be interested in seeing firm numbers on this.
If you're writing serious C, you should be using tools like valgrind on debug-mode executables to make sure you aren't relying on undefined behavior.
This is where I think you veer off into being mostly wrong. I grew up with Valgrind, but I don't think it really has much use any more. You are almost always better off with one of the now built-in "sanitizers": https://github.com/google/sanitizers/wiki/AddressSanitizerCo....
But while you should be using these to catch bugs, neither the sanitizers nor Valgrind can catch the dangerous forms of undefined behavior shown in the examples in the article. UBSan is great and should be used more than it is (https://medium.com/@lucianoalmeida1/the-undefined-behavior-s...), but it is not going to catch anywhere near all the problems with undefined behavior!
Here's a summary of the unfortunate state of the art: https://blog.regehr.org/archives/1520. While there are underutilized tools that can help, "using tools like valgrind on debug-mode executables to make sure you aren't relying on undefined behavior" is likely to give you misplaced confidence that you are free from the dangers.
Right, which is why I was asking which class of bugs we're talking about. If "undefined" is the only class that this would remove, it sounds less than compelling. Especially if it in any way impedes considering the logic of what is actually getting done. (I mean, correct me if I'm wrong and "undefined" actually is a large part of the problems that afflict writing code for the kernel.)
I suppose there are ways to make the undefined behavior defined that preserve memory unsafety, so you’re technically correct. In practice one would probably require safe crashes for OOB access etc.
From the examples I'm familiar with, shifting from undefined to unspecified actually makes invalid programs _more_ likely to blow up spectacularly, because they're likely to go on and try to use the unspecified value rather than having the code path that uses it quietly excised or transformed.
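The "quietly excised" effect is easy to reproduce (a sketch, assuming a typical optimizing compiler): because signed overflow is undefined, the compiler may legally assume the broken check below is always false and fold the whole function to `return 0;` at -O2.

```c
#include <limits.h>

/* Broken: x + 1 is itself UB when x == INT_MAX, so the compiler
   may assume the condition is always false and delete the branch. */
int about_to_wrap_broken(int x) {
    if (x + 1 < x)
        return 1;
    return 0;
}

/* Well-defined: compare before doing the arithmetic. */
int about_to_wrap(int x) {
    return x == INT_MAX;
}
```

If the result were merely unspecified instead of undefined, the compiler couldn't delete the branch; the bogus value would flow onward and fail loudly somewhere downstream.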
I've long believed that all compliant compilers should implement all undefined behaviour as an attempt to format the hard drive. It makes bugs much easier to detect in testing. Bugs that ship will mask themselves from customers, and many critical security bugs will be self-limiting and reduce the amount of data exposed.
The point is that you are supposed to avoid undefined behavior, precisely because these things can differ between compilers. In theory, this lets compilers optimize better for a specific platform/CPU while avoiding changing the meaning of the programmer's code. In practice, most platforms and compilers have done the exact same thing with the same "undefined" behavior, so a large body of code relies on that de facto standard.
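Signed overflow is a classic instance of that de facto standard (a sketch; `-fwrapv` is a real GCC/Clang flag):

```c
#include <limits.h>

/* Defined by the standard: unsigned arithmetic wraps mod 2^N. */
unsigned add_one_unsigned(unsigned x) { return x + 1u; }

/* UB by the standard, yet on every mainstream two's-complement
   platform it has always wrapped in practice. Code that leans on
   that behavior should be built with -fwrapv (GCC/Clang), which
   promotes the de facto wrap into defined behavior. */
int add_one_signed(int x) { return x + 1; }
```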