I haven't found the compiler ever optimizing anything differently because of constness. The various escape hatches mean that it is really hard for the optimizing compiler to make certain assumptions, especially if it's connected to something with external linkage. A function with a const struct arg that you never take the address of? The compiler still emits a memcpy!
I'm not sure any compiler actually implements any real optimizations based on const. I suppose possibly on static const variables of basic integer types, but that'd be about all it can do. The existence of the volatile escape hatch, to say nothing of const_cast<>, makes it a pretty useless hint for the compiler.
In practice const very rarely allows any interesting compiler optimizations (passing a reference to a const local variable to a function whose body cannot be seen is pretty much the only place where it's safe to do so). The const overload of a function with const and non-const versions could in theory be faster, but copy-on-write data structures are the only common example of that actually being the case.
Const doesn't actually help compilers (at least in C/C++-based languages). Since it can be const_cast away and the program still works in a standards-compliant way, any optimisation assuming const values never change is a bug. Instead the compiler will use other techniques, e.g. finding variables that are only written to on initialisation, and those don't even need to be marked const.
Yeah. const_cast (and C-style casts) and mutable existing in the language make const pretty useless to the optimizer. But at least they are trying to get some benefit from it in some cases.
A C++ variant (compiler flag) that removed const_cast, mutable and the cast-const-away parts of C-style casts from the language, allowing the optimizer to put more value into 'const', could be interesting.
const doesn't help most optimizing compilers; it only really helps prevent programmer clumsiness. Since most C/C++ compiler IRs are SSA-based, all values are effectively const anyway at the level where optimizations are performed, whether or not they're mutable in the original source code.
That is true, but it's extremely rare to be able to take advantage of that. When people think of const as a potential optimization aid, it's almost always in the context of const pointers, which actually do nothing at all for optimization.
That adding const opens optimization opportunities for the compiler is not surprising to anyone who knows what const is for. Removing const to enable an optimization is a big WTF, and that is why it is worth debating.
C/C++ const has no effect on optimization, unless it's on a global/static object and the compiler can see the original declaration.
I'm not going to look it up in the C++ spec, but in C it's only undefined behavior if the original object was const, and it would make sense for C++ to be the same.
Wow awesome! It's partly these little optimizations that keep C/C++ going.
I've never considered that const might have a performance benefit (I mostly just use it as a compile-time check), but I will consider it in the future.
Would be interesting to see performance benchmarks for the changes made here, as well as applying const in general.
Actually, with the exception of the modest benefits from moving globals to the read-only data segment, it is rare to get optimization benefits from tagging something as const.
const is for enforcing correctness by the compiler. It tells the compiler about intent, but it doesn't give the compiler any extra information that it can use for optimizing.
Example: If you mark a member function as const then the compiler knows that you can't modify member variables. However the compiler won't use that to optimize the function - what it will use is its own data flow analysis that independently shows that you aren't modifying any member variables.
The distinction becomes important if you call some non-member function whose definition is not visible. At that point the compiler has to assume that that function might modify the object through a separately cached pointer. So the fact that the member function is const becomes useless as soon as the compiler can't analyze the code flow anymore.
Why would that optimization not be possible with a simple const *?
The problem I see with that proposal is that it introduces a new type of pointer, the immutable pointer. That seems equivalent to a const pointer, but it's not. With a const pointer the callee cannot mutate the pointee, but the pointee can still be mutated from the outside. That means that any time you have handed out a mutable pointer to something, you have to make a copy before handing out an immutable pointer to it. An ABI like this would probably be much more complex to implement, for a small gain.
You would end up with barely predictable behaviour as to whether the struct gets copied or not. C# structs suffer a lot from this, because methods are mutating by default (https://codeblog.jonskeet.uk/2014/07/16/micro-optimization-t...). The biggest problem is simply that this is not explicit.
Also there is a case where a calling convention like this can make things worse, as you will have to make two copies:
    struct A { int x; };  // some value type

    void fn1(A *);

    void fn2(A a) {
        // Have to make a copy here, because fn1 may mutate through the pointer.
        fn1(&a);
    }

    void fn3() {
        A a;
        fn1(&a);
        // Have to make a copy here, because a reference to a might have escaped.
        fn2(a);
    }
> So most of the time the compiler sees const, it has to assume that someone, somewhere could cast it away, which means the compiler can’t use it for optimisation. This is true in practice because enough real-world C code has “I know what I’m doing” casting away of const.
Some years ago I'd have agreed, but experience has shown that C/C++ compiler writers would rather win benchmark games than keep working code working. So it'd be nice if there were a better reason than "well, a lot of code would break if they did that".
Because:
1) const is not supported by the optimizing compiler, and any function that declares or references a const variable will not be optimized;
2) const is slower than var because it has more complicated semantics;
Like in C++, a const object is still allocated in memory, since you can take a pointer to one. And then you can do horrible stuff like const_cast that pointer and mutate the value, and the possibility of that occurring prevents the compiler from doing certain const-related optimizations.
In that case I would have welcomed a simple sentence stating why they won't use 'const'. I very much feel they're not even aware of it.
The general feeling I get is that they're trying to work without a compiler, notably without any optimization passes or warnings:
- some of the code (passing short* instead of int*) will raise warnings
- it's likely that optimisation passes will often remove some of the copy overhead
- the compiler will bark if you write to 'const'
So my general feeling is that they're trying to work assuming people will want to screw with their API intentionally, which looks quite unreasonable to me. A bit of context about the 'why' would be interesting.