
> This page compares make to tup. This page is a little biased because tup is so fast. How fast? This one time a beam of light was flying through the vacuum of space at the speed of light and then tup went by and was like "Yo beam of light, you need a lift?" cuz tup was going so fast it thought the beam of light had a flat tire and was stuck. True story. Anyway, feel free to run your own comparisons if you don't believe me and my (true) story.

The completely unprofessional tone here really turns me off to the entire system. If you write like a typical teenager, you probably code like a typical teenager, and I don't want a typical teenager writing my goddamn build system.

Besides: who the hell is bottlenecked on the build system? The compiler and linker (or the equivalent for your favorite language) do all the work. Anyone who believes this article makes a difference is completely ignorant of Amdahl's Law.




> However, if a company/project writes a compiler that is a little slower than the competitor, people will almost always complain that it is a bad compiler.

That is bullshit. There are plenty of projects that would gladly trade performance for more correctness. I would go as far as to say most projects would make that choice if articles like this get mindshare.

“It’s mostly as fast as clang but errors upon UB” is an easy sell


> The same applies to all languages with terribly long build times. It’s not that they couldn’t build faster. They just choose not to. C++ or Rust program compiles for too long? Well, OCaml could probably compile the equivalent program in under a second. And it’ll still be machine-level fast.

> “Wow, wow, slow down! This is even more unfair! Now it’s not just apples and oranges, now it’s toothbrushes and spaceships. You completely ignore what each language brings to the table. There’s a reason they spend so much time compiling, you know?”

So what's the reason if the resulting performance doesn't differ much? C/C++ build times seem insane to me. Every time I try to install an AUR package and see that it starts building C/C++, I cancel it and give up on the idea: I don't want to wait a day for my CPU load to drop below 100%, and my SSD usually doesn't have enough free space anyway.


> There are alternate compilers for Go, in the form of gccgo and llgo. But those are both very slow to build (compared to the Go tree that takes ~30s to build the compiler, linker, assembler and standard library).

For any non-Gophers reading this: I write Go as my primary language, and have for the past two and a half years. I just timed the respective compilation speeds on a handful of my larger projects using both gc and gccgo (and tested on a separate computer as well just for kicks).

gccgo was marginally slower, though not enough to be appreciable. In the case of two projects, gccgo was actually slightly faster. The Go compiler/linker/assembler/stdlib are probably larger and more complex than the most complex project on my local machine at the moment, but I think my projects are a reasonable barometer of what a typical Go programmer might expect to work with (as opposed to someone working on the Go language itself).

The more pressing issue as far as I'm concerned is that gccgo is on a different release schedule than gc (because it ships with the rest of the gcc collection). That's not to say it's not worth optimizing either compiler further when it comes to compilation speed, but it's important for people considering the two compilers to understand the sense of scale we're talking about - literally less than a second for most of my projects. The time it takes you to type 'go build' is probably more significant.


> Would a 60x faster compiler work for you?

Not if the resulting executable is, say, 10% slower. I could syntax check slightly faster, I suppose, but I already compile individual TUs for that. For testing, I need to test the actual build target. Also, the fastest comparison on http://bellard.org/tcc/ is a mere 9x faster on a real project. I wonder if they're doing better than 30% faster than GCC on the linux kernel since their 2004 numbers.

Also missing: C++ support. I no longer need PPC, fortunately.

And keep in mind: Despite kvetching about build performance, I make it even worse by running static analysis now and then, because that stuff is actually important to me.

> And it hasn't been a priority because in the past performance sins were always covered up by hardware getting faster (by itself, without any extra work). This is no longer the case, increases in single core performance are over.

Still getting wider, and still getting a little faster, but yes. I did get the memo about Moore's law "ending" a decade or so ago.

> Performance has also deteriorated in line with code bulk, [...]

Yes and no. Yes, more programs do more sloppy things that waste hardware performance now for little to no good reason (although I'll argue saving dev time can be worth it.) No, you cannot write a program that will efficiently unfold proteins on an Alto. No, you cannot make general purpose hardware that's as fast as a modern CPU or GPU with the Alto's fundamentally simpler design principles. No, you cannot efficiently and effectively use more complicated modern hardware with a similar line count as compilers and linkers for the Alto. No, really, the little computer in your hard drive really is playing a role in scheduling hardware operations to maximize your I/O throughput.

We're jumping through some really crazy complicated hoops in hardware to try and get more and more performance, and some of it requires hardware and software working in tandem. The minimum SLOC count to achieve our modern, expanded goals - even if you do something crazy like limit those goals to increased performance only - is absolutely rising for very reasonable reasons. Even if we assume most people 'waste' SLOC left and right, it doesn't follow that all codebases do, nor that all 'reasonable' codebases will be small enough to avoid problems.

Look, I get it, lean mean small programs kick some serious ass for a whole slew of reasons. I've sped up linking by a factor of 10x or so on some projects - on par with the 9x tcc factor above! - by switching from BFD to gold, with the latter focusing exclusively on ELF binaries instead of taking on BFD and all its abstraction layers and non-ELF target support. I imagine it's a much smaller program!

But I just told you the tradeoff: specialization. I wasn't able to do this on all our target platforms, because not all of them even use ELF. And had we not been able to ditch BFD entirely, the total SLOC of our build toolchain would, ironically, have gone up as a result of adding another linker into the mix. Fortunately, I don't have to build our build toolchains - just any in-house ones. Our build configs did go up a few more LOC to override the defaults...

Unlike tcc, gold has not specialized itself to the point that it doesn't solve any of my problems, so it gets to join my toolset. I'm sure the reverse has happened to someone else. And plenty more deal with the situation where no specialized tool works for them. To ignore larger codebases requires you to ignore the generalized tools they fall back on.

> [...], and usually for no good reason whatsoever. There are a few good reasons, but they tend to be a small fraction of the whole problem.

An important distinction here: There's a lot of code that's absolutely useless, and has no good reason to exist... for solving your needs. This is very different from code that's absolutely useless, and has no good reason to exist. My pointless misfeature is someone else's core use case. And, it would seem, your pointless misfeature is perhaps my core use case.

And hey, that's cool. If tcc works for you, use it. But when people are using the multi-MLOC LLVM toolchain - it's not because software needs to get its act together, but because it did get its act together. I'm absolutely loving some of the tooling they're putting out, and wish I'd had it two decades ago.

And hey, perhaps current hardware performance sins (cough16GB max memory in 2016 for that pricecough) will be covered up by future software optimizations (by themselves, without any extra work on Apple's part) :). I imagine it'll involve more code though.


> A 100x slower compiler has a huge impact on productivity

Very true, but the article didn't mention a 100x slower compiler; it mentioned a 100% (2x) slower one.

Technically that's only a difference of degree, but it's so big that we're talking about qualitatively different developer experiences.


> Compilation speed greatly affects your ability to iterate in your development process. Depending on your skill level, for most programmers this can be MASSIVE

Compilation speed is not the only factor in ability to iterate. This argument is the exact bikeshed argument that gets made all the time. Even if your compilation is insanely fast, your build pipeline might be insanely slow. Compilation speed alone isn't enough to really make any value judgements at all, is my entire point.

> Sure there are ultra high experience devs who can make a thousand line edit, send the build and have it run perfectly 4 hours later. Even in that case you need to consider the cost of coding extra defensively to make sure it's right the first time.

My last job used Ruby on Rails, a framework touted to have extremely fast iteration speed. That company also chose to have a build process run all tests (unit + system), such that a "build" took 3+ hours. To say "Ruby on Rails has fast iteration speed and doesn't require a compile step" ignores so much context about what's actually happening that the statement is effectively useless.


>Yes and they made a conscious decision to not include so much in the language in order to make the compiler fast so your criticism is rather odd. They made a choice. It results in a faster compiler.

The null compiler (which produces nothing) is even faster, by 1000s of orders of magnitude.

As you can see I'm not in favor of the choices they made to make the compiler fast.

It's not much of a feat -- it's like writing an 8-bit-era-style game and bragging that it gets 2000 fps.

First, past 60 fps it's diminishing returns anyway; and second, yes, but at what cost?


> I always wonder if this is the flip side of the fast compilation?

Go is cheating a bit on this one by heavily caching everything. Building in a fresh container is quite slow (OK, maybe not C++ slow, but still much slower than C).

Always having to build everything due to static linking does not help either.


> I think user time is more valuable than developer time

Developers are users of their development tools.

> when building complex/efficient software expecting lightning builds isn't just unwarranted, it's selfish.

This is wrong. There is no rule that says that you cannot write fast applications with fast builds. It is just that we got used to C++'s slow build times.

This sort of thinking even falls flat on its face when you consider that C provides similar optimization opportunities as C++ (and in pretty much every modern compiler it uses the same backend and most - if not all - of the same optimizations) while compiling much faster.

But beyond that, most of the codebase of a decently sized application does not need more than the most basic of optimizations - it is only a tiny fraction that does, yet with modern C/C++ (and most other) compilers, the entire codebase suffers for it (I think only Visual C++ allows you to control optimization settings on a per-function basis, but I haven't encountered any codebase that actually uses this outside of temporarily disabling optimizations for debugging and/or working around bugs).
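For what it's worth, GCC also has an optimize function attribute that does roughly the same thing (clang doesn't support it). A hedged sketch of what per-function control looks like - the function names and the MSVC/GCC guards here are just for illustration, not from any real codebase:

    /* Sketch of per-function optimization control. MSVC uses #pragma optimize,
       GCC the optimize attribute; behavior varies by compiler version, so treat
       this as an illustration rather than something to lean on. */
    #include <stddef.h>

    #ifdef _MSC_VER
    #pragma optimize("", off)   /* MSVC: disable optimizations for what follows */
    #endif

    /* Cold path: rarely runs, debuggability matters more than speed. */
    #if defined(__GNUC__) && !defined(__clang__)
    __attribute__((optimize("O0")))
    #endif
    static void load_config(const char *path)
    {
        (void)path;   /* ... straightforward, rarely-executed setup code ... */
    }

    #ifdef _MSC_VER
    #pragma optimize("", on)    /* MSVC: restore the command-line settings */
    #endif

    /* Hot path: ask for aggressive optimization on just this function (GCC only). */
    #if defined(__GNUC__) && !defined(__clang__)
    __attribute__((optimize("O3")))
    #endif
    static double dot(const double *a, const double *b, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    int main(void)
    {
        double a[3] = {1, 2, 3}, b[3] = {4, 5, 6};
        load_config("example.conf");
        return dot(a, b, 3) > 0.0 ? 0 : 1;
    }

Whether that granularity is worth the hassle is another question - as noted above, almost nobody seems to use it for anything but debugging.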


>The link you provided is not evidence, just an anecdote

No, it is not. A reproducible test is evidence, not an anecdote.

>At best it's an indication that all compilers have some startup overhead, some more or less than others. Based on JVM startup time alone, I could "prove" that Java was the slowest language in the world.

That seems unlikely since we just saw otherwise.

>The burden is on you to provide evidence, since it is your claim

And I did. Dismissing it because it does not fit your preconceptions does not mean I need to continue supplying more and more evidence for you to dismiss without any justification.

>I develop in Go daily, and nothing I've seen indicates that Go is anywhere close to approaching C++ in compilation slowness. (I have also worked extensively with C++.)

Now that is an anecdote.


> If you write other types of programs, the compiler's speed and your program's speed aren't the same at all. People that write other types of programs care more how fast the compiler is and less how fast their programs run. People that write programs that process untrusted data care vastly more about no surprises than they care about speed. Even more so, in modern code bases the hot sections are very small parts of the total code base. For 99% of the code, CPU speed doesn't matter.

Literally everything you ask for here is fulfilled by not turning on compiler optimizations. That's the default. I don't understand what your complaint is.

And since nobody stated it plainly to you yet: This whole world view is completely bogus. Entire branches of the industry require C++ compilers that produce the fastest possible code (to name a few: Games, HFT, image processing). I don't know how you convinced yourself that compiler writers are the only consumers of compiler optimizations, but it's dead wrong. I don't claim your experience or what tradeoffs you seek in a compiler are wrong, but they are extremely far from universal.


> One of the interesting tradeoffs in programming languages is compile speed vs everything else.

Yes, but I don't think that compile speed has really been pushed aggressively enough to properly weigh this tradeoff. For me, compilation speed is the #1 most important priority. Static type checking is #2, significantly below #1; everything else I consider low priority.

Nothing breaks my flow like waiting for compilation. With a sufficiently fast compiler (and Go is not fast enough for me), you can run it on every keystroke and get realtime feedback on your code. Now that I have had this experience for a while, I have completely lost interest in any language that cannot provide it no matter how nice their other features are.


> More importantly the salient point was about comparing the binary output of different compilers.

No, it wasn't. I've explained it (apparently badly) 3 times now. There's another iteration of bootstrap compiling in there before the output is compared.


> It would be a little nutty to suggest that Golang 1.1 is going to give optimized C code a run for its money. Nobody could seriously be suggesting that.

AFAICT, the entire situation started only because the article was submitted to /r/Golang and /r/C++ with the trollbait title "Business Card Ray Tracer: Go faster than C++", and not because of anything in the article itself (which was actually a pretty good article).


> at least Java JITs only learned that trick lately, and are still pretty bad at it.

Did the author claim that Go was faster than Java? As far as I can tell, the JITs are still kicking the Go compiler's butt on "effectiveness."

> C++ again suffers from cross-compilation-unit visibility; even if your LTO can detect an inlineable call, it's AFAIK not possible at that time to move heap allocations to the stack.

Is the author really trying to explain why Go is faster than C++?

> This is an interesting pattern in Go, the longer one looks at it, the more you understand that it's a whole bunch of good decisions in various subsystems coming together.

One cannot validate these patterns yet because Go is still slower than the languages it supposedly innovates on in performance techniques.


> the code is fast (written in C, which _almost_ makes it fast by default).

I hate to be that guy, but [Citation Needed].

Most language implementations are written in C and most language implementations are quite slow, once you include the giant slew of hobby, toy, and less successful languages. Remember, Ruby, Python, and early versions of JS interpreters were all written in C too and are or were dog slow!

Duktape does have a section on performance[1], but it's basically just a guide to things you have to do to not make it extra slow. I didn't see any reported benchmarks.

This doesn't mean Duktape is bad. Simple+slow is a perfectly good trade-off for many real-world uses. Embedded scripting languages tend to spend most of their time in the FFI, not executing code, so perf doesn't matter as much as maintainability.

But it's really dubious to just say, "well, it's written in C so it's probably great." It's using ref-counting and linear scanning or hash table lookup for property look-up. Perf is likely not great.

Just as a point of comparison, my little hobby language[2] is also written in very simple C, but it's carefully designed to be fast too. Property look-up and method calls are just pointer + known offset. GC is tracing. Because of this (and other stuff), perf actually is good[3], typically better than Python and Lua and on par with Ruby. (With the usual caveat that all benchmarks are bad. :) )

[1]: http://duktape.org/guide.html#performance

[2]: https://github.com/munificent/wren

[3]: https://github.com/munificent/wren/blob/master/doc/site/perf...
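To make the "pointer + known offset" vs. hash/linear lookup distinction above concrete, here's a toy C sketch - not actual Duktape or Wren code, just two made-up strategies a C-hosted dynamic language might use for property access:

    /* Toy sketch only: two property-lookup strategies for a dynamic language
       implemented in C. Neither is taken from a real codebase. */
    #include <stdio.h>
    #include <string.h>

    /* Strategy A: scan (name, value) pairs on every access -- simple, O(n),
       with a string comparison per slot. */
    typedef struct { const char *name; double value; } Slot;
    typedef struct { Slot slots[16]; int count; } ScanObject;

    static double scan_get(const ScanObject *obj, const char *name)
    {
        for (int i = 0; i < obj->count; i++)
            if (strcmp(obj->slots[i].name, name) == 0)
                return obj->slots[i].value;
        return 0.0; /* "undefined" */
    }

    /* Strategy B: the language's compiler resolves a field name to a fixed
       index once, so every access at runtime is pointer + known offset --
       O(1), no string comparisons. */
    typedef struct { double fields[16]; } FixedObject;

    static double fixed_get(const FixedObject *obj, int field_index)
    {
        return obj->fields[field_index];
    }

    int main(void)
    {
        ScanObject s = { { { "x", 1.0 }, { "y", 2.0 } }, 2 };
        FixedObject f = { { 1.0, 2.0 } };
        printf("%g %g\n", scan_get(&s, "y"), fixed_get(&f, 1));
        return 0;
    }

Strategy B pushes the work to compile time (you need enough static structure to resolve names early), which is exactly the kind of design decision that "it's written in C" tells you nothing about.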


> The point being that generated code from C compilers being always the one to beat is an urban myth.

Ok.

> C compilers are only speed monsters thanks to almost 50 years of research in optimizing C and C++ compiler backends.

Lots of those optimizations apply in one form or another to other languages as well.

And... so which is it? Are C compilers fast and the thing to beat or did that research go nowhere?

C is the speed benchmark because 'beating C' is what will get you a foot in the door. Being 'slower than C' is going to get your pet language booted out the door because:

- companies tend to compete on speed of execution

- the speed of the compiler itself is a major factor in turnaround time for the typical edit-compile-test cycle

- nobody cares about security until they've been bitten hard.

This is all very frustrating but it seems to - in my experience - accurately reflect priorities in lots of corporations. It is up to us to change that.

See the title: it is about speed of execution, and it uses one technology ('GPU') to challenge another ('CPU'), and yet the emphasis is on which language was used.


> In which world is it okay to compile a trivial 5-line example three times slower than a full database engine

Well, as explained in the article, those 5 lines generate hundreds of thousands of lines of code behind the scenes.

And you don't use C++ for fun, you use it because you have a serious performance problem. And in that case you'll put up with the compile times, because there is no alternative.

Let's not even go into link time code generation or profile guided optimizations, which increase compilation times by another order of magnitude.


> I'm extremely dissatisfied with the performance of current compilers. The fastest compilers written by performance-oriented programmers can be way faster than ones you generally encounter. See luajit and Jonathan Blow's 50k+ loc/second compilers and the kind of things they do for performance.

Lua and Jai are a lot less complex than, say, C++: sure, LLVM isn't necessarily built to be the fastest compiler in existence, but I don't think it's fair to compare it to compilers for much simpler languages and bemoan its relative slowness.

