
I think you misunderstood me. I agree that the StackOverflow post has to do with branch prediction. I just meant that I don't think that's why the earlier poster thinks it's parallel to the situation described in the article.
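
For anyone who hasn't seen it, the branch the StackOverflow post revolves around boils down to something like this (a minimal sketch; the function name and threshold are just illustrative):

    #include <cstdint>
    #include <vector>

    // Sums only the "large" elements. With unsorted data the `if` is taken in
    // an essentially random pattern and the predictor mispredicts constantly;
    // sort the data first and the very same branch becomes almost perfectly
    // predictable, which is the whole point of the StackOverflow question.
    int64_t sum_large(const std::vector<int>& data) {
        int64_t sum = 0;
        for (int v : data) {
            if (v >= 128)        // data-dependent branch
                sum += v;
        }
        return sum;
    }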



How is this article related to branch prediction? I don't see any connection here.

If you read the post author's comment on his own post, he went back and added all the explanation/analogy after making the initial post that just said it was down to branch prediction.

But I don't see the post investigating this at all. Yes, that is most likely what is going on, but I don't understand what the point of the OP's post is if the real reason for the difference in branch predictor behavior isn't explained.

No, branch prediction is not involved with what I was talking about.

> Like [..] CPU Branch Prediction Idiocy

Can you elaborate on that, or point me to some sources? I thought branch prediction was a good thing for speed (until now).

Because branch prediction exists: sometimes yes, often no. Among other reasons.

No, you didn't misunderstand it. It's called branch prediction, and it's done in hardware. I'm surprised no one has brought up this point.

> And it doesn't really matter whether a branch can be well predicted

I guess we build branch predictors on our CPUs for fun?


I don't think it's related to branch prediction in particular, just spooky action-at-a-distance where a change makes seemingly unrelated code much slower or faster.

Related:

How and why CPUs do “branch prediction” (2017) - https://news.ycombinator.com/item?id=20324092 - July 2019 (33 comments)

A history of branch prediction - https://news.ycombinator.com/item?id=15078574 - Aug 2017 (65 comments)


> I believe it would be much more challenging for the compiler to predict these branches statically.

Why do you believe so? "Branch prediction for free" is a compiler literature classic, and was published in PLDI 1993.
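
(On top of heuristics like those, GCC and Clang also let you state the expected direction yourself; a minimal sketch, the function is made up:)

    #include <cstdio>

    // __builtin_expect tells GCC/Clang which way a branch usually goes, so the
    // compiler can move the unlikely path out of line and keep the hot path as
    // the straight-line fall-through.
    int print_name(const char* name) {
        if (__builtin_expect(name == nullptr, 0)) {  // "almost never true"
            return -1;                               // cold error path
        }
        return std::printf("%s\n", name);            // hot path
    }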


I didn't, and frankly, half of the articles I read about it make me think branch prediction is a bug. I mean, I know it's meant to improve performance, which is great, but it has to make assumptions about what's going to happen before it actually knows, and those assumptions are going to be wrong sometimes. How wrong? How can we con it into making better assumptions? Suddenly programming becomes about second-guessing the compiler.

And remember Spectre and Meltdown? Security vulnerabilities caused by branch prediction. If I recall correctly, the pipeline was executing code it wasn't supposed to execute, because it runs it before it knows the result of the check that decides whether it should run at all.
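
The bounds-check-bypass variant (Spectre v1) looks roughly like this (a sketch with made-up array names):

    #include <cstddef>
    #include <cstdint>

    extern std::size_t array1_size;
    extern std::uint8_t array1[];
    extern std::uint8_t array2[];   // the attacker later probes this array's cache lines

    // If the predictor guesses "in bounds", the CPU speculatively reads
    // array1[x] even for an out-of-range x, and the dependent load into
    // array2 leaves a cache footprint that survives the rollback of the
    // misprediction.
    std::uint8_t victim(std::size_t x) {
        if (x < array1_size)                     // the check being bypassed
            return array2[array1[x] * 512];      // speculative, attacker-observable load
        return 0;
    }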

Programming is a lot easier if the actual control flow is as linear as I'm writing it.

My broad takeaway of the whole ordeal is that I'm basically avoiding if-statements these days. I feel like I can't trust them anymore.


It should be noted though, that the actual branch prediction comes from the CPU, not the compiler.

This isn't even related to branch prediction; it's a scheme for controlling load speculation, i.e. executing a load out of order even if there are stores with unresolved addresses ahead of it. The paper isn't proposing load speculation itself, it's proposing a predictor for when it should be done.
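
Conceptually, such a predictor can be as simple as a table that remembers which loads have been burned by speculating before (a rough sketch in that spirit, not the paper's actual scheme):

    #include <array>
    #include <cstddef>
    #include <cstdint>

    // A load starts out "safe to speculate past unresolved stores"; if
    // speculating it ever turns out to conflict with an older store, it gets
    // marked and is not speculated again.
    struct LoadSpecPredictor {
        std::array<bool, 1024> blocked{};        // false = OK to speculate

        static std::size_t index(std::uint64_t load_pc) { return load_pc & 1023; }

        bool may_speculate(std::uint64_t load_pc) const {
            return !blocked[index(load_pc)];
        }
        void report_violation(std::uint64_t load_pc) {  // mis-speculation detected
            blocked[index(load_pc)] = true;
        }
    };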

> Branches are often faster than branch-free methods, since they can speculate certain data dependencies away.

And often they're slower, since some branches are essentially impossible to predict given the data :) So the processor ends up mispredicting most of the time, assuming the branch will go the way it went last time.

It all depends on what you're doing.
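
Concretely, the trade-off is between something like these two (a sketch; which one wins depends entirely on how predictable the data makes the branch):

    #include <cstdint>

    // Branchy absolute value: a well-predicted branch is essentially free,
    // because the CPU speculates straight past it.
    int32_t abs_branchy(int32_t x) {
        if (x < 0) return -x;
        return x;
    }

    // Branch-free absolute value: nothing to mispredict, but it always does
    // the mask-and-subtract work and adds a data dependency.
    int32_t abs_branchless(int32_t x) {
        int32_t mask = x >> 31;      // all ones if negative, zero otherwise
        return (x ^ mask) - mask;
    }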


The chart in the results section is interesting. I don't remember reading Raymond Chen's post on branch prediction before, so thanks for that:

http://blogs.msdn.com/b/oldnewthing/archive/2014/06/13/10533...

I especially liked Raymond's comment: "Compiler optimizations may have been carefully selected for expository purposes." Counterintuitive, and interesting that the Swift program demonstrates the effect as clearly as the C program.


From the article:

Instead, simple branch predictors typically squish together a bunch of address bits, maybe some branch history bits as well, and index into an array of two-bit entries. Thus, the branch prediction result is affected by other, unrelated branches, sometimes leading to spurious predictions.
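
A minimal sketch of that kind of predictor (the table size and the hash are made up):

    #include <array>
    #include <cstddef>
    #include <cstdint>

    // 4096 two-bit saturating counters indexed by a hash of the branch
    // address. Unrelated branches that hash to the same slot share a counter,
    // which is exactly the aliasing described above. Real predictors also mix
    // in branch history bits and are far more elaborate.
    struct TwoBitPredictor {
        std::array<std::uint8_t, 4096> counters{};   // 0..3, 0 = strongly not taken

        static std::size_t index(std::uint64_t branch_addr) {
            return (branch_addr ^ (branch_addr >> 12)) & 4095;
        }
        bool predict_taken(std::uint64_t branch_addr) const {
            return counters[index(branch_addr)] >= 2;
        }
        void update(std::uint64_t branch_addr, bool taken) {
            std::uint8_t& c = counters[index(branch_addr)];
            if (taken && c < 3) ++c;
            else if (!taken && c > 0) --c;
        }
    };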


> Branch prediction of modern CPUs is so good

Modern x86 CPUs. ARM branch prediction is much worse, unfortunately.


Programmers are horrible at guessing branch prediction.

Not for error cases like the ones described in the article.

