Hmm, that's interesting. The code as written only has one branch, the if statement (well, two, counting the while loop's exit condition). My mental model of the branch predictor was that for each branch, the CPU maintained some internal state like "probably taken/not taken" or "indeterminate", and it "learned" by executing the branch many times.
But that's clearly not right, because apparently the specific data it's branching off matters too? Like, "test memory location X, and branch at location Y", and it remembers both the specific memory location and which specific branch branches off of it? That's really impressive, I didn't think branch predictors worked like that.
Or does it learn the exact pattern? "After the pattern ...0101101011000 (each 0/1 representing the branch not taken/taken), it's probably 1 next time"?
Your mental model is close. Predictors generally work by keeping a table of predictions and indexing into that table (usually via some kind of hashing) to look up a prediction.
The simplest thing to do is use the address of the branch instruction as the index into the table. That way, each branch instruction maps onto a (not necessarily unique) entry in the table. Those entries are usually two-bit saturating counters whose states predict strongly or weakly taken, or strongly or weakly not taken.
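In code, that simplest scheme looks roughly like this (a toy sketch; the table size and indexing are invented for illustration, real hardware differs):

    #include <cstdint>
    #include <vector>

    // Toy bimodal predictor: a table of 2-bit saturating counters indexed by the
    // branch instruction's address. Counter values: 0 = strongly not taken,
    // 1 = weakly not taken, 2 = weakly taken, 3 = strongly taken.
    struct BimodalPredictor {
        std::vector<uint8_t> counters;

        explicit BimodalPredictor(size_t entries) : counters(entries, 1) {}

        size_t index(uint64_t pc) const { return pc % counters.size(); }  // different branches can alias

        bool predict(uint64_t pc) const { return counters[index(pc)] >= 2; }

        // After the branch resolves, nudge the counter toward the actual outcome.
        void update(uint64_t pc, bool taken) {
            uint8_t &c = counters[index(pc)];
            if (taken && c < 3) ++c;
            if (!taken && c > 0) --c;
        }
    };

Note that two different branches can land on the same counter (aliasing), which is one reason real tables are large and sometimes tagged.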
But you can add additional information to the key. For example, a gselect predictor maintains a shift register with the outcome of the last M branches. Then it combines that shift register along with the address of the branch instruction to index into the table: https://people.cs.pitt.edu/~childers/CS2410/slides/lect-bran... (page 9). That means that the same branch instruction will map to multiple entries of the table, depending on the pattern of branches in the shift register. So you can get different predictions for the same branch depending on what else has happened.
That, for example, lets you predict small-iteration loops. Say you have a loop inside a loop, where the inner loop iterates 4 times. So you'll have a taken branch (back to the loop header) three times, but then a not-taken branch on the fourth. If you track that in the branch history shift register, you might get something like this (with 1s being taken branches):
11101110
If you use this to index into a large enough branch table, the table entries corresponding to the shift register ending in “0111” will have a prediction that the branch will be not taken (i.e. the next outcome will be a 0) while the table entries corresponding to the shift register ending in say “1110” will have a prediction that the next branch will be taken.
So the basic principle of having a big table of branch predictions can be extended in many ways by using various information to index into the table.
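To make the nested-loop example concrete, here's a toy gselect-style sketch (history length, table size, and the indexing are invented) that learns the 1110 pattern for a single static branch:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Toy gselect-style predictor: a global shift register of the last
    // HISTORY_BITS outcomes is concatenated with low bits of the branch address,
    // and the result indexes a table of 2-bit saturating counters.
    struct GselectPredictor {
        static constexpr unsigned HISTORY_BITS = 4;
        static constexpr unsigned PC_BITS = 4;
        std::vector<uint8_t> counters;
        uint32_t history = 0;  // bit 0 = most recent outcome, 1 = taken

        GselectPredictor() : counters(1u << (HISTORY_BITS + PC_BITS), 1) {}

        size_t index(uint64_t pc) const {
            return (history << PC_BITS) | (pc & ((1u << PC_BITS) - 1));
        }
        bool predict(uint64_t pc) const { return counters[index(pc)] >= 2; }
        void update(uint64_t pc, bool taken) {
            uint8_t &c = counters[index(pc)];
            if (taken && c < 3) ++c;
            if (!taken && c > 0) --c;
            history = ((history << 1) | (taken ? 1 : 0)) & ((1u << HISTORY_BITS) - 1);
        }
    };

    int main() {
        GselectPredictor bp;
        const uint64_t pc = 0x400ABC;  // one static branch (the inner loop's exit test)
        int mispredicts = 0;
        // Inner loop taken 3 times, then not taken: outcome stream 1110 1110 ...
        for (int i = 0; i < 4000; ++i) {
            bool taken = (i % 4) != 3;
            if (bp.predict(pc) != taken) ++mispredicts;
            bp.update(pc, taken);
        }
        // After warm-up, each 4-bit history maps to its own counter and the
        // pattern is predicted essentially perfectly.
        std::printf("mispredicts: %d out of 4000\n", mispredicts);
    }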
Yeah, the "two-bit saturating counter" thing is pretty much exactly how I thought it worked (which would be terrible for the example in the article), but I had no idea about the fact that it also kept track of the branch history thing, and how that maps to different branch predictor entries. Thanks for the link, that's really fascinating!
It seems like that would struggle with detecting how many layers of branching to pay attention to. Imagine the two nested loops surrounded by a randomized one. Wouldn't that implementation keep hitting patterns it hadn't seen before?
Obviously that must be a solved problem; I'd be curious to know what the solution is.
It even gets much more sophisticated than that. Even the first Ryzen had something perceptron-based (yes, a neural network!), and there are several predictors plus logic to pick the best one for a given branch.
Check out [1]: it has the most thorough description of branch prediction I've ever seen (chapter 3), across a lot of historical and current CPUs. It is mostly empirical, so you do have to take it with a grain of salt sometimes (the author acknowledges this).
Supposedly the branch prediction on modern AMD CPUs is far more sophisticated, based on [2] (a citation pulled from [1]).
[1] https://www.agner.org/optimize/microarchitecture.pdf
[2] https://www.cs.utexas.edu/%7Elin/papers/hpca01.pdf
> My mental model of the branch predictor was that for each branch, the CPU maintained some internal state like "probably taken/not taken" or "indeterminate", and it "learned" by executing the branch many times.
I always figured the algorithm was much simpler: just predict the same outcome as the last execution. That should work fairly well.
Didn’t realize it used the input value as well, which to me makes no sense — the whole point is to avoid having to inspect the value. This article raises more questions than answers, I’m intrigued now.
There are many branch prediction algorithms out there. They range from fun architecture papers that try to use machine learning to static predictors that don’t even adapt to the prior outcomes at all.
The idea here is about maintaining a "path history"!
When looking up a register that tracks the "local" history of outcomes for a particular branch, you want to have a hash function that captures enough context to distinguish the different situations where that branch might be encountered.
Apart from folding a long "global history" of recent outcomes and mixing in the current program counter, I think many modern machines also mix in the target addresses of recently-taken branches.
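A rough sketch of what such an index computation could look like (the folding, table size, and number of tracked targets are pure guesses; the real hash functions are undocumented):

    #include <cstdint>
    #include <deque>

    // Illustrative only: fold a long global outcome history, the branch PC, and
    // the low bits of recently taken branch targets into a single table index.
    struct PathHistoryIndexer {
        static constexpr unsigned TABLE_BITS = 14;   // 16K-entry table, made up
        uint64_t global_history = 0;                 // one bit per recent outcome
        std::deque<uint64_t> recent_targets;         // targets of recently taken branches

        void record(uint64_t target, bool taken) {
            global_history = (global_history << 1) | (taken ? 1 : 0);
            if (taken) {
                recent_targets.push_back(target);
                if (recent_targets.size() > 8) recent_targets.pop_front();  // keep last 8
            }
        }

        uint64_t index(uint64_t pc) const {
            uint64_t h = pc;
            // Fold the outcome history down by XORing TABLE_BITS-sized chunks of it.
            h ^= global_history ^ (global_history >> TABLE_BITS)
                                ^ (global_history >> (2 * TABLE_BITS));
            for (uint64_t t : recent_targets) h ^= (t >> 2);  // drop alignment bits
            return h & ((1ull << TABLE_BITS) - 1);
        }
    };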
Typical branch predictors can both learn patterns (even very long patterns) and use branch history (the probability of a branch being taken depends on the path taken to reach that branch). They don't normally look at data other than branch addresses (and targets, for indirect branches).
Enlarging a branch predictor requires area and timing tradeoffs. CPU designers have to balance branch predictor improvements against other improvements they could make with the same area and timing resources. What this tells you is that either Intel is more constrained for one reason or another, or Intel's designers think that they net larger wins by deploying those resources elsewhere in the CPU (which might be because they have identified larger opportunities for improvement, or because they are basing their decision making on a different sample of software, or both).
I was self-taught in high school on computer architecture by reading books. I didn't own a computer, understand, but these books served the same purpose in terms of learning CPU architectures and machine-language programming. The 6502 was the CPU I studied.
In 1985, as an EE student, I took a course in modern CPU architectures. I still recall having my mind blown when learning about branch prediction and speculative execution. It was a humbling moment, as was pretty much all of my studies at CMU.
I guess the generate_random_value function uses the same seed every time, so the expectation is that the branch predictor should be able to memorize it with perfect accuracy.
But the memorization capacity of the branch predictor must be a trade-off, right? I guess this generate_random_value function is impossible to predict using heuristics, so the question becomes how often we encounter 30k-long branch patterns like that.
Which isn’t to say I have evidence to the contrary. I just have no idea how useful this capacity actually is, haha.
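For context, the loop under discussion is presumably shaped something like this (my reconstruction, not the article's actual code; the PRNG and names are guesses):

    #include <cstdint>
    #include <vector>

    // Hypothetical reconstruction of the benchmark's hot loop: a fixed-seed PRNG
    // produces the same 0/1 decision sequence on every run, so a large enough
    // predictor could in principle memorize it.
    static uint64_t prng_state = 12345;        // fixed seed => identical sequence each run
    static uint64_t generate_random_value() {  // name borrowed from the discussion; body is a guess
        prng_state ^= prng_state << 13;        // xorshift64, just as an example PRNG
        prng_state ^= prng_state >> 7;
        prng_state ^= prng_state << 17;
        return prng_state;
    }

    static size_t run(std::vector<uint64_t>& buffer, size_t iterations) {
        size_t count = 0;
        for (size_t i = 0; i < iterations; ++i) {
            uint64_t v = generate_random_value();
            if (v & 1)                          // the single data-dependent branch
                buffer[count++] = v;            // "write to buffer" (sized for the worst case)
        }
        return count;
    }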
AMD CPUs have been killing it lately, but this benchmark feels quite artificial.
It's a tiny, trivial example with 1 branch that behaves in a pseudo-random way (random, but fixed seed). I'm not sure that's a really good example of real world branching.
How would the various branch predictors perform when the branch taken varies from 0% likely to 100% likely, in say, 5% increments?
How would they perform when the contents of both paths are very heavy, which involves a lot of pipeline/SE flushing?
How would they perform when many different branches all occur in sequence?
How costly are their branch mispredictions, relative to one another?
Without info like that, this feels a little pointless.
By only testing one static branch, it is possible that the performance of the Intel Emerald Rapids predictor is not representative of a more realistic workload. If path information is used to index the predictor in addition to global (taken/not taken) branch history without being XORed into the global history (or otherwise fully mingled with it), or if the branch address is similarly not fully scrambled with the global history, then using only one branch might leave some predictor storage unused (never indexed). Either mechanism might be useful for reducing tag overhead while keeping aliasing low. Another possibility is that the associativity of the tables does not allow tags for the same static branch to differ.
(Tags could be made to differ by, e.g., XORing a limited amount of global history with the hash of the address.)
It is also possible that the AMD Zen 5 and Apple M4 have similar unused predictor capacity and simply have much larger predictors.
I did not think even TAGE predictors used 5k branch history, so there may be some compression of the data (which is only pseudorandom).
It might be interesting to unroll the loop (with sufficient spacing between branches to ensure different indexing) to see if that measurably affected the results.
Of course, since "write to buffer" is just a store and an increment, and the compiler should be able to guarantee no buffer overflow (buffer size allocated for the worst case) and that the memory store has no side effects, the branch could be predicated: select either the new value or the old value and always store. This would be a little extra work and might have store-queue issues (if not all store-queue entries can have the same address but different version numbers), so it might not be a safe optimization.
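A sketch of that predicated shape, assuming a loop that looks roughly like "if (keep) buffer[count++] = v" (this is one common branch-free variant; whether it actually wins depends on the store-queue behaviour mentioned above):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Branch-free variant: always store, but only advance the output index when
    // the condition holds. The increment can compile to a flag-based add instead
    // of a predicted branch. (The version that re-stores the old value would look
    // similar from the store queue's point of view.)
    size_t filter_branchless(const std::vector<uint64_t>& values,
                             std::vector<uint64_t>& buffer) {  // buffer sized for the worst case
        size_t count = 0;
        for (uint64_t v : values) {
            buffer[count] = v;     // unconditional store; overwritten if not kept
            count += (v & 1);      // advance only when keeping the value
        }
        return count;
    }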
Before switching a hot code path to branchless code, I was seeing strangely lower performance on Intel vs. AMD under load. Realizing the branch predictor was the most likely cause was a little surprising.
I still remember learning about TAGE and perceptron predictors, and how machine learning and neural networks have long been used, in some form, in CPU architecture design.
The simplest saturating counter, à la the bimodal predictor, already achieves more than a 90% success rate. What comes next is mostly extensions of that: use multiple bimodal predictors and build a forest of them (see the sketch below for the flavor). But the core idea of treating branch prediction with a Bayesian approach never fades.
It is a combined effort between hardware design and software compiler, though.
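As a toy illustration of the "multiple predictors plus a chooser" idea, here is a minimal tournament-style sketch (sizes and indexing invented; real designs are far more elaborate):

    #include <cstdint>
    #include <vector>

    // Toy tournament predictor: a bimodal table and a history-indexed table make
    // independent predictions, and a per-branch 2-bit "chooser" counter learns
    // which of the two has been more accurate for this branch.
    struct Tournament {
        std::vector<uint8_t> bimodal, global, chooser;  // all 2-bit counters
        uint32_t history = 0;

        explicit Tournament(size_t n) : bimodal(n, 1), global(n, 1), chooser(n, 2) {}

        static void bump(uint8_t &c, bool up) {
            if (up && c < 3) ++c;
            if (!up && c > 0) --c;
        }

        bool predict(uint64_t pc) const {
            bool p_local  = bimodal[pc % bimodal.size()] >= 2;
            bool p_global = global[(pc ^ history) % global.size()] >= 2;
            return (chooser[pc % chooser.size()] >= 2) ? p_global : p_local;
        }

        void update(uint64_t pc, bool taken) {
            bool p_local  = bimodal[pc % bimodal.size()] >= 2;
            bool p_global = global[(pc ^ history) % global.size()] >= 2;
            // When the components disagree, train the chooser toward the one that was right.
            if (p_local != p_global)
                bump(chooser[pc % chooser.size()], p_global == taken);
            bump(bimodal[pc % bimodal.size()], taken);
            bump(global[(pc ^ history) % global.size()], taken);
            history = ((history << 1) | (taken ? 1u : 0u)) & 0xFFFu;
        }
    };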
This is good work. I wish branch predictors were better reverse-engineered so CPU simulation could be improved. It would be much better to be able to accurately predict how software will perform on other processors in software simulation, rather than having to go out and buy hardware to test on (which is the way we still have to do things in 2026).
By the no-free-lunch theorem, and the fact that this 30k random branch pattern is so atypical in the real world, it would imply the loser here (Intel) is more likely to have the best branch predictor in actual benchmarks.
Branch prediction works really well on loops. The looping condition is mostly true, except for the very last iteration, so the loop body is always predicted to run. If you structure the loop body to have no data dependence between iterations, multiple iterations of the loop can run in parallel, greatly improving performance.
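A generic illustration (not from the article): both loops below have a perfectly predicted loop branch, but only the second lets iterations overlap, because it breaks the dependence chain through the single accumulator:

    #include <cstddef>

    // Every iteration depends on the previous one through `sum`: little overlap.
    double sum_dependent(const double* a, size_t n) {
        double sum = 0.0;
        for (size_t i = 0; i < n; ++i) sum += a[i];
        return sum;
    }

    // Four independent chains: additions from different chains can be in flight
    // at the same time (and the compiler may vectorize this form more readily).
    double sum_independent(const double* a, size_t n) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
        }
        for (; i < n; ++i) s0 += a[i];   // handle the leftover elements
        return (s0 + s1) + (s2 + s3);
    }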
Do any JIT/AOT compilers, hot-code optimizers, or runtimes take into account whether the branch predictor is saturated and try to recompile to go branchless?
Using random values defeats the purpose of the branch predictor. The best branch predictor for this test would be one that always predicts the branch taken or not taken.
I find it interesting that the S-curve is much steeper for AMD than it is for the others. AMD maintains perfect prediction for much larger sizes than the others, but it also reaches essentially random behaviour earlier.
Are they really keeping a branch history that's 30k deep? Or is there some kind of hashing going on, and AMD's hash just happens to be more attuned to the PRNG used here?
Would be interesting to see how robust these results are against the choice of PRNG and seed.
https://news.ycombinator.com/item?id=47438490
At least that's my prediction.
https://blog.cloudflare.com/branch-predictor/
Should be titled: How I Learned to Stop Worrying and Love the Branch Predictor