
### #3 Hodgman

Posted 07 December 2012 - 07:13 PM

Each time you perform a jump, the code at the jump target has to be loaded into the L1 icache. If you're calling the same function over and over, there's a bigger chance it will still be present in the cache. On the other hand, if you're calling random functions, there's a larger chance of cache misses.

Regarding branch prediction, it depends on the CPU heavily.
Some CPUs have a history-based predictor, whose first guess for any branch is the target that the branch jumped to last time. In this case, as long as a/b/c/d aren't long enough (i.e. don't contain enough branches of their own) to completely flush that history table, then using the same 'i' repeatedly would help the predictor.
That said, many of these history-based schemes only store true/false values, not actual addresses, so they only work for conditional jumps, not unconditional jumps like yours.

If the distance between fetching the value at jump[i] and jumping to that value is large enough, then some CPUs may be able to fully determine the branch target before branching, meaning there's no prediction to be done. If this is the case, you might be able to help the CPU by unrolling your loop somewhat:
```c
int i1 = ...;
int i2 = ...;
int i3 = ...;
int i4 = ...;
jump[i1]();
jump[i2]();
jump[i3]();
jump[i4]();
```
However, the best overall solution is probably to use 4 loops:
```c
while (...)
    a();
while (...)
    b();
while (...)
    c();
while (...)
    d();
```
