Data speculation is a CPU technique too, which Apple CPUs are known to implement. Apparently they can do stride detection when predicting address values.
Someone with an M2 or later might try the code and find no speedup with the "improved" version, because it's already iterating faster than the L1 load-to-use latency.
But that works on a different level, right? As I understand it, data speculation is about prefetching from memory into cache, whereas this trick is about using the branch predictor as an ultra-fast "L0" cache, you could say.
This is doing value speculation in software, using the branch predictor. The hardware does this too, of course, but it uses different tables to derive a predicted value, and mispredictions are detected and flushed in a slightly different way.
But the effect on the main sequence of instructions in the backend will be quite similar. In neither case is it a "prefetch" as such, it is actually executing the load with the predicted value and the result will be consumed by other instructions, decoupling address generation from dependency on previous load result.
I am new to this low-level stuff, but am I right in understanding that this works because, although he uses a linked list, it is often contiguous in memory? So you guess that the next element is adjacent, and when the branch predictor is right you avoid waiting on the load from cache and stalling the pipeline.
However I imagine you'd also get the same great performance using an array?
Consecutive linked list nodes can occur when you bump-allocate them, particularly if you have a copying garbage collector, which ensures that bump allocation takes place from blank-slate heap areas with no gaps.
Idea: what if we implement something that resembles CDR coding, but without compacting the cells together (not as a space-saving device)? The idea is that when we have two cells A and B such that A->cdr == B and A + 1 == B, we replace A->cdr with some special constant that indicates the same thing: that A->cdr is equivalent to A + 1.
Then, I think, we could have a very simple, stable and portable form of the trick in the article:
while (node) {
    value += node->value;
    if (node->next == NEXT_IS_CONSECUTIVE)
        next = node + 1;
    else
        next = node->next;
    node = next;
}
The branch predictor can predict that the branch is taken (our bump allocator ensures that is frequently the case) and go straight to next = node + 1. When, on the speculatively executed path, the load of node->next completes and turns out not to equal the magic value, the predicted path is cancelled and we get node->next.
This doesn't look like something that can be optimized away, because we are not comparing node->next to node + 1; there is no tautology there.
Yes. But I don’t think the OP is suggesting this as an alternative to using an array. As I read / skimmed it the linked list is just a simplified example. You can use this trick in more complex situations too, eg if you’re searching a tree structure and you know that some paths through the tree are much more common than others.
Yes, as he noted, the trick is of limited value in practice.
https://news.ycombinator.com/item?id=45499965