I'm reading a programming book, in which the following bit of code is written:
if (zi < z_ptr[xi])
{
    // write texel
    screen_ptr[xi] = color;
    // update z-buffer
    z_ptr[xi] = zi;
}
(The code is part of a basic software Z-buffer implementation for a software rasterizer. This snippet is called very frequently, but beyond that, the purpose of the code is, I think, irrelevant.)
Afterwards, in a side-box, is written:
"Notice that even after I determine that the Z-buffer should be updated, I do not immediately update it with the value of zi. This is an optimization trick. It's usually a bad idea to immediately read and then write the same value - better to put something in between and then write the value."
I presume the author is referring to the fact that the "if" statement reads the value of z_ptr[xi] and then writes to screen_ptr[xi] before writing back to z_ptr[xi] (rather than writing to z_ptr first and then to screen_ptr).
In other words, he is suggesting that the code above is marginally faster (more optimized) than the following, similar code:
if (zi < z_ptr[xi])
{
    // update z-buffer
    z_ptr[xi] = zi;
    // write texel
    screen_ptr[xi] = color;
}
My questions are: Is this true? If so, why?
I don't care if it's only "marginally" true, or true only "some of the time." And I realize that little optimizations like this are generally irrelevant and should be put off till the end (more readable code is better than marginally faster code, and most optimizations that make a difference are algorithmic in nature anyway).
If this optimization IS real or possible, then please tell me - and explain WHY.
Also, if this optimization IS real or possible, is this the kind of thing that the compiler will automatically handle during internal optimization?
I'm very interested in people's explanations and thoughts.
Thank you.
-Gauvir_Mucca