This is yet another question about sorting draw calls. It's mostly based on the super popular article written here: http://realtimecollisiondetection.net/blog/?p=86. In it, a key is generated for each draw call that encodes sort criteria. The sort criteria can be used to improve drawing performance by reducing excessive state changes, overdraw, etc. Each key can have a number of bits dedicated to holding a distance value.
I'm having trouble understanding how to take a distance represented as a float and pack it into a fixed number of bits. In the linked article, the author explains the following in a comment:
you can e.g. normalize the floats so they are in the same exponent range with the same sign, at which point only the fraction bits of the floats need to be considered (of which there are 23 bits plus one implicit bit which is always 1). Pick the top N bits of the fraction bits, and store those in your key.
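To show what I've tried so far: my understanding of that comment is that if every depth is remapped into [1.0, 2.0), all values share the same IEEE 754 exponent (127), so the 23 fraction bits alone order the values. A sketch of that idea (`depth_key`, `zmin`, `zmax` are my own names, and the [1, 2) squeeze is my guess at what the author means, not something stated in the article):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: map depth in [zmin, zmax] into [1.0, 2.0) so every
 * value shares the same IEEE 754 exponent. The 23 fraction bits are then
 * monotonic in depth, and the top `bits` of them form a sortable key. */
static uint32_t depth_key(float depth, float zmin, float zmax, int bits)
{
    float t = (depth - zmin) / (zmax - zmin);   /* normalize to [0, 1] */
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    float f = 1.0f + t * 0.9999999f;            /* squeeze into [1.0, 2.0) */

    uint32_t u;
    memcpy(&u, &f, sizeof u);                   /* reinterpret the bits */
    uint32_t fraction = u & 0x7FFFFFu;          /* low 23 bits = fraction */
    return fraction >> (23 - bits);             /* keep the top N bits */
}
```

With this, `depth_key(0, 0, 250, 10)` gives 0 and keys grow monotonically up to `depth_key(250, 0, 250, 10)`, but I'm not sure this is what the author had in mind.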
I don't get how you would normalize the floats to put them in the same exponent range. Say I limit my depth to the range [0, 250]. What operation would I perform on the floats to normalize them?
At first I thought this meant normalizing the range to [0,1], and then just saving the fraction bits (mantissa) of an IEEE 754 float. But this doesn't work: the values 0.1 and 0.2, for example, have the same fraction bits and differ only in their exponent, so the fraction alone can't order them. It would be cool if I could create a function where I specify a range and a bit length and get back a sortable bit set to use in a key for draw calls:
uint32_t F(int low, int high, int bits, float value) { /* magic goes here */ }
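For comparison, the closest I've come is plain fixed-point quantization, which skips the mantissa trick entirely (the name `quantize_depth` and the rounding choice are mine, not from the article). It produces N-bit integers that compare the same way the distances do, so maybe the float bit tricks aren't even needed:

```c
#include <stdint.h>

/* Quantize value in [low, high] directly to an N-bit fixed-point integer.
 * Keys compare in the same order as the original distances. */
static uint32_t quantize_depth(int low, int high, int bits, float value)
{
    float t = (value - (float)low) / (float)(high - low);  /* [0, 1] */
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    uint32_t max = (1u << bits) - 1u;            /* largest N-bit value */
    return (uint32_t)(t * (float)max + 0.5f);    /* round to nearest step */
}
```

For instance, `quantize_depth(0, 250, 10, 125.0f)` yields 512, the midpoint of a 10-bit key. Is that equivalent to what the author describes, or is the mantissa approach doing something this misses?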
Any thoughts on what 'normalize' means in the context of the article I posted, or on how I could create such a method?