Then "we" aren't helping -- theagentd posted the thread and is reiterating that the original (unanswered) question of the thread was about compressing world-space normals in a vertex buffer...
[quote name='L. Spiro' timestamp='1343734169' post='4964811']We are not talking about world space here.
Also, view-space normals can have both + and - z values with a perspective projection! Making the assumption that z is always positive will cause strange lighting artefacts on surfaces at glancing angles.
[/quote]
My first reply was in regard to compressing world-space normals.
Yes, they are world-space within the vertex buffer, but you must recognize that they are not actually used until later. Only the values at the moment of use matter, so approximating them, or modifying them in any other way before that point, has no real consequence. They could be stored in any arbitrary format right up until the moment they enter a lighting equation, as long as the equation receives them in the standard terms.
Basically, we are helping, because the original topic poster assumed the normals need to be used in world coordinates. You can define 2 components in world space and then derive the third in view space later, inside the shaders, at whatever point the lighting needs to be done.
If you think it through, you can see why the sign of the view-space Z component is always positive for visible, front-facing surfaces, and how that leads to my previous suggestion.
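A minimal sketch of that reconstruction, in Python for illustration (in practice this would run in the shader): given the two stored components of a unit normal whose Z is assumed non-negative, the third follows from the unit-length constraint.

```python
import math

def reconstruct_normal(x, y):
    """Reconstruct a unit normal from its x/y components,
    assuming the (view-space) z component is non-negative."""
    # The clamp guards against quantization error pushing
    # x*x + y*y slightly above 1.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# Example: a normal tilted 45 degrees in the x/z plane.
s = math.sqrt(0.5)
n = reconstruct_normal(s, 0.0)
# n is approximately (0.7071, 0.0, 0.7071), length 1.
```

Note that the sign of Z is lost, which is exactly why the positive-Z assumption (and back-face rejection) matters.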
But if you need more than just my word, I can tell you that we use this type of compression at work, and I can say from actual hands-on experience that it works.
The only way in which it fails is when you don’t reject back-facing polygons, but that is a rare condition and we have special cases for that.
That being said, I also recommend 16-bit normals. The components of a unit normal are confined to [-1, 1], and you generally don't need more precision over that range than 16 bits provides.
Using 2 16-bit floats instead of 3 32-bit floats cuts each normal from 12 bytes to 4, saving you a lot of memory and bandwidth.
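To make the saving concrete, here is a sketch of both layouts using Python's struct module (the example normal is just an arbitrary unit vector, and 'e' is the IEEE 754 half-precision format code):

```python
import struct

normal = (0.2672612, 0.5345225, 0.8017837)  # an arbitrary unit normal

# Uncompressed: three 32-bit floats -> 12 bytes per normal.
full = struct.pack('<3f', *normal)

# Compressed: two 16-bit half floats, z derived in the shader -> 4 bytes.
half = struct.pack('<2e', normal[0], normal[1])

print(len(full), len(half))  # 12 4
```

The decoded half-float components come back within half-precision tolerance of the originals, which is plenty for a direction vector.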
L. Spiro