Design issue: managing branching in the rendering pipeline

Started by
11 comments, last by MJP 11 years, 2 months ago

Almost any game with normal maps that came out in the last few years will only store the XY components, either in the G and A channels of a DXT5/BC3 texture or in the R and G channels of an ATI2N/3Dc/BC5 texture.
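Dropping the Z component works because a tangent-space normal is unit length, so Z can be rebuilt from X and Y at decode time. A minimal Python sketch of that reconstruction (the [0,1] texel encoding and the channel conventions above are assumptions; real engines do this in the pixel shader):

```python
import math

def decode_xy_normal(tx, ty):
    # tx, ty: stored texel values in [0, 1], e.g. the G and A channels
    # of a BC3 texture or the two channels of a BC5 texture.
    x = tx * 2.0 - 1.0
    y = ty * 2.0 - 1.0
    # Z is never stored; recover it from the unit-length constraint.
    # max() guards against filtering pushing x*x + y*y slightly past 1.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

Note that the reconstruction assumes Z is non-negative, which is safe for tangent-space maps since the normal always points away from the surface.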

But tangent-space normal maps naturally have z values only in the range [0..1]. I wonder why it is so common to encode the z value across the full [-1..1] range as well (and therefore waste precision). Is it necessary for texture compression?

I noticed that when you bake a tangent-space normal map out of Blender, it does take advantage of this fact -- the x/y channels represent values from -1 to +1, but the z channel only represents values from 0 to 1, gaining you a bit of extra precision. So to decode a Blender tangent-space normal map, you'd use tex2D(...).xyz * float3(2,2,1) - float3(1,1,0)
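For clarity, here is a Python equivalent of that asymmetric decode -- x and y expand from [0,1] to [-1,1] while z is used as stored (the RGB channel order is an assumption):

```python
def decode_blender_normal(r, g, b):
    # r, g, b: stored texel values in [0, 1].
    # x and y are remapped to [-1, 1]; z keeps its stored [0, 1] value,
    # which is the extra bit of precision mentioned above.
    return (r * 2.0 - 1.0, g * 2.0 - 1.0, b)
```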

> Almost any game with normal maps that came out in the last few years will only store the XY components, either in the G and A channels of a DXT5/BC3 texture or in the R and G channels of an ATI2N/3Dc/BC5 texture.

I'm getting a bit off topic now, but do you know how widespread use of the Toksvig factor is? IIRC, the original paper relied on using full 3-component normals so you could measure how much their length had changed from 1.0 during interpolation/filtering (whereas reconstruction methods assume the length must be 1). But I've also seen approaches where you pre-compute the Toksvig factor and bake it into your spec-power/roughness maps, which would allow for 2-component normals?
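The length-based idea can be sketched in Python: the shorter the filtered (un-renormalized) normal, the more the underlying normals disagreed, and the specular power gets knocked down accordingly. The formula below is my recollection of the Toksvig adjustment and should be treated as an assumption, not a quote from the paper:

```python
def toksvig_spec_power(avg_normal_len, spec_power):
    # avg_normal_len: length of the filtered 3-component normal before
    # renormalization; 1.0 means no variance, smaller means rougher.
    # Toksvig factor: attenuates spec power as variance grows.
    ft = avg_normal_len / (avg_normal_len + spec_power * (1.0 - avg_normal_len))
    return ft * spec_power
```

A length of exactly 1.0 leaves the power untouched, which matches the observation that 2-component reconstruction (which always yields unit length) throws this information away.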

Everyone I know that is doing that (or something similar) is either pre-baking the factor into their roughness maps, or making use of pixel shader derivatives to compute the normal variation on the fly.
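The pre-baking variant can be sketched as an offline mip-generation step: for each mip texel, average the unit-length base-level normals it covers, measure how far the average's length falls below 1, and fold the correction into the stored spec power. A Python sketch under those assumptions (the function name and pipeline stage are hypothetical):

```python
import math

def prebake_toksvig_mip(normals, spec_power):
    # normals: list of unit-length (x, y, z) base-level normals covered
    # by one mip texel. Averaging shortens the result wherever the
    # normals disagree, and that shortening drives the correction.
    n = len(normals)
    ax = sum(v[0] for v in normals) / n
    ay = sum(v[1] for v in normals) / n
    az = sum(v[2] for v in normals) / n
    length = math.sqrt(ax * ax + ay * ay + az * az)
    # Fold the Toksvig factor into the spec power stored in the mip,
    # so the runtime shader can use 2-component normals unchanged.
    ft = length / (length + spec_power * (1.0 - length))
    return ft * spec_power
```

Once baked, the runtime cost is zero, which is presumably why it is the more common of the two approaches mentioned above.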

This topic is closed to new replies.