Normal map compression

Hey! I've got an idea how to compress normal maps. Usually, normal maps take 3 bytes per pixel (RGB). However, since the normals are normalized, what's actually stored is 2D data, not 3D. If you express a normal n = (n1, n2, n3) in spherical coordinates (theta, phi, R), then R is always 1, so you only need to store the two angles theta and phi. You could create a texture with internal format intensity-8/alpha-8 and store theta in intensity and phi in alpha. That's 2 bytes per pixel, so you need 33% less memory.

How do you compute the lighting in a pixel shader, then? There are two possibilities:
1) Reconstruct the normal with n = (sin(theta)*cos(phi), cos(theta)*cos(phi), sin(phi)). You could create a sin/cos lookup table or let the GPU compute sin/cos (which is fast on modern hardware AFAIK).
2) Use a 2D lookup table that maps (theta, phi) to n.
Once you have n back, the lighting calculation is as usual. Another advantage is that the normal is always normalized in the pixel shader, so you don't need a normalization cubemap.

I haven't implemented it yet, since I wanted to know if anyone has experience with this technique. One question: can you compress I8A8 textures, and what quality does that produce? I tried compressing the (standard) RGB normal map, but it looks quite ugly - too many quantization artifacts. Is there another catch I missed?
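For illustration, here is a minimal CPU-side sketch of the packing and unpacking I have in mind (the helper names and the 8-bit quantization are just assumptions; the decode uses exactly the formula above):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical helpers, for illustration only: quantize a unit normal to two
// bytes (theta, phi) and reconstruct it with the formula from the post.
// theta in [0, 2*pi) is the azimuth, phi in [-pi/2, pi/2] the elevation.
struct EncodedNormal { uint8_t theta, phi; };

EncodedNormal encodeNormal(float n1, float n2, float n3)
{
    const float pi = 3.14159265f;
    float theta = std::atan2(n1, n2);       // sin(theta)*cos(phi) = n1, cos(theta)*cos(phi) = n2
    if (theta < 0.0f) theta += 2.0f * pi;   // map to [0, 2*pi)
    float phi = std::asin(n3);              // sin(phi) = n3
    EncodedNormal e;
    e.theta = (uint8_t)(theta / (2.0f * pi) * 255.0f + 0.5f);
    e.phi   = (uint8_t)((phi / pi + 0.5f) * 255.0f + 0.5f);
    return e;
}

void decodeNormal(EncodedNormal e, float n[3])
{
    const float pi = 3.14159265f;
    float theta = e.theta / 255.0f * 2.0f * pi;
    float phi   = (e.phi / 255.0f - 0.5f) * pi;
    n[0] = std::sin(theta) * std::cos(phi);
    n[1] = std::cos(theta) * std::cos(phi);
    n[2] = std::sin(phi);                   // unit length by construction
}
```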

1. What about the GB channels of your texture?
As far as I know, OpenGL always uses RGBA internally; with GL_LUMINANCE_ALPHA you create a texture where RGB = theta and alpha = phi.

On the card you won't save a single bit with this technique.

And I doubt using sin/cos functions in the pixel shader is faster than some texture lookups.

Stay with R8G8B8A8 and you're fine.

Quote:
Original post by Basiror
1. What about the GB channels of your texture?
As far as I know, OpenGL always uses RGBA internally; with GL_LUMINANCE_ALPHA you create a texture where RGB = theta and alpha = phi. On the card you won't save a single bit with this technique.

According to this page, the L8A8 format really does use only 2 bytes per pixel.

Quote:

And I doubt using sin/cos functions in the pixel shader is faster than some texture lookups.

I've read somewhere that the NV3x does sin/cos in a single cycle, but the R3xx does not. Anyway, it's recommended to interleave math ops with texture lookups to exploit the GPU's parallel structure, so using some math ops should be fine.

Quote:

Stay with R8G8B8A8 and you're fine.

RGBA and RGB maps get too large and don't compress well (the latter is the main problem), so I'm looking for alternatives.

For some models (planets) I have 1024x1024 normal cubemaps, so that's 1 Mpixel x 6 faces x 3 bytes per pixel = 18 MB, plus 6 MB for mipmaps (uncompressed), for a single normal map. That's way too much. Compressed it's 4 MB, which is acceptable, but it doesn't look good. So I'm wondering whether a compressed L8A8 map might look better (in case it *can* be compressed).
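As a sanity check on those numbers, here is the back-of-the-envelope math, assuming DXT1 at 4 bits per pixel for the compressed case:

```cpp
#include <cstdio>

int main()
{
    const double MB = 1024.0 * 1024.0;
    double texels   = 1024.0 * 1024.0 * 6.0;   // six 1024x1024 cube faces
    double base     = texels * 3.0 / MB;       // 3 bytes per texel (RGB8)   -> 18 MB
    double mips     = base / 3.0;              // mip chain adds roughly 1/3 ->  6 MB
    double dxt1     = texels * 0.5 / MB;       // DXT1: 4 bits per texel     ->  3 MB
    double dxt1Mips = dxt1 / 3.0;              //                            -> ~1 MB
    std::printf("uncompressed: %.0f MB + %.0f MB mips\n", base, mips);
    std::printf("DXT1:         %.0f MB + %.0f MB mips\n", dxt1, dxt1Mips);
    return 0;
}
```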


Well, when the card passes the luminance-alpha texture to the pixel shader, it expands it to R8G8B8A8 according to the Red Book, so you consume the same amount of bandwidth. And the latest boards already have 256 MB of video RAM, so 18 MB doesn't matter at all in my opinion.

About the sin/cos thing: test it and we'll see. I just can't imagine it being faster.

Guest Anonymous Poster
Why not just store your normal maps as a heightfield, calculate the normals on the fly, and use only 1 height value per pixel?

Guest Anonymous Poster
Another way of storing a normalized vector with two values is to just store X and Z, then calculate Y using sqrt(1 - X * X - Z * Z).
This assumes that the Y component of the normal is always positive (true if the normals were generated from a heightfield).
Creating the map is faster and easier than the previous method since no sin/cos is needed; you just drop one component.
You could use a lookup here too, indexed by X and Z (2D texture) or by X * X + Z * Z (1D texture), which I believe is faster (less memory to access).
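A minimal sketch of that reconstruction (the clamp guards against quantization pushing the value under the square root slightly below zero; the names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Reconstruct Y from the X and Z of a unit normal, assuming Y >= 0
// (e.g. normals derived from a heightfield).
void reconstructNormal(float x, float z, float n[3])
{
    float radicand = 1.0f - x * x - z * z;
    float y = std::sqrt(std::max(0.0f, radicand)); // clamp: quantization can push this slightly negative
    n[0] = x;
    n[1] = y;
    n[2] = z;
}
```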

Do not rely on opengl.org info - drivers can store textures in whatever format they want - so dig through NVIDIA's and ATI's sites for the relevant information. DXDiag can also be helpful, because it will show you which formats your hardware can handle.

Here's how the id Software guys compressed normal maps for QIII:

- (offline) move the R component into the alpha channel
- (offline) DXT5 compress the normal map
- (runtime) move alpha back into R channel in the pixel shader.
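Sketched out, the offline half looks roughly like this (the channel swizzle is the point; the actual DXT5 compressor call is left as a placeholder):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Swizzle an RGBA normal map so that X ends up in alpha before DXT5
// compression. DXT5 stores alpha as a separate, higher-quality block,
// so X and the GB (YZ) pair no longer fight over the same color endpoints.
void swizzleForDxt5(std::vector<uint8_t>& rgba) // interleaved R,G,B,A
{
    for (std::size_t i = 0; i + 3 < rgba.size(); i += 4)
    {
        rgba[i + 3] = rgba[i];   // A <- R (the X component)
        rgba[i]     = 0;         // zero R so it doesn't skew the color endpoints
    }
    // ...then hand 'rgba' to your DXT5 compressor of choice (placeholder).
}
```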

Also, if you're using MIP mapping (you are, right?) you should MIP your normal maps as well. Here's one way:
http://developer.nvidia.com/object/mipmapping_normal_maps.html

joe
image space

Scoob> you probably mean Doom III, not Q3? Q3 doesn't have any normal mapping AFAIK.

Lutz> "I've got an idea how to compress normal maps"
this technique was already used in the md3 format to store normals :)
have you tryed it on the GPU yet to see the performance of the whole thing compared to regular normal maps?

Thanks for all the comments.

Quote:
Original post by Anonymous Poster
Another way of storing a normalized vector with two values is to just store X and Z, then calculate Y using sqrt(1 - X * X - Z * Z).
This assumes that the Y component of the normal is always positive (true if the normals were generated from a heightfield).
Creating the map is faster and easier than the previous method since no sin/cos is needed; you just drop one component.
You could use a lookup here too, indexed by X and Z (2D texture) or by X * X + Z * Z (1D texture), which I believe is faster (less memory to access).


I've read about this method. Currently my normal map is stored in (planet) object coordinates, so basically all orientations of X, Y, and Z are possible. On the other hand, I don't think the method is much faster, since you have to compute a sqrt, which I guess is about as bad as sin/cos. A lookup is possible, though. But it doesn't solve the compression problem.

Quote:

Why not just store your normal maps as a heightfield, calculate the normals on the fly, and use only 1 height value per pixel?

OK, that would cost the least memory, but it would require at least 3 texture lookups in the pixel shader to get the height differences along the two directions of the heightmap.
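Roughly, those three taps would feed something like this (the height samples and the bump scale factor are illustrative):

```cpp
#include <cmath>

// Forward differences from three height samples: h(x,y), h(x+1,y), h(x,y+1).
// 'bumpScale' is an illustrative factor controlling how strong the bumps look.
void normalFromHeights(float h, float hRight, float hUp, float bumpScale, float n[3])
{
    float dx = (hRight - h) * bumpScale;
    float dy = (hUp    - h) * bumpScale;
    float invLen = 1.0f / std::sqrt(dx * dx + dy * dy + 1.0f);
    n[0] = -dx * invLen;
    n[1] = -dy * invLen;
    n[2] =  1.0f * invLen;   // tangent-space normal, Z points away from the surface
}
```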

Scoob Droolins:
Thanks for the link. I'm using mipmapping for the normals as well.
Why does QIII need the R component? Do they store phi and theta in RGB somehow and R in alpha? What do they do in the pixel shader? Convert (theta, phi, R) back to (n1, n2, n3)?


Quote:

Lutz> "I've got an idea how to compress normal maps"
This technique was already used in the md3 format to store normals :)
Have you tried it on the GPU yet to see how the whole thing performs compared to regular normal maps?


Ah, good to know! I couldn't imagine it was new anyway.
I haven't tried it yet. I guess I won't see any performance change (Radeon 9800 Pro 128MB), since the normal map stuff is only a small part of my pixel shader and the workload is dominated by other things. Maybe I'll temporarily disable the other stuff.

Anyway, I think I'll post some results once it works.

Dag. Yeah, I meant Doom III (always get those confused).

Quote:
Original post by Lutz
Scoob Droolins:
Thanks for the link. I'm using mipmapping for the normals as well.
Why does QIII need the R component? Do they store phi and theta in RGB somehow and R in alpha? What do they do in the pixel shader? Convert (theta, phi, R) back to (n1, n2, n3)?


I think they use standard XYZ pixel normals; the R component is the pixel normal's X value, and it doesn't matter whether it's object or tangent space. Move R to alpha, then zero out R. Now DXT5-compress, which handles alpha separately. This gives more accurate compression of both the GB (YZ) channels and the A (X) channel. In the pixel shader (now that the normal map has been decompressed by the hardware), move A back to R to restore the proper XYZ normal, then proceed as usual.
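In CPU-style C++, the per-pixel reconstruction amounts to something like this (texel channels assumed to be in [0,1], as they come out of the sampler):

```cpp
#include <cmath>

// What the pixel shader does after sampling the DXT5-swizzled normal map.
void unswizzleNormal(float texelG, float texelB, float texelA, float n[3])
{
    float x = texelA * 2.0f - 1.0f;   // A holds the original R (X)
    float y = texelG * 2.0f - 1.0f;
    float z = texelB * 2.0f - 1.0f;
    float invLen = 1.0f / std::sqrt(x * x + y * y + z * z);
    n[0] = x * invLen;                // renormalize: DXT quantization denormalizes slightly
    n[1] = y * invLen;
    n[2] = z * invLen;
}
```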

joe
image space

Quote:
Original post by Scoob Droolins
- (offline) move the R component into the alpha channel
- (offline) DXT5 compress the normal map
- (runtime) move alpha back into R channel in the pixel shader.

Just a quick correction: points 1 and 2 are done at level-load time, so it's not really offline. Or is that just different terminology?


Quote:
Original post by Anonymous Poster
Another way of storing a normalized vector with two values is to just store X and Z, then calculate Y using sqrt(1 - X * X - Z * Z).
This assumes that the Y component of the normal is always positive (true if the normals were generated from a heightfield).
Creating the map is faster and easier than the previous method since no sin/cos is needed; you just drop one component.
You could use a lookup here too, indexed by X and Z (2D texture) or by X * X + Z * Z (1D texture), which I believe is faster (less memory to access).

This is how NVIDIA's HILO textures work. They store the X and Z components with 8 or 16 bits per component and then calculate Y in the pixel shader. But their point is not increased compression but increased precision.

Quote:
Original post by _DarkWIng_
Just a quick correction: points 1 and 2 are done at level-load time, so it's not really offline. Or is that just different terminology?


You're right: because of the A/R swap, no current offline compressor (like the nVidia Photoshop plugin) can do this. I'm thinking this feature may show up in the plugin soon.

Thanks _DarkWing_

Guest Anonymous Poster
I've tested the theta/phi method now. It works, BUT:

1) It's slower.
I used sin/cos functions in my pixel shader (no texture lookups; the combined sincos function didn't work for some reason). The frame rate drops from over 200 FPS down to 150 FPS (Radeon 9800 Pro, 4xAA, 8xAF, 800x600, planet fills the whole screen, ~20000 polys).

2) There's a problem where theta wraps around from 2*pi to 0.
It could probably be solved, though.

3) It doesn't look pretty (most important!)
Two things are important to me: the normal map must be compressible and it should still look good. My GPU didn't want to compress luminance/alpha, so I wrote theta/phi into the RG channels, set BA to zero (just for testing), and compressed the texture. The result showed (almost) the same artifacts as the (old) XYZ method.

So what I did was change my normal mapping from object space to tangent space (everything is stored in tangent space now). And voila - it looks good even with compressed (XYZ) normal maps!

This 2D representation has been exploited in photon mapping, for example. You can store the spherical coordinates as bytes, leading to only two bytes per pixel/photon/whatever, and then use a lookup table (of size 65536) to grab the Cartesian direction directly, or two tables of size 256, one for sin and one for cos, and then do the math (the latter can lead to more efficient cache usage).
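A sketch of the single-big-table variant (65536 entries indexed by the packed theta/phi byte pair, using the same decode formula as earlier in the thread):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Build a 65536-entry table mapping the packed (theta, phi) byte pair
// straight to a Cartesian direction. Index = theta * 256 + phi.
std::vector<Vec3> buildDirectionTable()
{
    const float pi = 3.14159265f;
    std::vector<Vec3> table(256 * 256);
    for (int t = 0; t < 256; ++t)
    {
        for (int p = 0; p < 256; ++p)
        {
            float theta = t / 255.0f * 2.0f * pi;
            float phi   = (p / 255.0f - 0.5f) * pi;
            table[t * 256 + p] = { std::sin(theta) * std::cos(phi),
                                   std::cos(theta) * std::cos(phi),
                                   std::sin(phi) };
        }
    }
    return table;
}
```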

-- Mikko

Guest Anonymous Poster
One of the problems with the theta/phi representation, as you already figured out, is that you cannot rely on texture filtering to smooth your normals around the poles.
But suppose you choose a representation that has no poles (a hemisphere in tangent space). Then you convert your angles in [-pi/2, pi/2] to a position in [-1,1]x[-1,1]x[0,1] using two monotonic functions, and you can deduce the last component from the first two.

You will soon realize that you can skip that first step and store the first two components directly, still deducing the last one (it's straightforward on a single hemisphere).

Skipping the sin/cos calculation is usually a big performance win. Additionally, depending on where you want your precision, note that storing components directly gives higher precision around the "zero" position, which helps with smooth gradients, and that is what matters visually.

Of course this "hemisphere" trick only works in tangent space, which means it's only viable for per-pixel normal calculations in traditional techniques.
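A quick numeric illustration of that precision point, assuming 8-bit quantization in both cases:

```cpp
#include <cmath>
#include <cstdio>

// Compare 8-bit quantization steps near the "straight up" tangent-space normal:
// storing a component directly vs. storing an angle over [-pi/2, pi/2].
int main()
{
    const double pi = 3.14159265358979;
    double componentStep = 2.0 / 255.0;   // component stored in [-1, 1]
    double angleStep     = pi / 255.0;    // angle stored in [-pi/2, pi/2]
    // Near zero tilt, d(component)/d(angle) is roughly 1, so the angle step
    // maps almost one-to-one onto the component.
    std::printf("direct component step: %.4f\n", componentStep);   // ~0.0078
    std::printf("angle-derived step:    %.4f\n", angleStep);       // ~0.0123
    return 0;
}
```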

