Normal map compression


Hey! I've got an idea for compressing normal maps. Normal maps usually take 3 bytes per pixel (RGB). However, since the normals are normalized, it's really 2D data being stored, not 3D: if you express a normal n = (n1, n2, n3) in spherical coordinates (theta, phi, R), then R is always 1. So you only need to store the two angles theta and phi. You could create a texture map with internal format luminance-8-alpha-8 (L8A8) and store theta in luminance and phi in alpha. That's 2 bytes per pixel, so you need 33% less memory.

Then, in a pixel shader, how do you compute the lighting? There are two possibilities:
1) Reconstruct the normal with n = (sin(theta)*cos(phi), cos(theta)*cos(phi), sin(phi)). You could create a sin/cos lookup table or let the GPU compute sin/cos (which is fast on modern hardware, AFAIK).
2) Create a 2D lookup table which maps (theta, phi) to n.

Once you have n back, the lighting calculation is as usual. Another advantage is that the reconstructed normal is always normalized in the pixel shader, so you don't need a normalization cubemap.

I haven't implemented it yet, since I wanted to know first whether anyone has experience with this technique. One question: can you compress L8A8 textures, and what quality does that produce? I tried compressing the (standard) RGB normal map, but it looks quite ugly - too many quantization artifacts. Is there another catch I missed?
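For illustration, here's a minimal GLSL fragment-shader sketch of option 1. This is not from the original post: the packing convention (theta and phi remapped into [0,1]) and all the names are my assumptions.

// Sketch only: assumes theta in [0,2*pi] packed into luminance and
// phi in [-pi/2,pi/2] packed into alpha; uniform names are hypothetical.
uniform sampler2D normalMap;   // L8A8: theta in luminance, phi in alpha
uniform vec3 lightDir;         // normalized, in the same space as the normals
varying vec2 uv;

void main()
{
    vec2 ta = texture2D(normalMap, uv).ra;      // luminance replicates into r
    float theta = ta.x * 6.2831853;             // unpack [0,1] -> [0, 2*pi]
    float phi   = (ta.y - 0.5) * 3.1415927;     // unpack [0,1] -> [-pi/2, pi/2]
    // The parameterization from the post; unit length by construction.
    vec3 n = vec3(sin(theta) * cos(phi),
                  cos(theta) * cos(phi),
                  sin(phi));
    gl_FragColor = vec4(vec3(max(dot(n, lightDir), 0.0)), 1.0);
}

Option 2 would replace the sin/cos math here with a single texture2D lookup into a precomputed (theta, phi) -> n map.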

1. What about the GB channels of your textures? As far as I know, OpenGL always uses RGBA internally; with GL_LUMINANCE_ALPHA you create a texture where RGB = theta and ALPHA = phi.

On the card you won't save a single bit with this technique.

And I doubt that using sin/cos functions in the pixel shader is faster than some texture lookups.

Stay with R8G8B8A8 and you're fine.

Quote:
Original post by Basiror
1. What about the GB channels of your textures? As far as I know, OpenGL always uses RGBA internally; with GL_LUMINANCE_ALPHA you create a texture where RGB = theta and ALPHA = phi. On the card you won't save a single bit with this technique.

According to this page, the L8A8 format really does use only 2 bytes per pixel.

Quote:

And I doubt that using sin/cos functions in the pixel shader is faster than some texture lookups.

I've read somewhere that the NV3x does sin/cos in a single cycle, but the R3xx does not. Anyway, it's recommended to interleave math ops with texture lookups to exploit the parallel structure of the GPU, so a few math ops should be fine.

Quote:

Stay with R8G8B8A8 and you're fine.

RGBA (and RGB) maps get too large and don't compress well (the latter is the main problem), so I'm looking for alternatives.

For some models (planets) I have 1024x1024 normal cubemaps, so that's 1 Mpixel x 6 faces x 3 bytes per pixel = 18 MB, plus 6 MB for mipmaps (uncompressed), for a single normal map. That's way too much. Compressed it's 4 MB, which is acceptable, but it doesn't look good. So I'm wondering whether compressed L8A8 might look better (in case it *can* be compressed).


Well, when the card passes the luminance-alpha texture to the pixel shader, it uses R8G8B8A8 according to the Red Book, so you consume the same amount of bandwidth. And the latest boards already have 256 MB of video RAM, so 18 MB doesn't matter at all, in my opinion.

About the sin/cos thing: test it and we'll see. I just can't imagine it being faster.

Guest Anonymous Poster
Why not just store your normal maps as a heightfield, calculate the normals on the fly, and use only 1 height value per pixel?
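A hedged GLSL sketch of what that could look like (the central-difference scheme and all names are my choice, not the poster's): sample the height at the four neighboring texels and build the normal from the slopes.

uniform sampler2D heightMap;   // one height value per pixel
uniform vec2 texelSize;        // 1.0 / texture dimensions
uniform float bumpScale;       // controls apparent bump strength
varying vec2 uv;

void main()
{
    float hL = texture2D(heightMap, uv - vec2(texelSize.x, 0.0)).r;
    float hR = texture2D(heightMap, uv + vec2(texelSize.x, 0.0)).r;
    float hD = texture2D(heightMap, uv - vec2(0.0, texelSize.y)).r;
    float hU = texture2D(heightMap, uv + vec2(0.0, texelSize.y)).r;
    // Central differences; y is treated as "up" in tangent space.
    vec3 n = normalize(vec3((hL - hR) * bumpScale, 1.0, (hD - hU) * bumpScale));
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);   // pack [-1,1] -> [0,1] for display
}

The trade-off is four texture lookups per pixel instead of one, in exchange for a 1-byte-per-pixel map.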

Guest Anonymous Poster
Another way of storing a normalized vector with two values is to just store X and Z, then calculate Y using sqrt(1 - X*X - Z*Z).
This assumes that the Y component of the normal is always positive (true if the normals were generated from a heightfield).
Creating the map is faster and easier than the previous method, since no sin/cos etc. is needed; you just drop one component.
You could use a lookup table here too, indexed by X and Z (2D texture) or by X*X + Z*Z (1D texture), which I believe is faster (less memory to access).
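A minimal GLSL sketch of that reconstruction, assuming X and Z were remapped from [-1,1] into [0,1] and stored in a two-channel texture (names are mine):

uniform sampler2D normalXZ;    // X in luminance, Z in alpha, packed into [0,1]
varying vec2 uv;

void main()
{
    vec2 xz = texture2D(normalXZ, uv).ra * 2.0 - 1.0;   // unpack to [-1,1]
    float y = sqrt(max(1.0 - dot(xz, xz), 0.0));        // max() guards quantization error
    vec3 n = vec3(xz.x, y, xz.y);                       // Y assumed non-negative
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);            // pack for display
}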

Do not rely on opengl.org info - drivers can store textures in whatever format they want - so dig through NVIDIA's and ATI's sites for the relevant information. DXDiag can also be helpful, because it shows you which formats your hardware can handle.

Here's how the id Software guys compressed normal maps for QIII:

- (offline) move the R component into the alpha channel
- (offline) DXT5 compress the normal map
- (runtime) move the alpha back into the R channel in the pixel shader (sketched below).
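A sketch of the runtime step in GLSL - my reconstruction of what such a shader could look like, assuming the texture holds the original G and B in place and the original R in alpha after the offline swap:

uniform sampler2D dxt5NormalMap;   // DXT5-compressed, R swapped into alpha offline
varying vec2 uv;

void main()
{
    vec4 t = texture2D(dxt5NormalMap, uv);
    vec3 n = vec3(t.a, t.g, t.b) * 2.0 - 1.0;   // alpha carries the original R
    n = normalize(n);                           // fix up compression error
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}

The usual rationale for the swap is that DXT5 stores the alpha channel with its own 8-bit endpoints, at higher quality than the 5:6:5 color endpoints, so the most error-sensitive component survives compression better.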

Also, if you're using MIP mapping (you are, right?) you should MIP your normal maps as well. Here's one way:
http://developer.nvidia.com/object/mipmapping_normal_maps.html

joe
image space

Scoob> you probably mean Doom III, not Q3? Q3 doesn't have any normal mapping AFAIK.

Lutz> "I've got an idea how to compress normal maps"
This technique was already used in the MD3 format to store normals :)
Have you tried it on the GPU yet, to see how the whole thing performs compared to regular normal maps?
