Calculating Normals From Displacement

7 comments, last by Chris_F 12 years, 3 months ago
Would it be feasible to cut down on texture memory by using only displacement maps and then calculating the normals in your shader?
You would cut down on texture memory, but you increase the number of samples needed (three height samples at a minimum) and the amount of calculation required just to get your normal vector. You would probably be better off converting to a spherical coordinate system for your normal vectors and then just converting back to a Cartesian coordinate system after loading the two parameters.
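To make that concrete, here is a minimal GLSL sketch of what those three samples buy you: reconstructing a tangent-space normal from forward differences of the height field. The names heightMap, texelSize, and bumpScale are hypothetical, just for illustration.

```glsl
// Sketch: tangent-space normal from three height samples via forward
// differences. heightMap, texelSize, and bumpScale are hypothetical names.
uniform sampler2D heightMap;
uniform vec2 texelSize;  // 1.0 / texture resolution
uniform float bumpScale; // displacement scale, with 1/texel spacing folded in

vec3 normalFromHeight(vec2 uv)
{
    float h  = texture(heightMap, uv).r;
    float hx = texture(heightMap, uv + vec2(texelSize.x, 0.0)).r;
    float hy = texture(heightMap, uv + vec2(0.0, texelSize.y)).r;
    // The normal is perpendicular to the height-field gradient.
    vec2 grad = vec2(hx - h, hy - h) * bumpScale;
    return normalize(vec3(-grad, 1.0));
}
```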

Depending on your requirements, you might even be able to pack both variables into a single component for a minimum amount of memory and bandwidth required...
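To sketch the spherical conversion in GLSL (the exact [0, 1] remapping here is just one possible choice, for illustration):

```glsl
// Sketch: pack a unit normal into two spherical angles remapped to [0, 1],
// then rebuild the Cartesian vector after sampling.
const float PI = 3.14159265;

vec2 encodeSpherical(vec3 n)
{
    float phi   = atan(n.y, n.x); // [-pi, pi]; note: wraps at the seam
    float theta = acos(n.z);      // [0, pi]
    return vec2(phi / PI * 0.5 + 0.5, theta / PI);
}

vec3 decodeSpherical(vec2 enc)
{
    float phi   = (enc.x * 2.0 - 1.0) * PI;
    float theta = enc.y * PI;
    float s = sin(theta);
    return vec3(s * cos(phi), s * sin(phi), cos(theta));
}
```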
Do spherical coordinates work for normal mapping? Do you have to use point sampling?

Not saying it doesn't work, because I haven't tried it, but wouldn't bilinear interpolation cause some odd artifacts, since it would take the long route around the sphere if, say, you had coordinate -1 next to coordinate 1?

> You would probably be better off converting to a spherical coordinate system for your normal vectors and then just converting back to a Cartesian coordinate system after loading the two parameters.


I'm not certain, but I think that if you stored spherical coordinates in your texture, filtering would produce incorrect results. I could already store just X and Y for normals if I wanted to cut it down to two channels; I was just curious whether it could be cut down to a single channel while still giving good results.
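For reference, the two-channel X/Y scheme I mean looks something like this (a sketch; the [0, 1] remap is an assumption):

```glsl
// Sketch: store only x and y, reconstruct z at load time. This assumes
// tangent-space normals, where z is always non-negative.
vec3 decodeXY(vec2 enc)
{
    vec2 xy = enc * 2.0 - 1.0; // remap [0, 1] back to [-1, 1]
    float z = sqrt(max(0.0, 1.0 - dot(xy, xy)));
    return vec3(xy, z);
}
```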

> Do spherical coordinates work for normal mapping? Do you have to use point sampling?
>
> Not saying it doesn't work, because I haven't tried it, but wouldn't bilinear interpolation cause some odd artifacts, since it would take the long route around the sphere if, say, you had coordinate -1 next to coordinate 1?


That is actually a good point - the interpolation wouldn't be correct in some situations (as you mentioned, since the angle only increases in one direction). However, how often do you have wildly differing texels next to one another? In general, I think it would still work as an approximation, even if it isn't an exact one-to-one mapping...
It's possible, but as Jason mentioned, you would need more texture samples in the shader to compute a normal from a height map. You would probably also end up with lower-quality normals, since normal maps are often generated with a wide filtering kernel.
I think that storing normal maps in spherical coordinates could lead to errors, and I think storing just X and Y leads to poorer quality.

Has anyone used a spheremap transform method for storing normal maps into two channel textures?

ex: http://aras-p.info/t...thod04spheremap

To me, it looks like the linear interpolation of texture filtering wouldn't cause errors. In addition, it seems to be more accurate, and the transform is cheaper in instructions than spherical coordinates.
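For reference, one common GLSL formulation of the spheremap transform, following the method on the page linked above (copied here as a sketch, so double-check against the link):

```glsl
// Sketch: spheremap transform (Lambert azimuthal equal-area projection).
vec2 encodeSpheremap(vec3 n)
{
    // p is zero only at n.z == -1, a direction tangent-space normals never hit.
    float p = sqrt(n.z * 8.0 + 8.0);
    return n.xy / p + 0.5;
}

vec3 decodeSpheremap(vec2 enc)
{
    vec2 fenc = enc * 4.0 - 2.0;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0 - f / 4.0);
    return vec3(fenc * g, 1.0 - f / 2.0);
}
```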
Storing just XY is very common when using compressed texture formats. I haven't tried the spheremap transform myself for storing normal maps, but it's possible that it might result in better quality. However, the spheremap transform is not linear, so linear interpolation definitely will not produce correct results. But as long as you generate the mip levels individually before encoding, the error from interpolation might be less than the error from storing XY and reconstructing Z. You'd have to do the math or run some experiments to know for sure.
I gave it a shot. There is no noticeable error in areas of smooth normals, but there is noticeable error in places with harsh changes in normal direction.

[image: normals00xyz.jpg]

I used the reference image from http://aras-p.info/t...malStorage.html

Then again, I'm getting a lot of error in the Z component when storing just X and Y. I'm not really fond of either technique.

Edit: Actually, experimenting with it some more, it might not be so bad. This time I used some real tangent-space normal maps with more conservative interpolation (1.5X), and the results are almost identical to storing all three components. My tests seem to show that the spheremap transform results in slightly less error than the X-and-Y approach, despite being non-linear.
