Calculating Normals From Displacement


Recommended Posts

Would it be feasible to cut down on texture memory by using only displacement maps and then calculating the normals in your shader?

You would cut down on texture memory, but increase the number of samples needed (three heights at a minimum) and increase the amount of calculation required just to get your normal vector. You would probably be better off converting your normal vectors to a spherical coordinate system and then converting back to a Cartesian coordinate system after loading the two parameters.

Depending on your requirements, you might even be able to pack both variables into a single component to minimize the memory and bandwidth required...
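To make the trade-off concrete, here is a minimal sketch of the height-map approach described above: deriving a normal from three height samples via forward differences. This is a Python illustration with hypothetical names (`normal_from_height`, `texel_world_size`), not code from the thread; a real shader would do the equivalent per-pixel with texture fetches.

```python
import math

def normal_from_height(height, x, y, texel_world_size=1.0):
    """Approximate a surface normal from a height map using three
    samples (the minimum mentioned above): the center texel plus its
    +x and +y neighbors (forward differences).
    `height` is a 2D list of floats; indices are assumed in range."""
    h0 = height[y][x]
    hx = height[y][x + 1]  # neighbor in +x
    hy = height[y + 1][x]  # neighbor in +y
    # Slopes along x and y; the unnormalized normal is (-dh/dx, -dh/dy, 1).
    nx = (h0 - hx) / texel_world_size
    ny = (h0 - hy) / texel_world_size
    nz = 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A flat height field yields a normal pointing straight up.
flat = [[0.5, 0.5], [0.5, 0.5]]
print(normal_from_height(flat, 0, 0))  # -> (0.0, 0.0, 1.0)
```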

Do spherical coordinates actually work for normal mapping? Do you have to use point sampling?

I'm not saying it doesn't work, because I haven't tried it, but wouldn't bilinear interpolation cause some odd artifacts, since it would take the long route around the sphere if, say, you had coordinate -1 next to coordinate 1?


You would probably be better off converting your normal vectors to a spherical coordinate system and then converting back to a Cartesian coordinate system after loading the two parameters.

I'm not certain, but I think that if you stored spherical coordinates in your texture, filtering would produce incorrect results. I could already store just X and Y if I wanted to cut normals down to two channels; I was just curious whether it could be cut down to a single channel while still giving good results.
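For reference, the two-channel XY storage mentioned here works because a unit normal satisfies x² + y² + z² = 1, so Z can be rebuilt in the shader. A small sketch (Python, hypothetical `reconstruct_z` name; assumes a tangent-space normal with Z >= 0):

```python
import math

def reconstruct_z(x, y):
    """Rebuild the Z component of a unit normal stored as X and Y only.
    The max() guards against filtering or quantization error pushing
    x^2 + y^2 slightly above 1, which would make sqrt() fail."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

print(reconstruct_z(0.6, 0.0))  # -> ~0.8
```

The Z >= 0 assumption is why this only suits tangent-space normal maps, where the normal never points into the surface.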


Do spherical coordinates actually work for normal mapping? Do you have to use point sampling?

I'm not saying it doesn't work, because I haven't tried it, but wouldn't bilinear interpolation cause some odd artifacts, since it would take the long route around the sphere if, say, you had coordinate -1 next to coordinate 1?

That is actually a good point - the interpolation wouldn't be correct in some situations (as you mentioned, since the angle only increases in one direction). However, how often do you have texture data with wildly swinging texels next to one another? In general, I think it would still work as an approximation, even if it wasn't an exact one-to-one mapping...
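The seam artifact being debated can be demonstrated directly. Below is a sketch (Python, hypothetical `encode_spherical`/`decode_spherical` names) of angle-based normal storage, plus the failure case: two nearly identical normals whose azimuths sit on opposite sides of the +/-pi seam average to a direction that points the opposite way, which is what a bilinear fetch would effectively do.

```python
import math

def encode_spherical(n):
    """Unit normal -> (theta, phi): polar angle and azimuth."""
    x, y, z = n
    return (math.acos(max(-1.0, min(1.0, z))), math.atan2(y, x))

def decode_spherical(theta, phi):
    s = math.sin(theta)
    return (s * math.cos(phi), s * math.sin(phi), math.cos(theta))

# Both normals point almost exactly along -x...
a = encode_spherical(decode_spherical(math.pi / 2,  math.pi - 0.01))
b = encode_spherical(decode_spherical(math.pi / 2, -math.pi + 0.01))
# ...but naively averaging the stored azimuths (as linear filtering
# would) gives ~0, so the result points along +x instead of -x.
avg_phi = (a[1] + b[1]) / 2.0
print(decode_spherical(math.pi / 2, avg_phi))  # points near +x, not -x
```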

It's possible, but as Jason mentioned, you would need more texture samples in the shader to compute a normal from a height map. You would probably also end up with lower-quality normals, since normal maps are often generated with a wide filtering kernel.

I think that storing normal maps in spherical coordinates could lead to errors, and storing just X and Y gives poorer quality.

Has anyone used the spheremap transform method for storing normal maps in two-channel textures?

ex: http://aras-p.info/t...thod04spheremap

To me, it looks like the linear interpolation of texture filtering wouldn't cause errors. It also seems more accurate, and the transform instructions are cheaper than those for spherical coordinates.
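For readers who can't follow the truncated link, here is a sketch of a spheremap-style encode/decode (Lambert azimuthal equal-area projection). This is the commonly cited formulation, which I'm assuming matches the linked method; the function names are hypothetical, and a shader version would use the same math on normalized vectors.

```python
import math

def encode_spheremap(n):
    """Pack a unit normal into two channels in [0, 1].
    Assumes z > -1 (the straight-down normal is a singularity)."""
    x, y, z = n
    f = math.sqrt(8.0 * z + 8.0)
    return (x / f + 0.5, y / f + 0.5)

def decode_spheremap(enc):
    """Unpack two [0, 1] channels back into a unit normal."""
    u, v = enc
    fx, fy = u * 4.0 - 2.0, v * 4.0 - 2.0
    f = fx * fx + fy * fy
    g = math.sqrt(1.0 - f / 4.0)
    return (fx * g, fy * g, 1.0 - f / 2.0)

n = (0.6, 0.0, 0.8)
print(decode_spheremap(encode_spheremap(n)))  # -> ~(0.6, 0.0, 0.8)
```

Unlike the raw X&Y approach, this handles normals with negative Z, which is why it is popular for view-space normals in G-buffers.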

Storing just XY is very common when using compressed texture formats. I haven't tried the spheremap transform myself for storing normal maps, but it might well give better quality. However, the spheremap transform is not linear, so linear interpolation definitely will not produce correct results. Still, as long as you generate the mip levels individually before encoding, the error from interpolation might be less than the error from storing XY and reconstructing Z. You'd have to do the math or run some experiments to know for sure.

I gave it a shot. There is no noticeable error in areas of smooth normals, but there is noticeable error in places with harsh changes in normal direction.

I used the reference image from http://aras-p.info/t...malStorage.html

Then again, I'm getting a lot of error in the Z component when storing just X and Y. I'm not really fond of either technique.

Edit: Actually, after experimenting with it some more, it might not be so bad. This time I used some real tangent-space normal maps and more conservative interpolation (1.5X), and the results are almost identical to storing all three components. My tests seem to show that the spheremap transform results in slightly less error than the X&Y approach, despite being non-linear.
