
Calculating Normals From Displacement


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

8 replies to this topic

#1 Chris_F   Members   -  Reputation: 2467


Posted 13 January 2012 - 08:02 PM

Would it be feasible to cut down on texture memory by using only displacement maps and then calculating the normals in your shader?

#2 Jason Z   Crossbones+   -  Reputation: 5428


Posted 14 January 2012 - 02:13 AM

You cut down on texture memory, but you increase the number of samples needed (at least three heights are required) and increase the amount of calculation required just to get your normal vector. You would probably be better off converting your normal vectors to a spherical coordinate system and then converting back to a Cartesian coordinate system after loading the two parameters.

Depending on your requirements, you might even be able to pack both variables into a single component for a minimum amount of memory and bandwidth required...
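The three-sample reconstruction described above can be sketched on the CPU in Python. This is only an illustration of the shader math under stated assumptions: `h` is a 2D height array indexed as `h[y][x]`, and `scale` is an assumed bump-strength factor.

```python
import math

def normal_from_height(h, x, y, scale=1.0):
    """Reconstruct a unit normal at texel (x, y) of height map `h`
    from three height samples: center, right neighbor, neighbor below."""
    c = h[y][x]
    dhdx = (h[y][x + 1] - c) * scale  # forward difference along x
    dhdy = (h[y + 1][x] - c) * scale  # forward difference along y
    # The surface (x, y, h(x, y)) has tangents (1, 0, dhdx) and (0, 1, dhdy);
    # their cross product is the unnormalized normal (-dhdx, -dhdy, 1).
    inv_len = 1.0 / math.sqrt(dhdx * dhdx + dhdy * dhdy + 1.0)
    return (-dhdx * inv_len, -dhdy * inv_len, inv_len)
```

A flat height map yields the straight-up normal (0, 0, 1); a constant ramp yields a tilted normal. A smoother variant would use central differences at the cost of a fourth sample.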

#3 Triangles-PCT   Members   -  Reputation: 257


Posted 14 January 2012 - 05:49 AM

Do spherical coordinates work for normal mapping? Do you have to use point sampling?

Not saying it doesn't work, since I haven't tried it, but wouldn't bilinear interpolation cause some odd artifacts? It would take the long route around the sphere if, say, you had coordinate -1 next to coordinate 1.

#4 Chris_F   Members   -  Reputation: 2467


Posted 14 January 2012 - 10:15 AM

You would probably be better off converting to a spherical coordinate system for your normal vectors and then just converting to a cartesian coordinate system after loading the two parameters.


I'm not certain, but I think that if you stored spherical coordinates in your texture, filtering would produce incorrect results. I could already store just X and Y if I wanted to cut the normals down to two channels. I was just curious whether it could be cut down to a single channel while still giving good results.
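For reference, the two-channel X/Y scheme mentioned here typically reconstructs Z from the unit-length constraint. A minimal sketch (function names are illustrative; it assumes a front-facing tangent-space normal with z >= 0):

```python
import math

def encode_xy(n):
    # Keep only x and y, remapped from [-1, 1] to [0, 1] for a unorm texture.
    return (n[0] * 0.5 + 0.5, n[1] * 0.5 + 0.5)

def decode_xy(tx, ty):
    x = tx * 2.0 - 1.0
    y = ty * 2.0 - 1.0
    # Unit length plus z >= 0 determines z; the clamp guards against
    # filtered x/y pairs landing outside the unit disc.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

That clamp is one source of Z error after filtering: interpolated X/Y pairs are not guaranteed to come from a unit-length normal.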

#5 Jason Z   Crossbones+   -  Reputation: 5428


Posted 14 January 2012 - 10:44 AM

Do spherical coordinates work for normal mapping? Do you have to use point sampling?

Not saying it doesn't work, since I haven't tried it, but wouldn't bilinear interpolation cause some odd artifacts? It would take the long route around the sphere if, say, you had coordinate -1 next to coordinate 1.


That is actually a good point - the interpolation wouldn't be correct in some situations (as you mentioned, since the angle only increases in one direction). However, how often do you have texture data with wildly swinging texels next to one another? In general, I think it would still work as an approximation, even if it isn't an exact one-to-one mapping...
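The wrap-around problem described above is easy to demonstrate numerically. This hypothetical snippet averages two stored angles the way linear filtering would, for two directions that both point almost exactly along -X:

```python
import math

def angle_to_dir(a):
    # Recover a 2D unit direction from a stored angle.
    return (math.cos(a), math.sin(a))

a0 = math.radians(179.0)   # just above the +/-180 degree seam
a1 = math.radians(-179.0)  # just below it
mid = 0.5 * (a0 + a1)      # what linear filtering of the angles computes

# Both source directions point almost exactly along -X,
# but the filtered angle is 0, which points along +X instead.
print(angle_to_dir(a0))   # roughly (-1.0, 0.017)
print(angle_to_dir(a1))   # roughly (-1.0, -0.017)
print(angle_to_dir(mid))  # (1.0, 0.0)
```

Away from the seam, the same average is a reasonable approximation, which matches the point made above.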

#6 MJP   Moderators   -  Reputation: 11790


Posted 14 January 2012 - 12:52 PM

It's possible, but as Jason mentioned, you would need more texture samples in the shader to compute a normal from a height map. You would probably also end up with lower-quality normals, as normal maps are often generated with a wide filtering kernel.

#7 Chris_F   Members   -  Reputation: 2467


Posted 14 January 2012 - 07:38 PM

I think that storing normal maps in spherical coordinates could lead to errors, and I think storing just X and Y leads to poorer quality.

Has anyone used a spheremap transform method for storing normal maps into two channel textures?

ex: http://aras-p.info/t...thod04spheremap

To me, it looks like the linear interpolation of texture filtering wouldn't cause errors. In addition, it seems to be more accurate, and the instructions for the transformation are cheaper than those for spherical coordinates.
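For readers without the link handy, here is a sketch of the commonly published spheremap (Lambert azimuthal equal-area) encoding that the linked comparison describes as method #4. Function names are illustrative; the encoding is singular only at n = (0, 0, -1), which tangent-space normals never reach:

```python
import math

def encode_spheremap(n):
    # Map a unit normal to two [0, 1] channels via the spheremap transform.
    f = math.sqrt(8.0 * n[2] + 8.0)  # singular only at n = (0, 0, -1)
    return (n[0] / f + 0.5, n[1] / f + 0.5)

def decode_spheremap(ex, ey):
    fx = ex * 4.0 - 2.0
    fy = ey * 4.0 - 2.0
    f = fx * fx + fy * fy          # equals 2 * (1 - z)
    g = math.sqrt(1.0 - f / 4.0)   # equals sqrt((1 + z) / 2)
    return (fx * g, fy * g, 1.0 - f / 2.0)
```

Encoding followed by decoding reproduces the original unit normal up to floating-point rounding; the interpolation error being discussed comes from filtering the encoded channels, not from the round trip itself.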

#8 MJP   Moderators   -  Reputation: 11790


Posted 15 January 2012 - 12:55 AM

Storing just XY is very common when using compressed texture formats. I haven't tried the spheremap transform myself for storing normal maps, but it's possible that it might result in better quality. However, the spheremap transform is not linear, so linear interpolation definitely will not produce exactly correct results. But as long as you generate the mip levels individually before encoding, the error from interpolation might be less than the error from storing XY and reconstructing Z. You'd have to do the math or run some experiments to know for sure.

#9 Chris_F   Members   -  Reputation: 2467


Posted 15 January 2012 - 02:11 PM

I gave it a shot. There is no noticeable error in areas of smooth normals, but there is noticeable error in places with harsh changes in normal direction.


I used the reference image from http://aras-p.info/t...malStorage.html

Then again, I'm getting a lot of error in the Z component when storing just X and Y. I'm not really fond of either technique.

Edit: Actually, after experimenting with it some more, it might not be so bad. This time I used some real tangent-space normal maps and more conservative interpolation (1.5X), and the results are almost identical to storing all three components. My tests seem to show that the spheremap transform results in slightly less error than the X&Y approach, despite being non-linear.



