Quote: Original post by Anonymous Poster
Another way of storing a normalized vector with two values is to just store X and Z, then calculate Y using sqrt(1 - X * X - Z * Z).
This assumes that the Y component of the normal is always positive (true if normals were generated from a heightfield).
The creation of the map is faster and easier than the previous method since no cos/sin etc. are needed; you just drop one component.
You could use a lookup here too, using X and Z (2D texture) or X * X + Z * Z (1D texture), which I believe is faster (less memory to access).
I've read about this method. Currently, my normal map is stored in (planet) object coordinates, so basically all orientations of X, Y and Z are possible. On the other hand, I don't think the method is much faster, since you still have to compute a sqrt, which I'd guess is about as expensive as sin/cos. A lookup is possible, though. But it doesn't solve the compression problem.
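For reference, the reconstruction that quote describes is only a few lines; here's a minimal C sketch (decode_normal_xz is my own name, and it assumes X and Z are already unpacked to signed values in [-1, 1]):

#include <math.h>

/* Sketch: rebuild a unit normal from stored X and Z, assuming the
   Y component is non-negative (e.g. normals from a heightfield). */
void decode_normal_xz(float x, float z, float n[3])
{
    float d = 1.0f - x * x - z * z;
    n[0] = x;
    n[1] = (d > 0.0f) ? sqrtf(d) : 0.0f; /* clamp against rounding error */
    n[2] = z;
}

As noted above, the sign of Y is lost in this scheme, so it only works when Y is known to be non-negative, which isn't the case for my object-space planet normals.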
Quote:
Why not just store your normal maps as a heightfield, calculate the normals on the fly and only use 1 height value per pixel?
OK, that would cost the least memory, but it would require at least 3 texture lookups in the pixel shader to get the height differences along the two axes of the heightmap.
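For illustration, the on-the-fly version would look something like this (a rough C sketch of the math; sample_height, texel_size and height_scale are placeholders for however the actual heightmap is accessed):

#include <math.h>

/* Assumed helper: fetches one height sample at texture coordinate (u, v). */
extern float sample_height(float u, float v);

/* Sketch: derive a normal from three height samples (center, right, up)
   via forward differences. */
void normal_from_heightfield(float u, float v,
                             float texel_size, float height_scale,
                             float n[3])
{
    float h0 = sample_height(u, v);                /* lookup 1 */
    float hx = sample_height(u + texel_size, v);   /* lookup 2 */
    float hz = sample_height(u, v + texel_size);   /* lookup 3 */

    float dx = (hx - h0) * height_scale;
    float dz = (hz - h0) * height_scale;

    /* normal = normalize(cross((0, dz, 1), (1, dx, 0))) = (-dx, 1, -dz) / len */
    float len = sqrtf(dx * dx + dz * dz + 1.0f);
    n[0] = -dx / len;
    n[1] =  1.0f / len;
    n[2] = -dz / len;
}

So it's the cheapest in memory, but those three lookups per pixel are exactly the cost I'd like to avoid.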
Scoob Droolins:
Thanks for the link. I'm also using mipmapping for the normals.
Why does QIII need the R component? Do they store phi and theta in RGB somehow and R in alpha? What do they do in the pixel shader? Convert (theta, phi, R) back to (n1, n2, n3)?
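If they really do store spherical coordinates, I'd expect the decode to be the standard spherical-to-Cartesian conversion, roughly like this (my own sketch, not QIII's actual code):

#include <math.h>

/* Sketch: standard spherical-to-Cartesian conversion. For a unit
   normal r == 1, so only theta and phi would actually need storing --
   hence my question about why R is kept at all. */
void spherical_to_normal(float theta, float phi, float r, float n[3])
{
    n[0] = r * sinf(theta) * cosf(phi);
    n[1] = r * sinf(theta) * sinf(phi);
    n[2] = r * cosf(theta);
}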