
# Storing normals as spherical coordinates


## Recommended Posts

We've all come across the problem of storing XYZ normals efficiently and precisely without using too much space... And one solution I've heard is to convert the 3-component normal into 2-component spherical coordinates. What is the math behind converting a normal into spherical coordinates and back? Will 16 bits be enough to store these spherical coordinates relatively accurately (by relatively accurately, I mean as accurate as storing the 3 components of the normal as 16 bits each directly)? What is the numerical range of the converted spherical coordinates?
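To put rough numbers on the accuracy question, here is a small Python sketch (my own, not from the thread; all helper names are mine) that quantizes the two spherical values to 8 bits each (16 bits total), decodes them back to a unit vector, and measures the angular error:

```python
import math

def quantize(v, bits):
    """Round a [0,1] value to the nearest representable quantization level."""
    levels = (1 << bits) - 1
    return round(v * levels) / levels

def roundtrip_spherical(n, bits=8):
    """Encode a unit normal as two quantized [0,1] values, then decode."""
    theta01 = (math.atan2(n[1], n[0]) / math.pi + 1.0) * 0.5  # longitude -> [0,1]
    z01 = n[2] * 0.5 + 0.5                                    # z -> [0,1]
    theta01, z01 = quantize(theta01, bits), quantize(z01, bits)
    z = z01 * 2.0 - 1.0
    t = (theta01 * 2.0 - 1.0) * math.pi
    s = math.sqrt(max(0.0, 1.0 - z * z))                      # sin(phi) from z
    return (math.cos(t) * s, math.sin(t) * s, z)

def angular_error_deg(a, b):
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

# Example normal away from the poles:
l = math.sqrt(1 + 4 + 9)
n = (1 / l, 2 / l, 3 / l)
print(angular_error_deg(n, roundtrip_spherical(n)))
```

For normals away from the poles the error is typically a fraction of a degree; near z = ±1 the quantized z loses angular precision faster, which is worth checking against your particular use case.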

##### Share on other sites
A spherical co-ordinate still uses 3 components: a distance from the origin (often named radius), an angle for the longitude, and another angle for the latitude. With the special interest in normals (i.e. direction vectors of length 1), the radius is obviously constrained to 1, and hence need not be stored explicitly. The ranges of the two angles are usually a full circle (e.g. 360°) for the longitude and a half circle (e.g. 180°) for the latitude. Special use cases may limit the angles further, e.g. if only the upper half sphere is needed.

General conversion formulas can be found on the Internet, e.g. here.
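A minimal Python sketch of those conversion formulas (my own code, assuming a unit-length normal, with the polar angle measured from +Z):

```python
import math

def to_spherical(n):
    """Unit normal (x, y, z) -> (theta, phi): longitude in (-pi, pi], polar angle in [0, pi]."""
    theta = math.atan2(n[1], n[0])   # longitude
    phi = math.acos(n[2])            # polar angle; radius is 1, so no division needed
    return theta, phi

def from_spherical(theta, phi):
    """(theta, phi) -> unit normal (x, y, z)."""
    s = math.sin(phi)
    return (math.cos(theta) * s, math.sin(theta) * s, math.cos(phi))

print(from_spherical(*to_spherical((0.6, 0.0, 0.8))))  # round-trips to (0.6, 0.0, 0.8)
```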

##### Share on other sites
Spherical normal storage works great. I recommend storing (atan2(y,x), z). You don't need to encode z with acos(), and you can use a sincos() or atan2() lookup texture on older cards to turn ALU ops into tex ops.

##### Share on other sites
If those normals are vertex normals (which I assume, since you are probably not using a 32-bit normal map, duh!), you might also want to try encoding them as 8-bit normals, which is usually enough. An 8-bit normal is encoded like a color in DX, so you map the range [-1,1] to [0,255] and convert that to an RGB color:
Encode: enc = MakeDWORDColor((n * 0.5 + 0.5) * 255)
Decode: n = enc * 2 - 1 (maps range [0,1] back to [-1,1] in HLSL)
For spherical it's like patw describes:
Encode: enc = (atan2(n.y,n.x),n.z)
Decode: sincos(enc.x, ss, cc); n = (cc, ss, enc.z)
If you also have to store the tangent, you could try storing the full TBN system as a quaternion.
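A CPU-side sketch of that 8-bit color-style encoding (Python, my own helper names; the shader side just does `enc * 2 - 1` on the sampled [0,1] values):

```python
def encode_u8(n):
    """Map each component from [-1, 1] to an integer byte in [0, 255], like a DX color."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

def decode_u8(enc):
    """Map bytes in [0, 255] back to floats in [-1, 1]."""
    return tuple(e / 255.0 * 2.0 - 1.0 for e in enc)

print(decode_u8(encode_u8((0.1, -0.5, 0.85))))
```

The round-trip error is at most half a quantization step (1/255) per component, which is why 8 bits is often enough for vertex normals but visibly banded for high-frequency normal maps.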

##### Share on other sites
All sounds very nice, but I think there might be a problem with linearly interpolating such coordinates. So although it's probably a very good solution for storing normal buffers in deferred renderers, I'm afraid trying to linearly interpolate such normal maps might result in artifacts. Imagine interpolating between a pixel with a coordinate of 179° (u8 = 255) and one of its neighbors with a coordinate of -180° (u8 = 0).
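To make the wraparound concern concrete, here is a tiny Python illustration (values of my own choosing): the two stored angles describe nearly the same direction, but a linear blend of the stored values points almost the opposite way.

```python
import math

a = math.radians(179)    # stored near one end of the angle range
b = math.radians(-180)   # its neighbor: almost the same direction
mid = (a + b) * 0.5      # naive linear interpolation of the stored values

dir_a = (math.cos(a), math.sin(a))        # roughly -X
dir_mid = (math.cos(mid), math.sin(mid))  # roughly +X

dot = dir_a[0] * dir_mid[0] + dir_a[1] * dir_mid[1]
print(dot)  # negative: the interpolated direction is nearly opposite
```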

##### Share on other sites
Ah, Pixelot, a very good point. I am using it specifically for a G-buffer, which is the ideal application for it.

This is a Flickr set of normal-encoding comparison shots. The thumbnails look like just black images, but if you view them at full size you will see grey/white pixels. These values represent the difference between the encoded normal read from the G-buffer and the forward-render calculated normal (i.e. sample the bump map, transform tangent->gbuffer_space).

Normal Storage Comparison

##### Share on other sites
This is indeed for a deferred renderer, therefore interpolation does not matter.

Basically, I have a G-buffer with 4 RGBA8 textures. Currently, I encode 32-bit depth in the first one, the 16-bit X and Y normal components in the second one, and the 16-bit Z component in the third one along with other miscellaneous data. What I'm trying to do is get the normal down to 2 components, so that the whole normal fits at 16 bits per component in a single RGBA8 texture [wink].
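One common way to fit two 16-bit values into the four 8-bit channels of an RGBA8 target is to split each value into a high and a low byte. A Python sketch of the packing math (my own helper names; in a shader you would express the same thing with floor/frac arithmetic on [0,1] values):

```python
def pack16(v):
    """[0,1] float -> (hi, lo) byte pair."""
    q = min(65535, max(0, round(v * 65535)))
    return q >> 8, q & 0xFF

def unpack16(hi, lo):
    """(hi, lo) byte pair -> [0,1] float."""
    return ((hi << 8) | lo) / 65535.0

print(unpack16(*pack16(0.3)))  # ~0.3, within 1/65535
```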

##### Share on other sites
Here in 9th grade math we've barely touched trig, so please bear with me: what is the range of atan2? -pi to pi, or 0 to 2pi, or...?

##### Share on other sites
Checked the documentation; it is from -pi to pi.

Anyways, I implemented this, and I'm getting very odd and inconsistent results when I retrieve the "original" normals. Here is my code, spot any errors?

decode
float2 spherical = TEX2DLOD(normalMap, IN.uv0).xy;
float2 viewNormXY = float2(0, 0);
sincos(spherical.x * 2 * 3.14159265 - 3.14159265, viewNormXY.y, viewNormXY.x);
float3 viewNorm = float3(viewNormXY, spherical.y * 2 - 1);

encode
float2 spherical = float2((atan2(normal.y, normal.x) + 3.14159265) / (2.0 * 3.14159265), normal.z * 0.5 + 0.5);
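One quick diagnostic for code like this is to check the length of the decoded vector. A Python transliteration of the pair above (my port, not the original HLSL) shows that nothing rescales X and Y, so the result has length sqrt(1 + z^2) instead of 1:

```python
import math

def encode(n):
    return ((math.atan2(n[1], n[0]) + math.pi) / (2.0 * math.pi),
            n[2] * 0.5 + 0.5)

def decode(spherical):
    t = spherical[0] * 2.0 * math.pi - math.pi
    # sincos(t) recovers a unit direction in the XY plane...
    x, y = math.cos(t), math.sin(t)
    z = spherical[1] * 2.0 - 1.0
    # ...but nothing scales x and y down by sin(phi) = sqrt(1 - z*z).
    return (x, y, z)

n = (0.0, 0.6, 0.8)
d = decode(encode(n))
print(math.sqrt(sum(c * c for c in d)))  # ~1.28, not 1.0
```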

Edit:
patw, I'd be interested to see how you've done it.

##### Share on other sites
Encode:
return (float2(atan2(nrmNorm.y, nrmNorm.x) / 3.14159265358979323846f, nrmNorm.z) + 1.0) * 0.5;

Decode:
float2 spGPUAngles = spherical.xy * 2.0 - 1.0;
float2 sincosTheta;
sincos(spGPUAngles.x * 3.14159265358979323846f, sincosTheta.x, sincosTheta.y);
float2 sincosPhi = float2(sqrt(1.0 - spGPUAngles.y * spGPUAngles.y), spGPUAngles.y);

return float3(sincosTheta.y * sincosPhi.x, sincosTheta.x * sincosPhi.x, sincosPhi.y);

Storing z instead of acos(z) just saves some ops in the decoding; you still need sin(phi) and cos(phi) to reconstruct X and Y. You just happen to already have cos(acos(z)), and the trig identity '1 - sin(x)^2 = cos(x)^2' does the rest.

Edit:
Didn't answer the question, I guess. You are seeing odd results because the conversion back from spherical is missing a step: you are computing sincos(theta) but not sincos(phi). The reason I say to store just normal.z and not acos(normal.z) is that the length of the vector is 1.0 (did I mention this method of encode/decode only works on normalized vectors?), so doing acos(normal.z/1) and then recovering cos(acos(normal.z/1)) is a very silly thing to do. Instead I store normal.z, and then compute sin(phi) using the identity above.
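A Python transliteration of that encode/decode pair (my port, not the original HLSL) round-trips unit normals exactly, up to floating-point error:

```python
import math

def encode(n):
    """Unit normal -> two [0,1] values: normalized atan2 angle and remapped z."""
    return ((math.atan2(n[1], n[0]) / math.pi + 1.0) * 0.5,
            (n[2] + 1.0) * 0.5)

def decode(enc):
    """Two [0,1] values -> unit normal."""
    ang = enc[0] * 2.0 - 1.0
    z = enc[1] * 2.0 - 1.0
    t = ang * math.pi
    sin_phi = math.sqrt(max(0.0, 1.0 - z * z))   # cos(phi) is just z
    return (math.cos(t) * sin_phi, math.sin(t) * sin_phi, z)

n = (0.48, 0.6, 0.64)  # a unit vector: 0.48^2 + 0.6^2 + 0.64^2 = 1
print(decode(encode(n)))
```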