
# Reason for Normal Calculation


10 replies to this topic

### #1 lipsryme  Members  -  Reputation: 628


Posted 15 March 2012 - 10:35 AM

I often see this calculation in deferred lighting tutorials, when decoding the normals in the light's pixel shader:

```hlsl
float3 normal = 2.0f * normalData.xyz - 1.0f;
```

And it says "Transform normal back into [-1,1] range".
I don't get why this transformation is needed when reading the normals back. Can someone tell me the reason behind it?
I've tried it in my shader, and the only visual difference I see is that the result gets brighter/darker with or without the * 2.0f - 1.0f part. So how do I know which one is "correct"?

Student at the Games-Academy Frankfurt, Germany.

### #2 turch  Members  -  Reputation: 571


Posted 15 March 2012 - 10:57 AM

Texture values are in the [0,1] range, which means they go from 0 to 1.

Each channel represents an axis of a normal, which is in [-1,1], which means they go from -1 to 1.

So when writing a normal into a texture, you need to scale and translate it from [-1,1] to [0,1], and when reading it back in you need to scale and translate it back.
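The scale-and-translate round trip described above can be sketched in plain Python (the encode/decode helper names are hypothetical, not from the posts):

```python
def encode(n):
    # [-1, 1] -> [0, 1]: scale by 0.5, then shift by 0.5
    return [0.5 * c + 0.5 for c in n]

def decode(t):
    # [0, 1] -> [-1, 1]: scale by 2, then shift by -1
    return [2.0 * c - 1.0 for c in t]

normal = [0.0, 0.0, 1.0]        # unit normal pointing along +Z
stored = encode(normal)          # becomes [0.5, 0.5, 1.0] in the texture
assert decode(stored) == normal  # reading it back recovers the original
```

Without the decode step, the shader would be lighting with vectors whose components can never be negative, which is why skipping it merely shifts the brightness rather than producing correct lighting.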

### #3 lipsryme  Members  -  Reputation: 628


Posted 15 March 2012 - 12:01 PM

I see. So when encoding, I transform my normals into the [0,1] range to store them in a 2D texture, and when decoding I transform them back into the [-1,1] range.
Thanks for the explanation.

Student at the Games-Academy Frankfurt, Germany.

### #4 MJP  Moderators  -  Reputation: 5462


Posted 15 March 2012 - 12:11 PM

> Texture values are in the [0,1] range, which means they go from 0 to 1.

This is only the case for a certain class of texture formats, namely unsigned normalized integer formats. These formats store values as integers, which the pixel shader reinterprets as [0,1] floating point, where 0 == 0.0 and 2^(numbits) - 1 == 1.0. Your typical 8-bit color texture formats fall into this category. If you use a signed normalized integer format you can store [-1,1] values directly, which means you don't need the range expansion. The same is true if you use a floating-point format that has a sign bit.
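The unorm interpretation described above can be checked numerically (plain Python sketch; the 8-bit case is just an illustrative assumption):

```python
BITS = 8
MAXVAL = 2**BITS - 1  # 255 for an 8-bit unorm channel

def unorm_to_float(i):
    # the stored integer 0..255 is reinterpreted as [0, 1]: 0 -> 0.0, 255 -> 1.0
    return i / MAXVAL

def float_to_unorm(f):
    # quantize a [0, 1] float back to the nearest representable integer
    return round(f * MAXVAL)

assert unorm_to_float(0) == 0.0
assert unorm_to_float(MAXVAL) == 1.0
assert float_to_unorm(unorm_to_float(200)) == 200  # round trip is exact
```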

### #5 bwhiting  Members  -  Reputation: 404


Posted 15 March 2012 - 05:26 PM

Coming from Flash (where optimization counts big time), I use:

```
normal = (nrm - 0.5) / 0.5;
```

to expand the normal, since one less constant needs to be uploaded (the 0.5 is uploaded only once and reused).

Am I being a melon, or will this sort of optimization add up to some sort of saving?
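For what it's worth, the two decode forms are algebraically the same, which a quick check confirms (plain Python sketch):

```python
def decode_mul(t):
    # the common form: scale by 2, bias by -1
    return 2.0 * t - 1.0

def decode_div(t):
    # the one-constant form: 0.5 reused for both the bias and the divide
    return (t - 0.5) / 0.5

# both map [0, 1] onto [-1, 1] identically for every 8-bit texture value
for i in range(256):
    t = i / 255.0
    assert decode_mul(t) == decode_div(t)
```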

### #6 PolyVox  Members  -  Reputation: 579


Posted 16 March 2012 - 03:05 AM

> am I being a melon or will this sort of optimization add up to some sort of saving?

Multiplication is supposed to be faster than division (though I've never checked this), so the multiplication by 2.0 is probably better than the division by 0.5.

### #7 bwhiting  Members  -  Reputation: 404


Posted 16 March 2012 - 03:26 AM

Multiplication is usually faster, but I think the saving would come from sending only one constant to the GPU (the 0.5) as opposed to two (1.0 and 2.0). As far as I'm aware, any literal numbers used in a shader have to be uploaded to the GPU, and CPU -> GPU communication is a large bottleneck.

In tests I have done, uploading constants is definitely a slow process! It might only be a problem on the Flash platform, or maybe I'm doing something wrong.

### #8 mhagain  Members  -  Reputation: 3828


Posted 16 March 2012 - 04:01 AM

A decent compiler should be able to see that you're dividing by the constant 0.5 and convert it to a multiplication by 2.0; I've confirmed that the HLSL compiler produces the same assembly for both. I don't know the Flash platform so I can't comment there (though I am puzzled about why it needs to upload constants for this kind of thing), but in the general case either (a) it shouldn't matter at all, or (b) just do the multiplication in your code. I favour (b) because it makes what you're doing more explicit to anyone reading the code.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.

### #9 bwhiting  Members  -  Reputation: 404


Posted 16 March 2012 - 04:23 AM

At the moment we can only write shaders in AGAL, which is fairly low level (i.e. mov, div, m44, sin, slt, etc.), so no higher-level language is available unless you write a converter.

So you can't simply write:

```
normal = (nrm - 0.5) / 0.5;
```

It would be something more like:

```
tex ft0 v0 fs0 <2d, clamp, linear>
sub ft0 ft0 fc0
div ft0 ft0 fc0
```

where:
ft0 = a temp register to store the normal in
v0 = the UV coordinates
fs0 = the normal map sampler
fc0 = the 0.5 constant uploaded to the GPU

So as you can see, every arbitrary number used in a shader must be uploaded as a constant, which isn't the fastest process in the world (even for such a small amount of information). We are also limited to only 28 constants in the fragment shader, so every saving helps there too.

### #10 mhagain  Members  -  Reputation: 3828


Posted 16 March 2012 - 09:11 AM

> At the moment we can only write shaders in AGAL ....

Sounds painful.

I'm assuming that the OP is using either HLSL or Cg (from the use of float3; it would be vec3 in GLSL), where this overhead doesn't exist.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.

### #11 osmanb  Members  -  Reputation: 899


Posted 16 March 2012 - 12:54 PM

Actual numeric constants are baked into the shader microcode, so that part is irrelevant. The more important thing is that when doing a scale-and-bias operation, you should always try to do the scale first, e.g. (nrm * 2) - 1. The reason is that GPU instruction sets include a 'mad' (multiply-add) instruction that performs a*b + c in a single operation; there is no instruction for add-then-multiply, (a + b)*c. If you're using all constants the compiler might be able to rewrite your expression to produce a mad operation, but it's better to be safe...
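The point above can be illustrated with a hypothetical mad helper modelling the GPU instruction (plain Python sketch):

```python
def mad(a, b, c):
    # models the GPU multiply-add instruction: a*b + c in one operation
    return a * b + c

def decode(t):
    # scale first, then bias: maps directly onto a single mad
    return mad(t, 2.0, -1.0)

assert decode(0.0) == -1.0  # texture 0.0 -> normal component -1
assert decode(0.5) == 0.0   # texture 0.5 -> normal component  0
assert decode(1.0) == 1.0   # texture 1.0 -> normal component +1
```

Written as (t - 0.5) / 0.5 instead, the bias happens before the scale, so there is no single instruction the expression maps onto unless the compiler rewrites it.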
