# [DirectX9]Normals not calculated properly?


## Recommended Posts

Hi,

When trying to render a model from the DirectX SDK with my lighting system, I noticed that there is something wrong with the lighting:

It seems that the lighting is neither applied per pixel (as I intended) nor even blended between the vertices. The model, as well as the environment, is lit per triangle. I tried loading many different models, and all of them show this (ugly) effect. So I searched my shader code for the bug, and the first thing I noticed is that specular lighting works properly. So I started checking only the values needed for the diffuse lighting calculation. The direction to the light was correct (it was smoothly interpolated across the models), so all that was left were the normals. This is what the normals stored after the deferred geometry pass look like:

It is clear to see that the normals are all the same for a single triangle; there is no variation. So obviously the lighting has to look as above. Is there anything wrong with my normal calculations? Looking at the SDK example, the normals are supposed to be smoothed across the triangles of the model:

Well, looking at the lighting code in the SDK, as well as at many other sources such as two of my books about DirectX, I can't see much difference in how I'm doing it:

```hlsl
VS_OUTPUT_BUILD vsBuild(VS_INPUT_BUILD i)
{
    VS_OUTPUT_BUILD o;
    o.vNrm = normalize(mul(i.vNrm, (float3x3)World));
    float3 N = o.vNrm;
    float3 T = normalize(mul(i.vTan, (float3x3)World));
    float3 B = normalize(mul(i.vBin, (float3x3)World));
    o.mTBN = float3x3(T, B, N);
    return o;
}

// psBuild()
// put geometry data into render targets
PS_OUTPUT_BUILD psBuild(VS_OUTPUT_BUILD i)
{
    PS_OUTPUT_BUILD o;
    float3 bumpNormal = tex2D(BumpSampler, i.vTex0).rgb;
    bumpNormal = 2.0 * bumpNormal - 1.0;   // unpack from [0,1] to [-1,1]
    bumpNormal = mul(bumpNormal, i.mTBN);  // tangent space -> world space
    bumpNormal = normalize(bumpNormal);
    bumpNormal = 0.5 * bumpNormal + 0.5;   // pack back into [0,1]
    o.vWorldNrm.rgb = bumpNormal;
    return o;
}
```

So basically I'm calculating the normal in world space based on a bump map. I already tried moving the construction of the mTBN matrix from the vertex shader to the pixel shader, but it made no difference. I also tried outputting the normals of the model itself; still, all the triangles have the same normals in the lighting pass. This is a strongly simplified version of my lighting shader:

```hlsl
float4 psSpot(PS_INPUT_LIGHT i) : COLOR0
{
    float3 vDiffuseMaterial = tex2D(SceneMaterialSampler, i.vTex0).rgb;
    float4 vWorldNrm = tex2D(SceneNormalSampler, i.vTex0);
    vWorldNrm.rgb = 2.0 * vWorldNrm.rgb - 1.0;  // unpack normal

    // vWorldPos is reconstructed from the G-buffer (omitted here)
    float3 vLightDir = (LightPos - vWorldPos).xyz;
    vLightDir = normalize(vLightDir);

    float fDiffuseIntensity = max(0, dot(vLightDir, vWorldNrm.rgb));

    float4 color;
    color.rgb = fDiffuseIntensity * LightDiffuse.xyz * vDiffuseMaterial;
    color.a = 1.0;
    return color;
}
```

So, given all that information: does anyone have an idea what's going wrong? I really can't tell what's happening; I compared my calculations to others and they were practically identical, and I tried switching between every combination of normals I could think of. Any help would be highly appreciated! Thanks in advance!

##### Share on other sites
Yes, it seems you're not calculating the normals properly.

I don't use the binormal in my shaders and it works, so try this:
First, in the vertex shader, simply transform the normal and tangent to world space; the bitangent is reconstructed in the pixel shader.

```hlsl
VS_OUTPUT_BUILD vsBuild(VS_INPUT_BUILD i)
{
    VS_OUTPUT_BUILD o;
    o.vNrm = normalize(mul(i.vNrm, (float3x3)World));
    o.vTan = normalize(mul(i.vTan, (float3x3)World));
    return o;
}

// psBuild()
// put geometry data into render targets
PS_OUTPUT_BUILD psBuild(VS_OUTPUT_BUILD i)
{
    PS_OUTPUT_BUILD o;
    float3 N = normalize(i.vNrm);
    // Gram-Schmidt: re-orthogonalize the tangent against the normal
    float3 T = normalize(i.vTan - dot(i.vTan, N) * N);
    float3 B = cross(N, T);
    float3x3 TBN = float3x3(T, B, N);

    float3 bumpNormal = tex2D(BumpSampler, i.vTex0).rgb;
    bumpNormal = 2.0f * bumpNormal - 1.0f;
    bumpNormal = mul(bumpNormal, TBN);
    bumpNormal = normalize(bumpNormal);
    bumpNormal = 0.5f * bumpNormal + 0.5f;
    o.vWorldNrm.rgb = bumpNormal;
    return o;
}
```

##### Share on other sites
Thanks for the reply. I tried your solution, but it didn't work either. I didn't use binormals at the very beginning either, but figured it would be faster to store them, since that spares me a lot of per-pixel calculations (especially the cross product, which isn't free). Anyway, your solution gave me the same results, BUT..

Inspired by you saying I didn't calculate my normals correctly, I remembered that I recalculate the normals, binormals and tangents in my code after loading a mesh. So first of all I commented out the calls to D3DXComputeNormals() and D3DXComputeTangents(), and suddenly it was working, at least partially (bump maps, parallax mapping etc. wouldn't work anymore). It turned out I have to pass the adjacency information if I want the normals and tangents to be computed properly. I hadn't done so because adjacency confused me in the first place. Thanks for the help!
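For anyone finding this later, the fix boils down to generating the adjacency buffer first and handing it to the D3DX helper. A minimal C++ sketch (error handling omitted; `pMesh` is assumed to be an already-loaded `ID3DXMesh`):

```cpp
// Each face has up to 3 neighboring faces, so the adjacency
// buffer needs 3 DWORDs per face.
std::vector<DWORD> adjacency(pMesh->GetNumFaces() * 3);

// Build the adjacency information; the epsilon tells D3DX how
// close two vertices must be to count as the same point.
pMesh->GenerateAdjacency(1e-6f, adjacency.data());

// With adjacency, D3DX averages normals across neighboring
// triangles instead of producing one flat normal per face.
D3DXComputeNormals(pMesh, adjacency.data());
```

The tangent-computation helper accepts the same adjacency pointer as its last parameter, so the same buffer can be reused for it.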

##### Share on other sites
The problem is that you are building your TBN basis in the vertex shader, when you need to move that calculation to the pixel shader. Your picture shows that your normals are being generated properly; you just need them to be interpolated better.

So, for now, in the vertex shader just transform the normal, tangent and bitangent by the world matrix (this can be incorrect if your world matrix uses non-uniform scaling, but for now let's assume it does not), and make sure to normalize each one afterwards.
Then pass those along to the pixel shader as input. In the pixel shader, assemble your TBN matrix. This is the key: build the matrix in the pixel shader from the tangent, bitangent and normal passed in from the vertex shader.

##### Share on other sites
@smaherprog: Thanks for your reply, but I tested it, and there is no difference whether I do the matrix calculation in the vertex shader or in the pixel shader. How could there be? I guess the rasterizer interpolates the matrix just as it would interpolate the three vectors on their own, since passing a matrix takes up exactly the same number of texture coordinates (3). The actual problem was with my normals, binormals and tangents themselves anyway: they weren't computed properly, because I forgot to pass adjacency to the D3DXComputeNormals/Tangents functions, so they weren't smoothed. For now it seems to be working, so I don't see a point in moving the calculations to the pixel shader unless there is a huge visual difference. Thanks for your suggestion, though!

##### Share on other sites
I didn't read all the replies, but just wanted to write some suggestions:

In the vertex shader, just multiply the corresponding vectors by your world matrix (or the inverse transpose of the world matrix). DO NOT NORMALIZE THEM IN YOUR VERTEX SHADER.

In the pixel shader, normalize the corresponding vectors and build your TBN basis matrix. Then use it in your lighting equations if you want to.

Some operations give different results depending on where they are done (i.e. vertex or pixel shader). For example, if you want to output clip-space depth, you have to divide .z by .w in the pixel shader; if you do it in the vertex shader it will still run, but you'll get different results, because the rasterizer interpolates the already-divided quotient instead of the two operands. Normalizing the TBN basis vectors is a similar case.
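The depth example sketched in HLSL (`PS_INPUT` and its clip-space `vPos` member are placeholder names; the pattern, not the signatures, is the point):

```hlsl
// Wrong: dividing in the vertex shader means the rasterizer
// interpolates the finished quotient z/w across the triangle.
// o.fDepth = o.vPos.z / o.vPos.w;

// Right: pass z and w down separately and divide per pixel,
// so z and w are each interpolated first and divided at every
// fragment.
float psDepth(PS_INPUT i) : COLOR0
{
    return i.vPos.z / i.vPos.w;
}
```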

hth.
-R
