Weird lighting when translating object

Started by
13 comments, last by kalle_h 4 years, 11 months ago

Hello everyone,

About a month ago I finished programming the lighting (or at least that is what I thought), because a few hours ago I realized that when I translate the object, the lighting breaks. It does not break when rotating, because I multiply the normals by the transpose of the inverse of the modelMatrix; it only breaks when I move the object away from (0,0,0).

I only have one light, a directional one. 

Here the rock is at 0,0,0.

[Screenshot: rock at (0,0,0)]

 

Here the rock is at -10,0,0.

[Screenshot: rock at (-10,0,0)]

I only moved it to the left, and as you can see it looks like the normals are wrong, but I do not touch them, except to multiply them by the transpose of the inverse of the modelMatrix (if I do not multiply the normals by that matrix the lighting doesn't break, but when the object rotates the light rotates with it). So, is there a way to keep the normals correct for both translating and rotating?

[Screenshot]

Now I can translate the object and the lighting doesn't break, but if I rotate it the light rotates with it.

 

[Screenshot]

Now I can rotate it but if I move it the lighting breaks.

 

Thanks a lot!!

I believe for translation you don't have to touch the normals. Are you modifying normals while translating? 

22 minutes ago, ritzmax72 said:

I believe for translation you don't have to touch the normals. Are you modifying normals while translating? 

Hello and thanks for your reply!

I transform the normals every frame because the multiplication is done in the vertex shader.

I understand what you said, but if I don't multiply the normals then I cannot rotate the object. Isn't there a way to set up the normals so the object can be rotated and translated without any problem?

The normal should be a vec3, not a vec4. You probably have 1.0 as the implicit value in the w channel of the input normal. You are then also normalizing the resulting vec4, which is incorrect. So change v_normal to vec3.

Try something like this: v_normal = (normal_Matrix * vec4(n_normal.xyz, 0.0)).xyz;

And do not normalize that before pixel shader stage.
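kalle_h's trick hinges on how the w component interacts with the translation column of a 4x4 matrix. A minimal sketch in plain Python (standing in for the shader math; the helper names are illustrative): a translation moves a point (w = 1) but leaves a direction (w = 0) untouched.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (stored as a list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    """4x4 translation matrix with the translation in the fourth column."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

T = translation(-10, 0, 0)      # same move as the rock in the screenshots

position = [1, 2, 3, 1]         # w = 1: a point, so the translation applies
direction = [0, 1, 0, 0]        # w = 0: a normal, so the translation is ignored

print(mat_vec(T, position))     # [-9, 2, 3, 1] -> shifted
print(mat_vec(T, direction))    # [0, 1, 0, 0]  -> untouched
```

With an implicit w of 1.0 on the normal, the second multiply would pick up the -10 displacement, which is exactly the breakage in the screenshots.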

Hi,

I think your problem is in the normal matrix. Maybe I am wrong because it is early in the morning and I haven't had enough coffee, but you are using a vec4 normal and a mat4 normal matrix, which might be the root of all evil. ;)

 

Detailed answer:

Thing is, your model matrix creates a position vector when multiplied with a position vector from your model. This includes displacements (translation). A normal is not a position vector; it is a direction vector. Directions don't have displacements, and displacements should never be applied to direction vectors. The simple trick to disable the displacement is to set the vector's fourth component to zero, while you need a one as the fourth component if you want the displacement to contribute. This works because the displacement part of a transformation matrix is stored entirely in the fourth column. Just have a look at the individual components (scaling, rotation, displacement) of a transformation matrix (see the picture in section 10: http://www.c-jump.com/bcc/common/Talk3/Math/Matrices/Matrices.html).

The rules of matrix-vector multiplication will do the rest for you.

In your case, by using the transposed inverse of your full model matrix, you have some displacement part inside your normal matrix. This displacement part distorts your normals whenever your displacement is anything other than (0,0,0). However, I am not sure the "fourth-component-is-zero" trick will work here, since you calculated the inverse of your model matrix and transposed it, so displacement components may have left the fourth column. That's why you usually determine your normal matrix from the transposed inverse of the upper-left 3x3 submatrix instead of the full 4x4 model matrix. This part contains the scaling and rotation of the transformation.

 

Short answer:

To solve your problem do the following (might contain coding errors):


mat3 normal_matrix = transpose(inverse(mat3(model_matrix)));
vec3 v_normal_tmp = normalize(normal_matrix * vec3(v_normal));
v_normal = vec4(v_normal_tmp, 0.0);

You should think about using vec3 and mat3 for normals. Otherwise, do it as above.
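The effect of the normal matrix can be checked numerically. The following is a plain-Python sketch of the linear algebra behind the snippet above (the helper functions are illustrative, not part of any shader): under a non-uniform scale, the plain model matrix bends the normal away from the surface, while transpose(inverse(mat3)) keeps it perpendicular.

```python
def mat3_mul_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-component vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def mat3_transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

def mat3_inverse(m):
    """Cofactor-based inverse of a 3x3 matrix."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    cof = [[e * i - f * h, -(d * i - f * g), d * h - e * g],
           [-(b * i - c * h), a * i - c * g, -(a * h - b * g)],
           [b * f - c * e, -(a * f - c * d), a * e - b * d]]
    adj = mat3_transpose(cof)
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

def dot3(u, v):
    return sum(a * b for a, b in zip(u, v))

model3 = [[2, 0, 0],            # non-uniform scale: x stretched by 2
          [0, 1, 0],
          [0, 0, 1]]

tangent = [1, 1, 0]             # lies in the surface
normal = [1, -1, 0]             # perpendicular to the tangent

normal_matrix = mat3_transpose(mat3_inverse(model3))

t_world = mat3_mul_vec(model3, tangent)
n_wrong = mat3_mul_vec(model3, normal)          # model matrix: wrong
n_right = mat3_mul_vec(normal_matrix, normal)   # normal matrix: right

print(dot3(t_world, n_wrong))   # 3 -> no longer perpendicular to the surface
print(dot3(t_world, n_right))   # 0.0 -> still perpendicular
```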

 

Think about that too:

1 hour ago, kalle_h said:

And do not normalize that before pixel shader stage.

 

Addition:

You should also think about why the normal matrix is the transposed inverse of the model matrix: normals are usually rotated the same way as the model itself. But if you have any scaling that is not equal in all directions, the scaling of the normals needs to be inverted, otherwise you get wrong normals. So you invert your matrix to get the inverse scaling. But this also inverts your rotations. The nice thing about rotations is that their inverse is equal to their transpose. Since scaling lives only on the main diagonal, which is not affected by transposing the matrix, we transpose to undo the inversion of the rotation. If you think about it, it seems a little bit wasteful. That's why I usually calculate the normal matrix together with the model matrix in one function. While the model matrix is Translation * Rotation * Scaling, the normal matrix is Rotation * inverse(Scaling). The inverse scaling matrix is just 1/scaling_factor for each component.
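This shortcut can be verified numerically. A plain-Python sketch (helper names are illustrative) checking that Rotation * inverse(Scaling) matches transpose(inverse(Rotation * Scaling)) for a sample rotation and non-uniform scale:

```python
import math

def mat3_mul(a, b):
    """Multiply two 3x3 matrices stored as lists of rows."""
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def mat3_transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

def mat3_inverse(m):
    """Cofactor-based inverse of a 3x3 matrix."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    cof = [[e * i - f * h, -(d * i - f * g), d * h - e * g],
           [-(b * i - c * h), a * i - c * g, -(a * h - b * g)],
           [b * f - c * e, -(a * f - c * d), a * e - b * d]]
    adj = mat3_transpose(cof)
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

t = math.radians(40)
R = [[math.cos(t), -math.sin(t), 0],          # rotation about z
     [math.sin(t),  math.cos(t), 0],
     [0, 0, 1]]
S = [[2, 0, 0], [0, 3, 0], [0, 0, 4]]         # non-uniform scale
S_inv = [[0.5, 0, 0], [0, 1 / 3, 0], [0, 0, 0.25]]

standard = mat3_transpose(mat3_inverse(mat3_mul(R, S)))
cheap = mat3_mul(R, S_inv)                    # Rotation * inverse(Scaling)

max_diff = max(abs(standard[r][c] - cheap[r][c])
               for r in range(3) for c in range(3))
print(max_diff < 1e-12)  # True: both routes give the same normal matrix
```

The cheap route avoids a general matrix inversion entirely, which is the wastefulness being pointed out.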

 

Greetings

Translation of object changes the input light vector direction.

You should instead not touch the normals, but transform the light direction vector with the inverse world matrix into the object space of the model, and take the dot product of the untouched object-space normal with this transformed light vector.
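For a rotation-only model matrix, this object-space route and the normal-matrix route give the same lighting term. A plain-Python sketch under that assumption (illustrative names; no scaling involved, so the inverse is just the transpose):

```python
import math

def mat3_mul_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-component vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def mat3_transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

def dot3(u, v):
    return sum(a * b for a, b in zip(u, v))

t = math.radians(30)
R = [[math.cos(t), -math.sin(t), 0],   # rotation-only model matrix
     [math.sin(t),  math.cos(t), 0],
     [0, 0, 1]]
R_inv = mat3_transpose(R)              # for a pure rotation, inverse == transpose

normal_obj = [0, 1, 0]
light_world = [0.6, 0.8, 0]

# Route 1: rotate the normal into world space, dot with the world-space light.
a = dot3(mat3_mul_vec(R, normal_obj), light_world)
# Route 2: bring the light into object space, dot with the untouched normal.
b = dot3(normal_obj, mat3_mul_vec(R_inv, light_world))

print(abs(a - b) < 1e-12)  # True: both routes agree
```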

2 hours ago, JohnnyCode said:

Translation of object changes the input light vector direction.

You should instead not touch the normals, but transform the light direction vector with the inverse world matrix into the object space of the model, and take the dot product of the untouched object-space normal with this transformed light vector.

You cannot always do that. For example, with an environment cubemap you need a world-space normal/reflection vector to sample it. For the general case you usually need the transformed world-space normal. Also, it's a lot cheaper to transform a single normal than multiple lights.

5 hours ago, kalle_h said:

The normal should be a vec3, not a vec4. You probably have 1.0 as the implicit value in the w channel of the input normal. You are then also normalizing the resulting vec4, which is incorrect. So change v_normal to vec3.

Try something like that: v_normal = (normal_Matrix * vec4(n_normal.xyz, 0.0)).xyz;

And do not normalize that before pixel shader stage.

Thanks a lot, it was that.

4 hours ago, DerTroll said:

I think your problem is in the Normal matrix. [...] That's why you usually determine your normal matrix by using the transposed inverse of the upper left 3x3 submatrix instead of the full 4x4 model matrix.

Hello and thank you very much for your reply! 

You have explained it perfectly, I understood everything.

But now I have a problem. Some white spots have appeared that I didn't have earlier. Maybe some normals are wrong and this formula: diff = max(dot(v_Normal, lightDir), 0.2f); returns a number that is too high.

Screenshot:

[Screenshot]

 

Forget about that lake; I am trying to figure out the UVs of the plane so I can apply reflection and refraction textures there.

Greetings.

 

Edit: Nevermind, it was the specular. Thank you!

1 hour ago, Alex Gomez Cortes said:

But now I have a problem. Some white spots have appeared that I didn't have earlier. Maybe some normals are wrong and this formula: diff = max(dot(v_Normal, lightDir), 0.2f); returns a number that is too high.

You need to re-normalise the normal vector in the fragment shader.

The interpolation between adjacent normal vectors that occurs between the vertex and fragment stages changes the length of the normal vector.
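A plain-Python sketch of why the re-normalization is needed (the two vertex normals are illustrative): halfway between two unit normals, the linearly interpolated vector is noticeably shorter than unit length, so any dot product computed with it is scaled down.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

n0 = normalize([1, 0, 0])   # normal at vertex A
n1 = normalize([0, 1, 0])   # normal at vertex B

# Halfway between the vertices, the interpolated normal is the average:
mid = [(a + b) / 2 for a, b in zip(n0, n1)]
length = math.sqrt(sum(c * c for c in mid))

print(length)                                         # ~0.707, not 1.0
print(math.sqrt(sum(c * c for c in normalize(mid))))  # 1.0 after re-normalizing
```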

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

4 hours ago, swiftcoder said:

You need to re-normalise the normal vector in the fragment shader.

The interpolation between adjacent normal vectors that occurs between the vertex and fragment stages changes the length of the normal vector.

Thank you for your reply!

That is what I did, but it didn't fix the problem. It turned out I was using the same specular settings for every object, and they were the specular settings of a plastic.

 

Greetings.

This topic is closed to new replies.
