Normals in graphical effects

BlackWind


Recommended Posts

Hi, I've seen that many effects can be achieved by doing operations with the normals. My question is: why? How does this work? What's the process 'behind the scenes'?

Greetings,

The primary purpose of the normals is to guide the lighting process; as such, messing with the normals is usually done to change the amount of diffuse and specular light each point on an object reflects. For more detailed info, you'll have to specify which effects you're talking about. In the meantime, are you familiar with the Blinn-Phong shading model? That's the core of how most GPU lighting happens, so it's worth a read if the math is new to you.
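To make that concrete, here is a minimal Blinn-Phong sketch in HLSL-style shader code. The names (BlinnPhong, lightDir, viewDir, shininess) are just illustrative, and all vectors are assumed to be normalized and expressed in the same space:

// Minimal Blinn-Phong sketch. All vectors are assumed normalized and in the
// same space (e.g. world space). Names and parameters are illustrative only.
float3 BlinnPhong(float3 normal,        // surface normal N
                  float3 lightDir,      // direction from surface toward the light
                  float3 viewDir,       // direction from surface toward the camera
                  float3 lightColor,
                  float3 diffuseColor,
                  float3 specularColor,
                  float  shininess)
{
    // Diffuse: how directly the surface faces the light.
    float NdotL = saturate(dot(normal, lightDir));
    float3 diffuse = NdotL * lightColor * diffuseColor;

    // Specular: Blinn's half-vector between the light and view directions.
    float3 halfVec = normalize(lightDir + viewDir);
    float NdotH = saturate(dot(normal, halfVec));
    float3 specular = pow(NdotH, shininess) * lightColor * specularColor;

    return diffuse + specular;
}

The key point is that both the diffuse and the specular term start from a dot product with the normal, which is why changing the normal changes the lighting.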

Just like in the real world, every surface has a direction. In 3D it's represented by a 3D vector {x,y,z}. If Y is the up axis, your floor would get normal {0,1,0}, the ceiling {0,-1,0}, the wall on the left {1,0,0}, and so on.

One of the lighting basics is the dot(L,N) function. If you shine a light on a ball, the side facing away from the light won't be affected, while the front side is fully lit. The fixed pipeline (OpenGL/DirectX) and shaders often use something like

diffuseColor = saturate(dot( lightVector, surfaceNormal )) * lightColor * surfaceColor;

lightVector = normalized 3D direction vector from the surface (vertex/pixel) position toward the light

surfaceNormal = the normal you're asking about: the direction this polygon is facing. Nowadays we often add detail with normal/bump maps, so each pixel on a surface can get its own normal to simulate a varying or rough structure.

lightColor = the color of the light (blue, green, white, whatever)

surfaceColor = the polygon material. Usually the diffuse/albedo texture applied to that polygon.

The dot(L,N) part measures how well the lightVector and the surfaceNormal line up. Since the lightVector points from the surface toward the light, we get maximum lighting when the surface faces the light head-on:

A. light  <---- lightVector {-1,0,0}    <---- pixel/surfaceNormal {-1,0,0}   (surface faces the light)
B. light  <---- lightVector {-1,0,0}    pixel/surfaceNormal {+1,0,0} ---->   (surface faces away)

In A the dot product is 1.0; in B it is -1. If the angle between the two vectors were 90 degrees, the result would be 0.0. The saturate function clamps the result between 0 and 1: 0 = fully shaded/unlit, 1 = fully lit, and everything in between gets a value somewhere in that range. That is the darkened/shaded side we would see on that ball when shining a light on it.
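Putting the terms above together, a diffuse-only pixel shader could look roughly like this. It's a sketch in HLSL-style code: the texture and variable names (gAlbedoMap, gNormalMap, gLightPos, ...) are made up for the example, and the normal-map handling is deliberately simplified (a real implementation would decode the map into tangent space):

// Diffuse-only pixel shader sketch tying the terms above together.
// Texture/variable names are illustrative, not any particular engine's API.
Texture2D    gAlbedoMap;
Texture2D    gNormalMap;
SamplerState gSampler;

float3 gLightPos;     // light position in world space
float3 gLightColor;   // color/intensity of the light

float4 DiffusePS(float3 worldPos    : TEXCOORD0,
                 float3 worldNormal : TEXCOORD1,
                 float2 uv          : TEXCOORD2) : SV_Target
{
    // Per-pixel normal: a real normal map would be decoded into tangent space;
    // here we just perturb the interpolated normal as a simplification.
    float3 mapNormal = gNormalMap.Sample(gSampler, uv).xyz * 2.0 - 1.0;
    float3 N = normalize(worldNormal + mapNormal * 0.5);

    // Direction from the surface toward the light (the 'lightVector' above).
    float3 L = normalize(gLightPos - worldPos);

    // saturate(dot(L, N)) clamps the back-facing (negative) case to 0.
    float3 surfaceColor = gAlbedoMap.Sample(gSampler, uv).rgb;
    float3 diffuse = saturate(dot(L, N)) * gLightColor * surfaceColor;

    return float4(diffuse, 1.0);
}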



Besides diffuse lighting, normals can be used for many, many more things, such as:
- Specular lighting (the highlight on a billiard ball, which depends on the eye position)
- Rim lighting / cel shading (detecting object edges, cartoony shading)
- Reflections (glass, metal, cube maps, and so on)
- Reflecting rays or objects (physics, ray tracing, bouncing objects)
- Diagnostics (rendering the normal as a color to analyse objects/scenes)
- ...

Almost every shader that does something with lighting or reflections uses normals, simply because we need to know how light rays 'fall' on and reflect off a surface (two of the uses above are sketched below).
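For example, reflections and rim lighting boil down to a couple of dot products and the reflect() intrinsic. A rough HLSL-style sketch, with illustrative names:

// Two more uses of the normal, as a sketch (names are illustrative):
// - a mirror-like reflection lookup into a cube map
// - a simple rim term that brightens silhouette edges
TextureCube  gEnvMap;
SamplerState gEnvSampler;

float3 ReflectionAndRim(float3 normal,   // normalized surface normal
                        float3 viewDir,  // normalized direction from surface toward the camera
                        float3 baseColor)
{
    // Reflections: mirror the view direction around the normal and
    // look up the environment cube map in that direction.
    float3 r = reflect(-viewDir, normal);            // reflect() expects the incoming direction
    float3 reflection = gEnvMap.Sample(gEnvSampler, r).rgb;

    // Rim lighting: strongest where the normal is nearly perpendicular
    // to the view direction, i.e. along the silhouette of the object.
    float rim = pow(1.0 - saturate(dot(normal, viewDir)), 3.0);

    return baseColor + 0.25 * reflection + rim * float3(1.0, 1.0, 1.0);
}

The same reflect()-around-the-normal idea is what the physics/ray tracing bullet refers to: a bouncing ray or object is mirrored about the surface normal at the point of contact.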

Greetings,
Rick
