Hi community! I have finished a new demo on normal mapping with DirectX 11.
When we apply a brick texture to a cone-shaped column, the specular
highlights look unnaturally smooth compared with the bumpiness suggested
by the brick texture. This is because the underlying mesh geometry is
smooth; we have merely pasted the image of bumpy bricks onto the smooth
conical surface. The lighting calculations, however, are based on the
mesh geometry (in particular, the interpolated vertex normals), not on
the texture image, so the lighting is not fully consistent with the
texture.
The strategy of normal mapping is to texture our polygons with normal
maps. These give us per-pixel normals that capture fine surface details
such as bumps, scratches, and crevices. We then use these per-pixel
normals from the normal map in our lighting calculations instead of
the interpolated vertex normal.
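The lighting formula itself does not change; only the normal fed into it does. Here is a minimal C++ sketch (names are my own, not from the demo) of a Lambert diffuse factor, which a pixel shader would evaluate with the sampled per-pixel normal rather than the interpolated vertex normal:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Lambert diffuse factor: max(N . L, 0). The formula is the same
// whether 'normal' is the interpolated vertex normal or the
// per-pixel normal sampled from the normal map; normal mapping
// simply supplies the latter.
float diffuseFactor(const Vec3& normal, const Vec3& toLight)
{
    return std::max(dot(normal, toLight), 0.0f);
}
```

With a perturbed per-pixel normal such as (0.6, 0, 0.8), the diffuse factor dips below 1 even when the light shines straight down the smooth vertex normal, which is exactly what makes the bumps visible.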
A normal map is a texture, but instead of storing color data at each
texel, it stores a compressed x-coordinate, y-coordinate, and
z-coordinate in the red, green, and blue components, respectively
(each coordinate in [-1, 1] is typically mapped to an 8-bit value in
[0, 255]). These coordinates define a normal vector, so a normal map
stores a unit normal vector at each texel.
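Because an 8-bit color channel holds values in [0, 255] while a normal coordinate lies in [-1, 1], each coordinate has to be compressed when the map is authored and decompressed when it is sampled. A minimal C++ sketch of that mapping (function names are illustrative):

```cpp
#include <cmath>
#include <cstdint>

// Decompress an 8-bit color channel c in [0, 255] back to a
// normal coordinate in [-1, 1].
float decompressChannel(std::uint8_t c)
{
    return 2.0f * (static_cast<float>(c) / 255.0f) - 1.0f;
}

// Compress a normal coordinate x in [-1, 1] into an 8-bit
// color channel in [0, 255].
std::uint8_t compressCoordinate(float x)
{
    return static_cast<std::uint8_t>(std::lround(0.5f * (x + 1.0f) * 255.0f));
}
```

Note that the round trip is lossy: with only 256 steps per channel, a decompressed coordinate is accurate to roughly 1/255, which is usually good enough for lighting.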
To generate normal maps we can use the NVIDIA Texture Tools plug-in for Photoshop or the CrazyBump software.
The coordinates of the normals in a normal map are relative to the
texture space coordinate system. Consequently, to do lighting
calculations, we need to transform the normals from texture space to
world space so that the lights and normals are in the same
coordinate system. The TBN basis (Tangent, Bitangent, Normal) built at
each vertex facilitates the transformation from texture space to world
space.
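That change of basis can be sketched in C++ as follows (names are my own; this assumes T, B, and N are already expressed in world coordinates and the sampled normal has already been decompressed to [-1, 1]):

```cpp
struct Vec3 { float x, y, z; };

Vec3 scale(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Transform a tangent-space normal n into world space using the
// TBN basis, where T (tangent), B (bitangent), and N (vertex normal)
// are given in world coordinates:
//   worldNormal = n.x * T + n.y * B + n.z * N
// This is the same as multiplying the row vector n by the 3x3 matrix
// whose rows are T, B, and N.
Vec3 tangentToWorld(const Vec3& n, const Vec3& T, const Vec3& B, const Vec3& N)
{
    return add(add(scale(T, n.x), scale(B, n.y)), scale(N, n.z));
}
```

In a shader this is typically done per pixel after re-orthonormalizing the interpolated T, B, and N; the sketch above only shows the arithmetic of the basis change.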