# bump mapping in deferred renderer


## Recommended Posts

Hi,

I'm trying to implement bump mapping in my deferred renderer.
I'm following this tutorial: http://fabiensanglard.net/bumpMapping/index.php

I already have the tangent and bitangent attributes calculated.

Currently I'm storing albedo, per-face normals, and depth in my G-buffer. However I don't know how should I implement bump mapping, since in the tutorial he transforms the light and eye vectors to tangent space, but leaves the normals alone.

So how is it usually done?

Best regards,
Yours3!f

##### Share on other sites
Instead of saving the per-face normal to the g-buffer, save the processed normal from the normal map to the buffer, that's all.

##### Share on other sites
What coordinates do you do your lighting in? To illustrate, I'm going to assume world-space below.

Normal maps are usually stored in tangent-space. Your vertex's `T`, `B`, and `N` values form a 3x3 rotation matrix, which rotates from model-space into tangent-space.
The transpose of a rotation matrix is the same as its inverse, so `transpose(TBN)` rotates tangent-space values back into model-space.

By using your world matrix (model-space to world-space transform) and your TBN, you can rotate your normal-map values into world-space, and then write them into your g-buffer as usual.
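To make the chain of rotations concrete, here's a small numeric sketch (plain Python with row-major lists; the helper names are mine, not from the thread). It checks that for an orthonormal T/B/N basis, `transpose(TBN)` really does carry a tangent-space normal back out of the surface:

```python
def mat_vec(m, v):
    # multiply a 3x3 matrix (list of rows) by a vec3
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

# An orthonormal basis for a surface whose normal N points along +x;
# T and B span the surface itself.
T = [0.0, 1.0, 0.0]
B = [0.0, 0.0, 1.0]
N = [1.0, 0.0, 0.0]

# With T, B, N as rows, TBN rotates model-space into tangent-space...
TBN = [T, B, N]
# ...so its transpose (== inverse, since it's a rotation) goes back.
inv_TBN = transpose(TBN)

# A "flat" normal-map texel: straight out of the surface in tangent-space.
n_tangent = [0.0, 0.0, 1.0]
n_model = mat_vec(inv_TBN, n_tangent)
print(n_model)  # [1.0, 0.0, 0.0] -- exactly the surface normal N
```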

To contrast, the method you're reading about performs its lighting calculations in tangent-space. This means they don't have to perform per-pixel rotations of their normal maps, but they do have to perform per-pixel rotations of the light and camera vectors instead. There's no "right space" to do your calculations in -- each choice just requires different parameters to be transformed in different ways.

For certain kinds of forward-renderers, it turns out that tangent-space lighting can be the optimal choice (least transforms), but deferred renderers usually use world or view-space (tangent space would require writing the TBN into your G-Buffer!).

##### Share on other sites
> Instead of saving the per-face normal to the g-buffer, save the processed normal from the normal map to the buffer, that's all.

and then proceed to use those normals with the lighting equations in the lighting stage?

##### Share on other sites
> What coordinates do you do your lighting in?

Since I use OpenGL, the simplest way to go was view-space. So I encode and store everything in view-space, reconstruct everything to view-space, and do the lighting in view-space.

> For certain kinds of forward-renderers, it turns out that tangent-space lighting can be the optimal choice (least transforms), but deferred renderers usually use world or view-space (tangent space would require writing the TBN into your G-Buffer!).

yep that's what I was afraid of

> Normal maps are usually stored in tangent-space. Your vertex's `T`, `B`, and `N` values form a 3x3 rotation matrix, which rotates from model-space into tangent-space.
> The transpose of a rotation matrix is the same as its inverse, so `transpose(TBN)` rotates tangent-space values back into model-space.
>
> By using your world matrix (model-space to world-space transform) and your TBN, you can rotate your normal-map values into world-space, and then write them into your g-buffer as usual.

So the normal map that I load in would be in tangent space? That's interesting... but can't they be stored in view-space? It would be the same -1...1 range that could be stored as 0...1.

Ok so I tried to create a concrete implementation using your advice:
[vertex shader]

```glsl
#version 410

uniform mat4 m4_p, m4_mv; //projection and modelview matrices
uniform mat3 m3_n; //normal matrix, I think gl_NormalMatrix in old GLSL

in vec4 v4_vertex; //per vertex attributes
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture; //texture coordinates

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position; //view space pos
  mat3 m3_tbn; //transposed tbn matrix
} vertex_output;

void main()
{
  vec3 vs_normal = m3_n * v3_normal;
  vec3 vs_tangent = m3_n * v3_tangent;
  vec3 vs_bitangent = cross(vs_normal, vs_tangent);

  //compose tbn matrix and transpose it
  vertex_output.m3_tbn = transpose(mat3(normalize(vs_normal), normalize(vs_tangent), normalize(vs_bitangent)));

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}
```

[pixel shader]

```glsl
#version 410

uniform sampler2D texture0; //albedo
uniform sampler2D texture1; //normal map
uniform float far;

in cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 m3_tbn;
} pixel_input;

out vec4 v4_albedo;
out vec4 v4_normal;
out vec4 v4_depth;

vec2 encode_normals_spheremap(vec3 n)
{
  vec2 enc = (normalize(n.xy) * sqrt(-n.z * 0.5 + 0.5)) * 0.5 + 0.5;
  return enc;
}

void main()
{
  v4_albedo = texture(texture0, pixel_input.v2_texture_coords);

  //sample the normal map, multiply by the transposed tbn matrix, normalize it, and encode it using spheremap encoding
  //this way I would store view-space normals, right?
  v4_normal.xy = encode_normals_spheremap(normalize(pixel_input.m3_tbn * texture(texture1, pixel_input.v2_texture_coords).xyz));

  v4_depth.x = pixel_input.position.z / -far; //store linear depth
}
```
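As a side note, the spheremap encoding in the pixel shader is invertible, so the lighting pass can recover the view-space normal from just two G-buffer channels. The decode below is my own reconstruction from the encode formula (it isn't code from this thread), sketched in Python to check the round trip:

```python
import math

def encode_spheremap(n):
    # mirrors the GLSL: enc = (normalize(n.xy) * sqrt(-n.z*0.5 + 0.5)) * 0.5 + 0.5
    l = math.hypot(n[0], n[1])
    s = math.sqrt(-n[2] * 0.5 + 0.5)
    return [(n[0] / l) * s * 0.5 + 0.5, (n[1] / l) * s * 0.5 + 0.5]

def decode_spheremap(enc):
    # undo the bias, then use |f|^2 == (1 - n.z)/2 to recover n.z,
    # and rescale f back to the original xy length
    fx, fy = enc[0] * 2.0 - 1.0, enc[1] * 2.0 - 1.0
    l2 = fx * fx + fy * fy
    scale = 2.0 * math.sqrt(max(1.0 - l2, 0.0))
    return [fx * scale, fy * scale, 1.0 - 2.0 * l2]

n = [0.6, 0.0, 0.8]  # already unit length
roundtrip = decode_spheremap(encode_spheremap(n))
print(roundtrip)  # recovers ~[0.6, 0.0, 0.8]
```

(Note the encode breaks down for a normal exactly equal to (0, 0, ±1), where normalize(n.xy) divides by zero.)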

##### Share on other sites
> So the normal map that I load in would be in tangent space? That's interesting... but can't they be stored in view-space?

If they were stored in view-space, you'd need to load a new texture each time the view changed.
Tangent-space means "relative to the surface", where Z (or blue, or the surface normal) is the axis sticking out of the surface, which is why a flat blue normal-map is a flat surface.

Unless your normal-map texture is in some kind of signed format, you probably need a "*2-1" on it, like `(texture(texture1, pixel_input.v2_texture_coords).xyz*2-1)`
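The *2-1 is needed because an unsigned texture format stores each component remapped from [-1, 1] into [0, 1]. A tiny sketch of the unpacking (the helper name is mine):

```python
def unpack_unorm(c):
    # undo the [-1,1] -> [0,1] packing used by unsigned normal-map textures
    return c * 2.0 - 1.0

# a "flat blue" texel, roughly (128, 128, 255) in 8 bits, samples as about (0.5, 0.5, 1.0)
texel = (0.5, 0.5, 1.0)
n = tuple(unpack_unorm(c) for c in texel)
print(n)  # (0.0, 0.0, 1.0): the unperturbed surface normal
```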

##### Share on other sites

> If they were stored in view-space, you'd need to load a new texture each time the view changed.

omg I didn't think about that, but you're right...

> Unless your normal-map texture is in some kind of signed format, you probably need a "*2-1" on it, like `(texture(texture1, pixel_input.v2_texture_coords).xyz*2-1)`

oh, I forgot to add that. But apart from that the shader is correct, right? And I can do lighting in view-space as usual, right?

##### Share on other sites
Hi,

I tried to implement it, but the normals just don't look right, and so neither does the lighting...

I attached a screenshot of it. The cube is bump mapped; the rest isn't.

Here are the shaders that I used:

[vertex_shader]

```glsl
#version 410

uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;

in vec4 v4_vertex;
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} vertex_output;

void main()
{
  vec3 normal = m3_n * v3_normal;
  vec3 tangent = m3_n * v3_tangent;

  vertex_output.tbn = transpose(mat3(
    normalize(m3_n * v3_normal),
    normalize(m3_n * v3_tangent),
    normalize(cross(normal, tangent))
  ));

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}
```

[pixel shader]

```glsl
#version 410

uniform sampler2D texture0;
uniform sampler2D texture1;
uniform float far;

in cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} pixel_input;

out vec4 v4_albedo;
out vec4 v4_normal;
out vec4 v4_depth;

vec2 encode_normals_spheremap(vec3 n)
{
  vec2 enc = (normalize(n.xy) * sqrt(-n.z * 0.5 + 0.5)) * 0.5 + 0.5;
  return enc;
}

void main()
{
  v4_albedo = texture(texture0, pixel_input.v2_texture_coords.st);
  v4_normal.xy = encode_normals_spheremap(pixel_input.tbn * normalize(texture(texture1, pixel_input.v2_texture_coords).xyz * 2.0 - 1.0));
  v4_depth.x = pixel_input.position.z / -far;
}
```

##### Share on other sites
Ok, I think I got it right, but please confirm.

I had to change the vertex shader to this:
```glsl
#version 410

uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;

in vec4 v4_vertex;
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} vertex_output;

void main()
{
  vec3 normal = normalize(m3_n * v3_normal);
  vec3 tangent = normalize(m3_n * v3_tangent);
  vec3 bitangent = normalize(cross(normal, tangent));

  vertex_output.tbn = transpose(mat3(
    tangent.x, bitangent.x, normal.x,
    tangent.y, bitangent.y, normal.y,
    tangent.z, bitangent.z, normal.z
  ));

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}
```

The issue was the column-major vs. row-major construction of the TBN matrix, which I got wrong in the previous vertex shader.

I attached an image about the normals, but please confirm if it looks right.

##### Share on other sites
Your last screenshot looks like it's probably right.

BTW, these two are the same thing:

```glsl
//manually transpose, then (un)transpose
vertex_output.tbn = transpose(mat3(
  tangent.x, bitangent.x, normal.x,
  tangent.y, bitangent.y, normal.y,
  tangent.z, bitangent.z, normal.z
));

vertex_output.tbn = mat3(tangent, bitangent, normal);
```
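If it isn't obvious why those are equivalent: GLSL's mat3 constructors are column-major, so listing the T/B/N components row-by-row already builds the transposed matrix, and the explicit transpose() just undoes that. A Python sketch emulating the two constructors (helper names are mine):

```python
def mat3_from_scalars(*s):
    # GLSL mat3(s0..s8): each consecutive triple of scalars fills one COLUMN
    cols = [s[0:3], s[3:6], s[6:9]]
    return [[cols[c][r] for c in range(3)] for r in range(3)]  # as rows

def mat3_from_vecs(a, b, c):
    # GLSL mat3(vec3, vec3, vec3): each vec3 is one COLUMN
    cols = [a, b, c]
    return [[cols[c][r] for c in range(3)] for r in range(3)]

def transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

# distinct values so any layout mistake is visible
t, b, n = [1, 2, 3], [4, 5, 6], [7, 8, 9]

# listing components row-by-row in a column-major constructor builds the
# transpose, so transposing again lands exactly on mat3(t, b, n)
m1 = transpose(mat3_from_scalars(t[0], b[0], n[0],
                                 t[1], b[1], n[1],
                                 t[2], b[2], n[2]))
m2 = mat3_from_vecs(t, b, n)
print(m1 == m2)  # True
```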

##### Share on other sites

> Your last screenshot looks like it's probably right.
>
> BTW, these two are the same thing: `vertex_output.tbn = transpose(mat3(tangent.x, bitangent.x, normal.x, tangent.y, bitangent.y, normal.y, tangent.z, bitangent.z, normal.z));` and `vertex_output.tbn = mat3(tangent, bitangent, normal);`

yeah I just noticed that, so no matrix transpose or inverse at all --> very cheap
and thanks for confirming AND for the help

##### Share on other sites
> `vertex_output.tbn = mat3(tangent, bitangent, normal);`

Should you renormalize that in the fragment shader? The values are linearly interpolated, so how valid is the transformation? If you send unnormalized tangent, bitangent and normal, then normalize those in the fragment shader and create the matrix there, you might get better results. But I haven't tested it myself, so I can't say if there's any real difference.

##### Share on other sites

> `vertex_output.tbn = mat3(tangent, bitangent, normal);`
>
> Should you renormalize that in the fragment shader? The values are linearly interpolated, so how valid is the transformation? If you send unnormalized tangent, bitangent and normal, then normalize those in the fragment shader and create the matrix there, you might get better results. But I haven't tested it myself, so I can't say if there's any real difference.

ummm, as far as I know you only need to normalize them at the pixel shader stage if they're unnormalized... but I'm already sending them normalized, so interpolation gives correct results. (I'm not sure about this, but I remember something like this...)
And I think that's why I do normalization in the vertex shader, in case the normals aren't normalized (which may happen if you scale the object).
But you should try it out.

##### Share on other sites
Interpolated normals do not stay normalized. This is a basic thing that is done for every light model: the normal is renormalized in the pixel shader.

Try to lerp between [1,0,0] and [0,1,0] and you'll clearly see the unnormalized result.
But the real question is: is renormalization needed for normal mapping, or is the error too small to be noticed?
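That lerp is easy to check numerically; at the halfway point the interpolated normal has lost almost 30% of its length:

```python
import math

def lerp(a, b, t):
    # componentwise linear interpolation, like the hardware interpolator
    return [x + (y - x) * t for x, y in zip(a, b)]

n0, n1 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
mid = lerp(n0, n1, 0.5)
length = math.sqrt(sum(c * c for c in mid))
print(mid, length)  # [0.5, 0.5, 0.0] with length ~0.707, well short of 1
```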

##### Share on other sites

> Interpolated normals do not stay normalized. This is a basic thing that is done for every light model: the normal is renormalized in the pixel shader.
>
> Try to lerp between [1,0,0] and [0,1,0] and you'll clearly see the unnormalized result.
> But the real question is: is renormalization needed for normal mapping, or is the error too small to be noticed?

well, then that should be done at the pixel shader stage...

~~well, I haven't noticed any visible errors, so I guess the error is negligible.~~
When I simply displayed the normals, some error was visible: if you only normalize in the vertex shader, the normals in the end are not unit vectors but slightly shorter, which makes the lighting a little darker. (see the cubes in the pictures)