bump mapping in deferred renderer

Hi,

I'm trying to implement bump mapping in my deferred renderer.
I'm following this tutorial: http://fabiensanglard.net/bumpMapping/index.php

I already have the tangent and bitangent attributes calculated.

Currently I'm storing albedo, per-face normals, and depth in my G-buffer. However, I don't know how I should implement bump mapping, since in the tutorial he transforms the light and eye vectors to tangent space, but leaves the normals alone.

So how is it usually done?

Best regards,
Yours3!f
Instead of saving the per-face normal to the g-buffer, save the processed normal from the normal map to the buffer, that's all.
What coordinates do you do your lighting in? To illustrate, I'm going to assume world-space below.

Normal maps are usually stored in tangent-space. Your vertex's T, B, and N values form a 3x3 rotation matrix, which rotates from model-space into tangent-space.
The transpose of a rotation matrix is the same as its inverse, so transpose(TBN) rotates tangent-space values back into model-space.

By using your world matrix (model-space to world-space transform) and your TBN, you can rotate your normal-map values into world-space, and then write them into your g-buffer as usual.


To contrast, the method you're reading about performs its lighting calculations in tangent-space. This means that they don't have to perform per-pixel rotations on their normal maps, but they do have to perform per-pixel rotation of the light and camera instead. There's no "right space" to do your calculations in -- it's just that each will require different parameters to be transformed in different ways.

For certain kinds of forward-renderers, it turns out that tangent-space lighting can be the optimal choice (least transforms), but deferred renderers usually use world or view-space (tangent space would require writing the TBN into your G-Buffer!).
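
Put concretely, it boils down to something like this (just a rough sketch with made-up names -- u_world, u_normal_map, v_uv, v_tbn, in_normal and in_tangent aren't from your code -- and it assumes the world matrix has no non-uniform scale; the same idea works in view-space with the modelview/normal matrix instead):

//vertex shader: build a tangent-to-world rotation from the per-vertex basis
vec3 N = normalize(mat3(u_world) * in_normal);
vec3 T = normalize(mat3(u_world) * in_tangent);
vec3 B = cross(N, T);
v_tbn = mat3(T, B, N); //columns are the world-space basis vectors, i.e. the transpose(TBN) from above

//pixel shader: unpack the tangent-space sample and rotate it into world-space
vec3 n_ts = texture(u_normal_map, v_uv).xyz * 2.0 - 1.0; //remap [0,1] -> [-1,1]
vec3 n_ws = normalize(v_tbn * n_ts);
//write n_ws into the g-buffer as usual (encoded however you like)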
Instead of saving the per-face normal to the g-buffer, save the processed normal from the normal map to the buffer, that's all.

and then proceed to use those normals with the lighting equations in the lighting stage?
What coordinates do you do your lighting in?

Since I use OpenGL, the simplest way to go was view-space. So I encode and store everything in view-space, reconstruct everything back into view-space, and do the lighting in view-space.


For certain kinds of forward-renderers, it turns out that tangent-space lighting can be the optimal choice (least transforms), but deferred renderers usually use world or view-space (tangent space would require writing the TBN into your G-Buffer!).


Yep, that's what I was afraid of.

Normal maps are usually stored in tangent-space. Your vertex's T, B, and N values form a 3x3 rotation matrix, which rotates from model-space into tangent-space.
The transpose of a rotation matrix is the same as its inverse, so transpose(TBN) rotates tangent-space values back into model-space.

By using your world matrix (model-space to world-space transform) and your TBN, you can rotate your normal-map values into world-space, and then write them into your g-buffer as usual.

So the normal map that I load in would be in tangent space? That's interesting... but can't they be stored in view-space? It would be the same -1...1 range that could be stored as 0...1.

Ok so I tried to create a concrete implementation using your advice:

[vertex shader]
#version 410

uniform mat4 m4_p, m4_mv; //projection and modelview matrices
uniform mat3 m3_n; //normal matrix, I think gl_NormalMatrix in old GLSL

in vec4 v4_vertex; //per-vertex attributes
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture; //texture coordinates

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position; //view-space position
  mat3 m3_tbn; //transposed tbn matrix
} vertex_output;

void main()
{
  vec3 vs_normal = m3_n * v3_normal;
  vec3 vs_tangent = m3_n * v3_tangent;
  vec3 vs_bitangent = cross(vs_normal, vs_tangent);

  //compose tbn matrix and transpose it
  vertex_output.m3_tbn = transpose(mat3(normalize(vs_normal), normalize(vs_tangent), normalize(vs_bitangent)));

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}
[pixel shader]
#version 410

uniform sampler2D texture0; //albedo
uniform sampler2D texture1; //normal map
uniform float far;

in cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 m3_tbn;
} pixel_input;

out vec4 v4_albedo;
out vec4 v4_normal;
out vec4 v4_depth;

vec2 encode_normals_spheremap(vec3 n)
{
  vec2 enc = (normalize(n.xy) * sqrt(-n.z * 0.5 + 0.5)) * 0.5 + 0.5;
  return enc;
}

void main()
{
  v4_albedo = texture(texture0, pixel_input.v2_texture_coords);

  //sample the normal map, multiply by the transposed tbn matrix, normalize it, and encode it using spheremap encoding
  //so this way I would be storing view-space normals, right?
  v4_normal.xy = encode_normals_spheremap(normalize(pixel_input.m3_tbn * texture(texture1, pixel_input.v2_texture_coords).xyz));

  v4_depth.x = pixel_input.position.z / -far; //store linear depth
}
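
In the lighting stage I'd then decode with the matching inverse -- something like the standard spheremap decode that pairs with the encode above (just a sketch of what I have in mind):

vec3 decode_normals_spheremap(vec2 enc)
{
  vec2 fenc = enc * 4.0 - 2.0;
  float f = dot(fenc, fenc);
  float g = sqrt(1.0 - f * 0.25);
  return vec3(fenc * g, 1.0 - f * 0.5); //unit-length view-space normal
}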
So the normal map that I load in would be in tangent space? That's interesting... but can't they be stored in view-space? It would be the same -1...1 range that could be stored as 0...1.
If they were stored in view-space, you'd need to load a new texture each time the view changed ;)
Tangent-space means 'relative to the surface', where Z (or blue (or the surface normal)) is the axis sticking out of the surface, which is why a flat blue normal-map is a flat surface.

Unless your normal-map texture is in some kind of signed format, you probably need a "*2-1" on it, like (texture(texture1, pixel_input.v2_texture_coords).xyz * 2.0 - 1.0)

If they were stored in view-space, you'd need to load a new texture each time the view changed


omg I didn't think about that, but you're right...


Unless your normal-map texture is in some kind of signed format, you probably need a "*2-1" on it, like (texture(texture1, pixel_input.v2_texture_coords).xyz * 2.0 - 1.0)

Oh, I forgot to add that :) But apart from that the shader is correct, right? And I can do the lighting in view-space as usual, right?
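
By "as usual" I mean roughly this in the point-light pass (just a sketch -- g_albedo, g_normal, g_depth, v3_view_ray, v3_light_pos_vs, v3_light_color and uv are made-up names here; v3_view_ray would be the interpolated frustum-corner ray scaled so its z is -1, and the light position is assumed to already be transformed into view-space on the CPU):

//deferred point-light fragment, everything in view-space
vec3 n_vs = decode_normals_spheremap(texture(g_normal, uv).xy);
float lin_depth = texture(g_depth, uv).x; //stored as position.z / -far
vec3 p_vs = v3_view_ray * (lin_depth * far); //reconstruct view-space position
vec3 l = normalize(v3_light_pos_vs - p_vs);
float n_dot_l = max(dot(n_vs, l), 0.0);
vec3 albedo = texture(g_albedo, uv).rgb;
out_color = vec4(albedo * v3_light_color * n_dot_l, 1.0); //plus attenuation, specular, etc.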
Hi,

I tried to implement it, but the normals just don't look right, and neither does the lighting...

I attached a screenshot about it. The cube is bump-mapped, the rest isn't.

Here are the shaders that I used:


[vertex shader]
#version 410

uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;

in vec4 v4_vertex;
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} vertex_output;

void main()
{
  vec3 normal = m3_n * v3_normal;
  vec3 tangent = m3_n * v3_tangent;

  vertex_output.tbn = transpose(mat3( normalize(m3_n * v3_normal),
                                      normalize(m3_n * v3_tangent),
                                      normalize(cross(normal, tangent)) ));

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}
[pixel shader]
#version 410

uniform sampler2D texture0;
uniform sampler2D texture1;
uniform float far;

in cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} pixel_input;

out vec4 v4_albedo;
out vec4 v4_normal;
out vec4 v4_depth;

vec2 encode_normals_spheremap(vec3 n)
{
  vec2 enc = (normalize(n.xy) * sqrt(-n.z * 0.5 + 0.5)) * 0.5 + 0.5;
  return enc;
}

void main()
{
  v4_albedo = texture(texture0, pixel_input.v2_texture_coords.st);
  v4_normal.xy = encode_normals_spheremap(pixel_input.tbn * normalize(texture(texture1, pixel_input.v2_texture_coords).xyz * 2.0 - 1.0));
  v4_depth.x = pixel_input.position.z / -far;
}
Ok, I think I got it right, but please confirm.

I had to change the vertex shader to this:

#version 410

uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;

in vec4 v4_vertex;
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} vertex_output;

void main()
{
  vec3 normal = normalize(m3_n * v3_normal);
  vec3 tangent = normalize(m3_n * v3_tangent);
  vec3 bitangent = normalize(cross(normal, tangent));

  vertex_output.tbn = transpose(mat3( tangent.x, bitangent.x, normal.x,
                                      tangent.y, bitangent.y, normal.y,
                                      tangent.z, bitangent.z, normal.z ));

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}


The issue was the column-major TBN matrix vs. the row-major one, which I used in the previous vertex shader.

I attached an image about the normals, but please confirm if it looks right.
Your last screenshot looks like it's probably right.

BTW, these two are the same thing:

vertex_output.tbn = transpose(mat3( tangent.x, bitangent.x, normal.x,
                                    tangent.y, bitangent.y, normal.y,
                                    tangent.z, bitangent.z, normal.z )); //manually transpose, then (un)transpose

vertex_output.tbn = mat3(tangent, bitangent, normal);
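
For anyone else who hits this: GLSL matrix constructors fill columns, so mat3(tangent, bitangent, normal) puts T, B and N in as columns, which is already the tangent-space-to-view-space rotation; listing the components row by row, as above, builds the view-to-tangent matrix instead, hence the extra transpose. Roughly (n_ts being the unpacked normal-map sample):

mat3 tbn = mat3(tangent, bitangent, normal); //columns = basis vectors: rotates tangent-space -> view-space
vec3 n_vs = tbn * n_ts;                      //normal-map sample into view-space
//transpose(tbn) * v would go the other way: view-space -> tangent-space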
