
bump mapping in deferred renderer


Hi,

I'm trying to implement bump mapping in my deferred renderer.
I'm following this tutorial: http://fabiensanglard.net/bumpMapping/index.php

I already have the tangent and bitangent attributes calculated.

Currently I'm storing albedo, per-face normals, and depth in my G-buffer. However, I don't know how I should implement bump mapping, since in the tutorial he transforms the light and eye vectors into tangent space but leaves the normals alone.

So how is it usually done?

Best regards,
Yours3!f

Instead of saving the per-face normal to the G-buffer, save the processed normal from the normal map to the buffer, that's all.

What coordinates do you do your lighting in? To illustrate, I'm going to assume world-space below.

Normal maps are usually stored in tangent-space. Your vertex's [font=courier new,courier,monospace]T[/font], [font=courier new,courier,monospace]B[/font], and [font=courier new,courier,monospace]N[/font] values form a 3x3 rotation matrix, which rotates from model-space into tangent-space.
The transpose of a rotation matrix is the same as its inverse, so [font=courier new,courier,monospace]transpose(TBN)[/font] rotates tangent-space values back into model-space.

By using your world matrix ([i]model-space to world-space transform[/i]) and your TBN, you can rotate your normal-map values into world-space, and then write them into your g-buffer as usual.


To contrast, the method you're reading about performs its lighting calculations in tangent-space. This means that they don't have to perform per-pixel rotations on their normal maps, but they do have to perform per-pixel rotation of the light and camera instead. There's no "right space" to do your calculations in -- it's just that each space will require different parameters to be transformed in different ways.

For certain kinds of forward-renderers, it turns out that tangent-space lighting can be the optimal choice ([i]least transforms[/i]), but deferred renderers usually use world or view-space ([i]tangent space would require writing the TBN into your G-Buffer![/i]).
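
As a rough GLSL-style sketch of that idea (the names [font=courier new,courier,monospace]normalMap[/font], [font=courier new,courier,monospace]uv[/font] and [font=courier new,courier,monospace]m4_world[/font] are just placeholders here, not anything from your code):
[code]
// Sketch only: with T, B, N as columns, this matrix rotates tangent-space vectors into model-space.
mat3 tbn = mat3(tangent, bitangent, normal);

// Unpack the normal map from [0,1] to [-1,1] (assuming an unsigned texture format).
vec3 n_ts = texture(normalMap, uv).xyz * 2.0 - 1.0;

// Rotate into model-space, then into world-space, then write it into the G-buffer as usual.
vec3 n_ws = normalize(mat3(m4_world) * (tbn * n_ts));
[/code]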

[quote]Instead of saving the per-face normal to the G-buffer, save the processed normal from the normal map to the buffer, that's all.[/quote]

And then I proceed to use those normals with the lighting equations in the lighting stage?

[quote]What coordinates do you do your lighting in?[/quote]

Since I use OpenGL, the simplest way to go was view-space. So I encode and store everything in view-space, reconstruct everything to view-space, and do the lighting in view-space.

[quote name='Hodgman' timestamp='1333539896' post='4928172']
For certain kinds of forward-renderers, it turns out that tangent-space lighting can be the optimal choice ([i]least transforms[/i]), but deferred renderers usually use world or view-space ([i]tangent space would require writing the TBN into your G-Buffer![/i]).
[/quote]

Yep, that's what I was afraid of.

[quote]Normal maps are usually stored in tangent-space. Your vertex's [font=courier new,courier,monospace]T[/font], [font=courier new,courier,monospace]B[/font], and [font=courier new,courier,monospace]N[/font] values form a 3x3 rotation matrix, which rotates from model-space into tangent-space.
The transpose of a rotation matrix is the same as its inverse, so [font=courier new,courier,monospace]transpose(TBN)[/font] rotates tangent-space values back into model-space.

By using your world matrix ([i]model-space to world-space transform[/i]) and your TBN, you can rotate your normal-map values into world-space, and then write them into your g-buffer as usual.[/quote]

So the normal map that I load in would be in tangent space? That's interesting... but can't they be stored in view-space? It would be the same -1...1 range that could be stored as 0...1.

OK, so I tried to create a concrete implementation using your advice:
[CODE]
[vertex shader]
#version 410

uniform mat4 m4_p, m4_mv; //projection and modelview matrices
uniform mat3 m3_n; //normal matrix, I think gl_NormalMatrix in old GLSL

in vec4 v4_vertex; //per-vertex attributes
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture; //texture coordinates

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position; //view-space position
  mat3 m3_tbn; //transposed tbn matrix
} vertex_output;

void main()
{
  vec3 vs_normal = m3_n * v3_normal;
  vec3 vs_tangent = m3_n * v3_tangent;
  vec3 vs_bitangent = cross(vs_normal, vs_tangent);

  //compose the tbn matrix and transpose it
  vertex_output.m3_tbn = transpose(mat3(normalize(vs_normal), normalize(vs_tangent), normalize(vs_bitangent)));

  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}

[pixel shader]
#version 410

uniform sampler2D texture0; //albedo
uniform sampler2D texture1; //normal map
uniform float far;

in cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 m3_tbn;
} pixel_input;

out vec4 v4_albedo;
out vec4 v4_normal;
out vec4 v4_depth;

vec2 encode_normals_spheremap(vec3 n)
{
  vec2 enc = (normalize(n.xy) * sqrt(-n.z * 0.5 + 0.5)) * 0.5 + 0.5;
  return enc;
}

void main()
{
  v4_albedo = texture(texture0, pixel_input.v2_texture_coords);
  //sample the normal map, multiply by the transposed tbn matrix, normalize it, and encode it with spheremap encoding
  //this way I would store view-space normals, right?
  v4_normal.xy = encode_normals_spheremap(normalize(pixel_input.m3_tbn * texture(texture1, pixel_input.v2_texture_coords).xyz));
  v4_depth.x = pixel_input.position.z / -far; //store linear depth
}
[/CODE]
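
For the lighting pass I'd also need to decode these; my guess for the matching decode of the sphere-map encoding above would be something like this (just a sketch I derived from the encode function, so please correct me if it's wrong):
[CODE]
//sketch: inverse of encode_normals_spheremap above
vec3 decode_normals_spheremap(vec2 enc)
{
  vec2 fenc = enc * 2.0 - 1.0;       //back to the -1...1 range
  float f = dot(fenc, fenc);         //equals (1.0 - n.z) * 0.5 for a unit normal
  vec3 n;
  n.xy = 2.0 * fenc * sqrt(1.0 - f);
  n.z = 1.0 - 2.0 * f;
  return n;
}
[/CODE]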

[quote name='Yours3!f' timestamp='1333550372' post='4928213']So the normal map that I load in would be in tangent space? That's interesting... but can't they be stored in view-space? It would be the same -1...1 range that could be stored as 0...1.[/quote]If they were stored in view-space, you'd need to load a new texture each time the view changed ;)
Tangent-space means 'relative to the surface', where Z (or blue [i](or the surface normal)[/i]) is the axis sticking out of the surface, which is why a flat blue normal-map is a flat surface.

Unless your normal-map texture is in some kind of signed format, you probably need a "*2-1" on it, like[font=courier new,courier,monospace] (texture(texture1, pixel_input.v2_texture_coords).xyz*2-1)[/font]

[quote name='Hodgman' timestamp='1333551349' post='4928214']
If they were stored in view-space, you'd need to load a new texture each time the view changed
[/quote]

omg I didn't think about that, but you're right...

[quote]
Unless your normal-map texture is in some kind of signed format, you probably need a "*2-1" on it, like[font=courier new,courier,monospace] (texture(texture1, pixel_input.v2_texture_coords).xyz*2-1)[/font]
[/quote]

Oh, I forgot to add that :) But apart from that, the shader is correct, right? And I can do lighting in view-space as usual, right?

Hi,

I tried to implement it, but the normals just don't look right, and so the lighting doesn't either...

I attached a screenshot of it. The cube is bump mapped, the rest isn't.

Here are the shaders that I used:

[CODE]
[vertex shader]
#version 410

uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;

in vec4 v4_vertex;
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} vertex_output;

void main()
{
  vec3 normal = m3_n * v3_normal;
  vec3 tangent = m3_n * v3_tangent;
  vertex_output.tbn = transpose(mat3( normalize(m3_n * v3_normal),
                                      normalize(m3_n * v3_tangent),
                                      normalize(cross(normal, tangent)) ));
  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}

[pixel shader]
#version 410

uniform sampler2D texture0;
uniform sampler2D texture1;
uniform float far;

in cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} pixel_input;

out vec4 v4_albedo;
out vec4 v4_normal;
out vec4 v4_depth;

vec2 encode_normals_spheremap(vec3 n)
{
  vec2 enc = (normalize(n.xy) * sqrt(-n.z * 0.5 + 0.5)) * 0.5 + 0.5;
  return enc;
}

void main()
{
  v4_albedo = texture(texture0, pixel_input.v2_texture_coords.st);
  v4_normal.xy = encode_normals_spheremap(pixel_input.tbn * normalize(texture(texture1, pixel_input.v2_texture_coords).xyz * 2.0 - 1.0));
  v4_depth.x = pixel_input.position.z / -far;
}
[/CODE]

OK, I think I got it right, but please confirm.

I had to change the vertex shader to this:
[CODE]
#version 410

uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;

in vec4 v4_vertex;
in vec3 v3_normal;
in vec3 v3_tangent;
in vec2 v2_texture;

out cross_shader_data
{
  vec2 v2_texture_coords;
  vec4 position;
  mat3 tbn;
} vertex_output;

void main()
{
  vec3 normal = normalize(m3_n * v3_normal);
  vec3 tangent = normalize(m3_n * v3_tangent);
  vec3 bitangent = normalize(cross(normal, tangent));
  vertex_output.tbn = transpose(mat3( tangent.x, bitangent.x, normal.x,
                                      tangent.y, bitangent.y, normal.y,
                                      tangent.z, bitangent.z, normal.z ));
  vertex_output.v2_texture_coords = v2_texture;
  vertex_output.position = m4_mv * v4_vertex;
  gl_Position = m4_p * vertex_output.position;
}
[/CODE]

The issue was the column-major TBN matrix vs. the row-major one, which I used in the previous vertex shader.

I attached an image of the normals, but please confirm whether it looks right.

Your last screenshot looks like it's probably right.

BTW, these two are the same thing:[code] vertex_output.tbn = transpose(mat3( tangent.x, bitangent.x, normal.x,
tangent.y, bitangent.y, normal.y,
tangent.z, bitangent.z, normal.z ));//manually transpose, then (un)transpose

vertex_output.tbn = mat3(tangent, bitangent, normal);[/code]

[quote name='Hodgman' timestamp='1333608562' post='4928400']
Your last screenshot looks like it's probably right.

BTW, these two are the same thing:[code] vertex_output.tbn = transpose(mat3( tangent.x, bitangent.x, normal.x,
tangent.y, bitangent.y, normal.y,
tangent.z, bitangent.z, normal.z ));//manually transpose, then (un)transpose

vertex_output.tbn = mat3(tangent, bitangent, normal);[/code]
[/quote]

Yeah, I just noticed that, so no matrix transpose or inverse at all --> very cheap :)
And thanks for confirming AND for the help :)

vertex_output.tbn = mat3(tangent, bitangent, normal);
Should you renormalize that in the fragment shader? The values are linearly interpolated, but how valid is the transformation then? If you send unnormalized tangent, bitangent, and normal vectors, then normalize them in the fragment shader and build the matrix there, you might get better results. But I haven't tested it myself, so I can't say whether there is any real difference.

[quote name='kalle_h' timestamp='1336253975' post='4937667']
vertex_output.tbn = mat3(tangent, bitangent, normal);
Should you renormalize that in the fragment shader? The values are linearly interpolated, but how valid is the transformation then? If you send unnormalized tangent, bitangent, and normal vectors, then normalize them in the fragment shader and build the matrix there, you might get better results. But I haven't tested it myself, so I can't say whether there is any real difference.
[/quote]

Umm, as far as I know you only need to normalize them at the pixel shader stage if they're unnormalized... but I'm already sending them normalized, so interpolation gives correct results. (I'm not sure about this, but I remember something like this...)
And I think that's why I do the normalization in the vertex shader, in case the normals aren't normalized (which may happen if you scale the object).
But you should try it out :)

Interpolated normals do not stay normalized. This is a basic thing that is done for every lighting model: the normal is renormalized in the pixel shader.

Try to lerp between [1,0,0] and [0,1,0] and you will clearly see the unnormalized result.
But the real question is: is it needed for normal mapping, or is the error too small to be noticed?

[quote name='kalle_h' timestamp='1336292000' post='4937725']
Interpolated normals do not stay normalized. This is a basic thing that is done for every lighting model: the normal is renormalized in the pixel shader.

Try to lerp between [1,0,0] and [0,1,0] and you will clearly see the unnormalized result.
But the real question is: is it needed for normal mapping, or is the error too small to be noticed?
[/quote]

Well, then that should be done at the pixel shader stage...

[s]Well, I haven't noticed any visible errors, so I guess the error is negligible.[/s]
When I simply displayed the normals, some error is visible: when you normalize in the vertex shader, the normals in the end are not unit vectors but slightly shorter, which makes the lighting a little darker (see the cubes in the pictures).
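
For reference, moving the renormalization into the fragment shader would look something like this (a sketch only, using the tbn output from the vertex shader above):
[CODE]
//fragment shader sketch: renormalize the interpolated basis before using it
//pixel_input.tbn is the interpolated mat3(tangent, bitangent, normal)
mat3 tbn = mat3(normalize(pixel_input.tbn[0]),  //tangent
                normalize(pixel_input.tbn[1]),  //bitangent
                normalize(pixel_input.tbn[2])); //normal
vec3 n_ts = normalize(texture(texture1, pixel_input.v2_texture_coords).xyz * 2.0 - 1.0);
vec3 n_vs = normalize(tbn * n_ts); //view-space normal, unit length again
v4_normal.xy = encode_normals_spheremap(n_vs);
[/CODE]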
