tutorial deferred shading? [Implemented it, not working properly]

9 comments, last by BlueSpud 10 years, 6 months ago

I've been trying to implement one-light deferred shading for upwards of 10 hours now because I have no idea what I'm doing wrong. I'm writing this in GLSL. It seems I've tried every variation of view space, world space, etc., and nothing works. I've looked on the internet for tutorials, but I can't find any, just papers explaining the theory. If anyone could point me to a good tutorial, that would be greatly appreciated. Thanks.


There are a lot of tutorials about deferred rendering out there (most might be in XNA/DirectX but you should be able to port the code to OpenGL because it all works the same way):

There's a Hieroglyph 3 sample with a good implementation of Deferred Rendering (it's in D3D11...)

Deferred Rendering (XNA) <- This one looks good and well explained

Simple OpenGL Deferred Rendering <- This one stores full position instead of reconstructing it from depth...

Thanks for pointing me in the right direction... but it's still not working properly. Maybe you can help?

Here is what I have:

(These are all vertex shaders; the fragment shader just draws it.)

position shader:


varying vec3 pos;
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    pos = vec3(gl_ModelViewMatrix * gl_Vertex);
}

I know this can be inaccurate and such, but it's just temporary.
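
For reference, the fragment shader that "just draws it" would be something like this minimal sketch, assuming the varying name matches the vertex shader above:


varying vec3 pos;
void main(void)
{
    // Write the interpolated view-space position into the buffer
    gl_FragColor = vec4(pos, 1.0);
}
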

Normal Shader:


varying vec3 normal;
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    normal = gl_NormalMatrix * gl_Normal;
}

The Albedo is just rendered with no shader, just textures.

Then here is my fragment light shader:


uniform sampler2D positionMap;
uniform sampler2D normalMap;
uniform sampler2D albedoMap;
varying vec2 texcoord;

void main()
{
    //get all the G-buffer information
    vec3 normal = ((texture2D(normalMap,texcoord)).rgb);
    vec3 color = (texture2D(albedoMap,texcoord)).rgb;
    vec3 position = (texture2D(positionMap,texcoord)).rgb;

    //start making the light happen
    vec3 lightVec = normalize(gl_LightSource[0].position.xyz - position);
    vec4 diffuselight = vec4(max(dot(normal,lightVec), 0.0),max(dot(normal,lightVec), 0.0),max(dot(normal,lightVec), 0.0),0.0);
    diffuselight = clamp(diffuselight, 0.0, 1.0)*2.0;


    gl_FragColor = vec4(diffuselight);
}

This makes really strange results like no shading from certain angles, etc. Before I started implementing this I made a forward rendering shader for one light. Here it is:

Vertex Shader:



varying vec3 N;
varying vec3 v;

void main(void)
{
    
    v = vec3(gl_ModelViewMatrix * gl_Vertex);
    N = (gl_NormalMatrix * gl_Normal);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Fragment:


varying vec3 N;
varying vec3 v;

void main(void)
{
    vec3 L = normalize(gl_LightSource[0].position.xyz - v);
    vec4 diffuselight = gl_FrontLightProduct[0].diffuse * max(dot(N,L), 0.0);
    diffuselight = clamp(diffuselight, 0.0, 1.0);
    float l = length(L);
    float att = 1.0/(l*l);
    vec4 amb = vec4(.12,.12,.12,0);
    
    gl_FragColor = diffuselight*att+amb;
}

I know there are a lot of problems with this, but for the most part, it works fine. It strikes me as odd that they both have the exact same input, yet the output of the deferred one isn't even close to the same. You may not be able to answer this question, or it might need to be posted in a new question, but I'd love to get this figured out. Thanks for your help.

EDIT: As a side note, I have taken screenshots of the two shaders rendered side by side with the values rendered to the screen and then compared the colors in Photoshop. They were identical, which makes this even weirder to me.

First up, it looks like you render the object twice? Once for the position and again for the normals? In a deferred renderer you would normally output to multiple render targets rather than multiple renders. You also do not normally need to output the position to a render target, you can calculate the position in your light shader using the depth buffer and an inverse transform. With the following code, I should point out that I am not familiar with GLSL, as I typically work with HLSL in DirectX, so if I make some silly mistake please forgive me!
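
As a rough sketch of the single-pass idea (not your exact setup; the attachment order and the glDrawBuffers call on the application side are assumptions), the geometry-pass fragment shader can write all three buffers at once through gl_FragData:


// Geometry-pass fragment shader writing to multiple render targets.
// Requires an FBO with three color attachments and a matching
// glDrawBuffers() call on the application side.
uniform sampler2D diffuseTex;   // assumed albedo texture
varying vec3 pos;               // view-space position from the vertex shader
varying vec3 normal;            // view-space normal from the vertex shader
varying vec2 uv;

void main(void)
{
    gl_FragData[0] = vec4(pos, 1.0);                           // position buffer
    gl_FragData[1] = vec4(normalize(normal) * 0.5 + 0.5, 1.0); // normals packed to [0,1]
    gl_FragData[2] = texture2D(diffuseTex, uv);                // albedo buffer
}


One vertex shader then feeds all three outputs, so the scene only has to be drawn once for the whole G-buffer.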

From your description:


This makes really strange results like no shading from certain angles etc

This sounds like your deferred rendering is actually "working" but you're getting incorrect lighting. Also from the description it suggests your normal is incorrect, the normal must be in world space (well, "must" is a bit strong. But they should be in a space consistent with your lighting calculations). I would also check the surface format you are rendering your normal into. If it's 32bit RGBA then you need to compress your normal before you write it onto the surface, otherwise negative values cannot be stored. Something like:


// Output normals to surface...
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
normal = ( gl_NormalMatrix * gl_Normal ) * 0.5 + 0.5;

Here we compress the normal from the [-1,1] range to a [0,1] range suitable for storage in an unsigned render target.

We can convert back in the lighting shader with:


vec3 normal = normalize( texture2D(normalMap, texcoord).rgb * 2.0 - 1.0 ); // Convert back to [-1,1] range

In your deferred lighting shader I would also change:


vec4 diffuselight = vec4(max(dot(normal,lightVec), 0.0),max(dot(normal,lightVec), 0.0),max(dot(normal,lightVec), 0.0),0.0);

to:


vec3 diffuselight = color * max(dot(normal, lightVec), 0.0);

I am also unsure of what the multiply by 2 is for here:


diffuselight = clamp(diffuselight, 0.0, 1.0)*2.0;

Anyway, hopefully there's something helpful there.

n!

Make sure your render targets are of sufficient precision. If you're storing position in ARGB8 then you're going to have a bad time. ;)

I'm using frame buffers, but whenever I try to use more than GL_RGB or GL_RGBA it just shows up as a white screen, so I've been using GL_RGB. Could the problem be that my textures aren't high enough precision?
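
For what it's worth, allocating a higher-precision attachment usually looks something like the sketch below (this assumes GL 3.0 or ARB_texture_float for GL_RGB16F; the names are placeholders). One common cause of an all-white result is an incomplete texture, e.g. leaving the default mipmapped GL_TEXTURE_MIN_FILTER on a texture that has no mipmaps:


// Hypothetical 16-bit float color attachment for the position/normal buffers
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F,   // sized internal format
             width, height, 0,
             GL_RGB, GL_FLOAT, NULL);       // external format/type, no initial data
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // avoid an incomplete texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
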

Thanks for the tips. I implemented what you said, and I still got fairly similar results, though a bit better. I think now what's going on is the position. In my previous post I said I am using GL_RGB for my frame buffers. Should I be increasing that? Are the inaccuracies in the position buffer bad enough to destroy the render? Also, I've read that using view-space normals is fine, as long as everything else is in view space. The multiply by 2.0 at the end of your post was purely aesthetic, to make the lighting a little brighter when I was testing out the forward rendering shader (I just copied and pasted most of it).

EDIT: I tried bumping up the frame buffers to GL_RGB16 and nothing changed.

For position you'll want more than 16 bit float (usually, maybe not if you're using camera-relative positions). Your best bet is to use a single channel 32 bit texture for depth and recompute position in the shader before lighting (you can use the inverse viewprojection matrix or you can use the frustum corners trick).
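
For illustration, a rough sketch of the inverse view-projection route in GLSL (the uniform names here are assumptions, not anyone's actual code):


uniform sampler2D depthMap;    // depth written during the geometry pass
uniform mat4 invViewProj;      // inverse of (projection * view), uploaded by the application
varying vec2 texcoord;

vec3 reconstructWorldPos(vec2 uv)
{
    float depth = texture2D(depthMap, uv).r;                   // depth buffer value in [0,1]
    vec4 clip = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);  // back to NDC
    vec4 world = invViewProj * clip;
    return world.xyz / world.w;                                // undo the perspective divide
}


If you light in view space instead, use the inverse projection matrix alone and you get a view-space position back.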

For normals 8 bits can work fine but you'll get banding in smooth surfaces or weak normal detail (soft normal maps), so you might want some compression scheme or use a higher precision.

The rest can be 8-bit as it's usually texture data or 0-1 scalars for lighting info (glossiness). If you're using lighting buffers, you should use some higher-precision format. R11G11B10 seems to work just fine, but if you plan on having a large range of values you can go with 16 bit.

Never test your deferred lighting/shading approach using lower precision buffers. Always test with the highest precision you can afford and get everything working, then optimize the formats. Otherwise your issues might be with the buffers and not the data. :)

Thanks, that looks a bit better, but I still don't understand why the light is moving when the view changes. I'm using GLSL's light source, which reads from OpenGL's lights, and the OpenGL light is being transformed by the view matrix. The light still moves, whatever I seem to try.

It sounds like your light position might be camera-relative, or vice-versa. :)
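
If the G-buffer positions are in view space, one way to rule out a space mismatch is to skip gl_LightSource entirely and upload the light position yourself, already multiplied by the view matrix on the CPU. A sketch of the light-pass fragment shader under that assumption (lightPosView is a made-up uniform name):


uniform sampler2D positionMap;
uniform sampler2D normalMap;
uniform vec3 lightPosView;   // world-space light position * view matrix, set by the application
varying vec2 texcoord;

void main()
{
    vec3 position = texture2D(positionMap, texcoord).rgb;                      // view-space position
    vec3 normal   = normalize(texture2D(normalMap, texcoord).rgb * 2.0 - 1.0); // unpack to [-1,1]
    vec3 lightVec = normalize(lightPosView - position);                        // both in view space
    float diffuse = max(dot(normal, lightVec), 0.0);
    gl_FragColor  = vec4(vec3(diffuse), 1.0);
}


That way the light can't end up in a different space from the positions, because you control both.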

This topic is closed to new replies.
