
### SeraphLance

Posted 20 September 2013 - 02:35 PM

You have your render targets for your G-Buffer (those would be your color/normal/specular values, and you need depth as well). Bind them as textures in another shader pass. That pass looks something like this:
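As a quick sketch of the geometry-pass side (the struct and target names here are illustrative assumptions, not from the post), writing those G-Buffer render targets in HLSL looks like a multiple-render-target output struct:

```hlsl
// Hypothetical G-buffer layout for the geometry pass (names assumed).
// Each member writes to one of the simultaneously bound render targets.
struct GBufferOutput
{
    float4 color    : SV_Target0; // albedo
    float4 normal   : SV_Target1; // normal, packed into [0,1]
    float4 specular : SV_Target2; // specular color / power
};
```

Depth can come either from sampling the depth buffer itself or from a dedicated depth render target, depending on your setup.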

Render a light geometry (my simple renderer uses a fullscreen quad for directional lights) in screen space (with no transforms).

Across the texture coordinates of this quad, sample your textures and do your math, writing to a new lighting buffer.

Then, you have another shader that does this:

Render another fullscreen quad, using the diffuse and light buffers as textures.

Combine the two buffers for your output.
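A minimal combine pixel shader for that last step might look like the following. This is a sketch: the texture and sampler names are assumptions, and a straight modulate is the simplest possible combine.

```hlsl
Texture2D tDiffuse;  SamplerState sDiffuse;  // color from the G-buffer
Texture2D tLight;    SamplerState sLight;    // output of the lighting pass

float4 CombinePS(float4 pos : SV_POSITION, float2 texCoord : TEXCOORD0) : SV_Target
{
    float4 albedo = tDiffuse.Sample(sDiffuse, texCoord);
    float4 light  = tLight.Sample(sLight, texCoord);
    return albedo * light; // simplest combine: modulate diffuse by accumulated light
}
```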

Here is the relevant information I use for the lighting stage in my simple, modest deferred renderer.  Note that this is for a directional light.  Point lights are somewhat harder:

EDIT: If the vertex shader looks strange to you, it's generating a single large triangle whose interior covers the screen with the correct texture coordinates; in effect, a fullscreen quad. Doing it this way means I don't need to create a vertex buffer just for the fullscreen render, and it simplifies things a bit.

struct VertexShaderOutput
{
    float4 position : SV_POSITION;
    float2 texCoord : TEXCOORD0;
};

// Signature reconstructed: the shader is driven purely by the vertex ID,
// so it takes SV_VertexID and nothing else.
VertexShaderOutput FullscreenVS(uint id : SV_VertexID)
{
    VertexShaderOutput Output;
    Output.texCoord = float2((id << 1) & 2, id & 2);
    Output.position = float4(Output.texCoord * float2(2,-2) + float2(-1,1), 0, 1);
    return Output;
}
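To see why the bit-twiddling covers the screen, here are the three vertices it produces, worked out by hand from the code above:

```hlsl
// id = 0: texCoord = (0,0) -> position = (-1,  1)   top-left corner
// id = 1: texCoord = (2,0) -> position = ( 3,  1)   off-screen right
// id = 2: texCoord = (0,2) -> position = (-1, -3)   off-screen bottom
// The triangle's interior clips to the full [-1,1] viewport, and the
// interpolated texCoord spans exactly [0,1] across that visible region.
```

Since no per-vertex data is read, you can issue this with a plain three-vertex draw call (in D3D11, `Draw(3, 0)`) with no vertex buffer bound.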

// Texture/sampler declarations implied by the Sample calls below:
Texture2D tDiffuse;  SamplerState sDiffuse;
Texture2D tNormal;   SamplerState sNormal;
Texture2D tDepth;    SamplerState sDepth;

float4 LightingPS(VertexShaderOutput input) : SV_Target
{
    float4 color  = tDiffuse.Sample(sDiffuse, input.texCoord);
    float4 normal = tNormal.Sample(sNormal, input.texCoord) * 2 - 1; // unpack to [-1,1]
    float4 depth  = tDepth.Sample(sDepth, input.texCoord);

    // ... lighting math elided in the original post ...
}
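The body left out above is the per-pixel light math. For a directional light it typically reduces to something like the following sketch; `vLightDir` and `vLightColor` are hypothetical constant-buffer values, not names from the original post:

```hlsl
cbuffer LightParams // hypothetical constants; not from the original post
{
    float3 vLightDir;    // normalized, pointing toward the light
    float4 vLightColor;
};

float4 ShadeDirectional(float4 albedo, float4 unpackedNormal)
{
    float3 N     = normalize(unpackedNormal.xyz);  // already unpacked to [-1,1]
    float  NdotL = saturate(dot(N, vLightDir));    // Lambertian term
    return albedo * vLightColor * NdotL;
}
```

Depth isn't needed for a pure directional term; it comes into play once you reconstruct position for point lights and attenuation.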

