Basic structure of a deferred renderer?


Hello, I'm having some trouble understanding what the basic structure of a deferred rendering solution should look like.

So this is pretty much where I am.

I have one shader with three render targets:


struct PSOutput
{
  float4 Color : SV_Target0;    // diffuse/albedo
  float3 Normal : SV_Target1;   // surface normal
  float3 Specular : SV_Target2; // specular term
};

I fill them with data (I can see the different targets in my program as textures, and they all look fine):


 output.Color = diffuse*txDiffuse.Sample(samLinear, input.texcoord);
 output.Normal = input.normal;
 output.Specular = specular;
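For context, with the declarations filled in, the whole G-Buffer pixel shader looks roughly like this (the input struct and constant buffer layout here are just a sketch, not the exact code):

Texture2D txDiffuse : register(t0);
SamplerState samLinear : register(s0);

cbuffer MaterialBuffer : register(b0) // layout is illustrative
{
	float4 diffuse;
	float3 specular;
	float  pad;
};

struct PS_INPUT // assumed vertex shader output
{
	float4 position : SV_POSITION;
	float3 normal   : NORMAL;
	float2 texcoord : TEXCOORD0;
};

PSOutput PS(PS_INPUT input)
{
	PSOutput output;
	output.Color    = diffuse * txDiffuse.Sample(samLinear, input.texcoord);
	output.Normal   = input.normal; // view- or world-space normal
	output.Specular = specular;
	return output;
}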

But now the tricky part. If I have understood this correctly, I need to send these values into another shader that does the "deferred lighting calculation". I don't really care about the "calculation" part right now; I'm mainly interested in how to structure this. How do I use/transfer these targets in the "light calculation shader"? And do I need to render the scene twice, first with the basic shader above and then a second time with the "light calculation shader"?

I'm using DirectX 11, so any sample in DX11 is much appreciated.


You have your render targets for your G-Buffer (those would be your color/normal/specular values, and you need depth as well). Bind them as textures in another shader. That shader does something like this:

Render a light geometry (my simple renderer uses a fullscreen quad for directional lights) in screen space (with no transforms).

Across the texture coordinates of this quad, sample your textures and do your math, writing to a new lighting buffer.

Then, you have another shader that does this:

Render another fullscreen quad, using the diffuse and light buffers as textures.

Combine the two buffers for your output.
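That combine pass isn't shown in the code below, so here is a minimal sketch of what it could look like. The resource names and register slots are placeholders, and it assumes the same fullscreen vertex shader that follows:

Texture2D tDiffuse : register(t0);   // diffuse colour from the G-Buffer
Texture2D tLight   : register(t1);   // accumulated light buffer
SamplerState sPoint : register(s0);

float4 CombinePS(float4 position : SV_POSITION,
                 float2 texCoord : TEXCOORD0) : SV_TARGET
{
	float4 diffuse = tDiffuse.Sample(sPoint, texCoord);
	float4 light   = tLight.Sample(sPoint, texCoord);
	return diffuse * light;   // modulate albedo by the accumulated lighting
}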

Here is the relevant information I use for the lighting stage in my simple, modest deferred renderer. Note that this is for a directional light. Point lights are somewhat harder:

EDIT: If the vertex shader looks strange to you, it's basically rendering a large triangle with an interior quad that covers the screen with the correct texture coordinates. Ergo, sort of a fullscreen quad. Doing it this way means I don't need to write another vertex buffer just for the fullscreen render, and it simplifies things a bit.


// G-Buffer inputs (register assignments here are illustrative)
Texture2D tDiffuse : register(t0);
Texture2D tNormal  : register(t1);
Texture2D tDepth   : register(t2);
SamplerState sDiffuse : register(s0);
SamplerState sNormal  : register(s1);
SamplerState sDepth   : register(s2);

// Per-light constants (layout here is an assumption)
cbuffer LightBuffer : register(b0)
{
	float3 vLightVector;   // light direction
	float  lightIntensity;
	float3 lightColor;
	float  pad;
};

struct VertexShaderOutput
{
	float4 position : SV_POSITION;
	float2 texCoord : TEXCOORD0;
};

// Fullscreen triangle generated from the vertex ID; no extra vertex buffer needed
VertexShaderOutput VertexShaderFunction(uint id : SV_VertexID)
{
	VertexShaderOutput Output;
	Output.texCoord = float2((id << 1) & 2, id & 2);
	Output.position = float4(Output.texCoord * float2(2,-2) + float2(-1,1), 0, 1);
	return Output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : SV_TARGET
{
	float4 color  = tDiffuse.Sample(sDiffuse, input.texCoord);     // not actually used here (see note below)
	float4 normal = tNormal.Sample(sNormal, input.texCoord)*2 - 1; // decode normal from [0,1] to [-1,1]
	float4 depth  = tDepth.Sample(sDepth, input.texCoord);

	// Simple N.L term for a directional light
	float irradiance = saturate(dot(normal.xyz, -vLightVector))*lightIntensity;
	return float4(lightColor*irradiance, 1);
}

Hi, thank you for your help so far. Things are a bit clearer now, but still:

Have I understood this correctly?

1) Render a light geometry: render the whole scene with all the objects so I can get the correct data into the G-Buffers.

2) Render a simple 2D quad over the screen with the second shader, with all the textures as input (this is your shader example).

3) Combine the two outputs? Not sure I understand you here; isn't the second shader the final output that will be drawn on the quad?

Not quite.

1. Render your geometry to buffers. (You appear to already do this)

2. Load your buffers into textures and perform your lighting equations. You have a lot of control over how to do this, and people argue about some of the nuances. For a first shot, I'd honestly just render a fullscreen quad and do the lighting on that. The result of this shader (or shaders) will be an accumulation buffer of all the light on your rendered geometry.

3. Multiply the diffuse colors of your G-Buffer and the light buffer together.

Here's a reference image ripped from Google:

[Image: GBuffer9.png]

Left Column:

Diffuse

Normal

Depth

Light Buffer

Right image is the diffuse and light buffer multiplied together.

EDIT: It appears my shader above is a bit misleading. I don't actually *use* the diffuse texture in that shader; it actually gets compiled out entirely. The lighting stage is a combination of the normals, depth, and any material properties you happen to store. The diffuse isn't used at all until the third stage.

Hi, again, your help is very much appreciated.

2. Load your buffers into textures and perform your lighting equations. You have a lot of control over how to do this, and people argue about some of the nuances. For a first shot, I'd honestly just render a fullscreen quad and do the lighting on that. The result of this shader (or shaders) will be an accumulation buffer of all the light on your rendered geometry.

I have some trouble understanding the lighting equations part of it, not the math itself, but more how this is actually applied in your program. Is this shader applied to a fullscreen quad, and is it executed for every light you have in the scene, since every light has its own vLightVector and lightIntensity? Or how are you applying this?

You're more or less correct here.

Each light is a separate shader pass. You want to use additive blending to accumulate the contributions of each light into a single light buffer. Which geometry you use in these passes depends on your needs: typically I see full-screen quads used for directional lights, spheres used for point lights, and cones used for spotlights, but I've heard talk of screen-aligned bounding boxes too.
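For illustration only, a point-light pass might look roughly like the sketch below. The constant names (InvProjection, lightPosVS, lightRadius, screenSize) are placeholders, it assumes the hardware depth buffer is bound as a texture, and the sphere volume is rasterized with its normal world/view/projection transform:

Texture2D tNormal : register(t1);
Texture2D tDepth  : register(t2);
SamplerState sPoint : register(s0);

cbuffer PointLight : register(b0) // per-light constants, layout is illustrative
{
	float4x4 InvProjection; // inverse of the camera projection matrix
	float3   lightPosVS;    // light position in view space
	float    lightRadius;
	float3   lightColor;
	float    lightIntensity;
	float2   screenSize;
	float2   pad;
};

float4 PointLightPS(float4 svPos : SV_POSITION) : SV_TARGET
{
	// The sphere is rasterized normally, so derive the G-Buffer
	// texture coordinates from the pixel position.
	float2 uv = svPos.xy / screenSize;

	float3 normal = tNormal.Sample(sPoint, uv).xyz * 2 - 1;
	float  depth  = tDepth.Sample(sPoint, uv).r;

	// Reconstruct view-space position from the depth buffer.
	float4 clipPos = float4(uv * float2(2, -2) + float2(-1, 1), depth, 1);
	float4 viewPos = mul(clipPos, InvProjection);
	viewPos /= viewPos.w;

	float3 toLight = lightPosVS - viewPos.xyz;
	float  dist    = length(toLight);
	float  atten   = saturate(1 - dist / lightRadius);
	float  ndotl   = saturate(dot(normal, toLight / dist));

	// Written into the light buffer with additive blending enabled.
	return float4(lightColor * ndotl * atten * lightIntensity, 1);
}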

Once you've written all of your lights to the light buffer, you do one final shader pass: a fullscreen quad where samples from your diffuse and light maps are multiplied together.

Excellent, thanks for your help, I've got that sorted now.

