Basic structure of a deferred rendering?

korvax    309

Hello, I have some trouble understanding how the basic structure of a deferred rendering solution should be.

 

So this is pretty much where I am.

 

I have one shader with 3 Targets

struct PSOutput
{
  float4 Color    : SV_Target0;
  float3 Normal   : SV_Target1;
  float3 Specular : SV_Target2;
};

I fill them with data. (I can see the different targets in my program as textures and they all look fine.)

 output.Color = diffuse*txDiffuse.Sample(samLinear, input.texcoord);
 output.Normal = input.normal;
 output.Specular = specular;

But now the tricky part. If I have understood this correctly, I need to send these values into another shader that does the "deferred lighting calculation". I don't really care about the "calculation" part right now; I'm mainly interested in how to structure this. How do I use/transfer these targets in the "light calculation shader"? And do I need to render the scene twice, first with the basic shader above and then a second time with the "light calculation shader"?
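In DX11 terms, I picture the second shader receiving the targets as plain textures, something like this (just a sketch with made-up register slots and names, so correct me if I'm wrong):

```hlsl
// Hypothetical inputs for the "light calculation shader":
// the three render targets from the first pass, now bound as shader resources.
Texture2D gColorTex    : register(t0); // SV_Target0 from the first pass
Texture2D gNormalTex   : register(t1); // SV_Target1
Texture2D gSpecularTex : register(t2); // SV_Target2
SamplerState samPoint  : register(s0);
```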

 

I'm using DirectX 11, so any sample in DX11 is much appreciated.

korvax    309

Hi, thank you for your help so far. Things are a bit clearer now, but a few points remain:

 

Have I understood this correctly?

1) Render the light geometry? Render the whole scene with all the objects so I can get the correct data into the G-buffers.

2) Render a simple 2D quad over the screen with the second shader, with all the textures as input (this is your shader example).

3) Combine the two outputs? I'm not sure I understand you here; isn't the second shader the final output that will be drawn on the quad?

SeraphLance    2603

Not quite.

 

1.  Render your geometry to buffers.  (You appear to already do this)

2.  Load your buffers into textures and perform the lighting equations.  You have a lot of control over how to do this, and people argue about some of the nuances.  For a first shot, I'd honestly just render a fullscreen quad and perform the lighting on that.  The result of this shader (or shaders) will be an accumulation buffer of all the light on your rendered geometry.

3.  Multiply the diffuse colors of your G-buffer and the light buffer together.
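As a rough sketch of step 2 in HLSL (assumed names and register slots; a single directional light accumulated on a fullscreen quad):

```hlsl
// Fullscreen-quad pixel shader for one directional light (sketch).
// gNormalTex is the normal G-buffer target bound as a shader resource;
// vLightDir and lightColor come from a constant buffer.
Texture2D gNormalTex  : register(t0);
SamplerState samPoint : register(s0);

cbuffer LightCB : register(b0)
{
    float3 vLightDir;  // direction *towards* the light, normalized
    float3 lightColor;
};

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 n = normalize(gNormalTex.Sample(samPoint, uv).xyz);
    float ndotl = saturate(dot(n, vLightDir));
    // Written to the light accumulation buffer; with additive blending,
    // each light's contribution sums into the same target.
    return float4(lightColor * ndotl, 1.0f);
}
```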

 

Here's a reference image ripped from google:

 

[Image: GBuffer9.png]

 

Left Column:

Diffuse

Normal

Depth

Light Buffer

 

The right image is the diffuse and light buffer multiplied together.

 

EDIT:  It appears my shader above is a bit misleading.  I don't actually *use* the diffuse texture in that shader.  It actually gets compiled out entirely.  The lighting stage is a combination of the normals, height, and any material properties you happen to store.  The diffuse isn't used at all until the third stage.
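The third stage's combine pass can be as simple as this (a sketch with assumed resource names):

```hlsl
// Final fullscreen pass: multiply the G-buffer diffuse by the
// accumulated light buffer.
Texture2D gDiffuseTex : register(t0);
Texture2D gLightTex   : register(t1);
SamplerState samPoint : register(s0);

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 diffuse = gDiffuseTex.Sample(samPoint, uv);
    float4 light   = gLightTex.Sample(samPoint, uv);
    return diffuse * light;
}
```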

Edited by SeraphLance

korvax    309

Hi, again your help is very much appreciated.

 

 

2.  Load your buffers into textures and perform the lighting equations.  You have a lot of control over how to do this, and people argue about some of the nuances.  For a first shot, I'd honestly just render a fullscreen quad and perform the lighting on that.  The result of this shader (or shaders) will be an accumulation buffer of all the light on your rendered geometry.

I have some trouble understanding the lighting-equations part, not the math itself, but how this is actually applied in your program. Is this shader applied to a fullscreen quad, and is it executed for every light you have in the scene, since every light has its own vLightVector and lightIntensity? Or how are you applying this?

SeraphLance    2603

Hi, again your help is very much appreciated.

 

 

2.  Load your buffers into textures and perform the lighting equations.  You have a lot of control over how to do this, and people argue about some of the nuances.  For a first shot, I'd honestly just render a fullscreen quad and perform the lighting on that.  The result of this shader (or shaders) will be an accumulation buffer of all the light on your rendered geometry.

I have some trouble understanding the lighting-equations part, not the math itself, but how this is actually applied in your program. Is this shader applied to a fullscreen quad, and is it executed for every light you have in the scene, since every light has its own vLightVector and lightIntensity? Or how are you applying this?

 

 

You're more or less correct here.

Each light is a separate shader pass.  You want to use additive blending to accumulate the contributions of each light into a single light buffer.  Which geometry you use in these shader passes depends on your needs.  Typically I see full-screen quads used for directional lights, spheres for point lights, and cones for spotlights, but I've heard talk of screen-aligned bounding boxes too.

Once you've written all of your lights to the light buffer, you do one final shader pass -- a fullscreen quad with samples from your diffuse and light maps multiplied together.
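A point-light pass might look roughly like this (a sketch with assumed names; it also assumes a position G-buffer target, though many engines reconstruct position from depth instead, and the additive blend state is set on the CPU side):

```hlsl
// Point-light pixel shader sketch, drawn as a sphere covering the
// light's radius.
Texture2D gNormalTex   : register(t0);
Texture2D gPositionTex : register(t1);
SamplerState samPoint  : register(s0);

cbuffer PointLightCB : register(b0)
{
    float3 lightPos;
    float  lightRadius;
    float3 lightColor;
};

float4 PS(float4 pos : SV_Position) : SV_Target
{
    float2 uv = pos.xy / float2(1280.0f, 720.0f); // assumed screen size
    float3 p = gPositionTex.Sample(samPoint, uv).xyz;
    float3 n = normalize(gNormalTex.Sample(samPoint, uv).xyz);
    float3 toLight = lightPos - p;
    float  dist    = length(toLight);
    float  atten   = saturate(1.0f - dist / lightRadius); // simple falloff
    float  ndotl   = saturate(dot(n, toLight / dist));
    // Additive blending (D3D11_BLEND_ONE / D3D11_BLEND_ONE) accumulates
    // each light's result into the light buffer.
    return float4(lightColor * ndotl * atten, 1.0f);
}
```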

