Deferred Shading

This topic is 3110 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


So far I've managed to set up deferred shading for the terrain part of my engine. Doing so got me thinking, though: how should I integrate this into the rest of my engine? I was thinking the best way would be to set up a deferred shading "manager" class of sorts, which sets up the G-buffer and then calls the draw method of each of the objects I want drawn, and to call this manager from my main game loop. Something like this in the main game class:
Draw(GameTime gameTime)
{
   Deferred.Draw()
   // Other drawing, such as the GUI which goes on top
}
Then for the deferred shading class:
Draw()
{
   // setup MRTs here

   Terrain.Draw()
   Models.Draw()
   // ... Etc

   // Resolve MRTs here
}
Is this a sensible way of doing things? Where should I be adding my lighting if I go for this approach? I suppose it should be in the deferred shading class, but would it be sensible to set up a LightManager class as well, and then pass the contents of the G-buffer to it? Finally, how do I prevent models etc. overwriting other objects that should be drawn in front of them? Say I draw my terrain first, then all my models: what if a model is behind a hill? How is this dealt with in deferred shading? Do I have to run occlusion tests for everything and order my draw calls?

Quote:

Finally, how do I prevent models etc. overwriting other objects that should be drawn in front of them? Say I draw my terrain first, then all my models: what if a model is behind a hill? How is this dealt with in deferred shading? Do I have to run occlusion tests for everything and order my draw calls?

The standard Z-buffer will take care of that.

Deferred shading differs little from forward shading: the only difference is that you don't shade while you render the scene, but after the scene has been drawn. And, of course, to be able to do that, you simply draw the scene with special materials that output depth/normal/colour/etc. info.
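To illustrate why draw order doesn't matter when filling the G-buffer, here is a toy sketch (not real graphics code; the struct and names are mine) of the per-fragment depth test the hardware performs on every render target write:

```cpp
#include <cassert>
#include <cfloat>

// One G-buffer pixel: depth plus a stand-in for the material
// attributes (normal/albedo/etc.) the geometry pass writes.
struct GBufferPixel {
    float depth    = FLT_MAX; // cleared to "far"
    int   material = -1;      // nothing written yet
};

// Depth-tested write, which is what the hardware does per fragment:
// the fragment only lands in the G-buffer if it is closer than what
// is already stored, regardless of submission order.
inline void writeFragment(GBufferPixel& px, float depth, int material) {
    if (depth < px.depth) { // standard less-than depth test
        px.depth    = depth;
        px.material = material;
    }
}
```

Whether the hill (near) or the model behind it (far) is drawn first, the near fragment's attributes end up in the G-buffer either way.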

Hi,

Your approach seems correct, but rendering all of the terrain geometry every frame may not be a good idea: if your terrain is big enough, it can be a performance killer :) To avoid this, I recommend using a spatial partitioning technique. Other expert members here can describe it in more depth, or you can google for it.
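The idea can be sketched like this (a toy example under my own assumptions: terrain pre-split into axis-aligned tiles, and a 2D rectangle standing in for the camera frustum):

```cpp
#include <cassert>
#include <vector>

// A terrain tile's bounds on the XZ plane (axis-aligned).
struct Rect { float minX, minZ, maxX, maxZ; };

inline bool overlaps(const Rect& a, const Rect& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

// Coarse spatial partitioning: only the tiles overlapping the view
// region are submitted for drawing; everything else is skipped.
std::vector<int> visibleTiles(const std::vector<Rect>& tiles, const Rect& view) {
    std::vector<int> result;
    for (int i = 0; i < (int)tiles.size(); ++i)
        if (overlaps(tiles[i], view))
            result.push_back(i);
    return result;
}
```

A quadtree/octree does the same test hierarchically, rejecting whole groups of tiles at once instead of testing each one.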

Quote:
Where should I be adding my lighting if I go for this approach? I suppose it should be in the deferred shading class, but would it be sensible to set up a LightManager class as well, then pass the contents of the g-buffer to it?


After filling the g-buffer, lighting must be done for every "active" light in your scene as a post process (*), and the per-light results must then be combined with additive alpha blending. A custom light manager class would be useful for storing your light parameters. For example, I'm using a light manager class to add/remove lights to the scene, determine light types, prepare convex light geometry, calculate light-space matrices (for projective texturing and/or shadow mapping), etc.

(*) Pass your render-target textures to your deferred-shading shader, do your lighting calculation there, and render a full-screen quad to show the results.
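The accumulation step can be sketched on the CPU like this (a toy model under my own assumptions: a single attenuation factor stands in for the real per-light BRDF evaluation against the stored normal and position):

```cpp
#include <cassert>
#include <vector>

struct Color { float r, g, b; };

// Per-light contribution for one G-buffer pixel; real shader code
// would use the stored normal/position, here attenuation alone
// stands in for the lighting model.
inline Color lightContribution(const Color& albedo, float attenuation) {
    return { albedo.r * attenuation, albedo.g * attenuation, albedo.b * attenuation };
}

// The deferred lighting loop: one full-screen (or light-volume) pass
// per active light, with the results summed via additive blending.
Color shadePixel(const Color& albedo, const std::vector<float>& lightAttenuations) {
    Color sum = { 0.0f, 0.0f, 0.0f };
    for (float a : lightAttenuations) {
        Color c = lightContribution(albedo, a);
        sum.r += c.r; sum.g += c.g; sum.b += c.b; // additive blend
    }
    return sum;
}
```

On the GPU the sum happens for free: set the blend mode to additive (src + dst) and render each light's pass into the same target.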

Quote:
Finally, how do I prevent models etc. overwriting other objects that should be drawn in front of them? Say I draw my terrain first, then all my models: what if a model is behind a hill? How is this dealt with in deferred shading? Do I have to run occlusion tests for everything and order my draw calls?


It can be done with occlusion tests, or early-z culling can be used (I'm not sure; experts, correct me :) ).

Hope this helps.
Regards,
Rohat.

Thanks to both of you.

I was having a slightly stupid moment there with the z-buffer; obviously it's already taken care of, since all I'm doing is outputting to MRTs instead of the backbuffer.

programci_84:
Don't worry, my terrain is already frustum culled. It's clipmap-based and renders massive terrain pretty fast anyway.

To step onto a more abstract level, I suggest using a general View class. Next you need a data structure to store your renderable objects in, such as a ... well, scene graph (no, I didn't really say that no-no word, but you get the idea). The View class can do forward as well as deferred shading and is responsible for activating the corresponding shaders and render targets. After that it just calls the scene gra... er, well, you know whom, to render itself. This data structure then goes through its hierarchy and renders all renderable objects. It doesn't care about forward versus deferred shading, as long as your View class provides an interface for activating materials, which internally decides which material attributes are actually required by the currently active shaders (forward versus deferred).

Finally, when ... (you know whom) is done with rendering, the View class either is finished or proceeds to apply post-processing effects. With both forward and deferred shading you can have bloom, screen-space ambient occlusion, and things like that. With deferred shading only, there is additionally the shading post-process itself to be done; again, you need ... (you know whom) to provide a list of the lights in the scene for that.

The bottom line is that the View class encapsulates rendering to render targets, managing the shaders, and applying the post-processing effects. It does not matter much whether you are using forward or deferred shading, unless you come across transparent objects, of course, where it is even helpful to have a forward rendering path as well.

But this should get you started.

Urgh, scene graphs. I never fully understood the concept.

From what I've read, they seem just the same as an octree for all the objects in your world. Would that be a correct (for a certain degree of correct) statement?

I hope so, since I understand octrees.

Unfortunately I'm a mathematician first and a computer scientist second, so all the tutorials/explanations out there that go into detail on the math and assume the programming knowledge are useless to me! Linear/matrix algebra I can do in my sleep.

Yep, that's true. Scene graphs are just a tool for managing the objects in your scene. Now people will discuss me to death, since you can demand a lot more of a scene graph, but for the purposes outlined above an octree or a simple binary tree or whatever is perfectly valid to use as a "scene manager". You only need it to sort out the visibility of your objects and to be able to ask the objects to render themselves, or to kick off the appropriate renderer. So the View class can just use it as a black box to render your virtual world.
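The "black box" amounts to little more than this (a minimal sketch with names of my own choosing; transforms and materials omitted):

```cpp
#include <cassert>
#include <vector>

// Minimal scene-graph node: children plus a visibility flag are
// enough to show the traversal the View class relies on.
struct SceneNode {
    bool visible = true;
    std::vector<SceneNode*> children;
};

// Depth-first traversal; an invisible node prunes its whole subtree,
// which is exactly where octree/frustum culling would plug in.
// Returns how many nodes would be rendered.
int renderCount(const SceneNode* node) {
    if (!node->visible) return 0;
    int count = 1; // "render" this node
    for (const SceneNode* child : node->children)
        count += renderCount(child);
    return count;
}
```

A real version would call `node->render()` instead of counting, but the structure (recurse, prune invisible subtrees) is the whole trick.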

I'm guessing the scene graph could be adapted to other purposes, such as collision detection (casting a ray and subdividing nodes, etc.)? Is that what you were getting at?

It's definitely something worth considering while I'm setting up the deferred rendering framework/View class. I'm not interested in such uses right now, but I will be in the future.

Well, yes, you can of course use your scene manager (whatever it happens to be: octree, scene graph, ...) for collision detection as well. It's just a matter of clean interface design [grin]

I've put some more thought into it and come up with the following:

Resolved G-buffer as a structure:
struct Gbuffer
{
    // The three resolved render-target textures,
    // e.g. albedo, normal, and depth.
    Texture2D Albedo;
    Texture2D Normal;
    Texture2D Depth;
}



DeferredRender class:

Draw()
{
    // Setup MRTs

    // Draw various components of scene

    // Resolve MRTs into Gbuffer struct

    PostProcessManager.Draw(Gbuffer);

    // Full-screen quad render of the post-process results
}



The PostProcessManager will handle each of my post-process effects (lights, etc.) and pass the altered image along between them.

I was considering creating a Gbuffer class (to handle creation and clearing) with three render targets, and then renaming the struct above to ResolvedGbuffer. Seems... messy though, or am I overthinking this?
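For what it's worth, the wrapper-class idea can be quite small; here is a sketch (in C++ rather than the C#-style pseudocode above, with a stubbed RenderTarget of my own invention just to make it self-contained):

```cpp
#include <array>
#include <cassert>

// Stand-in for an API render target; only tracks whether it was cleared.
struct RenderTarget {
    bool cleared = false;
    void clear() { cleared = true; }
};

// A thin G-buffer wrapper: owns the three targets and clears them
// together, so the DeferredRender class's Draw() only deals with one
// object. The resolved textures can still be handed around as a plain
// struct afterwards.
class GBuffer {
public:
    void clearAll() {
        for (RenderTarget& rt : m_targets)
            rt.clear();
    }
    RenderTarget& target(int i) { return m_targets[i]; }
private:
    std::array<RenderTarget, 3> m_targets; // albedo, normal, depth
};
```

Whether this is worth a class or stays a bare struct is taste; the wrapper mainly buys you one place for creation/clearing logic.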

Don't nail me on compilable code, but here is roughly how I would approach this, in C++ pseudocode. First, define an enumeration of the passes your View class should support. Note that, by design, it does not divide rendering from post-processing, to keep things simple for the users of your interface.


enum RenderPipelineStage_t
{
    RPS_SCENE_FORWARD_SHADING = 1,        // renders scene with forward lighting
    RPS_SCENE_FOR_DEFERRED_RENDERING = 2, // renders scene into the implicit G-buffer
    RPS_DEFERRED_SHADING = 4,             // applies deferred shading (don't use with a forward-rendered scene)
    RPS_BLOOM = 8,                        // bloom as post-processing effect
    RPS_AMBIENT_OCCLUSION = 16            // SSAO (requires the G-buffer, or the forward-rendered scene must store a depth pass as well)
};


Next you have a view class which looks roughly like this:


class View
{
public:
    View() : m_all_stages(0) {}
    ~View();

    void processStage(RenderPipelineStage_t stage)
    {
        m_all_stages |= stage;

        initializeRenderTargetsForStage(stage);

        switch (stage)
        {
        case RPS_SCENE_FORWARD_SHADING:
            m_forward_scene_and_depth->activate();
            m_scene_manager->renderScene(true /* shading */);
            m_forward_scene_and_depth->deactivate();
            break;

        case RPS_SCENE_FOR_DEFERRED_RENDERING:
            for (uint i = 0; i < 4; ++i)
                m_g_buffer[i]->activate();

            m_scene_manager->renderScene(false /* no shading */);

            for (uint i = 0; i < 4; ++i)
                m_g_buffer[i]->deactivate();

            break;

        case RPS_DEFERRED_SHADING:
            // blabla activate RT, apply post-processing effect
            break;

        case RPS_BLOOM:
            // blabla activate RT, apply post-processing effect
            break;

        case RPS_AMBIENT_OCCLUSION:
            // blabla activate RT, apply post-processing effect
            break;
        }
    }

    void presentScene()
    {
        // activate the corresponding shader to compose the final image

        // -> checking m_all_stages tells you which stages must be composed

        // render the final composite image as a fullscreen rectangle

        m_all_stages = 0;
    }

private:
    void initializeRenderTargetsForStage(RenderPipelineStage_t stage)
    {
        switch (stage)
        {
        // blabla ... create the render target for this stage if it does not exist yet
        }
    }

private:
    uint m_all_stages;
    ISceneManager* m_scene_manager; // might be an octree or scene graph
    RenderTarget* m_g_buffer[4];
    RenderTarget* m_forward_scene_and_depth;
    RenderTarget* m_bloom;
    RenderTarget* m_ssao;
    [...]
};


Basically, that's all there is to it. Once you have a View class instance, you just call processStage() for each stage you want to use. Obviously there is a certain order to the effects, since most stages expect other stages as input. For example, the rendered scene is input to most other stages, and the depth (either from the G-buffer or from a forward-rendered depth pass) is input to the screen-space ambient occlusion pass.

Once you have processed all stages you call presentScene() to create a final composite image from all stage results and render that to the back buffer before swapping.

Also note that there is no G-buffer class as such. I consider it overkill, because the G-buffer is really just an implicit structure that bundles four render targets.
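The bookkeeping that presentScene() relies on can be shown in isolation (a self-contained sketch mirroring the flag accumulation in processStage() above; accumulateStages is my name for it):

```cpp
#include <cassert>

// Mirrors the RenderPipelineStage_t flags of the View class above.
enum RenderPipelineStage_t
{
    RPS_SCENE_FORWARD_SHADING = 1,
    RPS_SCENE_FOR_DEFERRED_RENDERING = 2,
    RPS_DEFERRED_SHADING = 4,
    RPS_BLOOM = 8,
    RPS_AMBIENT_OCCLUSION = 16
};

// Accumulate the processed stages exactly as processStage() does, so
// that presentScene() can inspect the mask and compose only the
// stage results that were actually rendered this frame.
unsigned accumulateStages(const RenderPipelineStage_t* stages, int count)
{
    unsigned mask = 0;
    for (int i = 0; i < count; ++i)
        mask |= stages[i];
    return mask;
}
```

So a typical deferred frame would process RPS_SCENE_FOR_DEFERRED_RENDERING, RPS_DEFERRED_SHADING, and RPS_BLOOM, and presentScene() would see from the mask that no forward pass needs compositing.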

Hope that helps.

