Ripiz

Deferred Shading troubles


Hello,

I am trying to implement deferred shading, but I've run into an issue. First, some background details.

All the buffers:[list=1]
[*]SwapChain; R8G8B8A8_UNORM
[*]Depth Stencil; R32_TYPELESS
[*]Color Buffer; R8G8B8A8_UNORM
[*]Depth Buffer (linear); R32_FLOAT
[*]Normals Buffer; R8G8B8A8_UNORM (accuracy might be low, but it doesn't matter at this point)
[/list]

Rendering loop:[list=1]
[*]I use ID3D11DeviceContext::OMSetRenderTargets() to set #3, #4, #5 as render targets and #2 as the depth stencil.
[*]Draw all geometry: unshaded color goes into #3, normals go into #5, the vertex shader outputs transformedPosition.z, and the pixel shader divides it by the far plane to normalize it.
[*]Use ID3D11DeviceContext::OMSetRenderTargets() to set #1 as the render target, with no depth stencil.
[*]Set #3 and #5 as shader resources.
[*]Perform Phong shading.
[/list]
However, I'd also like to add FXAA (or some other post-process antialiasing). It will require #1 as input, so where do I output the result?
Outputting into #3 and then copying back to #1 seems wrong. How can this be solved?

Thank you in advance.

Post-processing is usually done by rendering to another render target texture. You don't write the result back into the G-Buffer, since the G-Buffer is meant solely for storing data about your geometry, and it could be required by other post-processing steps.

When you're done with post-processing you can write out the contents of your last post-processing step's render target to your actual back buffer, or you can simply set the back buffer as the render target for the last step.

[quote name='Radikalizm' timestamp='1344373269' post='4967142']
Post-processing is usually done by rendering to another render target texture. You don't write the result back into the G-Buffer, since the G-Buffer is meant solely for storing data about your geometry, and it could be required by other post-processing steps.
[/quote]
I'm not sure I understood you correctly; please tell me if I'm wrong.
So I have to keep the Color, Depth, and Normal Buffers unmodified at all times, perform Phong shading and output the result into extra RT #1, then perform antialiasing and output into extra RT #2, and then copy from extra RT #2 to the back buffer? Isn't it a bit of a memory waste to keep one extra RT for each pass?

[quote name='Ripiz' timestamp='1344411378' post='4967289']
[quote name='Radikalizm' timestamp='1344373269' post='4967142']
Post-processing is usually done by rendering to another render target texture. You don't write the result back into the G-Buffer, since the G-Buffer is meant solely for storing data about your geometry, and it could be required by other post-processing steps.
[/quote]
I'm not sure I understood you correctly; please tell me if I'm wrong.
So I have to keep the Color, Depth, and Normal Buffers unmodified at all times, perform Phong shading and output the result into extra RT #1, then perform antialiasing and output into extra RT #2, and then copy from extra RT #2 to the back buffer? Isn't it a bit of a memory waste to keep one extra RT for each pass?
[/quote]

That's basically how I do it, yes, and I assume that's how it's done in most cases. You can avoid wasting memory by re-using RTTs where possible, but this becomes harder when you need more 'exotic' texture formats for specific purposes, or differently sized render targets, which you will definitely need once you get to more advanced effects. Another problem with re-using RTTs arises when you run different branches of post-processing effects that need to be combined together: you have to make sure the contents of certain RTTs don't get invalidated.

Experiment with it a bit and see what works out for you.
It comes down to doing a good amount of profiling, I suppose.

[quote name='Ripiz' timestamp='1344411378' post='4967289']
then perform antialiasing and output into extra RT #2, and then copy from extra RT #2 to the back buffer? Isn't it a bit of a memory waste to keep one extra RT for each pass?
[/quote]

There's no need to render the antialiasing pass into a second extra render target and then copy it to the back buffer. You can just apply the antialiasing as you write to the back buffer.
