jmaupay

Deferred rendering + MRT + MSAA



For short: is this a good solution on modern hardware (say a GF8800)? And how do you get hardware anti-aliasing (MSAA) with deferred shading?

Long explanation: I had a look at these threads: thread424468, thread396527, and Yann L's questions.

For the moment, I have implemented render-to-texture (FBO) in my engine (with MSAA, using glBlitFramebufferEXT) and it works pretty well. But all the calculations (shadow mapping, ...) are done during that first pass. Then I do post-processing for some simple 2D effects. Now I would like to do the shadows/lighting/fog/etc. as post-processing passes. My first idea is to use multiple render targets (MRT) in the first pass to implement deferred shading. But I have doubts:

a) If I understand Yann, it is not a good solution (not possible) to use MSAA MRTs for the deferred rendering buffers (diffuse / position / normals) and then resolve (blit) them to textures. Am I right? Does this give strange textures for position/normals? Is it possible to use non-AA renderbuffers for position/normals?

b) What are the alternative solutions? (I don't want to do the AA myself, please.) Yann says he has a first pass that renders into MSAA buffers and then a second pass that renders the deferred buffers into non-AA buffers? Could the first pass also be used as a Z-prepass, making the second pass very cheap?

c) Other ideas? Blending? Avoid deferred altogether?

Quote:
a) If I understand Yann, it is not a good solution (not possible) to use MSAA MRTs for the deferred rendering buffers (diffuse / position / normals) and then resolve (blit) them to textures. Am I right? Does this give strange textures for position/normals? Is it possible to use non-AA renderbuffers for position/normals?

Right.
Think of it like this (4× MSAA).
Forward rendering:
FwdPixel = (F(Normal0, Position0, Color0) + F(Normal1, Position1, Color1) + F(Normal2, Position2, Color2) + F(Normal3, Position3, Color3)) / 4

Where F is your pixel shader (i.e. lighting, texture lookups, etc.), and 0, 1, 2, 3 are the fragment numbers.

Deferred rendering:
DefPixel = F((Normal0 + Normal1 + Normal2 + Normal3) / 4, (Position0 + Position1 + Position2 + Position3) / 4, (Color0 + Color1 + Color2 + Color3) / 4)

In order to make FwdPixel = DefPixel, there is only a very limited set of Fs that you can use.
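That inequality is easy to check numerically. The Python sketch below is illustrative, not from the thread: a hypothetical power function stands in for a specular-style pixel shader F, and averaging the inputs first (what a resolved G-buffer gives you) produces a very different pixel than averaging F's outputs.

```python
def f(n_dot_l):
    # Hypothetical non-linear pixel shader: a specular-style power term.
    return max(n_dot_l, 0.0) ** 8

# Illustrative N.L values at the four sub-samples of an edge pixel.
samples = [0.0, 0.2, 0.9, 1.0]

# Forward MSAA: shade every sub-sample, then average the shaded results.
fwd_pixel = sum(f(s) for s in samples) / 4

# Deferred with a pre-resolved G-buffer: average first, then shade once.
def_pixel = f(sum(samples) / 4)

print(fwd_pixel)  # ~0.358
print(def_pixel)  # ~0.006 -- far darker: the two pipelines disagree
```

The more non-linear F is (high specular exponents, shadow tests, fog), the worse the disagreement at geometry edges, which is exactly where MSAA matters.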

Quote:
Original post by eq

Deferred rendering:
DefPixel = F((Normal0 + Normal1 + Normal2 + Normal3) / 4, (Position0 + Position1 + Position2 + Position3) / 4, (Color0 + Color1 + Color2 + Color3) / 4)


OK, I see. So with deferred shading, the post-processing shader receives the mean (average) of the normal, position and color sub-fragments.

For my information, what is the result? A blurred image? An aliased image? I imagine that post-processing shaders (lighting, ...) don't work nicely with that sort of input? (Does someone have a screenshot?)

[Edited by - jmaupay on May 15, 2007 6:10:50 AM]

Quote:
OK, I see. So with deferred shading, the post-processing shader receives the mean (average) of the normal, position and color sub-fragments.

Yes: the normal, position and all the other data are stored in textures (and/or surfaces), and textures don't store fragments; they store pixels.
The hardware merges the fragments into pixels on the destination write. (This is also why MSAA is more efficient than SSAA: the coverage and depth tests still run per sample, but the shader runs only once per pixel, so in the 4× case you do roughly a quarter of the shading work.)
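A back-of-the-envelope check of that efficiency argument, with illustrative resolution numbers (not from the thread): at 4× MSAA the pixel shader runs once per pixel, while 4× SSAA runs it once per sub-sample.

```python
# Illustrative target size and AA level, not from the thread.
width, height, aa = 1280, 1024, 4

msaa_shader_runs = width * height        # MSAA: shade once per pixel
ssaa_shader_runs = width * height * aa   # SSAA: shade every sub-sample

print(ssaa_shader_runs // msaa_shader_runs)  # 4 -- four times the shading work
```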

Quote:
For my information, what is the result? A blurred image? An aliased image? I imagine that post-processing shaders (lighting, ...) don't work nicely with that sort of input? (Does someone have a screenshot?)

I don't have any screenshots; I abandoned my deferred renderer a couple of years ago.
But I recall that the artifact that bothered me most was that sometimes it looked like there was no AA, and most of the time the edges were enhanced, like a Sobel edge-detection filter, albeit more subtle.
I.e. the jaggy edges were emphasized, which defeats the purpose of AA!

Quote:
Original post by eq
I abandoned my deferred renderer a couple of years ago.
But I recall that the artifact that bothered me most was that sometimes it looked like there was no AA, and most of the time the edges were enhanced, like a Sobel edge-detection filter, albeit more subtle.


And "a couple of years ago" you were able to render into an AA buffer? (I thought the resolve ("blit") functions on application framebuffers were quite new?)

Yeah, as explained, MSAA is not compatible with a wide range of non-linear operations, deferred shading being one of them. That's just a limitation of how the MSAA resolve works.

That said, custom resolves can be implemented on the G80 and R600 (indeed, on the latter it seems that *all* resolves are essentially "custom"). Don't be scared off by this or by jittered supersampling: "doing your own AA" is actually quite simple, although it comes with performance implications. Still, if the R600 is an example of things to come, programmable AA and multisampling will be the norm, not the exception. The "fixed-function" resolve is just too inflexible.

Quote:
Original post by AndyTX
That said, custom resolves can be implemented on the G80


So if I use only a GeForce 8800, I can program the resolve function, is that it? Then I can have an anti-aliased resolve function for the color buffer and no anti-aliasing for the positions and normals... Is that correct?

Can I do it right now? Which function do I call (on XP/OpenGL)?

Or do you mean that I have to render to a 4× buffer (for 4× AA) and then code the AA myself?

Quote:
Original post by jmaupay
So if I use only a GeForce 8800, I can program the resolve function, is that it? Then I can have an anti-aliased resolve function for the color buffer and no anti-aliasing for the positions and normals... Is that correct?

Custom resolves are supported on the G80 and R600 (they are required for D3D10). In D3D10 you can read the individual samples of a multisampled buffer using the "Load" shader function with an extra integer parameter for the sample index. Note that *all* of your attribute buffers should technically be multisampled (this is also supported on G80/R600).

Also note that MRTs are a totally separate issue here... even if you render one RT at a time, you will still have a problem with MSAA + deferred shading. MRTs are *just a performance optimization* - always remember that.

I assume the G80 OpenGL extensions expose something similar, but I'd have to check them out.

What you need to do for deferred shading is to load all of the different samples, evaluate the lighting function for each, and then average the results over all samples. So for 4× MSAA, something like this (in HLSL):


float3 color_sum = float3(0, 0, 0);
for (int sample = 0; sample < 4; ++sample) {
    // Load the G-buffer attributes for this sub-sample.
    float3 position = position_buffer.Load(vpos, sample).xyz;
    float3 normal   = normal_buffer.Load(vpos, sample).xyz;
    ... // load all of the other attributes the same way
    color_sum += evaluate_brdf(position, normal, attributes, ...);
}
float3 color = color_sum / 4;


For different levels of AA, just change the "4"s.
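To see why this per-sample resolve fixes the problem described earlier in the thread, here is a Python simulation (the shading function and sample values are illustrative, not real shader code): shading each sample and averaging afterwards reproduces the forward-MSAA result exactly, whereas shading a pre-resolved G-buffer does not.

```python
def evaluate_brdf(n_dot_l):
    # Toy non-linear lighting term standing in for the real BRDF.
    return max(n_dot_l, 0.0) ** 8

samples = [0.0, 0.2, 0.9, 1.0]  # per-sample N.L in a 4x MSAA G-buffer

# Per-sample resolve: Load() each sample, shade it, average the results.
custom_resolve = sum(evaluate_brdf(s) for s in samples) / 4

# Naive approach: resolve (average) the G-buffer first, then shade once.
naive_resolve = evaluate_brdf(sum(samples) / 4)

# Forward-rendered MSAA reference: also shades each sub-sample.
forward = sum(evaluate_brdf(s) for s in samples) / 4

print(custom_resolve == forward)  # True
print(naive_resolve == forward)   # False
```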

Quote:
And "a couple of years ago" you were able to render into an AA buffer?

Yes? I did it, amongst other things, on the Xbox (the original, not the 360).
Quote:
(I thought the resolve ("blit") functions on application framebuffers were quite new?)

It's been around since DX9 at least; I have a three-year-old laptop with a GeForce (5600 Go?) that supports PS/VS 2.0, and there's no problem there...
Edit: DX9 was released in late 2002 (according to Wikipedia).

Quote:
What you need to do for deferred shading is to load all of the different samples, evaluate the lighting function for each, and then average the results over all samples. So for 4× MSAA, something like this (in HLSL):

It seems to me that, performance-wise, it will be just as slow as doing 4× SSAA.
It would require the same temporary storage space and the same memory bandwidth consumption, or?
The only benefit over SSAA that I can see is all the fancy sampling patterns that you could use (Quincunx, rotated grids, etc.), right?
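The storage half of that observation can be sanity-checked with simple arithmetic (Python, with illustrative sizes not taken from the thread): a 4× multisampled G-buffer stores one attribute record per sub-sample, exactly like rendering the same G-buffer at 2×2 resolution for 4× SSAA.

```python
# Illustrative sizes, not from the thread.
width, height, aa = 1280, 1024, 4
bytes_per_record = 16 + 16 + 4   # e.g. fp32 position, fp32 normal, RGBA8 color

# 4x multisampled G-buffer: one record per sub-sample of each pixel.
msaa_bytes = width * height * aa * bytes_per_record

# 4x SSAA: the same G-buffer rendered at twice the width and height.
ssaa_bytes = (width * 2) * (height * 2) * bytes_per_record

print(msaa_bytes == ssaa_bytes)  # True -- identical G-buffer footprint
```

The saving that forward MSAA gets from shading once per pixel disappears here, because the deferred lighting pass still has to shade every sub-sample.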

