MSAA explanation for a dummy?


I missed MSAA and the likes over the past 10 years, so I need to catch up on this one. I'm interested in alpha-to-coverage in particular, but let's start with MSAA. There are demos and tutorials, but I was hoping someone could explain this as if I were a 4-year-old. Ready?

I get the idea of having multiple (sub)samples per pixel and blending them together to soften (average) the edges. What I don't get is what the related functions exactly do. Since I'm doing deferred rendering, I normally render stuff to a couple of textures. For example, with a screen size of 800 x 600:
- albedo RGBA16F (1.8 MB)
- normals RGBA16F
- depth RGBA16F
Normally I would attach these textures (at the same time) to an FBO and fill them, then use them for lighting. Nothing new. But let's say I want to anti-alias these 3 textures now...
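Just for reference, my current (non-multisampled) setup looks roughly like this. The fbo.handle / *.handle names come from my own wrapper classes, so don't take those literally:

// Attach all three G-buffer textures to one FBO (MRT)
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, fbo.handle );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                           GL_TEXTURE_2D, albedoTex.handle, 0 );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                           GL_TEXTURE_2D, normalTex.handle, 0 );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT2_EXT,
                           GL_TEXTURE_2D, depthTex.handle, 0 );

// Render to all three color attachments at once
const GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT,
                           GL_COLOR_ATTACHMENT1_EXT,
                           GL_COLOR_ATTACHMENT2_EXT };
glDrawBuffers( 3, buffers );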

I understand the function parameters, but... what exactly happens? Does it create a buffer somewhere, or adjust the texture contents? In the examples, first an FBO is activated and a render target is attached (depth buffer, color attachment). So for example:

// Activate FBO, attach the depth renderbuffer and the "albedo" texture we render to
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, fbo.handle );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                           GL_TEXTURE_2D, albedoTex.handle, 0 );
glFramebufferRenderbufferEXT( GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_RENDERBUFFER_EXT, depthBuffer.handle );

And now I call glRenderbufferStorageMultisample(...). Does this function actually adjust "albedoTex", so that it changes from a normal RGBA16F texture into a "supertexture" with multiple samples per pixel? I guess not, but I wonder how/where else the sub-samples are stored.
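If I understand the docs right, the multisampled storage lives in a separate renderbuffer, not in the plain texture itself, and you resolve it into the normal texture afterwards with a blit. Is this the idea? A sketch of what I think it looks like (4 samples assumed, msFbo being a second FBO I'd create for this):

// Create 4x multisampled color + depth renderbuffers.
// The regular albedoTex is NOT touched; the samples live here
// until resolved with a blit.
GLuint msColor, msDepth;
glGenRenderbuffersEXT( 1, &msColor );
glBindRenderbufferEXT( GL_RENDERBUFFER_EXT, msColor );
glRenderbufferStorageMultisampleEXT( GL_RENDERBUFFER_EXT, 4,
                                     GL_RGBA16F, 800, 600 );

glGenRenderbuffersEXT( 1, &msDepth );
glBindRenderbufferEXT( GL_RENDERBUFFER_EXT, msDepth );
glRenderbufferStorageMultisampleEXT( GL_RENDERBUFFER_EXT, 4,
                                     GL_DEPTH_COMPONENT24, 800, 600 );

// Attach both to a dedicated multisample FBO
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, msFbo );
glFramebufferRenderbufferEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_RENDERBUFFER_EXT, msColor );
glFramebufferRenderbufferEXT( GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_RENDERBUFFER_EXT, msDepth );

// ... render the scene into msFbo ...

// Resolve (average) the samples into the normal FBO holding albedoTex
glBindFramebufferEXT( GL_READ_FRAMEBUFFER_EXT, msFbo );
glBindFramebufferEXT( GL_DRAW_FRAMEBUFFER_EXT, fbo.handle );
glBlitFramebufferEXT( 0, 0, 800, 600, 0, 0, 800, 600,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST );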

Depending on how and where, I wonder what the consequences are. The next step is to render stuff to my FBO. If using 4 sub-samples, does that mean I simply render 4 times more fragments (so 800x600 effectively becomes 1600x1200)? I read about applications that can decide on the fly whether to shade just 1 sample per pixel or multiple sub-samples. Normally you only need this at geometry edges, of course.

Second, how do other shaders read from these sub-samples? I mean, let's say I filled the G-buffers (colors, normals, et cetera), and now I want to do a lighting pass. For example, I render a screen quad that lights each pixel. If there are multiple samples per pixel, I may get 4, 8 or 16 different lighting results per pixel as well. But how do I do that? Does the lighting target texture also need to be set up with glRenderbufferStorageMultisample (making it 4, 8 or 16 times bigger?), or can a "tex2D" read inside the shader somehow decide which sample to read, and eventually average them?
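From what I've read so far, GLSL 1.50 (or ARB_texture_multisample) adds a sampler2DMS type that you fetch with texelFetch and an explicit sample index, so the lighting target itself could stay a normal single-sample texture. Something like this, if I got it right (uniform names are my own guesses):

// Fragment shader sketch: manually average 4 samples of a
// multisampled G-buffer texture (GL_TEXTURE_2D_MULTISAMPLE)
#version 150
uniform sampler2DMS gAlbedo;
out vec4 fragColor;

void main()
{
    ivec2 coord = ivec2( gl_FragCoord.xy ); // integer pixel coords, no filtering
    vec4  sum   = vec4( 0.0 );
    for( int i = 0; i < 4; i++ )
        sum += texelFetch( gAlbedo, coord, i ); // third argument = sample index
    fragColor = sum / 4.0;
}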

Third: I'm not really interested in rendering everything with MSAA enabled, or alpha-to-coverage in my case. I'd like to use it for transparent surfaces such as foliage, fences, et cetera. Can I render without MSAA/CSAA to the same texture buffer first, then enable MSAA/CSAA for the transparent portion? I guess so, simply via glEnable/glDisable, but won't this give a performance hit? (I mean, rendering the non-transparent stuff with CSAA disabled?)
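As far as I can tell, alpha-to-coverage at least can be toggled per draw call with a plain glEnable, so I imagine the foliage pass would look something like this (assuming a multisampled FBO is already bound; the draw functions are just placeholders for my own code):

// Opaque pass: no alpha-to-coverage needed
drawOpaqueGeometry();

// Foliage / fence pass: let the alpha channel drive the coverage mask
glEnable( GL_SAMPLE_ALPHA_TO_COVERAGE );
drawFoliage();
glDisable( GL_SAMPLE_ALPHA_TO_COVERAGE );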

Your help is appreciated!
