fake alpha to coverage in deferred rendering?

Posted by GuyCalledFrank



Hi. I've been using forward rendering for a long time, specifically with MSAA and alpha-to-coverage, which let me draw a lot of foliage with nice soft edges without sorting hundreds of instances. But depth complexity and shader complexity grew, so I moved to a deferred approach. Everything works much faster now, and FXAA is not too bad compared to MSAA, BUT the loss of alpha-to-coverage is sad. I'm using DX9, btw.

But then I found these slides:

http://www.slideshare.net/codevania/deferred-rendering-transparency

I'm sure it's some trick to render transparent surfaces in a way similar to alpha-to-coverage, but with deferred shading. The slides are in Korean, though, so I can't get the idea even after running them through Google Translate.

Maybe someone can understand what the author meant?

There's nothing about alpha to coverage that's incompatible with deferred rendering. You just use it when rendering your G-Buffer, and then light everything normally. Obviously MSAA requires special techniques that you can't do in DX9, but that's a separate issue.
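For illustration, something like this G-Buffer pixel shader is all it takes (a rough SM 3.0 HLSL sketch, with made-up texture and output names): with alpha-to-coverage enabled on the device, the hardware derives the coverage mask from the alpha written to COLOR0, and that one mask applies to every render target.

[code]
// Minimal G-Buffer pixel shader sketch (SM 3.0). Names are hypothetical.
// With alpha-to-coverage enabled as a render state, the hardware turns
// the alpha in COLOR0 into a per-sample coverage mask, so foliage edges
// dissolve across ALL render targets with no sorting needed.
sampler2D gAlbedoMap : register(s0);

struct PSOutput
{
    float4 albedo : COLOR0; // rgb = albedo, a = alpha that drives A2C
    float4 normal : COLOR1; // view-space normal, packed into [0,1]
};

PSOutput GBufferPS(float2 uv : TEXCOORD0, float3 normalVS : TEXCOORD1)
{
    PSOutput o;
    o.albedo = tex2D(gAlbedoMap, uv);   // alpha comes from the texture
    o.normal = float4(normalize(normalVS) * 0.5f + 0.5f, 0.0f);
    return o;
}
[/code]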

Share this post


Link to post
Share on other sites
Yes, I actually meant the problem with MRT, which is usually used for rendering the G-Buffer.

In DX9 it doesn't support MSAA, and it is too slow to render depth/normals/diffuse as separate passes instead.

But I think there is a key in those slides :)

That presentation is basically using the same technique proposed in the inferred lighting paper published by Volition. You use a regular grid pattern to dither out transparent G-Buffer values, and then in the second geometry pass (they use a setup similar to Light Pre-Pass) special filtering plus knowledge of the grid pattern is used to gather and filter the transparent values and blend them with the previous results. It's a setup that only gives you at most 3-4 layers of transparency and comes with all sorts of other caveats, and since it's not order-independent like A2C it's probably not going to be suitable for your foliage problem.
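To make the dither-write step concrete, here's a rough HLSL sketch of how I read it (the 2x2 grid size and the uniform names are my assumptions, not the exact code from the paper or the slides): each transparent layer claims one cell of a 2x2 screen-space grid and clips every other pixel, so a few transparent layers plus the opaque background can coexist in a single non-MSAA G-Buffer.

[code]
// Stipple write for one transparent layer (SM 3.0 sketch, hypothetical names).
// gLayerCell selects which cell of the 2x2 screen grid this layer owns,
// e.g. (0,0) for the first transparent layer, (1,0) for the second, etc.
float2 gLayerCell;

float4 StipplePS(float2 vpos : VPOS,        // pixel coordinates (SM 3.0)
                 float2 uv   : TEXCOORD0) : COLOR0
{
    // Which cell of the 2x2 grid does this pixel fall into?
    float2 cell = fmod(floor(vpos), 2.0f);

    // Keep only the pixels on this layer's cell; clip the rest so the
    // opaque G-Buffer values underneath survive there.
    if (any(cell != gLayerCell))
        clip(-1.0f);

    return float4(1, 1, 1, 1); // this layer's G-Buffer outputs would go here
}
[/code]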

Like I was saying earlier, you can still use the basic premise of A2C, which is a screen-space dither pattern used to clip/discard pixels based on their alpha. In fact, if you make your own dither pattern and store it in a texture, you can easily make a much better pattern than what the hardware uses for A2C. The big downside, of course, is that without MSAA the quality will suffer, since there will be no blended values (each pixel is either on or off).
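Something like this, for example (an SM 3.0 sketch; the sampler names and the 4x4 tile size are placeholders):

[code]
// Screen-space dithered alpha test (SM 3.0 sketch, hypothetical names).
// gDitherMap is a small tiling threshold texture, e.g. a 4x4 Bayer matrix
// with values spread evenly over (0, 1), sampled with point filtering
// and wrap addressing.
sampler2D gAlbedoMap : register(s0);
sampler2D gDitherMap : register(s1);

static const float DITHER_SIZE = 4.0f; // dither texture is 4x4 texels

float4 DitherClipPS(float2 vpos : VPOS,
                    float2 uv   : TEXCOORD0) : COLOR0
{
    float4 albedo = tex2D(gAlbedoMap, uv);

    // Tile the dither texture across the screen in pixel space.
    float threshold = tex2D(gDitherMap, vpos / DITHER_SIZE).r;

    // Discard the pixel when its alpha falls below the local threshold;
    // over a 4x4 neighborhood this approximates 16 coverage levels.
    clip(albedo.a - threshold);

    return float4(albedo.rgb, 1.0f);
}
[/code]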

[quote]That presentation is basically using the same technique proposed in the inferred lighting paper published by Volition.[/quote]

Seems like it really is. Thanks for pointing that out.

[quote]Like I was saying earlier, you can still use the basic premise of A2C, which is a screen-space dither pattern used to clip/discard pixels based on their alpha. ... The big downside, of course, is that without MSAA the quality will suffer, since there will be no blended values (each pixel is either on or off).[/quote]

Already tried it out - terribly noisy, but it may work if I render the screen at least 2x larger and then downsample. I'll try that.
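For the downsample, I'm thinking a plain 2x2 box filter over the 2x-sized buffer, so every final pixel averages four dithered samples (a quick sketch, names made up):

[code]
// 2x2 box downsample from a 2x-resolution buffer (sketch, hypothetical names).
// gSceneMap is the 2x-sized render target, sampled with point filtering;
// gInvSrcSize is 1.0 / (source width, source height).
sampler2D gSceneMap : register(s0);
float2 gInvSrcSize;

float4 DownsamplePS(float2 uv : TEXCOORD0) : COLOR0
{
    // Average the four source texels covering this destination pixel,
    // turning the on/off dither into intermediate coverage values.
    float2 h = 0.5f * gInvSrcSize;
    float4 sum = tex2D(gSceneMap, uv + float2(-h.x, -h.y))
               + tex2D(gSceneMap, uv + float2( h.x, -h.y))
               + tex2D(gSceneMap, uv + float2(-h.x,  h.y))
               + tex2D(gSceneMap, uv + float2( h.x,  h.y));
    return sum * 0.25f;
}
[/code]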
