[DX11] Deferred rendering and alpha-to-coverage

6 comments, last by matt77hias 6 years, 7 months ago
In our old DX9 renderer, I used alpha-to-coverage to render a huge amount of ground cover such as grass, plants, and flowers. It worked really well because depth buffering remained enabled and nothing had to be sorted.

I would like to use our new DX11 deferred renderer to draw all of our ground cover foliage because it will allow all of the foliage to be lit in the standard deferred lighting pass. However, I'm not having any success.

Here's how I'm setting up the DX11 blend state:

D3D11_BLEND_DESC desc = {};
desc.AlphaToCoverageEnable = TRUE;
desc.IndependentBlendEnable = FALSE;
desc.RenderTarget[0].BlendEnable = FALSE;
desc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlend = D3D11_BLEND_ZERO;
desc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
desc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
desc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
desc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;

The deferred renderer has 3 render targets:

RT0: R8G8B8A8_UNORM: RGB=diffuse, A=alpha-to-coverage output
RT1: R16G16B16A16_FLOAT: RGB=emissive, A=unused
RT2: R8G8B8A8_UNORM: RG=normal, B=spec power, A=spec intensity

I assume that to use alpha-to-coverage in a deferred renderer, I simply need to set AlphaToCoverageEnable to true in the blend state and then output an opacity value to RT0's alpha component. But when I try this, what I see is that when the shader outputs an alpha value of <=0.5 to RT0, nothing at all is written to the RGBA components of RT0. When an alpha value of >0.5 is written to RT0, the RGBA components of RT0 are written just fine.

Any help appreciated!
You seem to be setting it up right. The way it should work with MSAA disabled is that the hardware dithers across 2x2 quads of pixels: at around 0.25 alpha you'll get one pixel of the quad written, at 0.5 you'll get two pixels, and so on. With MSAA enabled it dithers across the subsamples as well, which gives better quality.
Hi MJP,

Yeah, that's what I'd expect to see, but I'm just getting 2 states: unwritten (a <= 0.5) and written (a > 0.5). MSAA is disabled.
I think the problem is that the render targets written to by the deferred renderer are being created without MSAA, so only 2 levels of alpha-to-coverage exist: off and on. Sounds like I'll need to create all of my render targets with MSAA to get more levels of alpha-to-coverage. But that sure sounds like it'll consume a LOT of memory since the deferred renderer is using 3 render targets! Is this the only way?

Shouldn't you also set the render-target write mask in the blend descriptor?

desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

🧙

Alpha-to-coverage is an MSAA-only technique; how could it work without it? Forget the supersampled depth part; that is for edge AA and is unrelated. Alpha-to-coverage just relies on masking a variable number of the fragment's destination samples based on opacity; the transparency comes from blending the samples together at the resolve step.

Now, for a deferred renderer to work with alpha-to-coverage, you have no choice but to use MSAA surfaces for your G-buffer. This is the first requirement, and a big one for bandwidth and memory usage.

But your problems only start there. If you do that, then at the lighting stage you will have "uniform" pixels, where all the samples are identical and can be lit once, and pixels where the samples differ, either because of triangle edge/depth boundaries or because of alpha-to-coverage. You have no choice but to light each sample before blending them! You have to detect such pixels, and it is not always trivial to do cheaply.

On 8/28/2017 at 8:12 PM, galop1n said:

But your problems only start there. If you do that, then at the lighting stage you will have "uniform" pixels, where all the samples are identical and can be lit once, and pixels where the samples differ, either because of triangle edge/depth boundaries or because of alpha-to-coverage. You have no choice but to light each sample before blending them! You have to detect such pixels, and it is not always trivial to do cheaply.

I presume this is done heuristically? It also seems you need dynamic branching?

Additionally, the G-buffer will consume a lot of memory (4x or 8x MSAA :o ).

🧙
