Deferred Rendering Alpha Blending/Transparency Issue.

Lately I have been wondering why alpha blending/transparency is such a problem for deferred rendering. I couldn't see why people say you shouldn't do any kind of alpha blending in the deferred stage, and instead use forward rendering to handle that geometry. What I want to understand is why we can't do it all in the deferred stage, since at the end we are still writing the final result, which is stored in a texture, into the frame buffer anyway.

Doesn't each texture we create for the deferred images also contain an alpha channel? So when we write our base image, couldn't we also store the alpha information in the alpha channel of the RGBA texture? If the alpha is lost when you read that texture back, then I would understand why alpha blending/transparency is an issue. So I guess what I really want to know is: when you read a texture back in a shader, will the alpha always be 1, or will it be whatever alpha value you previously stored? Or is the problem that, since we defer the final shading until after all the base images are done, we have lost the blend state for each object?
The problem has to do with ordering and lighting.

When you initially create the g-buffer you render the opaque set of objects in the scene to your g-buffer; this includes details like depth, normal, texture, and specular data. Since all the objects initially rendered are opaque, you don't have to worry about ordering or "alpha"; it's just not present at all.
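To make that concrete, here is a rough sketch of what one pixel of a g-buffer might hold (purely illustrative; formats and packing vary a lot between engines, and none of this comes from a specific API):

```cpp
#include <cstdint>

// Illustrative G-buffer layout for a single pixel. Note that there is nowhere to
// record "a second surface behind this one": every pixel holds exactly one opaque
// surface, which is why transparency does not fit naturally into this scheme.
struct GBufferPixel {
    float   depth;       // hardware or linear view-space depth
    float   normal[3];   // surface normal (often encoded into two channels in practice)
    uint8_t albedo[3];   // diffuse texture color
    uint8_t specular;    // specular intensity/power packed into a spare channel
};
```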

You then render your light buffer, which will typically use the z-buffer from the g-buffer stage to filter out the portions of the light geometry that are not visible.

Your final stage of the deferred renderer combines the two previous stages to produce the output buffer.
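Conceptually, that combine stage does something like the following per pixel (a minimal sketch; all type and function names are placeholders, and the inputs are assumed to already be decoded from the g-buffer and light buffer):

```cpp
#include <vector>

// Per-pixel combine: shade the single opaque surface the G-buffer describes.
// There is no blending and no ordering anywhere in this loop.
struct Vec3    { float x, y, z; };
struct Surface { Vec3 normal; Vec3 albedo; };   // decoded from the G-buffer
struct Light   { Vec3 direction; Vec3 color; }; // from the light buffer pass

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 combinePixel(const Surface& s, const std::vector<Light>& lights) {
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for (const Light& l : lights) {
        float ndotl = dot(s.normal, l.direction);
        if (ndotl <= 0.0f) continue;                 // surface faces away from this light
        result.x += s.albedo.x * l.color.x * ndotl;  // simple Lambert diffuse term
        result.y += s.albedo.y * l.color.y * ndotl;
        result.z += s.albedo.z * l.color.z * ndotl;
    }
    return result;
}
```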

Now, in this entire process the ORDER and DEPTH of the objects is pretty much irrelevant, but when you attempt to render transparent objects you need to know both. This is typically why transparent objects are z-sorted. You need to know the depth of each transparent object because a light can be visible from behind the object or from in front of it, and if you have two transparent objects in a row then the order in which they apply their filters will change the output light. As a simple example, look at the attached image and note how 3 is different from 4, even though both are the same filters applied one on top of the other... BUT THE ORDER IS DIFFERENT.
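Since the attached image doesn't come through here, the same point in numbers: the snippet below applies the standard "over" blend to two 50% transparent filters in both orders over the same white background and gets two different colors (a self-contained sketch, nothing engine-specific):

```cpp
#include <cstdio>

// Standard "over" alpha blending: src composited on top of dst.
struct Color { float r, g, b; };

Color over(Color src, float srcAlpha, Color dst) {
    return { src.r * srcAlpha + dst.r * (1.0f - srcAlpha),
             src.g * srcAlpha + dst.g * (1.0f - srcAlpha),
             src.b * srcAlpha + dst.b * (1.0f - srcAlpha) };
}

int main() {
    Color background = { 1.0f, 1.0f, 1.0f };  // white opaque surface behind everything
    Color red  = { 1.0f, 0.0f, 0.0f };
    Color blue = { 0.0f, 0.0f, 1.0f };

    // Red pane in front of blue pane, both 50% transparent...
    Color a = over(red, 0.5f, over(blue, 0.5f, background));
    // ...versus blue pane in front of red pane.
    Color b = over(blue, 0.5f, over(red, 0.5f, background));

    std::printf("red over blue: %.2f %.2f %.2f\n", a.r, a.g, a.b); // 0.75 0.25 0.50
    std::printf("blue over red: %.2f %.2f %.2f\n", b.r, b.g, b.b); // 0.50 0.25 0.75
}
```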

This is information that is not easily stored in the g-buffer.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

Let's say you have a wooden table, very diffuse, with a transparent glass bottle sitting on it.
How would you shade that in a deferred way? What normal, depth, and specular factors would you use?
Now I see what the problem is. The main reason is that when you have a transparent object you basically need to keep track of two depths, since both pixels should be at least somewhat visible. And since we only have one depth buffer, we only have the depth of the front-most visible pixel and not the one behind it.

So if we were to use depth peeling and peel off each layer's depth and color, we could always composite them together at the end. But something like that would take a ton of memory, since each layer of depth/color would require an additional G-buffer.
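To illustrate what depth peeling buys you (and why every extra layer costs another set of buffers), here is a CPU-only sketch of what it computes for a single pixel; on the GPU each "peel" is a full re-render of the transparent geometry against the previous layer's depth, and that pass/resource setup is API-specific and omitted here:

```cpp
#include <cstdio>
#include <limits>
#include <vector>

// CPU-only illustration of depth peeling for one pixel: repeatedly pick the nearest
// fragment strictly behind the previously peeled depth, then composite the peeled
// layers back to front over the opaque background.
struct Fragment { float depth; float r, g, b, a; };

int main() {
    // Three transparent fragments covering the same pixel, in arbitrary order.
    std::vector<Fragment> frags = {
        { 0.70f, 0.0f, 0.0f, 1.0f, 0.5f },   // blue, farthest
        { 0.30f, 1.0f, 0.0f, 0.0f, 0.5f },   // red, nearest
        { 0.50f, 0.0f, 1.0f, 0.0f, 0.5f },   // green, middle
    };

    // Peel front to back: each pass keeps only the nearest fragment behind the last peel.
    std::vector<Fragment> layers;
    float lastDepth = -std::numeric_limits<float>::infinity();
    for (;;) {
        const Fragment* next = nullptr;
        for (const Fragment& f : frags)
            if (f.depth > lastDepth && (!next || f.depth < next->depth))
                next = &f;
        if (!next) break;
        layers.push_back(*next);
        lastDepth = next->depth;
    }

    // Composite the peeled layers back to front over a white opaque background.
    float r = 1.0f, g = 1.0f, b = 1.0f;
    for (auto it = layers.rbegin(); it != layers.rend(); ++it) {
        r = it->r * it->a + r * (1.0f - it->a);
        g = it->g * it->a + g * (1.0f - it->a);
        b = it->b * it->a + b * (1.0f - it->a);
    }
    std::printf("final pixel: %.3f %.3f %.3f\n", r, g, b);
}
```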


Yeah, it's doable but expensive. Humus has a demo on his website. You can also do tricks like stuffing alpha layers in MSAA subsamples.
Thanks MJP, I will definitely look into that. :)
Thanks everyone. @MJP, interesting idea, I will definitely look into doing something similar. But don't you also need a color buffer for each layer? I mean, if you save only the alpha layers, how do you associate the alpha on each layer with the pixel RGB values it affects?
...not just depth and color: you need EVERY component of your g-buffer, and you cannot blend any of them ahead of shading.
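To see why blending g-buffer attributes before shading goes wrong, here is a small numeric sketch (the normals and light direction are made up for illustration): lighting is non-linear in its inputs, so shading a blended normal is not the same as blending the two shaded results.

```cpp
#include <cstdio>

// Illustration: shade(blend(normals)) != blend(shade(normal1), shade(normal2)).
struct Vec3 { float x, y, z; };

static float lambert(Vec3 n, Vec3 l) {             // clamped N.L diffuse term
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return d > 0.0f ? d : 0.0f;
}

int main() {
    Vec3 lightDir = { 1.0f, 0.0f, 0.0f };           // light shining along +X
    Vec3 n1 = {  1.0f, 0.0f, 0.0f };                 // front surface, facing the light
    Vec3 n2 = { -1.0f, 0.0f, 0.0f };                 // back surface, facing away

    // A 50% alpha blend of the normals written into the G-buffer: a degenerate normal.
    Vec3 blended = { 0.5f * (n1.x + n2.x), 0.5f * (n1.y + n2.y), 0.5f * (n1.z + n2.z) };
    float shadeOfBlend = lambert(blended, lightDir);                               // 0.0

    // Shade each surface first, then blend the lit colors (what forward blending does).
    float blendOfShade = 0.5f * lambert(n1, lightDir) + 0.5f * lambert(n2, lightDir); // 0.5

    std::printf("shade(blended normals) = %.2f, blend(shaded surfaces) = %.2f\n",
                shadeOfBlend, blendOfShade);
}
```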
The mecha demo from AMD demonstrates how to do it properly for transparency:
http://developer.amd.com/samples/demos/pages/ATIRadeonHD5800SeriesRealTimeDemos.aspx
by first storing all fragments in the g-buffer (they use per-pixel linked lists), then shading, then blending (although they just use forward shading in that demo).
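The resolve step of that store-then-shade-then-blend approach boils down to something like this per pixel; the demo builds the per-pixel fragment lists on the GPU in the pixel shader, while this is just a CPU sketch of the sort-and-blend logic with placeholder names:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// All transparent fragments for one pixel are captured in a single geometry pass in
// arbitrary order; the resolve then sorts them by depth and blends front to back.
struct Fragment { float depth; float r, g, b, a; };   // one already-shaded fragment
struct Color    { float r, g, b; };

Color resolvePixel(std::vector<Fragment> frags, Color background) {
    // Sort nearest-first (the GPU version sorts the small per-pixel list in the shader).
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.depth < b.depth; });

    // Blend front to back, tracking how much light still reaches the deeper layers.
    Color accum = { 0.0f, 0.0f, 0.0f };
    float transmittance = 1.0f;
    for (const Fragment& f : frags) {
        accum.r += transmittance * f.a * f.r;
        accum.g += transmittance * f.a * f.g;
        accum.b += transmittance * f.a * f.b;
        transmittance *= (1.0f - f.a);
    }
    accum.r += transmittance * background.r;          // whatever opaque surface is behind
    accum.g += transmittance * background.g;
    accum.b += transmittance * background.b;
    return accum;
}

int main() {
    std::vector<Fragment> frags = {
        { 0.6f, 0.0f, 0.0f, 1.0f, 0.5f },             // blue pane, farther
        { 0.2f, 1.0f, 0.0f, 0.0f, 0.5f },             // red pane, nearer
    };
    Color out = resolvePixel(frags, { 1.0f, 1.0f, 1.0f });
    std::printf("%.2f %.2f %.2f\n", out.r, out.g, out.b); // same result as red-over-blue above
}
```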

ILM also does this in a tool; it's the "uber" version of this idea:
http://people.csail.mit.edu/jrk/lightspeed/lightspeed_thesis.pdf
