Members - Reputation: 605
Posted 23 February 2012 - 10:39 AM
I am researching deferred rendering, and especially the handling of transparent objects. I know the "easy" way of rendering them in a separate pass using forward lighting, but would it be possible to integrate them into the deferred pass using an order-independent transparency (OIT) technique such as depth peeling or stochastic transparency?
I would like your views on this.
Members - Reputation: 2760
Posted 23 February 2012 - 10:48 AM
There is inferred rendering, and plain old screen-door transparency (optionally filtered), but those techniques obviously have many problems, mainly quality and the limited number of transparent layers. Then there is the deep G-buffer approach, which also supports only a limited number of transparent layers and has high memory consumption and bandwidth usage.
Stochastic transparency is a nice concept, but I don't think it's close to being practical. If my understanding is correct, it would require a lot of memory and most likely several render passes to get noise-free images. My guess is that you would never see playable frame rates on current hardware.
Crossbones+ - Reputation: 3634
Posted 23 February 2012 - 10:51 AM
In my opinion, for transparent passes, one of the techniques described here is the way to go.
Crossbones+ - Reputation: 3634
Posted 23 February 2012 - 01:05 PM
Not really. OIT solves the problem of the proper ordering of alpha fragments/pixels; it's needed for correct transparency even without any lighting. Check out
Rendering using OIT would be possible in forward rendering, but that defeats the purpose of OIT, doesn't it?
I think the only shading comes from a Fresnel term for the blend intensity.
The solution you want to choose depends heavily on what you're actually rendering. Having just one layer of glass (e.g. a window) might work with simple MSAA and a custom sample mask where you write every second sample for transparent pixels; it's simple to implement and fits naturally into the deferred shading pipeline alongside all the other effects. Particles, on the other hand, won't give satisfying results with this solution, but particles can be shaded per vertex (or at least you can select, say, the four most contributing light sources per particle/vertex and apply those in the usual forward rendering fashion), though that won't work for big windows that might be affected by several tiny light sources. For small objects (e.g. a bottle), you might get away with cubemaps that have the lighting baked in, etc.
And lighting + transparent objects are not the only issues you might have: you also want fog (even fog volumes?) to work properly, and what about mirrors (e.g. a water plane)? You might want shadows + projectors, decals?
And on top of that, you need to be very careful about performance. While there is a hard limit on the number of solid pixels on screen (your resolution), transparent objects can easily add 10x the cost. You might become limited by e.g. fill rate, which isn't that common in solid-geometry deferred rendering, and you might also process a lot of fully transparent or overwritten pixels. That's the main reason most games nowadays have very few transparent objects.