Some thoughts about renderers and deferred shading

Quote:What do you mean by the same blur? Because he is using an edge detection filter, he's only blurring the slight edges of objects, and so objects far away will have a slight blur around them, while objects up close will have more blurring because of the larger edge and width of the edge.

I just did some googling on deferred shading (not too experienced with it myself), but this paper describes that anti-aliasing technique in section 3.5.1. I have no practical experience with it, but I don't see why it wouldn't work.

Ok let me re-phrase this: with a fixed-size filter kernel, this does not work ... I tried it. If you can afford an adjustable filter kernel, I do not know how well this would work out, but on my target platform, this was not an option.
Let me explain this a bit more: a fixed-size filter kernel means that the blur kernel always has the same size. So even if an object is far away, it gets this, let's say, two-pixel-wide blur. It just does not look good. Additionally, the blur sometimes results in color bleeding.
One way to reduce this problem is to restrict the screen-space blur to a certain distance by checking the Z value in camera space, and then let the depth of field do its job ...
But to get back to my initial point: let's say you run a filter kernel that only filters 2x2 - you will see it if you know where to look. Suddenly the edges of the characters look different from the characters themselves ...
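To make the distance restriction concrete, here is a minimal ps_3_0-style sketch (not my actual shader - texture and constant names like sceneTex, depthTex and maxBlurDepth are placeholders) that applies a small box blur only to pixels closer than a chosen camera-space depth:

sampler2D sceneTex;   // final shaded scene
sampler2D depthTex;   // linear view-space depth
float2 texelSize;     // 1.0 / screen resolution
float  maxBlurDepth;  // beyond this camera-space depth, skip the blur

float4 psEdgeBlur(float2 uv : TEXCOORD0) : COLOR0
{
    float4 center = tex2D(sceneTex, uv);
    float  depth  = tex2D(depthTex, uv).r;

    // Simple 2x2 box blur around the current pixel.
    float4 blurred = center;
    blurred += tex2D(sceneTex, uv + float2(texelSize.x, 0.0));
    blurred += tex2D(sceneTex, uv + float2(0.0, texelSize.y));
    blurred += tex2D(sceneTex, uv + texelSize);
    blurred *= 0.25;

    // Far away, the fixed-size kernel just looks like a halo around the
    // object, so keep the original colour there and only blur up close.
    return (depth > maxBlurDepth) ? center : blurred;
}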

I missed this paper completely ... thanks for the link. Looks great. I will have a good time reading it.
Just looked into the paper to see how they do the edge detection and blur ... the whole paper looks very well written.
The edge detection filter looks expensive. If you have ps_3_0 hardware as the min spec, you might use the ddx and ddy instructions for a nice edge filter with fewer instructions and without normal map fetches ... but maybe the normal map method is more precise.
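For example, a rough ps_3_0 sketch of such a derivative-based edge filter might look like this (depthTex and edgeThreshold are assumed names, and the threshold would need tuning, possibly scaled by depth):

sampler2D depthTex;       // linear view-space depth from the G-buffer
float     edgeThreshold;  // tweakable; an assumption, not from the paper

float4 psEdgeDetect(float2 uv : TEXCOORD0) : COLOR0
{
    float depth = tex2D(depthTex, uv).r;

    // ddx/ddy give the per-pixel rate of change across the 2x2 quad;
    // a large jump in depth marks a geometric silhouette.
    float edge = abs(ddx(depth)) + abs(ddy(depth));
    return (edge > edgeThreshold) ? float4(1, 1, 1, 1) : float4(0, 0, 0, 0);
}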
The idea of using the depth buffer as the position value wasn't well known to me. I wonder what the advantage over storing the position.z value is. Is it more precise?
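As far as I understand it, the appeal is that the full view-space position can be rebuilt from a single stored depth value plus an interpolated ray to the far plane, so the G-buffer only needs one channel for position data. A rough sketch, assuming the depth is stored as linear z / farPlane and that a full-screen quad passes a per-corner view ray (depthTex and frustumRay are made-up names):

sampler2D depthTex;   // linear view-space depth, stored as z / farPlane

float4 psReconstruct(float2 uv : TEXCOORD0,
                     float3 frustumRay : TEXCOORD1) : COLOR0
{
    // frustumRay is the view-space vector from the eye to the far-plane
    // corner, interpolated across the full-screen quad.
    float  normDepth = tex2D(depthTex, uv).r;     // 0..1
    float3 viewPos   = frustumRay * normDepth;    // full view-space position

    // ... lighting would continue from viewPos here; returned for clarity.
    return float4(viewPos, 1.0);
}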
Quote:Original post by AndyTX
2) Alpha blending is a "hack" that fails in several instances. Other "more-correct" translucency techniques work fine with deferred shading. Furthermore one can still do translucent surfaces using a forward renderer after doing the majority of the scene with deferred shading. STALKER does this IIRC.


Could you list some of those more-correct translucency techniques?

Quote:6) Different surface shaders are also not a problem on modern hardware. This is a perfect case for dynamic branching as it is extremely coherent. Using something like libsh or Cg to a lesser extent, the "jumbo shader" can even be created from the smaller shaders and avoids needing to change shaders per-object/light.


As I said, the geometry phase is no problem. But how do you apply lighting shaders for individual geometry if they cannot be distinguished in the illumination phase?
Quote:As I said, the geometry phase is no problem. But how do you apply lighting shaders for individual geometry if they cannot be distinguished in the illumination phase?

you provide a material id in a free channel of a render target ... this is skin, metal, wood etc ...
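Something along these lines, as a rough sketch only (made-up names, stand-in lighting terms, and an RGBA8 target assumed), where the alpha channel of one render target carries the ID and the lighting pass branches on it:

static const float MAT_SKIN  = 1.0 / 255.0;   // IDs packed into an 8-bit channel
static const float MAT_METAL = 2.0 / 255.0;

sampler2D albedoTex;   // rgb = albedo, a = material ID
sampler2D normalTex;   // view-space normal, packed 0..1
float3    lightDirVS;  // normalized light direction, view space

float4 psLight(float2 uv : TEXCOORD0) : COLOR0
{
    float4 gbuf   = tex2D(albedoTex, uv);
    float3 normal = normalize(tex2D(normalTex, uv).xyz * 2.0 - 1.0);
    float  ndotl  = saturate(dot(normal, -lightDirVS));
    float  id     = gbuf.a;

    float3 lit;
    if (abs(id - MAT_SKIN) < 0.5 / 255.0)
        lit = gbuf.rgb * (ndotl * 0.5 + 0.5);   // stand-in "wrap" skin term
    else if (abs(id - MAT_METAL) < 0.5 / 255.0)
        lit = gbuf.rgb * pow(ndotl, 8.0);       // stand-in shiny metal term
    else
        lit = gbuf.rgb * ndotl;                 // plain Lambert fallback
    return float4(lit, 1.0);
}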
Quote:Original post by wolf
you provide a material id in a free channel of a render target ... this is skin, metal, wood etc ...


And switch how? Branching? Still very expensive. Also, this still requires one uber-shader, just with a switch in it.

One way would be to distinguish between surface shaders and illumination shaders. First phase: sort geometry by surface shader, render (different surface shaders like parallax mapping, fur...). Then sort geometry by illumination shader, render the geometry with illumination shader no. 1 in the typical deferred style to the backbuffer with additive blending, then the same with illumination shader no. 2, etc. This eases batching a bit, since illumination is decoupled from surface shading; in cases with lots of different surface shaders but few illumination shaders it might be superior to forward rendering. But I fear it would eat tons of fillrate and require re-transforming the geometry several times (once per illumination shader), which defeats one of the advantages of deferred shading.

EDIT: I got it, the depth trick is nice (multiplying the ID by 0.01 and writing the result to the depth buffer, then in the illumination phase comparing with an "equal" depth test), but it suffers from precision problems :/
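Roughly like this, I assume (just a sketch; materialID would be a per-draw-call constant and the rest of the G-buffer output is left out):

float materialID;   // 1, 2, 3, ... set per draw call

struct PSOut
{
    float4 color : COLOR0;
    float  depth : DEPTH;   // overrides the rasterized depth
};

PSOut psGBuffer(float2 uv : TEXCOORD0)
{
    PSOut o;
    o.color = float4(1, 1, 1, 1);   // albedo etc. would be written here
    o.depth = materialID * 0.01;    // e.g. ID 3 -> depth 0.03
    return o;
}

// Illumination pass (per material): draw a full-screen quad whose vertices
// sit at z = materialID * 0.01, with the depth test set to EQUAL and depth
// writes disabled, so only pixels tagged with that ID get shaded. The
// limited depth precision is exactly where this trick gets fragile.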

[Edited by - Ardor on June 5, 2006 7:20:15 PM]
Quote:Original post by Ardor
Quote:Original post by wolf
you provide a material id in a free channel of a render target ... this is skin, metal, wood etc ...


And switch how? Branching? Still very expensive. Also, this still requires one uber-shader, just with a switch in it.


That's actually what AndyTX suggested above. If you don't like that, for lower-end hardware you can always have a separate shader for each material, and do clip()/texkill if the material ID in the G-buffer doesn't match what the shader handles, so that effectively each light is done in a few passes, one for each material.
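Something like this per-material light shader, for example (just a sketch with made-up names and a stand-in skin term):

sampler2D albedoTex;        // rgb = albedo, a = material ID (0..1)
sampler2D normalTex;
float3    lightDirVS;
float     thisMaterialID;   // the ID this particular shader handles

float4 psLightSkin(float2 uv : TEXCOORD0) : COLOR0
{
    float id = tex2D(albedoTex, uv).a;

    // Kill the pixel if it does not belong to this material.
    clip(0.5 / 255.0 - abs(id - thisMaterialID));

    float3 albedo = tex2D(albedoTex, uv).rgb;
    float3 normal = normalize(tex2D(normalTex, uv).xyz * 2.0 - 1.0);
    float  wrap   = saturate(dot(normal, -lightDirVS)) * 0.5 + 0.5;  // stand-in skin term
    return float4(albedo * wrap, 1.0);
}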

Isn't UE3 using deferred shading?

"Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety." --Benjamin Franklin

You mean Unreal 3? I don't think so.

I don't think the new Id engine will use it either.

So what is the main advantage of using deferred shading? Is it just the lighting passes? Because it seems to me the disadvantages are many and the advantages are few.

When in the real world do you need more than 5 or so lights on a single object or face?
Quote:Original post by Ardor
And switch how? Branching? Still very expensive. Also, this still requires one uber-shader, just with a switch in it.


When branching isn't supported you can emulate it by using the stencil buffer; there is a nice paper in the ATI SDK about that.
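The basic idea, as I understand it (a hedged sketch with assumed names, not necessarily the ATI paper's exact approach): a cheap tagging pass writes the material ID into the stencil buffer, and each material's lighting shader then only runs where the stencil test passes, so no shader branching is needed.

sampler2D albedoTex;        // a = material ID
float     thisMaterialID;

// Pass 1 (per material): color writes off, stencil set to REPLACE with the
// material ID as the reference value and the stencil test set to ALWAYS.
// clip() restricts the stencil write to matching pixels only.
float4 psTagMaterial(float2 uv : TEXCOORD0) : COLOR0
{
    float id = tex2D(albedoTex, uv).a;
    clip(0.5 / 255.0 - abs(id - thisMaterialID));
    return float4(0, 0, 0, 0);
}

// Pass 2 (per light, per material): stencil test EQUAL against the same
// reference value, so the heavy lighting shader only executes on pixels
// that survived pass 1.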

Quote:Original post by Wolf
Caosstec: this does not work, because the screenspace blur will be the same size throughout the image ... so objects that are far away would have the same blur as objects that are very near.

That color bleeding happens if you sample the 3x3 filter from the complete framebuffer. What I have done is to run an edge detection filter producing a 1-2 pixel wide edge. Then I run the blur pass where only the edge pixels are blurred, and the blur samples are taken only from the surrounding pixels that are on the edge mask. Of the samples that lie on the mask, only those closest to the current pixel are taken; it is easier and faster if this closest-sample selection is done with a fixed pattern instead of sorting.
This restricts the smoothing to the closest polygons, so no color bleeding happens from the polygons that are overlapped. The background will not bleed its color into the characters, but the character edges will still be smoothed.
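In shader terms it looks roughly like this (a simplified sketch only: hypothetical names, a small 4-sample pattern, and a depth tolerance here instead of the pattern-based closest-sample selection I mentioned):

sampler2D sceneTex;
sampler2D edgeMaskTex;      // 1 on detected edges, 0 elsewhere
sampler2D depthTex;         // linear view-space depth
float2    texelSize;
float     depthTolerance;   // small view-space delta

float4 psMaskedBlur(float2 uv : TEXCOORD0) : COLOR0
{
    float4 center      = tex2D(sceneTex, uv);
    float  onEdge      = step(0.5, tex2D(edgeMaskTex, uv).r);
    float  centerDepth = tex2D(depthTex, uv).r;

    float4 sum    = center;
    float  weight = 1.0;

    // 4-neighbourhood; a sample only contributes if it is on the edge mask
    // and not significantly farther from the camera than the centre pixel,
    // which keeps the background from bleeding into the characters.
    float2 offsets[4];
    offsets[0] = float2( texelSize.x, 0);
    offsets[1] = float2(-texelSize.x, 0);
    offsets[2] = float2(0,  texelSize.y);
    offsets[3] = float2(0, -texelSize.y);

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        float2 suv   = uv + offsets[i];
        float  valid = step(0.5, tex2D(edgeMaskTex, suv).r)
                     * step(tex2D(depthTex, suv).r, centerDepth + depthTolerance);
        sum    += tex2D(sceneTex, suv) * valid;
        weight += valid;
    }

    // Non-edge pixels keep their original colour.
    return lerp(center, sum / weight, onEdge);
}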
I remember seeing some ATI demos explaining leak reduction for Depth of Field that can be applied to DS AA as well.

Quote:Original post by Cypher19
That's exactly the first thing I mentioned in my post, and I'll be shocked if your solution looks as good as what hardware AA provides. I honestly think that it is the #1 worst solution to AA+DS in existence, and I find it ludicrous that ANYONE in the graphics industry actually takes it seriously, considering the results that a 3x3 blur gives compared to what hardware AA does.

I never said that the results are as good as the hardware AA output, but they are a lot better than not simulating AA at all. Deferred rendering seems a promising solution for handling next-generation graphics; forward rendering gets too slow as the number of effects and passes increases. For today, I greatly prefer seeing a very highly detailed world with lots of lights and effects but with some not-incredibly-nice (fake) AA, rather than amazing AA with medium-detail polygons and a few dynamic lights.
The blurring in my opinion produces good results; there is some over-blurring on the edges of the polygons, but in general the bleeding is not so bad.



Another way to approximate AA is to perform a separate pass when storing the color in the G-buffer: don't render the colors to a float texture, render them to the classic RGBA8 framebuffer with AA enabled, and then use that with DS as usual.
That way only the colors are anti-aliased; lights and effects are not, but lighting that isn't anti-aliased at the edges doesn't look that bad. The colors are what produce bad-looking results when they aren't anti-aliased.
If you combine the color AA with the edge detection filter + blurring, the results are quite OK.
Don't most renderers (like Direct3D on current hardware) use early Z rejection, where a pixel is discarded before any shading is done if the depth buffer says it's occluded?

So, if you sort all your objects front to back (which isn't that hard or computationally expensive), haven't you accomplished the main point of deferred shading? Of course it won't be as exact, but it cuts out most of the unnecessary shading.
