GLSL+Depth: What is true?

3 comments, last by RPTD 16 years, 10 months ago
There is something that bothers me because it does not make sense. In a GLSL fragment program I can either write the depth value myself or leave it to the fixed-function pipeline to compute it. Now somebody claims that you can first render a simple depth-only pass and then the complex shader, since thanks to the depth test the shader is only evaluated for the visible fragments. But this would require the depth comparison to happen "before" the fragment program runs. Yet the fragment program can change the depth, so the test would have to be done "after" the fragment program. In that situation, though, the entire speedup from an early depth-fail no longer holds. So my problem is: what is true?

1) The depth test is made "after" the fragment program, in which case optimizing with a depth pass beforehand doesn't help.
2) The depth test is made "before" the fragment program, in which case optimizing with a depth pass would help, but changing the depth in the fragment program would be ignored.

Life's like a Hydra... cut off one problem just to have two more popping out.
Leader and Coder: Project Epsylon | Drag[en]gine Game Engine

Short answer: Both.

Longer answer:

Early depth culling is an optimisation in the pipeline. Technically speaking, depth testing should come after the fragment program executes. However, if the fragment program doesn't adjust the depth of the fragment, then the result is the same whether the test runs before or after execution, so the test can safely be performed before the fragment program runs.

So, if your shader modifies the depth of a fragment, then early depth culling is turned off and the depth test is done after the fragment shader is evaluated.

On the other hand, if the shader doesn't touch the depth, then the early depth test can stay on and you get a potential speed-up.
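To illustrate (a minimal sketch, not from the original post): on typical hardware, merely writing to gl_FragDepth anywhere in the shader is enough to force the depth test after shader execution, even if the value written equals the interpolated depth.

```glsl
// Early depth culling can stay enabled: the depth is untouched,
// so the driver may test and reject the fragment before running this.
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```

```glsl
// Early depth culling is disabled: the shader writes gl_FragDepth,
// so the depth test has to wait until after the shader has run.
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    gl_FragDepth = gl_FragCoord.z + 0.001; // e.g. some custom depth offset
}
```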
I already assumed something like that could be the case but just wanted to get it confirmed first. I just read that Carmack did such an early depth pass in Doom 3 to avoid heavy pixel shaders. Since I'm planning to do deferred shading with complex geometry and material shaders, doing a depth pass first could give me a potential benefit there. Let's have a test. It can only get faster, can't it? :D
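For reference, a depth pre-pass like that can be sketched with a trivial fragment shader (my sketch, not from the thread): the first pass renders only depth, with color writes disabled on the application side via glColorMask, and the main pass then runs with depth writes off and glDepthFunc(GL_EQUAL) (or GL_LEQUAL) so heavy shaders only execute for visible fragments.

```glsl
// Depth-only pre-pass fragment shader: an empty main() is enough,
// since the fixed-function pipeline writes the interpolated depth.
// Color output is discarded anyway because the application disables
// color writes with glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE).
void main()
{
}
```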


Actually with deferred shading the first pass just writes out the attribute buffers, so you already get the "pre-z" performance win (since the shaders are extremely simple, and the second pass benefits from the already-filled depth buffer). There's no need to prime the z-buffer with deferred shading unless potentially you have a TON of overdraw, don't sort your objects and want to avoid the write bandwidth... I've never seen such a dire situation though.
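For context, the attribute-writing first pass mentioned above might look roughly like this in GLSL 1.x with multiple render targets; all varying, uniform, and layout choices here are illustrative, not from the post.

```glsl
// Hypothetical deferred-shading geometry pass: writes surface
// attributes out to multiple render targets (MRT) via gl_FragData.
// The shader is cheap, so the already-filled depth buffer from this
// pass gives the later lighting pass its "pre-z" benefit for free.
uniform sampler2D texDiffuse; // illustrative name

varying vec3 vNormal;   // view-space normal
varying vec3 vPosition; // view-space position
varying vec2 vTexCoord;

void main()
{
    gl_FragData[0] = texture2D(texDiffuse, vTexCoord);           // albedo
    gl_FragData[1] = vec4(normalize(vNormal) * 0.5 + 0.5, 0.0);  // packed normal
    gl_FragData[2] = vec4(vPosition, 1.0);                       // position
}
```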
Well, in this case I'll keep this depth pass as a sort of "maybe-test" in the back of my head then. I have to get this deferred shading working first anyway. Framebuffers on ATI are a madhouse :(


This topic is closed to new replies.
