RPTD

GLSL+Depth: What is true?


There is something which bothers me as it does not make sense. In a GLSL fragment program I can either write the depth value myself or leave it to the fixed-function pipeline to calculate it. Now somebody claims that you can first render a simple depth pass and then the complex shader, because thanks to the depth test the shader is only evaluated for the visible fragments. But this would require that the depth is compared "before" the fragment is processed by the fragment program. Yet the fragment program can change the depth, so the test would have to be done "after" the fragment program, and in that case the entire speedup from an early depth-fail does not hold. So my problem is now: what is true?

1) The depth test is made "after" the fragment program, in which case optimizing with a depth pass beforehand doesn't help.

2) The depth test is made "before" the fragment program, in which case optimizing with a depth pass would help, but changing the depth in the fragment program would be ignored.
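For reference, the case in question is a fragment program that overrides the interpolated depth, roughly like this (a minimal sketch; the offset value is purely illustrative):

```glsl
// GLSL 1.10-style fragment shader that writes its own depth.
// Because gl_FragDepth is assigned here, the hardware cannot know the
// final depth until after the shader has run.
void main()
{
    gl_FragColor = vec4( 1.0 );
    // depth now depends on shader output instead of the rasterized
    // polygon depth (gl_FragCoord.z is the interpolated value)
    gl_FragDepth = gl_FragCoord.z + 0.001;
}
```

If the assignment to gl_FragDepth is simply left out, the fragment keeps the interpolated polygon depth, which the hardware already knows before the shader executes.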

Short answer : Both.

Longer answer:

Early depth culling is an optimisation in the pipeline. Technically speaking the depth test should come after the fragment program has executed; however, if the fragment program doesn't adjust the depth of the fragment, then the result is the same before and after execution, so the test can safely be performed before the fragment program runs.

So, if your shader modifies the depth of a fragment, then early depth culling is turned off and the depth test is done after the fragment shader is evaluated.

On the other hand, if the shader doesn't touch the depth then the early depth test can stay on and you get a potential speed up.
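In other words, an arbitrarily expensive shader still benefits from early-z as long as it leaves depth alone. A sketch (the lighting math and varyings are illustrative, not from any particular engine):

```glsl
// Expensive per-pixel shader that is still early-z friendly:
// it reads gl_FragCoord if it likes, but never writes gl_FragDepth,
// so the driver can keep early depth culling enabled.
varying vec3 normal;    // illustrative inputs
varying vec3 lightDir;

void main()
{
    vec3 n = normalize( normal );
    vec3 l = normalize( lightDir );
    float diffuse = max( dot( n, l ), 0.0 );

    // ... imagine many more instructions here ...

    gl_FragColor = vec4( vec3( diffuse ), 1.0 );
    // no gl_FragDepth write -> depth test can run before all of this
}
```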

I already assumed something like this could be the case but just wanted to get it confirmed first. I just read that Carmack did this early depth pass in Doom 3 to avoid heavy pixel shaders. Since I'm planning to do deferred shading with complex geometry and material shaders, doing a depth pass first could give me potential benefits there. Let's have a test. It can only get faster, can't it? :D
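For what it's worth, the shaders for such a depth-only pre-pass are about as cheap as they get; the application side would also mask off color writes (e.g. with glColorMask). A sketch, assuming plain fixed-function-style transforms:

```glsl
// --- vertex shader: transform only ---
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// --- fragment shader: intentionally empty ---
// No color output needed (color writes masked off on the CPU side),
// and crucially no gl_FragDepth write, so the depth buffer is filled
// from the rasterized depth and early-z stays enabled for later passes.
void main()
{
}
```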

Actually with deferred shading the first pass just writes out the attribute buffers, so you already get the "pre-z" performance win (since the shaders are extremely simple, and the second pass benefits from the already-filled depth buffer). There's no need to prime the z-buffer with deferred shading unless potentially you have a TON of overdraw, don't sort your objects and want to avoid the write bandwidth... I've never seen such a dire situation though.
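The attribute (G-buffer) pass described above is itself a simple shader, which is why it already behaves like a pre-z pass. A sketch using MRT via gl_FragData (the buffer layout and varying names are illustrative assumptions):

```glsl
// Geometry pass of a deferred renderer: cheap shader that fills the
// G-buffer. Assumes two color attachments are bound (MRT).
varying vec3 normal;
varying vec3 position;  // e.g. view-space position

void main()
{
    gl_FragData[0] = vec4( normalize( normal ), 1.0 );  // normal buffer
    gl_FragData[1] = vec4( position, 1.0 );             // position buffer
    // The depth buffer is filled as a side effect, so the expensive
    // lighting pass afterwards already runs against a primed z-buffer.
}
```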

Well, in this case I'll keep this depth pass as a sort of "maybe-test" in the back of my head then. I have to get this deferred shading working first anyway. Framebuffers on ATI are a pain :(
