kuroioranda

Linear depth buffer question


I'm trying to write an outline shader in GLSL to use as a post-processing step, but I'm having some problems. The non-linear nature of z-buffers is causing a lot of false edges in my scene, especially close to the camera, so I'm looking for a way to either render a linear z-buffer or somehow derive a linear value from the z-buffer. Does anybody have experience with either approach? If possible I'd rather derive a linear z-value in the outline shader from the z-buffer value, but since the z-buffer holds z/w and w is lost by the time the post-processing step runs, I don't know if that is possible.
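For context, a minimal sketch of the kind of post-process edge detection being described, assuming the scene's depth buffer is bound as a texture; u_depthTex, u_texelSize, and the threshold value are placeholder names, not from the original post:

// Hypothetical sketch: edge detection on raw (non-linear) depth values.
uniform sampler2D u_depthTex;  // scene depth bound as a texture (assumed name)
uniform vec2 u_texelSize;      // 1.0 / screen resolution (assumed name)

void main()
{
    vec2 uv  = gl_TexCoord[0].xy;
    float d  = texture2D(u_depthTex, uv).r;                            // raw z/w
    float dx = texture2D(u_depthTex, uv + vec2(u_texelSize.x, 0.0)).r;
    float dy = texture2D(u_depthTex, uv + vec2(0.0, u_texelSize.y)).r;
    // Comparing raw z/w values: the same view-space distance produces a much
    // larger depth delta near the camera, which is what causes false edges.
    float edge = step(0.001, abs(dx - d) + abs(dy - d));
    gl_FragColor = vec4(vec3(1.0 - edge), 1.0); // black outlines on white
}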

I found a way around the problem by manually specifying my z-buffer value with gl_FragDepth when I render the scene. Does anybody have any idea what sort of performance hit that could cause (since I suspect I'm now bypassing a bunch of fail-fast fragment discarding optimizations)?
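A sketch of what that looks like, assuming view-space depth is interpolated from the vertex shader; v_viewDepth, u_near, and u_far are made-up names for illustration:

// Hypothetical sketch: writing a linear value to the depth buffer via
// gl_FragDepth during the scene pass.
varying float v_viewDepth; // positive view-space distance, from the vertex shader
uniform float u_near;
uniform float u_far;

void main()
{
    // Remap view-space depth linearly into [0, 1] between the clip planes.
    gl_FragDepth = clamp((v_viewDepth - u_near) / (u_far - u_near), 0.0, 1.0);
    gl_FragColor = vec4(1.0); // scene shading would go here
}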

You can simply compute the linear depth with this function:

uniform float Near; // distance to the near clipping plane
uniform float Far;  // distance to the far clipping plane

float LinearDepth(in float depth) // depth: raw z-buffer value in [0, 1]
{
    return (2.0 * Near) / (Far + Near - depth * (Far - Near));
}

Near is the distance to the near clipping plane, Far the distance to the far clipping plane. The resulting linear depth is in the range 0 to 1.

I used it for my outline shader. You can find it at http://3d.benjamin-thaut.de
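For example, in the post-processing pass it might be used like this; the sampler name is a placeholder, and Near/Far are the same uniforms as above:

// Hypothetical usage in the outline pass: sample the raw depth and linearize it.
uniform sampler2D u_depthTex; // assumed name for the scene depth texture
uniform float Near;
uniform float Far;

float LinearDepth(in float depth)
{
    return (2.0 * Near) / (Far + Near - depth * (Far - Near));
}

void main()
{
    float raw     = texture2D(u_depthTex, gl_TexCoord[0].xy).r;
    float linearZ = LinearDepth(raw); // roughly uniform steps in view space
    gl_FragColor  = vec4(vec3(linearZ), 1.0); // visualize as grayscale
}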

Yeah, you can reconstruct a linear Z value from the z-buffer using Ingrater's method. Just keep in mind that you will still have all the precision issues associated with non-linear Z (most of the precision is dedicated to the area close to your near-clip plane) unless you actually render out a linear depth buffer.

As for manually specifying the depth in the shader: yeah, this will break hierarchical-Z optimizations that are used to cull a fragment before the pixel shader runs (since the output depth value could be any arbitrary number after the shader runs).
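A minimal sketch of that last alternative: render linear depth out to a color target (via MRT or a separate pass) so the hardware depth buffer, and with it hierarchical-Z, stays untouched. The names v_viewDepth and u_far are assumptions, not from the post:

// Hypothetical sketch: write linear depth to a color attachment instead of
// gl_FragDepth, so hierarchical-Z culling is preserved.
varying float v_viewDepth; // positive view-space distance, from the vertex shader
uniform float u_far;

void main()
{
    // Depth testing still uses the regular z-buffer; this just stores a
    // linear copy for the post-process pass to read.
    gl_FragColor = vec4(vec3(v_viewDepth / u_far), 1.0);
}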

Quote:
(Since I suspect I'm now bypassing a bunch of fail-fast fragment discarding optimizations)?


Depth testing will come after the shader, since the shader allows you to set the depth. If that's what you meant.

MJP, Ingrater:

It makes sense that if the shader doesn't supply the z-value then the card can optimize the fragment shader away completely for that pixel, so thanks for the info :)!

Does it matter what values I supply for the near and far planes when I convert the z-buffer value to a linear value? And does anybody know what the 2.0 term is for? I'm just trying to understand exactly what's going on in that expression.
