
What is the demerit of using a big Zfar value?


emt    134

In a scenario where I am rendering only two cubes, one at z = 10 units and the other at z = 100 units,

what difference does it make if I set

  • Zfar = 500 units
  • Zfar = 1000 units

Since the depth test will only compare the Z values of overlapping fragments and will render the cube at z = 10 units in front, will both of these Zfar values provide the same level of rendering optimization in this case, or will they differ?

 

 

Brother Bob    10344

Ultimately, you have a fixed depth buffer precision that has to be distributed over the full Z-range from the near clip plane to the far clip plane. The further apart you put the clip planes, the lower the depth buffer precision and, consequently, the lower your ability to distinguish objects that are close to each other.

 

However, because of the way the precision is distributed, it is the ratio of distances that matters. In practice, if the far clip plane is sufficiently large, then everything is determined by the distance relative to the near clip plane alone. So the far clip plane often doesn't matter much, but the near clip plane is extremely important to get correct.

 

So in practice it often doesn't matter much in situations such as the one in your example, although there is a tiny difference: the larger far plane distance is slightly worse.
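
To make the ratio argument concrete, here is a minimal sketch (my own illustration, not part of the original post) of the window-space depth that a standard OpenGL-style perspective projection assigns to an object at eye-space distance z. For zFar much larger than zNear, the leading factor zFar/(zFar - zNear) is approximately 1, so the result collapses to 1 - zNear/z and zFar effectively drops out:

#include <stdio.h>

/* Window-space depth (0..1) of an object at eye-space distance z under a
 * standard OpenGL-style perspective projection. For zFar >> zNear the factor
 * zFar/(zFar - zNear) is ~1, so this collapses to 1 - zNear/z. */
static double window_depth(double z, double zNear, double zFar)
{
    return zFar / (zFar - zNear) * (1.0 - zNear / z);
}

int main(void)
{
    printf("%f\n", window_depth(10.0, 1.0,    100.0));   /* ~0.9091 */
    printf("%f\n", window_depth(10.0, 1.0, 100000.0));   /* ~0.9000 */
    return 0;
}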

mhagain    13430

See http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html for more info on this.  As Brother Bob correctly pointed out, the ratio between zNear and zFar is the important thing; with the figures you've given you're going to maintain decent precision unless you set your zNear to something stupidly low like 0.01 (assuming a 24-bit depth buffer).

 

You mentioned optimization, so it's important to state that this has little to do with performance (not nothing; I'll touch on this later).  The values of zNear and zFar just go into making the projection matrix, and each vertex that you pass through the rendering pipeline gets the same matrix multiplication, so it's effectively the same question as asking if multiplying an arbitrary number by 500 is faster than multiplying it by 1000.
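
As an illustration of that point, here's a rough sketch (assuming a gluPerspective-style, column-major matrix with a symmetric frustum; not code from this thread) of where zNear and zFar actually end up. They only determine two constants of the projection matrix, which is built once, and every vertex then gets the same 4x4 multiply regardless of their values:

#include <math.h>

/* Illustrative perspective matrix builder (column-major, OpenGL clip
 * conventions, symmetric frustum). zNear and zFar only determine the two
 * constants written to m[10] and m[14]; the per-vertex work is the same
 * 4x4 multiply whatever their values are. */
static void perspective(float m[16], float fovy_deg, float aspect,
                        float zNear, float zFar)
{
    const float f = 1.0f / tanf(fovy_deg * 3.14159265f / 360.0f);

    for (int i = 0; i < 16; ++i)
        m[i] = 0.0f;

    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);       /* only here ...               */
    m[14] = 2.0f * zFar * zNear / (zNear - zFar);  /* ... and here do they appear */
    m[11] = -1.0f;
}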

 

This assumes a classical rendering pipeline in which everything is carried out in the same order as specified in the pipeline diagrams you'll find online and in publications, i.e. with the depth test carried out after per-fragment/pixel ops.  For the past 10-odd years (give or take) that's not actually always the case, and hardware will instead often be able to run an initial depth test before the per-fragment/pixel stages (you can do stupid - or sometimes necessary - things that disable this optimization).  It's important to realize that this is an optimization in hardware, so being able to skip the per-fragment/pixel ops at an early stage can give you higher performance.

 

Where this becomes relevant is that if you choose bad values for zNear and zFar and get poor depth-buffer precision, you'll likely get Z-fighting (poor precision is one possible cause of it), unless your geometry is spaced far enough apart that the reduced precision doesn't matter - nobody ever said any of this was going to be straightforward!  When Z-fighting does happen, the early-Z optimization performed by your hardware may not be as efficient and may not reject as many fragments/pixels as it otherwise would.

 

So like I said, in your case, and assuming a 24-bit depth buffer and a sane value for zNear, none of this is going to be a problem for you.  With the zFar values you're using you're going to maintain reasonable precision even at the far end of your view frustum.  But it is something to be aware of for the longer term.

Brother Bob    10344

mhagain said:
As Brother Bob correctly pointed out, the ratio between zNear and zFar is the important thing [...]

Actually, that is only half the truth, and, in my opinion, the wrong half of the truth. It is not what I said, but, to be honest, I wasn't very explicit about what I meant either. What matters for precision is the ratio between the near clip plane and the objects you draw; the far clip plane plays a very small role, if not a completely insignificant one.

 

Let's say you have zNear=1 and zFar=100, and you draw the object at around z=10. Plug those values into the page you linked and it will tell you what precision you have at z=10, which is what matters if you want to draw an object at that depth. I used 10 depth buffer bits just to get some reasonably scaled values (plus the fact that 2^10 ≈ 1000, so the depth resolution is easily related to percentages, i.e. you can say how large a percentage of the precision is distributed where); the depth resolution at z=10 is 0.095.

 

Now change zFar to something several orders of magnitude larger, say 100000, and the depth resolution at z=10 becomes 0.096. So the far clip plane was pushed away by three orders of magnitude, and the precision changed by roughly 1% for the worse. Add another three orders of magnitude to the far clip plane, and the precision is virtually identical; the change is in the 7th-8th digit.

 

Now let's look at the near clip plane instead. Double the near clip plane distance, and watch the depth resolution at z=10 improve by roughly a factor of two.

 

So once the far clip plane is about 1000 times the near clip plane, its actual value is virtually meaningless and has almost no effect on precision. The near clip plane, and the ratio between the near clip plane and the objects being drawn, is everything.
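
For anyone who wants to reproduce these numbers, here is a small sketch (mine, not Brother Bob's) using the standard approximation for the eye-space size of one depth-buffer step, derived from the perspective depth mapping d(z) = zFar/(zFar - zNear) * (1 - zNear/z). The figures come out close to the ones quoted above; small differences are down to rounding and to the exact formula used by the calculator on the linked page:

#include <math.h>
#include <stdio.h>

/* Approximate eye-space size of one depth-buffer step at distance z:
 * one step of 1/2^bits in the buffer corresponds to roughly
 *   dz = z^2 * (zFar - zNear) / (zNear * zFar * 2^bits). */
static double depth_step(double z, double zNear, double zFar, int bits)
{
    return z * z * (zFar - zNear) / (zNear * zFar * pow(2.0, bits));
}

int main(void)
{
    printf("%f\n", depth_step(10.0, 1.0,    100.0, 10)); /* ~0.097                     */
    printf("%f\n", depth_step(10.0, 1.0, 100000.0, 10)); /* ~0.098, about 1% worse     */
    printf("%f\n", depth_step(10.0, 2.0,    100.0, 10)); /* ~0.048, with zNear doubled */
    return 0;
}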

