Avoiding Z-Fighting

Hi, and sorry for this long post. I believe the z-fighting problem can never be totally avoided on current hardware architectures. Because of the z-buffer's discrete nature, no matter how many bits of precision are dedicated to it and how much its accuracy is increased, one can always reach a threshold below which objects at nearly the same depth can no longer be distinguished depth-wise. It's not possible to map an infinite continuous space to a finite discrete space such as the z-buffer. Say you have a 64-bit z-buffer (a huge z-buffer which, by the way, is not available on today's hardware) and want to render a scene that is 256 meters deep, using an orthographic projection. Objects whose depths differ by less than 256 meters / 2^64 == 2^(-56) meters still cause z-fighting. Perspective projection suffers from the same problem, but the calculations are more involved because of the depth resolution's non-uniform distribution. Generally, moving the zNear and zFar clipping planes has roughly the effect of losing log2(zFar/zNear) bits of precision. So as zNear approaches zero we lose a GREAT deal of precision, since that function approaches infinity. Moving the far plane to infinity, on the other hand, normally loses surprisingly little precision: when the distance to the far plane is already significantly greater than the near plane (which is often the case), moving it further barely changes the zFar/zNear ratio, and hence not much precision is lost.

I've found the following methods useful. I would greatly appreciate your new ideas on how to resolve the z-fighting problem, or your suggestions on the following, especially on 4:

1) The best way (though some might argue) to avoid this problem is to totally avoid objects that are very close to each other. This method is especially applicable to static scenes where objects don't move around, so the possibility of them colliding and hence crossing over the threshold is minimal. If the objects don't move, it's possible to find a threshold above which z-fighting can be avoided, as described above.

2) Another simple but effective method is to move the near plane as far out as possible. This decreases the zFar/zNear ratio and earns us more precision, as discussed above.

3) BSP trees: BSP trees can be used to render the scene in back-to-front or front-to-back order and hence totally eliminate the need for a z-buffer. The main problem with rendering in back-to-front order is the massive overdraw of pixels, where time is wasted drawing objects that will be overdrawn later. Plus, when using a z-buffer, a pixel can be culled (discarded) as soon as its depth is known, which makes it possible to skip the entire process of lighting and texturing a pixel that would not be visible anyway. This kind of early z rejection in the pipeline cannot be exploited when the z-buffer is disabled, which is usually the case when using BSP trees.

4) The most general method (meaning that it doesn't need any specific knowledge of the scene) to achieve higher precision is to render the scene in several passes. The scene is broken into several, say n, non-overlapping partitions that do not interfere in z. These partitions are then rendered from back to front, each in a distinct rendering pass, and the depth buffer is cleared before each pass. This way the precision of the entire depth buffer is made available to each partition. This method sacrifices speed in favour of more precision. (See the sketch at the end of this post.)

5) The fifth method, which needs specific knowledge of the coplanar objects in the scene, is to use two projection matrices. First, the programmer specifies which objects are coplanar and are candidates for z-fighting, then uses one of those projection matrices to render the coplanar object that should appear "further", and the other, a biased version of the first, to render the "closer" one. This method is especially useful for rendering posters, shadows and decals. What makes these examples all fall in the same group is our knowledge of which objects cause z-fighting. For example, we know which poster belongs to which wall, so we render all those walls first, switch to the biased projection matrix, and render all the posters in front of them.

6) One other method is to calculate the nearest and farthest vertices to the camera position each frame, and set the near and far planes to those points. This needs manipulating the projection matrix on every iteration of the rendering loop and may not be the best option performance-wise.

Any ideas/suggestions are welcome. Thanks for investing the time and reading this long post.
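Here's a minimal sketch of method 4 in OpenGL. It assumes a hypothetical drawObjectsInDepthRange() that draws only the objects falling inside a given view-space depth slice; the field of view and aspect ratio are placeholders:

// Render back to front through several depth slices, clearing the
// depth buffer between slices so each slice gets the buffer's full
// precision. bounds[] holds numPartitions + 1 increasing view-space
// depths, e.g. { 1.0f, 100.0f, 10000.0f } for two partitions.
void renderWithZPartitions(const float* bounds, int numPartitions)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (int i = numPartitions - 1; i >= 0; --i)       // back to front
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, 4.0 / 3.0, bounds[i], bounds[i + 1]);
        glMatrixMode(GL_MODELVIEW);

        glClear(GL_DEPTH_BUFFER_BIT);                  // keep colour, reset depth
        drawObjectsInDepthRange(bounds[i], bounds[i + 1]);   // hypothetical
    }
}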
Hi, I'll add my 2c...

3) All you're avoiding here is the projective transformation - you're still working with finite representations of numbers in world space...

4) Z-partitioning has been used in several renderers already to combat z-precision issues. It works relatively well, but it's just a hybrid between object sorting and fragment hidden surface removal. Degenerate cases can be constructed here just like in any other method.

My point is that the problem simply cannot be solved completely with inexact arithmetic. Thus there are all sorts of tricks that we can use to get an arbitrary amount of precision from the depth buffer, but we cannot get exact real number representations without using infinite memory (although we can do better with rationals).

Still, moving forward, the best solution for more complex scenes is probably just to increase depth-buffer precision. For the 1% of applications that simply cannot be made to work on current hardware, hybrid techniques like those you have described can be employed.
Quote:Original post by AndyTX
4) Z-partitioning has been used in several renderers already to combat z-precision issues. It works relatively well, but it's just a hybrid between object sorting and fragment hidden surface removal. Degenerate cases can be constructed here just like in any other method.


I'm currently trying to implement the z-partitioning method, as I think it has the best flexibility among those choices: simply increase the number of render passes where more precision is needed. I was mostly thinking of generalizing the idea of sorting relative to coordinate-system axes, as we are most used to. Here, the idea is to sort objects (or vertices) relative to an arbitrary axis, based on their projection onto that axis. That's exactly what we do when we sort some vertices/points relative to, say, the x-axis: we actually project those points onto the x-axis by eliminating the y and z coordinates. Here the direction relative to which I want to sort is the camera view direction. This way objects will be sorted according to how 'deep' they lie in the scene. (A small sketch of this follows.)
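Here's a minimal sketch of that projection-based sort (my own illustration; the Object type holding a world-space position is assumed):

#include <algorithm>
#include <vector>

struct Object { float x, y, z; };   // assumed: world-space position

// Scalar projection of (position - eye) onto the unit view direction:
// how 'deep' the object lies along the camera's line of sight.
float depthAlongView(const Object& o, const float eye[3], const float dir[3])
{
    return (o.x - eye[0]) * dir[0]
         + (o.y - eye[1]) * dir[1]
         + (o.z - eye[2]) * dir[2];
}

void sortBackToFront(std::vector<Object>& objects,
                     const float eye[3], const float dir[3])
{
    std::sort(objects.begin(), objects.end(),
              [&](const Object& a, const Object& b)
              { return depthAlongView(a, eye, dir) > depthAlongView(b, eye, dir); });
}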

After giving it some thought and working out the formulas, I came to the conclusion that it's actually not meant to be done that way. That's exactly why there is a world-to-camera space transformation: to point the camera down the negative z axis (as OpenGL does) and ease these depth computations (such a discovery! :) ).

But one question remains. We may be forced to break some objects into parts to partition a portion of space. Considering that each partition clears the z-buffer in its corresponding rendering pass, I was wondering whether those 'boundary' planes can cause artifacts?

Thanks
Quote:Original post by Ashkan
But one question remains. We may be forced to break some objects into parts to partition a portion of space. Considering that each partition clears the z-buffer in its corresponding rendering pass, I was wondering whether those 'boundary' planes can cause artifacts?

Yes, they can. In particular, any polygons that cross a boundary will be problematic. The method degenerates into fragment sorting (or at the very least, polygon sorting) in the worst case. Thus it's unsuitable for some scenes/applications, but works well for others (e.g. space scenes).
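To make the degenerate case concrete, here is a hypothetical classifier (my own sketch) that assigns an object's bounding sphere to a depth slice and flags anything straddling a boundary:

// Returns the index of the slice that wholly contains the object,
// or -1 if its bounding sphere crosses a slice boundary and would
// have to be split (or otherwise special-cased).
int classifyAgainstSlices(float viewDepth, float radius,
                          const float* bounds, int numPartitions)
{
    for (int i = 0; i < numPartitions; ++i)
    {
        if (viewDepth - radius >= bounds[i] &&
            viewDepth + radius <= bounds[i + 1])
            return i;
    }
    return -1;   // the problematic, boundary-crossing case
}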
In perspective projections z-fighting can be largely avoided by linearizing the depth (check Dunlop's article http://www.mvps.org/directx/articles/linear_z/linearz.htm).
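To illustrate the difference, here is a small C++ sketch of my own, using the D3D-style depth mapping that Ashkan derives further down the thread:

#include <cstdio>

int main()
{
    const double n = 10.0, f = 10000.0;   // near / far planes
    for (double zv = n; zv <= f; zv *= 10.0)
    {
        // Standard depth after the perspective divide (w = Zv):
        double hyperbolic = (f / (f - n)) * (zv - n) / zv;
        // Linearized depth: varies uniformly with view-space Z.
        double linear = (zv - n) / (f - n);
        std::printf("Zv = %8.1f  hyperbolic = %.6f  linear = %.6f\n",
                    zv, hyperbolic, linear);
    }
    return 0;
}

Note how the hyperbolic mapping spends roughly 90% of its range on the first 1% of the scene's depth, which is exactly why distant surfaces fight.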
If the z-buffer uses the same format as the vertex-shader (64-bit floats), then I do not see how it can be the scapegoat.
Quote:Original post by AP
If the z-buffer uses the same format as the vertex-shader (64-bit floats), then I do not see how it can be the scapegoat.

Does it? I'm not sure a 64-bit depth buffer is even available yet. Anyway, that's not my point. If my mention of a 64-bit depth buffer in the first post is what you mean, I was just trying to emphasize that one can always pass the aforementioned threshold, no matter how precise the depth buffer becomes.

A 24-bit depth buffer is more than enough for me; I find the problem theoretically interesting. P.S. I need to support ancient cards with 16-bit depth buffers, which can really become troublesome at times. That is where I want to put this effort to good use.

Quote:Original post by Sergi
In perspective projections z-fighting can be largely avoided by linearizing the depth (check Dunlop's article http://www.mvps.org/directx/articles/linear_z/linearz.htm).


Thank you Sergi, it was indeed an interesting read, and I came up with some interesting results. So, to share the info with the community, here are my 2 cents:

First, that article uses row-major matrices. Also, contrary to D3D, OpenGL maps the near plane to z = -1. These are the first things to keep in mind. Following the equations in that article, I came up with the relation below to compute Zs, the depth of each point in the homogeneous representation of screen space; multiply it by 1/w to get the post-perspective-division depth. Zv here is the depth of the point in view space:

D3D:

    Zs = ( f / (f - n) ) * ( Zv - n )

with w = Zv, as pointed out in the article.


Here is the OpenGL equivalent, derived from the projection matrix presented in "Real-Time Rendering":

    Zs = ( -Zv * (f + n) - 2 * f * n ) / ( f - n )

with w = -Zv


Substituting n = 10 and f = 10000 into the first equation, as in the article's second table, and choosing the correct Zv, yields exactly the same pre-perspective-division screen-space (a.k.a. homogeneous-representation) depths.
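As a quick sanity check, a minimal C++ sketch of my own (Zv is positive toward the screen in D3D view space and negative in OpenGL's):

#include <cstdio>

int main()
{
    const double n = 10.0, f = 10000.0;
    const double zvD3D = 100.0;    // D3D view space: +z into the screen
    const double zvGL  = -100.0;   // OpenGL: camera looks down -z

    double zsD3D = f / (f - n) * (zvD3D - n);                  // w = Zv
    double zsGL  = (-zvGL * (f + n) - 2.0 * f * n) / (f - n);  // w = -Zv

    std::printf("D3D:    Zs = %f, Zs/w = %f\n", zsD3D, zsD3D / zvD3D);  // ~0.9009
    std::printf("OpenGL: Zs = %f, Zs/w = %f\n", zsGL,  zsGL  / -zvGL);  // ~0.8018
    return 0;
}

The OpenGL result lands in [-1, 1]; after the usual [-1, 1] to [0, 1] viewport remapping, (0.8018 + 1) / 2 ≈ 0.9009, which matches the D3D value.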
Is it possible for one triangle to have a z-fighting problem? For example, a triangle with coordinates like:
glBegin(GL_TRIANGLES);
    // All three vertices share y = 5, so the triangle is seen
    // exactly edge-on when looking along the +z axis.
    glVertex3f(0.0f, 5.0f, 5.0f);
    glVertex3f(10.0f, 5.0f, 5.0f);
    glVertex3f(5.0f, 5.0f, 0.0f);
glEnd();

If the view direction is (0,0,1), meaning we look along the +z direction: since the three points have the same y coordinate, what will this show? A smooth line? Actually no, there are many discontinuities. Is this also a z-fighting problem?
Quote:Original post by Sergi
In perspective projections z-fighting can be largely avoided by linearizing the depth (check Dunlop's article http://www.mvps.org/directx/articles/linear_z/linearz.htm).


Thanks Sergi for the link, it was extremely helpful, since I'm currently dealing with huge terrains (visibility up to 10 km, and later much more, for a seamless transition from ground to space) and I was starting to experience these Z-buffer artifacts.

VladR My 3rd person action RPG on GreenLight: http://steamcommunity.com/sharedfiles/filedetails/?id=92951596
