Depth buffer really confuses me

7 comments, last by frob 8 years, 4 months ago

I am learning OpenGL on this site. I know what the depth buffer is used for, but I am confused about how it works.

In the tutorial, there is a statement saying:

Once enabled, OpenGL automatically stores fragments' z-values in the depth buffer if they pass the depth test, and discards fragments if they fail the depth test.

and

The depth buffer contains depth values between 0.0 and 1.0, and it compares its contents with the z-values of all the objects in the scene as seen from the viewer.

and

By default the depth function GL_LESS is used, which discards all fragments that have a depth value higher than or equal to the current depth buffer's value.

I got some questions on this:

1st - Once depth testing is enabled, what is the first fragment compared against if the depth buffer is still empty?

2nd - Does the depth buffer compare the fragment's z-value to the object's z-value relative to eye space?

3rd - If I am comparing the fragment's value to the current depth value, is this current value from the previous fragment, or is it the z-value of objects relative to eye space?

Ugh, I am so confused.

Please enlighten me. Thanks

ooookkkkkaaaayyy

1st - Once depth testing is enabled, what is the first fragment compared against if the depth buffer is still empty?

Although it is configurable, you generally use the depth buffer to discard writes of fragments for which a "closer" fragment has already been written. At the beginning of the frame you usually clear the depth buffer to a known value; by default that is the "farthest" depth possible (1.0).
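A minimal sketch of that typical setup in C++/OpenGL (the 1.0 clear depth is already the default; it is shown explicitly here only for clarity):

    // one-time setup
    glEnable(GL_DEPTH_TEST);   // turn depth testing on
    glDepthFunc(GL_LESS);      // keep the fragment if it is closer than what is stored
    glClearDepth(1.0);         // "farthest" possible value; 1.0 is the default anyway

    // at the start of every frame
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // reset color and depth

So the answer to the first question is: the very first fragment of the frame is compared against that clear value (1.0 with GL_LESS), which it will normally pass.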
2nd - Does the depth buffer compare the fragment's z-value to the object's z-value relative to eye space?

The depth buffer itself compares nothing; it just stores values. You can configure what you write into the buffer in your shaders, but generally the depth written is not the eye-space depth, it's a post-projection depth scaled, nonlinearly, into the 0 to 1 range.
3rd - If I am comparing the fragment's value to the current depth value, is this current value from the previous fragment, or is it the z-value of objects relative to eye space?

The depth test will be against the current fragment being considered for rasterization and the content of the depth buffer for that fragment. The depth buffer is written on successful rasterization of a fragment (unless you disable this behavior), so it's always comparing the current fragment depth (again, not usually eye-space depth) with the value of the previous fragment that was successfully written.
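In rough, C++-flavored pseudocode (the buffer names and indexing here are made up purely for illustration, not a real API), the per-fragment logic with GL_LESS looks something like this:

    // conceptual depth test for one fragment at window position (x, y), assuming GL_LESS
    float stored = depthBuffer[y][x];            // previous winner, or the clear value (1.0)
    if (fragmentDepth < stored) {                // the depth test
        depthBuffer[y][x] = fragmentDepth;       // depth write (skipped if glDepthMask(GL_FALSE))
        colorBuffer[y][x] = fragmentColor;       // fragment survives and is written/blended
    }
    // otherwise the fragment is discarded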


The depth buffer itself compares nothing; it just stores values. You can configure what you write into the buffer in your shaders, but generally the depth written is not the eye-space depth, it's a post-projection depth scaled, nonlinearly, into the 0 to 1 range.

If you want to learn more about this, look here:

http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html

-potential energy is easily made kinetic-

Just reiterating and rewording:

The depth buffer usually works as a shortcut to see if something should be drawn or discarded. It has several options:

* NEVER (don't draw)

* ALWAYS (always draw)

* EQUAL (draw if the values are exactly equal)

* NOTEQUAL (draw if the values are different and store this)

* LESS (draw if the value is smaller than what exists, and store this new smaller value)

* LEQUAL (draw if the value is smaller or equal to what exists, and store the new value)

* GREATER (you get it...)

* GEQUAL (Greater or equal)

Generally games will not explicitly clear the depth buffer. Instead they set the flag to ALWAYS and draw their skybox or draw a distant plane, thus obliterating whatever was stored and setting a new maximum depth.
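A sketch of that trick, where drawSkybox() stands in for whatever draws your sky or distant plane (the name is hypothetical, not a real GL call):

    glDepthFunc(GL_ALWAYS);   // every sky fragment passes and overwrites the stored depth
    drawSkybox();             // covers the whole screen, leaving "far" depths behind
    glDepthFunc(GL_LESS);     // restore normal testing for the rest of the scene

(As discussed further down the thread, on modern hardware an explicit clear is often the cheaper option.)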

The depth buffer can be used for tasks beyond simple depth testing. You can use depth images, or pre-computed depth fields, to mix and match 3D worlds with pre-rendered 2D images by drawing a pre-rendered image and telling the depth buffer what pre-computed depths it represents. You can use depth buffers to help compute fog, atmospheric scattering, or depth-of-field distortions.

As for how the depth number is determined, there are many methods. You can change the values that get used inside your graphics shaders or compute shaders.

Because of the magic of floating point, smaller numbers are more precise than bigger numbers. The sliding scale of the decimal point (hence "floating" point) means the tiniest numbers have the greatest precision. Each time the exponent grows, the absolute precision is cut in half. Floating-point precision operates on a logarithmic scale.

That means that the nearest items usually have high precision, but distant items can suffer from "z-fighting", where planes that are touching each other will shimmer and shift between the objects as the camera and models move.

To counter the sliding precision, people have come up with many different ways to compute the depth values.

You can use the simple calculation of (object depth / view frustum depth). That is normally the default that happens if you don't provide a custom value. Some programs will store the inverse of that distance, termed a "w-buffer". Some programs will use a logarithmic z value, basically reversing the logarithmic nature of floating point and turning it back into a roughly linear scale. Some programs will use different algorithms that suit them better.
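Illustrative sketches of those alternatives (the function names and exact formulas here are simplified examples, not anyone's official implementation):

    #include <cmath>

    // linear: fraction of the way through the view volume
    float linearDepth(float z, float nearZ, float farZ) {
        return (z - nearZ) / (farZ - nearZ);
    }

    // reciprocal: precision concentrated near the camera (the usual perspective-style mapping)
    float reciprocalDepth(float z, float nearZ, float farZ) {
        return (1.0f / z - 1.0f / nearZ) / (1.0f / farZ - 1.0f / nearZ);
    }

    // logarithmic: roughly even precision across the whole range
    float logDepth(float z, float nearZ, float farZ) {
        return std::log(z / nearZ) / std::log(farZ / nearZ);
    }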

The value that you store and compare against is up to you.


Generally games will not explicitly clear the depth buffer. Instead they set the flag to ALWAYS and draw their skybox or draw a distant plane, thus obliterating whatever was stored and setting a new maximum depth.

Are you sure this is still true? With all modern GPUs having Hi-Z, I know at least ATI (AMD) has fast Z clear, IIRC using the Hi-Z buffer. I would think drawing the skybox last could potentially save a lot of fill, depending on how much of the sky is visible.

For more info on how the z-buffer is actually used in modern GPUs, look here:

http://developer.amd.com/wordpress/media/2012/10/Depth_in-depth.pdf

edit - and if you're still confused as to what gets compared ("standard" operation):

1. Clear the z-buffer.

2. Draw stuff with depth buffering enabled.

2a. If a fragment passes the depth test, replace both the framebuffer and z-buffer data with the fragment's color and z data.

3. Draw stuff again (same 2D position as the last draw).

3a. If a fragment passes the depth test, replace both the framebuffer and z-buffer data with the fragment's color and z data.

4. Repeat as many times as you need.

(You of course don't need to draw in the same place on the screen consecutively; that was just for this example.)
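Mapped to OpenGL calls, that sequence is roughly the following, where drawFirstBatch() and drawSecondBatch() are placeholders for your own draw calls:

    glEnable(GL_DEPTH_TEST);                              // depth buffering enabled (step 2)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // step 1: clear color and z-buffer
    drawFirstBatch();    // step 2a happens per fragment: pass -> color and z are written
    drawSecondBatch();   // steps 3/3a: tested against whatever the first batch left behind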

-potential energy is easily made kinetic-


Are you sure this is still true? With all modern GPUs having Hi-Z, I know at least ATI (AMD) has fast Z clear, IIRC using the Hi-Z buffer. I would think drawing the skybox last could potentially save a lot of fill, depending on how much of the sky is visible.

Ah, the joys of constantly-changing graphics hardware.

Looks like yet another change, where an older best practice is replaced.

I'm glad I'm not a graphics-centric engineer because it seems every few years it flips itself on its head; the old best practices are discouraged, the old practices to avoid become recommendations.

From some of that reading you linked: if your graphics card has a compressed z-buffer, use a clear operation, since it will drop the compressed blocks. If you are using an older card without a compressed z-buffer, overwrite rather than clear, since a clear resets the value to a specific depth and then you'll immediately overwrite it with new depth data.

And if you're targeting cards in between, implement both.

If you implement only one or the other, you're doing it wrong on the opposite era's cards.


Ah, the joys of constantly-changing graphics hardware.

Looks like yet another change, where an older best practice is replaced.

I'm glad I'm not a graphics-centric engineer because it seems every few years it flips itself on its head; the old best practices are discouraged, the old practices to avoid become recommendations.

Actually, the z-buffer stuff has been like that since 2000, under the name HyperZ, for what was then ATI. Even for Nvidia it's been around for at least a decade.


From some of that reading you linked: if your graphics card has a compressed z-buffer, use a clear operation, since it will drop the compressed blocks. If you are using an older card without a compressed z-buffer, overwrite rather than clear, since a clear resets the value to a specific depth and then you'll immediately overwrite it with new depth data.

I'm not sure if it operates on the compressed blocks at all; I think it operates on the metadata associated with a surface instead... but I'm not sure.


And if you're targeting cards in between, implement both.

If you implement only one or the other, you're doing it wrong on the opposite era's cards.

Well, like I said, it has been at least a decade, so I think you're pretty safe at this point.

-potential energy is easily made kinetic-

The depth test will be against the current fragment being considered for rasterization and the content of the depth buffer for that fragment. The depth buffer is written on successful rasterization of a fragment (unless you disable this behavior), so it's always comparing the current fragment depth (again, not usually eye-space depth) with the value of the previous fragment that was successfully written.

I read the tutorial again.

There is also this statement:

These z-values in view space can be any value between the projection frustum's near and far values. We thus need some way to transform these view-space z-values to the range [0, 1], and one way is to linearly transform them to the [0, 1] range.

Isn't this in eye space? I must be confused about how the depth buffer handles depth precision. All in all, my real confusion is about the depth precision itself. Which happens first: the depth test or the equation for depth precision?

The equation to transform z-values (from the viewer's perspective) is embedded within the projection matrix, so when we transform vertex coordinates from view to clip and then to screen space, the non-linear equation is applied.

Since it is embedded in the projection matrix, I think I should limit how far I dig to clearly understand some things, since I am not really a math person. But does the depth precision equation happen before the depth test, or does the depth test come first?

ooookkkkkaaaayyy


Isn't this in eye space? I must be confused about how the depth buffer handles depth precision. All in all, my real confusion is about the depth precision itself. Which happens first: the depth test or the equation for depth precision?

Normally it is a value between 0 and 1; mostly this stems from the nature of floating point.

At the point in the pipeline where this happens, the pixel fragment's depth is generally relative to the view frustum, a value from 0 to 1.
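If you ever need to relate that 0-to-1 value back to an eye-space distance (for example, to visualize the depth buffer), a common sketch looks like this, assuming the standard OpenGL perspective projection and the default depth range:

    // invert the standard perspective depth mapping back to a view-space distance
    float linearizeDepth(float depth01, float nearZ, float farZ) {
        float ndcZ = depth01 * 2.0f - 1.0f;   // window depth [0,1] back to NDC [-1,1]
        return (2.0f * nearZ * farZ) / (farZ + nearZ - ndcZ * (farZ - nearZ));
    }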

The task of comparing against the z-buffer is simple enough. The pixel fragment's depth value is compared against the z-buffer using the selected operation (any one of <, <=, ==, !=, >=, >, true, false). If the result is false, the pixel fragment is discarded and processing stops. If the result is true, the z-value is written to the z-buffer and processing continues.

As for what happens first, that gets a little tricky.

From a high-level perspective, the test against the z-buffer happens toward the end of the graphics pipeline. Everything is sent to the card to be rendered. The vertex shader is run, which can do tasks like skinning and transformation, basically moving stuff around. The hull shader is run, which can break down line segments into smoother curve segments. The tessellator runs to connect all the tiny line pieces. The domain shader is run, and it can calculate the final position of each vertex point. The geometry shader runs next and can further modify which vertices are actually drawn. The pixel shader runs and can modify color and depth information. Conceptually the z-test runs here, as described above. Assuming the z-buffer test passes, the pixel fragment is passed along to be merged with other values. Finally, the output of all the rendered pixels is merged together to generate the full final image.

In practice, there are several tricks and optimizations that hardware can do to make the test happen much earlier. The hardware can peek ahead at the shaders being used.

If the hardware can see that the pixel shader does not modify the z-value, it can run the test before the pixel shader, eliminating what is normally the most expensive shader stage.

If the hardware can also see that the geometry shader does not modify the z-value, it can run the test that much earlier and avoid that work if the test fails. If the hardware can also see that the domain shader doesn't affect the z-value, it can be bumped earlier again. Repeat for each shader stage.

That part happens automatically.

There are methods (I am not fully read up on them) to run an early depth-only processing pass. Basically it runs to transform all the geometry but does not do any of the more expensive lighting or coloring operations. That information is used to perform earlier or more reliable depth tests in the second pass.
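A hedged sketch of such a depth-only pre-pass (drawSceneGeometry() and drawSceneShaded() are placeholders; a real engine would bind a cheap position-only shader for the first pass):

    // pass 1: fill the depth buffer only, no color writes
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glDepthMask(GL_TRUE);                                   // allow depth writes
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);    // disable color writes
    drawSceneGeometry();

    // pass 2: full shading; the depth buffer already holds the nearest surface per pixel
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);                                  // depth is already correct, no rewrite needed
    glDepthFunc(GL_LEQUAL);                                 // fragments matching pass-1 depth must pass
    drawSceneShaded();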

