Linear depth and double fast Z render

On the 360, I'm trying to accomplish two things at once.

Double fast Z render: disable the pixel shader and color writes, so the pass is vertex shader -> depth buffer only.
Output linear depth: used for reconstructing world position and depth in later stages of the pipeline.

I've found all sorts of material on linear depth, but a lot of it is contradictory.

You've got this, which seems too good to be true, and has been denounced in other threads:
http://www.mvps.org/directx/articles/linear_z/linearz.htm

This, which requires a pixel shader and a render target or a fragment depth change:
http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/

What's the best way to put it all together?
Is it possible to output "double fast" linear-z?
Your first link is wrong; it's not possible to output linear z directly.
You don't want to output depth values from the shader either: it's slow, especially once you have alpha test.

So you're left with these two possibilities:
1. A second render target (e.g. float32) where you write out depth -> no double speed.
2. Don't use linear depth; use the real depth buffer.

The question is, do you really need linear depth? Is it for precision or for performance reasons that you want to use it? If it's the latter, you could create a second pass that linearizes the depth into a separate buffer.
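Something like this is what I mean by that second pass: a full-screen pixel shader that reads the resolved hardware depth and writes view-space linear z into a float target. Rough sketch only; DepthTex and NearFar are placeholder names, and it assumes a standard D3D perspective projection:

sampler2D DepthTex : register(s0);   // hardware depth resolved to a texture
float2    NearFar  : register(c0);   // x = near plane, y = far plane

float4 LinearizeDepthPS(float2 uv : TEXCOORD0) : COLOR0
{
    float d = tex2D(DepthTex, uv).r;   // post-projection depth in [0..1]
    float n = NearFar.x;
    float f = NearFar.y;
    // invert the standard D3D perspective depth mapping to get view-space z
    float zView = (n * f) / (f - d * (f - n));
    return float4(zView, 0, 0, 0);     // e.g. into an R32F target
}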

Your first link is wrong; it's not possible to output linear z directly.


Not really related to the original question, but if the projection matrix is orthographic, isn't the resulting depth linear?

For the OP, if you can and it suits your needs: use the hardware z-buffer. You can calculate the linear z-value pretty easily from the stored depth value. Of course the precision isn't as good at all depths, but it should be enough.

Cheers!
You've got this, which seems too good to be true, and has been denounced in other threads:
http://www.mvps.org/...r_z/linearz.htm
Can you link to these denouncements?

What's the best way to put it all together?
Is it possible to output "double fast" linear-z?


Only for orthographic projections.

Also, double-speed-z is a bit of a misnomer; it's actually much faster than that, as the pixel shader part of the pipeline is skipped entirely when it's all working (you will be bound by vertex throughput and ROP).
http://www.gearboxsoftware.com/
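To make the orthographic case concrete: with an ortho projection the depth the hardware writes is already linear in view-space z, so a plain depth-only pass gives you "double fast" linear-z for free. Rough sketch, with WorldViewOrthoProj as a placeholder name:

float4x4 WorldViewOrthoProj : register(c0);   // world * view * orthographic projection

// depth-only pass: no pixel shader bound, no color writes.
// an ortho projection gives clip z = (zView - near) / (far - near), which is
// already linear, so the depth buffer ends up holding linear depth.
float4 DepthOnlyVS(float4 pos : POSITION) : POSITION
{
    return mul(pos, WorldViewOrthoProj);
}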

Your first link is wrong; it's not possible to output linear z directly.

Not really related to the original question, but if the projection matrix is orthographic, isn't the resulting depth linear?

Have you ever had non-linear z with an orthographic projection? I would think the topic obviously implies a perspective projection, otherwise it would be kind of pointless.


For the OP, if you can and it suits your needs: use the hardware z-buffer. You can calculate the linear z-value pretty easily from the stored depth value. Of course the precision isn't as good at all depths, but it should be enough.

Welcome back, Mr McFly ;)

Also, double-speed-z is a bit of a misnomer; it's actually much faster than that, as the pixel shader part of the pipeline is skipped entirely when it's all working (you will be bound by vertex throughput and ROP).

"Double z" is meant to be relative to the z-pass performance without double-z, as those passes are usually purely ROP bound (vertex shaders that output just the position, even with some skinning, are usually not the bottleneck, and in the usual case you use the mesh LOD that fits the target resolution, so the only limiting thing ends up being the ROP unit).
And it's quite accurate: z-passes are ROP bound, and double-z gives twice the throughput of the ROP unit -> twice the performance, not more and not less (if double-z is working and you have a discrete GFX card).

Have you ever had non-linear z with an orthographic projection? I would think the topic obviously implies a perspective projection, otherwise it would be kind of pointless.


I made the comment just to point out that there are cases where the output actually is linear z. Strictly, the topic asks about "linear depth and double fast z-render", and it is useful information that both are possible under certain circumstances.

Cheers!
If you can access the hardware depth buffer then there's no reason to manually write out depth to a render target. You can reconstruct position from depth values in a depth buffer.
For the main rendering, not considering double-fast-z: the only reason to write depth manually would be to write out a modified depth, such as linear depth.
If you just wrote the new depth to the HLSL depth output, it would disable the hierarchical-z optimization, which is critical.
And writing a whole render target just for depth seems out of the question since we already have the depth buffer.

I'm most definitely reconstructing position from the depth buffer. However, I would like more linear precision, because it's pretty bad right now.

The double fast part is for both shadow maps (which in some cases are orthographic) and for the early-z pre-pass used to fill the hierarchical z buffer.
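For reference, the reconstruction I'm doing looks roughly like this (placeholder names, following the view-ray approach from the MJP article linked above): linearize the stored depth, then scale an eye ray by it.

sampler2D DepthTex : register(s0);    // hardware depth resolved to a texture
float2    NearFar  : register(c0);    // x = near, y = far

float3 ReconstructViewPos(float2 uv, float3 viewRay)
{
    // viewRay = interpolated ray from the eye through this pixel, scaled so viewRay.z == 1
    float d     = tex2D(DepthTex, uv).r;
    float zView = (NearFar.x * NearFar.y) / (NearFar.y - d * (NearFar.y - NearFar.x));
    return viewRay * zView;           // view-space position
}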
