hick18

the linear depth method (Crysis method)


Recommended Posts

I've been reading posts on this forum by MJP where he talks about his method of using a camera frustum corner and linear depth. I have a few questions.

1. Are you using 2 depth buffers? One with linear depth and the other the standard z/w depth for the z-test, with the linear depth buffer just being another render target? The reason I ask is: if you were just using the linear depth buffer, what is being used for your depth test? If you were using the linear depth buffer for testing, then that would require your whole scene to use linear depth.

2. Why do you send the 4 frustum corners to the shader? Couldn't you just send the top-right corner of the projection frustum and derive the rest?

- TopRight: vec3(topright.x, topright.y, farclip)
- TopLeft: vec3(-topright.x, topright.y, farclip)
- BotRight: vec3(topright.x, -topright.y, farclip)
- BotLeft: vec3(-topright.x, -topright.y, farclip)

...if you're using view space. And simply multiply by the inverse view matrix to get them in world space. Maybe I'm wrong though.
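To make question 2 concrete, here's a small CPU-side sketch (my own illustration, with hypothetical names) of deriving all four far-plane corners of a symmetric view frustum from just the top-right one:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Top-right corner of the far plane in view space for a symmetric frustum.
// fovY is the vertical field of view in radians, aspect = width / height.
Vec3 TopRightFarCorner(float fovY, float aspect, float farClip)
{
    float halfHeight = std::tan(fovY * 0.5f) * farClip;
    float halfWidth  = halfHeight * aspect;
    return { halfWidth, halfHeight, farClip };
}

// The other three corners are just sign flips of x and y, exactly as the
// post suggests (valid in view space for a symmetric frustum).
Vec3 TopLeft (Vec3 tr) { return { -tr.x,  tr.y, tr.z }; }
Vec3 BotRight(Vec3 tr) { return {  tr.x, -tr.y, tr.z }; }
Vec3 BotLeft (Vec3 tr) { return { -tr.x, -tr.y, tr.z }; }
```

So only one vec3 (or even just tan(fovY/2) and the aspect ratio) actually needs to reach the shader.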

Sorry to hijack the thread, but this is a somewhat related question. Why even use z/w depth for the z-test on Direct3D 10 and above hardware? With floating-point render targets, why not just use a 32-bit float depth buffer and store linear depth?

Quote:
Original post by hick18
1. Are you using 2 depth buffers? One with linear depth and the other the standard z/w depth for the z-test, with the linear depth buffer just being another render target? The reason I ask is: if you were just using the linear depth buffer, what is being used for your depth test? If you were using the linear depth buffer for testing, then that would require your whole scene to use linear depth.


Yeah, I would render linear depth to a regular render target and then just use a regular depth-stencil surface for depth testing. This is because anything I was talking about was related to D3D9 or XNA, where you can't access the native depth-stencil buffer even if you wanted to. So since you have to render depth manually to a regular render target, it makes sense to output to a format that has good precision and that's convenient for reconstructing position. If you're using D3D10, or you're on a console and you do have access to the native depth buffer, then it's probably not worth writing out depth manually. However, Crysis still chose to always write out linear depth in order to keep things simple and consistent between their D3D9 and D3D10 rendering paths.
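As a CPU-side sketch of why that format is so convenient for reconstruction (my own illustration, not MJP's actual shader code): store view-space z divided by the far clip distance, and getting the view-space position back is a single multiply with the interpolated frustum ray.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// What the depth pass would store: view-space z normalized to [0, 1]
// by the far clip distance.
float EncodeLinearDepth(float viewZ, float farClip)
{
    return viewZ / farClip;
}

// Reconstruction: the interpolated frustum ray for a pixel ends on the
// far plane (ray.z == farClip), so scaling it by the stored linear depth
// lands exactly back on the original view-space position.
Vec3 ReconstructViewPos(Vec3 frustumRay, float linearDepth)
{
    return { frustumRay.x * linearDepth,
             frustumRay.y * linearDepth,
             frustumRay.z * linearDepth };
}
```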

Quote:
Original post by hick18
2. Why do you send the 4 frustum corners to the shader? Couldn't you just send the top-right corner of the projection frustum and derive the rest?

- TopRight: vec3(topright.x, topright.y, farclip)
- TopLeft: vec3(-topright.x, topright.y, farclip)
- BotRight: vec3(topright.x, -topright.y, farclip)
- BotLeft: vec3(-topright.x, -topright.y, farclip)

...if you're using view space. And simply multiply by the inverse view matrix to get them in world space.

Maybe I'm wrong though.


Yeah, you can certainly do that. I did it the way I did in my blog because I thought that would be easiest to understand. There are other ways you could do it too, like sticking the corner coordinates in texture coordinates of your quad vertices.
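The texcoord variant MJP mentions can be simulated on the CPU. This is a hypothetical sketch (names my own): assign each full-screen-quad vertex its frustum corner as a texture coordinate, and the rasterizer's interpolation hands every pixel its own ray — modelled here as a bilinear lerp.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Bilinear interpolation across the quad, standing in for what the
// rasterizer does when the corners are stored in the vertices' texcoords.
// u runs left-to-right, v runs top-to-bottom, both in [0, 1].
Vec3 InterpolateRay(Vec3 topLeft, Vec3 topRight,
                    Vec3 botLeft, Vec3 botRight, float u, float v)
{
    auto lerp = [](Vec3 a, Vec3 b, float t) {
        return Vec3{ a.x + (b.x - a.x) * t,
                     a.y + (b.y - a.y) * t,
                     a.z + (b.z - a.z) * t };
    };
    Vec3 top = lerp(topLeft, topRight, u);
    Vec3 bot = lerp(botLeft, botRight, u);
    return lerp(top, bot, v);
}
```

The pixel shader then never needs the corners as constants; it just normalizes or scales the interpolated ray.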

I'm using DX10, and wasn't aware you could access the depth buffer; I thought you still had to write it out to a render target.

How do you access it in DX10? Do you mean binding it to the shader as a shader resource view?

Quote:
Original post by hick18
I'm using DX10, and wasn't aware you could access the depth buffer; I thought you still had to write it out to a render target.

How do you access it in DX10? Do you mean binding it to the shader as a shader resource view?


Yup. I believe some of the samples in the SDK do it, like this one.
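For reference, the resource setup those samples use looks roughly like the fragment below. This is a sketch from memory of the D3D10 API, not a complete tested listing (`width`, `height`, `device` and the output pointers are assumed to exist in surrounding code): the depth texture is created with a typeless format and both bind flags, then viewed through two differently typed views.

```cpp
// Sketch (D3D10): create a depth texture usable both as a depth-stencil
// target and as a shader resource. The key is a TYPELESS storage format,
// with the DSV and SRV each picking a concrete interpretation of the bits.
D3D10_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = width;
texDesc.Height           = height;
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R24G8_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D10_USAGE_DEFAULT;
texDesc.BindFlags        = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;
device->CreateTexture2D(&texDesc, NULL, &depthTex);

// Depth-stencil view: depth testing sees 24-bit depth + 8-bit stencil.
D3D10_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format        = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
device->CreateDepthStencilView(depthTex, &dsvDesc, &dsv);

// Shader resource view: the shader samples the depth bits as a float.
D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension       = D3D10_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
device->CreateShaderResourceView(depthTex, &srvDesc, &srv);
```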

I noticed that before they used the depth buffer as a shader resource, they first had to unbind it. Do you have to do this? Can you use it as a shader resource but also have it still performing the z-test for what you're about to render, without doing it manually?

So say I was rendering water, and wanted to use the z-buffer to test the distance between the current pixel (the water) and the stored depth at that location. If the depth buffer is unbound, the water geometry won't get written to it. And what if I need the water in the depth buffer, say for a screen-space or post effect? What do I do in these situations?

If you do have to unbind it, should I turn off depth writes/depth tests with a depth-stencil state? Is that needed? Does it give any better performance?

Again, if you do have to unbind the depth buffer, this means I'll have to do manual depth testing of whatever I render after unbinding it. Is this as simple as testing z/w after multiplying by the projection matrix? Is there anything to watch out for?

It's a shame you can't *read* and write to the depth and colour buffers in the shader. Do you know if there are any plans for this? I gather it's a performance issue, but I think it would make things so much easier.
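For the z/w part of the question above, here's a minimal CPU-side sketch of what the manual test would compute, assuming a standard left-handed D3D perspective projection and a "less" depth comparison (function names are my own, only the z and w rows of the projection matter):

```cpp
#include <cassert>
#include <cmath>

// Project a view-space depth through the standard D3D perspective z/w
// mapping: zClip = viewZ * f/(f-n) - n*f/(f-n), wClip = viewZ.
float ProjectedDepth(float viewZ, float nearClip, float farClip)
{
    float q     = farClip / (farClip - nearClip);
    float zClip = viewZ * q - nearClip * q; // projection z row
    float wClip = viewZ;                    // projection w row (w = viewZ)
    return zClip / wClip;                   // value the hardware z-test uses
}

// The manual test itself: the pixel passes if it is closer than what is
// stored in the sampled depth buffer (assuming a "less" comparison).
bool PassesDepthTest(float pixelViewZ, float storedDepth,
                     float nearClip, float farClip)
{
    return ProjectedDepth(pixelViewZ, nearClip, farClip) < storedDepth;
}
```

The main thing to watch out for is that this z/w value is non-linear, so any distance math (like the water-depth fade described above) should be done on view-space z, not on the raw z/w number.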

Yeah you can't sample from a depth buffer in a shader while it's still bound to the device for depth testing. I'd imagine that allowing that would break GPU performance optimizations.

You could do the depth test manually by dividing z/w in the pixel shader, but if you wanted you could also have another depth buffer, and fill it with the depth from your original buffer by writing out to SV_Depth with a full-screen quad. This would be very quick since GPUs do z-only passes very quickly, and since the pixel shader would be dead simple. Then you could use hardware z-testing for rendering, which could definitely be advantageous if you're rendering with a heavy pixel shader that you don't want to run for pixels that will fail the depth test.
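That SV_Depth fill pass has to convert the stored linear depth back into the post-projective value the hardware comparison expects. A CPU-side sketch of the conversion (my own, assuming linear depth was stored as viewZ / farClip and the standard D3D depth mapping z = f*(viewZ - n) / (viewZ*(f - n))):

```cpp
#include <cassert>
#include <cmath>

// Convert stored linear depth (viewZ / farClip) back to the post-projective
// depth the hardware z-test expects. This is what the full-screen SV_Depth
// pass would output per pixel.
float LinearToHardwareDepth(float linearDepth, float nearClip, float farClip)
{
    float viewZ = linearDepth * farClip;
    return farClip * (viewZ - nearClip) / (viewZ * (farClip - nearClip));
}
```

Note how non-linear the result is: with near = 1 and far = 100, the view-space midpoint (linear depth 0.5) already maps to roughly 0.99 in hardware depth, which is exactly the precision skew that motivates storing linear depth in the first place.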
