Creating a readable zBuffer

Hi, I am trying to read per-pixel depth information from the z-buffer. As far as I know, the only readable (lockable) z-buffer format is D16Lockable. The problem is that when I create the Direct3D device using this format, the creation fails. Here is the code (C#):

```csharp
CreateFlags flags = CreateFlags.SoftwareVertexProcessing;

// Check to see if we can use a pure hardware device
Caps caps = Manager.GetDeviceCaps(adapterOrdinal, DeviceType.Hardware);

// Do we support hardware vertex processing?
if (caps.DeviceCaps.SupportsHardwareTransformAndLight)
{
    // Replace the software vertex processing
    flags = CreateFlags.HardwareVertexProcessing;
}

// Do we support a pure device?
if (caps.DeviceCaps.SupportsPureDevice)
{
    flags |= CreateFlags.PureDevice;
}

PresentParameters presentParams = new PresentParameters();
presentParams.SwapEffect = SwapEffect.Discard;
presentParams.AutoDepthStencilFormat = DepthFormat.D16Lockable;
presentParams.EnableAutoDepthStencil = true;
presentParams.Windowed = true;

// Create the Direct3D device
try
{
    device = new Device(adapterOrdinal, DeviceType.Hardware,
        renderingWindow, flags, presentParams);
}
catch (Exception error)
{
    // an invalid call exception is thrown...
}
```

When I set the AutoDepthStencilFormat back to D16, it works. Any ideas why the above code fails? And if there is any other way to read the z-buffer contents, please do tell :)

Thanks,
Raed

Thanks for the reply. I am actually fairly new to this stuff; could you please explain more about rendering depth values to R16F or R32F textures? If there are any samples or links, please send them on.

I really appreciate your help.

Rendering depth values to an R16F or R32F texture is common in the "shadow mapping" technique; it is the first pass of that algorithm. You have to create a texture with the R16F or R32F format, and then set that texture as the render target (see the DX SDK).
Note that when you are rendering to a texture with Z-testing or Z-writing enabled, the depth-stencil buffer must be the same size as, or larger than, the render target texture.
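A minimal Managed DirectX sketch of that setup might look like the following. The texture size, the use of R32F, and the `device` variable are assumptions for illustration, not code from this thread:

```csharp
// Create a 512x512 single-channel float render target (assumed size).
Texture depthTexture = new Texture(device, 512, 512, 1,
    Usage.RenderTarget, Format.R32F, Pool.Default);

// The depth-stencil surface must be at least as large as the render target.
Surface depthStencil = device.CreateDepthStencilSurface(512, 512,
    DepthFormat.D16, MultiSampleType.None, 0, true);

// Remember the old targets so we can restore them afterwards.
Surface oldTarget = device.GetRenderTarget(0);
Surface oldDepth = device.DepthStencilSurface;

// Redirect rendering into the texture.
device.SetRenderTarget(0, depthTexture.GetSurfaceLevel(0));
device.DepthStencilSurface = depthStencil;

// ... render the scene with a depth-output shader here ...

// Restore the original targets.
device.SetRenderTarget(0, oldTarget);
device.DepthStencilSurface = oldDepth;
```

Whether R16F/R32F render targets are supported should be checked with Manager.CheckDeviceFormat before relying on them.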

The most common approach is storing the depth in a [0..1] range from the camera (in view space):

```hlsl
float depth = mul(mul(Position, World), View).z / FarPlane;
```

You can output this depth to the red channel of the R32F target.
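Put into a complete depth pass, that idea might look like the sketch below. The matrix and variable names (WorldView, WorldViewProjection, FarPlane) are illustrative assumptions, not from any SDK:

```hlsl
float4x4 WorldView;            // world * view
float4x4 WorldViewProjection;  // world * view * projection
float    FarPlane;             // distance to the far clip plane

struct VS_OUT
{
    float4 Position : POSITION;
    float  Depth    : TEXCOORD0;
};

VS_OUT DepthVS(float4 position : POSITION)
{
    VS_OUT output;
    output.Position = mul(position, WorldViewProjection);
    // View-space z, normalized to [0..1] by the far plane distance.
    output.Depth = mul(position, WorldView).z / FarPlane;
    return output;
}

float4 DepthPS(VS_OUT input) : COLOR
{
    // Only the red channel of the R32F target is meaningful.
    return float4(input.Depth, 0, 0, 1);
}
```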

Either way, this is most commonly written in HLSL (High-Level Shader Language).
In the vertex shader you can have code like this:

```hlsl
OUT.Position = mul(position, ShadowMatrix);
OUT.Depth = OUT.Position.z / OUT.Position.w;
```

In the pixel shader you only have to output the depth interpolated from the vertex shader:

```hlsl
return IN.Depth;
```
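Filling in those fragments, a complete pass for this z/w variant might look like the sketch below. ShadowMatrix is assumed here to be the combined world-view-projection matrix:

```hlsl
float4x4 ShadowMatrix;  // assumed: world * view * projection

struct VS_OUT
{
    float4 Position : POSITION;
    float  Depth    : TEXCOORD0;
};

VS_OUT ShadowVS(float4 position : POSITION)
{
    VS_OUT OUT;
    OUT.Position = mul(position, ShadowMatrix);
    // Post-projective z/w: non-linear, unlike view-space z / FarPlane.
    OUT.Depth = OUT.Position.z / OUT.Position.w;
    return OUT;
}

float4 ShadowPS(VS_OUT IN) : COLOR
{
    return IN.Depth;  // broadcast into the float render target
}
```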

Keep in mind there is more than one way to render depth to a texture, as you can see from the two posts above mine. The perspective position.z/position.w is different from depth in view space in ways that may not be obvious at first, and you'll have to be careful when using it. Typically, having view-space depth is nice for reconstructing a pixel's view-space or world-space position in a separate pass, and is essential for techniques like screen-space ambient occlusion and most deferred rendering implementations. If this is what you're trying to do and you run into trouble (writing the shader code for this can be tricky), you should search through the threads in Graphics Programming & Theory for some in-depth discussion of the topic (as well as some code in HLSL).
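To illustrate the reconstruction idea, here is a common full-screen-pass sketch, not code from this thread. It assumes depth was stored as view-space z divided by the far plane distance, and that a per-pixel ray to the far plane (frustumRay, interpolated from the frustum's far-plane corners in view space) is passed in from the vertex shader:

```hlsl
texture DepthTexture;
sampler DepthSampler = sampler_state { Texture = <DepthTexture>; };

float4 ReconstructPS(float2 texCoord   : TEXCOORD0,
                     float3 frustumRay : TEXCOORD1) : COLOR
{
    // Depth was stored as viewZ / FarPlane; scaling the far-plane ray
    // by it recovers the view-space position of the original pixel.
    float normalizedDepth = tex2D(DepthSampler, texCoord).r;
    float3 viewPosition = frustumRay * normalizedDepth;
    return float4(viewPosition, 1.0f);
}
```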

Just wanted to point out that on newer hardware (the past couple of years or so) it's possible to create a texture with a depth-stencil format and usage, and grab its top-level surface to use as the depth-stencil buffer. This allows you to turn around and bind that same depth-stencil texture directly. AFAIK it's supposed to perform better than MRT if it's supported, but like all things hardware-related you'd have to test it to know for sure.

Given that this relies on the hardware to write the depth values, they will be non-linear Z/W. There was a thread a few weeks ago where we discussed the differences between using linear and non-linear depth for position reconstruction and other purposes, which I know MJP and I were a part of, if you want to search for it. This is a popular topic, so you should find plenty of information.
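On hardware that supports it, the steps described above might look roughly like this in Managed DirectX. Both the D24S8 texture format and the feature itself are driver-dependent assumptions, so treat this as a sketch to verify against Manager.CheckDeviceFormat rather than guaranteed-working code:

```csharp
// Create a texture with depth-stencil usage and format (driver-dependent).
Texture depthTexture = new Texture(device, 512, 512, 1,
    Usage.DepthStencil, Format.D24S8, Pool.Default);

// Use the texture's top-level surface as the depth-stencil buffer...
device.DepthStencilSurface = depthTexture.GetSurfaceLevel(0);

// ... render the scene normally, letting the hardware write depth ...

// ...then bind the same texture in a later pass to read the depth values.
device.SetTexture(0, depthTexture);
```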

