Jinhua

Depth ordering problems when using render targets


When I try to render my scene to a render target, the depth ordering gets all messed up. Polygons drawn last with DrawIndexedPrimitive() show up on top, rather than being ordered by their distance from the view. If I render directly to the back buffer, everything works as expected. Another interesting bit: both rendering with and without render targets works fine on a different machine without changing a single line of code, and I am completely baffled as to what I am missing. The machine that "works" uses an ATI card and the one that doesn't uses an Nvidia card. Video drivers are up to date on both cards, and I'm using the April 2006 DirectX SDK on both machines as well.

Are you using the same depth buffer for both?
What are the dimensions of your depth buffer and your render target? (And the back buffer?)

Are you using any form of multisampling?

What does the debug runtime say?

Ahh, so it seems that multisampling was the problem. I removed it and everything works fine now. Sorry, but I'm new to render targets.

This leads to another related question that I couldn't figure out: how do I perform antialiasing when using render targets?

Thanks!

See the documentation of IDirect3DDevice9::SetRenderTarget (the SDK docs are a good source of information; you should read them before doing any kind of work):

Quote:
Some hardware tests the compatibility of the depth stencil buffer with the color buffer. If this is done, it is only done in a debug build.

Restrictions for using this method include the following:

The multisample type must be the same for the render target and the depth stencil surface.
The formats must be compatible for the render target and the depth stencil surface. See CheckDepthStencilMatch.
The size of the depth stencil surface must be greater than or equal to the size of the render target.


If you enable the debug runtime, it will report an error on both ATI and NVIDIA hardware.

The way to use a render target with multisampling is to:
- create a depth buffer with a multisample type != NONE (use CreateDepthStencilSurface)
- create an offscreen render target with the EXACT same multisample type (use CreateRenderTarget)
- call StretchRect when you're done rendering, to copy from your multisampled render target to a non-multisampled target.
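Put together, the three steps look roughly like this. This is a sketch, not a drop-in implementation: it needs Windows plus d3d9.h/d3d9.lib, error handling is omitted, and the 640x480 size, the D3DFMT choices, and the `d3dDevice`/`sceneTexture` variables are assumptions (an existing IDirect3DDevice9* and a non-multisampled render-target texture, respectively):

```cpp
IDirect3DSurface9* msaaColor = NULL;
IDirect3DSurface9* msaaDepth = NULL;

// 1. Depth stencil surface with multisample type != NONE.
d3dDevice->CreateDepthStencilSurface(640, 480, D3DFMT_D24S8,
    D3DMULTISAMPLE_4_SAMPLES, 0, TRUE, &msaaDepth, NULL);

// 2. Offscreen render target with the EXACT same multisample type.
d3dDevice->CreateRenderTarget(640, 480, D3DFMT_A8R8G8B8,
    D3DMULTISAMPLE_4_SAMPLES, 0, FALSE, &msaaColor, NULL);

// Bind both and draw the scene into the multisampled pair.
d3dDevice->SetRenderTarget(0, msaaColor);
d3dDevice->SetDepthStencilSurface(msaaDepth);
// ... render the scene here ...

// 3. Resolve (downsample) into the non-multisampled texture RT,
//    which can then be used as a texture.
IDirect3DSurface9* resolveTarget = NULL;
sceneTexture->GetSurfaceLevel(0, &resolveTarget);
d3dDevice->StretchRect(msaaColor, NULL, resolveTarget, NULL, D3DTEXF_NONE);
resolveTarget->Release();
```

Because both created surfaces use D3DMULTISAMPLE_4_SAMPLES, they satisfy the matching-multisample restriction that was violated in the original post.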

Usually you use a separate render target in order to do a render-to-texture... BUT you cannot texture from an offscreen render target. That's why you have to call StretchRect from the multisampled offscreen render target to the texture RT.

Note that you can also use the back buffer as a multisampled source instead of an offscreen render target... BUT IF you allocate the device with the multisampled flag set, AND IF you redirect all your rendering to a render-target texture, THEN you'll possibly end up downsampling TWICE.

Ideally, for post-processing with multisampling, you should have this set of surfaces created:
- a device with a regular flip chain and MULTISAMPLE_NONE; no automatic depth buffer, unless you also plan to render to a non-multisampled render target and need Z for that
- an offscreen render target with MULTISAMPLE != NONE
- a depth stencil buffer with MULTISAMPLE != NONE (same type as above)
- a texture RT that you will use as a source for your post-processing, or whatever your rendering needs.

Your mileage may vary depending on your exact usage, though.

LeGreg

