swordyijian

double depth buffer


Recommended Posts

Please be a bit more specific. You can create new depth buffers with CreateDepthStencilSurface. Make sure you use the same multisample level as the render target you use it with. Use the new depth buffer with SetDepthStencilSurface, but make sure you save a pointer to the old one so you don't lose it.
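Something like this (a minimal D3D9 sketch - the 640x480 size and D3DFMT_D24S8 format are placeholders, so match the size, format and multisample level of your own render target):

// Create a second depth buffer and swap it in; 'device' is your IDirect3DDevice9*.
IDirect3DSurface9* pNewDepth = NULL;
IDirect3DSurface9* pOldDepth = NULL;

device->CreateDepthStencilSurface(640, 480, D3DFMT_D24S8,
                                  D3DMULTISAMPLE_NONE, 0, TRUE,
                                  &pNewDepth, NULL);

device->GetDepthStencilSurface(&pOldDepth);  // keep a pointer to the old one
device->SetDepthStencilSurface(pNewDepth);   // render with the new depth buffer
// ... draw ...
device->SetDepthStencilSurface(pOldDepth);   // restore the original
pOldDepth->Release();  // GetDepthStencilSurface AddRef'd it
pNewDepth->Release();  // release it when you no longer need it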

http://developer.nvidia.com/object/order_independent_transparency.html

In the paper, it explains how to overcome the problems of transparency. It uses a double depth buffer, but I do not know how to do that in DirectX. Or do you have another, easier method of handling transparency?

From a quick look at those slides it seems OpenGL supports multiple depth buffers ("depth unit 0" and "depth unit 1" in the slides) which is not something you get in Direct3D.

I've not tried to implement OIT/"depth peeling" before, as a back-to-front sorting algorithm has worked fine for me. Judging by the number of discussions on the subject, a back-to-front sort for semi-transparent geometry and front-to-back for opaque geometry works very nicely. It's usually possible to optimize the sort temporally as well, which limits the performance impact further...
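For reference, the back-to-front sort is just an ordering by distance from the camera. A minimal sketch (TransparentObject and camPos are made-up names for whatever your scene uses):

#include <algorithm>
#include <vector>

// Hypothetical scene object; 'position' is its world-space centre.
struct TransparentObject { float position[3]; /* mesh, materials, ... */ };

// Sort semi-transparent objects back-to-front (furthest from the eye first).
// Squared distance is enough for ordering, so no square root is needed.
void SortBackToFront(std::vector<TransparentObject>& objects, const float camPos[3])
{
    std::sort(objects.begin(), objects.end(),
        [&](const TransparentObject& a, const TransparentObject& b)
        {
            float da = 0.0f, db = 0.0f;
            for (int i = 0; i < 3; ++i)
            {
                float ea = a.position[i] - camPos[i];
                float eb = b.position[i] - camPos[i];
                da += ea * ea;
                db += eb * eb;
            }
            return da > db;  // greater distance draws first
        });
}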

Having a search around for D3D examples might well yield what you want though. "depth peeling" is a common enough phrase.

hth
Jack

That seems a horribly inefficient way to do things. Unless I'm mistaken, you need to do 4 passes for that moderately simple scene...

Then how do I do "depth peeling" in DirectX? Should I create a surface to store the depth values? But how can I compare against the depth value stored in the created surface? And where can I find DirectX examples?

The presentation mentions specifically that there's no "double depth buffer" in OpenGL and has a little bit of discussion about what to do. I imagine that the full article delves more deeply into this. Any solutions presented for OpenGL are probably relevant for Direct3D, too.

Quote:
The presentation mentions specifically that there's no "double depth buffer" in OpenGL
Yup, I saw that, but I searched on some of the phrases and found this page. However, on re-reading it now, it's just an ideas page. Oops.

Quote:
Should I create a surface to store the depth values? But how can I compare against the depth value stored in the created surface?
From the looks of things, you'll need to at least partly implement a second depth buffer manually - something along the lines of how shadow mapping works.
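Something along these lines, for instance (only a sketch - the 1024x768 size is an assumption, and your own pixel shaders have to write the depth in pass 1 and do the comparison in pass 2):

// An R32F render-target texture used as a manual depth buffer,
// much like a shadow map stores depth.
IDirect3DTexture9* pDepthTex = NULL;
IDirect3DSurface9* pDepthSurf = NULL;

device->CreateTexture(1024, 768, 1, D3DUSAGE_RENDERTARGET, D3DFMT_R32F,
                      D3DPOOL_DEFAULT, &pDepthTex, NULL);
pDepthTex->GetSurfaceLevel(0, &pDepthSurf);

// Pass 1: render with pDepthSurf as the colour target and a pixel shader
// that writes the fragment's depth into the red channel.
device->SetRenderTarget(0, pDepthSurf);
// ... draw ...

// Pass 2: bind the texture and compare against the stored depth in the shader.
device->SetTexture(0, pDepthTex);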

Quote:
Where can I find DirectX examples?
Your favourite search engine might be a good start [wink]. Otherwise check out the AMD/ATI or Nvidia developer SDKs - they're crammed full of cool samples and whitepapers.

hth
Jack

Quote:
Original post by swordyijian
Then how do I do "depth peeling" in DirectX? Should I create a surface to store the depth values? But how can I compare against the depth value stored in the created surface? And where can I find DirectX examples?


Hi, I've implemented depth peeling before in DirectX. The basic idea (in pseudo-code) is this:


(Initialization)
(1) Create 2 offscreen render targets (R32F works well)
--> note, I've found you need 2 of them for ping-ponging purposes
--> these two render targets are referred to as "RT_1" and "RT_2" in the pseudo-code below

(Render Code)
(1) Set ping RT = RT_1, pong RT = RT_2
(2) Clear the main depth buffer to 0.0f
(3) Clear the 2 offscreen RTs to 1.0f
(4) Set ZFunc to Greater (we want the pixel furthest from the eye point)

do{
(5) SetRenderTarget(0, ping RT)
(6) SetTexture(0, pong RT)
(7) DrawScene()
(8) swap( ping RT, pong RT )

}while( Pixels are drawn );




(In your Depth Peeling shader)
(1) Compare the fragment's depth to the depth in the 'pong' render target
(2) If 'this fragment's depth' >= 'depth stored in pong RT'
        discard
    Else
        light and shade the pixel




I don't guarantee this is the most efficient way to do it, and you might be able to manage without the 2 offscreen render targets. There might be a step or two missing or slightly incorrect, but I think the above description is pretty complete.

Also, the "while( Pixels are drawn )" part basically is saying you should do an occlusion query and see if you rendered any pixels. If no pixels were rendered, you have rendered all the layers. Alternatively, you can just hard code the number of layers.

Hope that helps. I'd post my code, but it's quite a mess at the moment and I don't really have the time to clean it up and make a nice example demo right now.

Note, this does require rendering your geometry N times for N layers, so I recommend drawing only transparent geometry in this fashion. So DrawScene() should really be DrawTransparentObjects() or something like that.
