swordyijian

double depth buffer

Recommended Posts

Please be a bit more specific. You can create new depth buffers with CreateDepthStencilSurface. Make sure you use the same multisample level as the render target you use it with. Use the new depth buffer with SetDepthStencilSurface, but make sure you save a pointer to the old one so you don't lose it.
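To make that concrete, here is a rough D3D9-style sketch (untested pseudo-code; the width, height, format and multisample settings are placeholders you would match to your own render target):

```
// Save the current depth buffer so it can be restored later
IDirect3DSurface9* pOldDS = NULL;
pDevice->GetDepthStencilSurface(&pOldDS);

// Create a new depth buffer - dimensions and multisample level
// MUST match the render target it will be used with
IDirect3DSurface9* pNewDS = NULL;
pDevice->CreateDepthStencilSurface(width, height, D3DFMT_D24S8,
                                   D3DMULTISAMPLE_NONE, 0, TRUE,
                                   &pNewDS, NULL);

pDevice->SetDepthStencilSurface(pNewDS);
// ... render with the new depth buffer ...

// Restore the old depth buffer and drop our reference to it
pDevice->SetDepthStencilSurface(pOldDS);
pOldDS->Release();
```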

http://developer.nvidia.com/object/order_independent_transparency.html

in the paper, it tells us how to overcome the problems of transparency. it uses a double depth buffer, but i do not know how to do it in directx. Or, do you have another easy method of handling the problems of transparency?

From a quick look at those slides it seems OpenGL supports multiple depth buffers ("depth unit 0" and "depth unit 1" in the slides) which is not something you get in Direct3D.

I've not tried to implement OIT/"depth peeling" before, as a back-to-front sorting algorithm has worked fine for me. Based on the number of discussions, it seems that back-to-front for semi-transparent geometry and front-to-back for opaque geometry works very nicely. It's usually possible to optimize it temporally as well, which limits the performance impact further...

Having a search around for D3D examples might well yield what you want though. "depth peeling" is a common enough phrase.

hth
Jack

That seems a horribly inefficient way to do things. Unless I'm mistaken, you need to do 4 passes for that moderately simple scene...

then how to use directx to do the "depth peeling"? should i create a surface to store the depth value? but how can i compare the depth value stored in the created surface? where can i find the directx examples?

The presentation mentions specifically that there's no "double depth buffer" in OpenGL and has a little bit of discussion about what to do. I imagine that the full article delves more deeply into this. Any solutions presented for OpenGL are probably relevant for Direct3D, too.

Quote:
The presentation mentions specifically that there's no "double depth buffer" in OpenGL
Yup, I saw that but I followed some phrases via searching and found this page. However on re-reading it now it's just an ideas page. Oops.

Quote:
should i create a surface to store the depth value? but how can i compare the depth value stored in the created surface?
From the looks of things, you'll need to at least in part manually implement a second depth buffer - something along the lines of how shadow mapping works.

Quote:
where can i find the directx examples?
Your favourite search engine might be a good start [wink]. Otherwise check out the AMD/ATI or Nvidia developer SDKs - they're crammed full of cool samples and whitepapers.

hth
Jack

Quote:
Original post by swordyijian
then how to use directx to do the "depth peeling"? should i create a surface to store the depth value? but how can i compare the depth value stored in the created surface? where can i find the directx examples?


Hi, I've implemented depth peeling before in DirectX. The basic idea (in pseudo-code) is this:

(Initialization)
(1) Create 2 offscreen render targets (R32F works well)
--> note, I've found you need 2 of them for ping-ponging purposes
--> these two render targets are referred to as "RT_1" and "RT_2" in the pseudo-code below

(Render Code)
(1) Set ping RT = RT_1, pong RT = RT_2
(2) Clear the main depth buffer to 0.0f
(3) Clear the 2 offscreen RTs to 1.0f
(4) Set ZFunc to Greater (we want the pixel furthest from the eye point)

do{
    (5) SetRenderTarget(0, ping RT)
    (6) SetTexture(0, pong RT)
    (7) DrawScene()
    (8) swap( ping RT, pong RT )
}while( pixels are drawn );

(In your depth peeling shader)
(1) Compare the fragment's depth to the depth in the 'pong' render target
(2) If 'this fragment's depth' >= 'depth stored in pong RT'
        discard
    Else
        light and shade the pixel

I don't guarantee this is the optimal way or anything, and you might be able to do this without the 2 offscreen render targets. There might be a step or two missing or slightly incorrect, but I think the above description is pretty complete.

Also, the "while( pixels are drawn )" part basically says you should do an occlusion query and see if you rendered any pixels. If no pixels were rendered, you have rendered all the layers. Alternatively, you can just hard-code the number of layers.

Hope that helps. I'd post my code but it's quite a mess at the moment and I don't really have the time to clean it up and make a nice example demo right now.

Note, this does require rendering your geometry N times for N layers, so I recommend only drawing transparent geometry in this fashion. So, DrawScene() should really be DrawTransparentObjects() or something like that.
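The ping-pong loop above can be simulated per-pixel on the CPU. Below is a small Python sketch (my own illustration, not wyrzy's actual code) that peels a list of fragment depths back-to-front exactly the way the Greater z-test plus the "discard if depth >= pong" rule would:

```python
def peel_layers(fragment_depths):
    """Simulate depth peeling at one pixel.

    Each pass the z-test (ZFunc = Greater, depth buffer cleared to 0.0)
    keeps the furthest fragment, while the shader discards any fragment
    whose depth is >= the depth peeled on the previous pass (read from
    the 'pong' render target, initially cleared to 1.0).
    """
    layers = []
    pong = 1.0                      # offscreen RTs start cleared to 1.0
    while True:
        # shader step: discard fragments with depth >= pong
        survivors = [d for d in fragment_depths if d < pong]
        if not survivors:           # occlusion query: no pixels drawn
            break
        ping = max(survivors)       # z-test keeps the furthest fragment
        layers.append(ping)
        pong = ping                 # swap( ping RT, pong RT )
    return layers

# three transparent surfaces at this pixel, unsorted
print(peel_layers([0.5, 0.9, 0.2]))   # back-to-front: [0.9, 0.5, 0.2]
```

Each iteration of the `while` loop corresponds to one full render pass over the transparent geometry, which is why the cost grows linearly with the number of layers.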


thanks a lot for replying.

Quote:
Original post by wyrzy

Hope that helps. I'd post my code but its quite a mess at the moment and I don't really have the time to clean it up and make a nice example demo right now.


any messy code or examples would be appreciated

BTW, the NVIDIA article, which is more detailed than the presentation, is available here. Though wyrzy's explanation may be a better way to start an implementation.

You can also find the OpenGL code here.

then how to write the depth value into the texture, and how to compare the depth value in the texture with the current depth value? i mean, how to write the shader.
in the dx sdk shadow map sample, the depth value seems to just equal position.z/position.w

Quote:
Original post by swordyijian
then how to write the depth value into the texture, and how to compare the depth value in the texture with the current depth value? i mean, how to write the shader.
in the dx sdk shadow map sample, the depth value seems to just equal position.z/position.w


I just used the linear eye-space position (i.e., length(world position - eye position)) and stored that in the render target.

Even though the depth values in the depth buffer are not stored linearly (rather hyperbolically), since you only care about the comparison and not the exact difference, it shouldn't really make a difference.
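The point about linear vs. hyperbolic depth can be checked numerically. A quick Python sketch (illustrative values, assuming a standard perspective z/w mapping with near/far planes n and f): both mappings are monotonic in eye-space distance, so any two fragments compare the same way under either one:

```python
def hyperbolic_depth(z_eye, n=1.0, f=100.0):
    # standard perspective z/w: maps [n, f] to [0, 1] non-linearly
    return (f / (f - n)) * (1.0 - n / z_eye)

def linear_depth(z_eye, n=1.0, f=100.0):
    # normalized linear eye-space depth
    return (z_eye - n) / (f - n)

# comparisons agree even though the stored values differ a lot
samples = [1.5, 2.0, 10.0, 50.0, 99.0]
for a in samples:
    for b in samples:
        assert (hyperbolic_depth(a) < hyperbolic_depth(b)) == \
               (linear_depth(a) < linear_depth(b))
```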

Regarding writing values into a texture, any of the DXSDK or NVidia SDK demos that render to offscreen surfaces should show how to do that quite well.

There is a cheese-muffin way to implement a double depth buffer, but it has issues as well. The basic idea is to use the blending unit. In OpenGL, to simulate a double-sided depth buffer you could do this:

make one offscreen buffer with a floating-point format, let's say .red --> depth buffer 0 and .blue --> depth buffer 1. In OpenGL the calls go something like this:

glBlendFunc(GL_ONE, GL_ONE);
glBlendEquation(GL_MIN);

in the fragment shader, we do this:

gl_FragColor.r = depthValue0;
gl_FragColor.b = depthValue1;

each channel of the pixel will then end up holding the minimum of all the values written to it. if you need different tests, i.e. one min and the other max, turn off clamping of the output, store the negative of one of the depth values, and negate it again when you are ready to use it....

this technique will give you 2 depth buffers of the scene; then run your pass again, with your fragment shader testing against the saved depth values, and if the test fails you discard the fragment.. admittedly this is not as good as hardware with multiple depth buffers... but it would work here...
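The blending-unit trick can be sketched on the CPU as well. Here is a small Python illustration (my own, not from the post above) of GL_MIN blending into a two-channel float target, with the negate trick used to get a max test out of the blue channel:

```python
def min_blend_double_depth(fragments):
    """Simulate GL_MIN blending into a float render target.

    Each fragment writes (depth, -depth) to (.r, .b); with
    glBlendEquation(GL_MIN) and output clamping disabled, the target
    ends up holding (min depth, -max depth) -- i.e. two depth buffers
    packed into one surface.
    """
    r, b = float('inf'), float('inf')    # channels cleared to +infinity
    for depth in fragments:
        r = min(r, depth)                # .r accumulates the nearest depth
        b = min(b, -depth)               # min of -d is the same as -max(d)
    return r, -b                         # negate again when reading back

near, far = min_blend_double_depth([0.4, 0.8, 0.1, 0.6])
print(near, far)   # 0.1 0.8
```

Note the negation only works because clamping is disabled; with the default clamp to [0, 1], the -depth values would all collapse to 0.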


on another note, there is a very easy way to do good order-independent transparency: view each transparent object as a _filter_. Draw all transparent objects to an offscreen buffer, but make the blending multiplicative; the corresponding GL code would be:

//first clear the offscreen buffer to the values (1,1,1)
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT);

glEnable(GL_BLEND);

//for translucent objects we do multiplicative blending:
//we let New_pixel = gl_Frag * old_pixel
glBlendFunc(GL_DST_COLOR, GL_ZERO);
glBlendEquation(GL_FUNC_ADD);

//disable clamping...
glClampColorARB(GL_CLAMP_FRAGMENT_COLOR_ARB, GL_FALSE);
glClampColorARB(GL_CLAMP_VERTEX_COLOR_ARB, GL_FALSE);
glClampColorARB(GL_CLAMP_READ_COLOR_ARB, GL_FALSE);

glDepthMask(GL_FALSE);
glEnable(GL_DEPTH_TEST);

//draw all transparent objects

each pixel will then hold the product of all the filters that are in front of the opaque objects, and this value is then used to modulate the final color of the object...

it does have issues,
1) you should draw both front- and back-facing polygons; this way the value at the pixel indicates the light passing through both the front and back sides of the object.
2) if you put a pure blue filter in front of a pure red filter, you get black! this implements color filtering, _not_ transparency, but careful choice of coloring will help you out a great deal though...
3) the filter objects themselves do not reflect any light, which they do in real life, so you will need to do another pass to get that.
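The filter idea reduces to a per-channel product, which also makes issue (2) easy to see. A small Python sketch (my own illustration, RGB values in [0,1]):

```python
def apply_filters(filters):
    """Multiplicative blending: New_pixel = frag * old_pixel, per channel.

    The offscreen buffer starts cleared to white (1,1,1); each translucent
    surface multiplies its color in. Multiplication is commutative, so the
    draw order of the filters doesn't matter.
    """
    pixel = (1.0, 1.0, 1.0)              # glClearColor(1, 1, 1, 1)
    for f in filters:
        pixel = tuple(p * c for p, c in zip(pixel, f))
    return pixel

# a pure blue filter over a pure red filter blocks everything: black
print(apply_filters([(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]))  # (0.0, 0.0, 0.0)

# order independence: the same filters in any order give the same result
a = apply_filters([(0.9, 0.5, 0.5), (0.5, 0.9, 0.9)])
b = apply_filters([(0.5, 0.9, 0.9), (0.9, 0.5, 0.5)])
assert a == b
```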

