How can I get the z-buffer value at a screen point?

I have the mouse coordinates in screen space. The problem is: how can I get the z-buffer value at that screen point? Maybe using shaders, or GetBackBuffer() and LockRect()? I don't know. Please help with source code. Thanks.
Did you try using GetDepthStencilSurface()?
I don't know anything about GetDepthStencilSurface() yet.
Do you have source and/or a link?
Quote:
I don't know anything about GetDepthStencilSurface() yet.
Do you have source and/or a link?

It's in the SDK docs. Just note that it will be difficult to access the depth buffer directly. The depth buffer is not meant to be accessed by the main application; different hardware manufacturers implement it in different ways.

What is it that you're trying to do? Perhaps there is another way to accomplish what you want.

neneboricua
Quote:
What is it that you're trying to do? Perhaps there is another way to accomplish what you want.

The problem is old and general: I need to know which object was picked by the mouse.
But computing the intersection of the ray with the object's triangles is not suitable, because my object has about 1e5 to 5e5 triangles (it looks like a cube, but each surface is not flat), so I can't test the ray against every triangle. So I decided to find the z-value at the mouse coords, convert the mouse xy-coords and the z-value into object coords, and get the one triangle that was picked. I figure the processor will do this faster that way (the first approach took about 1 to 2 seconds).

I have some shader code; maybe it can help:

vs.1.1
// transform the vertex position by the 4x4 matrix in constants c4..c7
m4x4 r0, v0, c4
mov oPos, r0
// copy (z, z, z, w) of the transformed position into texcoord 0
mov oT0.xyzw, r0.zzzw

ps.1.4
// sample texture 0 with a projective divide by w (the _dw modifier),
// i.e. look up the texture at (z/w, z/w)
texld r0, t0_dw.xyw


But how is it used? How do I initialize and use this shader?

Picking is a common operation, but it should not be handled by trying to access the depth buffer.

If all you need to know is when a particular object is selected, and not necessarily *which* individual triangle was clicked on, then you should use the idea of bounding volumes. The idea is that each object is contained in some kind of bounding volume to aid with intersection tests. Common bounding volumes are spheres and boxes, but any volume can be used. People use these bounding volumes because it is much less expensive to perform ray intersection tests on them than on the actual mesh.

The bounding volumes aren't actual polygon data. They are not meant to be rendered. The bounding volumes are simply a mathematical construct to make intersection tests more efficient. A sphere would be defined by a center point and a radius. A box would be defined by its minimum and maximum extents. Algorithms for intersecting with these kinds of objects are freely available on the internet and are very fast.
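
For instance, a minimal ray/box intersection test using the slab method might look like the sketch below (just a sketch; the Vec3, Ray, and AABB types here are placeholders, not from any particular library):

#include <cfloat>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };   // dir need not be normalized
struct AABB { Vec3 min, max; };      // minimum and maximum extents

bool RayIntersectsAABB(const Ray& ray, const AABB& box)
{
    // Treat x, y, z as an array of three floats so we can loop over axes.
    const float* o  = &ray.origin.x;
    const float* d  = &ray.dir.x;
    const float* lo = &box.min.x;
    const float* hi = &box.max.x;

    float tMin = 0.0f, tMax = FLT_MAX;
    for (int i = 0; i < 3; ++i)          // test the three slab pairs
    {
        if (fabsf(d[i]) < 1e-8f)         // ray parallel to this slab pair
        {
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
        }
        else
        {
            float t1 = (lo[i] - o[i]) / d[i];
            float t2 = (hi[i] - o[i]) / d[i];
            if (t1 > t2) { float tmp = t1; t1 = t2; t2 = tmp; }
            if (t1 > tMin) tMin = t1;
            if (t2 < tMax) tMax = t2;
            if (tMin > tMax) return false;   // slab intervals don't overlap
        }
    }
    return true;                             // ray hits the box
}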

For example, to determine if the user clicked on a chair in your application, you would only need to perform a ray intersection test on the bounding box of the chair, and not on the triangles of the chair itself.

You could also organize your scene into a bounding volume hierarchy to speed up the intersection tests. The idea is that your scene is recursively divided into smaller and smaller bounding volumes. You can use this to effectively cull away large parts of your scene from consideration when performing intersection tests.
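
In code, the recursion can be very short. A sketch with a hypothetical node layout (it builds on a ray/box test like the one above, and RayIntersectsTriangle stands in for whatever exact per-triangle test you use):

// A BVH node is either an internal node with two children, or a leaf
// holding a range of triangles to test exactly.
struct BVHNode
{
    AABB     bounds;
    BVHNode* child[2];        // both NULL for leaves
    int      firstTri;        // leaf only: first triangle index
    int      triCount;        // leaf only: number of triangles
};

// Returns true if the ray hits any triangle under 'node'.
bool PickBVH(const BVHNode* node, const Ray& ray)
{
    if (!RayIntersectsAABB(ray, node->bounds))
        return false;                         // cull this whole subtree
    if (node->child[0] == NULL)               // leaf: test actual triangles
    {
        for (int i = 0; i < node->triCount; ++i)
            if (RayIntersectsTriangle(ray, node->firstTri + i))
                return true;
        return false;
    }
    return PickBVH(node->child[0], ray) || PickBVH(node->child[1], ray);
}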

A search on bounding volumes/spheres/boxes would give you a lot of information.

neneboricua

By the way, the shader code you posted just transforms the model by a matrix and fetches the color from the texture.
Quote:Original post by neneboricua19
Picking is a common operation, but it should not be handled by trying to access the depth buffer.
[...]
By the way, the shader code you posted just transforms the model by a matrix and fetches the color from the texture.


neneboricua, you're right about picking 3D objects, but my task goes deeper.
I need to know which vertex in the object was selected (if any), or which vertex is nearest the mouse click, and then highlight it. That's why I need to check each visible triangle, get the barycentric coords of the click, and find the nearest vertex.
I suppose that when the video card draws a vertex on the screen, it transforms it by D3DXMATRIX m = mWorld * mView * mProj, so if I transform the screen coords by the inverse of m, maybe I'll get the 3D coords of the click.

P.S. Do you know how to post C++ code on this forum?
Quote:Original post by Black_Moon
neneboricua, you're right about picking 3D objects, but my task goes deeper.
I need to know which vertex in the object was selected (if any), or which vertex is nearest the mouse click, and then highlight it. That's why I need to check each visible triangle, get the barycentric coords of the click, and find the nearest vertex.

That's why you should use some kind of bounding hierarchy. If your mesh is divided into a number of bounding volumes, it would speed up your computations. For example, say you have a character model. You could have a bounding sphere/box around the torso, head, both arms, both legs, and another one around the entire character. First check if the ray hits the bounding volume of the character itself. Then check for intersection with the bounding volumes of each part of the character. Once you find which section of the character the ray has hit, you can check the actual triangles of that section for the exact intersection point. This will make sure you don't have to check every single triangle for intersection.
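
On the nearest-vertex part: once you have the barycentric coordinates (u, v) of the hit point on a triangle (v0, v1, v2), i.e. hit = v0 + u*(v1 - v0) + v*(v2 - v0), the weights of the three vertices are (1 - u - v, u, v), and the vertex nearest the hit (in barycentric terms; for very skewed triangles you may want a true distance check) is simply the one with the largest weight. A quick sketch:

// Returns 0, 1 or 2: the index of the triangle vertex nearest a hit
// with barycentric coordinates (u, v). The vertex weights are (1-u-v, u, v).
int NearestVertexOfHit(float u, float v)
{
    float w0 = 1.0f - u - v;          // weight of v0
    if (w0 >= u && w0 >= v) return 0;
    return (u >= v) ? 1 : 2;
}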

If you still want to pursue the depth buffer idea, you can try to access the depth buffer. As stated before, though, different hardware manufacturers implement the depth buffer in different ways. I don't think any graphics card out there will let you directly lock the depth buffer at all. One way to do this would be to set up your own depth buffer as a separate render target when you draw your scene. In a pixel shader, you would not only output the color of the pixel to the main target, but also output its depth (basically the value that will be stored in the depth buffer) to your own render target.

You can then lock this surface and read the depth information stored there. This is not the fastest operation in the world, but neither is picking, so you probably won't notice it much. Be aware that the precision of renderable surfaces is not the greatest. Even if you're using a floating-point render target for your pseudo-depth buffer, you may run into precision issues if your application deals with very finely tessellated meshes.
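
In Direct3D 8 terms, the setup and readback might look roughly like this. This is an untested sketch: error checking is omitted, RenderSceneWithDepthShader is a hypothetical helper that draws the scene with a depth-writing pixel shader, with a plain A8R8G8B8 target you have to encode the depth into the color channels yourself, and device, pBackBuffer, pZBuffer and the mouse coords are assumed to be at hand:

// Create a lockable offscreen render target to hold the depth values.
LPDIRECT3DSURFACE8 pDepthTarget = NULL;
device->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                           D3DMULTISAMPLE_NONE, TRUE /* lockable */,
                           &pDepthTarget);

// Redirect rendering, draw the scene writing depth into the color
// channels, then restore the normal back buffer.
device->SetRenderTarget(pDepthTarget, pZBuffer);
RenderSceneWithDepthShader();
device->SetRenderTarget(pBackBuffer, pZBuffer);

// Read back the pixel under the mouse.
D3DLOCKED_RECT lr;
pDepthTarget->LockRect(&lr, NULL, D3DLOCK_READONLY);
DWORD* row   = (DWORD*)((BYTE*)lr.pBits + mouseY * lr.Pitch);
DWORD  pixel = row[mouseX];     // depth encoded in the color channels
pDepthTarget->UnlockRect();
pDepthTarget->Release();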
Quote:Original post by Black_Moon
I suppose that when the video card draws a vertex on the screen, it transforms it by D3DXMATRIX m = mWorld * mView * mProj, so if I transform the screen coords by the inverse of m, maybe I'll get the 3D coords of the click.

Yes, but each triangle could potentially have a different world matrix. Remember that the world matrix changes for each mesh that you render. It's the matrix used to position and orient the model in your scene. The view and projection matrices typically stay the same for the whole frame, but the world matrix changes for every object.
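
For the unprojection itself, D3DX can handle the matrix inversion for you. Something like this, per object (a sketch assuming D3DX's D3DXVec3Unproject, with the mouse coords, the object's mWorld, and the frame's mView and mProj at hand):

// Unproject the mouse point at the near and far planes to build a
// world-space pick ray. Note the world matrix argument: it must be the
// matrix of the particular mesh you are testing against.
D3DVIEWPORT8 vp;
device->GetViewport(&vp);

D3DXVECTOR3 screenNear((float)mouseX, (float)mouseY, 0.0f);  // z = near plane
D3DXVECTOR3 screenFar ((float)mouseX, (float)mouseY, 1.0f);  // z = far plane

D3DXVECTOR3 nearPt, farPt;
D3DXVec3Unproject(&nearPt, &screenNear, &vp, &mProj, &mView, &mWorld);
D3DXVec3Unproject(&farPt,  &screenFar,  &vp, &mProj, &mView, &mWorld);

D3DXVECTOR3 rayDir = farPt - nearPt;    // direction of the pick ray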
Quote:Original post by Black_Moon
P.S. Do you know how to post C++ code on this forum?

Look at the forum FAQ. You need to use source tags [ source ] and [ / source ] but without the spaces inside the brackets.

neneboricua
Quote:Original post by neneboricua19
You can then lock this surface and read the depth information stored there.

Yes, I think this approach fits my problem. And video cards are supposed to support locking the stencil buffer since the GeForce 3 (according to a post on nvidia.com).

First:

LPDIRECT3DSURFACE8 surf;
D3DLOCKED_RECT lr;
HRESULT hRes;
hRes = device->GetDepthStencilSurface(&surf);    // ok, hRes == S_OK
hRes = surf->LockRect(&lr, 0, D3DLOCK_READONLY); // here hRes == D3DERR_INVALIDCALL
// (in D3D8 a depth surface is only lockable if it was created with a
// lockable format such as D3DFMT_D16_LOCKABLE)
surf->UnlockRect();
SAFE_RELEASE(surf);

Maybe the locking is not right? And next, I need to convert lr.pBits into a depth array...

An idea: to get the 3D world coords of the mouse click, compute the ray intersection with all visible triangles. To determine which triangles are visible, I could use a BSP-tree algorithm...

[Edited by - Black_Moon on October 10, 2005 7:44:16 AM]
