[DX10] Your picking techniques

Started by
3 comments, last by JohnnyCode 13 years, 11 months ago
I recently posted a pixel-precision question about a very particular scenario, but I'll also be fighting "classic picking" soon... So I'm interested in knowing what picking techniques you are using for generic vertex/triangle objects, not D3DX10mesh - which I don't plan on using for several reasons. I'm wondering because, given that most transformations are done in the shaders, the positional data available on the CPU side for intersection tests is quite "raw" and thus pretty much useless. For example, I have a sphere with two frames which I interpolate in the shaders (plain linear interpolation, no bones or anything). How would I correctly check whether I am picking some triangle out of it?
..so we ate rare animals, we spent the night eating rare animals..
I get the physics engine to do all the picking, based on the physical shapes (not the graphical shapes). The physics engine maintains its own transformations for each shape (incidentally, these transforms are pushed from the physics engine into the graphics objects a lot of the time).

In the general graphics-tri-mesh case though, you've got to re-implement your vertex-shaders on the CPU. A lot of engines actually do this to some degree (e.g. supply vertex-shader and CPU-software code for animating a mesh) so you can get results on either end (CPU or GPU) if needed.
Quote:Original post by Hodgman
In the general graphics-tri-mesh case though, you've got to re-implement your vertex-shaders on the CPU. A lot of engines actually do this to some degree (e.g. supply vertex-shader and CPU-software code for animating a mesh) so you can get results on either end (CPU or GPU) if needed.


The first thing I thought, in fact, was something like: "Well, I can check against a bounding box first... then if it intersects, I'll take the object, manually transform and interpolate it, and perform the deeper intersection." It just sounds like nails on a blackboard for some reason :)
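That broad-phase step can be as cheap as a standard slab test of the pick ray against an axis-aligned bounding box; only if it passes do you pay for the transform/interpolate/per-triangle work (a minimal sketch, not from the thread; relies on IEEE infinities for axis-parallel rays):

```cpp
#include <algorithm>

// Cheap broad-phase: slab test of a ray against an axis-aligned box.
// orig/dir describe the pick ray, bmin/bmax the box extents.
bool RayAabb(const float orig[3], const float dir[3],
             const float bmin[3], const float bmax[3])
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / dir[i];      // +/-inf when the ray is axis-parallel
        float t0 = (bmin[i] - orig[i]) * inv;
        float t1 = (bmax[i] - orig[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;  // slab intervals don't overlap: miss
    }
    return true;                        // worth doing the expensive test
}
```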

I implemented picking for an editor by using multiple render targets:

Render target 1 - colour render target (what the user sees).
Render target 2 - identifier render target, stores primitive IDs. The user doesn't see this.

When I need to do a pick, I read a few pixels around the cursor from render target 2 and make a decision on what ID is under / nearest the cursor. Basically it goes something like

- if there is a valid ID right under the cursor, then choose that.
- otherwise, find the most common ID, weighted by distance from the centre of the cursor

It could be more sophisticated than that, but it works OK. Then I look up the primitive corresponding to the chosen ID in a map (needs to use a data structure with near O(1) search time).
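The selection logic above might be sketched like this, assuming the ID window has already been read back from render target 2 (the function name and the "ID 0 means empty" convention are my assumptions, not from the post):

```cpp
#include <cmath>
#include <cstdint>
#include <map>

const uint32_t INVALID_ID = 0; // assumed convention: 0 = nothing rendered here

// ids is an n*n window of primitive IDs read back around the cursor
// (row-major), with the cursor at the centre texel.
uint32_t PickId(const uint32_t* ids, int n)
{
    int c = n / 2;
    if (ids[c * n + c] != INVALID_ID)
        return ids[c * n + c];               // valid ID right under the cursor

    std::map<uint32_t, float> weight;        // ID -> distance-weighted score
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x) {
            uint32_t id = ids[y * n + x];
            if (id == INVALID_ID) continue;
            float d = std::sqrt(float((x - c) * (x - c) + (y - c) * (y - c)));
            weight[id] += 1.0f / (1.0f + d); // nearer texels count more
        }
    uint32_t best = INVALID_ID;
    float bestW = 0.0f;
    for (std::map<uint32_t, float>::iterator it = weight.begin();
         it != weight.end(); ++it)
        if (it->second > bestW) { best = it->first; bestW = it->second; }
    return best;                             // then look this up in the ID map
}
```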

The ID codes are stored in the buffer data that is input to the shaders. This means that, depending on your needs, you can then make the ID codes unique at whatever level of granularity you want: a triangle / point / line, a mesh, part of a mesh, a polygon, a curve, a surface etc. DX10 has good support for integer datatypes.

You could probably also use the ID method as part of a hybrid pick system that also uses geometric information.

The main downside of this method (as I have currently implemented it) is that it can't pick anything that is covered up by something else. A geometric pick system has the advantage of being able to tell you all of the primitives under the cursor, if you cast a ray through the scene.

[Edited by - tweduk on May 5, 2010 8:48:11 AM]
I do not quite get your point, or what you are aiming for with "pixel-precise picking out of the render target".

If you want to pick an object, you have to know which object it is - how are you going to identify that from an x,y,z point in the render target?

If you want to know whether a rendered object is hit by a ray, transform the ray from screen space to the object space of that object, run an intersection algorithm on all faces, get the hit point, and transform the point back to world space.

You do not have to worry that the vertices on the CPU are different from the GPU ones - you move the ray into the space the GPU uses.

Example:

an object has a world matrix, and the shader applies a bone matrix, so in the shader your .Pos is transformed by Proj*View*ObjWorld*BoneMat.

On the CPU you transform the ray by invProj*invView*invWorld*invBone and check the ray against the raw vertices. I recommend storing the vertices in extra memory so you do not have to lock buffers.
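The "run an intersection algorithm on all faces" step can be sketched with the standard Möller-Trumbore ray/triangle test, run on the raw object-space vertices once the ray has been moved into object space as described (types and names here are illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle test in OBJECT space: orig/dir are the pick
// ray already transformed by the inverse matrix chain described above.
// Returns true and writes the ray parameter t on a hit.
bool RayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* t)
{
    const float EPS = 1e-6f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;    // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;    // outside barycentric range
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot(e2, q) * inv;
    return *t >= 0.0f;                          // hit in front of the origin
}
```

Loop this over all faces, keep the smallest t, and transform orig + t*dir back to world space for the final hit point.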

