
Collision detection with vertex shaders?



Hello all, I've been doing a bit of research on how to best approach collision detection for a game I'm working on. Currently I'm offloading skeletal deformation of my meshes to the GPU, which is working great, but it presents a new problem: how do I perform accurate collision detection if my mesh's final state is sitting on the GPU? Any data I had before it was uploaded could be invalid at that point for accurate polygon-polygon collision detection.

Any suggestions for a possible workaround?

I can think of two possible solutions:
1. Do all collision detection purely on the GPU (although this requires locking the GPU for a read-back to the CPU to actually use the results).
2. Perform skeletal deformation on the CPU and keep a valid copy of the mesh's current state in RAM for working on the object (a rough sketch of this is below).

I believe I could implement option 2 fairly easily, and speed it up by offloading the deformation to another thread, but I don't know which direction would be best.
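
For what it's worth, here's a minimal sketch of what I imagine option 2 boiling down to: the same linear-blend skinning the vertex shader does, just run on the CPU into a collision-only copy (all the types and names below are made up for illustration, not from my actual code):

```cpp
// Rough sketch of option 2: keep a CPU-side copy of the skinned mesh for collision.
// All type/field names here (SkinnedVertex, bonePalette, etc.) are hypothetical.
#include <vector>
#include <cstdint>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };          // column-major bone matrix

Vec3 transformPoint(const Mat4& m, const Vec3& p)
{
    return {
        m.m[0]*p.x + m.m[4]*p.y + m.m[8] *p.z + m.m[12],
        m.m[1]*p.x + m.m[5]*p.y + m.m[9] *p.z + m.m[13],
        m.m[2]*p.x + m.m[6]*p.y + m.m[10]*p.z + m.m[14]
    };
}

struct SkinnedVertex
{
    Vec3    bindPos;        // position in the bind pose
    uint8_t boneIndex[4];   // up to 4 influencing bones
    float   boneWeight[4];
};

// The same math the vertex shader performs, run on the CPU so the result
// can be handed to the collision system (or a worker thread).
void skinForCollision(const std::vector<SkinnedVertex>& src,
                      const std::vector<Mat4>& bonePalette,
                      std::vector<Vec3>& dst)
{
    dst.resize(src.size());
    for (size_t i = 0; i < src.size(); ++i)
    {
        Vec3 p = {0, 0, 0};
        for (int j = 0; j < 4; ++j)
        {
            const float w = src[i].boneWeight[j];
            if (w == 0.0f) continue;
            const Vec3 t = transformPoint(bonePalette[src[i].boneIndex[j]], src[i].bindPos);
            p.x += w * t.x;  p.y += w * t.y;  p.z += w * t.z;
        }
        dst[i] = p;
    }
}
```

Offloading that to another thread would then mostly be a matter of handing it the frame's bone palette and double-buffering the output array.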

You almost never want to use the actual, renderable triangles for collision detection -- collision systems usually represent the same data in a different representation, such as a collection of boxes/cylinders/polytopes/spheres instead of a collection of triangles.

Also, generally the animation of the skeleton (i.e. the bone hierarchy) is done on the CPU, while the GPU uses these bones to deform the vertices. So, you should use the CPU-side bones to deform the CPU-side collision-shapes, and also use the same bones to deform the GPU-side vertices/triangles.

If you're running the entire skeletal animation logic on the GPU (i.e. calculating the bone matrices there from animation playback/blending/etc), then I'd duplicate the same logic on the CPU for the sake of the collision system :/
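
As a rough illustration of that, assuming sphere proxies attached to bones (purely hypothetical names, reusing the Vec3/Mat4/transformPoint helpers sketched in the post above):

```cpp
// Sketch: collision proxies (spheres here) attached to bones, updated from the
// same CPU-side bone matrices that get uploaded for GPU skinning.
// Everything here (BoneSphere, worldBones, etc.) is illustrative, not a real API.
#include <vector>

struct Sphere { Vec3 center; float radius; };

struct BoneSphere
{
    int   bone;         // index into the bone palette
    Vec3  localCenter;  // sphere center in that bone's space
    float radius;
};

void updateCollisionProxies(const std::vector<Mat4>& worldBones,   // same matrices sent to the GPU
                            const std::vector<BoneSphere>& proxies,
                            std::vector<Sphere>& out)
{
    out.resize(proxies.size());
    for (size_t i = 0; i < proxies.size(); ++i)
    {
        out[i].center = transformPoint(worldBones[proxies[i].bone], proxies[i].localCenter);
        out[i].radius = proxies[i].radius;  // rigid transform, radius unchanged
    }
}
```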


You almost never want to use the actual, renderable triangles for collision detection -- collision systems usually represent the same data in a different representation, such as a collection of boxes/cylinders/polytopes/spheres instead of a collection of triangles.

Also, generally the animation of the skeleton (i.e. the bone hierarchy) is done on the CPU, while the GPU uses these bones to deform the vertices. So, you should use the CPU-side bones to deform the CPU-side collision-shapes, and also use the same bones to deform the GPU-side vertices/triangles.

If you're running the entire skeletal animation logic on the GPU (i.e. calculating the bone matrices there from animation playback/blending/etc), then I'd duplicate the same logic on the CPU for the sake of the collision system :/

It really depends on what I'm aiming for, and I do want true polygon collision (my collision order is a tree of AABB checks, where each bottom leaf links directly to the face it represents, so only after all the AABB checks have passed would I perform triangle->triangle collision, which is necessary for what I'm trying to do).

Yes, I do do the bone animation on the CPU; I only meant that the bone->mesh deformation is done on the GPU. Also, translating just the AABBs sounds like a much less expensive solution, and not one I had thought of (and it would only require translating them as needed, which is another plus).

If anyone else has alternative methods, I wouldn't mind mulling over a few possible approaches to this situation.

Edit: actually, I'm not sure this method would work 100%, or at the very least I would get some false positives. When I translate an AABB which represents a face whose vertices are linked to different bones, how would I approach that situation?

My original idea, if I did CPU deformation of the mesh, was that after (or even during) deforming the vertices I'd update the AABBs accordingly, but by translating only the AABBs I'd run into this type of situation.
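
For that mixed-bone case, the fallback I'm picturing is roughly this: skin just the three corners of the face on the CPU and rebuild the leaf box from them (reusing the types and helpers from my earlier sketch; purely illustrative):

```cpp
// Sketch: refit a leaf AABB when the face's vertices are influenced by
// different bones -- skin the three corners on the CPU and take min/max.
#include <algorithm>
#include <vector>

struct Aabb { Vec3 min, max; };

Aabb refitFaceAabb(const SkinnedVertex& a, const SkinnedVertex& b, const SkinnedVertex& c,
                   const std::vector<Mat4>& bonePalette)
{
    const SkinnedVertex* verts[3] = { &a, &b, &c };
    Aabb box;
    box.min = {  1e30f,  1e30f,  1e30f };
    box.max = { -1e30f, -1e30f, -1e30f };
    for (const SkinnedVertex* v : verts)
    {
        // Same weighted blend as the full CPU skinning pass, but only for 3 vertices.
        Vec3 p = {0, 0, 0};
        for (int j = 0; j < 4; ++j)
        {
            const float w = v->boneWeight[j];
            if (w == 0.0f) continue;
            const Vec3 t = transformPoint(bonePalette[v->boneIndex[j]], v->bindPos);
            p.x += w*t.x;  p.y += w*t.y;  p.z += w*t.z;
        }
        box.min.x = std::min(box.min.x, p.x); box.max.x = std::max(box.max.x, p.x);
        box.min.y = std::min(box.min.y, p.y); box.max.y = std::max(box.max.y, p.y);
        box.min.z = std::min(box.min.z, p.z); box.max.z = std::max(box.max.z, p.z);
    }
    return box;
}
```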

I'm very inclined to follow Hodgman's advice.
But if accurate collision is needed (such as for "pain textures"), I'd try something with geometry shaders, outputting a sequence of triangles to a UAV... or perhaps I might just stream them out for later readback?
I'm just shooting in the dark, and I can't really tell what will happen when those triangles are later deformed.
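
If this were D3D11, the readback half of that idea might look roughly like the following. This is only the buffer plumbing around an assumed existing device/context; the skinning shaders, the stream-output declaration and the draw itself are left out:

```cpp
// Sketch of the stream-out + readback idea on D3D11. Only the buffer plumbing
// is shown; the skinning vertex/geometry shader and draw setup are omitted.
#include <d3d11.h>

void readBackSkinnedTriangles(ID3D11Device* device, ID3D11DeviceContext* ctx,
                              UINT byteSize /* size of the streamed-out data */)
{
    // Buffer the GPU streams the skinned vertices into.
    D3D11_BUFFER_DESC soDesc = {};
    soDesc.ByteWidth = byteSize;
    soDesc.Usage     = D3D11_USAGE_DEFAULT;
    soDesc.BindFlags = D3D11_BIND_STREAM_OUTPUT | D3D11_BIND_VERTEX_BUFFER;
    ID3D11Buffer* soBuffer = nullptr;
    device->CreateBuffer(&soDesc, nullptr, &soBuffer);

    // CPU-readable staging copy.
    D3D11_BUFFER_DESC stagingDesc = {};
    stagingDesc.ByteWidth      = byteSize;
    stagingDesc.Usage          = D3D11_USAGE_STAGING;
    stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    ID3D11Buffer* stagingBuffer = nullptr;
    device->CreateBuffer(&stagingDesc, nullptr, &stagingBuffer);

    UINT offset = 0;
    ctx->SOSetTargets(1, &soBuffer, &offset);
    // ... bind the skinning shaders and issue the draw here ...
    ctx->SOSetTargets(0, nullptr, nullptr);

    // Copy to staging and map -- this stalls until the GPU is done,
    // which is exactly the readback cost mentioned in option 1 above.
    ctx->CopyResource(stagingBuffer, soBuffer);
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(ctx->Map(stagingBuffer, 0, D3D11_MAP_READ, 0, &mapped)))
    {
        // mapped.pData now points at the skinned vertices for the collision system.
        ctx->Unmap(stagingBuffer, 0);
    }

    soBuffer->Release();
    stagingBuffer->Release();
}
```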


I'm very inclined to follow Hodgman's advice.
But if accurate collision is needed (such as for "pain textures"), I'd try something with geometry shaders, outputting a sequence of triangles to a UAV... or perhaps I might just stream them out for later readback?
I'm just shooting in the dark, and I can't really tell what will happen when those triangles are later deformed.


I do like Hodgman's idea; it would divide the cost of skeletal deformation for the entire object by 3, since I'd only need to calculate a face's new center point when all three of its vertices are translated by the same bone. That might be easy, but if one vertex in the face is translated by a different bone, it can cause problems.

I've thought of one approach: check which bone each vertex belongs to; if they don't all belong to the same bone, deform each vertex and calculate a new AABB, and if they do all belong to the same bone, deform the face's center point by that bone (a rough sketch of this is below).

Any suggestions on possible alternative approaches are most welcome.
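
Roughly, that per-face decision might look like this (again reusing the types and the refitFaceAabb helper from the earlier sketches; just a sketch of the idea, not tested code):

```cpp
// Sketch of the per-face decision. faceLocalBox would be the leaf AABB
// precomputed in the bind pose -- all names here are illustrative.
#include <algorithm>
#include <vector>

bool singleBoneFace(const SkinnedVertex& a, const SkinnedVertex& b, const SkinnedVertex& c)
{
    // "Belongs to one bone" here means: full weight on the first influence,
    // and the same bone index on all three vertices.
    auto rigid = [](const SkinnedVertex& v) { return v.boneWeight[0] == 1.0f; };
    return rigid(a) && rigid(b) && rigid(c)
        && a.boneIndex[0] == b.boneIndex[0]
        && a.boneIndex[0] == c.boneIndex[0];
}

Aabb updateLeaf(const SkinnedVertex& a, const SkinnedVertex& b, const SkinnedVertex& c,
                const Aabb& faceLocalBox, const std::vector<Mat4>& bonePalette)
{
    if (singleBoneFace(a, b, c))
    {
        // Fast path: move the precomputed bind-pose box with the single bone.
        // All 8 corners are transformed so rotation stays conservative.
        const Mat4& m = bonePalette[a.boneIndex[0]];
        Aabb box;
        box.min = {  1e30f,  1e30f,  1e30f };
        box.max = { -1e30f, -1e30f, -1e30f };
        for (int i = 0; i < 8; ++i)
        {
            const Vec3 corner = { (i & 1) ? faceLocalBox.max.x : faceLocalBox.min.x,
                                  (i & 2) ? faceLocalBox.max.y : faceLocalBox.min.y,
                                  (i & 4) ? faceLocalBox.max.z : faceLocalBox.min.z };
            const Vec3 p = transformPoint(m, corner);
            box.min.x = std::min(box.min.x, p.x); box.max.x = std::max(box.max.x, p.x);
            box.min.y = std::min(box.min.y, p.y); box.max.y = std::max(box.max.y, p.y);
            box.min.z = std::min(box.min.z, p.z); box.max.z = std::max(box.max.z, p.z);
        }
        return box;
    }
    // Slow path: the face spans multiple bones, so skin its three vertices
    // and rebuild the box (refitFaceAabb from the sketch above).
    return refitFaceAabb(a, b, c, bonePalette);
}
```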
