anders211

How to make a collision mesh for an animated (non-rigid) human mesh?


I have a skeletal human mesh consisting of tens of bones. Now, how do I know whether a collision object, for example a ball, hits the shoulder or the knee of the human mesh?
 
My algorithm:
1.) Based on the per-bone vertex influences, build bounding volumes (spheres, boxes, cylinders, etc.). After this step the animated human mesh is approximated by tens of separate, simple bounding volumes; the number of bounding volumes equals the number of bones.
In my case I only take vertex weights > 0.5 into account, and the results are as below.
Thanks to this approximated model I can detect which part of the body the ball hits.
2.) However, the last sentence is not entirely true. I can determine whether the ball hits the head, knee or shoulder, but I am not able to determine whether it hits the mouth, the left/right ear, the forehead, etc.
So what should I do to get that functionality? Here I am stuck. My only idea:
    a.) In a 3D tool (Blender in my case), bound the left ear with, for example, a sphere mesh. Save the mesh to a file.
    b.) In the 3D tool, bound the right ear with, for example, a sphere mesh. Save the mesh to a file.
    ...
    z.) In the 3D tool, bound another interesting small body part and save it to a file.
Now load each body-part mesh file into the C++ application and run D3DXComputeBoundingBox/Sphere on each (or your own implementation for other volumes, such as a cylinder).
Then, to detect whether the ball hits, say, the left ear, you first iterate through the bounding volumes from step 1 (transformed by the final bone matrices). The result is that the ball hit the bounding volume built from the head bone. You then iterate through all the small body pieces in the head area, and this way you find the ear sphere from step 2; a sketch of this two-phase lookup is shown below.
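(For illustration, a minimal C++/D3DX sketch of that two-phase lookup might look as follows. BoneSphere, boneSpheres, partSpheres and finalBoneMatrices are hypothetical names used only for this sketch, not anything from D3DX.)

#include <d3dx9.h>
#include <vector>

// Hypothetical per-bone collision sphere, stored in the bone's local space.
struct BoneSphere
{
    int         boneIndex;   // index into the final bone matrix array
    D3DXVECTOR3 localCenter;
    float       radius;
};

// Phase 1: find the coarse per-bone sphere the ball overlaps.
// Phase 2: test only the small part spheres attached to that bone.
// Returns the index of the hit part sphere, or -1 if none.
int FindHitPart(const D3DXVECTOR3& ballCenter, float ballRadius,
                const std::vector<BoneSphere>& boneSpheres,   // step 1 volumes
                const std::vector<BoneSphere>& partSpheres,   // ear, nose, ...
                const D3DXMATRIX* finalBoneMatrices)
{
    for (size_t i = 0; i < boneSpheres.size(); ++i)
    {
        D3DXVECTOR3 c;
        D3DXVec3TransformCoord(&c, &boneSpheres[i].localCenter,
                               &finalBoneMatrices[boneSpheres[i].boneIndex]);
        D3DXVECTOR3 d = ballCenter - c;
        float sumR = ballRadius + boneSpheres[i].radius;
        if (D3DXVec3LengthSq(&d) > sumR * sumR)
            continue;                              // coarse miss, next bone

        for (size_t j = 0; j < partSpheres.size(); ++j)
        {
            if (partSpheres[j].boneIndex != boneSpheres[i].boneIndex)
                continue;                          // part belongs to another bone
            D3DXVec3TransformCoord(&c, &partSpheres[j].localCenter,
                                   &finalBoneMatrices[partSpheres[j].boneIndex]);
            d = ballCenter - c;
            sumR = ballRadius + partSpheres[j].radius;
            if (D3DXVec3LengthSq(&d) <= sumR * sumR)
                return (int)j;                     // fine hit: which body part
        }
    }
    return -1;                                     // no fine hit
}

Because the fine spheres are stored in the local space of the bone they belong to, the same final matrices that animate the mesh also place the collision spheres.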
 
Is my algorithm OK, or is a collision mesh usually made another way? I have no experience with this; these are just my thoughts on how to detect the ball colliding with a human mesh.

You mention, in particular, testing for a collision between a ball (sphere) and your mesh. You don't need a sphere mesh to continue the collision algorithm. Once you've determined the appropriate impacted large bounding volume, you can use just a simple sphere (center point transformed by the appropriate skinning matrix) and a radius for each body part of interest within the bounding volume. Sphere-sphere intersections between the soccer ball (given a center point and radius) and a "body part bounding sphere" can be done very quickly.
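By way of illustration, that test can be as small as the following sketch (both centers are assumed to be in the same space, with the body-part center already transformed by its skinning matrix):

#include <d3dx9.h>

// Sphere-sphere overlap: compare squared distance against the squared
// sum of radii, so no square root is needed.
inline bool SpheresOverlap(const D3DXVECTOR3& c0, float r0,
                           const D3DXVECTOR3& c1, float r1)
{
    D3DXVECTOR3 d = c0 - c1;
    const float sumR = r0 + r1;
    return D3DXVec3LengthSq(&d) <= sumR * sumR;
}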

 

EDIT: I don't know how accurate you want the collision to be, but, when the impacted bounding volume has been determined, a relatively quick calc for the minimum distance between the soccer ball center point and body part points (transformed, no radius needed) would be even quicker. Use the square of (soccer-ball-center-point minus transformed-body-point) for comparison and avoid using square roots altogether.
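A sketch of that variant, assuming you keep a small array of representative bind-pose points per body part and transform them by the part's skinning matrix when you test:

#include <d3dx9.h>

// True if any transformed body-part point lies inside the ball.
// Only squared distances are compared; no square roots are taken.
bool BallTouchesPart(const D3DXVECTOR3& ballCenter, float ballRadius,
                     const D3DXVECTOR3* bindPosePoints, size_t count,
                     const D3DXMATRIX& skinningMatrix)
{
    const float r2 = ballRadius * ballRadius;
    for (size_t i = 0; i < count; ++i)
    {
        D3DXVECTOR3 p;
        D3DXVec3TransformCoord(&p, &bindPosePoints[i], &skinningMatrix);
        D3DXVECTOR3 d = ballCenter - p;
        if (D3DXVec3LengthSq(&d) <= r2)
            return true;
    }
    return false;
}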


Sorry, but I didn't understand why I don't need a sphere mesh for each separate body part. My idea is to model, in a 3D tool (Blender in my case), a bounding volume for each body part I want to distinguish. So there would be a separate sphere/cylinder/box mesh for the left ear, right ear, nose, mouth, forehead; I also want to divide the foot into small pieces (small bounding volumes). Each mesh is then saved to a file and loaded into the C++ application, where I use D3DXComputeBoundingBox/Sphere (for a cylinder I would probably have to write my own implementation) on each mesh to automatically determine the bounding volumes. For a sphere, for example, it would be:

struct sphereVolume
{
    D3DXVECTOR3 center; // sphere center
    float       radius;
};

Then I delete the meshes, because their only purpose was to determine the sphere/box/cylinder volume structures.

Without modelling the bounding volumes in Blender, I don't know how to determine them. Is there an easier way? Maybe it should be done with hardcoded values: I model a bounding volume in the 3D tool, read its values off the tool, write them into some XML, do the same for each bounding volume, and then just load the XML file into the C++ application instead of the meshes. That way I don't have to use D3DXComputeBoundingBox/Sphere (or implement a function for cylinders); I just fill my structures with the data I need.
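For the compute-then-discard route, the D3DX call itself is only a few lines. A rough sketch, assuming pPartMesh is a hypothetical, already-loaded ID3DXMesh for one body part and that position is the first element of its vertex format:

// Fill the sphereVolume struct above from the helper mesh, then discard the mesh.
// (Requires d3dx9.h / d3dx9.lib.)
sphereVolume vol;
void* pVerts = NULL;
pPartMesh->LockVertexBuffer(D3DLOCK_READONLY, &pVerts);
D3DXComputeBoundingSphere((const D3DXVECTOR3*)pVerts,       // first position
                          pPartMesh->GetNumVertices(),
                          pPartMesh->GetNumBytesPerVertex(), // vertex stride
                          &vol.center, &vol.radius);
pPartMesh->UnlockVertexBuffer();
pPartMesh->Release();   // the helper mesh is no longer needed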

Each bounding volume will correspond to only one bone with weight 1, and here I worry that the result won't be satisfactory around the joints. For bounding volume meshes in joint areas I will probably need to set weight 0.5 for the two shared bones, and tune the weights by experiment.

Then collision detection would just be bounding-volume vs. ball-sphere tests. Of course, each bounding volume's coordinates would first be transformed by the final transformation matrix of its bone.

Is the above concept OK? Or maybe there is a much easier way to build the collision structures and do the detection? You mentioned something about:

EDIT: I don't know how accurate you want the collision to be, but, when the impacted bounding volume has been determined, a relatively quick calc for the minimum distance between the soccer ball center point and body part points (transformed, no radius needed) would be even quicker. Use the square of (soccer-ball-center-point minus transformed-body-point) for comparison and avoid using square roots altogether

 

 

However, I don't understand it, so could you make it clearer? I need enough accuracy that, to the user's eye, the ball visibly appears to contact the body. Sorry for the basic questions, but I am implementing a game for the first time and am now at the collision stage, which I considered the most difficult task before starting. Even the ball physics, which I have already implemented and which works quite well, I consider the easier task.


It appears your primary problem is to determine bounding sphere parameters for the small body parts (ear, nose, etc.). Your approach is to create a bounding mesh in the modeler (for which you'll have to know the center point and radius), import the mesh, analyze that mesh to determine the original center point and radius, and "throw" the mesh away. That's a lot of modeling work, and coding for import and analysis.

 

A simpler approach may be to add a bone for each small body part, positioned where you want the sphere center, perhaps with a child bone of that bone with a length (distance from parent) for the radius. After you import the mesh hierarchy, extract the information needed for your small collision spheres from the hierarchy data. You can delete those bones from the hierarchy after the sphere info has been loaded. That approach would allow you to edit the mesh (change a bone position and/or length) and still have an "automatic" way to load the required data.
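For what it's worth, a sketch of extracting such helper bones from a loaded D3DXFRAME hierarchy; the "col_" name prefix and the CollisionSphere struct are just illustrative conventions for this sketch, not part of D3DX:

#include <d3dx9.h>
#include <cstring>
#include <string>
#include <vector>

struct CollisionSphere
{
    std::string name;        // e.g. "col_left_ear" (hypothetical naming convention)
    D3DXVECTOR3 localCenter; // translation of the helper frame
    float       radius;      // distance to its first child, if any
};

// Walk the frame hierarchy; every frame whose name starts with "col_" becomes
// a collision sphere. The frame's translation gives the center, and the
// distance to its first child (if present) gives the radius.
void CollectCollisionSpheres(const D3DXFRAME* frame,
                             std::vector<CollisionSphere>& out)
{
    if (!frame)
        return;

    if (frame->Name && std::strncmp(frame->Name, "col_", 4) == 0)
    {
        CollisionSphere s;
        s.name = frame->Name;
        const D3DXMATRIX& m = frame->TransformationMatrix;
        s.localCenter = D3DXVECTOR3(m._41, m._42, m._43);   // translation part
        s.radius = 0.0f;
        if (frame->pFrameFirstChild)
        {
            const D3DXMATRIX& c = frame->pFrameFirstChild->TransformationMatrix;
            D3DXVECTOR3 childOffset(c._41, c._42, c._43);
            s.radius = D3DXVec3Length(&childOffset);         // child distance = radius
        }
        out.push_back(s);
    }

    CollectCollisionSpheres(frame->pFrameFirstChild, out);
    CollectCollisionSpheres(frame->pFrameSibling, out);
}

Note that TransformationMatrix is relative to the parent frame, so at runtime you would still use the helper bone's combined (animated) matrix to place the sphere center in world space, just as for the rest of the skeleton.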


Another possible approach:

 

If you have a skinned mesh with subsets over your bone skeleton, you might try "ray - mesh intersect" (as seen in DirectX's raypick sample). The flight path of the ball is the ray; your skinned mesh is the mesh. The sample returns the subset #, triangle number, and coordinates of intersection, from which you might be able to determine eye, nose, mouth, etc., especially if they are different subsets in your skinned mesh. Sounds easier than making a bunch of bbox models.
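For reference, the core D3DXIntersect call from that approach looks roughly like this (a sketch; pMesh, rayOrigin and rayDirection are assumed to already be in the same space, and mapping the returned face index to a subset/body part, e.g. via the mesh's attribute buffer as the pick sample does, is left out):

// Cast the ball's flight path as a ray against the mesh.
// (Requires d3dx9.h / d3dx9.lib.)
BOOL         hit = FALSE;
DWORD        faceIndex = 0, hitCount = 0;
FLOAT        u = 0.0f, v = 0.0f, dist = 0.0f;
LPD3DXBUFFER pAllHits = NULL;

D3DXIntersect(pMesh,                // ID3DXMesh* to test against
              &rayOrigin,           // D3DXVECTOR3: ball position
              &rayDirection,        // D3DXVECTOR3: normalized flight direction
              &hit, &faceIndex, &u, &v, &dist,
              &pAllHits, &hitCount);
if (pAllHits)
    pAllHits->Release();
if (hit)
{
    // faceIndex identifies the triangle that was struck; its attribute id
    // (subset) tells you which body part it belongs to.
}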



you might try "ray - mesh intersect" (as seen in directx's raypick sample) ... sounds easier than making a bunch of bbox models.

 

The D3DXIntersectxxx functions work well with static meshes. Those functions access the mesh triangles as stored in the ID3DXBaseMesh buffers. However, an animated mesh is a different beast, as the bone transforms are (normally) applied in a shader to move the base mesh vertices into their animated positions. Further, each vertex in a triangle may be influenced by more than one bone, and may be influenced by a different set of bones than the other vertices in the same triangle, which can cause the triangle to "stretch" and "shrink" during animation. Although it can be done on the CPU, recalculating every vertex each frame to provide exact intersections is a very expensive operation.
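To make that cost concrete, here is a rough sketch of what recalculating one vertex on the CPU involves, assuming a common layout of up to four bone indices/weights per vertex (unused slots carrying weight 0):

#include <d3dx9.h>

// Blend the final bone matrices by weight, then transform the bind-pose
// position. Doing this for every vertex of the mesh, every frame, is what
// makes exact CPU-side intersection expensive.
D3DXVECTOR3 SkinPosition(const D3DXVECTOR3& bindPos,
                         const DWORD boneIndices[4],
                         const float boneWeights[4],
                         const D3DXMATRIX* finalBoneMatrices)
{
    D3DXMATRIX blended;
    ZeroMemory(&blended, sizeof(blended));
    for (int i = 0; i < 4; ++i)
        blended += finalBoneMatrices[boneIndices[i]] * boneWeights[i];

    D3DXVECTOR3 skinned;
    D3DXVec3TransformCoord(&skinned, &bindPos, &blended);
    return skinned;
}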

