"line from point perpendicular to triangle plane." - no, that is not a direction. It is just the closest distance from the point to the mesh face.
It is not the ray direction, because between two frames the movement direction might not be perpendicular to the triangle plane. I mean: in frame N the point is in the positive half-space, in frame N+1 it is in the negative half-space, so we have a collision, and the ray direction might not be perpendicular to the triangle face.
So in order to get a ray direction I should remember the position of the point in frame N, and then in frame N+1 compute the ray as P(N+1) - P(N) and normalize it. And that value is what I should pass as the "D" parameter, am I right? Is this the way you do it when using the Möller–Trumbore algorithm, or do you determine the ray direction some other way?
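If that is the approach, a minimal sketch of building the ray from two consecutive frame positions would look like this (`prevPos`, `currPos`, and `makeRay` are assumed names, not anything from your code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Sketch: build the ray for a swept-point test from the point's position
// in frame N (prevPos) and frame N+1 (currPos). Returns false when the
// point did not move, in which case there is no ray to cast.
bool makeRay(Vec3 prevPos, Vec3 currPos, Vec3* O, Vec3* D, float* maxT)
{
    Vec3 d = { currPos.x - prevPos.x,
               currPos.y - prevPos.y,
               currPos.z - prevPos.z };
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len == 0.0f) return false;            // point did not move this frame
    *O = prevPos;                             // ray origin = old position
    *D = Vec3{ d.x/len, d.y/len, d.z/len };   // normalized movement direction
    *maxT = len;                              // hit counts only if t <= maxT
    return true;
}
```

One detail worth noting: after running the intersection test, a hit should only be accepted if the returned `t` lies in `[0, maxT]`; otherwise the triangle lies beyond this frame's movement and was not actually crossed.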
int triangle_intersection(const Vec3 V1,  // Triangle vertices
                          const Vec3 V2,
                          const Vec3 V3,
                          const Vec3 O,   // Ray origin
                          const Vec3 D,   // Ray direction
                          float* out)
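For reference, the body that usually goes with that signature is the standard Möller–Trumbore test. This is a sketch assuming a plain `Vec3` struct with `x`/`y`/`z` floats and free `dot`/`cross`/`sub` helpers (not your exact code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Möller–Trumbore: returns 1 and writes the ray parameter t to *out
// when the ray O + t*D crosses the triangle (V1, V2, V3).
int triangle_intersection(const Vec3 V1, const Vec3 V2, const Vec3 V3,
                          const Vec3 O, const Vec3 D, float* out)
{
    const float EPSILON = 1e-6f;
    Vec3 e1 = sub(V2, V1);                  // triangle edges from V1
    Vec3 e2 = sub(V3, V1);
    Vec3 p  = cross(D, e2);
    float det = dot(e1, p);
    if (det > -EPSILON && det < EPSILON)
        return 0;                           // ray is parallel to the triangle
    float invDet = 1.0f / det;
    Vec3 tvec = sub(O, V1);
    float u = dot(tvec, p) * invDet;        // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return 0;
    Vec3 q = cross(tvec, e1);
    float v = dot(D, q) * invDet;           // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return 0;
    float t = dot(e2, q) * invDet;          // distance along the ray
    if (t > EPSILON) { *out = t; return 1; }
    return 0;                               // intersection behind the origin
}
```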
OK, but what really is the "ray direction"? I don't have any "ray direction" attached to any object. So this Möller–Trumbore algorithm cannot be used for mesh collision when you don't have a "ray direction" variable attached to a vertex?
Sorry, but I didn't understand why I don't need a sphere mesh for each separate body part. My idea is to model, in a 3D tool (Blender in my case), a bounding volume for each body part I want to differentiate. So there would be a separate sphere/cylinder/box mesh for the left ear, right ear, nose, mouth, forehead; I also want to divide the foot into small pieces (small bounding volumes). Each mesh is then saved into a file. Then I load those meshes into the C++ application and use D3DXComputeBoundingBox/Sphere (for a cylinder I probably have to write my own implementation) on each mesh to automatically determine the bounding volumes, so for example for a sphere it would be:
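A minimal sketch of what that could look like, with the struct name and `fitSphere` made up for illustration, and a crude centroid-plus-max-distance fit standing in for D3DXComputeBoundingSphere:

```cpp
#include <cmath>
#include <vector>

// Hypothetical structure that survives after the Blender meshes are
// deleted: only the fitted volume and the bone it belongs to remain.
struct SphereVolume {
    float cx, cy, cz;   // center in bind pose
    float radius;
    int   boneIndex;    // the bone this volume follows
};

// Crude stand-in for D3DXComputeBoundingSphere: center at the vertex
// centroid, radius = distance to the farthest vertex. xyz is a flat
// x,y,z,x,y,z,... array of the loaded mesh's vertex positions.
SphereVolume fitSphere(const std::vector<float>& xyz, int boneIndex)
{
    SphereVolume s = {0.0f, 0.0f, 0.0f, 0.0f, boneIndex};
    const std::size_t n = xyz.size() / 3;
    for (std::size_t i = 0; i < n; ++i) {
        s.cx += xyz[3*i]; s.cy += xyz[3*i + 1]; s.cz += xyz[3*i + 2];
    }
    s.cx /= n; s.cy /= n; s.cz /= n;
    for (std::size_t i = 0; i < n; ++i) {
        float dx = xyz[3*i]     - s.cx;
        float dy = xyz[3*i + 1] - s.cy;
        float dz = xyz[3*i + 2] - s.cz;
        float d = std::sqrt(dx*dx + dy*dy + dz*dz);
        if (d > s.radius) s.radius = d;     // farthest vertex sets the radius
    }
    return s;
}
```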
Then I delete my meshes, because the aim was only to determine a structure with sphere/box/cylinder volumes.
Without modelling bounding volumes in Blender I don't know how to determine them. Is there an easier way? Maybe it should be done with hardcoded values: I model a bounding volume in the 3D tool, read the values from it, write them into some XML file, do this for each bounding volume, and then just load the XML file into the C++ application instead of the meshes. That way I don't have to use D3DXComputeBoundingBox/Sphere (or implement a function for cylinders); I already have all the data needed to fill my structures.
Each bounding volume will correspond to only one bone with weight 1, and here I worry that the effect won't be satisfactory in the joint areas. So for the bounding volume meshes in joint areas I will probably need to set a weight of 0.5 for the two shared bones; I will probably have to tune the weights experimentally.
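That matches how linear-blend skinning usually handles shared regions. A tiny sketch of the idea, where `p1` and `p2` stand for the volume center already transformed by each bone's final matrix (all names here are assumptions):

```cpp
struct Vec3 { float x, y, z; };

// Blend a joint-area volume center between two bones, linear-blend
// skinning style: p1/p2 are the center transformed by each bone's final
// matrix, w1/w2 the per-bone weights (they should sum to 1).
Vec3 blendCenter(Vec3 p1, Vec3 p2, float w1, float w2)
{
    return { w1*p1.x + w2*p2.x,
             w1*p1.y + w2*p2.y,
             w1*p1.z + w2*p2.z };
}
```

With `w1 = w2 = 0.5` the volume simply follows the midpoint of the two bone transforms, which is exactly the tuning knob being described.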
Then collision detection would just be bounding-volume vs. ball-sphere tests. Of course, each bounding volume's coordinates would be transformed by the appropriate final transformation matrix for the given bone.
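For the sphere volumes at least, that per-pair test is very cheap. A sketch (names assumed) comparing the squared center distance against the summed radii, so no square root is needed:

```cpp
struct Vec3 { float x, y, z; };

// Sphere-vs-sphere overlap test using squared distances.
// c1/r1: the bone-transformed bounding volume, c2/r2: the ball.
bool spheresOverlap(Vec3 c1, float r1, Vec3 c2, float r2)
{
    float dx = c2.x - c1.x;
    float dy = c2.y - c1.y;
    float dz = c2.z - c1.z;
    float d2 = dx*dx + dy*dy + dz*dz;   // squared center distance
    float rsum = r1 + r2;
    return d2 <= rsum*rsum;             // overlap iff distance <= r1 + r2
}
```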
Is the above concept OK? Or maybe there is a much easier way to get collision structures and detection? You mentioned something about:
EDIT: I don't know how accurate you want the collision to be, but, once the impacted bounding volume has been determined, a relatively quick calculation of the minimum distance between the soccer-ball center point and the body-part points (transformed; no radius needed) would be even quicker. Use the square of (soccer-ball-center-point minus transformed-body-point) for comparison and avoid using square roots altogether.
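As I read that suggestion, it can be sketched like this (`ballCenter`, `ballRadius`, and `points` are assumed names): take the smallest squared distance from the ball center to the already-transformed body-part vertices and compare it with the squared ball radius, so `sqrt` never appears:

```cpp
#include <cfloat>
#include <vector>

struct Vec3 { float x, y, z; };

// Minimum squared distance from the ball center to a set of transformed
// body-part vertices; report contact when it is within the squared radius.
bool ballTouchesPoints(Vec3 ballCenter, float ballRadius,
                       const std::vector<Vec3>& points)
{
    float bestD2 = FLT_MAX;
    for (const Vec3& p : points) {
        float dx = p.x - ballCenter.x;
        float dy = p.y - ballCenter.y;
        float dz = p.z - ballCenter.z;
        float d2 = dx*dx + dy*dy + dz*dz;
        if (d2 < bestD2) bestD2 = d2;       // track the closest vertex
    }
    return bestD2 <= ballRadius * ballRadius;  // compare squares, no sqrt
}
```

The squared comparison is valid because both sides are non-negative, so `d <= r` and `d*d <= r*r` agree; the broad-phase bounding-volume test above narrows down which vertex set to run this on.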
However, I don't understand this, so could you make it clearer? I need enough accuracy that, to the user's eye, the ball visibly makes contact with the body. Sorry for the stupid questions, but I am implementing a game for the first time and am now at the collision-implementation stage, which I considered the most difficult task before starting. Even the ball physics, which I have already implemented and which works quite well, I consider an easier task.