Triangle collision detection

Started by
2 comments, last by Nene 18 years, 4 months ago
Hi everyone! My question is this: we want to calculate triangle collisions between two objects (before their vertices are sent to the API and transformed, of course) which sit at different positions in world space. How should we detect collisions between their triangles if the vertex data is in each object's local coordinates (straight from the 3D modelling tool)? Must we transform them ourselves (the way the API's pipeline would with the modelview matrix) into our own buffer before doing all these calculations? From Spain (Barcelona), thank you very much.
What's all this about transforming? The API transforms the vertices you send it so that they project onto a two-dimensional screen. If your triangles are already two-dimensional, you have nothing to worry about. If they are three-dimensional, you still have nothing to worry about. Render the vertices as you normally would. The transformation applied by the API has nothing to do with the actual integrity of the mesh's vertices. The only case where transformations would affect things is when you are translating and rotating objects, but even then your triangles' vertices are not directly affected; they are affected indirectly through the rendering pipeline. Look up 'rendering pipeline' for more information about what I'm describing. The vertices themselves are not modified; when rendered, they go through the proper transformations until they fit onto the 2D screen. Anyway, if you want to detect collision between two triangles, use a collision detection algorithm such as spherical (bounding-sphere) collision and test with the original vertices of your mesh. You shouldn't be applying any transformations to your model's vertices directly! You should be performing transformations on them through the pipeline (i.e. world, view, projection).
Take back the internet with the most awesome browser around, Firefox
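As a concrete illustration of the sphere test mentioned above, here is a minimal C++ sketch; Vec3 and Sphere are hypothetical types made up for the example, not part of any particular API or library.

```cpp
// Hypothetical minimal types made up for this example.
struct Vec3 {
    float x, y, z;
};

struct Sphere {
    Vec3  center;  // centre in whatever space you are testing in
    float radius;
};

// Two spheres intersect when the distance between their centres is no
// greater than the sum of their radii; comparing squared distances
// avoids the square root.
bool SpheresIntersect(const Sphere& a, const Sphere& b)
{
    const float dx = a.center.x - b.center.x;
    const float dy = a.center.y - b.center.y;
    const float dz = a.center.z - b.center.z;
    const float distSq    = dx * dx + dy * dy + dz * dz;
    const float radiusSum = a.radius + b.radius;
    return distSq <= radiusSum * radiusSum;
}
```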
I think what the OP is saying is this. He has two models, defined of course in local space. Each has a transformation applied to it that places it in world space. He wants to perform per-triangle collision detection on the models, but in order to do so it would seem that the triangles of each model must be (temporarily) transformed into world space. The API will be of little use here, as the geometry as transformed within the pipeline isn't necessarily readily available.
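To make the "temporarily transformed into world space" step concrete, here is a rough C++ sketch. Vec3, Mat4, and Triangle are assumptions made for the example, and the matrix layout follows OpenGL's column-major convention; adapt it to whatever math types you actually use.

```cpp
// Hypothetical types made up for this example. Column-major layout:
// element (row, col) of the matrix is stored at m[col * 4 + row].
struct Vec3 { float x, y, z; };

struct Mat4 { float m[16]; };

struct Triangle { Vec3 v[3]; };

// Transform a point (implicit w = 1) from the model's local space into
// world space using the model's world transform.
Vec3 TransformPoint(const Mat4& world, const Vec3& p)
{
    Vec3 r;
    r.x = world.m[0] * p.x + world.m[4] * p.y + world.m[8]  * p.z + world.m[12];
    r.y = world.m[1] * p.x + world.m[5] * p.y + world.m[9]  * p.z + world.m[13];
    r.z = world.m[2] * p.x + world.m[6] * p.y + world.m[10] * p.z + world.m[14];
    return r;
}

// A triangle is just its three vertices, so transform each one.
Triangle TransformTriangle(const Mat4& world, const Triangle& t)
{
    Triangle out;
    for (int i = 0; i < 3; ++i)
        out.v[i] = TransformPoint(world, t.v[i]);
    return out;
}
```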

I don't have a definite answer, but I will ask, are you sure you need per-triangle coldet? It may be that you do, but it's often quite possible to get by with simpler alternatives.

If you are going to do per-triangle, one shortcut you can take is to create a single matrix that is the product of the transform of one object with the inverse transform of the other object. You can then use this matrix to transform the geometry of one of the models directly into the local space of the other. This should be considerably more efficient than simply transforming both models into world space and performing the tests there.
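A sketch of that shortcut, reusing the hypothetical Mat4, Triangle, and TransformTriangle from the earlier sketch. Inverse(), operator*, and TriTriIntersect() stand in for whatever your math library and triangle-triangle test (e.g. Moller's interval test) provide, so treat this as an outline rather than drop-in code.

```cpp
#include <vector>

// Assumed to exist elsewhere: a 4x4 inverse and matrix product from your
// math library, and a triangle-triangle intersection test of your choice.
Mat4 Inverse(const Mat4& m);
Mat4 operator*(const Mat4& a, const Mat4& b);
bool TriTriIntersect(const Triangle& a, const Triangle& b);

// Build the matrix that maps object A's local space directly into object
// B's local space: first into world space via worldA, then back out of
// world space via the inverse of worldB.
Mat4 BuildAToB(const Mat4& worldA, const Mat4& worldB)
{
    return Inverse(worldB) * worldA;
}

// Test A's triangles against B's triangles entirely in B's local space;
// only A's geometry is transformed, B's vertices are used as-is.
bool ModelsIntersect(const std::vector<Triangle>& trisA,
                     const std::vector<Triangle>& trisB,
                     const Mat4& worldA, const Mat4& worldB)
{
    const Mat4 aToB = BuildAToB(worldA, worldB);
    for (const Triangle& ta : trisA)
    {
        const Triangle taInB = TransformTriangle(aToB, ta);
        for (const Triangle& tb : trisB)
            if (TriTriIntersect(taInB, tb))
                return true;
    }
    return false;
}
```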

The next step would probably be a bounding volume tree of some sort, such as spheres, AABBs, or OBBs. With a well-implemented system of this type you may be able to reduce the number of potentially intersecting triangles to the point where the cost of transforming the triangles themselves is not too big a factor.
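For the bounding-volume idea, the axis-aligned box case is about as cheap as a rejection test gets. A minimal sketch, with the AABB layout again being an assumption made for the example:

```cpp
// Hypothetical axis-aligned bounding box stored as min/max corners in a
// common space (e.g. one object's local space after the relative
// transform above).
struct AABB {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

// Two AABBs overlap only if their extents overlap on all three axes.
// Tree nodes whose boxes do not overlap can be rejected without ever
// touching the triangles stored beneath them.
bool AABBsOverlap(const AABB& a, const AABB& b)
{
    if (a.maxX < b.minX || b.maxX < a.minX) return false;
    if (a.maxY < b.minY || b.maxY < a.minY) return false;
    if (a.maxZ < b.minZ || b.maxZ < a.minZ) return false;
    return true;
}
```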
It was only hypothetical; I don't actually intend to do it, but I was wondering what would happen if I wanted to do something like this. Thanks for your answers :)

This topic is closed to new replies.
