Using Vertex Shaders and keeping Bounding Volumes up to date

I'm using a vertex shader that makes meshes appear bigger by translating every vertex along the direction of its normal. The problem is that I cannot keep the bounding volumes of the meshes up to date, because every vertex gets its own unique transformation, so no single transformation can be applied to the bounding volume.

The same happens when I animate a skinned mesh with my animation shader. This shader is very similar to the one used in the MultiAnimation sample of the DirectX SDK: it transforms each vertex by the bones' offset matrices and then applies the bone matrices. The bounding volume doesn't end up in the right place, because the vertices have been transformed by the shader, and again I cannot apply a single transformation to the bounding volume, since each vertex is transformed in its own way, influenced by several bones. Is there any solution to this?
The bounding volume doesn't need to undergo the same transformation as the vertices, it only has to fit the vertices after the transformation. If you grow your meshes according to a known equation/pattern, you can pre-calculate the bounding volumes for them at various stages.
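For example, for an "inflate along the normal" shader with unit-length normals, every vertex moves exactly the offset distance, so growing a bounding sphere's radius by that same distance is guaranteed to contain the result. A minimal C++ sketch (the BoundingSphere struct and inflateAmount name are made up for illustration):

#include <d3dx9math.h>

// Hypothetical bounding-sphere type; any center-plus-radius pair works.
struct BoundingSphere
{
    D3DXVECTOR3 center;
    float       radius;
};

// The shader moves each vertex exactly 'inflateAmount' units along its
// unit normal, so no point ends up farther than that outside the
// original sphere. Growing the radius by the same amount is therefore
// conservative but never too small.
BoundingSphere GrowForInflateShader(const BoundingSphere& original,
                                    float inflateAmount)
{
    BoundingSphere grown = original;
    grown.radius += inflateAmount;
    return grown;
}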

As for skinned meshes, you can do a pre-processing step where you generate every frame of animation and create a bounding volume for it, then you can store the volume that is the union of all frame volumes. However, you can also go with another alternative - store the bounding volume that covers most of the mesh all the time. For example, with a human character, you can use a cylinder that's slightly wider than the body itself, so that it'd cover the head, torso and legs in standing position. With this kind of bounding volume, you'll occasionally get the limbs to "penetrate" other geometry, but it shouldn't be that bad.
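A rough sketch of that pre-processing step, assuming you can pose the skeleton at an arbitrary time; ComputeFrameAABB is a hypothetical helper that returns the box of the CPU-skinned vertices for one pose:

#include <d3dx9math.h>

// Simple axis-aligned box.
struct AABB
{
    D3DXVECTOR3 min;
    D3DXVECTOR3 max;
};

// Hypothetical: poses the skeleton at time t, skins the vertices on the
// CPU and returns their bounding box.
AABB ComputeFrameAABB(float t);

// Union of the boxes of every sampled frame: one fixed volume the mesh
// never leaves during the whole animation. Requires numSamples >= 2.
AABB ComputeAnimationAABB(float duration, int numSamples)
{
    AABB result = ComputeFrameAABB(0.0f);
    for (int i = 1; i < numSamples; ++i)
    {
        AABB frame = ComputeFrameAABB(duration * i / (numSamples - 1));
        D3DXVec3Minimize(&result.min, &result.min, &frame.min);
        D3DXVec3Maximize(&result.max, &result.max, &frame.max);
    }
    return result;
}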

Thanks for the advice.

For the skinned models, I create several levels of bounding volumes. For example, for a human character I have one bounding volume for each hand, one for the head, one for the legs, and so on. Then I unite all the volumes to create a bigger one that covers all the low-level bounding volumes.
When I check for a collision, the first bounding volume I test is the high-level one. If a collision is detected, I start checking against the inner, low-level bounding volumes. Up to this step I can do it the way you suggested.
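In code, that two-level pass might look like this sketch (the Volume interface and all names here are hypothetical; Intersects() stands in for whatever sphere/box overlap tests are used):

#include <vector>

// Hypothetical volume interface.
struct Volume
{
    virtual bool Intersects(const Volume& other) const = 0;
    virtual ~Volume() {}
};

struct CharacterBounds
{
    const Volume*              topLevel; // encloses the whole character
    std::vector<const Volume*> limbs;    // low-level, per-limb volumes
};

// Coarse test first; only on a hit do we pay for the per-limb tests.
bool TestCharacter(const CharacterBounds& bounds, const Volume& obstacle,
                   std::vector<const Volume*>& hitLimbs)
{
    if (!bounds.topLevel->Intersects(obstacle))
        return false; // most queries end here

    for (size_t i = 0; i < bounds.limbs.size(); ++i)
        if (bounds.limbs[i]->Intersects(obstacle))
            hitLimbs.push_back(bounds.limbs[i]);

    return !hitLimbs.empty();
}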

But what about per-poly collision detection after checking the bounding volumes? How can I keep the position and the orientation of the triangles up to date? Do I have to keep a copy of the vertices and transform them along with the animation?
Quote:Original post by George109
But what about per-poly collision detection after checking the bounding volumes? How can I keep the position and the orientation of the triangles up to date? Do I have to keep a copy of the vertices and transform them along with the animation?

If you don't store a separate collision mesh (e.g. a simplified version), then you'd have to do some software skinning to get the transformed triangles.
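A minimal sketch of such software skinning, assuming the usual four-influence vertex layout (the SkinVertex struct is hypothetical; boneMatrices would hold the same combined offset-times-bone matrices the shader receives):

#include <d3dx9math.h>

// Hypothetical CPU-side vertex: position plus up to four bone
// influences, matching what a typical skinning vertex shader consumes.
struct SkinVertex
{
    D3DXVECTOR3 position;      // bind-pose position
    int         boneIndex[4];
    float       boneWeight[4]; // weights sum to 1
};

// Blend the vertex by the same matrices the shader uses
// (each entry = bone offset matrix * bone world matrix).
D3DXVECTOR3 SkinOnCPU(const SkinVertex& v, const D3DXMATRIX* boneMatrices)
{
    D3DXVECTOR3 result(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < 4; ++i)
    {
        if (v.boneWeight[i] <= 0.0f)
            continue;
        D3DXVECTOR3 p;
        D3DXVec3TransformCoord(&p, &v.position,
                               &boneMatrices[v.boneIndex[i]]);
        result += p * v.boneWeight[i];
    }
    return result;
}

Running a triangle's three vertices through this gives you geometry you can feed to the per-poly tests, at the cost of skinning those vertices twice (once on the CPU, once on the GPU).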

Perhaps someone else has better ideas, though. We'll wait and see [smile]

What I do is offer 4 possibilities for skinned models:

1.) provide a bounding volume yourself, like a cylinder for a character, and move this cylinder with the character's position
2.) calculate bounds based on the skinned vertices, if you need accurate bounds (this requires skinning on the CPU)
3.) calculate bounds based on the collision meshes, which can be either rigid or deformable
4.) calculate bounds based on the bone/node world-space positions, which is relatively fast (see the sketch after this list)
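A sketch of option 4, assuming you already have the bones' world-space positions for the current pose; 'padding' is a hypothetical per-model constant covering how far the skin extends beyond the joints:

#include <d3dx9math.h>
#include <vector>

struct AABB
{
    D3DXVECTOR3 min;
    D3DXVECTOR3 max;
};

// Fit a box around the bone/node world positions, then pad it because
// the skin extends some distance beyond the joints. Assumes at least
// one bone.
AABB BoundsFromBones(const std::vector<D3DXVECTOR3>& boneWorldPositions,
                     float padding)
{
    AABB box;
    box.min = box.max = boneWorldPositions[0];
    for (size_t i = 1; i < boneWorldPositions.size(); ++i)
    {
        D3DXVec3Minimize(&box.min, &box.min, &boneWorldPositions[i]);
        D3DXVec3Maximize(&box.max, &box.max, &boneWorldPositions[i]);
    }
    const D3DXVECTOR3 pad(padding, padding, padding);
    box.min -= pad;
    box.max += pad;
    return box;
}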

In your specific case you could precalculate the bounds based on the vertices before passing them to the vertex shader. Then you could widen the AABB (in all dimensions) by the maximum distance you offset your vertices by. It won't always be 100% accurate, as it might be a bit too big along one or two of the axes, but that shouldn't matter.
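That widening step is just a uniform pad of the pre-shader box. A sketch ('maxOffset' is a made-up name for whatever constant the shader offsets by):

#include <d3dx9math.h>

struct AABB
{
    D3DXVECTOR3 min;
    D3DXVECTOR3 max;
};

// Widen the pre-shader box in all dimensions by the largest distance
// the vertex shader can push a vertex.
AABB WidenForShader(const AABB& preShaderBounds, float maxOffset)
{
    const D3DXVECTOR3 pad(maxOffset, maxOffset, maxOffset);
    AABB widened = preShaderBounds;
    widened.min -= pad;
    widened.max += pad;
    return widened;
}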

If you also transform your model by some matrix, you might want to apply this same transformation to the resulting min and max of the bounding box and create a new bounding box from these points. Not sure if this will work though, there might be better ways, but I'm a bit braindead now :))
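The doubt is justified: transforming only min and max holds up for pure translation and positive scaling, but under rotation the re-fitted box can be wrong. A common, still cheap alternative is to transform all eight corners and fit a fresh box around them; a sketch:

#include <d3dx9math.h>

struct AABB
{
    D3DXVECTOR3 min;
    D3DXVECTOR3 max;
};

// Transform all eight corners of the box and re-fit an axis-aligned
// box around the results; this stays correct under rotation as well.
AABB TransformAABB(const AABB& box, const D3DXMATRIX& world)
{
    AABB result;
    for (int i = 0; i < 8; ++i)
    {
        D3DXVECTOR3 corner((i & 1) ? box.max.x : box.min.x,
                           (i & 2) ? box.max.y : box.min.y,
                           (i & 4) ? box.max.z : box.min.z);
        D3DXVECTOR3 p;
        D3DXVec3TransformCoord(&p, &corner, &world);
        if (i == 0)
            result.min = result.max = p;
        else
        {
            D3DXVec3Minimize(&result.min, &result.min, &p);
            D3DXVec3Maximize(&result.max, &result.max, &p);
        }
    }
    return result;
}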

Going for dinner now. Good luck!

As for the mesh-scaling vertex shader: assuming your normals are normalized, every vertex moves by the same fixed distance, so the bounding volume has to grow by at least that much.

What this means is that if you grow your bounding box, sphere, or whatever by the same amount as the mesh, your new bounding volume is guaranteed to surround the new mesh, though it might end up somewhat larger than necessary. If it being a bit larger isn't a problem, I'd use this very fast method.


As for the smaller volumes, assuming each of the limbs' volumes is simple, it shouldn't be difficult or CPU-intensive to calculate a new bounding volume on the fly. Say you have a bunch of bounding boxes for the limbs in their different orientations: you'd have to transform very little in software just to get the new bounding boxes, and you can leave transforming the actual vertices to the GPU. I think this option is worth considering.
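As a sketch of how cheap this is: if each limb box lives in its bone's local space, refitting it each frame costs only eight corner transforms per limb (the LimbVolume struct is a made-up name for illustration):

#include <d3dx9math.h>
#include <vector>

struct AABB
{
    D3DXVECTOR3 min;
    D3DXVECTOR3 max;
};

// Hypothetical per-limb record: a box expressed in its bone's local space.
struct LimbVolume
{
    AABB localBox;
    int  boneIndex;
};

// Refit every limb box for the current pose: eight corner transforms
// per limb, trivially cheap next to skinning the whole mesh in software.
void RefitLimbs(const std::vector<LimbVolume>& limbs,
                const std::vector<D3DXMATRIX>& boneWorldMatrices,
                std::vector<AABB>& worldBoxes)
{
    worldBoxes.resize(limbs.size());
    for (size_t i = 0; i < limbs.size(); ++i)
    {
        const AABB& b = limbs[i].localBox;
        const D3DXMATRIX& m = boneWorldMatrices[limbs[i].boneIndex];
        for (int c = 0; c < 8; ++c)
        {
            D3DXVECTOR3 corner((c & 1) ? b.max.x : b.min.x,
                               (c & 2) ? b.max.y : b.min.y,
                               (c & 4) ? b.max.z : b.min.z);
            D3DXVECTOR3 p;
            D3DXVec3TransformCoord(&p, &corner, &m);
            if (c == 0)
                worldBoxes[i].min = worldBoxes[i].max = p;
            else
            {
                D3DXVec3Minimize(&worldBoxes[i].min, &worldBoxes[i].min, &p);
                D3DXVec3Maximize(&worldBoxes[i].max, &worldBoxes[i].max, &p);
            }
        }
    }
}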

Good luck.

