Adaline

How to compute the bounding volume for an animated (skinned) mesh?


Hello

 

An example, maybe:

Usually a human model forms a T with its arms in model space.

Then I compute the bounding volume using this pose.

But if, within an animation, the arms point upwards, they'll move outside the bounding volume and be frustum-culled even though they are visible...

 

How can I avoid this, please? (Sorry for the poor explanation; it's quite hard for me to explain this in English.)

 

Thanks


You could store local-space bounding boxes or bounding spheres for the bones in your model asset format. To calculate them, find out which vertices each bone influences with a sufficiently large weight ("sufficient" is an arbitrary value for you to choose), and compute the bounding box/sphere from those vertices. Remember to transform them into the bone's local space!

 

At runtime, transform the bone bounding boxes/spheres by the bones' world transforms after applying animation, and merge them together to get the model's final world-space bounding box. The individual bone bounds will also be useful for per-bone collision detection ("hitboxes").
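A minimal sketch of that per-bone scheme (in Python for brevity; the data layout and the 0.25 weight threshold are illustrative assumptions, and a real engine would do this with bone matrices):

```python
# Precompute a local-space AABB per bone from the vertices it influences,
# then at runtime transform each box's corners by the animated bone
# transform and merge. The 0.25 weight threshold is an arbitrary choice.

def merge_aabb(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return (tuple(min(x, y) for x, y in zip(amin, bmin)),
            tuple(max(x, y) for x, y in zip(amax, bmax)))

def aabb_from_points(points):
    xs, ys, zs = zip(*points)
    return ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))

def precompute_bone_aabbs(vertices, weights, num_bones, threshold=0.25):
    """vertices[i] is a position already in the bone's local space;
    weights[i] is a list of (bone_index, weight) pairs."""
    per_bone = [[] for _ in range(num_bones)]
    for v, vw in zip(vertices, weights):
        for bone, w in vw:
            if w >= threshold:
                per_bone[bone].append(v)
    return [aabb_from_points(pts) if pts else None for pts in per_bone]

def world_aabb(bone_aabbs, bone_transforms):
    """bone_transforms[b] maps a bone-local point to world space."""
    result = None
    for box, xform in zip(bone_aabbs, bone_transforms):
        if box is None:
            continue
        mn, mx = box
        corners = [(x, y, z) for x in (mn[0], mx[0])
                             for y in (mn[1], mx[1])
                             for z in (mn[2], mx[2])]
        world_box = aabb_from_points([xform(c) for c in corners])
        result = world_box if result is None else merge_aabb(result, world_box)
    return result
```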

 

This operation is fairly CPU-intensive, so not all engines do it; Unity and OGRE, for example, don't. But if you're already splitting your animation update across multiple threads, the bounding-box calculation can also be part of the threaded update.

 

By the way, if you were doing software skinning (I hope you aren't), you could also simply build an updated bounding box from the vertices as you transform them, without having bone bounding boxes at hand. This is what Horde3D does, or at least was doing some years ago.
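The software-skinning variant could look something like this (a Python sketch; `transform` stands in for the actual per-vertex skinning, which is an assumption for illustration):

```python
# Since software skinning already touches every vertex on the CPU,
# grow the bounding box as you transform, at almost no extra cost.

def skin_and_bound(vertices, transform):
    """Returns (skinned_vertices, aabb); transform is a stand-in for
    the real per-vertex blend of bone matrices."""
    mn = [float('inf')] * 3
    mx = [float('-inf')] * 3
    out = []
    for v in vertices:
        p = transform(v)
        out.append(p)
        for i in range(3):
            mn[i] = min(mn[i], p[i])
            mx[i] = max(mx[i], p[i])
    return out, (tuple(mn), tuple(mx))
```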

Edited by AgentC


We have a couple of different solutions to this. In the past, if the set of animations for a particular character was small (and known up front), we would compute the bounding box of the transformed mesh during a pre-process step and store that information.

These days, we just pick an arbitrary scale factor relative to the original T-pose. For example, we assume that the actual bounds of the mesh will never be more than 3x the original bounds, and use that for culling.

Doing something with the bones themselves would be better, obviously. You could probably find a minimal set of bones that always determines the extents of the full model (intermediate bones along appendages aren't going to push the mesh out further than the hand/foot bones, for example). You could also store the maximum distance to a weighted vertex from each bone, so you know how big the virtual spheres around each bone need to be.
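The "maximum distance to a weighted vertex" idea might be sketched like this (Python for brevity; the data layout is an assumed one, and it ignores bone scaling):

```python
# For each bone, store the largest distance from the bone's bind-pose
# position to any vertex it influences. At runtime, a sphere of that
# radius around the animated bone position bounds those vertices
# (assuming no scaling is applied).
import math

def bone_sphere_radii(bone_positions, vertices, weights):
    """weights[i] is a list of (bone_index, weight) pairs;
    returns one conservative radius per bone."""
    radii = [0.0] * len(bone_positions)
    for v, vw in zip(vertices, weights):
        for bone, w in vw:
            if w > 0.0:
                d = math.dist(v, bone_positions[bone])
                radii[bone] = max(radii[bone], d)
    return radii
```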


Thank you for your answers.

In the end, I think I will explicitly define extra geometry in the asset to describe the bounding volume (only for animated meshes).

The modeler will have to create it so that it bounds all the poses the mesh can take.

Edited by Tournicoti


Calculate the bounding volume after the geometry data has been formed and manipulated. If you're basing the bounding volume on the 'at rest' pose, or whatever your first keyframe is, then you're going to lose accuracy in the space filled by the volume.


Calculate a bounding box for each joint in the body, in the orientation of that joint, based only on the vertices that are skinned to that joint.

Then at runtime, transform the joint-oriented bounding boxes into world space, and take the min and max over all of those boxes.

 

There's no real reason to have an artist make the bounding volumes.


Calculate the bounding volume after the geometry data has been formed and manipulated. If you're basing the bounding volume on the 'at rest' pose, or whatever your first keyframe is, then you're going to lose accuracy in the space filled by the volume.

In fact, I calculate the AABB in model space, convert it to an OBB in order to transform it into world space, then calculate the final world-space AABB.

 

Calculate a bounding box for each joint in the body, in the orientation of that joint, based only on the vertices that are skinned to that joint.

Then at runtime, transform the joint-oriented bounding boxes into world space, and take the min and max over all of those boxes.

 

There's no real reason to have an artist make the bounding volumes.

Thank you, this is probably the best way, but before implementing it I'd rather use simpler methods (i.e. the artist provides the model-space AABB). But your method is the best, and I will try it later.

Edited by Tournicoti


Another way:

 

Compute the bounding box of the positions of all the skinning bones (those that aren't scaled to zero). Inflate the resulting box by a precomputed amount (scaled according to the maximum scale applied to any skinning bone).

 

Advantage: cheaper to compute on the CPU (compared to the per-bone bounding box method); also no extra per-bone data.

 

Disadvantage: a less optimal fit.
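A sketch of this cheaper variant (Python for illustration; here the precomputed inflation amount is simply passed in as `margin` rather than derived from the asset):

```python
# Take the AABB of the animated bone positions themselves and inflate it
# by a precomputed margin (e.g. the largest bone-to-vertex distance found
# offline), trading a looser fit for a much cheaper runtime computation.

def inflated_bone_aabb(bone_positions, margin):
    xs = [p[0] for p in bone_positions]
    ys = [p[1] for p in bone_positions]
    zs = [p[2] for p in bone_positions]
    return ((min(xs) - margin, min(ys) - margin, min(zs) - margin),
            (max(xs) + margin, max(ys) + margin, max(zs) + margin))
```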


 

Calculate the bounding volume after the geometry data has been formed and manipulated. If you're basing the bounding volume on the 'at rest' pose, or whatever your first keyframe is, then you're going to lose accuracy in the space filled by the volume.

In fact, I calculate the AABB in model space, convert it to an OBB in order to transform it into world space, then calculate the final world-space AABB.

 

Why are you calculating two AABBs?

 

The AABB in model space should be volumetrically the same as the OBB, albeit the OBB is translated and rotated to match the facing of the object. The AABB computed in world space from the OBB will have large areas of 'empty' space, thereby negating the advantage of calculating an OBB. If your intent is to only ever use axis-aligned boxes, then calculating an AABB straight up would save you a tonne of time, and could be done by calculating the max x, y, z after the scale, translation, and rotation for any step in the animation.

 

Just to add something: the image below shows what I expect your concern is.

 

[attachment=18722:boundingboxexample.png]

 

The left-hand image is the model in the da Vinci pose (the T pose); the right-hand image is the model with its arms up.

 

Your concern, as I understand it, is that your bounding box (the black boxes) is not changing from frame to frame.

Am I on the right track?

Edited by Stragen


Thank you for your answers.

 


Why are you calculating two AABBs?

 

The AABB in model space is precomputed. It is an "axis-aligned OBB" in model space, ready for the world transform.

 

At runtime: model-space OBB -> world transform -> world-space OBB -> world-space AABB

 

EDIT: But maybe you mean that I can just transform the two vertices (min & max) of the AABB into world space and compute the new AABB from those two transformed vertices, instead of from the eight transformed vertices of the OBB?
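For reference, a sketch of the corner-based approach being discussed (Python for brevity; `transform` is a stand-in for the model-to-world transform). Note that when the transform contains a rotation, all eight corners are needed: an AABB built from only the two transformed min/max corners can miss geometry.

```python
# Build a world-space AABB from a model-space AABB by transforming all
# eight corners of the box and taking component-wise min/max. With a
# rotation present, the two original min/max corners alone are not enough.
import math

def world_aabb_from_model_aabb(mn, mx, transform):
    """mn/mx are the model-space AABB extremes; transform maps a
    model-space point to world space."""
    corners = [(x, y, z) for x in (mn[0], mx[0])
                         for y in (mn[1], mx[1])
                         for z in (mn[2], mx[2])]
    pts = [transform(c) for c in corners]
    xs, ys, zs = zip(*pts)
    return ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))
```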

Edited by Tournicoti
