finalcut

Facial Animation


I want to do a little project on facial animation. I've found several papers on the net covering the theory, and I was wondering if anyone knows any source (site, book) covering implementation too (preferably in Direct3D). I've seen some examples that use interpolation between different poses, though I was interested in having control points on the face and applying some deformation algorithm.

AFAIK the two commonly used methods are blend shapes and cluster-based deformation, both of which you mentioned. It's worth noting that blend shapes are more flexible than simply linearly blending from one expression to the next: many static blend shapes can be combined together to produce almost any expression. You should download the Source SDK; it contains the facial animation tool they used in HL2, and shows (what I believe to be) multiple blend shapes.
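To make the "combine many static shapes" idea concrete, here is a minimal sketch (in Python, purely illustrative; the function name and data layout are my own). Each blend shape stores per-vertex deltas from the neutral face, and any expression is the neutral pose plus a weighted sum of those deltas:

```python
def blend_shapes(neutral, shapes, weights):
    """Combine several blend shapes into one expression.

    neutral: list of (x, y, z) vertex positions for the neutral face.
    shapes:  list of shapes, each a list of per-vertex (dx, dy, dz) deltas
             from the neutral pose (same vertex count as neutral).
    weights: one weight per shape, typically in [0, 1].
    """
    result = [list(v) for v in neutral]
    for deltas, w in zip(shapes, weights):
        for i, d in enumerate(deltas):
            for axis in range(3):
                result[i][axis] += w * d[axis]
    return [tuple(v) for v in result]
```

Because the shapes are stored as deltas, a half-strength "smile" and a full-strength "brow raise" can be active at once, which is what gives blend shapes their flexibility over a straight A-to-B interpolation.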
Cluster based animation is similar to the bones animation used in fullbody character animation: the mesh is weighted to a number of animation nodes (joints/bones/clusters/control points) and the nodes are then used to drive the vertices of the mesh by blending together the positions of a vertex, transformed by each of the nodes it is weighted to, using the weights as coefficients.
I have had success in the past using purely translation-based clusters (which I call control points) to drive facial animation. Clusters with rotation as well are preferable (to me at least), as fewer clusters can be used to capture the same expression: a lot of movement on the face has a rotational quality to it, especially the mouth and surrounding area.
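A sketch of the translation-only cluster scheme described above (illustrative Python; names are mine). With weights per vertex summing to 1, blending the cluster-transformed positions reduces to adding the weighted cluster translations to the rest position:

```python
def deform(vertices, clusters, vertex_weights):
    """Deform a mesh with translation-only clusters (control points).

    vertices:       list of (x, y, z) rest positions.
    clusters:       list of (tx, ty, tz) cluster translations.
    vertex_weights: for each vertex, a list of (cluster_index, weight)
                    pairs; the weights for one vertex should sum to 1.
    """
    out = []
    for v, influences in zip(vertices, vertex_weights):
        p = list(v)
        for ci, w in influences:
            t = clusters[ci]
            for axis in range(3):
                p[axis] += w * t[axis]
        out.append(tuple(p))
    return out
```

Adding rotation means storing a full rigid transform per cluster and blending the transformed vertex positions instead, exactly as in full-body skinning.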
I have not seen any code samples specific to facial animation around, but the theory is simple enough to implement. Look for full-body character animation code samples, as the method is the same. I think the old Quake games used blend shapes for full-body character animation, and there are plenty of full-body bones-based animation samples floating around (e.g. one of the GPU Gems books discusses skinning characters on the GPU).

Cluster-based methods (basically skinning) are generally more useful for games IMHO, mainly because you can use the same skeletal rig to drive the facial animations on different models (assuming they are all rigged to a single skeleton). The problem with blend shape approaches is that they tend to require you to remodel every single facial pose for every single model. The other advantage of clusters is that they do not require any additional work over your standard skinned meshes.

The problem with the skeletal model is once you start wanting fine details, you end up with an absurd number of bones, each influencing just a few vertices, and an absurdly complex animation rig. It's also really difficult for artists to handle, when all they want is to move THIS vertex OVER HERE - they have to fiddle around with multiple bones without the face exploding in the process.

The solution is to do both skeletal and morph-based stuff at the same time (you do vertex morphs in the model's rest-pose space, then you apply skeletal animation). The idea is that you use a skeletal system to model the "bones" of the face - the jaw (obviously) and the major moving parts such as lips, tongue, eyebrows, voicebox, major cheek movement. Then you use vertex-morph-based techniques to control the fine details - the eye crinkles, the exact shape of the muscles around the mouth, etc. The nice thing about this is that once the major motions have been modelled by the skeletal animation, the vertex-morph stuff just has to apply "deltas" from it. These are much smaller movements, and they compress far better than putting the large movements directly into the vertex morphs.
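The ordering matters: the morph deltas are blended in rest-pose space first, and only then does skeletal animation transform the result. A minimal single-bone sketch of that pipeline (illustrative Python in 2D; names and signature are my own, not from any engine):

```python
import math

def animate_vertex(rest, morph_deltas, morph_weights, bone_angle, bone_trans):
    """Apply morphs in rest-pose space, then skin with one rigid bone.

    rest:          (x, y) rest-pose position of the vertex.
    morph_deltas:  list of (dx, dy) deltas, one per active morph target.
    morph_weights: one weight per morph target.
    bone_angle:    bone rotation in radians; bone_trans: (tx, ty).
    """
    # 1. Blend the small per-vertex deltas in rest-pose space.
    x, y = rest
    for (dx, dy), w in zip(morph_deltas, morph_weights):
        x += w * dx
        y += w * dy
    # 2. Apply the bone's rigid transform (rotation then translation).
    c, s = math.cos(bone_angle), math.sin(bone_angle)
    return (c * x - s * y + bone_trans[0],
            s * x + c * y + bone_trans[1])
```

Because step 1 never sees the bone transform, the same small deltas stay valid whatever the skeleton (or IK on it) is doing, which is exactly why they compress well and blend sensibly.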

So you get the benefits of both. From the skeletal system, you get the large movements in a form that is fairly simple to retarget to different shapes of face, you get pretty sensible blends even between two very different expressions, and you get good compression of those large movements. You also get a simple cheap low-LOD version you can use for more distant models.

But for the fine detail, the deltas are encoded per-vertex, and the artists still get their exact control. Because the deltas are small, they compress really well. And because they're always relative to the major movements (done by the skeleton), you don't get strange lerping results between different expressions, even when they're quite different. All blending of vertex deformation is done in the rest pose of the model, before skeletal animation, so it "makes sense".

You can also use this trick with clothing. The skeletal stuff does the person's skeleton, and then the vertex morphing just controls how the cloth hangs and wrinkles. The other nice thing is you can do standard IK on the skeleton, and the vertex morphing still deltas off that, even though it has no idea the IK is happening. The cloth might hang in a way that isn't quite perfect, but it's still just "cloth wrinkling", it's not lerping through the arms or anything gross like that.

<mini-pimp>Granny 3D has both these built in, including the delta stuff.</mini-pimp>
