
Member Since 22 May 2013
Offline Last Active Jun 27 2016 04:39 PM

Posts I've Made

In Topic: Animation-transitioning

20 December 2014 - 04:07 PM

It seems you are using bone transformation matrices in world or model space, as Ashaman73 mentioned. Professional animation systems store keyframe matrices relative to the bind pose (or some reference pose). During animation blending, the keyframe matrices are interpolated relative to that pose, and the final pose is then computed in model or world space before being passed to the skinned-mesh renderer.


Perhaps you are computing the blended pose in world or model space and adding it to the animation track, while the animation system you are using assumes the blended transformation is in local space, so it composes that local transform into the final world-space matrix.


For more accurate blending you can interpolate the position, rotation, and scale tracks separately, just as Buckeye suggested. The way you are interpolating will not produce bad results for scale and translation, as long as you interpolate between two valid matrices, but it can produce non-linear interpolation of rotations, because you are using LERP instead of SLERP.
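The per-track blending described above can be sketched like this. This is a minimal, engine-agnostic sketch; `lerp`, `slerp`, and `blend_pose` are illustrative names, and keys are assumed to be stored as position/rotation/scale relative to the bind pose:

```python
import math

def lerp(a, b, t):
    # component-wise linear interpolation of two vectors
    return [x + (y - x) * t for x, y in zip(a, b)]

def slerp(q1, q2, t):
    # spherical linear interpolation of unit quaternions (w, x, y, z)
    dot = sum(a * b for a, b in zip(q1, q2))
    if dot < 0.0:                 # take the shorter arc
        q2 = [-c for c in q2]
        dot = -dot
    if dot > 0.9995:              # nearly parallel: fall back to normalized lerp
        q = lerp(q1, q2, t)
        n = math.sqrt(sum(c * c for c in q))
        return [c / n for c in q]
    theta = math.acos(dot)
    s1 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s2 = math.sin(t * theta) / math.sin(theta)
    return [s1 * a + s2 * b for a, b in zip(q1, q2)]

def blend_pose(key_a, key_b, t):
    # each key is (position, rotation quaternion, scale), relative to bind pose
    pos = lerp(key_a[0], key_b[0], t)
    rot = slerp(key_a[1], key_b[1], t)
    scl = lerp(key_a[2], key_b[2], t)
    return pos, rot, scl
```

The blended (position, rotation, scale) can then be recomposed into a matrix and converted to model or world space for the renderer.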

In Topic: Havok for dynamic animation like Euphoria?

09 October 2014 - 04:11 AM

Hi eatmicco,


Havok Animation supports blending between physical and keyframed animation very well. It also supports pose matching, which helps you switch between physical and keyframed animation without any glitches. The Havok Animation SDK provides everything you need to combine keyframed animation with ragdolls. Euphoria, however, makes the physical contacts intelligent: as far as I know, it uses machine-learning techniques to control biomechanical forces and produce intelligent body collisions.


If you want intelligent body contacts, you will have to build a system on top of Havok's physical/keyframed animation blending. To find out more about that blending, search for the "Powered Ragdoll Controller" and "Rigid Body Ragdoll Controller" in the Havok Animation SDK or the Havok Animation Tool.

In Topic: Animated Parameterized Models

29 September 2014 - 02:03 PM

It seemed like that could somehow cause conflicts, like anomalies in the model. I can't think of a specific example and it's hard to describe, but it's just something I was afraid would happen.


Actually, you need to do a bit more to apply the differentials to the final model. Search for combining morph and skeletal animation and you'll find out more about it. It's a solved problem :)


The important thing is that you save the finalized, customized model in a single buffer. You might have many morph targets for customization, but you don't need them once the model is finalized. Keeping all the morph targets around during gameplay is not a good idea, since they can take a lot of memory, so save the finalized model and compute its differences from the base model instead.
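As a sketch of that idea (all names here are hypothetical, and vertices are plain [x, y, z] lists), customization blends the morph targets onto the base, and baking then reduces everything to one delta buffer:

```python
def apply_morph_targets(base, targets, weights):
    # customization step: blend weighted morph targets onto the base model
    out = [list(v) for v in base]
    for tgt, w in zip(targets, weights):
        for i in range(len(base)):
            for k in range(3):
                out[i][k] += w * (tgt[i][k] - base[i][k])
    return out

def bake_customization(base, final):
    # per-vertex delta of the finalized model from the base model;
    # after saving this single buffer, the individual morph targets
    # can be discarded and need not be loaded during gameplay
    return [[f[k] - b[k] for k in range(3)] for b, f in zip(base, final)]
```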

In Topic: Animated Parameterized Models

26 September 2014 - 10:23 AM

Hi myvraccount,


You should implement the parameterized face shapes with morph targets, which you might also know as blend shapes. With morph targets, the topology of the base model does not change: the order and structure of the vertex and index buffers stay the same, and only the vertex positions change. So, starting from a base model, you can move its nose vertices and save the result as one morph target, then change the base model's eye vertices and save that as another. With those two morph targets you can blend the base model and achieve a variety of results depending on the parameters. 3D modelers know how to make different variations of a base model with the same topology, so you just need to hire a good one ;)


You should not worry about animating the parameterized faces or models. As I mentioned, the topology of the mesh does not change with morph targets; only the vertex positions do. First apply the normal animation, with its skinning data, to the base model, and then apply the morph-target blending on top of it.


To apply the morph-target blending, you need to save the differential position of each vertex in an array. After you have finalized the parameterization of your character and customized it, save the parameterized model's vertex positions (in model space), compute each vertex's difference from the base model, and store those differences in an array or buffer. Then, after applying the normal animation to the base model, add that differential to each vertex. For example, say you have a character that can move his jaw with bones: first compute the base model's vertex positions from the bone animation, then add the differential positions to each vertex to get the final shape.
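The order described above, skin the base model first and then add the saved differentials, might be sketched as follows (illustrative names, not a specific engine API):

```python
def finalize_vertices(skinned_positions, deltas):
    # skinned_positions: base-mesh vertices after the normal skeletal
    #                    skinning pass (bone animation already applied)
    # deltas: saved per-vertex differences of the customized model
    #         from the base model, in model space
    return [[p[k] + d[k] for k in range(3)]
            for p, d in zip(skinned_positions, deltas)]
```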


The book Buckeye mentioned describes skinned meshes and morph blending in separate chapters, and it also explains how to combine skeletal and morph animation. So give it a try; it's a good book and answers the basic questions of animation programming very well.


Hope this helps.

In Topic: Quaternions and Animation Blending questions

13 July 2013 - 07:29 AM

q = nlerp(q1, q2, a);
t = t1*t2;
s = s1*s2;


Use Lerp for blending instead of the vector cross product:


t = Lerp(t1, t2, a);

s = Lerp(s1, s2, a);
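For completeness, a minimal sketch of that blending in Python; `nlerp` (normalized lerp) is a cheap approximation of SLERP that works well for small angles, and the shorter-arc check avoids blending the long way around. All names are illustrative:

```python
import math

def lerp(a, b, t):
    # component-wise linear interpolation of two vectors
    return [x + (y - x) * t for x, y in zip(a, b)]

def nlerp(q1, q2, t):
    # normalized lerp of unit quaternions (w, x, y, z)
    if sum(a * b for a, b in zip(q1, q2)) < 0.0:
        q2 = [-c for c in q2]        # negate to take the shorter arc
    q = lerp(q1, q2, t)
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]

def blend_keys(key1, key2, a):
    # each key is (rotation quaternion, translation, scale)
    q = nlerp(key1[0], key2[0], a)
    t = lerp(key1[1], key2[1], a)
    s = lerp(key1[2], key2[2], a)
    return q, t, s
```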