Animation question: One KFA, a tall skeleton and a short skeleton


Question for the experts here. I have a single keyframe animation (KFA) which rotates/translates a common set of bones to produce a biped-type animation (for example, clapping hands). I have two different skinned meshes with the same skeletons; the only difference is the distance between their bones in their bind positions, because one model is a human and the other is a giant (or a dwarf). The single KFA file has translations for the bones it animates baked into the bones' keyframes using a "human-sized" translation. What is the common technique for applying that single KFA to both skeletons?

If I simply accept the translations, then running the hand-clapping animation on a giant skeleton squashes the character to the height of a human during the animation and then returns them to their original height (assuming the giant has a different "IDLE" KFA than the human). Similarly for a dwarf model, the hand-clapping animation stretches the dwarf until it is as tall as the human during the clap, then returns the dwarf to its original size (again assuming a different dwarf "IDLE" KFA).

Of course I could just ignore the translations in the keyframes and use the bind-position translations instead, and this works for stationary KFAs (like hand-clapping). The issue still comes in when the KFA requires some movement of the biped root bone (Bip01), for example a sitting or falling animation designed for a human-sized model applied to a giant/dwarf. In the case of ignoring all animation translations, the motions look correct for all skeleton sizes, but for no skeleton size does animated movement look correct (for instance, all the skeletons will fall or sit at the same height as their bind-position root bone, which places them in mid-air instead of translating down to reach the sitting position).
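One approach worth sketching (an assumption on my part, not necessarily what any shipping engine does) is to keep the KFA's rotations but rescale its baked translations by how much longer or shorter each bone is on the target skeleton relative to the source skeleton. All names here are illustrative:

```cpp
#include <cmath>

// Minimal 3-vector for the sketch.
struct Vec3 {
    float x, y, z;
};

// Hypothetical helper: rescale a sampled animation translation so it fits a
// target skeleton whose bind-pose bone offsets differ only in length from the
// source ("human") skeleton the KFA was authored for.
//
// animT    - translation sampled from the KFA for this bone
// srcBindT - the bone's bind translation on the skeleton the KFA targets
// dstBindT - the same bone's bind translation on the skeleton we animate
Vec3 RetargetTranslation(const Vec3& animT,
                         const Vec3& srcBindT,
                         const Vec3& dstBindT)
{
    auto len = [](const Vec3& v) {
        return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    };

    const float srcLen = len(srcBindT);
    if (srcLen < 1e-6f)            // root or zero-length bone: pass through
        return animT;

    // Scale factor: how much longer/shorter this bone is on the target rig.
    const float s = len(dstBindT) / srcLen;
    return Vec3{ animT.x * s, animT.y * s, animT.z * s };
}
```

Because the root bone has no parent offset to derive a ratio from, the root's translation would need its own scale factor (overall character height ratio, say), which also addresses the sitting/falling case.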

Hey Steve!

The thing to remember is that vertices transform relative to the joints they are assigned to. If you have a human-sized rig, vertex groups assigned to any joint will try to conform to the space around the rig, causing... ahem, unusual effects.

If you want correct transformations for all of your dynamic meshes, you will need a different rig for every different body structure, unless you get complicated and try to have joints "home in" to where they should be on other meshes.

If you look at WoW, all the races share members of a similar build, and the developers refer back to a single rig many times. For example, you don't see a really fat Night Elf or a scrawny Orc.

Cheers!
-Zyro

Ha ha, funny videos of those Pokemon!

Yes, I know quite well what vertices do in the skinned index-blending implementation. The question I have is whether I can somehow normalize the artist's translations of the bones and then account for the different heights of the skeletons while interpolating bone matrix positions per frame.

And as to your WoW example, there is no real reason not to have a fat model or a skinny model, provided that the skinned mesh is different for each one with respect to which vertices are influenced by which bones and by how much (to avoid candy-wrapper or interpenetration artifacts). As long as the model is roughly the same *height* as the one for which the keyframe animations have the translations baked in, there should be no issue. In other words, Blizzard could still have hundreds of biped keyframe anim sequences and put either a fat orc or a thin orc on any of them and it would look the same, but as soon as they put a hobbit on one of them things would look wrong (unless their artists modelled the hobbit as exactly the same height as the other bipeds, and then the game engine uniformly scaled the hobbit down after skinning).
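That last idea (animate everything at the canonical human height, then scale afterwards) can be sketched like this; the names are hypothetical and heightRatio is assumed to be targetHeight / canonicalHeight:

```cpp
// Sketch: play every animation on the canonical, human-height rig, then fold
// a uniform scale into the model's world transform so the same skeleton
// renders short or tall. Shown per-vertex here only for clarity; in practice
// the scale would live in the world matrix, not a per-vertex loop.
struct Vec3 { float x, y, z; };

Vec3 ScaleAfterSkinning(const Vec3& skinnedPos, float heightRatio)
{
    return Vec3{ skinnedPos.x * heightRatio,
                 skinnedPos.y * heightRatio,
                 skinnedPos.z * heightRatio };
}
```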

Can somebody tell me how to include screenshots here and I can provide pictures to explain more?

Unrelated to the technical ability to apply an animation to a biped of different size, I don't think you'd want to use the same animations for a human-sized model as you would for a giant or a dwarf anyway - they're going to move differently, after all.

From the screenshot it looks like you're playing with DAoC models and animations. If that's the case, DAoC's KFAs have a matrix associated with each node. It's been a long time since I worked with those but from memory these are used to transform vertices into bone space. Applying the bone transform then moves the bone correctly regardless of size.

Thanks guys, I'll post the real pictures later today. This one was a picture I tried to add to another post about determining collisions between animated models and picking rays; I was trying to show a thing I created called a "collision rig", built by taking my character's bones and creating spheres with centers at each bone's origin and diameters scaled by the distance from the root.

Oh, here's a giant woman with an idle animation that is tailored for her height:

[screenshot]

and here's the same woman performing a victory emote animation designed for a normal-sized human player, and being squashed:

[screenshot]

Just for fun, here's a player doing the same anim:

[screenshot]

A small idle gnome:

[screenshot]

And a small gnome doing a salute emote and being stretched to the height of the player:

[screenshot]

@PlayerX: There is indeed a list of nodes with local transform matrices after the list of NiStringExtraData nodes containing the index (into animnodes.dat) of the bone the keyframe data belongs to. I will try combining this with the bone transforms I already derived from the NIF file (i.e. the local transform and bone-offset transform found in the NiSkinData or NiSkinPartition information in the NIF), and see if this overcomes the keyframe translations' annoying tendency to "fit" the bones (and hence the model's skin) to a hard-coded height.

More information on PlayerX's suggestion:

A human male's idle animation is in a KFA file called I_HM.KFA; it has 62 root nodes (if a root node is defined as a node
without a parent). The first root node has 61 NiStringExtraData nodes, each containing an integer index into the master bone
list defined in <daoc_root>\figures\animnode.dat. The first root node also has 61 NiKeyframeController nodes, each
containing 0 or more Rotation and Translation keys defining animation on that bone within a time period of 0.0f to 1.0f.

This leaves 61 additional root nodes, which contain the matrix associated with each of the animation nodes defined above,
and referred to by PlayerX in his previous post. Unfortunately these simply duplicate the combined transform given
by the first Rotation and Translation keyframe of that particular bone. In other words, simply applying the animation
with a time of 0.0f or greater will effectively apply the same matrix to the bone positions as these matrices contain.

For example, say that the fifth of the 61 NiStringExtraData nodes refers to bone 4 in the master bone list, which is
really the "Bip01 Spine2" bone. The fifth NiKeyframeController specifies 2 Rotation keys and 2 Translation keys. The first
Rotation and Translation key produce a Y,P,R of { 0.0, 0.0, 4.0 } and X,Y,Z translation of { 4.59, -0.0006, 0.0 }.

The sixth root node (skipping the first one because it contains all the NiStringExtraData/NiKeyframeController nodes)
has a local transform that specifies a Rotation and Translation of Y,P,R { 0.0, 0.0, 4.0 } and { 4.59, -0.0006, 0.0 }
respectively.

I can see no meaningful way to apply this extra per-node matrix in the KFA file, since it already gets factored into
the vertices' positions by simply applying an animation time of >= 0.0f and sampling from the first Rotation/Translation
keyframe for Bip01 Spine2.

So for hardware skinning, the matrix palette contained in shader constant registers would be loaded with the following
matrix for the bone in the model file corresponding to Bip01 Spine2:

MatrixPalette[ Bip01 Spine2 ] = (model's bone offset for Bip01 Spine2, as given by the model's NIF file) * (blended 4x4
transform produced by lerping the Rotation/Translation keys at the current animation time)
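The "lerping" step for the translation keys might look like this minimal sampler (illustrative types, not the actual engine code; rotation keys would use quaternion slerp instead):

```cpp
#include <vector>
#include <algorithm>

// Minimal translation-key sampler for the palette construction described
// above. TransKey is a hypothetical stand-in for the KFA's Translation keys.
struct Vec3 { float x, y, z; };
struct TransKey { float time; Vec3 value; };

// Linearly interpolate the bone translation at time t from a time-sorted key
// list, clamping outside the key range (the KFA period here is 0.0f..1.0f).
Vec3 SampleTranslation(const std::vector<TransKey>& keys, float t)
{
    if (keys.empty())           return Vec3{0, 0, 0};
    if (t <= keys.front().time) return keys.front().value;
    if (t >= keys.back().time)  return keys.back().value;

    // Find the first key at or after t, then blend with its predecessor.
    auto hi = std::lower_bound(keys.begin(), keys.end(), t,
        [](const TransKey& k, float time) { return k.time < time; });
    auto lo = hi - 1;
    const float u = (t - lo->time) / (hi->time - lo->time);

    return Vec3{ lo->value.x + u * (hi->value.x - lo->value.x),
                 lo->value.y + u * (hi->value.y - lo->value.y),
                 lo->value.z + u * (hi->value.z - lo->value.z) };
}
```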

Now since each vertex will have an array of up to 4 bone indices and weight values, any vertex that references the
Bip01 Spine2 bone with a non-zero weight will be pulled towards the coordinate system defined by the above matrix
concatenation.
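That "pulled towards" blend is just linear-blend skinning. A toy, translation-only version (hypothetical names; real code would use the full 4x4 palette matrices) looks like:

```cpp
struct Vec3 { float x, y, z; };

// Toy bone transform: rotation omitted, translation only, which is enough
// to show the weighted blend.
struct BoneXform { Vec3 t; };

// Blend up to four bone transforms per vertex, weighted by skin weights that
// are assumed to sum to 1.
Vec3 SkinVertex(const Vec3& bindPos,
                const BoneXform* palette,
                const int idx[4], const float w[4])
{
    Vec3 out{0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        const Vec3& t = palette[idx[i]].t;
        out.x += w[i] * (bindPos.x + t.x);
        out.y += w[i] * (bindPos.y + t.y);
        out.z += w[i] * (bindPos.z + t.z);
    }
    return out;
}
```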

From the sound of it you didn't get it working. I know those matrices are used to rebase the animation, as I've done this before with those models. Here are some examples applying the same running animation to different models:






This one is just stupid, as obviously the animation was not intended for the horse model, but it still works to some extent, and you gotta love a horse running on two legs. :-)



So keep at it; there's something you're not doing right. If it helps, I'm only using the translation component of those matrices and it is enough.
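A guess at what "only using the translation component" might mean in practice (illustrative names, not confirmed to be PlayerX's exact code): treat the keyframe translation as an offset from the KFA node matrix's translation, and re-apply that offset on top of the target model's bind translation.

```cpp
struct Vec3 { float x, y, z; };

Vec3 Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 Add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Hedged sketch of the rebasing: the KFA's per-node matrix gives the baked
// human-size pose, so (animT - kfaNodeT) is the pure animated offset, which
// is then expressed relative to the target rig's own bind pose.
Vec3 RebaseTranslation(const Vec3& animT,        // sampled from the keyframes
                       const Vec3& kfaNodeT,     // from the KFA's extra node matrix
                       const Vec3& targetBindT)  // from the target model's NIF
{
    return Add(targetBindT, Sub(animT, kfaNodeT));
}
```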

Thanks for the reply, PlayerX. Can you try yours with emo_victory.kfa or emo_salute.kfa on the ice queen?

The CSV files seem to imply that she can use those keyframes, but these anims only seem suited to player-sized models.

I can also get all the player models (trolls, kobolds and humans) to work with any of the KFAs just by accepting the bind translation from the NIF and ignoring the ones in the KFAs.

Also, since you seem to have gone over this stuff before: do you get a weird translation of the CATA lion's jaw and twisting of his tail using the translations provided?

Another question for you: how do the dungeons connect to the outside maps? I haven't really played the original game, except to take some PIX captures of it for analysis, and noticed you did work for a server emulator and probably have detailed information about the correct zone jump points (the data in the CSV file doesn't seem complete).

[Edited by - Steve_Segreto on November 8, 2009 5:21:47 PM]

Yes, they seem to be okay:





I never got into the Catacombs models, so I couldn't tell you about those. This was all stuff I did years ago; it was useful to use a real game's assets while I was working on a 3D engine:


Quote:
Original post by Steve_Segreto
Another question for you: how do the dungeons connect to the outside maps? I haven't really played the original game, except to take some PIX captures of it for analysis, and noticed you did work for a server emulator and probably have detailed information about the correct zone jump points (the data in the CSV file doesn't seem complete).


I don't know. The server emulator used the existing game's client and I can't remember if it was the client or the server that handled the jump points. It's been a loooong time. :-)

Oooh, that's very nice. It's impressive you got the SpeedTree NIFs decoded, as well as the ground cover. Plus your shading/lighting looks better than mine, and you got shadows on the ground. Did you use vertex/pixel shaders or fixed-function? (The original PC DAoC client uses fixed-function in PIX.) Did you use the per-vertex colors or just do your own lighting?

This is the best I can get at the moment; I get a spurious y-axis (height) translation when using the player character animations with short/tall monsters.

[screenshot]

[screenshot]

Your render is looking good. I recognise that town. :-)

Yes I used vertex/pixel shaders, so the lighting is done per-pixel rather than per-vertex. The shadows are straight shadow-mapping.

I don't know why your model is sticking into the ground. Are you doing something unusual with the root node before you apply the animation?

Are you writing a DAoC-alike?
