Can't get skeletal animation to work [SOLVED :-) ]

6 comments, last by Fredericvo 9 years, 4 months ago

I have been trying to get this to work for the past few weeks but to no avail. Apparently I must still be missing something.

Some background information first:

I use 2 executables, the first one uses Assimp to import mesh, skeleton and animation data. I save everything in 3 separate binary files.

The second program is a modified Rastertek Tutorial, one of the initial ones with basic lighting, to which I'm trying to add vertex skinning.

Loading a static mesh works perfectly. I've also successfully modified the vertex input signature to include the indices and blendweights.

My binary format is extremely simple and, dare I say, naive. The Assimp structures usually contain an mNumSomething count, which I save with a plain ofstream, e.g. ofs.write( (char *) &numvtxindices, 4);, followed by a for loop that iterates over the items and writes them too.

It's very easy to read this data back: first read the number of items, then read them in a loop, casting to the original structure or a similar one of the same size.
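The pattern, sketched with a hypothetical Item type (the real code uses Assimp's structures, so this is illustrative only):

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <sstream>
#include <vector>

// Count-then-items binary format: a 4-byte count followed by raw structs.
// Item stands in for whatever fixed-size record is being serialized.
struct Item { float x, y, z; };

void writeItems(std::ostream& os, const std::vector<Item>& items)
{
    uint32_t count = static_cast<uint32_t>(items.size());
    os.write(reinterpret_cast<const char*>(&count), sizeof(count));
    os.write(reinterpret_cast<const char*>(items.data()), count * sizeof(Item));
}

std::vector<Item> readItems(std::istream& is)
{
    uint32_t count = 0;
    is.read(reinterpret_cast<char*>(&count), sizeof(count));
    std::vector<Item> items(count);
    is.read(reinterpret_cast<char*>(items.data()), count * sizeof(Item));
    return items;
}
```

This only works for trivially copyable, padding-free records, and the files are not portable across endianness, which is fine for a same-machine tool pair like this.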

I also flattened the skeletal node hierarchy to a simple array of parent ids as in this tutorial:

http://molecularmusings.wordpress.com/2013/02/22/adventures-in-data-oriented-design-part-2-hierarchical-data/

Creating a current pose should really be as simple as iterating this array and, for each node i:

uint parentid = mHierarchy[i];

mGlobalPoses[i] = mLocalPoses[i] * mGlobalPoses[parentid];

then uploading mInvBindMatrices[i] * mGlobalPoses[i] to the shader. Or so I thought, because nothing works.

I just get spaghetti Bolognese whatever combination I try: inverting Assimp's offset matrix or not, transposing or not, etc.

Now it might sound like I'm just trying random hacks, but in reality I think I do know that for DirectXMath's row-major matrices the above is the right combination, with Assimp's aiMatrix4x4 offset matrices transposed into XMMATRIX, and aiQuat's (w,x,y,z) order swizzled to XMVECTOR's (x,y,z,w).
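Both conventions can be illustrated with plain stand-in types (these are not the actual aiMatrix4x4/XMMATRIX or aiQuaternion/XMVECTOR, just structs with the same layouts):

```cpp
#include <cassert>

// Assimp matrices are stored row-major but used with column vectors;
// DirectXMath uses row vectors, so going between them means a transpose.
// aiQuaternion stores (w,x,y,z); XMVECTOR-style quats expect (x,y,z,w).
struct Quat { float w, x, y, z; };   // aiQuaternion component order
struct Vec4 { float x, y, z, w; };   // XMVECTOR-style component order

void transpose4x4(const float in[16], float out[16])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c * 4 + r] = in[r * 4 + c];
}

Vec4 swizzleQuat(const Quat& q)
{
    return Vec4{ q.x, q.y, q.z, q.w }; // move w from the front to the back
}
```

Forgetting either step produces exactly the kind of scrambled mesh described above, so these two conversions are worth verifying in isolation first.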

It doesn't help that most examples are in OpenGL too.

I haven't interpolated anything yet. It's a single pose for which I created a temporary class, adequately named testpose (lol), and probably used a lot of bad coding practices just to keep it simple: I made every member variable public for easy access when sending the matrix palette, didn't create many separate methods, etc. Moreover, I have still kept the skeleton & anim data together. I'm aware that once it works it'll need to be split.

OK, now that this is out of the way let's post the most relevant code.

Let's start with the polygon layout


polygonLayout[0].SemanticName = "POSITION";
polygonLayout[0].SemanticIndex = 0;
polygonLayout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygonLayout[0].InputSlot = 0;
polygonLayout[0].AlignedByteOffset = 0;
polygonLayout[0].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[0].InstanceDataStepRate = 0;
 
polygonLayout[1].SemanticName = "TEXCOORD";
polygonLayout[1].SemanticIndex = 0;
polygonLayout[1].Format = DXGI_FORMAT_R32G32_FLOAT;
polygonLayout[1].InputSlot = 0;
polygonLayout[1].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
polygonLayout[1].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[1].InstanceDataStepRate = 0;
 
polygonLayout[2].SemanticName = "NORMAL";
polygonLayout[2].SemanticIndex = 0;
polygonLayout[2].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygonLayout[2].InputSlot = 0;
polygonLayout[2].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
polygonLayout[2].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[2].InstanceDataStepRate = 0;
 
polygonLayout[3].SemanticName = "BLENDINDICES";
polygonLayout[3].SemanticIndex = 0;
polygonLayout[3].Format = DXGI_FORMAT_R32G32B32A32_UINT;
polygonLayout[3].InputSlot = 0;
polygonLayout[3].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
polygonLayout[3].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[3].InstanceDataStepRate = 0;
 
polygonLayout[4].SemanticName = "BLENDWEIGHT";
polygonLayout[4].SemanticIndex = 0;
polygonLayout[4].Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
polygonLayout[4].InputSlot = 0;
polygonLayout[4].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
polygonLayout[4].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[4].InstanceDataStepRate = 0;
// Get a count of the elements in the layout.
numElements = sizeof(polygonLayout) / sizeof(polygonLayout[0]);

// Create the vertex input layout.
result = device->CreateInputLayout(polygonLayout, numElements, vertexShaderBuffer->GetBufferPointer(),
                                   vertexShaderBuffer->GetBufferSize(), &m_layout);
if(FAILED(result))
{
    return false;
}

The vertex shader


////////////////////////////////////////////////////////////////////////////////
// Filename: light.vs
////////////////////////////////////////////////////////////////////////////////
 
#define MAXJOINTS 60
/////////////
// GLOBALS //
/////////////
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
    matrix joints[MAXJOINTS];
};

cbuffer CameraBuffer
{
    float3 cameraPosition;
    float padding;
};


//////////////
// TYPEDEFS //
//////////////
struct VertexInputType
{
    float4 position : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    uint4 boneidx : BLENDINDICES;
    float4 bonewgt : BLENDWEIGHT;
};

struct PixelInputType
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 viewDirection : TEXCOORD1;
};
 
 
////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType LightVertexShader(VertexInputType input)
{
    PixelInputType output;
    float4 worldPosition;
    float4 skinnedpos = float4(0.0f, 0.0f, 0.0f, 1.0f);

    // Change the position vector to be 4 units for proper matrix calculations.
    input.position.w = 1.0f;

    // Transform according to the joint matrix palette.
    skinnedpos += input.bonewgt.x * mul(input.position, joints[input.boneidx.x]);
    skinnedpos += input.bonewgt.y * mul(input.position, joints[input.boneidx.y]);
    skinnedpos += input.bonewgt.z * mul(input.position, joints[input.boneidx.z]);
    skinnedpos += input.bonewgt.w * mul(input.position, joints[input.boneidx.w]);
    skinnedpos.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(skinnedpos, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

    // TODO: transform the normals with the palette too, but since skinning doesn't work yet they can wait.

    // Store the texture coordinates for the pixel shader.
    output.tex = input.tex;

    // Calculate the normal vector against the world matrix only.
    output.normal = mul(input.normal, (float3x3)worldMatrix);

    // Normalize the normal vector.
    output.normal = normalize(output.normal);

    // Calculate the position of the vertex in the world (using the skinned position, not the bind-pose one).
    worldPosition = mul(skinnedpos, worldMatrix);

    // Determine the viewing direction based on the position of the camera and the position of the vertex in the world.
    output.viewDirection = cameraPosition.xyz - worldPosition.xyz;

    // Normalize the viewing direction vector.
    output.viewDirection = normalize(output.viewDirection);

    return output;
}

The code that sends the cBuffers


bool LightShaderClass::SetShaderParameters(ID3D11DeviceContext* deviceContext, XMMATRIX* globalPoses, unsigned int numJoints,
    const XMMATRIX& worldMatrix, const XMMATRIX& viewMatrix, const XMMATRIX& projectionMatrix,
    ID3D11ShaderResourceView* texture, const XMFLOAT3& lightDirection, const XMFLOAT4& ambientColor,
    const XMFLOAT4& diffuseColor, const XMFLOAT3& cameraPosition, const XMFLOAT4& specularColor, float specularPower)
{
    HRESULT result;
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    unsigned int bufferNumber;
    MatrixBufferType* dataPtr;
    LightBufferType* dataPtr2;
    CameraBufferType* dataPtr3;

    XMMATRIX worldMatrixCopy, viewMatrixCopy, projectionMatrixCopy;

    // Transpose the matrices to prepare them for the shader.
    worldMatrixCopy = XMMatrixTranspose(worldMatrix);
    viewMatrixCopy = XMMatrixTranspose(viewMatrix);
    projectionMatrixCopy = XMMatrixTranspose(projectionMatrix);

    // Lock the constant buffer so it can be written to.
    result = deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    if(FAILED(result))
    {
        return false;
    }

    // Get a pointer to the data in the constant buffer.
    dataPtr = (MatrixBufferType*)mappedResource.pData;

    // Copy the matrices into the constant buffer.
    dataPtr->world = worldMatrixCopy;
    dataPtr->view = viewMatrixCopy;
    dataPtr->projection = projectionMatrixCopy;
    for(unsigned int i = 0; i < numJoints; i++) // unsigned int to match numJoints' type
    {
        dataPtr->joints[i] = XMMatrixTranspose(globalPoses[i]);
        //dataPtr->joints[i] = globalPoses[i];
        //dataPtr->joints[i] = XMMatrixIdentity();
    }

    // Unlock the constant buffer.
    deviceContext->Unmap(m_matrixBuffer, 0);

    // Set the position of the constant buffer in the vertex shader.
    bufferNumber = 0;

    // Now set the constant buffer in the vertex shader with the updated values.
    deviceContext->VSSetConstantBuffers(bufferNumber, 1, &m_matrixBuffer);

    // Lock the camera constant buffer so it can be written to.
    result = deviceContext->Map(m_cameraBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    if(FAILED(result))
    {
        return false;
    }

    // Get a pointer to the data in the constant buffer.
    dataPtr3 = (CameraBufferType*)mappedResource.pData;

    // Copy the camera position into the constant buffer.
    dataPtr3->cameraPosition = cameraPosition;
    dataPtr3->padding = 0.0f;

    // Unlock the camera constant buffer.
    deviceContext->Unmap(m_cameraBuffer, 0);

    // Set the position of the camera constant buffer in the vertex shader.
    bufferNumber = 1;

    // Now set the camera constant buffer in the vertex shader with the updated values.
    deviceContext->VSSetConstantBuffers(bufferNumber, 1, &m_cameraBuffer);

    // Set the shader texture resource in the pixel shader.
    deviceContext->PSSetShaderResources(0, 1, &texture);

    // Lock the light constant buffer so it can be written to.
    result = deviceContext->Map(m_lightBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    if(FAILED(result))
    {
        return false;
    }

    // Get a pointer to the data in the light constant buffer.
    dataPtr2 = (LightBufferType*)mappedResource.pData;

    // Copy the lighting variables into the light constant buffer.
    dataPtr2->ambientColor = ambientColor;
    dataPtr2->diffuseColor = diffuseColor;
    dataPtr2->lightDirection = lightDirection;
    dataPtr2->specularColor = specularColor;
    dataPtr2->specularPower = specularPower;

    // Unlock the light constant buffer.
    deviceContext->Unmap(m_lightBuffer, 0);

    // Set the position of the light constant buffer in the pixel shader.
    bufferNumber = 0;

    // Finally set the light constant buffer in the pixel shader with the updated values.
    deviceContext->PSSetConstantBuffers(bufferNumber, 1, &m_lightBuffer);

    return true;
}
 
 
void LightShaderClass::RenderShader(ID3D11DeviceContext* deviceContext, int indexCount)
{
    // Set the vertex input layout.
    deviceContext->IASetInputLayout(m_layout);

    // Set the vertex and pixel shaders that will be used to render this triangle.
    deviceContext->VSSetShader(m_vertexShader, NULL, 0);
    deviceContext->PSSetShader(m_pixelShader, NULL, 0);

    // Set the sampler state in the pixel shader.
    deviceContext->PSSetSamplers(0, 1, &m_sampleState);

    // Render the triangle.
    deviceContext->DrawIndexed(indexCount, 0, 0);

    return;
}

The vertex declaration


struct VertexType
{
    XMFLOAT3 position;
    XMFLOAT2 texture;
    XMFLOAT3 normal;
    unsigned int blendindices[4];
    float blendweights[4];
};

My "testpose" header file


/********************************************************************************************************************************************************************\
*
* This class is only used for testing a frame of an animation e.g. 1 pose at a given keyframe
*
*********************************************************************************************************************************************************************/
#pragma once
#include <DirectXMath.h>
#include <iostream>
#include <fstream>
using namespace DirectX;
using namespace std;
 
class testpose
{
public:
    struct Channel
    {
        XMFLOAT3 S;
        XMFLOAT4 R;
        XMFLOAT3 T;
    };

    testpose(void);
    ~testpose(void);

    bool Init(void);

    unsigned int mNumChannels;
    Channel *mChannels;
    unsigned int mNumJoints;
    unsigned int *mHierarchy;
    XMMATRIX *mInvbindpose;
    XMMATRIX *mLocalPoses;
    XMMATRIX *mGlobalPoses;
};
 

My testpose's code //testpose.cpp


#include "testpose.h"
 
 
testpose::testpose(void)
{
    mNumChannels = 0;
    mChannels = nullptr;
    mNumJoints = 0;
    mHierarchy = nullptr;
    mInvbindpose = nullptr;
    mLocalPoses = nullptr;
    mGlobalPoses = nullptr;
}


testpose::~testpose(void)
{
    // delete[] and _aligned_free are both no-ops on nullptr, so no checks needed.
    delete[] mChannels;
    mChannels = nullptr;

    delete[] mHierarchy;
    mHierarchy = nullptr;

    _aligned_free(mInvbindpose);
    mInvbindpose = nullptr;

    _aligned_free(mLocalPoses);
    mLocalPoses = nullptr;

    _aligned_free(mGlobalPoses);
    mGlobalPoses = nullptr;
}
 
 
bool testpose::Init(void)
{
    ifstream ifs("../data/anim1frame.bin", std::ifstream::in | ios::binary);

    if(!ifs) // bad() doesn't catch a failed open; a failed open sets failbit
    {
        return false;
    }

    //------------------Read in a frame of animation--------------------------------------------
    ifs.read( (char*) &mNumChannels, 4);

    if(mNumChannels == 0)
    {
        return false;
    }

    mChannels = new Channel[mNumChannels];
    for(unsigned int i = 0; i < mNumChannels; i++)
    {
        ifs.read( (char*) &mChannels[i].S, 3 * 4);
        ifs.read( (char*) &mChannels[i].R, 4 * 4);
        ifs.read( (char*) &mChannels[i].T, 3 * 4);
    }
    ifs.close();

    //---------------Read in bone data such as the hierarchy and inverse bind transforms---------------
    ifs.open("../data/skeleton.bin", std::ifstream::in | ios::binary);

    if(!ifs)
    {
        return false;
    }

    // Determine the number of joints.
    ifs.read( (char*) &mNumJoints, 4);
    if(mNumJoints == 0)
    {
        return false;
    }

    mHierarchy = new unsigned int[mNumJoints];
    for(unsigned int i = 0; i < mNumJoints; i++)
    {
        ifs.read( (char*) &mHierarchy[i], 4);
    }

    mInvbindpose = (XMMATRIX*) _aligned_malloc(sizeof(XMMATRIX) * mNumJoints, 16);
    for(unsigned int i = 0; i < mNumJoints; i++)
    {
        ifs.read( (char*) &mInvbindpose[i], 16 * 4); // 4x4 matrix of 4-byte floats
        mInvbindpose[i] = XMMatrixTranspose(mInvbindpose[i]);
    }
    ifs.close();

    //---------Convert the animation from vectors & quaternions to actual SIMD-friendly matrices---------
    mLocalPoses = (XMMATRIX *) _aligned_malloc(sizeof(XMMATRIX) * mNumChannels, 16);
    for(unsigned int i = 0; i < mNumChannels; i++)
    {
        // The aiQuaternions that were saved are stored as <w,x,y,z>, but XMVECTORs use the <x,y,z,w> convention.
        float x, y, z, w;
        w = mChannels[i].R.x;
        x = mChannels[i].R.y;
        y = mChannels[i].R.z;
        z = mChannels[i].R.w;
        XMVECTOR S = XMLoadFloat3(&mChannels[i].S);
        XMVECTOR R = XMVectorSet(x, y, z, w);
        XMVECTOR T = XMLoadFloat3(&mChannels[i].T);
        XMMATRIX Smat = XMMatrixScalingFromVector(S);
        XMMATRIX Rmat = XMMatrixRotationQuaternion(R);
        XMMATRIX Tmat = XMMatrixTranslationFromVector(T);
        XMMATRIX SRTmat = Smat * Rmat * Tmat;
        //XMMATRIX SRTmat = Tmat * Rmat * Smat;
        //SRTmat = XMMatrixTranspose(SRTmat);
        mLocalPoses[i] = SRTmat;
        //XMMatrixDecompose(&Stmp, &Rtmp, &Ttmp, SRTmat); // just tried to debug this to see whether the data seemed valid; it seemed like it was
    }

    //-----------Create global poses from the hierarchy (assumes parents precede children)-----------
    mGlobalPoses = (XMMATRIX *) _aligned_malloc(sizeof(XMMATRIX) * mNumChannels, 16);
    for(unsigned int i = 0; i < mNumChannels; i++)
    {
        if(mHierarchy[i] == (unsigned int)-1) // root node
        {
            mGlobalPoses[i] = mLocalPoses[i];
        }
        else
        {
            unsigned int parentid = mHierarchy[i];
            mGlobalPoses[i] = mLocalPoses[i] * mGlobalPoses[parentid];
            //mGlobalPoses[i] = mGlobalPoses[parentid] * mLocalPoses[i]; //I've tried permutations of this
        }
    }

    //---------------------Generate the final matrix palette for the vertex shader---------------------
    for(unsigned int i = 0; i < mNumChannels; i++)
    {
        //mGlobalPoses[i] = mGlobalPoses[i] * mInvbindpose[i]; //I've tried permutations of this too, and combinations with the above ones
        mGlobalPoses[i] = mInvbindpose[i] * mGlobalPoses[i];
    }

    return true;
}
 

The model I am using (it's a test model from Assimp itself)


scene = importer.ReadFile( "../../../../../SDKs/assimp-3.1.1-win-binaries/test/models/X/BCN_Epileptic.x",
        aiProcess_CalcTangentSpace |
        aiProcess_Triangulate |
        aiProcess_JoinIdenticalVertices |
        (aiProcessPreset_TargetRealtime_Quality & ~aiProcess_FindInvalidData) | // removing FindInvalidData was recommended in a tutorial; the parentheses matter, since & binds tighter than |
        aiProcess_OptimizeGraph |
        aiProcess_OptimizeMeshes |
        aiProcess_ConvertToLeftHanded |
        aiProcess_SortByPType);
  
  
  // If the import failed, report it
  if( scene == nullptr)
  {
   cout << "import failed" << endl;
    return -1;
  }

How I saved 1 frame of animation (no interpolation yet)


//---------------------------------------------------temp: 1 frame of animation--------------------------------------------------
cout << "saving one frame of animation" << endl;
ofstream ostmp("anim1frame.bin", std::ofstream::out | ios::binary);
ostmp.write( (char*) &scene->mAnimations[0]->mNumChannels, 4);
for(unsigned int chanidx = 0; chanidx < scene->mAnimations[0]->mNumChannels; chanidx++)
{
    aiVector3D S, T;
    aiQuaternion R;
    // Not all channels have an array of rotations: it's 1 or 100 keys depending on the
    // channel, so pick key 45 (somewhere mid-anim, for no apparent reason) when there's more than one.
    unsigned int rotidx = scene->mAnimations[0]->mChannels[chanidx]->mNumRotationKeys > 1 ? 45 : 0;
    S = scene->mAnimations[0]->mChannels[chanidx]->mScalingKeys[0].mValue;  // none of them have more than one
    R = scene->mAnimations[0]->mChannels[chanidx]->mRotationKeys[rotidx].mValue;
    T = scene->mAnimations[0]->mChannels[chanidx]->mPositionKeys[0].mValue; // same as the scalings
    ostmp.write( (char*) &S, 3*4);
    ostmp.write( (char*) &R, 4*4);
    ostmp.write( (char*) &T, 3*4);
}
cout << endl;
ostmp.close();

How my skeleton was saved


//------------------------------ save skeleton data-----------------------------------------------------------------
cout << "saving skeleton" << endl;
ofstream ofskel("skeleton.bin", std::ofstream::out | ios::binary);
unsigned int numnodestosave = flattenedNodes.size();
ofskel.write( (char*) &numnodestosave, 4);
for (unsigned int i = 0; i < numnodestosave; i++)
{
    ofskel.write( (char*) &flattenedNodes[i].id, 4); // 4-byte ids
}
for (unsigned int i = 0; i < numnodestosave; i++)
{
    ofskel.write( (char*) &flattenedNodes[i].invbindpose, 4 * 4 * 4); // 4 bytes per float, 4x4 matrix
}

ofskel.close();

For completeness, I added a RenderDoc screen capture that shows the indices and weights seem OK.


You can help others help you (by doing something other than staring at the code you've already stared at) by determining, at a minimum, which section of code causes the problem. That is, follow the data: look at the run-time values of variables at various places in your code to determine whether they're correct. If they are correct, the problem is somewhere later in the "loop." If they're incorrect, the problem occurs before that code. E.g., is the data imported correctly? That is, have you examined the actual data (at run-time) after import?

As a suggestion, your goal should be to be able to post: "At [some-specified-point-in-my-code], I've verified the input data is correct; the matrix values are correct; there are no errors indicated in any function calls up to the point; and the debug runtime reports no problems. But, after the following 4 lines of code are executed, the results are incorrect as follows: ..."

Also, it appears you've jumped from a static cube to a much more complex hierarchical mesh. If you create a very simple hierarchy to begin with, and your 1000 lines of code don't work, finding what portion of the code is causing the problem will be much simpler.


it might sound like I'm just trying random hacks

Actually, you are just hacking, which sometimes works. But it's much better to understand what you should be doing, and verify that the code actually does what you think it should, rather than "discovering" what works by guessing.

EDIT: There's no problem with not understanding why a line of code doesn't do what you think it should. We've all been there. ;-) Just suggesting you be the one to find that line of code!


I think [emphasis mine - buckeye] I do know that for DirectXmath's row-major matrices the above is the right combination

Row-major or not, a matrix with incorrect values will probably result in something you don't want. It's much better to verify that the actual values at run-time are correct, or at least, what you think is correct. Have you done that?

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

Thanks for answering.

I want to clarify one thing: I'm not a noob who just made his first cube, even though I said I used an early Rastertek tutorial.

In fact I did them all, including refraction & reflection, shadows, their whole terrain tutorials, etc.

If I used an early tut, it's because I needed a framework that was easy enough to add to; it would have seemed a weird choice to pick one where there's already a skybox, reflection, simple collisions, etc.

As far as looking at variables at exact places (aka debugging?), I've done that where I could. I think I saw meaningful matrices (I mean, a matrix with values like 1546789, 98654544, 23455, -454545435435, 545343, etc. would show up as weird, and I did see meaningful values). I used XMMatrixDecompose() to see whether the individual S,R,T values made at least a little sense, but at the end of the day I can't tell whether values that look OK are the ones to be expected in vertex skinning, because it's such a long chain of events and I can't know the actual end values. I'm not sure if this makes sense.


I did see meaningful values. I used XMMatrixDecompose() to see whether the individual S,R,T values made at least a little sense, but at the end of the day I can't tell whether values that look OK are the ones to be expected in vertex skinning, because it's such a long chain of events and I can't know the actual end values. I'm not sure if this makes sense.

Oh, yeah, it makes sense. :D Anyone who's gotten into skinned mesh animation has been there. Also, I wasn't trying to imply you were a noob at programming. It just seemed like you weren't sure where to look, and the link was intended to give some idea of where to start. Seriously, particularly for finding problems with skinned mesh animation, if "meaningful" doesn't mean "correct", that isn't enough.

A response you should expect to hear: you've got 1000 lines of untested code pieced together from what sounds like several sources, you have a custom binary file format, you coded an import routine and imported relatively complex data that may or may not be loaded correctly, and you expect it all to fire up the first time? Really?

That's out of the way, so get yourself down to serious business (if you really want it all to work). I still suggest you "follow the data." If you don't know what good data looks like, then give yourself a chance. Go into Blender (or whatever your modeling program is) and create the simplest skinned mesh you can get away with - maybe just 2 or 3 bones, a couple of boxes to skin, positioned at easily recognized values such as (0,0,0), (0,1,0), etc., all bones and boxes aligned vertically, each vertex weighted to just 1 bone - something like that. Create an animation of 1 or 2 frames with no rotations or translations. Vertex data is easy to recognize as correct, the animation SRTs are trivial, you have simple matrices to look at, etc.

Start with loading the data, and make sure it's correct - not "meaningful" - correct. That includes vertices, matrices, animation values, etc. There's absolutely no sense in debugging code which may be FUBAR, if the input data is FUBAR.

Follow the data from there. E.g., you posted "Creating a current pose should really be as simple as ..." Go look at it!
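One concrete way to follow the data at the import stage, sketched with stand-in types (SkinnedVertex is illustrative, not the thread's actual VertexType): check that each vertex's weights sum to roughly 1 and that every referenced bone index fits inside the palette.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal per-vertex skinning data, mirroring the 4-influence layout above.
struct SkinnedVertex
{
    unsigned int blendindices[4];
    float blendweights[4];
};

// Returns false if any weighted bone index is out of range or any
// vertex's weights don't sum to ~1 (i.e. they weren't normalized).
bool validateSkinning(const std::vector<SkinnedVertex>& verts,
                      unsigned int numJoints)
{
    for (const SkinnedVertex& v : verts)
    {
        float sum = 0.0f;
        for (int i = 0; i < 4; ++i)
        {
            if (v.blendweights[i] > 0.0f && v.blendindices[i] >= numJoints)
                return false; // index points past the matrix palette
            sum += v.blendweights[i];
        }
        if (std::fabs(sum - 1.0f) > 1e-3f)
            return false;     // weights should be normalized
    }
    return true;
}
```

Running a check like this right after import would have flagged the out-of-range indices that turned out to be the final bug in this thread.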



Go into Blender (or whatever your modeling program is) and create the simplest skinned mesh you can get away with - maybe just 2 or 3 bones, a couple of boxes to skin, positioned at easily recognized values such as (0,0,0), (0,1,0), etc., all bones and boxes aligned vertically, each vertex weighted to just 1 bone

This is something I was considering next, although I was hoping to avoid it as Blender has a steep learning curve of its own.

I think I will first try another test model with fewer bones such as the wuson.


Start with loading the data, and make sure it's correct - not "meaningful" - correct.

I think this is where I might use a small recap as I'm starting to have doubts.

Are localposes & offset matrices, and applying those 2 equations above to get global poses really all there is to it or did I miss something?

I've read in several tutorials and forum posts that the transform in aiNodes is only used when there is no animation, and that you can ignore it for animated nodes because the animation data replaces, rather than complements, it.
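That rule is per node, and it can be sketched like this (the Pose type and names are illustrative stand-ins, not Assimp's API): a node takes its local pose from its animation channel if one exists for it, and falls back to the static node transform otherwise.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Pose stands in for a local 4x4 transform.
struct Pose { float placeholder; };

// nodeTransforms come from the node hierarchy (aiNode-style static data);
// channels map an animated node's name to its SRT-derived pose (aiNodeAnim-style).
std::vector<Pose> buildLocalPoses(
    const std::vector<std::string>& nodeNames,
    const std::vector<Pose>& nodeTransforms,
    const std::unordered_map<std::string, Pose>& channels)
{
    std::vector<Pose> local(nodeNames.size());
    for (size_t i = 0; i < nodeNames.size(); ++i)
    {
        auto it = channels.find(nodeNames[i]);
        local[i] = (it != channels.end()) ? it->second       // animated: channel wins
                                          : nodeTransforms[i]; // no channel: keep static transform
    }
    return local;
}
```

The key point is that a skeleton usually has more nodes than the animation has channels, so "ignore the node transforms" only holds for the nodes that actually have a channel.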

If I can be 100% confident about this then it's obviously my data trail that's wrong, probably in my importer or something.

OK guys, I found a huge flaw in the way I saved animations.

That model is composed of 3 meshes. I only used mesh 0 and saved offset matrices for it, but while saving animations I just saved all the channels, even those which affect nodes for which I didn't save offset matrices (the nodes that probably affect meshes 1 and 2).

I obviously have to match anim nodes with aiNodes the same way I did for the offset matrices. I must've been drunk when I wrote that code.


Are localposes & offset matrices, and applying those 2 equations above to get global poses really all there is to it or did I miss something?

If you're referring to the paragraph that begins "Creating a current pose ..." - maybe. ;-) It depends on what you mean by "current pose," "local pose," and "global pose," where in the process that occurs, and what you're going to do with that matrix. Unfortunately, there isn't a universal set of skinned mesh animation terms. I.e., you may be using "local pose" to mean something different than what I may assume. "Global" implies "world," but you may mean "relative to the root frame/bone," etc.

First, though, take a look at this article regarding skinned mesh animation. Maybe it will help you with your understanding of the process. Rather than concentrating on the implementation of the code, I'd suggest you glean from it more of how the process works, particularly regarding what each matrix (by whatever name) does with regard to the final process.


YAAAAAY !!!!!!!!!!!!!!!!!!!!!!!!!!!

I finally got this to work.

After correcting the way I saved animations it didn't work yet and I was so disappointed and frustrated.

Now it appears I simply had another big bug in how I saved the BLENDINDICES: I saved the index into mBones (wrong, so wrong) instead of the index into my flattened node array.
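For anyone finding this later, that fix can be sketched as a name-to-flattened-id remap (the names and helpers here are illustrative, not the thread's actual code): build a map from node name to flattened-array index once, then push every per-vertex bone index through it instead of using the aiMesh::mBones index directly.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Build the lookup once from the flattened skeleton's node names.
std::unordered_map<std::string, unsigned int>
makeNodeIdMap(const std::vector<std::string>& flattenedNodeNames)
{
    std::unordered_map<std::string, unsigned int> map;
    for (unsigned int i = 0; i < (unsigned int)flattenedNodeNames.size(); ++i)
        map[flattenedNodeNames[i]] = i;
    return map;
}

// Translate a bone's name (e.g. aiMesh::mBones[b]->mName) into the id
// that BLENDINDICES must actually store: the flattened-array index.
unsigned int remapBoneIndex(
    const std::unordered_map<std::string, unsigned int>& nodeIds,
    const std::string& boneName)
{
    return nodeIds.at(boneName); // throws if the bone isn't in the skeleton
}
```

Using at() rather than operator[] means a bone name that's missing from the skeleton fails loudly at export time instead of silently producing an index of 0.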

Next step will be to actually interpolate and animate but that's for Monday and the whole of next week :)

This topic is closed to new replies.
