# DX11 Bottleneck in MD5 animation


Well met,

I have implemented animation in my 3D engine [DX11]. I chose the MD5 format, known from Doom 3 I think, because it appeared to be the simplest format for bone-based animations (correct me if I'm wrong on this).

Anyway, I managed to make it work, but apparently I created a bottleneck somewhere, because even a very simple animation with 3 bones and about 50 polys makes everything slow as hell when played more than 10 times at once on screen.

I double-checked the code for the animation itself; I'm not allocating memory every frame or anything like that. So my best guess is that it has to do with passing the data from CPU to GPU, which is done like this:


D3D11_MAPPED_SUBRESOURCE mappedResource;
d3d11DevCon->Map(matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource); // matrixBuffer is a dynamic (D3D11_USAGE_DYNAMIC) constant buffer

cbPerObject* dataPtr = (cbPerObject*)mappedResource.pData;
dataPtr->worldMatrix = World;
dataPtr->viewMatrix = camView;
dataPtr->projectionMatrix = camProjection;

d3d11DevCon->Unmap(matrixBuffer, 0);
d3d11DevCon->VSSetConstantBuffers(0, 1, &matrixBuffer);

for(int i = 0; i < numSubsets; i++){
    d3d11DevCon->IASetIndexBuffer(subsets[i].indexBuff, DXGI_FORMAT_R32_UINT, 0);
    d3d11DevCon->IASetVertexBuffers(0, 1, &subsets[i].vertBuff, &stride, &offset);
    d3d11DevCon->DrawIndexed(subsets[i].indices.size(), 0, 0);
}
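
For completeness, matrixBuffer is a dynamic constant buffer; its creation looks roughly like this (sketched from memory, so the exact code may differ a bit from what I actually have; d3d11Device is the device pointer):

D3D11_BUFFER_DESC cbDesc = {};
cbDesc.ByteWidth = sizeof(cbPerObject);         // must be a multiple of 16 bytes
cbDesc.Usage = D3D11_USAGE_DYNAMIC;             // CPU writes it every frame
cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // required for Map with WRITE_DISCARD
d3d11Device->CreateBuffer(&cbDesc, nullptr, &matrixBuffer);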


I am only using a vertex and a pixel shader, both very basic.
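
VertexPosNormalTex is just position + normal + texcoord, with the standard input layout (sketched below; vsBytecode/vsBytecodeSize stand in for the compiled vertex shader blob and vertLayout for the ID3D11InputLayout pointer):

D3D11_INPUT_ELEMENT_DESC layout[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
d3d11Device->CreateInputLayout(layout, ARRAYSIZE(layout), vsBytecode, vsBytecodeSize, &vertLayout);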

EDIT: After further investigation, I figured out that the part below is the slow one, not the actual rendering. It's the update of the vertices using the bones.
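
(A simple way to double-check this is to time the update separately from the draw calls, something along these lines; model and deltaTime are just placeholders for whatever the frame loop uses, and it needs windows.h for QueryPerformanceCounter:)

LARGE_INTEGER freq, t0, t1;
QueryPerformanceFrequency(&freq);

QueryPerformanceCounter(&t0);
model.updateWithAnimation(deltaTime, 0); // the CPU-side skinning shown below
QueryPerformanceCounter(&t1);

double skinningMs = 1000.0 * double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
// compare skinningMs with the time spent in the DrawIndexed loop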

EDIT2: Okay, I managed to narrow the problem down: using higher-poly objects costs me more performance. So the problem lies not in something nasty done per object, but in something done per vertex/joint... Or MD5 is just a really slow format, but I can't see why that would be the case. Someone please enlighten me :(


struct Weight{
    int jointID;          // index of the joint this weight belongs to
    float bias;           // how much this weight contributes to the final vertex
    D3DXVECTOR3 normal;   // weight normal in joint space
    D3DXVECTOR3 pos;      // weight position in joint space
};

struct ModelSubset{

    int numTriangles;

    vector<Vertex> vertices;
    vector<DWORD> indices;
    vector<Weight> weights;

    vector<D3DXVECTOR3> positions;
    VertexPosNormalTex *verts;    // CPU-side copy of what gets written into vertBuff
    unsigned long *indis;

    ID3D11Buffer* vertBuff;
    ID3D11Buffer* indexBuff;
};

void animatedModel::updateWithAnimation(float deltaTime, int animationID){

    currentAnimationID = animationID;
    animations[animationID].currAnimTime += deltaTime; // Update the current animation time

    if(animations[animationID].currAnimTime > animations[animationID].totalAnimTime){animations[animationID].currAnimTime = 0.0f;}

    // Which frame are we on
    float currentFrame = animations[animationID].currAnimTime * animations[animationID].frameRate;
    int frame0 = (int)floorf(currentFrame);
    int frame1 = frame0 + 1;

    // Make sure we don't go past the last frame
    if(frame0 == animations[animationID].numFrames - 1){frame1 = 0;}

    float interpolation = currentFrame - frame0; // The remainder (in time) between frame0 and frame1, used as the interpolation factor

    vector<joint> interpolatedSkeleton; // Frame skeleton to store the interpolated joints in
    interpolatedSkeleton.reserve(animations[animationID].numJoints); // avoid reallocations from the push_backs below

    // Compute the interpolated skeleton
    for(int i = 0; i < animations[animationID].numJoints; i++){
        joint tempJoint;
        joint joint0 = animations[animationID].frameSkeleton[frame0][i]; // The i'th joint of frame0's skeleton
        joint joint1 = animations[animationID].frameSkeleton[frame1][i]; // The i'th joint of frame1's skeleton

        tempJoint.parentID = joint0.parentID; // Set the tempJoint's parent id

        // Build D3DXQUATERNIONs from the two joint orientations for easy computation
        D3DXQUATERNION joint0Orient = D3DXQUATERNION(joint0.orientation.x, joint0.orientation.y, joint0.orientation.z, joint0.orientation.w);
        D3DXQUATERNION joint1Orient = D3DXQUATERNION(joint1.orientation.x, joint1.orientation.y, joint1.orientation.z, joint1.orientation.w);

        // Interpolate positions
        tempJoint.pos.x = joint0.pos.x + (interpolation * (joint1.pos.x - joint0.pos.x));
        tempJoint.pos.y = joint0.pos.y + (interpolation * (joint1.pos.y - joint0.pos.y));
        tempJoint.pos.z = joint0.pos.z + (interpolation * (joint1.pos.z - joint0.pos.z));

        // Interpolate orientations using spherical interpolation (slerp)
        D3DXQUATERNION tempO;
        D3DXQuaternionSlerp(&tempO, &joint0Orient, &joint1Orient, interpolation);
        tempJoint.orientation.x = tempO.x;
        tempJoint.orientation.y = tempO.y;
        tempJoint.orientation.z = tempO.z;
        tempJoint.orientation.w = tempO.w;

        interpolatedSkeleton.push_back(tempJoint); // Push the joint into our interpolated skeleton
    }

    for(int k = 0; k < numSubsets; k++){
        for(int i = 0; i < subsets[k].vertices.size(); ++i){
            Vertex tempVert = subsets[k].vertices[i];
            tempVert.pos = D3DXVECTOR3(0, 0, 0);    // Make sure the vertex's pos is cleared first
            tempVert.normal = D3DXVECTOR3(0, 0, 0); // Clear the vertex's normal

            // Sum up the joint and weight information to get the vertex's position and normal
            for(int j = 0; j < tempVert.WeightCount; ++j){
                Weight tempWeight = subsets[k].weights[tempVert.StartWeight + j];
                joint tempJoint = interpolatedSkeleton[tempWeight.jointID];

                // Convert the joint orientation and weight pos to quaternions for easier computation
                D3DXQUATERNION tempJointOrientation = D3DXQUATERNION(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w);
                D3DXQUATERNION tempWeightPos = D3DXQUATERNION(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f);

                // We will need the conjugate of the joint orientation quaternion (for a unit quaternion the inverse equals the conjugate)
                D3DXQUATERNION tempJointOrientationConjugate;
                D3DXQuaternionInverse(&tempJointOrientationConjugate, &tempJointOrientation);

                // Calculate the vertex position (in joint space, i.e. rotate the point around (0,0,0)) for this weight,
                // using the joint orientation quaternion and its conjugate:
                // rotatedPoint = quaternion * point * quaternionConjugate
                D3DXVECTOR3 rotatedPoint;
                D3DXQUATERNION temp1, temp2;
                D3DXQuaternionMultiply(&temp1, &tempJointOrientation, &tempWeightPos);
                D3DXQuaternionMultiply(&temp2, &temp1, &tempJointOrientationConjugate);
                rotatedPoint.x = temp2.x; rotatedPoint.y = temp2.y; rotatedPoint.z = temp2.z;

                // Now move the vertex position from joint space (0,0,0) to the joint's position in world space, taking the weight's bias into account
                tempVert.pos.x += (tempJoint.pos.x + rotatedPoint.x) * tempWeight.bias;
                tempVert.pos.y += (tempJoint.pos.y + rotatedPoint.y) * tempWeight.bias;
                tempVert.pos.z += (tempJoint.pos.z + rotatedPoint.z) * tempWeight.bias;

                // Compute the normals for this frame's skeleton using the weight normals from before
                // We can compute them the same way we compute the vertex positions, only without the translation (just the rotation)
                D3DXQUATERNION tempWeightNormal = D3DXQUATERNION(tempWeight.normal.x, tempWeight.normal.y, tempWeight.normal.z, 0.0f);

                // Rotate the normal (note: the normal quaternion is used here, not the position one)
                D3DXVECTOR3 rotatedNormal;
                D3DXQUATERNION temp3, temp4;
                D3DXQuaternionMultiply(&temp3, &tempJointOrientation, &tempWeightNormal);
                D3DXQuaternionMultiply(&temp4, &temp3, &tempJointOrientationConjugate);
                rotatedNormal.x = temp4.x; rotatedNormal.y = temp4.y; rotatedNormal.z = temp4.z;

                // Add to the vertex's normal, taking the weight's bias into account
                tempVert.normal.x -= rotatedNormal.x * tempWeight.bias;
                tempVert.normal.y -= rotatedNormal.y * tempWeight.bias;
                tempVert.normal.z -= rotatedNormal.z * tempWeight.bias;
            }

            subsets[k].positions[i] = tempVert.pos;          // Store the vertex's position in the position vector instead of straight into the vertex vector
            subsets[k].vertices[i].normal = tempVert.normal; // Store the vertex's normal
            D3DXVec3Normalize(&subsets[k].vertices[i].normal, &subsets[k].vertices[i].normal);
        }

        // Put the positions into the vertices for this subset
        for(int i = 0; i < subsets[k].vertices.size(); i++){
            subsets[k].vertices[i].pos = subsets[k].positions[i];
            subsets[k].verts[i].pos = subsets[k].vertices[i].pos;
            subsets[k].verts[i].normal = subsets[k].vertices[i].normal;
            subsets[k].verts[i].texcoord = subsets[k].vertices[i].texCoord;
        }

        // Upload the updated vertices to the GPU (vertBuff is a dynamic buffer, mapped with WRITE_DISCARD)
        D3D11_MAPPED_SUBRESOURCE mappedVertBuff;
        d3d11DevCon->Map(subsets[k].vertBuff, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedVertBuff);

        VertexPosNormalTex *updatedV = (VertexPosNormalTex *)mappedVertBuff.pData;
        for(int h = 0; h < subsets[k].vertices.size(); h++){
            updatedV[h].pos = subsets[k].verts[h].pos;
            updatedV[h].normal = subsets[k].verts[h].normal;
            updatedV[h].texcoord = subsets[k].verts[h].texcoord;
        }
        d3d11DevCon->Unmap(subsets[k].vertBuff, 0);
    }
}
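
For reference, the inner loop above does four D3DXQuaternionMultiply calls plus one D3DXQuaternionInverse per weight, per vertex, per instance, per frame. One idea would be to build a matrix per interpolated joint once per frame and do a single transform per weight instead; a rough, untested sketch (jointMatrices is a made-up name, and the last lines are meant to replace the body of the per-weight loop above):

// Build one matrix per interpolated joint, once per frame
vector<D3DXMATRIX> jointMatrices(interpolatedSkeleton.size());
for(size_t j = 0; j < interpolatedSkeleton.size(); ++j){
    const joint& jnt = interpolatedSkeleton[j];
    D3DXQUATERNION q(jnt.orientation.x, jnt.orientation.y, jnt.orientation.z, jnt.orientation.w);

    D3DXMATRIX rot, trans;
    D3DXMatrixRotationQuaternion(&rot, &q);
    D3DXMatrixTranslation(&trans, jnt.pos.x, jnt.pos.y, jnt.pos.z);
    jointMatrices[j] = rot * trans; // rotate in joint space, then move to the joint's position
}

// Per weight, the work would then shrink to one transform for the position and one for the normal:
D3DXVECTOR3 rotated;
D3DXVec3TransformCoord(&rotated, &tempWeight.pos, &jointMatrices[tempWeight.jointID]);
tempVert.pos += rotated * tempWeight.bias;

D3DXVECTOR3 rotatedN;
D3DXVec3TransformNormal(&rotatedN, &tempWeight.normal, &jointMatrices[tempWeight.jointID]);
tempVert.normal -= rotatedN * tempWeight.bias;

Whether that is the real cost, or whether the per-subset Map/Unmap of the whole vertex buffer is what hurts with many instances on screen, I can't tell yet.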

