DirectX / vertex shader bone animation problems

Hi there! I have two questions concerning bone animation that I can't sort out on my own. Any help is appreciated.

I have an application that can draw 3D model animations using bones in DirectX 9. The application uses an index in a custom vertex to know which bone matrix to multiply with, like so:

struct MYVERTEX
{
    float x, y, z;
    int matrixIndex;     //the bone index
    float nx, ny, nz;
    unsigned int color;
    float tu, tv;
};

I calculate the matrix for each bone and set it like so:

for(all bones)
    pDevice->SetTransform(D3DTS_WORLDMATRIX(index), &bone_matrix);

I also set these render states for my models that use bones:

SetRenderState(D3DRS_INDEXEDVERTEXBLENDENABLE, TRUE);
SetRenderState(D3DRS_VERTEXBLEND, D3DVBF_0WEIGHTS);

and the FVF for the models that use bones is as follows:

MYVERTEX_FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_NORMAL | D3DFVF_TEX1 | D3DFVF_XYZB1 | D3DFVF_LASTBETA_UBYTE4;

This works great and my character moves correctly. However, as the model animates, the "shadows" on the model move along with the animation, so it seems like each triangle's color is calculated before the bone-matrix transformation, which seems very odd. Of course I want the model to be lit correctly even while it animates with bones.

Question 1: Does anyone know why this happens? If someone has any idea how to solve this problem without writing my own vertex shader, I would be grateful.

Actually, I have already started on a vertex shader that performs the bone-matrix calculation, but I have run into a problem: in my vertex shader I have to multiply each vertex position with a different bone matrix depending on which bone index the vertex has, and I don't know how to write this. This is how my vertex shader currently looks:

//a simple example with three bones
//bone matrix 1 is in constant registers c0-c3
//bone matrix 2 is in constant registers c4-c7
//bone matrix 3 is in constant registers c8-c11
dcl_position v0
dcl_color v7
dcl_texcoord v8
dp4 oPos.x, v0, c0
dp4 oPos.y, v0, c1
dp4 oPos.z, v0, c2
dp4 oPos.w, v0, c3
mov oT0, v8
mov oD0, v7

So what I want to do is perform the "dp4 oPos.x, v0, c0" operation on different constant registers depending on which bone index the vertex has:

if the index is 0, it should do "dp4 oPos.x, v0, c0" and so on,
if the index is 1, it should do "dp4 oPos.x, v0, c4" and so on,
and if the index is 2, it should do "dp4 oPos.x, v0, c8" and so on.

But as I said, I have no idea how to write this. That was Question 2.

I know these might be tricky questions, but I hope I have described them clearly enough, and I am looking forward to any suggestions.

/Dogen
The lighting must be done after the bones, and after transforming to world coords (if your bones don't already include the world transform), possibly even after transforming to view coords (this is common).

I store 4 bone IDs per vertex in a D3DCOLOR (nVidia doesn't support UBYTE4) and multiply it by 765.1 in the shader (each matrix takes 3 constant registers, and a color component arrives divided by 255, so 255*3 = 765... with the .1 added because of rounding errors).
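For reference, a minimal CPU-side sketch of that packing (the function name and the choice of c0 for the scale are illustrative; a D3DCOLOR stream element expands in the shader as (R, G, B, A) -> (x, y, z, w)):

//pack four bone indices (0-255 each) into a D3DCOLOR blend-indices field;
//the shader sees each component as n/255, so multiplying by 765.1
//recovers roughly n*3, i.e. the constant-register offset of bone n
DWORD PackBoneIndices(BYTE i0, BYTE i1, BYTE i2, BYTE i3)
{
    return D3DCOLOR_ARGB(i3, i0, i1, i2); //expands to x=i0, y=i1, z=i2, w=i3
}

//upload the scale once; the shader reads it from c0.x
const float scale[4] = { 765.1f, 0.0f, 0.0f, 0.0f };
pDevice->SetVertexShaderConstantF(0, scale, 1);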


mul r2.x, c0.x, v1.x // Get first bone index (c0.x = 765.1 scale)
mov a0.x, r2.x // vs_1_1 allows only mov to write the address register
dp4 r0.x, v0, c[a0.x + 1] // pos
dp4 r0.y, v0, c[a0.x + 2]
dp4 r0.z, v0, c[a0.x + 3]
dp3 r1.x, v2, c[a0.x + 1] // normal
dp3 r1.y, v2, c[a0.x + 2]
dp3 r1.z, v2, c[a0.x + 3]
// scale by weight
// get the next bone id
// get its values
// use mad to scale and add to the previous result, e.g.:
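// (a sketch of those four steps, assuming the second scaled id comes from
// v1.y and the two blend weights arrive in v3.x / v3.y; v3 is illustrative)
mul r0.xyz, r0, v3.x // scale first bone's position by its weight
mul r1.xyz, r1, v3.x // scale first bone's normal by its weight
mul r2.x, c0.x, v1.y // scale the second packed bone id
mov a0.x, r2.x
dp4 r3.x, v0, c[a0.x + 1] // second bone's position
dp4 r3.y, v0, c[a0.x + 2]
dp4 r3.z, v0, c[a0.x + 3]
dp3 r4.x, v2, c[a0.x + 1] // second bone's normal
dp3 r4.y, v2, c[a0.x + 2]
dp3 r4.z, v2, c[a0.x + 3]
mad r0.xyz, r3, v3.y, r0 // weight the second position and accumulate
mad r1.xyz, r4, v3.y, r1 // weight the second normal and accumulate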
// transform by World if needed, View, Proj, output position

// transform by world if needed
// do height fog if needed
// transform by view
// OR
// transform by world, view

// do depth fog
// do lighting
// do camera space tex coord generation
"The lighting must be done after bones, and after transforming to world coords"
Yes, of course it has to; that is why I think it is so strange that it seems to happen the other way around when I let DirectX take care of my bone-matrix multiplication using:
SetTransform(D3DTS_WORLDMATRIX(index), &bone_matrix);
SetRenderState(D3DRS_INDEXEDVERTEXBLENDENABLE, TRUE);
SetRenderState(D3DRS_VERTEXBLEND, D3DVBF_0WEIGHTS);
and so on..


"dp4 r0.x, v0, c[a0.x + 1]"
Oh, so you can address things like that! I had no idea.
I will try that. Thanks.
However, I don't fully understand why and where you multiply the bone ID by 765.1.
Do you set 765.1 in constant register c0?
Shouldn't it be enough to simply multiply the bone ID by four to "scale" to the correct bone matrix?

/Dogen
I've never used fixed-pipe skinning, so I can't say how it works. I'm sure you can't mix it with vertex shaders, so how it behaves doesn't matter here. I don't even think nVidia supports it, so I guess you've got an ATI card.

Yes, I put 765.1 at c0.x for this example (note how I multiply the bone-ID register by c0.x and then mov the result into a0.x).

Assume you're doing the complex case of four-bone skinning. You could store boneid*3 as FLOAT4, SHORT4, or even UBYTE4, but nVidia doesn't support UBYTE4. If we use D3DCOLOR instead, we still get four 8-bit values; they just arrive as 0/255 through 255/255. This is why I scale the IDs: to undo the n/255 of D3DCOLOR and multiply by 3 at the same time.

The only matrix that needs 4 constants is the projection matrix, as it uses the last column (the last row after transposing for constant usage). Every other matrix can get away with 3 constants; that's why it's *255*3 instead of *255*4.
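For illustration, uploading bones that way might look like this on the CPU side (a sketch using the D3DX helpers; numBones and boneMatrix are illustrative, and c1 as the first bone register matches the c[a0.x + 1] addressing in the example above):

//each bone takes 3 registers: the first 3 rows of its transposed matrix,
//so bone i lives at c(1 + i*3), exactly where the scaled id points
D3DXMATRIX t;
for (DWORD i = 0; i < numBones; ++i)
{
    D3DXMatrixTranspose(&t, &boneMatrix[i]);
    pDevice->SetVertexShaderConstantF(1 + i * 3, (float*)&t, 3);
}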
Hi again
I have tried it now and it works very well. Thanks a lot. Now I also understand why you used the number 765.1.

"I'm sure you can't mix it with vertex shaders"
I never intended to mix them, but since fixed-pipe skinning didn't work I started on the vertex shader approach instead. However, I would still like to know why fixed-pipe skinning behaves the way I described earlier, if anyone has any idea.

I have a GeForce4 Ti 4800 (nVidia) card, but I have been using software vertex processing while implementing, which is probably why UBYTE4 worked.
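(For context, software vertex processing is chosen when the device is created, something like the sketch below; with this flag vertex shaders run on the CPU, so stream types the card itself doesn't support, like UBYTE4 on nVidia, can still work. hWnd, d3dpp and pDevice are illustrative:)

pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                   D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                   &d3dpp, &pDevice);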

I have another question: how smart is a vertex shader?
I mean, does it run all the code in the shader for every vertex, or can it tell which parts only need to run once for all vertices?
For example, if I send all my bone matrices to the vertex shader without premultiplying them by the view and projection matrices, I have to do that multiplication in the vertex shader instead. Can I somehow write it so the shader performs the bone-matrix * view-matrix * projection-matrix multiplication only once, or will it do it once per vertex?
As I understand it, if I send the projection and view matrices separately I only have to send the first 3 "float4" rows of each bone matrix and can thus use many more bones in my models (vs_1_1 guarantees 96 constant registers, so that's roughly 32 bones at 3 registers each versus 24 at 4, before reserving registers for anything else). On the other hand, if I compute bone-matrix * view-matrix * projection-matrix before the vertex shader, I have to send all 4 "float4" rows, which of course uses many more constant registers.

/Dogen
Shaders are dumb. If you can pre-calculate something on the CPU, do it... I don't think mixing the view/proj matrices into your bone matrices would be wise, though. You can mix in the view matrix if you don't need a few possible features.

When doing software processing, the shader is run once for each vertex between (startvertex) and (startvertex + numvertices), and the results are then sent to the card. In hardware, the shader is generally run on the fly as needed: if your index list skips vertices (i.e. 0, 10, 15), only the used vertices are transformed, and if an index is reused soon (within ~18 indices) the card uses a cached result.

Now, as to why view/proj may not be wise in your bone matrices...

If you do height fog, or want worldxz for lightmaps, etc. you'll want world coordinates. I've been meaning to experiment with using WorldSpaceNormal, for example, to automatically add moss onto the correct part of a tree. If you don't want any of these, you could use bone*world*view as your bone matrix.

If you want depth fog, CameraSpaceNormal, CameraSpaceReflectionVector, CameraSpacePosition, EyeVector (specular, CSRV, etc.), lighting, etc., these are generally done in view space. (For lighting, transform point lights and directional lights to view space before programming them as constants.) If you mix in the projection, you'll not only use extra constants, but you will also have transformed into a space where you can't do any useful work.
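For that last point, a minimal CPU-side sketch (using the D3DX helpers; 'view' and 'light' are illustrative names, and c8 is the light-direction register Dogen uses below):

//transform the directional light into view space once per frame, so the
//shader can light view-space normals with a single dp3
D3DXVECTOR3 dirVS;
D3DXVec3TransformNormal(&dirVS, (D3DXVECTOR3*)&light.Direction, &view);
D3DXVec3Normalize(&dirVS, &dirVS);
const float lightDir[4] = { dirVS.x, dirVS.y, dirVS.z, 0.0f };
pDevice->SetVertexShaderConstantF(8, lightDir, 1);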
Ok
I see your point. I have changed my vertex shader application so it separates the bone matrices, the view matrix and the projection matrix. I have also changed it so I only use 3 constant registers per bone (I used four before), so I can use some more bones, and it works very nicely.
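(For reference, the once-per-frame CPU side of this might look like the sketch below, matching the c0-c3 "viewprojection-matrix" and c14 comments in the shader that follows; the variable names are assumptions:)

//combine view and projection once on the CPU instead of per vertex,
//then upload the transpose so the shader's dp4s see matrix columns
D3DXMATRIX viewProj, t;
D3DXMatrixMultiply(&viewProj, &view, &proj);
D3DXMatrixTranspose(&t, &viewProj);
pDevice->SetVertexShaderConstantF(0, (float*)&t, 4); //c0-c3

const float zeroZeroZeroOne[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
pDevice->SetVertexShaderConstantF(14, zeroZeroZeroOne, 1); //c14 = {0,0,0,1}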

Now that I have this working, I wanted to incorporate some lighting into my vertex shader to make my character look better. I have read some articles and looked at some examples to try to understand how lighting works. It seems some people perform the lighting calculations in view space and some in world space. As I understand it, it doesn't matter which you choose as long as the light position/direction and the normals are in the same space (world or view). I have tried both approaches, but I get the same strange result I described previously (the lighting follows my animation, so a dark unlit area doesn't get brighter even when the animation rotates it towards the light). Here is the code I used for the world-space approach:

//I only use one directional light in my example
I send the light's direction to the vertex shader like so:
SetVertexShaderConstantF(8, (float*)&light.Direction, 1)

//c14 = constant {0, 0, 0, 1}
//c8 = the light's direction
//c15 = the bone-id scale constant (register assumed here, see below)
//(c16 - ?) = bone matrices
dcl_position v0
dcl_color v7
dcl_texcoord v8
dcl_normal v4
dcl_blendindices v1

//I transform the vertex positions with the bone matrices.
//First load the scaled bone index into the address register
//(c15 is an assumed register for the scale: 3 for UBYTE4 indices,
//765.1 for D3DCOLOR-packed ones):
mul r0.x, c15.x, v1.x
mov a0.x, r0.x //vs_1_1: only mov may write a0
//r1 = vertex position * bonematrix[id]
dp4 r1.x, v0, c[a0.x + 16]
dp4 r1.y, v0, c[a0.x + 17]
dp4 r1.z, v0, c[a0.x + 18]
dp4 r1.w, v0, c14 //without this DirectX refuses to compile my shader (the dp4s below read all four components of r1)

//oPos = transformed vertex position * viewprojection-matrix
dp4 oPos.x, r1, c0
dp4 oPos.y, r1, c1
dp4 oPos.z, r1, c2
dp4 oPos.w, r1, c3

//The lighting part:
//I transform the normals with the bone matrices just as I did with the positions, but with dp3 so the bones' translation doesn't affect the normals
//r2 = normal * bonematrix[id]
dp3 r2.x, v4, c[a0.x + 16]
dp3 r2.y, v4, c[a0.x + 17]
dp3 r2.z, v4, c[a0.x + 18]
dp3 r2.w, v4, c14

//This renormalizes the normal; I found the code in some tutorial and have tried both with and without it
//renormalize the normal
dp3 r2.w, r2, r2
rsq r2.w, r2.w
mul r2, r2, r2.w

//r3.x = (skinned normal) dot -(light's direction)
dp3 r3.x, r2, -c8

//oD0 = r3.x * color
mul oD0, r3.x, v7
mov oT0, v8 //pass the texture coordinates through


I can't find any real problem with this; it simply doesn't work as it should, as described before. Since I perform the same calculation on the normals as on the vertex positions, except that the translation part is dropped, I think the normals should be rotated correctly.

I also find one other thing rather strange. In one tutorial they said the lighting term should be calculated like so:
dp3 r?, normal, -c[LIGHT_POSITION]
But since I use a light direction, shouldn't I do:
dp3 r?, normal, c[LIGHT_DIRECTION]
Yet if I leave out the minus sign before the light's direction, my character gets bright on the wrong side compared to the rest of my world, which is rendered with the fixed-function pipeline.

/Dogen

This topic is closed to new replies.
