Tournicoti

Member Since 30 Aug 2009
Offline Last Active Jun 02 2014 12:05 AM

Topics I've Started

Extract position, scaling and rotation from world matrix

01 June 2014 - 09:29 AM

Hello

 

I need to get position, scaling and rotation from a world matrix.

I would appreciate it if someone could correct me if any of my assumptions below are wrong:

 

(In my case, matrices are row-major.)

 

So let's take this world matrix:

m11,m12,m13,m14
m21,m22,m23,m24
m31,m32,m33,m34
m41,m42,m43,m44

position=Vec3(m41,m42,m43)

 

scale.x=length(m11,m12,m13)

scale.y=length(m21,m22,m23)

scale.z=length(m31,m32,m33)

 

Then I use D3DX to get the quaternion from this matrix:

D3DXQuaternionRotationMatrix(&quaternion,&worldMatrix);
D3DXQuaternionNormalize(&quaternion,&quaternion);

Does this seem OK to you?

 

Thanks in advance for your help!

 

[EDIT]

My mistake, I didn't see the D3DX function 'D3DXMatrixDecompose()'.
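For reference, a minimal sketch of both routes with the D3DX math types; the variable names are illustrative, and the sketch assumes the matrix contains only scale, rotation and translation:

// Sketch only: decomposing a row-major D3DX world matrix.
D3DXMATRIX worldMatrix;            // assumed to be filled in elsewhere
D3DXVECTOR3 scale, position;
D3DXQUATERNION rotation;

// One-call route mentioned in the edit above:
D3DXMatrixDecompose(&scale, &rotation, &position, &worldMatrix);

// Manual route, mirroring the post:
D3DXVECTOR3 row0(worldMatrix._11, worldMatrix._12, worldMatrix._13);
D3DXVECTOR3 row1(worldMatrix._21, worldMatrix._22, worldMatrix._23);
D3DXVECTOR3 row2(worldMatrix._31, worldMatrix._32, worldMatrix._33);

position = D3DXVECTOR3(worldMatrix._41, worldMatrix._42, worldMatrix._43);
scale    = D3DXVECTOR3(D3DXVec3Length(&row0), D3DXVec3Length(&row1), D3DXVec3Length(&row2));

// Divide the scale out of the upper 3x3 before extracting the quaternion,
// otherwise a non-unit scale skews the rotation.
D3DXMATRIX rotationOnly = worldMatrix;
for (int i = 0; i < 3; ++i)
{
    rotationOnly(0, i) /= scale.x;
    rotationOnly(1, i) /= scale.y;
    rotationOnly(2, i) /= scale.z;
}
D3DXQuaternionRotationMatrix(&rotation, &rotationOnly);
D3DXQuaternionNormalize(&rotation, &rotation);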


Issue while trying to sample a texture in a vertex shader

08 December 2013 - 04:48 PM

I have this vertex shader:

Texture2D matrixPalettes;

SamplerState matrixPaletteSamplerState
{
    AddressU=CLAMP;
    AddressV=CLAMP;
    Filter=MIN_MAG_MIP_POINT;
};

row_major matrix computeBoneTransformInstanced(in uint instanceIndex,in float4 weights,in uint4 boneIds)
{
	row_major matrix boneTransform=matrix(0.0f,0.0f,0.0f,0.0f,
										  0.0f,0.0f,0.0f,0.0f,
										  0.0f,0.0f,0.0f,0.0f,
										  0.0f,0.0f,0.0f,0.0f);
	row_major matrix transform;

	[unroll (NUM_INFLUENCES)]
	for (int i=0;i<NUM_INFLUENCES;i++)
	{
		[unroll (4)]
		for (int j=0;j<4;j++)
			transform[j]=matrixPalettes.Sample(matrixPaletteSamplerState,float2(0.0f,0.0f),uint2(4*boneIds[i]+j,instanceIndex));
		
		boneTransform+=weights[i]*transform;
	}

	return boneTransform;
}

PS_INPUT_DS_GB VS_DS_GB_SKINNED(VS_INPUT_SKINNED_INST input)
{
	PS_INPUT_DS_GB output;
	row_major matrix boneTransform=computeBoneTransformInstanced(input.csmMaskInstanceId.y,input.weights,input.ids);
	row_major matrix W=mul(boneTransform,input.vWorldMatrix);
	row_major matrix WVP=mul(W,g_mViewProjection);

//...
	
	return output;
}

It's supposed to fetch the matrix palettes from a texture for instanced skinned rendering.

But I get this error: "error X4532: cannot map expression to vs_4_0 instruction set", on this line:

transform[j]=matrixPalettes.Sample(matrixPaletteSamplerState,float2(0.0f,0.0f),uint2(4*boneIds[i]+j,instanceIndex));

What's wrong here? Is it legal to sample a texture in a vertex shader?
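For context: Sample() relies on screen-space derivatives that only exist in the pixel shader, so vs_4_0 cannot map it; vertex shaders normally use an explicit-LOD fetch instead. A minimal sketch of the inner loop using Load — the texel layout (x = 4 * boneId + row, y = instance) is an assumption based on the offset argument used in the post:

// Sketch only: explicit texel fetch, usable in a vertex shader (no gradients needed).
[unroll (4)]
for (int j = 0; j < 4; j++)
{
    // One matrix row per texel: x = 4 * boneIds[i] + j, y = instanceIndex, mip 0.
    transform[j] = matrixPalettes.Load(int3(4 * boneIds[i] + j, instanceIndex, 0));
}

// Alternative: keep the sampler but pin the mip level explicitly.
// transform[j] = matrixPalettes.SampleLevel(matrixPaletteSamplerState, someUV, 0.0f); // someUV: normalized coords, an assumption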


Question about bone transformations

27 November 2013 - 03:19 AM

Hello

I'm implementing skeletal animation with Assimp for DX10, following this tutorial: http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html

 

For now, I can display the skinned mesh in its bind (global) pose by filling the matrix palette with identity matrices.

 

I don't understand the bone transformation explained in this tutorial, that is:

offsetMatrix*scale*rotation*translation*inverseRootNodeTransform

I think it should be :

offsetMatrix*scale*rotation*translation*inverseOffsetMatrix

1) offsetMatrix : to get from model space to bone space

2) scale*rotation*translation : the transformation in bone space, obtained by interpolating the animation keys,

3) inverseOffsetMatrix : to get back in model space.

 

Which one do you think is correct, please?

 

Thanks

 

[EDITED]

I tried both and got incorrect results (abstract art instead of a ... duck).

 

Animation key interpolation seems to work, but apparently I'm computing the bone transformations incorrectly.

I suppose I don't correctly understand the purpose of the offset matrix ...

 

[EDITED2] Fixed!

 

In fact, the transformation for a node is:

scale*rotation*translation*parent

and for a bone it is:

offset*scale*rotation*translation*parent*inverseRoot
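In D3DX terms (row-major matrices, row vectors, so the leftmost matrix is applied first), the two formulas above amount to something like this sketch; the variable names are illustrative, not the actual code:

// Sketch only: composing the transforms above with row-major D3DX matrices.
D3DXMATRIX local   = scale * rotation * translation;       // node's animated local transform
D3DXMATRIX global  = local * parentGlobal;                  // node's transform in model space
D3DXMATRIX palette = offsetMatrix * global * inverseRoot;   // final matrix written for a bone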

[EDIT3]

In this tutorial it is said:

 

 

If the node does not correspond to a bone then that is its final transformation. If it does we overwrite it with a matrix that we generate.

 

Actually that's not entirely true: if an animation channel is associated with a node, we must also overwrite its transform with the animated one, even if the node is not associated with a bone.

(Anyway, it would be odd for some animation channels to be defined for nothing...)
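A rough sketch of that traversal in C++: Assimp's aiNode/aiNodeAnim types are real, but ToD3DXMatrix, FindChannel, InterpolateChannel, FindBoneIndex and the palette arrays are hypothetical helpers standing in for the application's own code:

#include <assimp/scene.h>    // aiNode, aiNodeAnim
#include <d3dx10math.h>      // D3DXMATRIX
#include <vector>

// Hypothetical helpers (not Assimp API):
D3DXMATRIX ToD3DXMatrix(const aiMatrix4x4& m);                      // convert (transpose if conventions differ)
const aiNodeAnim* FindChannel(const char* nodeName);                // NULL if no channel targets this node
D3DXMATRIX InterpolateChannel(const aiNodeAnim* channel, float t);  // interpolated scale*rotation*translation
int FindBoneIndex(const char* nodeName);                            // -1 if the node is not a bone

extern std::vector<D3DXMATRIX> bonePalette;   // one matrix per bone
extern std::vector<D3DXMATRIX> boneOffsets;   // aiBone::mOffsetMatrix, converted
extern D3DXMATRIX inverseRoot;                // inverse of the root node's transform

void BuildPose(const aiNode* node, const D3DXMATRIX& parentGlobal, float time)
{
    // Start from the node's static transform ...
    D3DXMATRIX local = ToD3DXMatrix(node->mTransformation);

    // ... but if an animation channel targets this node, override it with the
    // animated transform, even when the node is not a bone.
    if (const aiNodeAnim* channel = FindChannel(node->mName.C_Str()))
        local = InterpolateChannel(channel, time);

    D3DXMATRIX global = local * parentGlobal;

    // Only nodes that correspond to a bone write into the matrix palette.
    int boneIndex = FindBoneIndex(node->mName.C_Str());
    if (boneIndex >= 0)
        bonePalette[boneIndex] = boneOffsets[boneIndex] * global * inverseRoot;

    for (unsigned int i = 0; i < node->mNumChildren; ++i)
        BuildPose(node->mChildren[i], global, time);
}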


Input layout for skinned meshes

25 November 2013 - 10:25 AM

Hello

 

I define the input layout like this in the .cpp file:

const UINT NMF_INPUTLAYOUT_SKINNED_SIZE=8;
const D3D10_INPUT_ELEMENT_DESC NMF_INPUTLAYOUT_SKINNED[NMF_INPUTLAYOUT_SKINNED_SIZE]=
{
	{"POSITION",	0,	DXGI_FORMAT_R32G32B32_FLOAT,	0,	0,	D3D10_INPUT_PER_VERTEX_DATA,	0},
	{"TANGENT",	0,	DXGI_FORMAT_R32G32B32_FLOAT,	0,	12,	D3D10_INPUT_PER_VERTEX_DATA,	0},
	{"BINORMAL",	0,	DXGI_FORMAT_R32G32B32_FLOAT,	0,	24,	D3D10_INPUT_PER_VERTEX_DATA,	0},
	{"NORMAL",	0,	DXGI_FORMAT_R32G32B32_FLOAT,	0,	36,	D3D10_INPUT_PER_VERTEX_DATA,	0},
	{"TEXCOORD",	0,	DXGI_FORMAT_R32G32B32A32_FLOAT,	0,	48,	D3D10_INPUT_PER_VERTEX_DATA,	0},
	{"TEXCOORD",	1,	DXGI_FORMAT_R32G32B32A32_FLOAT,	0,	64,	D3D10_INPUT_PER_VERTEX_DATA,	0},
	{"BONES",	0,	DXGI_FORMAT_R8G8B8A8_UINT,	0,	80,	D3D10_INPUT_PER_VERTEX_DATA,	0},
	{"WEIGHTS",	0,	DXGI_FORMAT_R8G8B8A8_UNORM,	0,	84,	D3D10_INPUT_PER_VERTEX_DATA,	0}
};

... and like this in the HLSL file:

struct VS_INPUT_SKINNED
{
    float3 vPosition				: POSITION;
    float3 vTangent				: TANGENT;
    float3 vBinormal				: BINORMAL;
    float3 vNormal				: NORMAL;
    float4 vUV					: TEXCOORD0;
    float4 vUV2				        : TEXCOORD1;
    int4 ids				        : BONES;
    float4 weights				: WEIGHTS;
};

and I get this warning:

ID3D10Device::CreateInputLayout: The provided input signature expects to read an element with SemanticName/Index: 'BONES'/0 and component(s) of the type 'int32'.  However, the matching entry in the Input Layout declaration, element[6], specifies mismatched format: 'R8G8B8A8_UINT'.  This is not an error, since behavior is well defined: The element format determines what data conversion algorithm gets applied before it shows up in a shader register. Independently, the shader input signature defines how the shader will interpret the data that has been placed in its input registers, with no change in the bits stored.  It is valid for the application to reinterpret data as a different type once it is in the vertex shader, so this warning is issued just in case reinterpretation was not intended by the author. [ STATE_CREATION WARNING #391: CREATEINPUTLAYOUT_TYPE_MISMATCH]

Is this normal? If not, what's wrong here? Thanks
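As the message itself says, this is only a warning about reinterpretation, not an error. If the intent is to read the indices as unsigned integers, matching the HLSL element type to the unsigned format typically makes it go away, e.g. (sketch):

// Sketch: uint4 matches DXGI_FORMAT_R8G8B8A8_UINT, so the debug layer no longer
// reports a signed/unsigned mismatch; the bits delivered to the shader are unchanged.
uint4 ids     : BONES;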


How to compute the bounding volume for an animated (skinned) mesh?

07 November 2013 - 07:48 AM

Hello

 

An example, maybe:

Usually a human character forms a T with its arms in model space.

Then I compute the bounding volume using this pose.

But if, during an animation, the arms are raised upwards, they will move outside the bounding volume, and the mesh could be frustum-culled even though it is visible ...

 

How can I avoid this, please? (Sorry for the poor explanation; it's quite hard for me to explain this in English.)

 

Thanks

