
Need help! My skinned mesh renders incorrectly on an ARM device (Surface RT)



#1 kgs   Members   -  Reputation: 205


Posted 27 November 2013 - 09:09 PM

My program is a Windows Store DirectX app.

As the topic title says, my code works fine as a Win32 build, but when I debug my demo on my Surface RT some vertex positions are wrong, as in the photo below.

[screenshot: the skinned mesh renders with distorted vertex positions]

I have done some tests on my program.

This is my VertexShaderInput:

struct VertexShaderInput
{
    float3 pos : POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
    float4 Tangent	: TANGENT;
    float4 Weights    : WEIGHTS;
    uint4 BoneIndices : BONEINDICES;
};

When I use only the pos, norm, and tex data (a mesh with no skinned-animation data), my mesh is displayed correctly.

So I suspect the problem may be the BoneIndices, which is a uint4 in my HLSL code.

For more detail, here is my full HLSL code:

cbuffer SimpleConstantBuffer : register(b0)
{
    matrix model;
    matrix view;
    matrix projection;
    matrix gBoneTransforms[59];
};

struct VertexShaderInput
{
    float3 pos : POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
    float4 Tangent : TANGENT;
    float4 Weights    : WEIGHTS;
    uint4 BoneIndices : BONEINDICES;
};

struct PixelShaderInput
{
    float4 pos : SV_POSITION;
    float3 norm : NORMAL;
    float2 tex : TEXCOORD0;
};

PixelShaderInput SimpleVertexShader(VertexShaderInput input)
{
    PixelShaderInput vertexShaderOutput;

    float weights[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    weights[0] = input.Weights.x;
    weights[1] = input.Weights.y;
    weights[2] = input.Weights.z;
    weights[3] = input.Weights.w;
    float4 pos = float4(input.pos, 1.0f);
    float4 skinnedPos = float4(0.0f, 0.0f, 0.0f, 0.0f);
    float4 norm = float4(normalize(input.norm), 0.0f);
    float4 normFinal = float4(0.0f, 0.0f, 0.0f, 1.0f);
    for (int i = 0; i < 4; ++i)
    {
        skinnedPos += weights[i] * mul(pos, gBoneTransforms[input.BoneIndices[i]]);
        normFinal += weights[i] * mul(norm, gBoneTransforms[input.BoneIndices[i]]);
    }
    skinnedPos.w = 1.0f;
    pos = mul(skinnedPos, model);
    pos = mul(pos, view);
    pos = mul(pos, projection);
    norm = mul(normFinal, model);

    vertexShaderOutput.pos = pos;
    vertexShaderOutput.tex = input.tex;
    vertexShaderOutput.norm = normalize(normFinal.xyz);
    return vertexShaderOutput;
}
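For reference, that cbuffer holds 62 matrices (3 + 59), i.e. 3968 bytes. A CPU-side mirror of it might look like the sketch below; the struct and field names are hypothetical, and with the default column_major packing the matrices are usually transposed before being copied into the buffer.

#include <DirectXMath.h>

using namespace DirectX;

// Hypothetical CPU-side mirror of SimpleConstantBuffer (names are made up).
// 62 float4x4 matrices = 3968 bytes, already a multiple of 16 as D3D11 requires.
struct SimpleConstantBufferData
{
    XMFLOAT4X4 model;
    XMFLOAT4X4 view;
    XMFLOAT4X4 projection;
    XMFLOAT4X4 boneTransforms[59];
};

static_assert(sizeof(SimpleConstantBufferData) == 62 * 64,
              "CPU struct must match the HLSL cbuffer size");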
 

And the vertex structure in my code is:

struct VertexPNTTBI
{
	XMFLOAT3 pos;
	XMFLOAT3 norm;
	XMFLOAT2 tex;
	XMFLOAT4 tan;
	XMFLOAT4 blendWeight;
	BYTE blendIndice[4];
};

And my D3D11_INPUT_ELEMENT_DESC is:

const D3D11_INPUT_ELEMENT_DESC VertexLayoutDesc[] =
{
     { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
     { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
     { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
     { "TANGENT", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0 },
     { "WEIGHTS", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 48, D3D11_INPUT_PER_VERTEX_DATA, 0 },
     { "BONEINDICES", 0, DXGI_FORMAT_R8G8B8A8_UINT, 0, 64, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

This code works fine on my local machine (Win32), but I don't know whether there is something I must change between Win32 and ARM.
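One cheap thing to verify on both targets is that the compiler lays out VertexPNTTBI exactly as VertexLayoutDesc assumes, since padding and alignment can differ between compilers and targets. A minimal compile-time sketch (it just restates the struct above, with uint8_t standing in for BYTE):

#include <cstddef>   // offsetof
#include <cstdint>
#include <DirectXMath.h>

using namespace DirectX;

// Same layout as the VertexPNTTBI above, with uint8_t standing in for BYTE.
struct VertexPNTTBI
{
    XMFLOAT3 pos;
    XMFLOAT3 norm;
    XMFLOAT2 tex;
    XMFLOAT4 tan;
    XMFLOAT4 blendWeight;
    uint8_t  blendIndice[4];
};

// Each offset must match the corresponding entry in VertexLayoutDesc, and
// sizeof(VertexPNTTBI) must match the stride passed to IASetVertexBuffers.
static_assert(offsetof(VertexPNTTBI, pos)         ==  0, "POSITION offset");
static_assert(offsetof(VertexPNTTBI, norm)        == 12, "NORMAL offset");
static_assert(offsetof(VertexPNTTBI, tex)         == 24, "TEXCOORD offset");
static_assert(offsetof(VertexPNTTBI, tan)         == 32, "TANGENT offset");
static_assert(offsetof(VertexPNTTBI, blendWeight) == 48, "WEIGHTS offset");
static_assert(offsetof(VertexPNTTBI, blendIndice) == 64, "BONEINDICES offset");
static_assert(sizeof(VertexPNTTBI)                == 68, "vertex stride");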

I have been working on this problem for 4 days.

Any help would be appreciated! Thanks.


Edited by kgs, 27 November 2013 - 09:17 PM.

My english is very poor!


#2 Samith   Members   -  Reputation: 1876


Posted 27 November 2013 - 09:20 PM

From the description of your problem it sounds like everything is working except the bones. You might have to swap the order you write the bone indices into the blendIndice field, because ARM is a big endian processor (edit: nope), whereas your Windows machine is little endian. For example, if you have a vert with bone indices like so:

vertex.blendIndice[0] = 10;
vertex.blendIndice[1] = 11;
vertex.blendIndice[2] = 12;
vertex.blendIndice[3] = 13;

// switch to this on arm:
// reverse byte ordering to accommodate big endian processor
vertex.blendIndice[0] = 13;
vertex.blendIndice[1] = 12;
vertex.blendIndice[2] = 11;
vertex.blendIndice[3] = 10;

I'm not 100% positive how DirectX reads R8G8B8A8_UINT attributes on little/big endian machines, so I might be wrong, but your problem really sounds like an endian problem, so swapping the ordering of the blend indices is the first thing I would try.

 

EDIT: removed incorrect stuff!


Edited by Samith, 27 November 2013 - 11:31 PM.


#3 kgs   Members   -  Reputation: 205


Posted 27 November 2013 - 11:07 PM

Samith, on 27 November 2013 - 09:20 PM, said:
[...]

 

EDIT: More about component packing and endianness: an R8G8B8A8 format is 32 bits (4 bytes) wide and the R component comes from the least significant 8 bits, the G from next least significant bits, B from the next, and A from the most significant bits. On a little endian machine (your windows desktop) the least significant byte is the first byte in memory. On a big endian machine (ARM) it's the opposite: the first byte is the MOST significant byte. So, when DirectX reads your vertices, it's going to treat the blendIndice array as a uint32, and it'll take the R component from the least significant byte, G from the next least, B from the next, etc. The least significant byte on ARM will be blendIndice[3], whereas on your Windows machine it'll be blendIndice[0].

Thanks for your reply. I have tried your method, but it didn't work. My bone animation is correct; only the skinned mesh is incorrect.


Edited by kgs, 27 November 2013 - 11:25 PM.

My english is very poor!

#4 Samith   Members   -  Reputation: 1876


Posted 27 November 2013 - 11:33 PM


kgs, on 27 November 2013 - 11:07 PM, said:
Thanks for your reply. I have tried your method, but it didn't work.

 

Damn. Apparently the Surface RT isn't even (necessarily) big endian, so I guess that advice would never have worked!



#5 kgs   Members   -  Reputation: 205


Posted 28 November 2013 - 12:59 AM

 



I tested my program with other model files and found that not every model has this issue.
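Since only some models are affected, it may be worth validating the skinning data right after loading: every bone index should be below 59 (the size of gBoneTransforms) and the four weights should sum to roughly 1. A minimal sketch, assuming the VertexPNTTBI struct from the first post; the function name ValidateSkinData is made up:

#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical check run after loading a model: reports vertices whose bone
// indices fall outside gBoneTransforms or whose weights do not sum to ~1.
bool ValidateSkinData(const std::vector<VertexPNTTBI>& vertices, unsigned boneCount)
{
    bool ok = true;
    for (std::size_t v = 0; v < vertices.size(); ++v)
    {
        const VertexPNTTBI& vert = vertices[v];

        float sum = vert.blendWeight.x + vert.blendWeight.y +
                    vert.blendWeight.z + vert.blendWeight.w;
        if (std::fabs(sum - 1.0f) > 0.01f)
        {
            std::printf("vertex %llu: weights sum to %f\n",
                        static_cast<unsigned long long>(v), sum);
            ok = false;
        }

        for (int i = 0; i < 4; ++i)
        {
            if (vert.blendIndice[i] >= boneCount)
            {
                std::printf("vertex %llu: bone index %u out of range\n",
                            static_cast<unsigned long long>(v),
                            static_cast<unsigned>(vert.blendIndice[i]));
                ok = false;
            }
        }
    }
    return ok;
}

// Usage (boneCount = 59 to match the cbuffer above):
//     if (!ValidateSkinData(vertices, 59)) { /* inspect this model's data */ }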


My english is very poor!



