How do I debug whether my constant buffer is uploaded correctly?

So I'm trying to upload an array of matrices to my vertex shader, like so:
cbuffer cbJointTransforms : register(b6)
{
    float4x4 gBoneTransforms[96];
};
I think it is done correctly; however, when I try to draw my model using this data to transform the vertices, nothing shows up on screen. If I just do the basic WVP multiplication it draws fine. I can't find any obvious mistake in the shader code, so for now I have to assume I'm uploading this data to the GPU wrong.
 
Here's how I do it:
//Constant buffer
D3D11_BUFFER_DESC cbDesc;
cbDesc.ByteWidth = sizeof(XMMATRIX)*aFinalMatrices.size(); 
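// Note: D3D11 requires a constant buffer's ByteWidth to be a multiple of
// 16 bytes; sizeof(XMMATRIX) is 64, so any whole number of matrices is fine.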
cbDesc.Usage = D3D11_USAGE_DYNAMIC;
cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
cbDesc.MiscFlags = 0;
cbDesc.StructureByteStride = 0;


D3D11_SUBRESOURCE_DATA InitData;
InitData.pSysMem = &aFinalMatrices[0];
InitData.SysMemPitch = 0;
InitData.SysMemSlicePitch = 0;


hr = this->device->CreateBuffer(&cbDesc, &InitData, &cbJointTransforms);

aFinalMatrices is a vector<DirectX::XMMATRIX>

 
I don't know how to debug the shader like you do with regular C++ code in Visual Studio (i.e. setting breakpoints and checking locals). I'm compiling it at runtime.
I tried using Graphics Debugging but it doesn't show me any values in the buffer.
 
Any help appreciated!! I've been trying to solve this for hours with nothing to show for it. I just want to get going with debugging the actual animation, not the friggin' constant buffer...
 
Shader code: (relevant parts)
cbuffer cbJointTransforms : register(b6)
{
    float4x4 gBoneTransforms[96];
};


struct VS_IN
{
    float3 pos : POSITION;
    float3 nor : NORMAL;
    //float2 UV : TEXCOORD;
    float4 weights : WEIGHTS;
    int4 boneIndices : BONEINDICES;
};
VS_OUT VS(VS_IN input)
{
    VS_OUT output;


    //FRANK D. LUNA (p.781)


    //init array
    float weights[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    weights[0] = input.weights.x;
    weights[1] = input.weights.y;
    weights[2] = input.weights.z;
    weights[3] = input.weights.w;


    //Blend verts
    float3 position = float3(0.0f, 0.0f, 0.0f);
    float3 nor = float3(0.0f, 0.0f, 0.0f);


    for (int i = 0; i < 4; i++)
    {
        if (input.boneIndices[i] >= 0) //unused bone indices are negative
        {
            //pos must be extended to float4 (w = 1) to be transformed by a float4x4
            position += weights[i] * mul(float4(input.pos, 1.0f),
                gBoneTransforms[input.boneIndices[i]]).xyz;
            nor += weights[i] * mul(input.nor,
                (float3x3)gBoneTransforms[input.boneIndices[i]]);
        }
    }


    output.pos = mul(Proj, mul(View, mul(World, float4(position, 1.0))));
    output.wPos = mul(World, float4(position, 1.0f));
    output.nor = mul(NormalMatrix, nor);
    output.nor = normalize(output.nor);
    output.uv = float2(0.0f, 0.0f);
    return output;
}

You're doing your world/view/proj math using column-vector conventions (mat * vec), and your bone matrix math using row-vector conventions (vec * mat). They should probably both be using the same conventions.

The XMMATRIX library is based around row-vector maths and row-major array indexing. HLSL works just as well with either maths convention, but defaults to column-major array indexing.
This means when you uploaded your data, you've copied from a row-major indexed array into a column-major indexed array... Your world/view/proj code is tolerating this error by switching to the opposite mathematical conventions than you're using on the C++ side (two wrongs do make a right). An alternative solution is to replace "float4x4" with "row_major float4x4" in your HLSL code -- this tells the compiler that your source arrays are using row-major indexing, which lets it interpret your XMMATRIX structures correctly without the need to flip all your maths back to front.
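For example, you could transpose each matrix on the C++ side as you copy it into the buffer. A minimal sketch, assuming a pointer to the mapped (or initial) buffer memory and the aFinalMatrices vector from your post:

// Transpose row-major XMMATRIX data into HLSL's default column-major
// cbuffer layout while copying it to GPU-visible memory.
XMMATRIX* dest = reinterpret_cast<XMMATRIX*>(mappedResource.pData); // hypothetical mapped pointer
for (size_t i = 0; i < aFinalMatrices.size(); ++i)
    dest[i] = XMMatrixTranspose(aFinalMatrices[i]);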

As for debugging, I highly recommend RenderDoc. It will show you your cbuffer layouts and permits breakpoints and inspection of locals, as long as you compile your shaders with debug info enabled.



Thanks for the quick reply! This might be part of the problem; however, I also noticed that my weights[i].x values are way off.
It sounds really weird that they chose row-major for the built-in matrix structs and column-major for HLSL... oh well, I'm sure there's a reason for it (???).

As for the shader debug info, how do I enable that? I compile the shaders at runtime with D3DCompileFromFile()

Thanks  :D


This might be part of the problem, however i also noticed that my weights[i].x values are way off.

You're reading them in the VS as integers. What type are they in the buffers and the input layout? You probably want to declare them as float inputs in the VS and use a _UNORM data format.

Sounds really weird that they choose row-major for the built in matrix structs and column-major for HLSL... oh well, im sure theres a reason for it (???).

I don't know if there's a good reason :lol:
In my experience, mathematicians prefer column-vector conventions, but row-vector conventions were easier to write on a typewriter, so they're common in early computer graphics papers!
Before shaders, D3D fixed function was all row-major array indexing and row-vector maths. GL used column-major arrays and I can't remember which kind of math. For some reason D3D saw the light and chose column-major arrays by default when they created HLSL... Maybe Nvidia's influence as they co-authored the language? But yeah, they kept doing math on the CPU the way they always had. The XMMATRIX class isn't actually a core part of D3D though, so technically D3D doesn't have an official stance on array majorness or mathematical conventions.

As for the shader debug info, how do I enable that? I compile the shaders at runtime with D3DCompileFromFile

There's a flags parameter that you can pass D3DCOMPILE_DEBUG to.
There's also a flag that forces all matrices to row-major by default if you don't want to insert the row_major keyword everywhere.
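Something like this; a sketch where the file name and entry point are placeholders, not taken from your code:

UINT flags = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION; // embed debug info, keep code steppable
flags |= D3DCOMPILE_PACK_MATRIX_ROW_MAJOR; // default all matrices to row-major packing

ID3DBlob* vsBlob = nullptr;
ID3DBlob* errors = nullptr;
HRESULT hr = D3DCompileFromFile(L"SkinnedVS.hlsl", nullptr, nullptr,
    "VS", "vs_5_0", flags, 0, &vsBlob, &errors);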


Alright, so, I managed to get the locals to show up in VS2015, and.. well...

[screenshot: the gBoneTransforms values as shown in the shader debugger]

This is actually the first time I've tried uploading an array of matrices like this to a shader, and I'm quite obviously doing something wrong.
The code that creates the buffer is in the original post.

I update them every frame like this:

void Animator::DrawAndUpdate(float deltaTime)
{
	PreDraw();

	this->Update(deltaTime);
	this->UpdateConstantBuffers();
	this->deviceContext->Draw(this->mesh->verts.size(), 0);
}

void Animator::UpdateConstantBuffers()
{
	D3D11_MAPPED_SUBRESOURCE mappedResource;
	ZeroMemory(&mappedResource, sizeof(D3D11_MAPPED_SUBRESOURCE));
	deviceContext->Map(cbJointTransforms, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);

	memcpy(mappedResource.pData, &aFinalMatrices, sizeof(XMMATRIX) * aFinalMatrices.size());

	deviceContext->Unmap(cbJointTransforms, 0);
} 

Does anyone see a problem with it? If not, what could be the reason for these values?
Worth noting: the data I update the buffer with is only three XMMATRIX values, since the mesh I'm using has only 3 joints.
I don't see why that would be an issue, but it's maybe worth mentioning.

Here's a print of what they should look like:

[screenshot: the expected matrix values, printed from the CPU side]

 


 

This might be part of the problem, however i also noticed that my weights[i].x values are way off.

You're reading them in the VS as integers. What type are they in the buffers and the input layout? You probably want to declare them as float inputs in the VS and use a _UNORM data format.

 

The weights are float, indices are int :P

Layout looks like this:

	D3D11_INPUT_ELEMENT_DESC input_desc[] = {
		{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "BONEINDICES", 0, DXGI_FORMAT_R32G32B32A32_SINT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0},
		{ "WEIGHTS", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 36, D3D11_INPUT_PER_VERTEX_DATA, 0}
	};

Maybe format is the problem?

DXGI_FORMAT_R32G32B32A32_SINT

memcpy(mappedResource.pData, &aFinalMatrices, sizeof(XMMATRIX) * aFinalMatrices.size());

Is aFinalMatrices a std::vector? This won't copy your matrix data to the cbuffer; instead it copies the vector object's internal bookkeeping (its pointers), starting at the address of the vector itself. Use &aFinalMatrices[0] or aFinalMatrices.data() instead.
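A fixed version of the update function from the earlier post might look like this (same names as in the thread):

void Animator::UpdateConstantBuffers()
{
	D3D11_MAPPED_SUBRESOURCE mappedResource;
	deviceContext->Map(cbJointTransforms, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);

	// .data() points at the matrices themselves, not at the vector object
	memcpy(mappedResource.pData, aFinalMatrices.data(),
	       sizeof(XMMATRIX) * aFinalMatrices.size());

	deviceContext->Unmap(cbJointTransforms, 0);
}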

	D3D11_INPUT_ELEMENT_DESC input_desc[] = {
		{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
		{ "BONEINDICES", 0, DXGI_FORMAT_R32G32B32A32_SINT, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0},
		{ "WEIGHTS", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 36, D3D11_INPUT_PER_VERTEX_DATA, 0}
	};

 

Why do the indices and weights overlap?

Indices are 16 bytes starting at offset 24 bytes, but weights starts just 12 bytes later. You have an overlap between BoneIndices.w and Weights.x.


 

Why do the indices and weights overlap?

Indices are 16 bytes starting at offset 24 bytes, but weights starts just 12 bytes later. You have an overlap between BoneIndices.w and Weights.x.

 

Yes I noticed this after I posted it. Fixed it, weights seem to work like they should now!

 

memcpy(mappedResource.pData, &aFinalMatrices, sizeof(XMMATRIX) * aFinalMatrices.size());

Is aFinalMatrices a std::vector? This won't copy your matrix data to the cbuffer; instead it copies the vector object's internal bookkeeping (its pointers), starting at the address of the vector itself. Use &aFinalMatrices[0] or aFinalMatrices.data() instead.

 

This was the problem, thank you for pointing it out!
Animation works as expected now!
Thank you all for your help! Now I can finally start making some actual animations and clean up the code a bit!  :D


You should probably be making use of D3D11_APPEND_ALIGNED_ELEMENT instead of calculating each attribute offset manually. If you ever want to go back and compress one of the attributes (you really shouldn't be using 32 bit signed indices!) you'll have to recalculate the offsets for every attribute that appears after the one you're compressing.
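For example, the same layout from above with automatic offsets:

D3D11_INPUT_ELEMENT_DESC input_desc[] = {
	{ "POSITION",    0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "NORMAL",      0, DXGI_FORMAT_R32G32B32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "BONEINDICES", 0, DXGI_FORMAT_R32G32B32A32_SINT,  0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "WEIGHTS",     0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 }
};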


The weights are float, indices are int :P

Hahah oops :o

As Adam says above, there are some optimization opportunities if you feel like continuing work on this :wink:

Weights are often stored as 8-bit UNORM and indices as 8-bit UINT. That will save you 24 bytes per vertex.

However, that will require you to change your "unused bone indices" condition. You could use "weight == 0" instead... or you can get rid of that branch completely, as any computations with a weight of zero have no effect anyway. If-statements can have a negative performance impact on the GPU, so it's worth testing with and without them :wink:
Lastly, you can try putting "[unroll]" in front of the for loop to tell the compiler not to use any dynamic jumps at runtime.
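A sketch of what the compressed vertex could look like on the CPU side; the struct name and field order here are illustrative, not from the thread (needs <cstdint>):

struct SkinnedVertex
{
	float   pos[3];      // DXGI_FORMAT_R32G32B32_FLOAT
	float   nor[3];      // DXGI_FORMAT_R32G32B32_FLOAT
	uint8_t indices[4];  // DXGI_FORMAT_R8G8B8A8_UINT  (bone indices 0-255)
	uint8_t weights[4];  // DXGI_FORMAT_R8G8B8A8_UNORM (0-255 read as 0.0-1.0 in the shader)
};                       // 32 bytes per vertex instead of 56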

