which I then bind to a ShaderResourceView and set as a shader resource
HR(device->CreateShaderResourceView(m_texture, 0, &m_textureRV));
.
.
deviceContext->PSSetShaderResources(0, 1, &m_textureRV);
I render this once and it works just as expected.
Then I try to update the texture..
HR(deviceContext->Map(m_texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource));
Right now I'm just updating with the same buffer, but that won't be the case in reality. After I try to update the texture, it no longer shows up, and nothing renders anymore. I am not getting any errors related to this from the Debug Device. I feel like I am doing everything correctly: creating the texture with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE, then using Map to update the texture. Everything I read says this is the proper method. Either I missed something, or I am doing it incorrectly.
Any ideas?
"There is no secret ingredient.." - Po (Kung Fu Panda)
You're not using this right. What Map() does is reserve a piece of GPU memory for you to copy your data into, and it gives you a pointer that you can use to write into that memory. That's the mappedResource.pData pointer. You're not meant to set that pointer to your data, but instead copy your data into the memory it points to.
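In plain C++ terms, the difference looks like this. This is a standalone analogy with ordinary byte arrays, no D3D calls; the names (driverMemory, myPixels) are made up for illustration:

```cpp
#include <cstring>

// driverMemory stands in for the memory mappedResource.pData points at;
// myPixels stands in for the application's source data.
unsigned char driverMemory[16] = {};              // driver-owned memory
unsigned char myPixels[16]     = { 1, 2, 3, 4 };  // our pixel data

void UpdateWrong()
{
    unsigned char* mappedData = driverMemory;
    // WRONG: this only repoints the local variable at our buffer;
    // the driver's memory is never written, so the texture stays undefined.
    mappedData = myPixels;
    (void)mappedData;
}

void UpdateRight()
{
    unsigned char* mappedData = driverMemory;
    // RIGHT: copy our bytes into the memory Map() handed back.
    std::memcpy(mappedData, myPixels, sizeof(myPixels));
}
```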
I am pretty sure the buffer itself is not the issue, so my guess is it has something to do with my use of memcpy, but I just don't see it.
I feel like I am being a little vague, but I can't think of anything else to say, haha. I can try to elaborate if need be.
"There is no secret ingredient.." - Po (Kung Fu Panda)
The texture resource will have its own pitch (the number of bytes in a row), which is probably different from the pitch of your source data. This pitch is given to you as the "RowPitch" member of D3D11_MAPPED_SUBRESOURCE. So typically you do something like this:
BYTE* mappedData = reinterpret_cast<BYTE*>(mappedResource.pData);
for(UINT i = 0; i < height; ++i)
{
    memcpy(mappedData, buffer, rowspan);   // copy one row of source data
    mappedData += mappedResource.RowPitch; // advance by the texture's pitch
    buffer += rowspan;                     // advance by the source pitch
}
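To make that loop concrete, here is a self-contained version with plain memory buffers and no D3D calls. The function name and the sizes below are made up for illustration; the point is that the source rows are tightly packed while each destination row is padded out to the mapped pitch:

```cpp
#include <cstring>

// Standalone sketch of the pitched copy above (no D3D).
// Source rows are rowSpan bytes and tightly packed; the "mapped"
// destination has mappedPitch bytes per row, with padding at the end.
void CopyWithPitch(unsigned char* mapped, unsigned int mappedPitch,
                   const unsigned char* src, unsigned int rowSpan,
                   unsigned int height)
{
    for (unsigned int i = 0; i < height; ++i)
    {
        std::memcpy(mapped, src, rowSpan);  // copy one row of pixel data
        mapped += mappedPitch;              // step by the texture's RowPitch
        src    += rowSpan;                  // step by the source's tight pitch
    }
}
```

Copying height * rowSpan bytes in one memcpy instead would smear each row into the next row's padding, which is one way a texture ends up unreadable after an update.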
Excellent, works perfectly. Except now I am curious why the pitch of the texture is different when I set the pitch in the D3D11_SUBRESOURCE_DATA description.
"There is no secret ingredient.." - Po (Kung Fu Panda)
You don't set the pitch with D3D11_SUBRESOURCE_DATA. You just tell the runtime what pitch your data uses. The runtime will use this information, together with the real pitch of your texture, to copy the data correctly. Since Map gives you a memory pointer where the driver wants the data to be copied, it tells you the real pitch.
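As for why the texture's real pitch can differ from width times bytes-per-pixel at all: drivers commonly pad each row out to a hardware-friendly alignment boundary. A standalone sketch of that idea; the 256-byte alignment here is a made-up illustrative value, not something D3D11 specifies, and the driver is free to choose whatever pitch it likes:

```cpp
#include <cstddef>

// Hypothetical helper: round a row's byte size up to an alignment boundary.
// Real drivers pick their own pitch; 256 is just an example value.
constexpr std::size_t AlignUp(std::size_t value, std::size_t alignment)
{
    return (value + alignment - 1) / alignment * alignment;
}

// e.g. a 100-pixel-wide RGBA8 row holds 400 bytes of pixel data,
// but a driver aligning rows to 256 bytes would report RowPitch = 512.
```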