D3D12: CreateShaderResourceView crash


Hi everyone, I am new to Dx12 and working on a game project.

My game crashes at CreateShaderResourceView with no information in the debug output, just: 0xC0000005: Access violation reading location 0x000001F22EF2AFE8.

My current code:

CreateShaderResourceView(m_texture, &desc, *cpuDescriptorHandle);

- m_texture address is 0x000001ea3c68c8a0

- cpuDescriptorHandle address is 0x00000056d88fdd50

- desc.Format, desc.ViewDimension, Texture2D.MostDetailedMip, and Texture2D.MipLevels are initialized (see the simplified sketch below).
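For reference, the desc is filled out roughly like this (a simplified sketch, not the exact code; the format value and textureMipCount are placeholders from my wrapper):

D3D12_SHADER_RESOURCE_VIEW_DESC desc = {};
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;                        // placeholder, matches the texture format
desc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
desc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
desc.Texture2D.MostDetailedMip = 0;
desc.Texture2D.MipLevels = textureMipCount;                      // placeholder variable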

The crash always happens at that stage, but not on the same m_texture. I noticed the violation reading location is always somewhere near the m_texture address.

I declared a temp variable to count how many times CreateShaderResourceView has been called; at the moment of the crash it is 17879 (meaning 17879 views were created successfully), and CreateDescriptorHeap for cpuDescriptorHandle's heap has been called 4190 times. Am I hitting any limit? (A simplified sketch of the heap/handle setup is below.)
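The heaps and handles are set up roughly like this (a simplified sketch, not the exact code; m_heap and descriptorIndex are illustrative names):

D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
heapDesc.NumDescriptors = 20;                                    // roughly what I use per heap
heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
m_device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&m_heap));

UINT increment = m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
D3D12_CPU_DESCRIPTOR_HANDLE handle = m_heap->GetCPUDescriptorHandleForHeapStart();
handle.ptr += descriptorIndex * increment;                       // descriptorIndex stays below NumDescriptors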

One more piece of information: if I set the mip level of every texture to 1 at creation, there seems to be no crash, but the image quality is bad. I'm not sure whether that is related or not.

Could anyone give me some advice?


In cases like this, turning on the debug layer for the DirectX device can help you a lot.
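A minimal sketch of enabling it (this has to happen before the device is created; debugController is just an illustrative name):

ID3D12Debug* debugController = nullptr;
if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController))))
{
    debugController->EnableDebugLayer();                         // turn on the D3D12 debug layer
    debugController->Release();
}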

A create-view failure results in a device removal, but it should not crash in the first place. Are you sure you did not destroy the texture and are left with a stale pointer?

When you created your descriptor heap, how many records did you give it?

Thank you guys, below are my answers.

21 hours ago, turanszkij said:

In cases like this, turning on the debug layer for the DirectX device can help you a lot.

The debug layer is on. I still see some D3D12 warnings, so I'm sure it is working, but the warnings are not related to this; they only say that the clear value of some render targets is not the same as the one they were created with.

13 hours ago, galop1n said:

A create-view failure results in a device removal, but it should not crash in the first place. Are you sure you did not destroy the texture and are left with a stale pointer?

No, there is no device removal in the output log, and I did not destroy any texture.

11 hours ago, CortexDragon said:

When you created your descriptor heap, how many records did you give it?

Each descriptor heap has at most about ~20 NumDescriptors; that's quite small, I think.

Still stuck here. Do I need to make sure m_texture is not being updated in any command list while it is being used to create a shader resource view?

What's the call stack of the crash? Is it in your app? In D3D12.dll? In the driver?

@SoldierOfLight

Yes, the last frame of the call stack is in D3D12.dll:

CreateShaderResourceView >> d3d12SDKLayers.dll >> D3D12.dll : crash

:)

Hi @SoldierOfLight, I have an off-topic question about this:

On 4/15/2016 at 11:09 AM, SoldierOfLight said:

The reason for the RowSizeInBytes parameter has to do with pitch alignment. The hardware requires a pitch alignment of D3D12_TEXTURE_DATA_PITCH_ALIGNMENT = 256 bytes, while each row may have less. Specifically, the last row is allowed to consist only of RowSizeInBytes, not necessarily the full row pitch. The pitch is simply defined as the number of bytes between rows, but is technically unrelated to the number of bytes in a row. Did you know that you can actually pass a RowPitch of 0 to read the same row over and over again?

If I have a 128x128 texture in BC1 format, mip level 2 will be 32x32, so RowSizeInBytes will be 64 (and NumRows will be 8), but due to pitch alignment each row will be 256 bytes. Will the rest be empty? And when the API uses it, does it use 64 bytes per row to build the texture, or does it build a texture with a 256x8 layout?
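For reference, this is roughly how those numbers can be queried (a sketch; m_device, m_texture, and texDesc are assumed names, and subresource 2 is mip 2 of a non-arrayed 2D texture):

D3D12_RESOURCE_DESC texDesc = m_texture->GetDesc();              // 128x128, DXGI_FORMAT_BC1_UNORM
D3D12_PLACED_SUBRESOURCE_FOOTPRINT layout = {};
UINT numRows = 0;
UINT64 rowSizeInBytes = 0;
UINT64 totalBytes = 0;
m_device->GetCopyableFootprints(&texDesc, 2, 1, 0, &layout, &numRows, &rowSizeInBytes, &totalBytes);
// For this case I would expect rowSizeInBytes == 64, numRows == 8, layout.Footprint.RowPitch == 256.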

For textures this is vendor dependent. All hardware uses tiling and swizzling to improve texture reads, and vendors have a full set of tweaks here, even for a single format. You usually create your textures with layout UNKNOWN, so you don't know the exact padding structure. There is a 64KB standard swizzle layout that I believe no one implements, and a 64KB undefined swizzle layout for tiled resources (with mip tail).

 

You have the GetResourceAllocationInfo function (needed to know the memory required for a placed resource), so you can measure the delta between the memory actually allocated and the memory the data itself needs (see the sketch below).

And you won't have a 256x8 texture; more likely a footprint of 256x64. Small textures waste the most memory: on GCN for example, BCn formats use 32-pixel alignment. As for the row size being aligned to 256, that only applies to transfers from/to buffers; it does not influence the texture layout.
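A minimal sketch of measuring that delta (assuming m_device and the texDesc from above; the copy-footprint total is used as a rough stand-in for the raw data size):

D3D12_RESOURCE_ALLOCATION_INFO allocInfo =
    m_device->GetResourceAllocationInfo(0, 1, &texDesc);         // size/alignment a placed resource would need

UINT64 totalCopyBytes = 0;
m_device->GetCopyableFootprints(&texDesc, 0, texDesc.MipLevels, 0,
                                nullptr, nullptr, nullptr, &totalCopyBytes);

UINT64 overheadBytes = allocInfo.SizeInBytes - totalCopyBytes;   // rough padding/alignment overhead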

