VietNN

DX12 D3D12: CreateShaderResourceView crash

Recommended Posts

Hi everyone, I am new to DX12 and working on a game project.

My game crashes at CreateShaderResourceView with no information in the debug output, just: 0xC0000005: Access violation reading location 0x000001F22EF2AFE8.

My current code:

CreateShaderResourceView(m_texture, &desc, *cpuDescriptorHandle);

- m_texture address is 0x000001ea3c68c8a0

- cpuDescriptorHandle address is 0x00000056d88fdd50

- desc.Format, desc.ViewDimension, desc.Texture2D.MostDetailedMip, and desc.Texture2D.MipLevels are all initialized.

The crash always happens at that stage, but not on the same m_texture. I noticed that the access-violation address is always somewhere near the m_texture address.

I declared a temporary counter to check how many times CreateShaderResourceView has already been called; at the moment of the crash it is 17879 (meaning 17879 views were created successfully), and CreateDescriptorHeap for cpuDescriptorHandle was called 4190 times. Am I hitting some limit?
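For illustration, a minimal sketch (not my exact code; nextFreeSlot is just an illustrative counter) of how I understand the handle should be derived from the heap, with a bounds check against NumDescriptors:

#include <cassert>
#include <d3d12.h>

// Sketch: derive the CPU handle from a heap slot and guard against running past
// the heap's NumDescriptors before creating the SRV. A handle past the end of the
// heap makes CreateShaderResourceView write into unrelated memory, which would
// look exactly like a "random" access violation near other allocations.
void CreateSrvChecked(ID3D12Device* device,
                      ID3D12DescriptorHeap* heap,
                      const D3D12_DESCRIPTOR_HEAP_DESC& heapDesc,  // desc the heap was created with
                      UINT nextFreeSlot,                           // illustrative per-heap counter
                      ID3D12Resource* texture,
                      const D3D12_SHADER_RESOURCE_VIEW_DESC& srvDesc)
{
    assert(nextFreeSlot < heapDesc.NumDescriptors);

    const UINT increment = device->GetDescriptorHandleIncrementSize(heapDesc.Type);

    D3D12_CPU_DESCRIPTOR_HANDLE handle = heap->GetCPUDescriptorHandleForHeapStart();
    handle.ptr += SIZE_T(nextFreeSlot) * increment;

    device->CreateShaderResourceView(texture, &srvDesc, handle);
}

If the assert fires, the heap is simply too small for the number of SRVs being created.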

One more thing: if I set the mip level of every texture to 1 at creation, there seems to be no crash, but the visual quality is bad. I'm not sure whether that is related.

Could anyone give me some advice?


A create-view failure results in a device removal, but it should not have crashed in the first place. Are you sure you did not destroy the texture and are now holding a stale pointer?
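For what it's worth, a quick sketch of how to confirm whether the device got removed right after the suspicious call (just a check, not a fix; 'device' is assumed to be your ID3D12Device):

// Sketch: S_OK means the device is still alive.
HRESULT removedReason = device->GetDeviceRemovedReason();
if (removedReason != S_OK)
{
    // removedReason is e.g. DXGI_ERROR_DEVICE_HUNG or DXGI_ERROR_DEVICE_REMOVED.
    __debugbreak();   // break here and inspect the reason
}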


Thank you guys, here are my answers:

21 hours ago, turanszkij said:

In cases like this, turning on the debug layer for the DirectX device could help you a lot.

The debug layer is on, and I still see some D3D12 warnings, so I am sure it is working. But the warnings are not related to this; they are just about the clear value of some render targets not matching the value they were created with.
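For completeness, this is roughly how I enable it before device creation (simplified sketch, no error handling):

#include <d3d12.h>

// Sketch: enable the D3D12 debug layer before the device is created, so invalid
// descriptor writes and other misuse are reported in the debug output.
void EnableDebugLayer()
{
    ID3D12Debug* debug = nullptr;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
    {
        debug->EnableDebugLayer();
        debug->Release();
    }
}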

13 hours ago, galop1n said:

A create-view failure results in a device removal, but it should not have crashed in the first place. Are you sure you did not destroy the texture and are now holding a stale pointer?

No, there is no device-removed message in the output log, and I did not destroy any texture.

11 hours ago, CortexDragon said:

When you created your descriptor heap, how many records did you give it?

Each descriptor heap has at most about 20 NumDescriptors, which is quite small I think.
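For comparison, a sketch of what a single bigger heap would look like (4096 is just an illustrative number, not a recommendation):

// Sketch: one CBV/SRV/UAV heap sized for many SRVs instead of ~20 per heap.
D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
heapDesc.Type           = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
heapDesc.NumDescriptors = 4096;                              // illustrative capacity
heapDesc.Flags          = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;   // CPU-only staging heap
heapDesc.NodeMask       = 0;

ID3D12DescriptorHeap* srvHeap = nullptr;
HRESULT hr = device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&srvHeap));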


Still stuck here. Do I need to make sure m_texture is not being updated in any command list while it is being used to create a shader resource view?


Hi @SoldierOfLight, I have an off-topic question about this:

On 4/15/2016 at 11:09 AM, SoldierOfLight said:

The reason for the RowSizeInBytes parameter has to do with pitch alignment. The hardware requires a pitch alignment of D3D12_TEXTURE_DATA_PITCH_ALIGNMENT = 256 bytes, while each row may have less. Specifically, the last row is allowed to consist only of RowSizeInBytes, not necessarily the full row pitch. The pitch is simply defined as the number of bytes between rows, but is technically unrelated to the number of bytes in a row. Did you know that you can actually pass a RowPitch of 0 to read the same row over and over again?

If I have a 128x128 texture in BC1 format, mip level 2 will be 32x32, so RowSizeInBytes will be 64 (and the number of rows will be 8). But due to pitch alignment each row will take 256 bytes, so will the rest be empty? And when the API uses it, does it use the 64 bytes per row to build the texture, or does it build a 256x8 texture?
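To see the numbers concretely, a small sketch (assuming the device and a matching resource desc) that queries the footprint of that mip:

// Sketch: query the copyable footprint of mip 2 of a 128x128 BC1 texture to
// compare RowPitch against RowSizeInBytes.
D3D12_RESOURCE_DESC texDesc = {};
texDesc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
texDesc.Width            = 128;
texDesc.Height           = 128;
texDesc.DepthOrArraySize = 1;
texDesc.MipLevels        = 8;
texDesc.Format           = DXGI_FORMAT_BC1_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN;

D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint = {};
UINT   numRows        = 0;
UINT64 rowSizeInBytes = 0;
UINT64 totalBytes     = 0;

// Subresource 2 = mip level 2 of array slice 0.
device->GetCopyableFootprints(&texDesc, 2, 1, 0,
                              &footprint, &numRows, &rowSizeInBytes, &totalBytes);

// Expected here: rowSizeInBytes == 64, numRows == 8,
// footprint.Footprint.RowPitch rounded up to a multiple of 256.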


This is vendor dependent for textures. All hardware uses tiling and swizzling to improve texture reads, and there is a full set of tweaks here even for a single format. You usually create your textures with the UNKNOWN layout, so you don't know the exact padding structure. There is a 64KB standard swizzle layout that I believe no one implements, and a 64KB undefined swizzle layout for tiled resources (with mip tails).

 

You have the GetResourceAllocationInfo function (needed to know the memory requirements of a placed resource), so you can measure the delta between the memory actually allocated and the actual data size.

And you won't have a 256x8 texture; more likely a footprint of 256x64. Small textures waste the most memory; on GCN for example, BCn formats use 32-pixel alignment. As for the row size being aligned to 256 bytes, that only applies to transfers from/to buffers; it does not influence the texture layout.
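A rough sketch of measuring that delta ('device' and 'texDesc' assumed; note the copyable total itself already contains the 256-byte row-pitch padding):

// Sketch: compare the driver's allocation size for a texture against the
// copyable (upload) size reported for all of its subresources.
D3D12_RESOURCE_ALLOCATION_INFO allocInfo =
    device->GetResourceAllocationInfo(0 /*visible node mask*/, 1, &texDesc);

UINT64 copyableBytes = 0;
device->GetCopyableFootprints(&texDesc, 0, texDesc.MipLevels, 0,
                              nullptr, nullptr, nullptr, &copyableBytes);

UINT64 paddingBytes = allocInfo.SizeInBytes - copyableBytes;   // layout/padding overhead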

