## Recommended Posts

So, I've been playing a bit with geometry shaders recently and I've found a very interesting bug. Let me show you a code example:

```hlsl
struct Vert2Geom
{
    float4 mPosition : SV_POSITION;
    float2 mTexCoord : TEXCOORD0;
    float3 mNormal : TEXCOORD1;
    float4 mPositionWS : TEXCOORD2;
};

struct Geom2Frag
{
    float4 mPosition : SV_POSITION;
    nointerpolation float4 mAABB : AABB;
    float3 mNormal : TEXCOORD1;
    float2 mTexCoord : TEXCOORD0;
    nointerpolation uint mAxis : AXIS;
    float3 temp : TEXCOORD2;
};

...

[maxvertexcount(3)]
void GS(triangle Vert2Geom input[3], inout TriangleStream<Geom2Frag> output)
{
    ...
}
```

So, as soon as I have this Geom2Frag structure, there is a crash. To be precise, the only message I get is:

D3D12: Removing Device.

Now, if Geom2Frag's last attribute is just a float2 (so the structure is 4 bytes shorter), there is no crash and everything works as it should. I tried to look at the limitations for the Shader Model 5.1 profiles, and either I overlooked one for geometry shader outputs (which is more than possible - MSDN is confusing in many ways ... but a 64-byte limit seems way too low), or the shader compiler is doing something iffy for me.

Any ideas why this might happen?

##### Share on other sites

If you got a device removed, then it's likely that the GPU or driver crashed on you. You can try to confirm this by running on another GPU if you have one (for instance, by enabling your integrated Intel GPU if you're on a desktop), or by selecting the WARP adapter.
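For reference, a minimal sketch of the WARP route (the helper name is mine and error handling is omitted):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create the D3D12 device on the WARP software adapter instead of the default hardware GPU.
// If the crash disappears under WARP, the hardware driver becomes the prime suspect.
ComPtr<ID3D12Device> CreateWarpDevice()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter> warpAdapter;
    factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));

    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));
    return device;
}
```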

You may also want to try compiling as gs_5_0 instead of gs_5_1 and seeing if the generated DXBC is any different. I've seen a few bugs specific to the *s_5_1 profiles, mostly around arrays. Or of course you can always move on to the fancy new shader compiler and see if that works better for you.
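If you go that route, something along these lines works for the experiment (the file and entry-point names here are placeholders, not taken from your code):

```cpp
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile the geometry shader against gs_5_0; swap the target string to "gs_5_1" and diff
// the two DXBC listings. Skipping optimizations keeps the disassembly easier to compare.
HRESULT CompileGS(ID3DBlob** code, ID3DBlob** errors)
{
    return D3DCompileFromFile(L"GeometryShader.hlsl", nullptr, nullptr,
                              "GS", "gs_5_0",
                              D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION, 0,
                              code, errors);
}
```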

EDIT: I also assumed that you've already run with the validation layer enabled and checked for errors. You can also try enabling the GPU validator, since that can potentially catch some issues that aren't caught by the normal validation layer.
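In case it helps, a minimal sketch of turning both on before device creation (where exactly this lands in your startup code is up to you):

```cpp
#include <d3d12.h>
#include <d3d12sdklayers.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Enable the normal validation layer and, on top of it, GPU-based validation.
// This has to run before the D3D12 device is created.
void EnableDebugLayers()
{
    ComPtr<ID3D12Debug> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
    {
        debug->EnableDebugLayer();

        ComPtr<ID3D12Debug1> debug1;
        if (SUCCEEDED(debug.As(&debug1)))
            debug1->SetEnableGPUBasedValidation(TRUE);
    }
}
```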

##### Share on other sites

Maybe it just needs the standard HLSL packing rules.

So your structure would have to be a multiple of float4 in size, and each built-in type bigger than a float or int should start on a float4-sized boundary (although uint2 or float2 might be okay on a float2-sized boundary?).

Edited by CortexDragon

##### Share on other sites

I tried both - gs_5_0 (no success with this one) and also enabling the GPU validator. The GPU validator told me nothing; the crash was exactly the same.

What is interesting is that enabling the GPU validator introduced a random failure in CreateCommittedResource - it fails with:

0x887A0005 (DXGI_ERROR_DEVICE_REMOVED): The video card has been physically removed from the system, or a driver upgrade for the video card has occurred. The application should destroy and recreate the device. For help debugging the problem, call GetDeviceRemovedReason.

So I went ahead and called GetDeviceRemovedReason, which returned:

0x887A0007 (DXGI_ERROR_DEVICE_RESET): The device failed due to a badly formed command. This is a run-time issue; the application should destroy and recreate the device.

The funny thing is, this error only shows up at random, and only with SetEnableGPUBasedValidation set to TRUE.

I've also checked where it crashes - it's during the load phase in CreateCommittedResource, and the parameters seem valid (and are the same each run, even though the crash is random!). Weird...
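For reference, the pattern in question is roughly this (the resource parameters below are placeholders, not the actual ones from my loader):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>

// If a creation call reports a removed/reset device, ask the device why it went away.
HRESULT CreateBufferChecked(ID3D12Device* device, UINT64 sizeInBytes, ID3D12Resource** outResource)
{
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_DEFAULT;

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = sizeInBytes;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    HRESULT hr = device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
                                                 D3D12_RESOURCE_STATE_COMMON, nullptr,
                                                 IID_PPV_ARGS(outResource));
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        // 0x887A0005 / 0x887A0007 above correspond to these two HRESULTs;
        // the removed reason narrows down what the GPU actually reported.
        HRESULT reason = device->GetDeviceRemovedReason();
        (void)reason; // log this in a real code base
    }
    return hr;
}
```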

I haven't tried the new compiler, but it might be worth trying (I did try passing flags to D3DCompileFromFile to disable optimizations, etc., without any success or any message).

I thought the same, so I gave it a shot and changed the structure to:

```hlsl
struct Geom2Frag
{
    float4 mPosition : SV_POSITION;
    nointerpolation float4 mAABB : AABB;
    float4 mNormal : TEXCOORD1;
    float4 mTexCoord : TEXCOORD0;
    nointerpolation uint4 mAxis : AXIS;
    float4 temp : TEXCOORD2;
};
```

This way it's 96 bytes, and there shouldn't be any alignment problems. And yes, it still crashes.

I'm trying one additional thing - I noticed that I'm not running the most recent AMD drivers (the update was released about 7 days ago and I'm still on the older version). Let me quickly try the driver update.

##### Share on other sites

And dang! Magic happened. I updated the driver from version 17.11.1 to 17.11.2 (I believe the actual driver number is different from the one shown in the Radeon application) -> and magically it works!

To be precise, BOTH things work - GPU-based validation no longer seems to crash, and passing the structure out of the geometry shader doesn't crash anymore either.

### Similar Content

• Hi.
I wanted to experiment with D3D12 development and decided to run some tutorials: Microsoft DirectX-Graphics-Samples, Braynzar Soft, 3dgep... Whatever sample I run, I get the same crash.
The whole initialization process goes well - no errors, return codes OK - but as soon as the Present method is invoked on the swap chain, I encounter a crash with the following call stack:
The crash is an access violation on a null pointer (with an offset of 0x80).
I'm working on a notebook, a Toshiba Qosmio X870 with two GPUs: an integrated Intel HD 4000 and a dedicated NVIDIA GTX 670M (Fermi based). The HD 4000 is DX11 only, and as far as I understand the GTX 670M is DX12 capable with feature level 11_0.
I checked that the right adapter was chosen by the sample, and when the D3D12 device is requested in the sample with an 11_0 feature level, it is created with no problem. Same for all the required interfaces (swap chain, command queue...).
I tried a lot of things to solve the problem or get some info, like forcing the notebook to always use the NVIDIA GPU, disabling the debug layer, asking for a different feature level (by the way, 11_0 is the only one that allows me to create the device; any other FL fails at device creation)...
I have the latest NVIDIA drivers (391.35), the latest Windows 10 SDK (10.0.17134.0), and I'm working under Visual Studio 2017 Community.
Thanks to anybody who can help me find the problem...
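For illustration, a minimal sketch of picking the adapter explicitly instead of relying on the default one (the helper name and selection heuristic are assumptions, not code from the samples mentioned):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Walk the adapters in order and create the device on the first hardware adapter that
// accepts feature level 11_0, skipping the software Basic Render Driver.
ComPtr<ID3D12Device> CreateDeviceOnFirstCapableAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            return device;
    }
    return nullptr;
}
```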
• By _void_
Hi guys!
In a lot of samples found on the internet, when people initialize D3D12_SHADER_RESOURCE_VIEW_DESC for a resource with array size 1, they normally set its dimension to Texture2D. If the array size is greater than 1, they use the Texture2DArray dimension, for example.
If I declare the SRV in the shader as Texture2DArray but create the SRV as Texture2D (the array has only 1 texture), following the same principle as above, would this be OK? I guess this should work as long as I am using array index 0 to access my texture?
Thanks!
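For illustration, a hedged sketch of the Texture2DArray variant being discussed (the helper name, format, and mip setup are assumptions):

```cpp
#include <d3d12.h>

// Create the SRV with the Texture2DArray dimension and ArraySize = 1 so it matches an HLSL
// Texture2DArray declaration; the shader then reads the single slice at array index 0.
void CreateSingleSliceArraySRV(ID3D12Device* device, ID3D12Resource* texture,
                               D3D12_CPU_DESCRIPTOR_HANDLE destHandle)
{
    D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    srvDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    srvDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2DARRAY;
    srvDesc.Texture2DArray.MostDetailedMip = 0;
    srvDesc.Texture2DArray.MipLevels = 1;
    srvDesc.Texture2DArray.FirstArraySlice = 0;
    srvDesc.Texture2DArray.ArraySize = 1;
    device->CreateShaderResourceView(texture, &srvDesc, destHandle);
}
```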
• By _void_
Hey!

What is the recommended upper limit for the number of commands to record in a command list bundle?
According to MSDN it is supposed to be a small number, but it does not elaborate on the actual number.
I am wondering whether I should pre-record commands in the command buffer and use ExecuteIndirect, or maybe use bundles instead.
The number of commands to record in my case could vary greatly.

Thanks!

• While working on a project using D3D12, I was getting an exception thrown while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is written in plain C, so it uses the COBJMACROS. The following application replicates the problem happening in the project.
```c
#define COBJMACROS
#pragma warning(push, 3)
#include <Windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#pragma warning(pop)

IDXGIFactory4 *factory;
ID3D12Device *device;
ID3D12DescriptorHeap *rtv_heap;

int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
{
    (hinst), (pinst), (cline), (cshow);

    HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
    hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, &device);

    D3D12_DESCRIPTOR_HEAP_DESC desc;
    desc.NumDescriptors = 1;
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
    desc.NodeMask = 0;
    hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

    D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
    (rtv);
}
```
The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction
```
mov  qword ptr [rdx],rax
```
which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you.

• By lubbe75
As far as I understand there is no real random or noise function in HLSL.
I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious?