• Similar Content

• While working on a project using D3D12, I was getting an exception thrown while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is plain C, so it uses the COBJMACROS. The following application reproduces the problem happening in the project.
#define COBJMACROS

#pragma warning(push, 3)
#include <Windows.h>
#include <d3d12.h>
#include <dxgi1_4.h>
#pragma warning(pop)

IDXGIFactory4 *factory;
ID3D12Device *device;
ID3D12DescriptorHeap *rtv_heap;

int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
{
    (hinst), (pinst), (cline), (cshow);

    HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
    hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, (void **)&device);

    D3D12_DESCRIPTOR_HEAP_DESC desc;
    desc.NumDescriptors = 1;
    desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
    desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
    desc.NodeMask = 0;
    hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

    D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
    (rtv);
}

The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction
mov  qword ptr [rdx],rax
which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you.
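For reference, this matches a known defect in the C bindings of older d3d12.h headers: a method that returns a structure by value, such as GetCPUDescriptorHandleForHeapStart, receives a hidden return-slot pointer as its second argument (rdx on x64) under the C++ member-function ABI, but the MIDL-generated C declaration omits that parameter, so rdx contains garbage when the method stores its result — which is exactly the faulting `mov qword ptr [rdx],rax`. A hedged workaround sketch (assuming an affected SDK header; later Windows SDK versions correct the C binding) is to call through the vtable with the real signature:

```c
/* Workaround sketch: cast the vtable entry to the actual ABI signature,
   where the returned struct is filled in through an explicit out-pointer.
   Only needed on d3d12.h versions with the broken C declaration. */
D3D12_CPU_DESCRIPTOR_HANDLE rtv;
((void (STDMETHODCALLTYPE *)(ID3D12DescriptorHeap *, D3D12_CPU_DESCRIPTOR_HANDLE *))
    rtv_heap->lpVtbl->GetCPUDescriptorHandleForHeapStart)(rtv_heap, &rtv);
```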

• By lubbe75
As far as I understand there is no real random or noise function in HLSL.
I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious?

• Hi,
I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer is throwing errors that this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space position Y coordinate in the vertex shader before writing it out, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I'd need to track down everywhere in the engine where matrices are uploaded... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
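One common answer (what I'd try first) is that negative viewport heights were legalized by VK_KHR_maintenance1 and promoted to core in Vulkan 1.1, so the flipped viewport the debug layer complained about becomes valid once that extension is enabled. It flips Y at rasterization time without touching shaders or projection matrices. A sketch, assuming a recording command buffer `cmd` and a framebuffer of `width` x `height`:

```c
/* Requires VK_KHR_maintenance1 (or Vulkan 1.1+): a negative viewport
   height flips Y, matching D3D-style clip space. */
VkViewport viewport = {
    .x        = 0.0f,
    .y        = (float)height,    /* origin moves to the bottom ... */
    .width    = (float)width,
    .height   = -(float)height,   /* ... and the viewport extends upward */
    .minDepth = 0.0f,
    .maxDepth = 1.0f,
};
vkCmdSetViewport(cmd, 0, 1, &viewport);
```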
• By NikiTo
Some people say "discard" has no positive effect on optimization. Other people say it will at least spare the texture fetches.

if (color.A < 0.1f) {
    //discard;
    clip(-1);
}
// tons of reads of textures following here
// and loops too
Some people say that "discard" will only mask out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction.

MSDN:
"discard: Do not output the result of the current pixel."

As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

I think that, at least for thermal and energy consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes the statements anyway. What I am more worried about are the texture fetches after discard/clip.

(What if, after discard, I have an expensive branch decision that makes the approved cheap-branch neighbor pixels stall for nothing? This is crazy.)
• By NikiTo
I have a problem. My shaders are huge, meaning they have a lot of code inside. Many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save workload at all, as the pixel has to stall until its long, huge neighbor shaders complete.
Initially I wanted to use stencil to discard pixels before the execution flow enters the shader, even before the GPU distributes/allocates resources for it, avoiding the stall of the pixel shader execution flow. I assumed that depth/stencil discards pixels before the pixel shader, but I see now that it happens in the very last Output Merger stage. It seems extremely inefficient to render a little mirror that way in a scene with a big viewport. Why did they put the stencil test in the Output Merger anyway? Handling of stencil is so limited compared to other resources. Do people use stencil functionality at all for games, or do they prefer discard/clip?

Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will it immediately start using the freed-up resources to render another pixel?

DX12 [D3D12] Failure to create a swap chain



Today, I've been trying to create a swap chain to no avail and I really can't figure out why.

When I call CreateSwapChain(), it fails with 0x887a0001 (DXGI_ERROR_INVALID_CALL) and the debug layer doesn't even report anything.

The way I create my swap chain is no different from the way the official samples do (the samples run just fine on my computer and my GPU fully supports DX12). In fact, the code below is almost the same as the code found in the samples, so where could the problem be?

Any help is appreciated.

Here's the code:

bool Renderer::InitializePipeline(HWND hWnd)
{
HRESULT HR;

ID3D12Debug *debugController;

HR = D3D12GetDebugInterface(IID_PPV_ARGS(&debugController));

if (SUCCEEDED(HR))
{
debugController->EnableDebugLayer();
}

HR = D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

if (FAILED(HR)) return false;

D3D12_COMMAND_QUEUE_DESC commandQueueDesc = {};

commandQueueDesc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;
commandQueueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

HR = device->CreateCommandQueue(&commandQueueDesc, IID_PPV_ARGS(&commandQueue));

if (FAILED(HR)) return false;

DXGI_SWAP_CHAIN_DESC swapChainDesc = {};

swapChainDesc.BufferCount = 2; //Used to be 1, but changed it to 2 thanks to Alessio1989. There's still a problem I haven't figured out...
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferDesc.Height = 600;
swapChainDesc.BufferDesc.Width = 800;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.OutputWindow = hWnd;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.Windowed = true;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;

IDXGIFactory4 *factory;

HR = CreateDXGIFactory1(IID_PPV_ARGS(&factory));

if (FAILED(HR)) return false;

HR = factory->CreateSwapChain(commandQueue, &swapChainDesc, &swapChain); //Fails every single time.

factory->Release();

if (FAILED(HR))
{
debugController->Release();

return false;
}

debugController->Release();

return true;
}
Edited by Jess1997
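Not an answer by itself, but worth noting: the D3D12 debug layer enabled by EnableDebugLayer() does not validate DXGI calls, which is why nothing is reported for a DXGI_ERROR_INVALID_CALL. Creating the factory with the DXGI debug flag usually makes DXGI print the exact reason for the failure to the debugger output. A sketch, assuming dxgi1_3.h (Windows 8.1+) is available:

```cpp
#include <dxgi1_3.h> // CreateDXGIFactory2, DXGI_CREATE_FACTORY_DEBUG

IDXGIFactory4 *factory = nullptr;

UINT factoryFlags = 0;
#if defined(_DEBUG)
factoryFlags |= DXGI_CREATE_FACTORY_DEBUG; // DXGI itself now reports validation errors
#endif

HRESULT hr = CreateDXGIFactory2(factoryFlags, IID_PPV_ARGS(&factory));
```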


Is that your actual code? As far as I know, there is no DXGI_SWAP_EFFECT_FLIP_DISCARD, so the above shouldn't compile.

If that's a typo, then please post the actual code as it appears in your source file.

BufferCount must be greater than or equal to 2 in flip mode.

Is that your actual code? As far as I know, there is no DXGI_SWAP_EFFECT_FLIP_DISCARD, so the above shouldn't compile.

If that's a typo, then please post the actual code as it appears in your source file.

Edited by Alessio1989


Is that your actual code? As far as I know, there is no DXGI_SWAP_EFFECT_FLIP_DISCARD, so the above shouldn't compile.

If that's a typo, then please post the actual code as it appears in your source file.

Well, that's what they use in the samples and it does compile.


Hmm, I did try to change the BufferCount to 2 before and it didn't work. If that was a problem, it isn't the only one.

Edited by Jess1997


Hmm, I did try to change the BufferCount to 2 before and it didn't work. If that was a problem, it isn't the only one.

I think you should have a look at the first public samples: https://github.com/Microsoft/DirectX-Graphics-Samples


I think you should have a look at the first public samples: https://github.com/Microsoft/DirectX-Graphics-Samples

My swap chain description is exactly the same as the one in the samples. It wasn't before, when my BufferCount was 1, but since I changed it to 2 like you told me to, it's exactly the same. I really don't get it, but I'll check whether it could be the handle to my window (even though my window works properly), because the rest is exactly as it is in the samples (except that they use an IDXGISwapChain and then query the IDXGISwapChain3 interface right after creating the swap chain, but I doubt that has anything to do with it). Anyway, I really appreciate your help.

Edited by Jess1997


Do the MS samples run on your hardware?


Sorry for the bum steer. I did a search and couldn't find it. I suspect I was looking at an earlier version of the D3D12 docs.

Edited by Dave Hunt


Do the MS samples run on your hardware?

They run perfectly.