DX12 [D3D12] Problems loading pre-compiled shaders


Recommended Posts

I am attempting to load pre-compiled shaders and create a PSO.  The moment I go to create the PSO, I receive the following D3D12 Error:

D3D12 ERROR: ID3D12Device::CreateInputLayout: Encoded Signature size doesn't match specified size. [ STATE_CREATION ERROR #63: CREATEINPUTLAYOUT_UNPARSEABLEINPUTSIGNATURE].

 

I have the following shaders, which compile without errors in Visual Studio using Shader Model 5.1:

//VertexShader.hlsl

#include "PSInput.hlsli"

PSInput VSMain(float4 position : POSITION, float4 color : COLOR)
{
	PSInput result;

	result.position = position;
	result.color = color;

	return result;
}

//-------------------------------------------------
// PixelShader.hlsl

#include "PSInput.hlsli"

float4 PSMain(PSInput input) : SV_TARGET
{
	return input.color;
}

//-------------------------------------------------
// PSInput.hlsli

struct PSInput {
    float4 position : SV_POSITION;
    float4 color : COLOR;
};



Below is my code for reading in each shader .cso file and binding the shader bytecode to the PSO descriptor. I've checked that both vertexShaderData and pixelShaderData are non-null and that each data length is greater than zero. Interestingly, vertexShaderDataLength = 668 bytes while pixelShaderDataLength = 14368 bytes (much larger than I expected, so I wonder if that is something to worry about).

byte * vertexShaderData(nullptr);
uint vertexShaderDataLength(0);
ThrowIfFailed(
    ReadDataFromFile(
        GetAssetFullPath(L"VertexShader.cso").c_str(),
        &vertexShaderData,
        &vertexShaderDataLength
    )
 );


byte * pixelShaderData(nullptr);
uint pixelShaderDataLength(0);
ThrowIfFailed(
    ReadDataFromFile(
        GetAssetFullPath(L"PixelShader.cso").c_str(),
        &pixelShaderData,
        &pixelShaderDataLength
    )
 );

    
D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
psoDesc.InputLayout = { inputElementDescriptor, _countof(inputElementDescriptor) };
psoDesc.pRootSignature = rootSignature;
psoDesc.VS = CD3DX12_SHADER_BYTECODE(vertexShaderData, vertexShaderDataLength);
psoDesc.PS = CD3DX12_SHADER_BYTECODE(pixelShaderData, pixelShaderDataLength);
// fill in remainder psoDesc ... 


device->CreateGraphicsPipelineState (     // <--- Exception Thrown here.
    &psoDesc,
    IID_PPV_ARGS(&pipelineState)
);
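As a sanity check (a sketch, not code from this thread; DumpInputSignature is a hypothetical helper), D3DReflect can confirm whether the loaded bytes parse as valid shader bytecode with the expected input signature:

// Sketch: reflect the loaded bytecode and dump its input signature.
#include <stdio.h>
#include <d3d12shader.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

void DumpInputSignature(const void *bytecode, size_t length)
{
    ID3D12ShaderReflection *reflection = nullptr;
    if (FAILED(D3DReflect(bytecode, length, IID_PPV_ARGS(&reflection)))) {
        OutputDebugStringA("D3DReflect failed: bytecode is likely corrupt or truncated.\n");
        return;
    }

    D3D12_SHADER_DESC desc = {};
    reflection->GetDesc(&desc);

    // List every input parameter declared in the signature.
    for (UINT i = 0; i < desc.InputParameters; ++i) {
        D3D12_SIGNATURE_PARAMETER_DESC param = {};
        reflection->GetInputParameterDesc(i, &param);
        char line[128];
        sprintf_s(line, "input %u: %s%u\n", i, param.SemanticName, param.SemanticIndex);
        OutputDebugStringA(line);
    }
    reflection->Release();
}

Running both blobs through something like this right after ReadDataFromFile would show whether the .cso contents survive the trip from disk intact.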


Since the reported error mentions an unparseable input signature, I'll include my input element descriptor as well. I'm using interleaved vertex data with just positions and colors.

// Define the vertex input layout.
D3D12_INPUT_ELEMENT_DESC inputElementDescriptor[2];

// Positions
inputElementDescriptor[0].SemanticName = "POSITION";
inputElementDescriptor[0].SemanticIndex = 0;
inputElementDescriptor[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
inputElementDescriptor[0].InputSlot = 0;
inputElementDescriptor[0].AlignedByteOffset = 0;
inputElementDescriptor[0].InputSlotClass = D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA;
inputElementDescriptor[0].InstanceDataStepRate = 0;

// Colors
inputElementDescriptor[1].SemanticName = "COLOR";
inputElementDescriptor[1].SemanticIndex = 0;
inputElementDescriptor[1].Format = DXGI_FORMAT_R32G32B32_FLOAT;
inputElementDescriptor[1].InputSlot = 0;
inputElementDescriptor[1].AlignedByteOffset = sizeof(float) * 3;
inputElementDescriptor[1].InputSlotClass = D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA;
inputElementDescriptor[1].InstanceDataStepRate = 0;
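As an aside (a sketch assuming the same POSITION/COLOR layout, not from the original post): D3D12_APPEND_ALIGNED_ELEMENT lets the runtime derive each offset instead of hand-computing sizeof(float) * 3.

// Equivalent layout with the offsets derived automatically.
D3D12_INPUT_ELEMENT_DESC inputElementDescriptor[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,
      D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32_FLOAT, 0,
      D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
};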


The really odd part is that my program runs without error when I combine the vertex and pixel shaders into a single file, "shaders.hlsl", and compile the shaders at runtime.

ComPtr<ID3DBlob> vertexShader;
ComPtr<ID3DBlob> pixelShader;


// Compile vertex shader
ThrowIfFailed(
    D3DCompileFromFile (
        GetAssetFullPath(L"shaders.hlsl").c_str(),
        nullptr, nullptr, "VSMain", "vs_5_0", 
        compileFlags, 0, &vertexShader, nullptr
    )
);

// Compile pixel shader
ThrowIfFailed (
    D3DCompileFromFile (
        GetAssetFullPath(L"shaders.hlsl").c_str(),
        nullptr, nullptr, "PSMain", "ps_5_0",
        compileFlags, 0, &pixelShader, nullptr
    )
);
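One hedged observation: the runtime path above targets vs_5_0/ps_5_0 while the offline shaders were built for Shader Model 5.1, so the two paths are not compiling identical bytecode. A sketch of the same vertex shader compile with the 5_1 target and the error blob captured instead of passing nullptr:

// Sketch: same runtime compile, but targeting 5_1 and keeping diagnostics.
ComPtr<ID3DBlob> errorBlob;
HRESULT hr = D3DCompileFromFile(
    GetAssetFullPath(L"shaders.hlsl").c_str(),
    nullptr, nullptr, "VSMain", "vs_5_1",
    compileFlags, 0, &vertexShader, &errorBlob
);
if (FAILED(hr) && errorBlob) {
    // Print whatever the compiler complained about.
    OutputDebugStringA((const char *)errorBlob->GetBufferPointer());
}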

Edited by DustinB


I tracked down a piece of documentation stating that, as of Shader Model 5.1, a root signature must be specified in order to compile shaders offline:

Note  For shader model 5.1 a root signature must be specified in order to compile shaders offline.

https://msdn.microsoft.com/en-us/library/windows/desktop/dn933268(v=vs.85).aspx

 

Specifying Root Signature in HLSL:

https://msdn.microsoft.com/en-us/library/windows/desktop/dn913202(v=vs.85).aspx

 

I'll give this a try and see if it fixes my issues.
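A quick way to test whether an offline-compiled blob actually carries an embedded root signature (a sketch, not thread code; HasEmbeddedRootSignature is a hypothetical helper):

#include <d3dcompiler.h>

// Sketch: ask for the root-signature part of a compiled blob.
// This succeeds only if one was embedded at compile time.
bool HasEmbeddedRootSignature(const void *bytecode, size_t length)
{
    ID3DBlob *rootSig = nullptr;
    if (SUCCEEDED(D3DGetBlobPart(bytecode, length,
                                 D3D_BLOB_ROOT_SIGNATURE, 0, &rootSig))) {
        rootSig->Release();
        return true;
    }
    return false;
}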

Edited by DustinB


You don't need to specify a root signature to compile sm 5.1 shaders. I'm pretty sure that bit in the documentation is a mix-up, since it sounds like it comes from the XB1 SDK. I suspect that something is wrong with how you pre-compile the shaders, or with how you load the data.

 


Look at your vertex shader params and the input layout: 

//VertexShader.hlsl

#include "PSInput.hlsli"

PSInput VSMain(float4 position : POSITION, float4 color : COLOR)
{
	PSInput result;

	result.position = position;
	result.color = color;

	return result;
}


// Define the vertex input layout.
D3D12_INPUT_ELEMENT_DESC inputElementDescriptor[2];

// Positions
inputElementDescriptor[0].SemanticName = "POSITION";
inputElementDescriptor[0].SemanticIndex = 0;
inputElementDescriptor[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
inputElementDescriptor[0].InputSlot = 0;
inputElementDescriptor[0].AlignedByteOffset = 0;
inputElementDescriptor[0].InputSlotClass = D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA;
inputElementDescriptor[0].InstanceDataStepRate = 0;

// Colors
inputElementDescriptor[1].SemanticName = "COLOR";
inputElementDescriptor[1].SemanticIndex = 0;
inputElementDescriptor[1].Format = DXGI_FORMAT_R32G32B32_FLOAT;
inputElementDescriptor[1].InputSlot = 0;
inputElementDescriptor[1].AlignedByteOffset = sizeof(float) * 3;
inputElementDescriptor[1].InputSlotClass = D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA;
inputElementDescriptor[1].InstanceDataStepRate = 0;

 

You are using float4 for position and color in the vertex shader, but the input layout says they are float3.


I've filed a bug and asked that this yellow box be removed from the documentation; it does indeed refer to the Xbox One XDK.

 

Using float4 in the shader is fine; the runtime will simply insert 1.0f into the 'w' channel.


If I compile those VS and PS shaders offline to an object file I get sizes of 608 and 532 bytes respectively. I'm not sure how you managed to get a 14KB pixel shader object from just "return input.color". Are you sure the size on disk for PixelShader.cso is ~14KB?


You are using float4 for position and color in the vertex shader, but the input layout says they are float3.

 

Which is perfectly fine and results in the shader receiving float4(x, y, z, 1).


If I compile those VS and PS shaders offline to an object file I get sizes of 608 and 532 bytes respectively. I'm not sure how you managed to get a 14KB pixel shader object from just "return input.color". Are you sure the size on disk for PixelShader.cso is ~14KB?

 

It seems the bloat in the PS size was due to the "enable debugging information" (/Zi) compiler flag. I turned this off and PixelShader.cso compiled to only 532 bytes, matching your size (I verified this from the filesystem as well). I also tried disabling debugging information for my VS, and VertexShader.cso is only 176 bytes. With debugging information turned off I am still getting the D3D12 error.
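For what it's worth, debug info can also be stripped from an already-compiled blob; a sketch using D3DStripShader on the loaded pixel shader bytes (assuming the same variables as earlier in the thread):

// Sketch: strip debug info from the loaded blob, mirroring what
// turning off /Zi achieves at compile time.
ID3DBlob *stripped = nullptr;
HRESULT hr = D3DStripShader(
    pixelShaderData, pixelShaderDataLength,
    D3DCOMPILER_STRIP_DEBUG_INFO,
    &stripped
);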

 

As MJP mentioned, this definitely seems to be an issue only with compiling offline.  If I compile the shaders at runtime there are no D3D12 errors.

Edited by DustinB


I've filed a bug and asked that this yellow box be removed from the documentation; it does indeed refer to the Xbox One XDK.

 

Using float4 in the shader is fine, it'll simply insert 1.0f into the 'w' channel.

 

Adam, is there an official link for filing bugs?  It would be nice to see if any DX team members could chime in. 

Edited by DustinB


Adam, is there an official link for filing bugs?  It would be nice to see if any DX team members could chime in. 

 

At the bottom of every MSDN page there's a "Was this page helpful?" prompt. Click No, explain what's wrong in the Additional Feedback box, and we'll get it fixed.

 

I've had a look at the code you've posted and nothing obvious stands out, but nothing beats a standalone, single-file repro of the issue; then I/we can run it for ourselves and find out what's up.

Edited by Adam Miles
