NikiTo

DX12 Problem with rendering a simple triangle


Here is the code relevant to my problem. Everything else is omitted and causes no problems.

 

The HLSL, which compiles and is later added to the PSO successfully:
 

struct VSInput
{
	float4 position : mPOSITION;
	float2 uv : mTEXCOORD;
};

struct PSInput
{
	float4 position : SV_POSITION;
	//float2 uv : TEXCOORD;
};

Texture2D g_texture : register(t0);
SamplerState g_sampler : register(s0);

PSInput VSMain(VSInput input)
{
	PSInput output;

	output.position = input.position;
	//output.uv = input.uv;

	return output;
}

float4 PSMain(PSInput input) : SV_TARGET
{
	//return g_texture.Sample(g_sampler, input.uv);
	return float4(1.0, 0.0, 0.0, 1.0);
}

 

The part of the C++ I consider relevant to the problem:
 

Vertex triangleVertices[] =
{
	{ { 0.0f, 0.25f, 0.0f }, { 0.5f, 0.0f } },
	{ { 0.25f, -0.25f, 0.0f }, { 1.0f, 1.0f } },
	{ { -0.25f, -0.25f, 0.0f }, { 0.0f, 1.0f } }
};
// FAILED macro check is omitted
D3DCompileFromFile(shadersPath.c_str(), nullptr, nullptr, "VSMain", "vs_5_0", 0, 0, &mvsByteCode, &errors);
D3DCompileFromFile(shadersPath.c_str(), nullptr, nullptr, "PSMain", "ps_5_0", 0, 0, &mpsByteCode, &errors);
D3D12_INPUT_ELEMENT_DESC mInputLayout[] =
{		
	{ "mPOSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
	{ "mTEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
};
renderQuadVertexBufferView.BufferLocation = mRenderQuadBufferDefault->GetGPUVirtualAddress();
renderQuadVertexBufferView.StrideInBytes = sizeof(Vertex);
renderQuadVertexBufferView.SizeInBytes = sizeof(triangleVertices);
mCommandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
mCommandList->IASetVertexBuffers(0, 1, &renderQuadVertexBufferView);
// this command executes painting the screen well
mCommandList->ClearRenderTargetView(RTVHandleCPU, clearColor, 0, nullptr);

// this command does not show the triangle
mCommandList->DrawInstanced(3, 1, 0, 0);


Before attempting to render the triangle, I set the state of the vertex buffer to D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER; its heap is of type DEFAULT.
Do you see any problem in the code shown? If I can rule it out as the source of the problem, I can search in other places.
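For reference, the transition looks roughly like this (a sketch; the upload-heap copy that leaves the buffer in COPY_DEST is part of the omitted code):

D3D12_RESOURCE_BARRIER barrier = {};
barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
barrier.Transition.pResource = mRenderQuadBufferDefault.Get();
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COPY_DEST;
barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER;
mCommandList->ResourceBarrier(1, &barrier);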


Viewport and scissors don't help. No, I haven't tried PIX or the debug layer. I'm checking every HRESULT and testing everything for NULL, and if something were wrong it wouldn't compile here, wouldn't serialize there, wouldn't initialize the descriptors, wouldn't close the command list. So I decided not to make it more complex with more debugging APIs.

Now I'm so sleepy... Tomorrow I will try to read back the vertex buffer and see if it contains the expected vertex data.
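The rough plan (a sketch; using the CD3DX12 helpers from d3dx12.h, with the fence wait omitted): copy the DEFAULT-heap vertex buffer into a READBACK-heap buffer, then Map it.

ComPtr<ID3D12Resource> readbackBuffer;
device->CreateCommittedResource(
	&CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_READBACK),
	D3D12_HEAP_FLAG_NONE,
	&CD3DX12_RESOURCE_DESC::Buffer(sizeof(triangleVertices)),
	D3D12_RESOURCE_STATE_COPY_DEST,
	nullptr,
	IID_PPV_ARGS(&readbackBuffer));

// Transition the vertex buffer to COPY_SOURCE first, then:
mCommandList->CopyResource(readbackBuffer.Get(), mRenderQuadBufferDefault.Get());

// After executing the command list and waiting on a fence:
Vertex* data = nullptr;
readbackBuffer->Map(0, nullptr, reinterpret_cast<void**>(&data));
// ...inspect data[0..2], then Unmap.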

Edited by NikiTo


Absolutely try enabling the debug layer and check the Visual Studio output (or whatever you are using) for warnings or errors. I have recently added DX12 support to my engine and the debug output helped me out a lot. If that does not help, run the app with RenderDoc and see that you transform the vertices correctly.

https://renderdoc.org/
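Enabling the debug layer is only a couple of lines, and it has to happen before you create the device. A minimal sketch, assuming the usual d3d12.h / WRL setup:

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Debug> debugController;
if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debugController))))
{
	debugController->EnableDebugLayer();
}
// ...then D3D12CreateDevice(...) as usual; warnings and errors appear in the output window.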

Edited by GuyWithBeard


I don't know what else to try...

Is this correct?

D3D12_VIEWPORT viewPort = {};
viewPort.TopLeftX = 0;
viewPort.TopLeftY = 0;
viewPort.Width = TextureWidth;
viewPort.Height = TextureHeight;
viewPort.MinDepth = 0;
viewPort.MaxDepth = 1;

D3D12_RECT m_scissorRect = {};
m_scissorRect.left = 0;
m_scissorRect.top = 0;
m_scissorRect.right = TextureWidth;
m_scissorRect.bottom = TextureHeight;
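And this is how I bind them before the draw (mCommandList as in my first post):

mCommandList->RSSetViewports(1, &viewPort);
mCommandList->RSSetScissorRects(1, &m_scissorRect);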

I tried with Z = 0.5 for the vertices and nothing.

Edited by NikiTo


Graphics debugging is not a guessing game. You need to run with all the debug tools you can, especially with D3D12, and inspect every element for the mistake.

Enabling the debug layer prior to device creation is also pretty effective; many graphics issues won't trigger any HRESULT, for example improper descriptors in a heap or missing root parameters.

PIX and RenderDoc are also very valuable once you have no validation errors and still aren't seeing what you should see.

 

In your case, I would look at the PSO creation parameters, like the output write mask and backface culling.
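For example, fields along these lines in the D3D12_GRAPHICS_PIPELINE_STATE_DESC (a sketch of what I would double-check, not your actual values):

D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
// ...shaders, root signature and input layout as you already have them...
psoDesc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
psoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;   // rules out a winding-order mistake
psoDesc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
psoDesc.SampleMask = UINT_MAX;                             // a zeroed mask discards every sample
psoDesc.NumRenderTargets = 1;
psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;        // must match your render target
psoDesc.SampleDesc.Count = 1;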

Edited by galop1n


I would use a debugger for complex applications; I didn't expect, when I started with such a simple task, to end up in this situation. I mean, it is a simple triangle! What would I need next for one single triangle? A performance profiling tool?

I will rewrite it all from scratch, and if the problem persists I will have no choice but to try the debugging tools too. :(


D3D12 is not for everyone. It is an API for the 1% of applications where D3D11 is not enough: AAA games, large data set processing, and heavy renderer tools.

If you are not an expert at D3D11 and don't know why you need D3D12, you don't need it, and you will just shoot yourself in the foot by using it. D3D12 is not a replacement for D3D11; the two are made to coexist, and that won't change.

Rendering a triangle with D3D12 is already a complex application: you have to deal with GPU/CPU synchronization and lifetime management, manual memory management, and idioms like queues, allocators, and barriers. Add a texture to your triangle and you reach a whole world of non-trivial design decisions.
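Even a plain "wait until the GPU is done" after submitting a command list already needs a fence and an OS event. A minimal sketch (device and commandQueue being whatever you created at startup):

ComPtr<ID3D12Fence> fence;
UINT64 fenceValue = 0;
HANDLE fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);
device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

// After ExecuteCommandLists:
const UINT64 waitFor = ++fenceValue;
commandQueue->Signal(fence.Get(), waitFor);
if (fence->GetCompletedValue() < waitFor)
{
	fence->SetEventOnCompletion(waitFor, fenceEvent);
	WaitForSingleObject(fenceEvent, INFINITE);
}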


You might also try compiling and running Microsoft's DX12 samples from GitHub. See if they work; if they do, just enable the debug layer (which I think you should do anyway).


