SteveHatcher

DX11 Simple draw command not working.


Hi Guys,

 

I am working on a very primitive 'engine' that should just display a quad. The quad is its own class. Right now the program runs, but it only displays the back buffer's single clear color. I will post what I think is the relevant code; any help identifying the problem would be greatly appreciated.

 

The quad 'object' is initialized in my engine::initialize routine:

quad = new quad();
quad->initialize(graphics);

The quad's initialization routine is:

bool quad::initialize(Graphics *g)
{
	graphics = g;                

	Vertex vtx[] =
		{
			Vertex(-0.2f, 0.2f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f),
			Vertex(0.2f, 0.2f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f),
			Vertex(-0.2f, -0.2f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f),
			Vertex(0.2f, -0.2f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f),
		};

	graphics->createVertexBuffer(ARRAYSIZE(vtx), vtx, vertexBuffer);
	graphics->createShaders();

	return true;
}

The Graphics::createVertexBuffer member function is:

HRESULT Graphics::createVertexBuffer(unsigned int numVertices, Vertex *vertexData, ID3D11Buffer* vertexBuffer)
{

	HRESULT result = E_FAIL;

	D3D11_BUFFER_DESC VertexBufferDesc;
	ZeroMemory(&VertexBufferDesc, sizeof(VertexBufferDesc));

	VertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
	VertexBufferDesc.ByteWidth = sizeof(Vertex)*numVertices;
	VertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
	VertexBufferDesc.CPUAccessFlags = 0;
	VertexBufferDesc.MiscFlags = 0;

	D3D11_SUBRESOURCE_DATA VertexBufferData;

	ZeroMemory(&VertexBufferData, sizeof(VertexBufferData));
	VertexBufferData.pSysMem = vertexData;
	result = d3d11Device->CreateBuffer(&VertexBufferDesc, &VertexBufferData, &vertexBuffer);

	return result;
}

The Graphics::createShaders() function is:

HRESULT Graphics::createShaders()
{
	D3DX11CompileFromFile("Effects.fx", 0, 0, "VS", "vs_4_0", 0, 0, 0, &VS_Buffer, 0, 0);
	D3DX11CompileFromFile("Effects.fx", 0, 0, "PS", "ps_4_0", 0, 0, 0, &PS_Buffer, 0, 0);

	d3d11Device->CreateVertexShader(VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), NULL, &VS);
	d3d11Device->CreatePixelShader(PS_Buffer->GetBufferPointer(), PS_Buffer->GetBufferSize(), NULL, &PS);

	d3d11DevCon->VSSetShader(VS, 0, 0);
	d3d11DevCon->PSSetShader(PS, 0, 0);

	return result;
}

The main loop calls quad->draw(), which is:

const void quad::draw()
{
	graphics->drawQuad(vertexBuffer); 
}

Finally, drawQuad() has a lot of code dumped into it that should probably live in other DirectX initialization areas, but I have been trying everything to get this working.

bool Graphics::drawQuad(ID3D11Buffer *vertexBuffer)
{
	float bgColor[4] = { 1.0f, 1.0f, 1.0f, 1.0f }; //Clear our backbuffer
	d3d11DevCon->ClearRenderTargetView(renderTargetView, bgColor);

	d3d11DevCon->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

	D3D11_BUFFER_DESC cbbd;
	ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC));

	cbbd.Usage = D3D11_USAGE_DEFAULT;
	cbbd.ByteWidth = sizeof(cbPerObject);
	cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
	cbbd.CPUAccessFlags = 0;
	cbbd.MiscFlags = 0;
	d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerObjectBuffer);

	d3d11DevCon->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

	UINT stride = sizeof(Vertex);
	UINT offset = 0;

	d3d11DevCon->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);

	D3D11_VIEWPORT viewport;
	ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));

	viewport.TopLeftX = 0;
	viewport.TopLeftY = 0;
	viewport.Width = GAME_WIDTH;
	viewport.Height = GAME_HEIGHT;
	viewport.MinDepth = 0.0f;
	viewport.MaxDepth = 1.0f;

	d3d11Device->CreateInputLayout(layout, numElements, VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), &vertLayout);
	d3d11DevCon->IASetInputLayout(vertLayout);

	d3d11DevCon->RSSetViewports(1, &viewport);

	XMMATRIX WVP = XMMatrixIdentity();
	cbPerObj.WVP = XMMatrixTranspose(WVP);
	d3d11DevCon->UpdateSubresource(cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0);
	d3d11DevCon->VSSetConstantBuffers(0, 1, &cbPerObjectBuffer);

	d3d11DevCon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
	d3d11DevCon->Draw(4, 0);

	return true;
}

Finally, the loop calls graphics->showBackbuffer(), which is simply:

HRESULT Graphics::showBackbuffer()
{
	SwapChain->Present(0, 0);
	return result;
}

My base Graphics class in graphics.h is set out like this:

struct Vertex	//Overloaded Vertex Structure
{
	Vertex(){}
	Vertex(float x, float y, float z, float cr, float cg, float cb, float ca) : pos(x, y, z), color(cr, cg, cb, ca){}

	XMFLOAT3 pos;
	XMFLOAT4 color;
};

struct cbPerObject
{
	XMMATRIX  WVP;
};

const D3D11_INPUT_ELEMENT_DESC layout[] =
{
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

const UINT numElements = ARRAYSIZE(layout);

class Graphics
{
private:
	//DirectX pointers
	IDXGISwapChain* SwapChain;
	ID3D11Device* d3d11Device = NULL;
	ID3D11DeviceContext* d3d11DevCon;
	ID3D11RenderTargetView* renderTargetView;
	ID3D11Texture2D* depthStencilBuffer;
	ID3D11DepthStencilView* depthStencilView;
	ID3D11Buffer* vertexBuffer;
	ID3D11Buffer* cbPerObjectBuffer;
	ID3D11VertexShader* VS;
	ID3D11PixelShader* PS;
	ID3D10Blob* VS_Buffer;
	ID3D10Blob* PS_Buffer;
	ID3D11InputLayout* vertLayout;
	ID3D11Buffer* vertexBufferInternal;

I have set the WVP to an identity matrix because I don't want to worry about a camera class just yet; I am keeping as much as possible at simple values that should just work.

There is much more code, but hopefully the error lies somewhere in what I have posted. Any help is greatly appreciated.

 

Thanks

  1. I cannot see an ID3D11DeviceContext::OMSetRenderTargets() call.
  2. The vertex and pixel shaders might contain a problem as well; it would be useful to see them.
  3. Graphics::createVertexBuffer creates the buffer, but it doesn't save it to the member variable the way you imagine; the parameter needs to be a pointer reference or a double pointer (see the sketch below).
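
For point 3, here is a rough sketch of what I mean, reusing the names from your post (treat it as illustrative, not a drop-in fix):

HRESULT Graphics::createVertexBuffer(unsigned int numVertices, Vertex *vertexData, ID3D11Buffer **vertexBuffer)
{
	D3D11_BUFFER_DESC VertexBufferDesc;
	ZeroMemory(&VertexBufferDesc, sizeof(VertexBufferDesc));
	VertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
	VertexBufferDesc.ByteWidth = sizeof(Vertex) * numVertices;
	VertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

	D3D11_SUBRESOURCE_DATA VertexBufferData;
	ZeroMemory(&VertexBufferData, sizeof(VertexBufferData));
	VertexBufferData.pSysMem = vertexData;

	// CreateBuffer already takes an ID3D11Buffer**, so the parameter is passed
	// straight through and the caller's pointer ends up pointing at the new buffer.
	return d3d11Device->CreateBuffer(&VertexBufferDesc, &VertexBufferData, vertexBuffer);
}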


Hi,

 

Thanks for the reply.

 

1. This is taken care of in the Graphics::initialize section:

	ID3D11Texture2D* BackBuffer;
	SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&BackBuffer);

	d3d11Device->CreateRenderTargetView(BackBuffer, NULL, &renderTargetView);
	BackBuffer->Release();

	d3d11DevCon->OMSetRenderTargets(1, &renderTargetView, NULL);

2. I am quite confident the vertex and pixel shaders are okay, because this program is adapted from a version without classes that has mostly the same code, uses the same shaders, and works.

 

3. I have read some things about this but don't fully understand it quite yet. I found a sample from Microsoft; is this what mine should look like instead?

createVertexBuffer(unsigned int numVertices, BasicVertex *vertexData, ID3D11Buffer **vertexBuffer);
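
If so, I assume the call in quad::initialize would then pass the address of the member instead, something like:

// Guessing at the call site, with vertexBuffer being the quad's ID3D11Buffer* member:
graphics->createVertexBuffer(ARRAYSIZE(vtx), vtx, &vertexBuffer);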

Thanks for your help.




"is this what mine should be like instead?"

That should do it. Does it work yet?

If not, try enabling the debug layer (if it isn't enabled already). With it enabled, D3D will print messages to the Visual Studio output window when any problems arise.
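
Enabling it is just a creation flag on the device; a minimal sketch, assuming you create the device with D3D11CreateDeviceAndSwapChain and that swapChainDesc is your existing DXGI_SWAP_CHAIN_DESC:

UINT createFlags = 0;
#if defined(_DEBUG)
createFlags |= D3D11_CREATE_DEVICE_DEBUG;	// turn on the D3D11 debug layer in debug builds
#endif

D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL,
	createFlags, NULL, 0, D3D11_SDK_VERSION,
	&swapChainDesc, &SwapChain, &d3d11Device, NULL, &d3d11DevCon);

The debug layer's warnings and errors then show up in the Output window while the program runs under the debugger.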


Ahh yes, thank you very much. That worked perfectly.

 

Could you please give me a very dumbed-down explanation of why I need to pass a pointer to a pointer? When I did not use objects, I simply saved it as type ID3D11Buffer*.
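
My current guess, written as a plain C++ sketch with made-up names (nothing DirectX-specific), is that the function only ever receives a copy of my pointer unless I pass the pointer's address:

void createThing(int* p)		// p is a copy of the caller's pointer
{
	p = new int(42);		// only the local copy changes; the caller never sees it
}

void createThingProperly(int** pp)
{
	*pp = new int(42);		// writes through to the caller's pointer
}

int* thing = NULL;
createThing(thing);			// thing is still NULL afterwards
createThingProperly(&thing);		// thing now points at the new int

Is that the right way to think about it?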

 

Thanks
