Problem with displaying a cube in DX11


Cranberry

Hey Guys,

I'm developing a game using DirectX 11, so I started off by creating the basic structure and trying to render a simple cube.

The problem is that the cube is somehow stretched vertically across the window, and the colors (red for the top, blue for the bottom) are mixed up.

I checked everything via the Graphics Debugger and all the values seem to be okay, but the cube already looks strange in the vertex shader output.

I thought there might be a problem with the projection matrix, but it looks okay to me:

m_ProjectionMatrix = XMMatrixPerspectiveFovLH(0.25f*3.14159265359, (float)m_pWindow->GetHeight() / (float)m_pWindow->GetWidth(), 1.0f, 1000.0f);


Has anyone run into the same problem, or does anybody know how I could solve it?

Cranberry


MJP

Are you setting the viewport properly?
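That is, are you filling out a D3D11_VIEWPORT that matches your back buffer and binding it with RSSetViewports? Something like this sketch (backBufferWidth/backBufferHeight and pContext are placeholder names):

D3D11_VIEWPORT vp = {};
vp.TopLeftX = 0.0f;
vp.TopLeftY = 0.0f;
vp.Width    = static_cast<float>(backBufferWidth);   // placeholder: your back buffer width
vp.Height   = static_cast<float>(backBufferHeight);  // placeholder: your back buffer height
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
pContext->RSSetViewports(1, &vp);                    // pContext: your ID3D11DeviceContext*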

Cranberry
D3D11_VIEWPORT viewPort;
viewPort.TopLeftX = 0.0f;
viewPort.TopLeftY = 0.0f;
viewPort.Width = static_cast<float>(pWindow->GetWidth());
viewPort.Height = static_cast<float>(pWindow->GetHeight());
viewPort.MinDepth = 0.0f;
viewPort.MaxDepth = 1.0f;

Looks fine to me.

MJP

Are you transposing your matrices before you upload them to the constant buffer? Can you post your vertex shader?
Cranberry

cbuffer cbPerObject
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

struct VertexIn
{
    float3 Pos   : POSITION;
    float4 Color : COLOR;
};

struct VertexOut
{
    float4 PosH  : SV_POSITION;
    float4 Color : COLOR;
};

VertexOut VS(VertexIn vin)
{
    VertexOut vout;
    vout.PosH = mul(float4(vin.Pos, 1.0f), worldMatrix);
    vout.PosH = mul(vout.PosH, viewMatrix);
    vout.PosH = mul(vout.PosH, projectionMatrix);
    vout.Color = vin.Color;
    return vout;
}

float4 PS(VertexOut pin) : SV_TARGET
{
    return pin.Color;
}


Yeah, I transpose all three of them.

Here's my Render function, maybe you can spot a mistake:

void Game::Render()
{
    m_pRenderSystem->PreRender();

    // Set geometry buffers
    UINT stride = sizeof(Vertex);
    UINT offset = 0;

    m_pRenderSystem->GetDeviceContext()->IASetVertexBuffers(0, 1, &m_pCubeVertexBuffer, &stride, &offset);
    m_pRenderSystem->GetDeviceContext()->IASetIndexBuffer(m_pCubeIndexBuffer, DXGI_FORMAT_R32_UINT, 0);

    // Set constant buffer
    D3D11_BUFFER_DESC cbPerObjectDesc;
    cbPerObjectDesc.Usage = D3D11_USAGE_DYNAMIC;
    cbPerObjectDesc.ByteWidth = sizeof(cbPerObject);
    cbPerObjectDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
    cbPerObjectDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    cbPerObjectDesc.MiscFlags = 0;
    cbPerObjectDesc.StructureByteStride = 0;

    m_pRenderSystem->GetDevice()->CreateBuffer(&cbPerObjectDesc, 0, &m_cbPerObject);

    D3D11_MAPPED_SUBRESOURCE mappedResource;
    cbPerObject* dataPtr;
    unsigned int bufferNumber;

    m_pRenderSystem->GetDeviceContext()->Map(m_cbPerObject, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    dataPtr = (cbPerObject*)mappedResource.pData;

    XMMatrixTranspose(m_WorldMatrix);
    XMMatrixTranspose(m_ViewMatrix);
    XMMatrixTranspose(m_ProjectionMatrix);

    dataPtr->worldMatrix = m_WorldMatrix;
    dataPtr->viewMatrix = m_ViewMatrix;
    dataPtr->projectionMatrix = m_ProjectionMatrix;

    m_pRenderSystem->GetDeviceContext()->Unmap(m_cbPerObject, 0);

    // Set the position of the constant buffer in the vertex shader.
    bufferNumber = 0;

    // Finally set the constant buffer in the vertex shader with the updated values.
    m_pRenderSystem->GetDeviceContext()->VSSetConstantBuffers(bufferNumber, 1, &m_cbPerObject);

    // Draw cube
    m_pRenderSystem->GetDeviceContext()->DrawIndexed(36, 0, 0);

    m_pRenderSystem->PostRender();
}


ongamex92

vout.PosH = mul(float4(vin.Pos, 1.0f), worldMatrix);
vout.PosH = mul(vout.PosH, viewMatrix);
vout.PosH = mul(vout.PosH, projectionMatrix);

and

XMMatrixTranspose(m_WorldMatrix);

Those together should give you the desired multiplication order. (I'm not up to date with XMMath.) Alternatively, you can add #pragma pack_matrix(row_major) at the top of your shader and skip the transpose entirely.

MJP

XMMatrixTranspose will return the transpose of the matrix, it won't modify the matrix that's passed in as its argument. So you either need to assign the result to a temporary variable, or assign the result directly to your mapped constant buffer data.
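For example, using the names from your Render function, you can write the transposed matrices straight into the mapped buffer:

dataPtr->worldMatrix      = XMMatrixTranspose(m_WorldMatrix);
dataPtr->viewMatrix       = XMMatrixTranspose(m_ViewMatrix);
dataPtr->projectionMatrix = XMMatrixTranspose(m_ProjectionMatrix);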

As the above poster suggested, you can tell the shader compiler to expect row-major matrices which will allow you to skip the annoying transpose step in your C++ code. To do this you can use the pragma that he mentioned, or you can add "row_major" to the declaration of the matrix in your HLSL constant buffer definition. There are also flags that you can pass to fxc.exe or D3DCompile that will achieve the same thing.
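A minimal sketch of the compile-flag route, assuming you compile your shader from a file; the file name, entry point, and target profile here are placeholders:

#include <d3dcompiler.h>

// D3DCOMPILE_PACK_MATRIX_ROW_MAJOR tells the compiler to treat all matrices
// as row-major, so the CPU-side transpose is no longer needed.
UINT flags = D3DCOMPILE_PACK_MATRIX_ROW_MAJOR;
ID3DBlob* vsBlob = nullptr;
ID3DBlob* errorBlob = nullptr;
HRESULT hr = D3DCompileFromFile(L"Cube.hlsl", nullptr, nullptr,
                                "VS", "vs_5_0", flags, 0, &vsBlob, &errorBlob);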

Cranberry

Thank you so much!

I was thinking of the old D3DXMatrixTranspose, which wrote the result back through its output pointer.
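For anyone else who hits this, the distinction in code:

#include <DirectXMath.h>
using namespace DirectX;

XMMATRIX m = XMMatrixIdentity();
XMMatrixTranspose(m);       // no-op: the return value is discarded, m is unchanged
m = XMMatrixTranspose(m);   // correct: assign the returned transpose back to m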
