# DX11 Render issue; please tell the dumb newbie where he's going wrong

## Recommended Posts

Apologies if this is in the wrong forum, as it may be a general problem, but I am specifically using the DirectX 11 runtime. I'm sure my problem is some basic matter that I just don't understand yet, and I'm hopeful someone can help. I'm new to graphics, and sometimes I don't know what it is that I don't yet know.

Here's my situation. I have a dirt-simple DX11 renderer into which I'm loading mesh data that was parsed in from FBX (created in Maya and Blender). As long as the mesh is a simple primitive like a cube or a sphere, everything works wonderfully. The mesh is recreated faithfully on my screen, and in fact I was getting to work on pixel shaders when I noticed something odd.

If the mesh is less simple -- say, if I grabbed a face in Maya and moved or extruded it a bit -- then that extruded geometry does not clip properly when I render the mesh. Exhibit A:

As you can see, the modified geometry doesn't cull correctly.

I didn't know what to make of that, so I made a couple of geometry shaders to help me debug. That's what you see in the video. The lines are the calculated face normals, and the vertex colors are coded R, G, B in the order received by the geometry shader. I had originally thought that the offending polys might be getting wound cw rather than ccw as a result of operations in Maya, but they all appear to be ccw. The calculated normals also appear to be pointing the right way.

I've looked at my depth/stencil states and they appear fine, but I expected that since I can draw spheres and cubes and such correctly all day. My near clip plane isn't set to zero; I thought of that.
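For what it's worth, my depth/stencil state is basically the textbook one. Condensed down to a sketch (`device` and `context` here stand in for my renderer's members, so don't treat this as my literal code), it amounts to:

```cpp
#include <d3d11.h>

// Plain depth testing: write depth, pass when nearer, stencil off.
// This just spells out the D3D11 defaults explicitly.
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable    = TRUE;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc      = D3D11_COMPARISON_LESS;
dsDesc.StencilEnable  = FALSE;

ID3D11DepthStencilState* dsState = nullptr;
HRESULT hr = device->CreateDepthStencilState( &dsDesc, &dsState );
if( SUCCEEDED( hr ) )
    context->OMSetDepthStencilState( dsState, 0 );
```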

Frankly I don't know what else to check. This has been driving me batty all day. Please send help.

** edit

Just wanted to add that I do have the debug layers going and I'm checking my HRESULTs, and nothing is amiss as far as that goes. And I'm drawing with an opaque blend state.
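In case it's relevant, the debug layer goes on at device creation, roughly like this (again a condensed sketch, not my exact code):

```cpp
#include <d3d11.h>

// Create the device with the debug layer in debug builds so the runtime
// validates API usage and prints warnings to the output window.
UINT flags = 0;
#if defined( _DEBUG )
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

ID3D11Device*        device       = nullptr;
ID3D11DeviceContext* context      = nullptr;
D3D_FEATURE_LEVEL    featureLevel = {};

HRESULT hr = D3D11CreateDevice(
    nullptr,                    // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                    // no software rasterizer module
    flags,
    nullptr, 0,                 // default feature levels
    D3D11_SDK_VERSION,
    &device, &featureLevel, &context );
```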


##### kubera

This is the correct forum.

It looks like your algorithm is ignoring depth.

Maybe you could paste your HLSL here.

##### Anonymous Noob

That would be nice, but if that were the case, wouldn't everything be drawing incorrectly? In an effort to nail down the problem, I've been running a very simplified shader on this:

```hlsl
// Vertex shader: plain world/view/projection transform.
cbuffer mat_camera : register( b0 )
{
    matrix gView;
    matrix gProjection;
    float3 gCamPosition;
    float3 gCamTarget;
};

cbuffer mat_object : register( b1 )
{
    matrix gWorld;
};

struct VS_IN
{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float3 tangent  : TANGENT;
    float3 binormal : BINORMAL;
    float2 uv       : TEXCOORD0;
};

struct VS_OUT
{
    float4 position : SV_POSITION;
    float3 normal   : NORMAL;
    float3 tangent  : TANGENT;
    float3 binormal : BINORMAL;
    float2 uv       : TEXCOORD0;
};

VS_OUT main( VS_IN IN )
{
    VS_OUT OUT;

    // Transform to clip space: world, then view, then projection.
    OUT.position = float4( IN.position.xyz, 1.0 );
    OUT.position = mul( OUT.position, gWorld );
    OUT.position = mul( OUT.position, gView );
    OUT.position = mul( OUT.position, gProjection );

    // Rotate the normal into world space (no non-uniform scale here).
    OUT.normal = normalize( mul( IN.normal, (float3x3)gWorld ) );

    OUT.tangent  = IN.tangent;
    OUT.binormal = IN.binormal;
    OUT.uv       = IN.uv;

    return OUT;
}
```

```hlsl
// Pixel shader: straight texture sample, no lighting.
Texture2D    gTex;
SamplerState gSampler;

struct PS_IN
{
    float4 position : SV_POSITION;
    float3 normal   : NORMAL;
    float3 tangent  : TANGENT;
    float3 binormal : BINORMAL;
    float2 uv       : TEXCOORD0;
};

float4 main( PS_IN IN ) : SV_TARGET
{
    float4 pixel = gTex.Sample( gSampler, IN.uv );
    return pixel;
}
```


##### Mona2000

It looks like a depth/stencil issue to me. Can you post your depth/stencil-related code and/or double-check it with a frame debugger?

> I've looked at my depth/stencil states and they appear fine, but I expected that since I can draw spheres and cubes and such correctly all day.

Your spheres and cubes render correctly because of back-face culling (i.e. you never have two front faces "overlapping").
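To illustrate (just a sketch of a typical rasterizer setup, assuming the CCW-front convention you described): with back-face culling on, a convex mesh never has two surviving triangles covering the same pixel, so a dead depth buffer stays invisible.

```cpp
// Typical rasterizer state: cull back faces. On a convex mesh (cube,
// sphere) the remaining front faces never overlap on screen, so the
// image looks right even if depth testing is silently broken.
D3D11_RASTERIZER_DESC rsDesc = {};
rsDesc.FillMode              = D3D11_FILL_SOLID;
rsDesc.CullMode              = D3D11_CULL_BACK;
rsDesc.FrontCounterClockwise = TRUE;   // CCW = front, matching your winding
rsDesc.DepthClipEnable       = TRUE;

ID3D11RasterizerState* rsState = nullptr;
device->CreateRasterizerState( &rsDesc, &rsState );
context->RSSetState( rsState );
```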

##### Anonymous Noob

Solved. So after all that, it turned out to be the order of operations in my handler for WM_SIZE. There's a very good reason it was acting like there was no depth buffer: the call to OMSetRenderTargets came BEFORE I recreated the depth view. Oh, deary me. Obvious, really, once I saw it. And of course OMSetRenderTargets has to be a void return; an HRESULT_STUPID_USER return might have saved me some time.

Mona, you were spot on. Thanks for pointing me in the right direction.
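For posterity, the fixed WM_SIZE path boils down to the following (heavily condensed, error checks dropped; `rtv`, `dsv`, `depthTex`, `swapChain`, `width` and `height` stand in for my renderer's members, so treat this as a sketch rather than my actual code):

```cpp
// Unbind and release everything that references the old buffers first.
context->OMSetRenderTargets( 0, nullptr, nullptr );
if( rtv )      { rtv->Release();      rtv = nullptr; }
if( dsv )      { dsv->Release();      dsv = nullptr; }
if( depthTex ) { depthTex->Release(); depthTex = nullptr; }

swapChain->ResizeBuffers( 0, width, height, DXGI_FORMAT_UNKNOWN, 0 );

// Recreate the render target view from the resized back buffer.
ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), (void**)&backBuffer );
device->CreateRenderTargetView( backBuffer, nullptr, &rtv );
backBuffer->Release();

// Recreate the depth buffer and its view at the new size.
D3D11_TEXTURE2D_DESC td = {};
td.Width            = width;
td.Height           = height;
td.MipLevels        = 1;
td.ArraySize        = 1;
td.Format           = DXGI_FORMAT_D24_UNORM_S8_UINT;
td.SampleDesc.Count = 1;
td.Usage            = D3D11_USAGE_DEFAULT;
td.BindFlags        = D3D11_BIND_DEPTH_STENCIL;
device->CreateTexture2D( &td, nullptr, &depthTex );
device->CreateDepthStencilView( depthTex, nullptr, &dsv );

// My bug: this bind used to happen BEFORE the depth view was recreated,
// so the pipeline ran with no depth buffer at all.
context->OMSetRenderTargets( 1, &rtv, dsv );
```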

** edit

In retrospect, you know what else should have been a dead giveaway? The line stream coming out of my geometry shader, back when I thought it might have been poly winding. The lines weren't depth-clipped at all. Bloody hell, I must have taken a stupid pill this morning.

