DX11 - Screen Tearing



Has anyone ever had issues with screen tearing under DX11?

- I'm pretty sure I'm buffering properly as per the example programs.
- I present with Present(1,0);
- It is properly limiting my frame rate to 60FPS.

Here is the code I'm using to initialize the swap chain:

    memset(&l_desc,0,sizeof(DXGI_SWAP_CHAIN_DESC));

    // Describe a single-buffered, windowed swap chain at the back buffer size.
    l_desc.BufferCount = 1;
    l_desc.BufferDesc.Width = in_width;
    l_desc.BufferDesc.Height = in_height;
    l_desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    l_desc.BufferDesc.RefreshRate.Numerator = 60;       // hard-coded 60 Hz refresh rate
    l_desc.BufferDesc.RefreshRate.Denominator = 1;
    l_desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    l_desc.OutputWindow = *(HWND *)in_handle;
    l_desc.SampleDesc.Count = 1;                        // no multisampling
    l_desc.SampleDesc.Quality = 0;
    l_desc.Windowed = TRUE;

    // Walk up from the D3D11 device to its DXGI factory, then create the swap chain.
    l_result = l_d3d->QueryInterface(__uuidof(IDXGIDevice),(void **)&l_device);

    if(SUCCEEDED(l_result))
    {
        l_result = l_device->GetParent(__uuidof(IDXGIAdapter),(void **)&l_adapter);

        if(SUCCEEDED(l_result))
        {
            l_result = l_adapter->GetParent(__uuidof(IDXGIFactory),(void **)&l_factory);
            l_adapter->Release();

            if(SUCCEEDED(l_result))
            {
                l_result = l_factory->CreateSwapChain(l_d3d,&l_desc,&m_chain);
                l_factory->Release();

                if(FAILED(l_result))
                {
                    MessageBox(NULL,"Couldn't create a swap chain on the device!","Error!",MB_OK | MB_ICONINFORMATION);
                    return false;
                }
            }
        }

        l_device->Release();
    }

    // Grab the back buffer so a render target view can be created on it.
    l_result = m_chain->GetBuffer(0,__uuidof(ID3D11Texture2D),(LPVOID *)&l_texture2D);

Any ideas as to why I'm still getting horrible tearing? Thanks in advance!


I am not, do not aspire to, nor ever will be, a DirectX programmer.

But, as a C programmer, I can tell you that your error checking can fail silently. I don't know exactly what that code is supposed to do, but it's possible a number of the lines never run due to an error, with no error being reported.

As a graphics programmer, I'll tell you that screen tearing is often caused by a lack of synchronization between the update rate and the refresh rate. It sounded like you already knew that.



Thanks for the feedback! Sadly, the sloppy program flow in the pasted source has nothing to do with my issue. Does anyone have a similar experience with screen tearing in DX11, and if so, how did you solve it?


I'm quite certain that your problem is actually in these two lines:

    l_desc.BufferDesc.RefreshRate.Numerator = 60;
    l_desc.BufferDesc.RefreshRate.Denominator = 1;


Your monitor may not actually be refreshing at 60 Hz, but instead at slightly under or over 60. Rather than just putting in hard-coded values, you should enumerate your display modes properly using IDXGIFactory::EnumAdapters, IDXGIAdapter::EnumOutputs and IDXGIOutput::GetDisplayModeList, then use the refresh rates provided by that enumeration instead.
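Roughly, that enumeration looks like the sketch below. This is untested, the function name is made up, and it assumes the first adapter and first output are the ones you render to and that you want the mode matching your back buffer size:

    #include <d3d11.h>
    #include <dxgi.h>
    #include <vector>

    // Query DXGI for the refresh rate of the display mode matching the back buffer size.
    bool GetRefreshRateForMode(UINT in_width, UINT in_height, DXGI_RATIONAL *out_rate)
    {
        IDXGIFactory *l_factory = NULL;
        if(FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory),(void **)&l_factory)))
            return false;

        IDXGIAdapter *l_adapter = NULL;
        IDXGIOutput *l_output = NULL;
        bool l_found = false;

        if(SUCCEEDED(l_factory->EnumAdapters(0,&l_adapter)) &&
           SUCCEEDED(l_adapter->EnumOutputs(0,&l_output)))
        {
            // First call gets the mode count, second call fills the list.
            UINT l_count = 0;
            l_output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM,0,&l_count,NULL);

            std::vector<DXGI_MODE_DESC> l_modes(l_count);
            if(l_count && SUCCEEDED(l_output->GetDisplayModeList(DXGI_FORMAT_R8G8B8A8_UNORM,0,&l_count,&l_modes[0])))
            {
                for(UINT i = 0; i < l_count; ++i)
                {
                    // Take the refresh rate of the mode that matches the swap chain size.
                    if(l_modes[i].Width == in_width && l_modes[i].Height == in_height)
                    {
                        *out_rate = l_modes[i].RefreshRate;
                        l_found = true;
                    }
                }
            }
        }

        if(l_output) l_output->Release();
        if(l_adapter) l_adapter->Release();
        l_factory->Release();
        return l_found;
    }

The DXGI_RATIONAL that comes back (often not exactly 60/1) is what should go into BufferDesc.RefreshRate instead of the hard-coded values.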


When in windowed mode, try using ID3D11DeviceContext::Flush before Present.
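In code it's just one extra call at the end of the frame; m_context here stands for whatever immediate ID3D11DeviceContext you render with, and m_chain is the swap chain from the post above:

    m_context->Flush();      // flush queued commands to the GPU before presenting
    m_chain->Present(1,0);   // then present with a sync interval of 1 (wait for vblank)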


Erik Rufelt - After a lot of investigation, ID3D11DeviceContext::Flush has indeed solved my problem. Funnily enough, this was one of my first thoughts when I started looking into the issue, but then I got obsessed with the refresh rate numerator/denominator and the fact that my monitors are stuck at 59Hz (which seems to be a common issue on Windows 7). Anyway, thank you very much; the second I added the flush, all tearing ceased!

mhagain - You are indeed correct that enumerating the adapters provides me with more suitable values for the refresh rate. A quick investigation indicates that for my current desktop the proper settings would be 59950 / 1000. Sadly, these settings seem to have no effect on the issue even with the solution (the flush) added. Nonetheless, it is probably a good idea to use this proper setup method, so thank you for pointing it out.

SOLVED!

• Similar Content

• I wanted to see how others are currently handling descriptor heap updates and management.
I've read a few articles, and there tend to be three major strategies:
1) You split up descriptor heaps per shader stage (i.e. one for the vertex shader, pixel, hull, etc.)
2) You have one descriptor heap for an entire pipeline
3) You split up descriptor heaps per update frequency (i.e. EResourceSet_PerInstance, EResourceSet_PerPass, EResourceSet_PerMaterial, etc.)
The benefit of the first two approaches is that they make it easier to port current code, and descriptor management and updating tends to be easier, but they seem to be less efficient.
The benefit of the third approach seems to be that it's the most efficient, because you only manage and update objects when they change (a rough sketch of that layout follows below).
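For what it's worth, here's a very rough sketch of what option 3 can look like with a single shader-visible heap partitioned by update frequency. The struct, names, and region sizes are all made up for illustration, not taken from any particular engine:

    #include <d3d12.h>

    enum EResourceSet { EResourceSet_PerPass = 0, EResourceSet_PerMaterial, EResourceSet_PerInstance, EResourceSet_Count };

    struct DescriptorRegion { UINT start; UINT count; };

    // One shader-visible CBV/SRV/UAV heap, split into regions by update frequency,
    // so only the per-instance region has to be rewritten each frame.
    struct FrameDescriptorHeap
    {
        ID3D12DescriptorHeap *heap = NULL;
        UINT increment = 0;
        DescriptorRegion regions[EResourceSet_Count];

        bool Init(ID3D12Device *device)
        {
            D3D12_DESCRIPTOR_HEAP_DESC desc = {};
            desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
            desc.NumDescriptors = 4096;                              // arbitrary budget
            desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
            if(FAILED(device->CreateDescriptorHeap(&desc,__uuidof(ID3D12DescriptorHeap),(void **)&heap)))
                return false;

            increment = device->GetDescriptorHandleIncrementSize(desc.Type);
            regions[EResourceSet_PerPass]     = { 0,    64 };        // rarely changes
            regions[EResourceSet_PerMaterial] = { 64,   960 };       // changes when a material changes
            regions[EResourceSet_PerInstance] = { 1024, 3072 };      // rewritten every frame
            return true;
        }

        // Base GPU handle for the descriptor table of a given update frequency.
        D3D12_GPU_DESCRIPTOR_HANDLE TableStart(EResourceSet set) const
        {
            D3D12_GPU_DESCRIPTOR_HANDLE h = heap->GetGPUDescriptorHandleForHeapStart();
            h.ptr += UINT64(regions[set].start) * increment;
            return h;
        }
    };

Each root-signature descriptor table then points at one region, and only the PerInstance region gets fresh descriptors copied in (e.g. with CopyDescriptorsSimple) as things change.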

• Hi,
Until now I have used the typical vertex shader approach to skinning, with a constant buffer containing the transform matrices for the bones and the vertex buffer containing bone indices and bone weights.
Now I have implemented real-time environment probe cubemapping, so I have to render my scene from many points of view, and the skinning takes too long because it is recalculated for every side of the cubemap.
For info, I am working on Win7 and therefore use Shader Model 5.0, not 5.x which has more options. Or is there a way to use 5.x on Win7?
My graphics card is a DirectX 12 compatible NVidia GTX 960.
The member turanszkij has posted a compute shader that I found understandable (for info: in his engine he uses an optimized version of it).
Now my questions:
Is it possible to feed the compute shader with my original vertex buffer, or do I have to copy it into several ByteAddressBuffers as implemented in the following code?
The same question applies to the constant buffer of matrices.
My more urgent question is how to feed my normal pipeline with the result of the compute shader, which is two RWByteAddressBuffers containing position and normal.
For example, I could use two vertex buffer bindings:
1. containing only the UV coordinates
2. containing position and normal
How do I copy from the RWByteAddressBuffers to the vertex buffer? (One possible approach is sketched below.)

(Code from turanszkij)
Here is my shader implementation for skinning a mesh in a compute shader:
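Separately from the skinning shader itself, the copy-back question has a fairly simple answer: assuming the compute output buffer and the destination vertex buffer were created with the same ByteWidth, a plain GPU-side copy works. The function and variable names below are just for illustration:

    #include <d3d11.h>

    // Copy the compute shader's output (position + normal) into a vertex buffer,
    // so the normal vertex shader pipeline can read it as a second vertex stream.
    void CopySkinnedResult(ID3D11DeviceContext *context,
                           ID3D11Buffer *skinnedOutput,   // the RWByteAddressBuffer written by the compute shader
                           ID3D11Buffer *vertexBuffer)    // D3D11_USAGE_DEFAULT, D3D11_BIND_VERTEX_BUFFER, same ByteWidth
    {
        // CopyResource requires identical buffer sizes; the copy stays on the GPU.
        context->CopyResource(vertexBuffer, skinnedOutput);
    }

If the hardware and driver accept the bind-flag combination, another option is to create the output buffer with both D3D11_BIND_UNORDERED_ACCESS and D3D11_BIND_VERTEX_BUFFER and bind it directly with IASetVertexBuffers, avoiding the copy entirely.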

• Hi, can someone please explain why this is giving an assertion failure on EyePosition != 0?

It looks like DirectX doesn't want the 2nd parameter to be a zero vector in the assertion, but I passed in a zero vector with this exact same code in another program and it ran just fine. (Here is the version of the code that worked; note that the XMLoadFloat3(&m_lookAt) parameter value is (0,0,0) at runtime, I debugged it, but it throws no exceptions.)
And here is the repo with the alternative version of the code that is working with a value of (0,0,0) for the second parameter.

• Hi, can somebody please tell me, in clear, simple steps, how to debug and step through an HLSL shader file?
I already did Debug > Start Graphics Debugging, captured some frames from Visual Studio, and double-clicked on a frame to open it, but I have no idea where to go from there.

I've been searching for hours and there's no information on this, not even on the Microsoft website!
They say "open the Graphics Pixel History window", but there is no such window!
Then they say to choose "Start Debugging" in the "Pipeline Stages", but the Start Debugging option is nowhere to be found in the whole interface.
Also, how do I even open the HLSL file that I want to set a breakpoint in from inside the Graphics Debugger?

All I want to do is set a breakpoint in a specific HLSL file, step through it, and see the data, but this is so unbelievably complicated.