# DX11 Performance Issue DirectX Maximized Window

## Recommended Posts

I'm developing a test application with Direct3D 11 and feature level 10.1. Everything works as expected and fine, but when I maximize the window with my graphics in it, the time per frame increases drastically - from about 1ms to 40ms. If it stays in a lower resolution range, it works perfectly, but I need to support the maximized resolution as well.

Destination hardware and software specs:

- NVS 300 graphics card
- Windows 7 32-bit
- Resolution 1920x1080
- Application that draws a few sine curves with Direct3D, in C# via SharpDX
- Windows Forms with a control and a SharpDX-initialized swapchain, programmed to change the backbuffer on the resize event (the problem would occur without that too, though)

I used a System.Diagnostics.Stopwatch to narrow the issue down to this line:

mSwapChain.Present(1, PresentFlags.None);

where the time it needs suddenly increases by a lot when maximized. If I drag and resize the window manually, the frame time jumps at some particular resolution, which seems weird. If I comment out the drawing code, I get the same behavior. On my local development machine with an HD 4400 I don't have this issue; there it works and the frame time isn't affected by resizing at all.

Any help is appreciated! I am fairly new to programming in C#, on Windows, and with DirectX as well, so please be kind.

I can buy an NVS 300 for US$6 now, and it's a 2008 architecture - not exactly a powerhouse. First instinct is to simply blame that hardware.

What resolution in windowed mode runs at 1ms, compared to the 40ms at 1080p? Are there any in-between performance levels - e.g. a certain resolution where it takes 10ms per frame?

##### Share on other sites

Haha, yes, it's pretty old hardware now - sadly, it's mandatory to support it though.

Okay, I had the present sync interval set to 1, so it's more like a jump from ~16ms to 40ms, but you know.

If I set it to sync interval 0 and change the creation of the graph window to something like 1600x900, I get all my expected 2-3ms, if not lower, even on that old hardware. I can completely comment out my drawing and still, if I drag the window to a certain resolution (from 1600x900 up toward the maximized resolution), it jumps from that 2-3ms to ~40ms. Staying in that resolution range isn't an option either, though. There aren't any in-between levels: at a certain resolution there is an immediate increase. It is not fullscreen, and there is also a control panel on one side, so even maximized it won't reach 1920x1080.

It's confusing me a lot. I read that Present() can block if frames are queued; I know too little about that to say whether it's the issue, but it is called in an idle loop, so that might hint at it. Then again, can GPU performance really suffer this much from a tiny resolution increase when I drag the window larger?

Thanks for the help!

##### Share on other sites

Yeah, it's certainly odd. That card is supposed to support high-resolution multi-monitor output and hundreds of megs of texture data, so a single ~8MB texture (1920 x 1080 x 4 bytes) should be fine - your "no drawing" situation should have no reason for this performance issue...

I have seen random ~40ms stalls on the GPU side when video memory is over-committed (e.g. trying to use 1.1GB of texture data on a 1GB GPU), as this causes the driver to panic halfway through a frame and shuffle lots of resources around... but... if you're not actually drawing anything, this shouldn't be happening to you.

Present is where the CPU will block if it's getting too far ahead of the GPU. e.g. if the CPU does 1ms of work per frame and the GPU is doing 16ms of work per frame, then you should expect to see the CPU block inside Present for 15ms.

It's a bit complicated, but you can use the GPUView tool to inspect exactly what the GPU is doing at every point in time here and try to identify the stall.

##### Share on other sites

Okay, I'm trying to install it now. Hopefully it can help me, thanks. I will report back if I have trouble or find something with it.

Thanks so far!

##### Share on other sites

Solved by disabling the Aero theme... I noticed the performance suffers a lot if the window gets over the Start button. I switched to the Windows Classic theme as a test, and it worked. Still, I learned something from the tool you showed me, so thanks - day saved!

### Similar Content

• Hi, right now building my engine in Visual Studio involves a shader-compiling step to build HLSL 5.0 shaders. I have a separate project which only includes the shader sources, and the compiler is the Visual Studio-integrated fxc compiler. I like this method because on any PC that has Visual Studio installed, I can just download the solution from GitHub and everything builds without additional dependencies, using the latest version of the compiler. I also like that the shaders are included in the Solution Explorer and are easy to browse and double-click to open (opening files can be a real pain in Visual Studio run in admin mode). It's also nice that VS displays the build output/errors in the output window.
Does anyone have experience with this?

• Hello!
I have a problem with shader reflection for D3D11:
1>engine_render_d3d11_system.obj : error LNK2001: unresolved external symbol IID_ID3D11ShaderReflection
#include <D3Dcompiler.h>
#include <D3DCompiler.inl>
#pragma comment(lib, "D3DCompiler.lib")
//#pragma comment(lib, "D3DCompiler_47.lib")
That's what MSDN tells me, but still no fortune. I think a lot of people have done this already - what am I missing?
I also found advice recommending to use the DirectX SDK headers and libs before the Windows SDK ones, but I am not using the DirectX SDK for this project at all - should I?

• Hi there, this is my first post in what looks to be a very interesting forum.
I am using DirectXTK to put together my 2D game engine, but I would like to use the GPU depth buffer to avoid sorting back-to-front on the CPU, and I think I also want to use GPU instancing. Can I do that with SpriteBatch, or am I looking at implementing my own sprite rendering?

• I am trying to draw a screen-aligned quad with arbitrary sizes.

Currently I just send 4 vertices to the vertex shader like so:
pDevCon->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
pDevCon->Draw(4, 0);

then in the vertex shader I am doing this:
float4 main(uint vI : SV_VERTEXID) : SV_POSITION
{
float2 texcoord = float2(vI & 1, vI >> 1);
return float4((texcoord.x - 0.5f) * 2, -(texcoord.y - 0.5f) * 2, 0, 1);
}
That gets me a screen-sized quad... okay, so what's the correct way to get arbitrary sizes? I have messed around with various numbers, but I think I don't quite get something in these relationships.
One thing I tried is:

float4 quad = float4((texcoord.x - (xpos/screensizex)) * (width/screensizex), -(texcoord.y - (ypos/screensizey)) * (height/screensizey), 0, 1);

where xpos and ypos are the number of pixels from the upper right corner, and width and height are the desired size of the quad in pixels.
This gets me somewhat close, but not right - a bit too small - so I'm missing something. Any ideas?

• By Stewie.G
Hi,
I've been trying to implement a Gaussian blur recently; it would seem the best way to achieve this is by running a blur on one axis, then another blur on the other axis.
I think I have successfully implemented the blur part per axis, but now I have to blend both calls with a proper BlendState - at least I think this is where my problem is.
Here are my passes:
D3DX11_TECHNIQUE_DESC techDesc;
mBlockEffect->mTech->GetDesc( &techDesc );
for(UINT p = 0; p < techDesc.Passes; ++p)
{
    deviceContext->IASetVertexBuffers(0, 2, bufferPointers, stride, offset);
    deviceContext->IASetIndexBuffer(mIB, DXGI_FORMAT_R32_UINT, 0);
    mBlockEffect->mTech->GetPassByIndex(p)->Apply(0, deviceContext);
    deviceContext->DrawIndexedInstanced(36, mNumberOfActiveCubes, 0, 0, 0);
}

[Screenshots: No blur, PS_BlurV, PS_BlurH, P0 + P1]

As you can see, it does not work at all.
I think the issue is in my BlendState, but I am not sure.
I've seen many articles going with the render-to-texture approach, but I've also seen articles where both shaders were called in succession and it worked just fine; I'd like to go with that second approach. Unfortunately, the code was in OpenGL, where the syntax for running multiple passes is quite different (http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/). So I need some help doing the same in HLSL :-)

Thanks!