Windowed VS Not Windowed - Major Speed Differences

This topic is 4681 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Why is my windowed Direct3D so much faster than full screen? I have a triangle that rotates around on the screen. In the time it takes the triangle to rotate once fullscreen, it must have rotated 10 times in windowed mode. Here's the initialization code for both modes:
////////////////////////////////
// InitWindowed()
////////////////////////////////
BOOL InitWindowed( D3DPRESENT_PARAMETERS* D3DPresentParams )
{
	D3DDISPLAYMODE display;
	HRESULT hResult = g_pDirect3D->GetAdapterDisplayMode(
		D3DADAPTER_DEFAULT, &display );
	if( hResult != D3D_OK )
		return FALSE;

	D3DPresentParams->Windowed = TRUE;
	D3DPresentParams->BackBufferFormat = display.Format;
	D3DPresentParams->SwapEffect = D3DSWAPEFFECT_DISCARD;
	D3DPresentParams->hDeviceWindow = g_hWnd;
	D3DPresentParams->BackBufferCount = 1;

	return TRUE;
}

////////////////////////////////
// InitFullscreen()
////////////////////////////////
void InitFullscreen( D3DPRESENT_PARAMETERS* D3DPresentParams )
{
	D3DPresentParams->Windowed = FALSE;
	D3DPresentParams->BackBufferCount = 1;
	D3DPresentParams->BackBufferWidth = 1024;
	D3DPresentParams->BackBufferHeight = 768;
	D3DPresentParams->BackBufferFormat = D3DFMT_X8R8G8B8;
	D3DPresentParams->SwapEffect = D3DSWAPEFFECT_DISCARD;
	D3DPresentParams->hDeviceWindow = g_hWnd;
}

I use X8R8G8B8 for fullscreen regardless of the graphics card's current format. I also tried initializing fullscreen mode the same way as my windowed mode, but it made no difference. Is there a reason for this major speed difference? Here are a couple more relevant snippets. First, setting up the vertex buffer and storing the vertices:
////////////////////////////////
// Setup()
////////////////////////////////
BOOL Setup()
{
	g_pDevice->SetRenderState( D3DRS_LIGHTING, FALSE );
	g_pDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_NONE );

	HRESULT hResult = g_pDevice->CreateVertexBuffer( 3*sizeof(CUSTOMVERTEX),
		D3DUSAGE_WRITEONLY, MyFVF, D3DPOOL_MANAGED, &pVertexBuffer );
	if( FAILED( hResult ) )
	{
		DXTRACE_ERR( "Error creating vertex buffer", hResult );
		return FALSE;
	}

	VOID* pVertices;
	hResult = pVertexBuffer->Lock( 0, 0, (BYTE**)&pVertices, 0 );
	if( FAILED( hResult ) )
	{
		DXTRACE_ERR( "Error locking vertex buffer", hResult );
		return FALSE;
	}
	memcpy( pVertices, triangle, sizeof( triangle ) );
	pVertexBuffer->Unlock();

	g_pDevice->SetStreamSource( 0, pVertexBuffer, sizeof( CUSTOMVERTEX ) );
	g_pDevice->SetVertexShader( MyFVF );
	
	SetupWorld();

	return TRUE;
}

////////////////////////////////
// SetupWorld()
////////////////////////////////
void SetupWorld()
{
	D3DXMATRIX matView, matProj;

	D3DXMatrixScaling( &matScale, 5.0f, 5.0f, 5.0f );
	g_pDevice->SetTransform( D3DTS_WORLD, &matScale );

	D3DXMatrixLookAtLH( &matView,
		&D3DXVECTOR3( 0.0f, 0.0f, -15.0f ),		// Camera position
		&D3DXVECTOR3( 0.0f, 0.0f, 0.0f ),		// Look at position
		&D3DXVECTOR3( 0.0f, 1.0f, 0.0f ) );		// Up vector
	g_pDevice->SetTransform( D3DTS_VIEW, &matView );

	D3DXMatrixPerspectiveFovLH( &matProj,
		D3DX_PI / 4, 1.0f, 1.0f, 500.0f );
	g_pDevice->SetTransform( D3DTS_PROJECTION, &matProj );

	return;
}

Render function, called per frame:
////////////////////////////////
// Render()
////////////////////////////////
void Render()
{
	g_pDevice->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0,0,0 ), 1.0f, 0 );
	g_pDevice->BeginScene();

	D3DXMATRIX matRotX;
	D3DXMATRIX matRotY;
	D3DXMATRIX matRotZ;
	g_iRotationFactor += 0.004f;
	D3DXMatrixRotationX( &matRotX, g_iRotationFactor );
	D3DXMatrixRotationY( &matRotY, g_iRotationFactor );
	D3DXMatrixRotationZ( &matRotZ, g_iRotationFactor );
	D3DXMatrixMultiply( &matWorld, &matRotY, &matRotZ );
	D3DXMatrixMultiply( &matWorld, &matWorld, &matRotX );
	D3DXMatrixMultiply( &matWorld, &matScale, &matWorld );
	g_pDevice->SetTransform( D3DTS_WORLD, &matWorld );

	g_pDevice->DrawPrimitive( D3DPT_TRIANGLELIST, 0, 1 );
	g_pDevice->EndScene();
	g_pDevice->Present( NULL, NULL, NULL, NULL );
}

Debug output is almost the same either way. Any insight on this would be appreciated.

Share this post


Link to post
Share on other sites
When you are in fullscreen mode, the framerate is limited to the monitor's refresh rate (60 Hz, 72 Hz, etc.).

The easiest way to get around this is to implement a timer function and limit your program to the framerate you want (usually 30 fps). For your polygon updates, use something like:

newtime = Timer.GetCurrentTime();
if(newtime >= oldtime)
{
Do Stuff....
Update Poly....
oldtime = newtime + 30;
}

This will limit your poly to updating once every 30 timer units (usually ms).
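In compilable form, that limiter boils down to something like this (a sketch; ShouldUpdate, nowMs, nextUpdateMs and intervalMs are invented names, and nowMs would come from whatever timer you use, e.g. timeGetTime()):

```cpp
#include <cstdint>

// Hypothetical helper for the limiter described above. Returns true when
// it is time for another update and schedules the next one; intervalMs = 30
// gives roughly 33 updates per second.
bool ShouldUpdate( uint32_t nowMs, uint32_t& nextUpdateMs, uint32_t intervalMs )
{
    if( nowMs < nextUpdateMs )
        return false;               // too early; skip the update this frame
    nextUpdateMs = nowMs + intervalMs;
    return true;
}
```

In the render loop you would then wrap the rotation update in something like if( ShouldUpdate( timeGetTime(), g_nextUpdate, 30 ) ).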

Quote:
Original post by IndigoStallion
When you are in fullscreen mode, the framerate is limited to the monitor's refresh rate (60 Hz, 72 Hz, etc.).

The easiest way to get around this is to implement a timer function and limit your program to the framerate you want (usually 30 fps). For your polygon updates, use something like:

newtime = Timer.GetCurrentTime();
if(newtime >= oldtime)
{
Do Stuff....
Update Poly....
oldtime = newtime + 30;
}

This will limit your poly to updating once every 30 timer units (usually ms).

Never lock the framerate. Instead, give all movements in units / second, and then multiply them by the number of seconds passed between each frame.
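Applied to the spinning triangle above, that means advancing the angle by radians per second rather than by a fixed amount per frame (a sketch; the names are illustrative, and deltaSeconds would come from your frame timer):

```cpp
// Frame-rate-independent rotation: whatever the fps, the triangle turns
// through the same angle in the same amount of wall-clock time.
float AdvanceRotation( float angleRad, float speedRadPerSec, float deltaSeconds )
{
    return angleRad + speedRadPerSec * deltaSeconds;
}
```

Each frame you would call g_iRotationFactor = AdvanceRotation( g_iRotationFactor, speed, dt ); instead of the fixed += 0.004f.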

Does this frame rate apply to windowed, fullscreen, or both? Where should I implement the techniques you've shown?

Secondly, does this imply the triangle will rotate faster or slower?

Quote:
Original post by IndigoStallion
When you are in fullscreen mode, the framerate is limited to the monitor's refresh rate (60 Hz, 72 Hz, etc.).

The easiest way to get around this is to implement a timer function and limit your program to the framerate you want (usually 30 fps).

Actually, I think the easiest way to get around this is simply to disable the sync. You can do that by setting

D3DPRESENT_PARAMETERS.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE

Right now you aren't setting it at all, which leaves it at D3DPRESENT_INTERVAL_DEFAULT (the same behavior as D3DPRESENT_INTERVAL_ONE).
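For reference, in the OP's InitFullscreen() that is one extra assignment (a sketch; the field and flag names shown are the D3D9 ones, and in D3D8 the equivalent field is FullScreen_PresentationInterval):

```cpp
// Request an unthrottled Present(): frames go to the screen as soon as
// they are ready instead of waiting for the vertical retrace.
D3DPresentParams->PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;
// Leaving the field zeroed means D3DPRESENT_INTERVAL_DEFAULT, which
// behaves like D3DPRESENT_INTERVAL_ONE (wait for vsync).
```

Expect tearing with IMMEDIATE, as discussed elsewhere in the thread.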

I agree, never lock the framerate, and if you do, don't make it some horribly low fps like 30. MAYBE after 100-150 fps if you have leftover time you could do some other things. Gamers notice 30fps and how bad it looks.

I triple that. Gamers with good hardware should never have to suffer unnecessarily low framerates.

Quote:

Does this frame rate apply to windowed, fullscreen, or both? Where should I implement the techniques you've shown?

Both. Anything that changes over a period of time should use this technique. It allows for all objects to move at a constant speed regardless of framerate.

As for WHY your app runs faster in windowed mode: it can partially be due to syncing with the refresh rate, but it could also simply be that your hardware was designed to work that way. Just another reason all movement should be time-based.

Quote:
Original post by MasterWorks
I agree, never lock the framerate, and if you do, don't make it some horribly low fps like 30. MAYBE after 100-150 fps if you have leftover time you could do some other things. Gamers notice 30fps and how bad it looks.


Correct. At very high framerates you can experience tearing (on some hardware), so it's not a bad idea to give the user an option to cap the framerate. Valve's Source engine, for example, lets you set a specific rate.

Also, never stall the CPU just to limit the framerate (i.e. with a sleep()). Instead, continue on to other tasks, like AI, physics, etc. That way the game is still being maintained; it's just not rendering.
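As a toy model of that idea (RunLoop and LoopStats are invented names, and each loop iteration stands in for one millisecond of real time), logic runs every iteration while rendering is capped to an interval:

```cpp
#include <cstdint>

// Logic runs every iteration of the loop, while rendering only happens
// when the render interval has elapsed.
struct LoopStats { int logicSteps = 0; int renders = 0; };

LoopStats RunLoop( uint32_t durationMs, uint32_t renderIntervalMs )
{
    LoopStats stats;
    uint32_t nextRenderMs = 0;
    for( uint32_t now = 0; now < durationMs; ++now )   // pretend 1 iteration = 1 ms
    {
        ++stats.logicSteps;                 // AI, physics, etc. every iteration
        if( now >= nextRenderMs )
        {
            ++stats.renders;                // present a frame
            nextRenderMs = now + renderIntervalMs;
        }
    }
    return stats;
}
```

Over 100 simulated milliseconds with a 10 ms render cap, the logic runs 100 times but only 10 frames are presented.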

Quote:
Original post by Raloth
Never lock the framerate. Instead, give all movements in units / second, and then multiply them by the number of seconds passed between each frame.
Careful. You never lock the rendering framerate, sure, but lockstep game updates can make things like networking infinitely easier.

I agree with the second solution of using units per second and checking how much time has passed.

But no one has really explained WHY you have this difference (no offense; one person tried, but I don't think it was too exact): when you're in windowed mode, you're producing FAR fewer pixels, which means you finish rendering sooner, on top of the refresh-rate issues. You have to limit your rendering in some way, and there are a bunch of suggestions in here.

Quote:
Original post by circlesoft
Quote:
Original post by IndigoStallion
When you are in fullscreen mode, the framerate is limited to the monitor's refresh rate (60 Hz, 72 Hz, etc.).

The easiest way to get around this is to implement a timer function and limit your program to the framerate you want (usually 30 fps).

Actually, I think the easiest way to get around this is simply to disable the sync. You can do that by setting

D3DPRESENT_PARAMETERS.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE

Right now you aren't setting it at all, which leaves it at D3DPRESENT_INTERVAL_DEFAULT (the same behavior as D3DPRESENT_INTERVAL_ONE).

Uhmm, circlesoft already pointed out the problem and the solution (above). It seems too much misinformation just causes more misinformation. Stick to the real solution to the real problem.

Guest Anonymous Poster
My program has the same problem too. I have frame-rate locking in my engine (locked to 60 fps, with the monitor refresh rate at 60 Hz), but tearing is still very obvious when objects move quickly in the scene. Any idea on this?

The only way to guarantee a constant game speed without locking the frame rate is to take the time delta between frames and multiply all of your object velocities by it. If you measure the time delta in seconds, your velocities are per second, giving you a clear idea of how much an object will move or rotate in a given time frame.

Note that if you're using the mouse delta for movement, you do NOT multiply by the time delta, because the mouse delta is time-based anyway.

Hope that helps.
Happy coding.
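As a sketch of that per-frame update (Vec3 and Integrate are illustrative names):

```cpp
// A velocity in units per second is scaled by the measured frame delta,
// so objects cover the same distance per wall-clock second at any framerate.
struct Vec3 { float x, y, z; };

Vec3 Integrate( Vec3 pos, Vec3 velPerSec, float deltaSeconds )
{
    pos.x += velPerSec.x * deltaSeconds;
    pos.y += velPerSec.y * deltaSeconds;
    pos.z += velPerSec.z * deltaSeconds;
    return pos;
}
```

At 100 fps, deltaSeconds is about 0.01, so an object moving at 10 units/second advances 0.1 units per frame; at 50 fps it advances 0.2 units per frame, ending up in the same place either way.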


Just a supplemental question if you guys don't mind-

I've been scaling my variable processing by time deltas for a while now, but I always use the time_t data type and the clock() function MS provides for keeping track of time. Is there a faster/more efficient way?

~Raised

Sorry to ask the obvious, but what size is your window?

I mean, we can see your fullscreen mode is 1024x768, but you don't mention the size of your window. The more pixels drawn, the longer a frame takes. If your window is 300x300, that's only about a ninth of the pixels of a 1024x768 screen, so a small window can be much faster to draw than fullscreen.

Luck!
Guimo

In my engine, 320x240 fullscreen is horribly slow and 320x240 windowed is amazingly fast (after all, it only has to draw a handful of pixels). 1024x768 seems to be the optimal resolution for my engine, though 800x600 works very well too.

You can look at the "SkinnedMesh" sample from the DirectX 9.0c SDK (Summer Update) and see how it uses the (float) fElapsedTime parameter in the OnFrameRender() and OnFrameMove() functions, and everywhere else a constant game speed is needed.

I would like to say: never let the fps run wild and high. Lock it, but lock it high, and make sure you have room for slowdowns. 60 fps is a great place to lock the game. PGR2 runs at 30 fps, just to let you know. Just my 2 cents.

Is vsyncing then bad because it locks the framerate? Since it's tied to the monitor's refresh rate, it should cap out somewhere between 60 and 100 fps, which seems like enough. Also, I haven't experienced it myself, but as someone else already mentioned, tearing could be a problem.

60 fps is a terrible place to lock your framerate. It is WAY too low. 100fps looks so much better, assuming your refresh rate is good, but even locking at 100fps is unacceptable.

Vsync is bad because it locks the framerate and the leftover time is wasted. If your game is cruising at a vsync'd 100fps and the action gets hot you might fall to 50 fps for a short time. Which looks terrible. (Remember that fluctuations in frame rate are the most noticeable, even to casual gamers.)

With vsync off, if your game is cruising at 200fps and things get heavy, you might fall to 110fps. As long as this lower fps is still above your refresh rate things will continue to look good. Any (modern) game locked at 30-60fps is poorly implemented IMO.

Of course this refers to the rendering pipeline, the logic/collision/AI/etc pipelines can afford to be updated much less often.

Quote:
Original post by RaisedByWolves
I've been scaling my variable processing by time deltas for a while now, but I always use the time_t data type and the clock() function MS provides for keeping track of time. Is there a faster/more efficient way?

I've always used (and it seems like others do, too) QueryPerformanceCounter. IIRC, it is much more accurate than the other Win32 timing APIs. There are a lot of references to it out on the 'net, so I'm sure you will have no problem using it if you want to. Also, the new SDK sample framework implements a nifty timer class that uses QPC.
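For a modern reader: std::chrono did not exist at the time, but a steady_clock-based frame timer is a portable equivalent of a QueryPerformanceCounter wrapper (GameTimer is an invented name; on recent MSVC, steady_clock is itself typically backed by QPC):

```cpp
#include <chrono>

// A portable stand-in for a QueryPerformanceCounter-based frame timer.
class GameTimer {
public:
    GameTimer() : m_last( std::chrono::steady_clock::now() ) {}

    // Seconds elapsed since the previous Tick(); use this as the frame delta.
    double Tick()
    {
        auto now = std::chrono::steady_clock::now();
        std::chrono::duration<double> dt = now - m_last;
        m_last = now;
        return dt.count();
    }

private:
    std::chrono::steady_clock::time_point m_last;
};
```

Call Tick() once per frame and feed the returned seconds into your movement updates.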

Quote:
Vsync is bad because it locks the framerate and the leftover time is wasted. If your game is cruising at a vsync'd 100fps and the action gets hot you might fall to 50 fps for a short time. Which looks terrible. (Remember that fluctuations in frame rate are the most noticeable, even to casual gamers.)


VSYNC isn't so bad, because it prevents tearing on the monitor. Since both the monitor and hardware are refreshing at the same interval, no ugly artifacts crop up. Also, remember that the human eye itself refreshes at a rate of 70fps (IIRC). Therefore, it is quite impossible to tell the difference between 200fps and 100fps (if the game is implemented correctly and framerate independent).

Quote:
Any (modern) game locked at 30-60fps is poorly implemented IMO.

In my opinion, any game that doesn't push modern hardware enough to get a framerate of 30-60fps is poorly implemented. Applications should push hardware to the edge. Otherwise, what's the purpose of buying new hardware? The real challenge in graphics programming is developing applications that get the maximum out of every machine they run on.

For example, Half-Life 2 fluctuates between 30 and 75 fps on my machine, depending on the environment. Of course, you can turn down the options (I have mine all the way up), but those framerates are perfectly acceptable to me (anything above 50).

I see your point against simply *capping* the framerate, because it is annoying to have a framerate as low as 30. However, if the app simply runs a bit slow (but provides me with a good experience), I am more than fine with it [smile].

Quote:
VSYNC isn't so bad, because it prevents tearing on the monitor. Since both the monitor and hardware are refreshing at the same interval, no ugly artifacts crop up.


I still don't see you addressing my point that when you VSYNC, your framerate will fall below the refresh rate when the action gets hot. A varying framerate looks _BAD_, much worse than "tearing" which I have rarely seen to be an actual problem.

Quote:
Also, remember that the human eye itself refreshes at a rate of 70fps (IIRC).


There is no way to prove the statement that the human eye can only see ~70 fps, or any (XX) fps. The human eye most CERTAINLY does not 'refresh itself' that often; it is not a video camera! The human eye can certainly tell the difference between 60fps and 100fps, and probably up to several hundred fps (although monitors can't do that, so who cares). Ask any good Quake player!

Quote:
In my opinion, any game that doesn't push modern hardware enough to get a framerate of 30-60fps is poorly implemented. Applications should push hardware to the edge. Otherwise, what's the purpose of buying new hardware?


The point of buying new hardware is that you NEED faster hardware to get a steady 100fps. Whether your benchmark is 10fps, 20fps, or 100fps, obviously you can do more with faster hardware. I would agree that an app getting 400fps should add graphical detail. That doesn't mean you have to choke it down to 30fps. -I- always value fluidity over detail. Certainly there might be genres where this doesn't apply, but for FPS/action games I believe it does. 30fps just doesn't give the responsiveness you need to play a good game. Try turning your mouse cursor rate down to 30 Hz; it looks like crap compared to 100 Hz.

Quote:
The real challenge in graphics programming is developing applications that get the maximum out of every machine they run on.


I certainly agree. Our differences lie in our definitions of the word 'maximum'. -I- think that getting the 'maximum' means adding the most graphical detail possible while staying above the monitor refresh at all times (if possible). For any given game you can always add more detail/features, the question is 'at what cost'? If that 'cost' involves going below the refresh rate, then sorry, your hardware isn't good enough. Buy something faster. (Obviously a game that allows you to adjust the detail gives us both what we want.)

Quote:
For example, Half-Life 2 fluctuates between 30 and 75 fps on my machine, depending on the environment. Of course, you can degrade the options (I have mine the entire way up), but those framerates are perfectly acceptable to me (anything above 50).


Well, I guess your standards are different. I would rather play at 100fps. If I was getting less than 60, then I would need to get stronger graphics hardware (as per your question) or less detail. And if I was getting 100fps and you're getting 50, I will have a massive advantage over you playing deathmatch. If you're playing single player or otherwise playing without human opponents, I can certainly see how you would prefer gorgeous pixel shaders to higher framerates. Everything I have said is from the perspective of a very competitive online gamer; certainly you might not care about split second differences in timing for offline play. But playing online you MUST have silky smooth framerates or you will get fragged by somebody who does, assuming the game is both intense and well programmed.

-Jay


