Variable Speed

11 comments, last by TomKQT 13 years, 1 month ago
A few days ago, I started reading 3D Game Programming with DirectX 9.0c by Frank Luna. I thought it would be fun to write my own wrapper for DX, so I threw one together last night. The problem is that when I launch the application, it apparently decides to run at one of two speeds: fast or slow. (To be clear, it isn't really slow; it's just noticeably slower.)

Fast: http://imgur.com/ReJ2Xl&UH61c
Slow: http://imgur.com/ReJ2X&UH61cl

As you can see, it's only about a 0.02ms difference, but I don't know what's causing it. For all I know, it may get progressively worse as I add sprites/text/whatever. When the application decides to run slowly, it never goes faster than what you see in the "Slow" image above.

What I've Noticed/Tried:
  • The speed varies between launches, so at first I thought I was using an uninitialized variable somewhere; however, I checked my code, and all of my variables are initialized.
  • I removed my timer class and measured FPS with Fraps. The application behaves the same way.
  • My test application does the same thing on my friend's computer.
  • I ran Very Sleepy, a profiler, and profiled the application at both speeds. There are differences between the two runs, but as I've never used a profiler before, I don't really know how to interpret the information.
  • Interestingly, Frank Luna's sample programs suffer from this same problem. Either we've both made the same mistake, or something is going wrong when I compile.

I zipped my VS project and source files and uploaded them to Megaupload in case you'd like to take a look, but you'll need Boost installed to compile GameTimer:
http://www.megaupload.com/?d=V2QG6EBS

IDE: Visual Studio 2008 Express
DX SDK: June 2010
Do you even have a timer precise enough to measure 0.02ms?
Perhaps you should put this one aside and worry instead when you go from 15 to 17ms ;)
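
For reference, a quick way to check what the high-resolution Win32 timer can actually resolve (an illustrative snippet, not from the original posts):

[code]
// Print the granularity of the high-resolution counter.
// Resolving a 0.02 ms difference needs at least 50,000 ticks/second.
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);  // ticks per second
    std::printf("%.3f microseconds per tick\n",
                1.0e6 / static_cast<double>(freq.QuadPart));
    return 0;
}
[/code]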

[quote]
Do you even have a timer precise enough to measure 0.02ms?
Perhaps you should put this one aside and worry instead when you go from 15 to 17ms ;)
[/quote]

I think it is ms per frame:
mMSPerFrame = 1000.0f/mFPS;
If it isn't, I'm very confused.

EDIT - Using dimensional analysis:
(Frames / 1 second) × (1 second / 1000 ms) = Frames / 1000 ms, with the seconds cancelling.

...so to get ms per frame, I just have to take the reciprocal of Frames/ms. Maybe I'm missing something though.
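
To make the arithmetic concrete (a sketch using the thread's variable names; the sample values are made up):

[code]
// Reciprocal relationship between FPS and milliseconds per frame.
float mFPS        = 500.0f;           // e.g. 500 frames counted in one second
float mMSPerFrame = 1000.0f / mFPS;   // = 2.0 ms per frame
// Going from 2.00 ms to 2.02 ms per frame is 500 -> ~495 FPS, which is
// why a 0.02 ms delta is hard to even measure reliably.
[/code]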
.02ms is not a difference in speed that you should care about. That kind of a speed difference can come from so many things because it is such a small amount.
Wisdom is knowing when to shut up, so try it.
--Game Development http://nolimitsdesigns.com: Reliable UDP library, Threading library, Math Library, UI Library. Take a look, it's all free.

[quote name='smasherprog']
.02ms is not a difference in speed that you should care about. That kind of a speed difference can come from so many things because it is such a small amount.
[/quote]

It's not that I feel like I'll need the 0.02ms later. It just seems like I've done something wrong. Do your D3D applications exhibit the behavior that I described in the first post?

[quote]
[quote name='smasherprog' timestamp='1299378192' post='4782296']
.02ms is not a difference in speed that you should care about. That kind of a speed difference can come from so many things because it is such a small amount.
[/quote]
It's not that I feel like I'll need the 0.02ms later. It just seems like I've done something wrong. Do your D3D applications exhibit the behavior that I described in the first post?
[/quote]

If you are talking about “Introduction to 3D Game Programming with DirectX 9.0”, he uses timeGetTime() to get time readouts.
Always use QueryPerformanceCounter() for time functions.
Specifically, you should never use milliseconds to measure game time, and timeGetTime() can lag/stutter anyway.
Also, he is wrong to cast timeGetTime() to a float. Never cast your game time to a floating-point type. You may only cast deltas (time since the last frame).

That aside, there is nothing inherently wrong with your foundation yet.

Make a CTime class that uses QueryPerformanceCounter() to get time data and always measure game time in microseconds, and never ever cast the game time to a floating-point type.
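
A minimal sketch of that idea, assuming Win32 (everything beyond the CTime name and the QueryPerformanceCounter()/QueryPerformanceFrequency() calls is illustrative):

[code]
#include <windows.h>
#include <cstdint>

class CTime {
public:
    CTime() : m_lastMicros(0), m_curMicros(0) {
        LARGE_INTEGER freq;
        QueryPerformanceFrequency(&freq);              // counts per second
        m_freq  = static_cast<uint64_t>(freq.QuadPart);
        m_start = Counter();
    }

    // Call once per frame.
    void Update() {
        m_lastMicros = m_curMicros;
        // Integer math only: convert raw counts to microseconds.
        m_curMicros = (Counter() - m_start) * 1000000ULL / m_freq;
    }

    // Total game time stays a 64-bit integer, never a float.
    uint64_t TotalMicros() const { return m_curMicros; }

    // Only the small per-frame delta is cast to a floating-point type.
    float DeltaSeconds() const {
        return static_cast<float>(m_curMicros - m_lastMicros) * 1.0e-6f;
    }

private:
    static uint64_t Counter() {
        LARGE_INTEGER li;
        QueryPerformanceCounter(&li);
        return static_cast<uint64_t>(li.QuadPart);
    }

    uint64_t m_freq, m_start, m_lastMicros, m_curMicros;
};
[/code]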


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

If you're going to recommend using QueryPerformanceCounter please please PLEASE also highlight the fact that it has known issues on modern PCs and that there is additional work you need to do to get around them. Otherwise it's a bad recommendation which should be ignored.

QueryPerformanceCounter will still go berserk on an Intel i7 running Windows 7 unless you know exactly what you're doing with it.
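
One workaround commonly suggested at the time was to pin the timing reads to a single core, so that per-core counter drift can't make readings jump; a sketch of the idea (not a complete fix):

[code]
#include <windows.h>
#include <cstdint>

// Read the counter with the calling thread temporarily pinned to CPU 0.
uint64_t QueryCounterOnOneCore() {
    HANDLE thread = GetCurrentThread();
    DWORD_PTR oldMask = SetThreadAffinityMask(thread, 1);  // pin to CPU 0
    LARGE_INTEGER li;
    QueryPerformanceCounter(&li);
    SetThreadAffinityMask(thread, oldMask);                // restore affinity
    return static_cast<uint64_t>(li.QuadPart);
}
[/code]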

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.


[quote name='L. Spiro']
If you are talking about “Introduction to 3D Game Programming with DirectX 9.0”, he uses timeGetTime() to get time readouts.
Always use QueryPerformanceCounter() for time functions.
Specifically, you should never use milliseconds to measure game time, and timeGetTime() can lag/stutter anyway.
Also, he is wrong to cast timeGetTime() to a float. Never cast your game time to a floating-point type. You may only cast deltas (time since the last frame).

That aside, there is nothing inherently wrong with your foundation yet.

Make a CTime class that uses QueryPerformanceCounter() to get time data and always measure game time in microseconds, and never ever cast the game time to a floating-point type.
[/quote]

Actually, he is using QueryPerformanceCounter in the book. Perhaps this is a different edition than the one you read.

So if the game time shouldn't be cast to a float, do I keep track of it in counts until it's needed?
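
(One way to do exactly that, sketched for illustration: keep the raw counts in 64-bit integers and convert only the per-frame delta at the point of use.)

[code]
#include <windows.h>
#include <cstdint>

static int64_t g_prevCounts = 0;   // prime with one QPC read before the loop

float FrameDeltaSeconds(int64_t countsPerSecond) {
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    const int64_t delta = now.QuadPart - g_prevCounts;  // exact integer math
    g_prevCounts = now.QuadPart;
    // Only the small delta is ever cast to float, never the total time.
    return static_cast<float>(delta) / static_cast<float>(countsPerSecond);
}
[/code]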


[quote]
If you're going to recommend using QueryPerformanceCounter please please PLEASE also highlight the fact that it has known issues on modern PCs and that there is additional work you need to do to get around them. Otherwise it's a bad recommendation which should be ignored.

QueryPerformanceCounter will still go berserk on an Intel i7 running Windows 7 unless you know exactly what you're doing with it.
[/quote]

Thanks for the heads up. MSDN does say that QueryPerformanceCounter can behave oddly on multicore PCs if it's called on different cores. Is that the problem you're referring to, or is it a problem specifically with Intel i7s?
Your game is not going to be running at thousands of frames per second. The tearing would be unacceptable - you will be vertically synced. Benchmark real situations.

I suspect you might have a dual-core processor, and the difference is that if your application is started on a "busier" core, it will run fractionally slower. OSes try not to switch threads between processors if they can avoid it, because doing so is quite expensive. This is why it varies from run to run.

I don't think it is worth worrying about.
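
A quick way to test that theory is to lock the whole process to one core and see whether the fast/slow split between launches disappears (illustrative; 0x1 means the first logical CPU):

[code]
#include <windows.h>

int main() {
    // If run-to-run speed differences vanish with this in place,
    // core placement was the likely cause.
    SetProcessAffinityMask(GetCurrentProcess(), 0x1);
    // ... run the game loop as usual ...
    return 0;
}
[/code]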

[quote]
If you're going to recommend using QueryPerformanceCounter please please PLEASE also highlight the fact that it has known issues on modern PCs and that there is additional work you need to do to get around them. Otherwise it's a bad recommendation which should be ignored.

QueryPerformanceCounter will still go berserk on an Intel i7 running Windows 7 unless you know exactly what you're doing with it.
[/quote]


Do you have any good source for this?
As far as I know, QueryPerformanceCounter does not have any of those problems on anything built in the last several years. Please enlighten me if I'm mistaken. :)

This topic is closed to new replies.
