Fixed(Variable) FPS using hardware

Basically, I've been reading the GameDev article, The Game Loop, and I've gotta take my hat off to the author behind the article and any colleagues (forumers?) that helped him out. Now, the basis behind this thread is to determine the best framerate for your player.

The article uses this code:

const int TICKS_PER_SECOND = 25;
const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
const int MAX_FRAMESKIP = 5;
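For reference, here's roughly how those constants get used in that style of loop (my paraphrase, not the article's exact code - the clock is faked here so the snippet is self-contained and deterministic; a real game would read a hardware timer):

```cpp
#include <cassert>

const int TICKS_PER_SECOND = 25;
const int SKIP_TICKS = 1000 / TICKS_PER_SECOND; // 40 ms per game update
const int MAX_FRAMESKIP = 5;

static int fakeTimeMs = 0;                 // stands in for a real tick counter
int getTickCount() { return fakeTimeMs; }

int updatesRun = 0;
void updateGame() { ++updatesRun; }        // fixed-rate simulation step
void renderFrame() {}                      // runs as often as time allows

// Simulate 'frames' render frames, each taking 'msPerFrame' of wall time.
void runFor(int frames, int msPerFrame) {
    int nextGameTick = getTickCount();
    for (int f = 0; f < frames; ++f) {
        int loops = 0;
        // Catch the simulation up to real time, but never run more than
        // MAX_FRAMESKIP updates per rendered frame.
        while (getTickCount() > nextGameTick && loops < MAX_FRAMESKIP) {
            updateGame();
            nextGameTick += SKIP_TICKS;
            ++loops;
        }
        renderFrame();
        fakeTimeMs += msPerFrame;          // pretend rendering took this long
    }
}
```

The point of the pattern: rendering runs as fast as the machine allows, while `updateGame()` is pinned to 25 calls per second of simulated time regardless.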

However, why should a player be limited to 25 FPS just because the programmer said so? I mean, I run an i7 860 @ 3.5GHz and two AMD 4890s - even most current triple-A games will run well over 25 FPS for me.

I'm not here to belittle the code, but to see whether it's worthwhile making it more efficient. The code there is brilliant, but as an innovator and something of a tinkerer (I fall somewhere between the hardcore techie and the casual tinkerer), I wanted to see if it could be adapted.

Here's my modification to the code. Please note that this is pseudocode, as I haven't nutted out the C++ code as of yet. I'd rather see my pseudocode cop the brunt of the criticism than waste my time on C++ if it's not necessary.

It is important to note that I know it isn't perfect, and that the game in question is both hypothetical and can run on any system that supports Windows 95b or better. I also slapped this up as an example - if it were actually going to be implemented, it would have been far more specific (that's why I limited my hardware choices).

BEGIN gameLoop()
    const int TICKS_PER_SECOND = getHardware();
    const int SKIP_TICKS = 1000 / TICKS_PER_SECOND;
END gameLoop

BEGIN FUNCTION(int) getHardware()
    int cpu0 = 0;
    int cpu1 = 0;
    int cpu2 = 0;
    Get cpuArchitecture;
    Get cpuSpeed;   // MHz
    Get cpuCores;

    CASEWHERE cpuArchitecture IS:
        pentium: cpu0 = 1;
        coreTwo: cpu0 = 2;
        iSeries: cpu0 = 3;
        duron:   cpu0 = 1;
        athlon:  cpu0 = 2;
        phenom:  cpu0 = 3;
        default: cpu0 = 1;

    IF (cpuSpeed < 1024) THEN
        cpu1 = 1;
    ELSE IF (cpuSpeed >= 1024 AND cpuSpeed < 2048) THEN
        cpu1 = 2;
    ELSE IF (cpuSpeed >= 2048 AND cpuSpeed < 3072) THEN
        cpu1 = 3;
    ELSE
        cpu1 = 4;
    ENDIF

    IF (cpuCores < 2) THEN
        cpu2 = 1;
    ELSE IF (cpuCores < 4) THEN
        cpu2 = 2;
    ELSE
        cpu2 = 3;
    ENDIF

    Get gpuArchitecture;
    Get gpuSpeed;   // core clock, MHz
    Get gpuMemory;  // MB
    int gpu0 = 0;
    int gpu1 = 0;
    int gpu2 = 0;

    CASEWHERE gpuArchitecture IS:
        8500GT:  gpu0 = 1;
        ati3850: gpu0 = 2;
        ati4890: gpu0 = 3;
        default: gpu0 = 1;

    IF (gpuSpeed < 300) THEN
        gpu1 = 1;
    ELSE IF (gpuSpeed >= 300 AND gpuSpeed < 500) THEN
        gpu1 = 2;
    ELSE IF (gpuSpeed >= 500 AND gpuSpeed < 1000) THEN
        gpu1 = 3;
    ELSE
        gpu1 = 4;
    ENDIF

    IF (gpuMemory < 300) THEN
        gpu2 = 1;
    ELSE IF (gpuMemory >= 300 AND gpuMemory < 500) THEN
        gpu2 = 2;
    ELSE IF (gpuMemory >= 500 AND gpuMemory < 1000) THEN
        gpu2 = 3;
    ELSE
        gpu2 = 4;
    ENDIF

    Get memoryType;   // e.g. DDR, DDR2, DDR3, etc.
    Get memoryAmount; // MB - the amount especially, seeing as hardly anyone takes advantage of memory clock speed
    int mem0 = 0;
    int mem1 = 0;

    CASEWHERE memoryType IS:
        DDR:  mem0 = 1;
        DDR2: mem0 = 2;
        DDR3: mem0 = 3;
        default: mem0 = 1;

    IF (memoryAmount < 512) THEN
        mem1 = 1;
    ELSE IF (memoryAmount >= 512 AND memoryAmount < 1024) THEN
        mem1 = 2;
    ELSE IF (memoryAmount >= 1024 AND memoryAmount < 2048) THEN
        mem1 = 3;
    ELSE IF (memoryAmount >= 2048 AND memoryAmount < 3072) THEN
        mem1 = 4;
    ELSE IF (memoryAmount >= 3072 AND memoryAmount < 4096) THEN
        mem1 = 5;
    ELSE
        mem1 = 6;
    ENDIF

    int determinedFramerate = 0;

    // Test the highest tier first; if the "everything >= 1" case came first,
    // it would match every machine and always return 25.
    IF (cpu0 >= 3 AND cpu1 >= 3 AND cpu2 >= 3 AND gpu0 >= 3 AND gpu1 >= 3 AND gpu2 >= 3 AND mem0 >= 3 AND mem1 >= 3) THEN
        determinedFramerate = 66;
    ELSE IF (cpu0 >= 2 AND cpu1 >= 2 AND cpu2 >= 2 AND gpu0 >= 2 AND gpu1 >= 2 AND gpu2 >= 2 AND mem0 >= 2 AND mem1 >= 2) THEN
        determinedFramerate = 33;
    ELSE
        determinedFramerate = 25;
    ENDIF

    return determinedFramerate;
END FUNCTION getHardware
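To make the core idea concrete, here is the tier-to-framerate mapping condensed into actual C++. The tier values are hypothetical stand-ins (1 = low end, 3 = high end); a real version would have to derive them from actual CPU/GPU/memory queries. Note that the highest tier must be tested first - a check like "everything >= 1" matches every working machine, so if it ran first, everyone would get 25:

```cpp
#include <algorithm>

// Hypothetical hardware tiers; real values would come from hardware queries.
struct HardwareTiers { int cpu, gpu, mem; };

int determineFramerate(const HardwareTiers& h) {
    // The update rate is bounded by the weakest component.
    int weakest = std::min({h.cpu, h.gpu, h.mem});
    if (weakest >= 3) return 66;
    if (weakest >= 2) return 33;
    return 25;                             // safe default for anything else
}
```

Collapsing the per-component scores to the minimum also avoids the original's all-or-nothing AND chains, where one mid-tier component silently dropped the whole machine to the bottom bracket.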

Remember, this is an example, and real-world computing would throw this code out. However, I hope you get what I'm saying. I guess what I want is a constant framerate so that there is consistency, but I don't want to penalise or disadvantage players (the social and ethical side is coming out in me). Obviously there will be much more to it than what I've given, but if the theory here works (or is on the right path), then I'm sure that with some tweaking, the constant framerate can become variable depending on the computer the game is played on. As a side note, I know my identifiers have horrible names - normally I wouldn't name them like that, but I couldn't come up with good names for the situation.

Anywho, I doubt that's going to be an easy task, but my question isn't so much how to do it as: is this possible, is it feasible, and is it worthwhile? Yes, I'm willing to put in the hours of research to find out how to do it, but only if it's actually worth it. I'm hoping there are some like-minded people out there who can benefit from this too.

Cheers and happy holidays

I believe their final revision of the code was to implement what's known as a fixed-timestep system. The game environment changes in discrete 1/25 s intervals, but the frame rate the player sees can and will vary. Fixed timesteps are very useful, as they add some much-needed consistency to floating-point calculations, and especially to the numerical methods used to evaluate the various equations in the physics engine.

So on a fast computer the game will run at 60 FPS (generally you don't want it to be any higher or you'll start seeing tearing, which looks much worse), and each frame is an interpolation between two states of the game world. The world itself changes 25 times per second. If you change the number of times the game state updates based on CPU speed, it's possible that certain events could occur on one person's machine but not on another. For example, objects might pass through walls on a slower machine where the timestep is 1/25 s, while not doing so on the developer's machine, which runs timesteps of 1/50 s.
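A minimal sketch of that separation - fixed 1/25 s simulation steps, with rendering interpolating between the previous and current state (the toy simulation and all names here are mine, not from the article):

```cpp
#include <cassert>
#include <cmath>

const double DT = 1.0 / 25.0;              // fixed simulation step, seconds

struct State { double position; };

// One fixed step of a toy simulation: move at 10 units per second.
State step(State s) { s.position += 10.0 * DT; return s; }

// Blend previous and current state by alpha in [0, 1] for smooth rendering.
double interpolate(const State& prev, const State& curr, double alpha) {
    return prev.position * (1.0 - alpha) + curr.position * alpha;
}

// Advance the simulation to wall-clock time 'now' (seconds) and return the
// position a renderer would draw at that moment.
double renderAt(double now) {
    State prev{0.0}, curr{0.0};
    double simTime = 0.0;
    while (simTime + DT <= now) {          // consume whole fixed steps only
        prev = curr;
        curr = step(curr);
        simTime += DT;
    }
    double alpha = (now - simTime) / DT;   // leftover fraction of a step
    return interpolate(prev, curr, alpha);
}
```

Because rendering blends the last two completed states, what's drawn always lags the simulation by up to one step - the price paid for smooth motion at any frame rate while the world itself only changes 25 times a second.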

objects might pass through walls on a slower machine where the timestep is 1/25, while not doing so on the developer's machine which runs timesteps of 1/50s.

I also believe this is what they are talking about. My particular physics engine updates itself at 50 Hz, whereas my frame rate is ~170 FPS.

Either way, this was a pretty good write-up, and I enjoyed seeing another person's point of view (especially taking hardware into account).


So on a fast computer the game will run at 60 FPS (generally you don't want it to be any higher or you'll start seeing tearing which looks much worse)

Tearing occurs when video memory is updated halfway through a screen refresh - it is prevented by waiting for the vertical sync before updating the video memory. That would be 60 fps if my refresh rate were 60, 75 if it were 75, etc.

However, users are quite free to disable vertical sync in their graphics card settings, completely overriding any program behaviour, so it is not something that should ever be relied on for timing. Most AAA titles have an option in their own menus to enable/disable vertical sync, so that on slower computers the user can choose to accept some tearing in exchange for a slightly improved framerate.

Other than that, tax0010's post is quite correct - fixed timesteps for update but giving up any idea of having any control over the actual framerate is the only robust solution on PCs.
