cresty

Couple o' Qs...


Recommended Posts

So I want to limit the FPS in my application. So far I've used this function to delay after each frame:
#include <ctime>  // clock(), clock_t, CLOCKS_PER_SEC

void Delay(int ms)
{
    // clock() ticks in CLOCKS_PER_SEC units, not milliseconds,
    // so convert the elapsed ticks to ms before comparing
    clock_t s = clock();
    while (((clock() - s) * 1000 / CLOCKS_PER_SEC) <= ms)
        ; // busy-wait until the delay has passed
}

// Now I render here...

void GameLoop()
{
    Render();
    Delay(33);
}

According to the SDL documentation, delaying by 33 ms will limit your application to 33 FPS. How have they come to this conclusion? A lower delay would increase the FPS, right? But by how much would I delay to limit it to, say, 50 FPS?

Also, so far I've been using the DevIL library to load and use textures, but I was thinking I would write my own procedures. None of the tutorials or books I've read on OpenGL mention how to do this, only bits and pieces here and there, but no good step-by-step guide. Are there any? Thanks.

Well, the way they've come to this conclusion is like this:

Let's say your application is perfect and uses 0ms to complete one frame, but you're delaying 33ms each loop:

1000 ms / 33 ms ≈ 30 FPS (so roughly 30, not 33)

Therefore, to limit your application to 50FPS:

50 FPS = 1000 ms / delay
delay = 1000 / 50 = 20 ms

Simple mathematics.
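In code that arithmetic works out to something like this (a hypothetical helper, just to show the calculation):

```cpp
// Delay (in whole ms) to add per frame for a given target FPS,
// assuming the frame itself takes 0 ms to compute and draw
int DelayForFps(int targetFps)
{
    return 1000 / targetFps; // e.g. 50 fps -> 20 ms
}
```
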

Although, just for the record, this seems like a clumsy way to limit your frames per second, because you're never going to actually GET 50 FPS this way: the delay is added on top of however long the frame itself takes. I'm not an expert on what you're doing or how it works, I can only provide the simple arithmetic. Someone else will be able to show you a better way of doing this.

Their reasoning is flawed. You shouldn't wait 33 ms between each frame, you should handle a frame every 33 ms instead. You may have to wait 20 ms between each update, or 10, depending on how long the frame took.

So, measure how long a frame takes, and then only wait for the remainder of the allowed frame time. As for how long a frame can take, it's easy to calculate: a second has 1000 milliseconds. If you want to run at 50 fps, each frame can take 1000 / 50 = 20 ms.
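A sketch of that idea using standard C++ `std::chrono` rather than any SDL timer calls, just to keep it self-contained (the `Render()` call is assumed to be your own):

```cpp
#include <chrono>
#include <thread>

// Run one frame, then sleep only for whatever is left of the
// frame budget (1000 / 50 = 20 ms for 50 fps).
void RunFrameLimited()
{
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::milliseconds(1000 / 50);

    auto frameStart = clock::now();
    // Render();  // update and draw the frame here
    auto elapsed = clock::now() - frameStart;

    // Wait only for the remainder; a slow frame gets no extra delay
    if (elapsed < frameBudget)
        std::this_thread::sleep_for(frameBudget - elapsed);
}
```

If the frame took 12 ms, this sleeps roughly 8 ms; if it took 25 ms, it doesn't sleep at all, instead of blindly adding 20 ms on top.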


You could also consider whether making everything time-dependent is a better solution here. For example, moving at a fixed distance per frame makes the game frame-dependent, but multiplying a speed by the frame time makes it time-dependent, so your game continues to run properly even at variable frame rates.

Hi Cresty,

I think there's a better way of doing this. Instead of using a delay, consider using a timer. Now, I'm going to get parts of this wrong because I don't have the source code of things I've done in the past in front of me (I'm at work right now), but you set a timer to trigger every x milliseconds, and at that point you draw to the screen. That way you're not worried about how long it actually takes to get everything ready to draw, only about when to actually draw the screen.

Think of it this way. In the code you have, you do everything like move sprites, check collisions, etc., then draw to your double buffer, then wait however long your delay is, then copy the double buffer to the screen. Like this:

<gameloop>
update world <= this part could take a long time
draw to buffer
sleep
copy buffer to screen
</gameloop>

The update world part could take quite a while, depending on the complexity of that particular frame being drawn. And that snippet of time isn't taken into account when you're calculating your FPS and determining how long to sleep. It throws your whole FPS calculation out the window.

Instead, if you use a timer to trigger when you draw to the screen then the time it takes to update your world is irrelevant. The timer is going to trigger every x number of milliseconds. And instead of having a significant portion of valuable processing time taken up with sleeping, you can use it for processing your game. Also, I think that this is less CPU intensive, but that might be language dependent because some languages use up CPU cycles during sleep. I could be wrong on that.
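A rough sketch of the idea without any particular library (Allegro's real timer API works differently; this just checks a steady clock each pass through the loop and says "draw now" every 20 ms):

```cpp
#include <chrono>

// Update the world as often as you like, but only draw when the
// 20 ms interval has elapsed -- a poor man's timer callback.
struct FrameTimer
{
    std::chrono::steady_clock::time_point nextDraw =
        std::chrono::steady_clock::now();
    std::chrono::milliseconds interval{20}; // 50 fps target

    bool ShouldDraw()
    {
        auto now = std::chrono::steady_clock::now();
        if (now >= nextDraw)
        {
            nextDraw = now + interval; // schedule the next draw
            return true;
        }
        return false;
    }
};
```

In the game loop you'd call `ShouldDraw()` each iteration and only copy the buffer to the screen when it returns true; how long the world update took no longer matters to the draw cadence.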

Timers are callback functions. In the past I've used them with Allegro to control the FPS in my games:
http://gpwiki.org/index.php/Allegro:Tutorials:Basics

But you can write your own timer functions if you want. Or, just use Allegro!

How about someone more knowledgeable in this area offers advice on how to set it up for this guy?

- Goishin
