Sustained frame rates

Started by TheSteve
3 comments, last by RichardS 18 years, 5 months ago
It's been a while since I've seen something like this, but with the advent of the new Xbox I was reminded of an old SGI adage: "Sustained 60 is better than variable 300." The comment was in regard to framerates, the idea being that it's best to maintain a steady framerate rather than let it vary. The real question is whether there's an easy way to do that with OpenGL. I know on my old Octane everything seemed to just default to that, but I never looked into how to code it. Thoughts?
The default framerate you were seeing was probably due to the refresh rate you set for your monitor (unless you'd disabled vsync).

There are two possibilities I can think of. The first is to make your program framerate independent, to a certain extent, by linking your scene updates to time; see this NeHe tutorial, or the sketch below.
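A minimal sketch of that first approach, in the spirit of the NeHe tutorial. The timer helper uses standard std::chrono in place of whatever platform timer you'd actually use, and the player struct plus the commented-out drawing calls are hypothetical placeholders:

```cpp
#include <chrono>

// Hypothetical high-resolution timer; std::chrono stands in for whatever
// platform call (QueryPerformanceCounter, gettimeofday, ...) you'd use.
static double getTimeSeconds()
{
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}

struct Player { double x = 0, y = 0, vx = 10, vy = 0; } player;

void gameLoop(bool& running)
{
    double lastTime = getTimeSeconds();
    while (running)
    {
        double now = getTimeSeconds();
        double dt  = now - lastTime;   // seconds since the previous frame
        lastTime   = now;

        // Scale movement by dt so speed is identical at 30 fps or 300 fps.
        player.x += player.vx * dt;
        player.y += player.vy * dt;

        // renderScene(); swapBuffers();  // placeholders for your drawing code
    }
}
```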

Alternatively, you could try to maintain a particular framerate by skipping the "drawing part" of a frame when things get too slow. Updates to the game world data should still happen for this frame, but nothing gets drawn at the end of it.

This is kind of similar to the first option, except you are attacking the problem from the opposite direction: assuming things will take longer to update than the target framerate allows, rather than that they will run too fast. A rough sketch follows.
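A rough sketch of that frame-skipping idea, assuming a fixed 60 Hz update rate and the same kind of timer helper as the earlier sketch; updateWorld(), renderScene(), and swapBuffers() are made-up placeholder names:

```cpp
double getTimeSeconds();   // same chrono-based helper as the earlier sketch

// Sketch: keep world updates at a fixed rate, but drop drawing when the
// machine has fallen behind schedule.
const double STEP = 1.0 / 60.0;          // assumed fixed update interval (60 Hz)
const int    MAX_UPDATES_PER_DRAW = 5;   // cap so we eventually draw something

void runLoop(bool& running)
{
    double nextUpdate = getTimeSeconds();
    while (running)
    {
        int updates = 0;
        // Run as many fixed updates as we owe; each extra pass through here
        // is effectively a frame whose drawing was skipped.
        while (getTimeSeconds() >= nextUpdate && updates < MAX_UPDATES_PER_DRAW)
        {
            // updateWorld(STEP);   // physics, AI, input (placeholder)
            nextUpdate += STEP;
            ++updates;
        }

        // renderScene();           // drawn once per outer iteration (placeholder)
        // swapBuffers();
    }
}
```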

I think in most cases the first option will suffice (I would highly recommend it, as the result is smooth movement; framerate only really becomes an issue when things appear jerky and become too unpredictable for the player to react to), but you could use both together to cover all possibilities.
As an aside, this is not something that is specific to OpenGL, the basic principles apply to any API or game.
"I must not fear. Fear is the mindkiller. Fear is the little death that brings total obliteration. I will face my fear. I will permit it to pass over me and through me. And when it has gone past me I will turn to see fear's path. Where the fear has gone there will be nothing. Only I will remain." ~Frank Herbert, DuneMy slice of the web
Quote: Original post by TheSteve
It's been a while since I've seen something like this, but with the advent of the new Xbox I was reminded of an old SGI adage: "Sustained 60 is better than variable 300." The comment was in regard to framerates, the idea being that it's best to maintain a steady framerate rather than let it vary. The real question is whether there's an easy way to do that with OpenGL. I know on my old Octane everything seemed to just default to that, but I never looked into how to code it. Thoughts?

Depends on the situation, and what they are describing.


They might be referring to frames being divided up nearly equally over time. Your physics and AI should operate at a fixed rate; that was not always true of graphics systems, nor is it true of many non-professional games.

These days a good design separates the display from everything else, so that's not a problem.

As long as the 'variable' display is divided equally over time, it's fine to render as fast as possible, up to the refresh rate of the screen. Interpolate the rendered state between the two most recent physics steps based on how far you are into the current step, and you'll be fine. A rough sketch of that is below.
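A rough sketch of that separation (fixed physics step, display interpolated between the last two physics states). State, updatePhysics(), and the commented-out drawing calls are hypothetical names, and the timer helper is the same idea as in the earlier sketches:

```cpp
double getTimeSeconds();   // same chrono-based helper as the earlier sketch

struct State { double x = 0, v = 0; };   // stand-in for your simulation state

// Hypothetical physics step: advances the state by one fixed interval.
State updatePhysics(State s, double dt) { s.x += s.v * dt; return s; }

void simulateAndRender(bool& running)
{
    const double PHYSICS_STEP = 1.0 / 60.0;   // assumed fixed simulation rate
    double accumulator = 0.0;
    double lastTime = getTimeSeconds();
    State previous{}, current{};

    while (running)
    {
        double now = getTimeSeconds();
        accumulator += now - lastTime;
        lastTime = now;

        // The display never drives the simulation: physics always advances
        // in whole, fixed-size steps.
        while (accumulator >= PHYSICS_STEP)
        {
            previous = current;
            current  = updatePhysics(current, PHYSICS_STEP);
            accumulator -= PHYSICS_STEP;
        }

        // How far we are between the two most recent physics states (0..1).
        double alpha = accumulator / PHYSICS_STEP;
        double drawX = previous.x + (current.x - previous.x) * alpha;
        // renderSceneAt(drawX);  // placeholder: draw the interpolated state
        // swapBuffers();
        (void)drawX;
    }
}
```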



If they're referring to a variable display that is not divided equally over time, then they are correct. A worst-case example for 'variable 300 FPS' is one frame taking 0.999 seconds and the remaining 299 frames being generated in the last 0.001 seconds. That's a BAD situation, and less extreme versions of it happen all the time.

That situation is the reason you should be measuring frame times, rather than FPS. Your in-game benchmarks should track the average, minimum, and maximum frame time over a given window. If your min and max get too far apart, you've got a problem.
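A small sketch of that kind of bookkeeping, tracking min, max, and average frame time over a one-second window (the function and variable names are made up for illustration):

```cpp
#include <algorithm>
#include <cstdio>

double getTimeSeconds();   // same chrono-based helper as the earlier sketch

// Call once per frame with that frame's duration in seconds.
void recordFrameTime(double frameSeconds)
{
    static double windowStart = getTimeSeconds();
    static double minFrame = 1e9, maxFrame = 0.0, total = 0.0;
    static int frames = 0;

    minFrame = std::min(minFrame, frameSeconds);
    maxFrame = std::max(maxFrame, frameSeconds);
    total += frameSeconds;
    ++frames;

    double now = getTimeSeconds();
    if (now - windowStart >= 1.0 && frames > 0)
    {
        // A large gap between min and max means bad frame pacing,
        // even if the average (and thus the FPS counter) looks fine.
        std::printf("frame time: avg %.2f ms, min %.2f ms, max %.2f ms\n",
                    1000.0 * total / frames, 1000.0 * minFrame, 1000.0 * maxFrame);

        windowStart = now;
        minFrame = 1e9; maxFrame = 0.0; total = 0.0; frames = 0;
    }
}
```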


frob.
Well, the framerate definitely wasn't based on the monitor. Almost all SGI programs on all different monitors ran at 60 fps, even if you had the latest monitor and an incredible Onyx 2 (which was the bomb at the time). As far as it being variable with time, I'm not sure about that. All I know is that the big thing back in the 1995-1999 glory days of SGI was that they always maintained a sustained 60 fps rate and nothing more or less. Their argument, which I believe is true from watching the Xbox, is that anything variable is going to ultimately look worse than sustained. You can even see this in action to this day if you go and look at an Xbox 360 running one of their new games. At 30 fps, it looks damn amazing, not just in terms of "nice graphics," but in terms of overall smoothness and consistency. I think there's a more aggressive and perhaps a great deal more complicated way to do it. Hopefully there's a big IRIX buff on this forum.
The general premise is entirely true. A constant 30 fps 'feels' much smoother than a program that runs at a mean of 45 fps but has a high standard deviation (more than 5-8 fps, perhaps?).

There isn't any point in rendering at 300 fps. It may make the owner of a 7800 GTX feel good, but it just burns my laptop's battery down.

I disagree with most people at gamedev, in that I prefer to use vsync. It keeps the framerate very consistent while keeping resource usage down. But my programs are multithreaded and use lots of background CPU time (on-the-fly resource loading for visualizing multi-terabyte data sets). The extra CPU freed up when the rendering thread sleeps (during a blocked buffer swap) makes a huge difference on single-processor machines.
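For what it's worth, on Windows/OpenGL vsync is typically toggled through the WGL_EXT_swap_control extension. A minimal sketch of that, assuming a GL context is already current; real code should check the extension string first, and whether the blocked swap actually sleeps the thread is driver-dependent:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Function pointer type for wglSwapIntervalEXT from WGL_EXT_swap_control.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// Enable vsync: SwapBuffers will then wait for the vertical retrace,
// which (on many drivers) lets the rendering thread sleep during the swap.
bool enableVsync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (!wglSwapIntervalEXT)
        return false;                        // extension not available

    return wglSwapIntervalEXT(1) != FALSE;   // 1 = swap once per refresh
}
```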

