OpenGL Sustained frame rates


It's been a while since I've seen something like this, but with the advent of the new Xbox I got reminded of an old SGI adage that stated "Sustained 60 is better than variable 300." The comment was in regard to framerates and how the best thing to do was to always maintain a framerate vs. varying it. The real question is whether there's an easy way to do that with OpenGL. I know on my old Octane everything seemed to just default to that, but I never looked into how to code it. Thoughts?

The default framerate you were seeing was probably due to the refresh rate you set for your monitor (unless you'd disabled vsync).

There are two possibilities I can think of - the first is to make your program framerate independent to a certain extent by linking your scene updates with time - see this NeHe tutorial.
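For reference, a minimal C++ sketch of that time-based approach (updateScene, drawScene and keepRunning are hypothetical placeholders for your own game and windowing code, not anything from the NeHe tutorial):

#include <chrono>

// Frame-rate independent movement: scale per-second speeds by the measured
// frame time, so motion looks the same at 30 FPS or 300 FPS.
void updateScene(float dt, float& x) { x += 2.0f * dt; }   // moves 2 units per second
void drawScene(float /*x*/)          { /* glClear, draw calls, buffer swap */ }

void runLoop(bool (*keepRunning)())
{
    using clock = std::chrono::steady_clock;
    float x = 0.0f;
    auto last = clock::now();

    while (keepRunning())
    {
        auto now = clock::now();
        float dt = std::chrono::duration<float>(now - last).count();  // seconds since last frame
        last = now;

        updateScene(dt, x);   // world advances by elapsed wall-clock time
        drawScene(x);         // render whatever state we have right now
    }
}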

Alternatively, you could try to maintain a particular framerate by skipping the "drawing part" of a frame when things get too slow. Updates to the game world data should still happen for this frame, but nothing gets drawn at the end of it.

This is kind of similar to the first option, except you are attacking the problem from the opposite viewpoint (that things will take longer to update than required for a given framerate rather than things being too fast).
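A rough C++ sketch of that skip-the-draw idea (update and draw are placeholders, and the 60 Hz step is just an example target; a real loop would also cap how far it is allowed to fall behind):

#include <chrono>
#include <thread>

// The world is always stepped at a fixed rate; rendering is dropped for a
// frame whenever the previous frame ran long enough that we're behind.
void update() { /* advance game world by one fixed step */ }
void draw()   { /* OpenGL rendering + buffer swap */ }

void runLoop(bool (*keepRunning)())
{
    using clock = std::chrono::steady_clock;
    const auto step = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / 60.0));       // target 60 updates per second
    auto nextUpdate = clock::now();

    while (keepRunning())
    {
        std::this_thread::sleep_until(nextUpdate);  // returns immediately if we're late
        update();
        nextUpdate += step;

        // Only draw if there is still time before the next update is due;
        // otherwise skip rendering this frame and catch up first.
        if (clock::now() < nextUpdate)
            draw();
    }
}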

I think in most cases the first option will suffice (and I would highly recommend it, since the result is smooth movement - framerate only really becomes an issue when things appear jerky and become too unpredictable for the player to react properly), but you could use both together to account for all possibilities.
As an aside, this is not something that is specific to OpenGL, the basic principles apply to any API or game.

Quote:
Original post by TheSteve: It's been a while since I've seen something like this, but with the advent of the new Xbox I got reminded of an old SGI adage that stated "Sustained 60 is better than variable 300." The comment was in regard to framerates and how the best thing to do was to always maintain a framerate vs. varying it. The real question is whether there's an easy way to do that with OpenGL. I know on my old Octane everything seemed to just default to that, but I never looked into how to code it. Thoughts?

Depends on the situation, and what they are describing.

They might be referring to the number of frames per second with frames being divided up nearly equally over time. Your physics and AI should operate at a fixed rate. That was not always true of graphics systems, nor is it true of many non-professional games.

These days a good design separates the display from everything else, so that's not a problem.

As long as the 'variable' display is divided equally over time, then it's fine to render as fast as possible up to the refresh rate of the screen. Interpolate the display based on the time between the two physics steps and you'll be fine.
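A compact C++ sketch of that arrangement (fixed 60 Hz physics, render as often as the display allows, interpolate for display; the State, integrate and render names are made-up stand-ins):

#include <chrono>

struct State { float x = 0.0f; };

// Advance the simulation by one fixed step (3 units per second here).
State integrate(State s, float dt) { s.x += 3.0f * dt; return s; }

// Blend between the previous and current physics states by how far we are
// into the current step (alpha in [0, 1]), then issue the OpenGL calls.
void render(const State& prev, const State& curr, float alpha)
{
    float x = prev.x + (curr.x - prev.x) * alpha;
    (void)x;   // e.g. set a transform from x, draw, swap buffers
}

void runLoop(bool (*keepRunning)())
{
    using clock = std::chrono::steady_clock;
    const float dt = 1.0f / 60.0f;     // fixed physics/AI step
    float accumulator = 0.0f;
    State prev, curr;
    auto last = clock::now();

    while (keepRunning())
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<float>(now - last).count();
        last = now;

        while (accumulator >= dt)       // physics runs at a fixed rate
        {
            prev = curr;
            curr = integrate(curr, dt);
            accumulator -= dt;
        }

        render(prev, curr, accumulator / dt);  // display interpolates between steps
    }
}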

If they're referring to variable display as not divided equally over time, then they are correct. A worst-case example for variable 300 FPS display time is that one frame takes 0.999 seconds, then the remaining 299 frames are generated in the remaining 0.001 seconds. That's a BAD situation, but less extreme versions of it happen regularly.

That situation is the reason you should be measuring the time of frames, rather than FPS. Your in-game benchmarks should report the average, min, and max over a given window of time. If your min and max get too far apart, you've got a problem.
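Something along these lines is enough for that kind of benchmark (a sketch; feed it the duration of each frame and print a report every second or so):

#include <algorithm>
#include <cstdio>

// Track frame *times* rather than FPS: report average, min and max over a
// window.  A large gap between min and max means uneven pacing even when
// the average looks fine.
struct FrameStats
{
    double sumMs = 0.0, minMs = 1e9, maxMs = 0.0;
    int    frames = 0;

    void addFrame(double ms)
    {
        sumMs += ms;
        minMs  = std::min(minMs, ms);
        maxMs  = std::max(maxMs, ms);
        ++frames;
    }

    void report()
    {
        if (frames > 0)
            std::printf("avg %.2f ms  min %.2f ms  max %.2f ms  (%d frames)\n",
                        sumMs / frames, minMs, maxMs, frames);
        *this = FrameStats{};   // reset for the next window
    }
};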

frob.

Well, the framerate definitely wasn't based on the monitor. Almost all SGI programs on all different monitors ran at 60 fps, even if you had the latest monitor and an incredible Onyx 2 (which was the bomb at the time). As far as it being variable with time, I'm not sure about that. All I know is that the big thing back in the 1995-1999 glory days of SGI was that they always maintained a 60fps sustained rate and nothing more/less. Their argument, which I believe is true from watching the Xbox, is that anything variable is going to ultimately look worse than sustained. You can even see this in action to this day if you go and look at an Xbox 360 running one of their new games. At 30 fps, it looks damn amazing. Not just in terms of "nice graphics," but in terms of overall smoothness and consistency. I think there's a more aggressive and perhaps a great deal more complicated way to do it. Hopefully there's a big IRIX buff on this forum.

The general premise is entirely true. Constant 30fps 'feels' much smoother than a program that runs at a mean of 45fps, but has a high std deviation (> 5-8 perhaps?).

There isn't any point in rendering at 300fps. While it may make the owner of a 7800 GTX feel good, it just burns my laptop's battery down.

I disagree with most people at gamedev, in that I prefer to use Vsync. It keeps the framerate very consistent, while keeping resource usage down. But my programs are multithreaded, and use lots of background CPU time (on-the-fly resource loading for visualizing multi-terabyte data sets). The extra CPU freed up when the rendering thread sleeps (during a blocked buffer swap) makes a huge difference on single-processor machines.
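For anyone wondering where that setting lives: vsync is controlled through the windowing layer rather than core OpenGL. On Windows it's the WGL_EXT_swap_control extension; this sketch assumes the extension is present and a GL context is current (GLX and the toolkits have their own equivalents, e.g. glfwSwapInterval or SDL_GL_SetSwapInterval):

#include <windows.h>

// Function pointer type for the extension entry point (normally in wglext.h).
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

bool setVsync(bool enabled)
{
    // Must be called with a current OpenGL context.
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (!wglSwapIntervalEXT)
        return false;                         // extension not available
    return wglSwapIntervalEXT(enabled ? 1 : 0) != FALSE;
}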

• Similar Content

• Hello! As an exercise for delving into modern OpenGL, I'm creating a simple .obj renderer. I want to support things like varying degrees of specularity, geometry opacity, things like that, on a per-material basis. Different materials can also have different textures. Basic .obj necessities. I've done this in old school OpenGL, but modern OpenGL has its own thing going on, and I'd like to conform as closely to the standards as possible so as to keep the program running correctly, and I'm hoping to avoid picking up bad habits this early on.
Reading around on the OpenGL Wiki, one tip in particular really stands out to me on this page:
For something like a renderer for .obj files, this sort of thing seems almost ideal, but according to the wiki, it's a bad idea. Interesting to note!
So, here's what the plan is so far as far as loading goes:
Set up a type for materials so that materials can be created and destroyed. They will contain things like diffuse color, diffuse texture, geometry opacity, and so on, for each material in the .mtl file. Since .obj files are conveniently split up by material, I can load different groups of vertices/normals/UVs and triangles into different blocks of data for different models.
When it comes to the rendering, I get a bit lost. I can either:
- Between drawing triangle groups, call glUseProgram to use a different shader for that particular geometry (so a unique shader just for the material that is shared by this triangle group), or
- Between drawing triangle groups, call glUniform a few times to adjust different parameters within the "master shader", such as specularity, diffuse color, and geometry opacity.
In both cases, I still have to call glBindTexture between drawing triangle groups in order to bind the diffuse texture used by the material, so there doesn't seem to be a way around having the CPU do *something* during the rendering process instead of letting the GPU do everything all at once.
The second option here seems less cluttered, however. There are fewer shaders to keep up with while one "master shader" handles it all. I don't have to duplicate any code or compile multiple shaders. Arguably, I could always have the shader program for each material be embedded in the material itself, and be auto-generated upon loading the material from the .mtl file. But this still leads to constantly calling glUseProgram, much more than is probably necessary in order to properly render the .obj. There seem to be a number of differing opinions on whether it's okay to use hundreds of shaders or if it's best to just use tens of shaders.
So, ultimately, what is the "right" way to do this? Does using a "master shader" (or a few variants of one) bog down the system compared to using hundreds of shader programs each dedicated to their own corresponding materials? Keeping in mind that the "master shaders" would have to track these additional uniforms and potentially have numerous branches of ifs, it may be possible that the ifs will lead to additional and unnecessary processing. But would that be more expensive than constantly calling glUseProgram to switch shaders, or storing the shaders to begin with?
With all these angles to consider, it's difficult to come to a conclusion. Both possible methods work, and both seem rather convenient for their own reasons, but which is the most performant? Please help this beginner/dummy understand. Thank you!
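For what it's worth, the second option usually ends up looking something like this minimal sketch (the Material/Group structs and the u_* uniform names are made up for illustration; the point is that one program stays bound and only a few uniforms plus the diffuse texture change per group):

#include <GL/glew.h>   // any loader that provides the GL 3.3+ entry points works

struct Material { float diffuse[3]; float specularity; float opacity; GLuint diffuseTex; };
struct Group    { Material material; GLuint vao; GLsizei indexCount; };

void drawGroups(GLuint masterProgram, const Group* groups, int count)
{
    glUseProgram(masterProgram);               // bound once, not per group
    GLint locDiffuse = glGetUniformLocation(masterProgram, "u_diffuse");
    GLint locSpec    = glGetUniformLocation(masterProgram, "u_specularity");
    GLint locOpacity = glGetUniformLocation(masterProgram, "u_opacity");

    for (int i = 0; i < count; ++i)
    {
        const Material& m = groups[i].material;

        // Cheap per-material state changes instead of a full program switch.
        glUniform3fv(locDiffuse, 1, m.diffuse);
        glUniform1f(locSpec,    m.specularity);
        glUniform1f(locOpacity, m.opacity);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, m.diffuseTex);

        glBindVertexArray(groups[i].vao);
        glDrawElements(GL_TRIANGLES, groups[i].indexCount, GL_UNSIGNED_INT, nullptr);
    }
}

Whether that beats many small dedicated programs is hardware- and driver-dependent, so profiling both approaches on the target machine is the only reliable answer.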

• I want to make a professional Java 3D game with a server program and database, packet handling for multiplayer client-server communication, map rendering, models, and so on. Which aspects of Java should I learn, and where can I learn Java, LWJGL, and OpenGL rendering, to make something like Minecraft or World of Tanks?

• A friend of mine and I are making a 2D game engine as a learning experience and to hopefully build upon the experience in the long run.

- What I'm using:
C++: I'm learning this language in college and it's one of the most popular languages for making games, so why not. Visual Studio: I'm on Windows, so yeah. SDL or GLFW: I was leaning toward SDL after doing some research on it, but I hear SDL is a huge package compared to GLFW, so I may start with GLFW since I might get overwhelmed by SDL.
- Questions:
Knowing what we want in the engine, what should our main focus be in terms of learning? File management, with headers, functions, etc.: how can I properly organize files without confusing myself and my friend when sharing code? Alternative to Visual Studio: my friend has a Mac and can't properly use Visual Studio; is there another alternative to it?

• Both functions are available since 3.0, and I'm currently using glMapBuffer(), which works fine.
But I was wondering if anyone has seen an advantage in using glMapBufferRange(), which allows you to specify the range of the mapped buffer. Is this only a safety measure, or does it improve performance?
Note: I'm not asking about glBufferSubData()/glBufferData. Those two are irrelevant in this case.
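If it helps, the practical difference shows up when you only touch part of a buffer; here is a sketch of an update via glMapBufferRange with the write/invalidate flags (whether it's actually faster than glMapBuffer depends on the driver, so it's worth measuring):

#include <GL/glew.h>
#include <cstring>

// glMapBuffer always maps the whole buffer; glMapBufferRange maps just the
// region being touched, and the flags tell the driver it need not preserve
// or synchronize the old contents of that range.
void updateSubRange(GLuint vbo, GLintptr offset, GLsizeiptr size, const void* data)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    if (ptr)
    {
        std::memcpy(ptr, data, static_cast<size_t>(size));
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}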
• By xhcao
Before using void glBindImageTexture(GLuint unit, GLuint texture, GLint level, GLboolean layered, GLint layer, GLenum access, GLenum format), do I need to make sure that the texture is complete?
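One common way to be sure the texture is complete before binding it as an image is to allocate it with immutable storage; a small sketch (assumes GL 4.2+, i.e. ARB_texture_storage and ARB_shader_image_load_store):

#include <GL/glew.h>

// glTexStorage2D allocates all levels up front, so the texture is complete
// by construction and can then be bound to an image unit for load/store.
GLuint createImageTexture(GLsizei width, GLsizei height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, width, height);

    // Bind mip level 0 to image unit 0 for read/write access in a shader.
    glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
    return tex;
}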
