Frame Rates

Started by
4 comments, last by Krohm 18 years ago
How do you get the FPS at which your OpenGL application is running?
The easiest way is to just add to a counter every time you render your scene. Check if the time since your last update is greater than a second: if so, save the number of frames as your FPS and reset the counter to 0.

If you are using C++, try something like this:


// timeGetTime() returns milliseconds since the system started
if (timeGetTime() > lastTimeCheck + 1000)
{
    fps = numFrames;                  // frames counted over the last second
    numFrames = 0;
    lastTimeCheck = timeGetTime();
}
++numFrames;                          // count this frame



somewhere in your render code. All the variables are ints (or DWORDs, to match timeGetTime's return type). You will also need to link against "winmm.lib", IIRC. Also, that is almost certainly Windows-only; I think Linux/Mac use a function called "gettimeofday" instead.
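For the Linux/Mac route, here is a minimal sketch of the same counter built on gettimeofday (the helper names nowMs and countFrame are just illustrative, not anything from a library):

#include <sys/time.h>

static long long lastTimeCheckMs = 0;
static int numFrames = 0;
static int fps = 0;

// Current time in milliseconds, built from gettimeofday's seconds/microseconds.
static long long nowMs()
{
    timeval tv;
    gettimeofday(&tv, 0);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

// Call once per rendered frame, exactly like the Windows version above.
void countFrame()
{
    if (nowMs() > lastTimeCheckMs + 1000)
    {
        fps = numFrames;              // frames counted over the last second
        numFrames = 0;
        lastTimeCheckMs = nowMs();
    }
    ++numFrames;
}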

There are other ways (such as timing each individual frame), but those usually need a more accurate timer.
Sean Henley [C++ Tutor] Rensselaer Polytechnic Institute
The method proposed above is heavily recommended.
Many games, however, estimate FPS on a per-frame basis.
Hard truth: there's no way to estimate per-frame delta in a cross-platform way.
Historically this has been done using RDTSC, but that has proven to be unreliable in multi-core, "speed-stepping" environments.

This method has the advantage of having a per-frame evaluation so you can use it for LOD adjustments... if you feel brave.
It obviously costs some performance, but don't expect the loss to be noticeable.

If you are on Win32, the QueryPerformanceXXX functions are for you. If you want other options... bad, bad, bad trouble.
I heard that on Linux there's a device which produces up to 8 kHz timings, but I wasn't able to find it on my system. I think it's called /dev/clock or maybe /dev/timer. In general, I would say there were no nice timing opportunities last time I checked (admittedly many months ago).
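To make the QueryPerformanceXXX route concrete, here is a minimal sketch of per-frame delta timing on Win32 (the function name frameDelta is just illustrative; 1.0 / delta gives an instantaneous FPS estimate):

#include <windows.h>

// Returns seconds elapsed since the previous call (0.0 on the first call).
double frameDelta()
{
    static LARGE_INTEGER freq = { 0 };
    static LARGE_INTEGER last = { 0 };

    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);   // counter ticks per second, queried once

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    double delta = (last.QuadPart == 0)
        ? 0.0
        : (double)(now.QuadPart - last.QuadPart) / (double)freq.QuadPart;
    last = now;
    return delta;
}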

Now that Linux has nice soft-realtime features, maybe using a bunch of sleeping threads could give you what you need, but I'm not sure.

Previously "Krohm"

Quote:Krohm:
Now that Linux has nice soft-realtime features, maybe using a bunch of sleeping threads could give you what you need, but I'm not sure.


Linux can give you time in nanoseconds (the exact resolution can be queried with clock_getres), which should be more than enough for measuring the time a frame took to render.

See man 3 clock_gettime for details.

Also, gettimeofday gives you time in seconds and microseconds. This might be enough for some applications.

No need for threads, devices, etc.
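As a sketch of what that looks like in practice (I'm picking CLOCK_MONOTONIC here; on older glibc you may also need to link with -lrt for clock_gettime):

#include <time.h>

// Seconds elapsed since the previous call (0.0 on the first call).
double frameDeltaSeconds()
{
    static timespec last = { 0, 0 };

    timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);   // seconds + nanoseconds

    double delta = (last.tv_sec == 0 && last.tv_nsec == 0)
        ? 0.0
        : (now.tv_sec - last.tv_sec) + (now.tv_nsec - last.tv_nsec) * 1e-9;
    last = now;
    return delta;
}

// The actual resolution can be checked with clock_getres:
//   timespec res;
//   clock_getres(CLOCK_MONOTONIC, &res);   // res.tv_nsec is the tick size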

Quote:Original post by Krohm
Hard truth: there's no way to estimate per-frame delta in a cross-platform way.


Sorry to burst your bubble here a little, but there is a way, or at least there will be.
Nvidia has a new extension that can measure time on the GPU in nanoseconds.
It's called EXT_timer_query and you can read about it on page 41 in
http://developer.nvidia.com/object/opengl-nvidia-extensions-gdc-2006.html

This page is good too:
http://developer.nvidia.com/object/gdc-2006-presentations.html
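Going by how the extension is presented there, usage would presumably follow the standard query-object pattern, something like the sketch below; GL_TIME_ELAPSED_EXT, GLuint64EXT and glGetQueryObjectui64vEXT are my reading of the slides, and the final entry points could differ:

GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED_EXT, query);
// ... draw calls you want to time on the GPU ...
glEndQuery(GL_TIME_ELAPSED_EXT);

// Fetching the result waits until the GPU has finished the timed commands.
GLuint64EXT gpuTimeNs = 0;
glGetQueryObjectui64vEXT(query, GL_QUERY_RESULT, &gpuTimeNs);

glDeleteQueries(1, &query);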

Quote:Original post by nefthy
Linux can give you time in nanoseconds (the exact resolution can be queried with clock_getres), which should be more than enough for measuring the time a frame took to render.

See man 3 clock_gettime for details.

Also, gettimeofday gives you time in seconds and microseconds. This might be enough for some applications.

No need for threads, devices, etc.

I'm glad you find that adequate. On most systems I've tested, the functions you point out have a much coarser granularity than that (tested on 5 systems from various distributions).
On most systems clock_gettime returned a resolution of roughly 1 millisecond.

Now, I'm sure this could be misunderstood.
The point is that Linux does provide adequate resolution for "standard" graphical apps, and no doubt anything finer is overkill for shipping applications. Unluckily, whoever is posting here usually does not have access to a full-featured artist pipeline and ends up with much higher framerates.

It is exceedingly easy to draw a few hundred thousand triangles and still get a framerate over 700 FPS. In those cases, a finer granularity is needed.

I believe the average poster does not even have access to VTune for that matter, so having RDTSC-like accuracy could be a definite help for cheap local profiling.
As a side note, on my system RDTSC has a granularity of ~20 clocks (serialization included), while QueryPerformanceCounter seems coarser, but still around tenths of a microsecond.
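If anyone wants to try the RDTSC route despite the caveats above, here is a sketch using the compiler intrinsic (I'm assuming __rdtsc from <intrin.h> on MSVC or <x86intrin.h> on GCC; header availability depends on your toolchain):

#ifdef _MSC_VER
#include <intrin.h>      // __rdtsc on MSVC
#else
#include <x86intrin.h>   // __rdtsc on GCC
#endif

// Raw cycle count; remember this is clock cycles, not wall-clock time, and it
// can drift across cores and with CPU frequency scaling.
unsigned long long cyclesElapsed(unsigned long long start)
{
    return __rdtsc() - start;
}

// Usage:
//   unsigned long long t0 = __rdtsc();
//   // ... code to profile ...
//   unsigned long long cycles = cyclesElapsed(t0);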
Quote:Original post by lc_overlord
Sorry to burst your bubble here a little, but there is a way, or at least there will be.
Nvidia has a new extension that can measure time on the GPU in nanoseconds.
It's called EXT_timer_query and you can read about it on page 41 in
http://developer.nvidia.com/object/opengl-nvidia-extensions-gdc-2006.html
This page is good too:
http://developer.nvidia.com/object/gdc-2006-presentations.html

For what it's worth, it was discussed on the GL forum a month ago (try searching for it). There are still many things unclear about this.

Reading DX performance counters can be an expensive operation. If they manage to make this cheap under GL, so much the better, but it is yet to be released, and how widely the extension will be supported is hard to predict. NV3x, for one, does not support all the available counters, and this may be a problem.
I agree, however, that it's a very interesting possibility; I'll wait for it.

Previously "Krohm"

This topic is closed to new replies.
