Non-fullscreen app not getting past 60 FPS

4 comments, last by zacaj 12 years, 8 months ago
I just got my development suite set up on my laptop (NVIDIA, Win7 x64), and my OpenGL application (not fullscreen) doesn't seem to get past ~60 FPS (±2). I know there can sometimes be problems in fullscreen apps with the refresh rate of the screen, but I thought those didn't apply to windowed apps. My current best guess is that it has something to do with the NVIDIA drivers, since my desktop is ATI and it gets 1000+ FPS in windowed mode. I've tried removing all the objects from the scene, and also adding lots of objects, and the FPS doesn't change until I add a lot of objects, which leads me to believe it's something outside the app that's limiting the FPS.
I don't really care either way, but I need a way to get accurate timings for optimization. My current technique is just to get the time once a frame, subtract the previous time, and use that to calculate FPS and MSPF (milliseconds per frame). This, of course, includes glfwSwapBuffers(), which is taking about 10% of execution time according to the profiler. But if I remove it from the timed section, I don't think I'll get timings that take GPU rendering time into account, which I think I was getting before (unless I'm misunderstanding the parallelism of the GPU?). Is there a way to get the frame rate unlimited, or a better way to measure timing?
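A minimal version of that timing loop might look like the sketch below (GLFW 2.x style, to match the bare glfwSwapBuffers() call; the loop structure and names are illustrative, not from the post):

#include <GL/glfw.h>
#include <cstdio>

void runLoop()
{
    double prev = glfwGetTime();                 // seconds since glfwInit()
    while (glfwGetWindowParam(GLFW_OPENED))
    {
        // ... render the scene here ...

        glfwSwapBuffers();                       // kept inside the timed section,
                                                 // so any v-sync wait shows up
        double now  = glfwGetTime();
        double mspf = (now - prev) * 1000.0;     // milliseconds per frame
        double fps  = 1.0 / (now - prev);        // frames per second
        prev = now;
        printf("%.2f ms/frame (%.0f FPS)\n", mspf, fps);
    }
}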



Try changing the default v-sync setting in the driver configuration, or use the WGL_EXT_swap_control extension (Windows only); on Linux you can use GLX_SGI_swap_control if the driver supports it.
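A rough sketch of the Windows path might look like this (the typedef matches the one in wglext.h; real code should check the WGL extension string before trusting the returned pointer):

#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

bool disableVsync()
{
    // Requires a current OpenGL context; wglGetProcAddress returns NULL otherwise.
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (!wglSwapIntervalEXT)
        return false;                            // extension not available
    return wglSwapIntervalEXT(0) != FALSE;       // 0 = swap immediately, 1 = wait for vblank
}

GLFW also wraps this as glfwSwapInterval(0), which avoids loading the extension by hand.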
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
Application-controlled v-sync doesn't work under OpenGL with (at least some) NVIDIA drivers; the only way to set it is through the control panel.


Weird. It was set to let the application choose, but the application was somehow turning v-sync on, despite me using the same settings to start GLFW on both computers. Changing it to "Force off" fixed it. I guess NVIDIA/ATI (or something else) changes the default state for a new window.

"but the application was somehow turning v-sync on, despite me using the same settings to start GLFW on both computers"


What do you mean, "somehow"?
You can control v-sync yourself:
http://www.opengl.org/wiki/Platform_specifics:_Windows#SwapInterval_aka_vsync
I know about that, but at the time I didn't have code to explicitly enable or disable it. It seems my laptop defaulted to v-sync being enabled in a new context, and my desktop defaulted to it being disabled. Once I added the code (using what you linked to), it was controllable just fine.
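For completeness, the GLFW-level version of that code might be as simple as the following (GLFW 2.x, matching the bare glfwSwapBuffers() call earlier; the window parameters are illustrative):

#include <GL/glfw.h>

void openWindowWithoutVsync()
{
    glfwInit();
    glfwOpenWindow(800, 600, 8, 8, 8, 8, 24, 8, GLFW_WINDOW);
    glfwSwapInterval(0);   // 0 = don't wait for vblank; must run after the context
                           // exists, and a driver-level "force on" can still override it
}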

