I just got my development suite set up on my laptop (NVIDIA, Win7 x64), and my OpenGL application (not fullscreen) doesn't seem to get past ~60 FPS ±2. I know there can sometimes be problems in fullscreen apps with the refresh rate of the screen, but I thought those didn't apply to windowed apps. My current best guess is that it has something to do with the NVIDIA drivers, since my desktop is ATI and it gets 1000+ FPS in windowed mode. I've tried removing all the objects from the scene, and also adding lots of objects, and the FPS doesn't change until I add a lot of objects, which leads me to believe it's something outside of the app that's limiting the FPS.
I don't really care either way, but I need a way to get accurate timings for optimization. My current technique is just to get the time once per frame, subtract the previous time, and use that to calculate FPS and ms-per-frame. This, of course, includes glfwSwapBuffers(), which is taking about 10% of execution time according to the profiler. But if I remove it from the timed section, I don't think I'll get good timings that take GPU rendering time into account, which I think I was getting before (unless I'm misunderstanding the parallelism of the GPU?). Is there a way to get the frame rate uncapped, or a better way to measure timing?
Windowed (not fullscreen) app not getting past 60 FPS
Try changing the default v-sync setting in the driver configuration, or use the WGL_EXT_swap_control extension (Windows only); on Linux you can use GLX_SGI_swap_control if the driver supports it.
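On Windows that extension is used roughly like this. This is only a sketch: it assumes a GL context is already current, and wglSwapIntervalEXT is only available if the driver actually exposes WGL_EXT_swap_control (and the driver control panel can still override whatever you request here).

```c
#include <windows.h>
#include <GL/gl.h>

/* WGL_EXT_swap_control: function pointer type for wglSwapIntervalEXT. */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

/* Call with a current GL context. interval 0 = vsync off,
 * 1 = wait for one vertical blank per swap. */
void set_swap_interval(int interval)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)          /* NULL if the driver lacks the extension */
        wglSwapIntervalEXT(interval);
}
```

Since you're using GLFW anyway, glfwSwapInterval(0) after creating the window does the same lookup for you on both platforms.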
Application-controlled vsync doesn't work under OpenGL with (at least some) NVIDIA drivers; the only way to set it is through the control panel.
Weird. It was set to let the application choose, but the application was somehow turning on vsync, despite me using the same settings to start GLFW on both computers. Changing it to Force Off fixed it. I guess NVIDIA/ATI or something else changes the default state for a window.
but the application was somehow turning on vsync, despite me using the same settings to start GLFW on both computers.
What somehow?
You can control v-sync yourself.
http://www.opengl.org/wiki/Platform_specifics:_Windows#SwapInterval_aka_vsync
This topic is closed to new replies.