FBO frame rate issues

Started by barbecue
6 comments, last by Spoonbender 16 years, 3 months ago
Hi, I was wondering if anyone could help me with this. I render a simple scene to a frame buffer object at about 4000 fps. Sometimes when running the program I only get about 400 fps, only 1/10th! Then after a few seconds of waiting the frame rate sometimes goes back up to full speed, but more often it just stays low.

Another thread on this forum mentioned a similar problem: http://www.gamedev.net/community/forums/topic.asp?topic_id=469438. The conclusion of that thread was that with an older card and older drivers the problem goes away. As it happens I also have a 6600 GT in an older PC, so I ran the test:

System 1: Core 2 Duo 6400, 2GB, 7600GT with Forceware 169.21 (latest), WinXP. Result: 400 fps
System 2: Athlon XP 2200, 768MB, 6600GT with Forceware 93.71, WinXP. Result: 1000 fps
System 2, but with Forceware 169.21 (latest). Result: 800 fps

Weird, installing the newest drivers decreased performance by 20% on an older card! I also ran the program on a friend's computer, which likewise has a 7600GT, and again got only 400 fps.

What to do? The FBO only works efficiently on older cards with older drivers :( I will test some more drivers though. Does anyone know about this, or know a way around it? Any hint would be welcome. Thx!
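For reference, a typical render-to-texture FBO setup (the kind of thing I mean) looks roughly like this; the 512x512 size and the variable names are just placeholders, not my exact code:

GLuint fbo, colorTex, depthRb;

// Color texture the scene gets rendered into
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Depth renderbuffer so depth testing works inside the FBO
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 512, 512);

// Attach both to the FBO and check completeness
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthRb);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    { /* handle the error */ }
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window framebuffer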
Hi, I don't know if you can do anything about this. It looks like a bug in the NVIDIA drivers (well, as far as I know), so try downloading an older version or wait for a new one.

Or it may be an issue with your application's FBO format or something like that. (For example, if you are using OpenGL, use GL_ATI_float_buffers instead of GL_ARB_float_buffers on ATI cards; it'll be faster.)

I apologize that I don't know more, but I have an ATI card nowadays, so I'm not so interested in NVIDIA these days.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Quote:Original post by barbecue
What to do?

Nothing, or rather continue working and worry about performance once you have a complex enough scene to actually tax the GPU. First, the difference between 1000 and 400 fps is only about 1.5 ms per frame, which is actually very small, and in any case performance at those framerates is not at all representative of what performance would be in a more complex scene. You just can't draw any valid conclusions about the performance of various drivers or cards when your fps is that high.

Somewhere there is an article explaining the fps vs. ms/frame difference and why in general the latter is better for comparing performance, but unfortunately I can't seem to find it. Maybe somebody else has it bookmarked though.
thx for the replies :)

There are indeed some driver issues. Every time I install a different driver the framerate changes drastically :). But my scene isn't really representative of an average environment, so I'll just have to accept that.

I feel a bit stupid, because I've now identified the real problem behind the big framerate loss.
It wasn't the FBO, but the VBO :s.
When I disable everything in my scene except a cube I get a very high framerate. When I render the cube using a VBO, the frame rate cripples. I know that in general using a VBO for the smallest objects is useless, but I have no idea why the impact is so great. Other objects using VBOs do fine, so I'll debug a little more :)
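For reference, the cube path is roughly the standard static VBO pattern, something like this (the vertex data, names and GL_QUADS layout are just placeholders, not my exact code):

// Setup, done once: 24 vertices for 6 quad faces (data filled in elsewhere)
GLuint cubeVbo;
GLfloat cubeVerts[24 * 3];
glGenBuffers(1, &cubeVbo);
glBindBuffer(GL_ARRAY_BUFFER, cubeVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVerts), cubeVerts, GL_STATIC_DRAW);

// Per frame
glBindBuffer(GL_ARRAY_BUFFER, cubeVbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glDrawArrays(GL_QUADS, 0, 24);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);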
Drivers are usually highly optimized for the 20-100 fps range because that is the only range that matters all that much. Faster than 100 fps is completely pointless, and slower than 20 fps the user probably won't be playing it much...

What does this have to do with your problem? There are optimizations that have a fairly high overhead per frame but improve performance proportional to workload. For instance, keeping a hashtable of transformed points is a huge timesaver when there are tons of points, but when there are only around 8 or so then creating a hashtable that supports millions of vertices and clearing it is a huge burden. I don't write drivers, but that example case would explain your situation.
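A toy version of that tradeoff, with completely made-up numbers, just to show the shape of it:

// Net time saved per frame by an optimization that has a fixed per-frame cost
// but saves time proportional to the number of vertices (all numbers invented).
double net_saving_ms(double fixed_cost_ms, double saving_per_vertex_us, long vertices)
{
    return saving_per_vertex_us * vertices / 1000.0 - fixed_cost_ms;
}
// net_saving_ms(1.0, 0.01, 8)       -> about -1.0 ms  (a pure loss for a tiny scene)
// net_saving_ms(1.0, 0.01, 1000000) -> about +9.0 ms  (a clear win for a heavy one)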
----------------------------- Join us at iClips. We are developing a cool 3D virtual world to go with our livestreams. Email me. Look here.
Quote:Original post by dumbsnake
Drivers are usually highly optimized for the 20-100 fps range because that is the only range that matters all that much. Faster than 100 fps is completely pointless, and slower than 20 fps the user probably won't be playing it much...

What does this have to do with your problem? There are optimizations that have a fairly high overhead per frame but improve performance proportional to workload. For instance, keeping a hashtable of transformed points is a huge timesaver when there are tons of points, but when there are only around 8 or so then creating a hashtable that supports millions of vertices and clearing it is a huge burden. I don't write drivers, but that example case would explain your situation.

Exactly. What you're trying to do is a perfect example of premature optimization. If by "the frame rate cripples" you mean it drops to 400, then nothing is necessarily wrong and you should move on. If at some point in the future your framerate has dropped to 20, then you can revisit whether your FBOs and VBOs are working correctly, but at the moment you just can't tell.
4000 FPS? Are you sure that's enough?

Sorry, couldn't resist :)
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);                      // start from the identity
glhTranslatef2(matrix, 0.0f, 0.0f, 5.0f);       // translate
glhRotateAboutXf2(matrix, angleInRadians);      // rotate about the X axis
glhScalef2(matrix, 1.0f, 1.0f, -1.0f);          // scale (flip Z)
glhQuickInvertMatrixf2(matrix, inverse_matrix); // compute the inverse
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
Quote:Original post by barbecue
There are indeed some driver issues. Every time I install a different driver the framerate changes drastically :). But my scene isn't really representative of an average environment, so I'll just have to accept that.


Not just that, but you're measuring the wrong thing as well.
FPS is not a valid measure of performance. Try the inverse, seconds per frame.

1000 FPS means 1 ms per frame.
400 FPS means 2.5ms per frame.

The difference is NOT 600 frames per second. It is 1.5 milliseconds per frame.

If you'd made the exact same change when your framerate was 100 (10 ms per frame), it wouldn't have dropped by 600 FPS. It'd have gone down to around 87 frames per second; suddenly it's only a 13 FPS loss.
If you'd started with a framerate of 30 (33 ms per frame), it'd have dropped to 29 FPS, that's a one frame per second difference.

See my point? Once you get down to realistic framerates, the performance hit you took becomes negligible. It only looks big because you're dealing in insane framerates right now.
If you want to understand the performance cost of a change you're making to your code, look at how much time it adds to each frame, in milliseconds.
Don't look at how it changes your framerate, because that's misleading at best.
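To make that concrete, here's a rough sketch of timing frames directly; renderFrame() is just a placeholder for whatever you already draw and swap each frame:

#include <chrono>
#include <cstdio>

// Inside the main loop: measure milliseconds per frame instead of counting frames.
auto t0 = std::chrono::steady_clock::now();
renderFrame(); // placeholder for your existing draw + SwapBuffers call
auto t1 = std::chrono::steady_clock::now();
double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
std::printf("%.3f ms/frame (%.0f fps)\n", ms, 1000.0 / ms);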

Another important point is that you're not really stressing the system right now. If you were, then the system with a 7600 would beat the 6600.
Instead it's being limited by the ridiculously high overhead you get from rendering tiny scenes at 1000 frames per second. That gives you a very big uncertainty factor. Even the slightest variation in driver implementation might change that overhead, and double your framerate, even if it makes zero difference in real-world conditions.

So please, if you're going to worry about performance, make sure you have a test application that actually generates valid data. Right now, you're just wasting your time. What you're seeing has *nothing* to do with how the hardware or the drivers actually perform under load, which is what matters.

This topic is closed to new replies.
