Texture performance

11 comments, last by derodo 18 years, 4 months ago
Hi again. I've been doing some tests with my home-made engine, and I found that the more textures you load into the GL, the slower the framerate gets... I know it's obvious, but the point here is that I do NOTHING but render a black screen with just a text showing the processing time spent on each frame. That is:

* First I initialize the GL.
* Then I load, let's say, "n" textures (256x256); I just load them and define the textures in the GL (with glGenTextures and glTexImage2D).
* Then I start the null rendering process, which just calls glClear, generates some font output, calls glFinish() and finally swaps buffers.

From what I have seen, the numbers show that whenever I get past the 32MB texture limit, the GL performance starts to drop dramatically. I don't make any render call (apart from the text output) and I don't use texturing at all. I have an AGP Radeon 9700Pro with 128MB memory, so the 32MB limit may be just a coincidence. The numbers I obtained from several tests showed framerates above 1000FPS (I know that's not a good performance metric, anyway) when there are few textures defined (and uploaded to the GL), and framerates near 200FPS when more texture memory was used. It never drops below 200FPS, though. Does this make any sense? Has anyone experienced this? Any hints?
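In case it helps before I post the real code, this is roughly what the test does (a stripped-down sketch, not my actual engine code; the texture contents and window/context setup are left out):

#include <GL/gl.h>
#include <vector>

// Sketch only: upload n dummy 256x256 RGBA textures, then run a "null" frame.
void LoadDummyTextures(int n, std::vector<GLuint>& ids)
{
    std::vector<unsigned char> pixels(256 * 256 * 4, 128); // flat grey filler data
    ids.resize(n);
    glGenTextures(n, &ids[0]);
    for (int i = 0; i < n; ++i)
    {
        glBindTexture(GL_TEXTURE_2D, ids[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
    }
}

void NullFrame()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ...draw the timing text here...
    glFinish();
    // SwapBuffers(hdc); // or whatever swap call your platform uses
}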
you're not loading the textures every frame, are you?
That is, you're not calling glTexImage2D() each frame...

If your code isn't too long-winded, perhaps you can post it.
Hehe, of course I'm not calling glTexImage2D on each frame :)

I only load the textures before the main loop, and then I just proceed with the "null" rendering.

I'm investigating this issue in more depth, as I don't think it's normal behaviour.

I'll try to post the code, but I need to double check my rendering pipeline implementation to make sure I'm not making any obvious mistake.
Any frame rate over 150 fps is so dependent on arbitrary card/driver/batch/frame overhead that it's meaningless. If you see a meaningful difference in frame rate once you run your scene at actual load (i.e., if there's a 5% difference when you're rendering enough geometry to be at < 100 fps), then you might start being concerned.
enum Bool { True, False, FileNotFound };
Where's Moses when you need him?

THOU SHALT NOT BENCHMARK USING FPS.


Change the output to frame time instead of FPS and you'll see a much clearer picture.

Btw, frame time = 1 / fps
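For example, something like this (just a sketch using the Win32 high-resolution timer; use whatever timer you prefer):

#include <windows.h>
#include <cstdio>

// Sketch: report milliseconds per frame instead of FPS.
void RunLoop()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    for (;;)
    {
        QueryPerformanceCounter(&t0);
        // ...render, glFinish(), swap buffers...
        QueryPerformanceCounter(&t1);
        double ms = (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart;
        std::printf("frame time: %.3f ms\n", ms);
    }
}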
I know FPS is not a good performance metric. I have made some timings within my code, and found that when the framerate drops (even when I'm rendering NOTHING, just LOADING a LOT of textures before the main loop) all the time is spent in the glFinish()+swap buffers operations.

And yes, if I switch and start to render the "world" in the main loop, there's no performance loss...it keeps running at the same FPS as it does when nothing is rendered ?:-/

I'm just curious about this, as I assume is the way the driver works...
Quote:Original post by derodo
I know FPS is not a good performance metric. I have made some timings within my code, and found that when the framerate drops (even when I'm rendering NOTHING, just LOADING a LOT of textures before the main loop) all the time is spent in the glFinish()+swap buffers operations.

And yes, if I switch and start to render the "world" in the main loop, there's no performance loss...it keeps running at the same FPS as it does when nothing is rendered ?:-/

I'm just curious about this, as I assume is the way the driver works...


Perhaps the driver is slightly confused about which textures have highest priority and which don't, and hence will do a lot of swapping of textures between main / AGP ram. I presume you're loading so many textures that they won't all fit in AGP ram, so some will end up in main memory. Now, normally the driver will probably try to be smart, keeping the most frequently used textures in memory to improve performance. But in the case where you are rendering nothing, it will have no information to make assumptions upon, so any texture could be fair game for swapping in/out of video memory. It's just a little theory, but I could be completely wrong...
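If you want to test that theory, you could ask the GL which textures it currently considers resident and hint their priorities. A diagnostic sketch (glAreTexturesResident/glPrioritizeTextures are only hints and drivers are free to ignore them, so treat the result with caution):

#include <GL/gl.h>
#include <vector>

// How many of the loaded textures does the driver report as resident in video memory?
int CountResident(const std::vector<GLuint>& ids)
{
    std::vector<GLboolean> resident(ids.size());
    glAreTexturesResident((GLsizei)ids.size(), &ids[0], &resident[0]);

    std::vector<GLclampf> priorities(ids.size(), 1.0f); // ask to keep them resident
    glPrioritizeTextures((GLsizei)ids.size(), &ids[0], &priorities[0]);

    int count = 0;
    for (size_t i = 0; i < resident.size(); ++i)
        if (resident[i]) ++count;
    return count;
}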
That would make sense if I were loading "too many" textures, but what I've found from my tests is just that the more textures I load, the more time it takes for glFinish()+swap buffers to complete, even if all of them fit in the card memory.

I got an ATI 9700PRO with 128MB RAM, and I can clearly see the performance loss just by loading about 32MB of texture data.

Maybe it's just common driver behavior, and I'm worrying about nothing... I will post some super simple source code for you to look at when I get back home.

Thanks again for your comments, guys.
Quote:Original post by derodo
That would make sense if I were loading "too many" textures, but what I've found from my tests is just that the more textures I load, the more time it takes for glFinish()+swap buffers to complete, even if all of them fit in the card memory.

I got an ATI 9700PRO with 128MB RAM, and I can clearly see the performance loss just by loading about 32MB of texture data.

Maybe it's just common driver behavior, and I'm worrying about nothing... I will post some super simple source code for you to look at when I get back home.

Thanks again for your comments, guys.


Yeah, this is certainly strange. My only other thought is that there is some overhead going on in the background with the display driver managing textures. I can't imagine what this overhead would be though, perhaps someone has some ideas? Once a texture is loaded into VRAM and it doesn't need to be moved because of other textures, I would presume there would be very little work for the driver to actually do.

Just wondering, have you tried rolling back or updating your drivers to see how it behaves with different versions? Another thing, have you tried to remove the glFinish() statement? It probably has nothing to do with it, but try commenting it out anyway and see if it has any effect on your framerates.
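If you want to narrow it down further, something like this would split the time between glFinish() and the swap (a rough Win32 sketch; adapt it to your setup):

#include <windows.h>
#include <GL/gl.h>
#include <cstdio>

// Diagnostic sketch: time glFinish() and SwapBuffers() separately to see which one
// actually eats the frame time. The timer is the Win32 performance counter.
void TimedSwap(HDC hdc)
{
    LARGE_INTEGER freq, t0, t1, t2;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    glFinish();                 // force all pending GL work to complete
    QueryPerformanceCounter(&t1);
    SwapBuffers(hdc);           // present the back buffer
    QueryPerformanceCounter(&t2);

    std::printf("glFinish: %.3f ms, SwapBuffers: %.3f ms\n",
                (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart,
                (t2.QuadPart - t1.QuadPart) * 1000.0 / freq.QuadPart);
}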



I was just about to test with another driver version. But I want to make some more tests first, as the application I'm running is not so simple and I may be doing some work behind the scenes that could be disturbing the results (it shouldn't, but just in case).

When I get home today, I will make a simple dumb test to really make sure the behaviour is as I explain here.

And by the way, I inserted the glFinish() after I noticed this situation to see if it could be of any help in my analysis.

I'll let you know when I have the new results.

This topic is closed to new replies.
