# OpenGL glSwapBuffers() - odd performance pattern


## Recommended Posts

Hi.
I'm working on my first big project using OpenGL, and I've encountered a problem that's left me stumped.
I wanted to see how many sprites I could handle without going below 60 fps on my computer. It seems like it should be able to handle 10k sprites just fine, but at regular intervals (AFAICT, exactly every second, regardless of framerate) my framerate drops by as much as 75%. I've narrowed it down to the call to SDL_GL_SwapBuffers() arbitrarily taking longer for no apparent reason. Even if all I do every frame is call my renderer, I still get the lag.

I'll post my main rendering function, in the hope that someone can spot something I'm doing that's obviously wrong:
```cpp
typedef GLuint texture_t;

struct Vertex{ real_t x, y; };
struct Color{ real_t rgba[4]; };
struct ComplexVertex{
    Vertex screen_coordinate;
    Color color;
    Vertex texture_coordinate;
};
struct Quad{ ComplexVertex vertices[4]; };
typedef Quad VertexArrayElement;

void Device::draw(){
    this->sprite_count = this->sprite_list.size();
    if (this->sprite_list.size()){
        size_t sprite_count = this->sprite_list.size();
        this->sorting_structure.resize(sprite_count);
        sprite *s_p = &this->sorting_structure[0];
        this->vertex_array.resize(sprite_count);
        OpenGL::VertexArrayElement *va_p = &this->vertex_array[0];
        boost::unordered_set<texture_t> textures_in_use;

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);

        // ... populate textures_in_use and sorting_structure ...

        if (textures_in_use.size() > 1){
            std::sort(this->sorting_structure.begin(), this->sorting_structure.end(), spritecmp());
            texture_t last_texture = 0;
            size_t interval_count = 0;
            this->intervals.resize(textures_in_use.size());
            this->copy_array.resize(this->vertex_array.size());

            // ... populate intervals ...

            glVertexPointer(2, GL_FLOAT, sizeof(OpenGL::ComplexVertex), &(this->copy_array[0].vertices[0].screen_coordinate.x));
            glTexCoordPointer(2, GL_FLOAT, sizeof(OpenGL::ComplexVertex), &(this->copy_array[0].vertices[0].texture_coordinate.x));
            glColorPointer(4, GL_FLOAT, sizeof(OpenGL::ComplexVertex), &(this->copy_array[0].vertices[0].color.rgba[0]));
            for (size_t a = 0; a < this->intervals.size(); a++){
                glBindTexture(GL_TEXTURE_2D, this->intervals[a].texture);
                glDrawArrays(GL_QUADS, this->intervals[a].start * 4, this->intervals[a].size * 4);
            }
        }else{
            glVertexPointer(2, GL_FLOAT, sizeof(OpenGL::ComplexVertex), &(this->vertex_array[0].vertices[0].screen_coordinate.x));
            glTexCoordPointer(2, GL_FLOAT, sizeof(OpenGL::ComplexVertex), &(this->vertex_array[0].vertices[0].texture_coordinate.x));
            glColorPointer(4, GL_FLOAT, sizeof(OpenGL::ComplexVertex), &(this->vertex_array[0].vertices[0].color.rgba[0]));
            glBindTexture(GL_TEXTURE_2D, this->sorting_structure.front().texture);
            glDrawArrays(GL_QUADS, 0, sprite_count * 4);
        }
    }
    SDL_GL_SwapBuffers();
}
```

Any ideas?

##### Share on other sites
Update: I've determined that the problem doesn't appear to be my code. I observed the exact same performance pattern when I tried running Quake 3 Arena without vsync. Only OpenGL applications have this problem.
Well, now I really don't know what to do. If it's a driver issue, all I can do is hope it gets fixed eventually or try to find an older version that doesn't have this problem.

##### Share on other sites
I'm completely guessing, but it could be because your CPU-side milliseconds-per-frame is too low and your GPU-side milliseconds-per-frame too high.

For example, say your CPU-side loop takes 1ms to complete, but the `gl*` calls that it made have resulted in 3ms worth of GPU work. These `gl*` calls push commands into a queue, which is consumed by the GPU. Usually there is a decent amount of latency in this queue (e.g. 33ms), and this is normal.
However, if every 1ms, you're adding 3ms of work to the queue, then that latency will increase over time -- after 1 second of gameplay, the latency in that queue will have grown to 3 seconds!
If the driver detects that there is too much latency in this queue, it can choose to stall the CPU completely (by blocking in `SwapBuffers`) until the GPU has caught up.

If this is the case, you can fix the problem by:
* not sending more frames' worth of commands to the GPU than necessary -- e.g. a 60Hz monitor can only display 60FPS, so running Q3 at 600FPS is useless.
* capping your CPU-side framerate yourself, so that the driver doesn't have to.
* profiling how much time the GPU is taking to actually process your `gl` commands (N.B. you can't do this by timing `gl` API calls; you need to insert timing events or use a profiling tool) and ensuring that your GPU milliseconds-per-frame is roughly equal to your CPU milliseconds-per-frame.
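The second bullet (capping the CPU-side framerate yourself) can be done with a simple sleep-based limiter. A minimal sketch in portable C++ using `std::chrono` and `std::this_thread` -- the class name and interface are my own, not from the thread:

```cpp
#include <chrono>
#include <thread>

// Minimal frame limiter: sleep off whatever is left of the frame budget.
class FrameLimiter {
public:
    explicit FrameLimiter(double target_fps)
        : frame_budget_(1.0 / target_fps),
          last_(std::chrono::steady_clock::now()) {}

    // Call once per frame, e.g. right after swapping buffers.
    void wait(){
        using namespace std::chrono;
        auto now = steady_clock::now();
        duration<double> elapsed = now - last_;
        if (elapsed.count() < frame_budget_)
            std::this_thread::sleep_for(duration<double>(frame_budget_ - elapsed.count()));
        last_ = std::chrono::steady_clock::now();
    }

private:
    double frame_budget_;                              // seconds per frame
    std::chrono::steady_clock::time_point last_;       // end of previous frame
};
```

Calling `wait()` right after `SDL_GL_SwapBuffers()` keeps the driver's command queue from growing frame after frame. One caveat: `sleep_for` granularity on some OSes is a millisecond or worse, so engines often sleep slightly short of the budget and busy-wait the remainder.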

##### Share on other sites
Suppose I inserted an empty loop in the middle of my game loop such that I can be sure that the time spent outside of rendering amounts to over 95% of the run time of the program and the framerate never exceeds 40 fps. What does it mean if the framerate still drops quite visibly?
I can also monitor GPU usage from Process Explorer, and the GPU practically idles while running my program.

EDIT: If the answer to the above question is "it doesn't mean anything", is there any way I can improve my code? My target is being able to draw as many sprites as possible, as efficiently as possible, considering that they'll be moving constantly and independently, and that sprites may be created or destroyed at any time.

##### Share on other sites
> My target is being able to draw as many sprites as possible as efficiently as possible, considering that they'll be moving constantly and independently, and that sprites may be created or destroyed at any time.

Surely you just need it to draw enough for your game? Otherwise, you're just getting side-tracked optimising something you don't need optimising.

##### Share on other sites
Well, it's an engine, not a specific game, so I don't really have a fixed target performance. Although I admit I am getting sidetracked a bit.

##### Share on other sites
I figured it out. Remember I said the lag spikes happened once every second regardless of framerate? Remember also I said I could monitor GPU usage from Process Explorer? Turns out, PE's probing of the GPU was adding a huge amount of delay (up to 15 ms) to a few frames once every second.
I still think it's strange that only OpenGL applications behave like this, though.

Hodgman: What you said about sending work to the GPU faster than it can handle it, how could I determine if this was happening? Not that I think it's happening now, but I would like to be prepared if the need arose. Could you recommend a profiler?

##### Share on other sites
> I can also monitor GPU usage from Process Explorer and it practically idles to run my program

"Resource usage" stats from apps like these are a nice hint, but generally shouldn't be trusted as a source of profiling data.

> Turns out, PE's probing of the GPU was adding a huge amount of delay (up to 15 ms) to a few frames once every second.

Hah! That's interesting... So it doesn't occur if you ensure PE isn't running?

> Hodgman: What you said about sending work to the GPU faster than it can handle it, how could I determine if this was happening?

I'm a bit out of the loop with the GL side of things. I used gDEBugger a while ago, but can't remember how good its profiling tools were.

To build your own GPU-side frame-timer, you basically want to create a ring-buffer of "event"/"query" objects and submit one at the start of each frame
-- e.g. an array of 3 or more queries, and increment a counter (wrapping around to 0 at the top of the array) that selects which one you'll be submitting that frame.
You need more than one query because of the latency between the CPU and GPU.
Before you reuse an event/query, you can read its actual time value, which hopefully has actually been written by the time your array/ring wraps around.
I've not done this with OpenGL, but I believe you can use the ARB_timer_query extension.
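The ring-of-queries scheme above can be sketched as follows. Since no GL context is available here, the three GL entry points are injected as callables; with real OpenGL (ARB_timer_query) they would be `glBeginQuery(GL_TIME_ELAPSED, id)`, `glEndQuery(GL_TIME_ELAPSED)` and `glGetQueryObjectui64v(id, GL_QUERY_RESULT, &nanoseconds)` on query objects created with `glGenQueries`. The class name and callable interface are illustrative, not from the thread:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

// Ring of N GPU timer "queries". The GPU lags the CPU, so a query's
// result is only read back N frames after it was issued, right before
// its slot is reused -- by then it has (hopefully) been written.
template <std::size_t N = 3>
class GpuFrameTimer {
public:
    using BeginFn  = std::function<void(std::size_t slot)>;        // glBeginQuery
    using EndFn    = std::function<void(std::size_t slot)>;        // glEndQuery
    using ResultFn = std::function<std::uint64_t(std::size_t slot)>; // glGetQueryObjectui64v

    GpuFrameTimer(BeginFn begin, EndFn end, ResultFn result)
        : begin_(begin), end_(end), result_(result) {}

    // Call at the start of the frame. Returns the GPU time of the frame
    // issued N frames ago, or 0 until the ring has wrapped once.
    std::uint64_t begin_frame(){
        std::uint64_t oldest = 0;
        if (frames_issued_ >= N)
            oldest = result_(current_);   // read old result before reusing slot
        begin_(current_);
        return oldest;
    }

    // Call at the end of the frame, after the last draw call.
    void end_frame(){
        end_(current_);
        current_ = (current_ + 1) % N;    // advance (and wrap) the ring
        ++frames_issued_;
    }

private:
    BeginFn begin_;
    EndFn end_;
    ResultFn result_;
    std::size_t current_ = 0;
    std::uint64_t frames_issued_ = 0;
};
```

With three slots, the readback is always three frames behind the CPU, which tolerates the queue latency Hodgman describes without ever stalling on an unfinished query.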

Most of the game engines I've used have had some kind of on-screen display of this timer info, e.g. two bars at the bottom of the screen, one for CPU time in ms and one for GPU time in ms, with lines/markings on the bars showing 16.6ms and 33.3ms (60FPS and 30FPS, respectively).

##### Share on other sites
> Hah! That's interesting... So it doesn't occur if you ensure PE isn't running?
Yep. If I shut it off I can get a stable framerate with up to 15k sprites on the screen, which is well above my target. If I couldn't handle such a measly number without stuttering, I was going to be in trouble once I turned scripting and audio back on and added the 3D bits.

> To build your own GPU-side frame-timer, you basically want to create a ring-buffer of "event"/"query" objects and submit one at the start of each frame [...]
Alright, thanks. I'll look into it later.
