OpenGL glSwapBuffers() - odd performance pattern


Hi.
I'm working on my first big project using OpenGL, and I've encountered a problem that's left me stumped.
I wanted to see how many sprites I could handle without going below 60 fps on my computer. It seems like it should handle 10k sprites just fine, but at regular intervals (AFAICT, exactly every second regardless of framerate) my framerate drops by as much as 75%. I've narrowed it down to the call to glSwapBuffers() arbitrarily taking longer for no apparent reason. Even if all I do every frame is call my renderer, I still get the lag.

I'll post my main rendering function, in the hope that someone can spot something I'm doing that's obviously wrong:
[code]
typedef GLuint texture_t;

// Note: the GL_FLOAT pointers below assume real_t is float.
struct Vertex{
	real_t x,y;
};

struct Color{
	real_t rgba[4];
};

struct ComplexVertex{
	Vertex screen_coordinate;
	Color color;
	Vertex texture_coordinate;
};

struct Quad{
	ComplexVertex vertices[4];
};

typedef Quad VertexArrayElement;

void Device::draw(){
	this->sprite_count=this->sprite_list.size();
	if (this->sprite_list.size()){
		size_t sprite_count=this->sprite_list.size();
		this->sorting_structure.resize(sprite_count);
		sprite *s_p=&this->sorting_structure[0];
		this->vertex_array.resize(sprite_count);
		OpenGL::VertexArrayElement *va_p=&this->vertex_array[0];
		boost::unordered_set<texture_t> textures_in_use;
		glEnableClientState(GL_VERTEX_ARRAY);
		glEnableClientState(GL_TEXTURE_COORD_ARRAY);
		glEnableClientState(GL_COLOR_ARRAY);

		// ... populate textures_in_use and sorting_structure ...

		if (textures_in_use.size()>1){
			// Multiple textures: sort sprites by texture, then draw one run per texture.
			std::sort(this->sorting_structure.begin(),this->sorting_structure.end(),spritecmp());
			texture_t last_texture=0;
			size_t interval_count=0;
			this->intervals.resize(textures_in_use.size());
			this->copy_array.resize(this->vertex_array.size());

			// ... populate intervals ...

			glVertexPointer(2,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->copy_array[0].vertices[0].screen_coordinate.x));
			glTexCoordPointer(2,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->copy_array[0].vertices[0].texture_coordinate.x));
			glColorPointer(4,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->copy_array[0].vertices[0].color.rgba[0]));
			for (size_t a=0;a<this->intervals.size();a++){
				glBindTexture(GL_TEXTURE_2D,this->intervals[a].texture);
				glDrawArrays(GL_QUADS,this->intervals[a].start*4,this->intervals[a].size*4);
			}
		}else{
			// Single texture: draw the whole array in one call.
			glVertexPointer(2,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->vertex_array[0].vertices[0].screen_coordinate.x));
			glTexCoordPointer(2,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->vertex_array[0].vertices[0].texture_coordinate.x));
			glColorPointer(4,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->vertex_array[0].vertices[0].color.rgba[0]));
			glBindTexture(GL_TEXTURE_2D,this->sorting_structure.front().texture);
			glDrawArrays(GL_QUADS,0,sprite_count*4);
		}
	}
	SDL_GL_SwapBuffers();
}
[/code]
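One way to confirm the stall really is inside the swap call is to time it every frame and flag outliers. A rough sketch (the 5 ms threshold and the class name are arbitrary, not part of my engine):

```cpp
#include <cstddef>
#include <vector>

// Records per-frame durations of some call (e.g. the buffer swap) and
// flags frames that exceed a fixed millisecond threshold -- useful for
// catching the once-per-second spikes described above.
class SpikeDetector {
public:
    explicit SpikeDetector(double threshold_ms) : threshold_ms_(threshold_ms) {}

    // Feed the duration of this frame's swap call; returns true if it
    // counts as a spike.
    bool record(double ms) {
        samples_.push_back(ms);
        return ms > threshold_ms_;
    }

    // How many recorded frames exceeded the threshold.
    std::size_t spike_count() const {
        std::size_t n = 0;
        for (double ms : samples_)
            if (ms > threshold_ms_) ++n;
        return n;
    }

private:
    double threshold_ms_;
    std::vector<double> samples_;
};

// Per-frame usage (SDL_GL_SwapBuffers needs a live window, so shown as a
// comment for orientation only):
//
//   auto t0 = std::chrono::steady_clock::now();
//   SDL_GL_SwapBuffers();
//   auto t1 = std::chrono::steady_clock::now();
//   double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
//   if (detector.record(ms)) std::printf("spike: %.2f ms\n", ms);
```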

Any ideas?

Update: I've determined that the problem doesn't appear to be my code. I observed the exact same performance pattern when I tried running Quake 3 Arena without vsync. Only OpenGL applications have this problem.
Well, now I really don't know what to do. If it's a driver issue, all I can do is hope it gets fixed eventually or try to find an older version that doesn't have this problem.

I'm completely guessing, but it could be because your CPU-side milliseconds-per-frame is too low and your GPU-side milliseconds-per-frame too high.

For example, say your CPU-side loop takes 1ms to complete, but the [font=courier new,courier,monospace]gl*[/font] calls that it made have resulted in 3ms worth of GPU work. These [font=courier new,courier,monospace]gl*[/font] calls push commands into a queue, which is consumed by the GPU. Usually, there is a decent amount of latency in the queue (e.g. 33ms) and this is normal.
However, if every 1ms, you're adding 3ms of work to the queue, then that latency will increase over time -- after 1 second of gameplay, the latency in that queue will have grown to 3 seconds!
If the driver detects that there is too much latency in this queue, then it can choose to stall the CPU completely (by blocking in [font=courier new,courier,monospace]SwapBuffers[/font]) until the GPU has caught up.

If this is the case, you can fix the problem by -
* not sending more frames worth of commands to the GPU than necessary. e.g. a 60Hz monitor can only display 60FPS, so running Q3 at 600FPS is useless.
* capping your CPU-side framerate yourself, so that the driver doesn't have to.
* profiling how much time the GPU is taking to actually process your [font=courier new,courier,monospace]gl[/font] commands ([i]N.B. you can't do this by timing [font=courier new,courier,monospace]gl[/font] API calls, you need to insert timing events or use a profiling tool[/i]) and ensure that your GPU milliseconds-per-frame is roughly equal to your CPU milliseconds-per-frame.

Suppose I inserted an empty loop in the middle of my game loop such that I can be sure that the time spent outside of rendering amounts to over 95% of the run time of the program and the framerate never exceeds 40 fps. What does it mean if the framerate still drops quite visibly?
I can also monitor GPU usage from Process Explorer and it practically idles to run my program.

EDIT: If the answer to the above question is "it doesn't mean anything", is there any way I can improve my code? My target is being able to draw as many sprites as possible as efficiently as possible, considering that they'll be moving constantly and independently, and that sprites may be created or destroyed at any time.

"My target is being able to draw as many sprites as possible as efficiently as possible, considering that they'll be moving constantly and independently, and that sprites may be created or destroyed at any time."


Surely you just need it to draw enough for your game? Otherwise, you're just getting side-tracked optimising something that doesn't need optimising.

Well, it's an engine, not a specific game, so I don't really have a fixed target performance. Although I admit I am getting sidetracked a bit.

I figured it out. Remember I said the lag spikes happened once every second regardless of framerate? Remember also I said I could monitor GPU usage from Process Explorer? Turns out, PE's probing of the GPU was adding a huge amount of delay (up to 15 ms) to a few frames once every second.
I still think it's strange that only OpenGL applications behave like this, though.

Hodgman: What you said about sending work to the GPU faster than it can handle it, how could I determine if this was happening? Not that I think it's happening now, but I would like to be prepared if the need arose. Could you recommend a profiler?

[quote name='Helios_vmg' timestamp='1353988791' post='5004411']I can also monitor GPU usage from Process Explorer and it practically idles to run my program[/quote]
"Resource usage" stats from apps like these are a nice hint, but [i]generally[/i] shouldn't be trusted as a source of profiling data.

[quote]Turns out, PE's probing of the GPU was adding a huge amount of delay (up to 15 ms) to a few frames once every second.[/quote]
Hah! That's interesting... So it doesn't occur if you ensure PE isn't running?

[quote]Hodgman: What you said about sending work to the GPU faster than it can handle it, how could I determine if this was happening?[/quote]
I'm a bit out of the loop with the GL side of things. I used [url="http://www.gremedy.com/"]gDEBugger[/url] a while ago, but can't remember how good its profiling tools were.

To build your own GPU-side frame-timer, you basically want to create a ring-buffer of "event"/"query" objects and submit one at the start of each frame
-- e.g. an array of 3 or more queries, and increment a counter (wrapping around to 0 at the top of the array) that selects which one you'll be submitting that frame.
You need more than one query because of the latency between the CPU and GPU.
Before you reuse an event/query, you can read its actual time value, which hopefully has actually been written by the time your array/ring wraps around.
I've not done this with OpenGL, but I believe you can use the [url="http://www.opengl.org/registry/specs/ARB/timer_query.txt"]ARB_timer_query[/url] extension.
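The ring bookkeeping itself is just modular indexing; something like this (the GL calls are shown as comments since they need a live context, and the 3-slot ring size and function names are only examples):

```cpp
#include <cstddef>

// Slot to submit this frame's query into. Because slots are reused in
// order, it is also the slot whose result (submitted `ring_size` frames
// earlier) should be read back just before reuse.
inline std::size_t ring_slot(std::size_t frame, std::size_t ring_size) {
    return frame % ring_size;
}

// Per-frame sketch using ARB_timer_query (illustrative, needs a context):
//
//   GLuint queries[3];                        // glGenQueries(3, queries) once
//   GLuint64 last_timestamp = 0;
//   void frame_mark(std::size_t frame) {
//       std::size_t i = ring_slot(frame, 3);
//       if (frame >= 3) {                     // oldest result should be ready
//           GLuint64 t;
//           glGetQueryObjectui64v(queries[i], GL_QUERY_RESULT, &t);
//           // t - last_timestamp ~= GPU nanoseconds for that frame
//           last_timestamp = t;
//       }
//       glQueryCounter(queries[i], GL_TIMESTAMP);
//   }
```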

Most of the game engines I've used have had some kind of on-screen display of this timer info, e.g. two bars at the bottom of the screen, one for CPU time in ms and one for GPU time in ms, with lines/markings on the bars showing 16.6 and 33.3ms (60FPS and 30FPS, respectively).

[quote name='Hodgman' timestamp='1354264230' post='5005622']Hah! That's interesting... So it doesn't occur if you ensure PE isn't running?[/quote]Yep. If I shut it off I can get a stable framerate with up to 15k sprites on the screen, which is well above my target. If I couldn't handle such a measly number without stuttering, I was going to be in trouble once I turned scripting and audio back on and added the 3D bits.

[quote name='Hodgman' timestamp='1354264230' post='5005622']To build your own GPU-side frame-timer, you basically want to create a ring-buffer of "event"/"query" objects and submit one at the start of each frame
-- e.g. an array of 3 or more queries, and increment a counter (wrapping around to 0 at the top of the array) that selects which one you'll be submitting that frame.
You need more than one query because of the latency between the CPU and GPU.
Before you reuse an event/query, you can read its actual time value, which hopefully has actually been written by the time your array/ring wraps around.
I've not done this with OpenGL, but I believe you can use the ARB_timer_query extension.
Most of the game engines I've used have had some kind of on-screen display of this timer info, e.g. two bars at the bottom of the screen, one for CPU time in ms and one for GPU time in ms, with lines/markings on the bars showing 16.6 and 33.3ms (60FPS and 30FPS, respectively).[/quote]Alright, thanks. I'll look into it later.
