Helios_vmg

OpenGL glSwapBuffers() - odd performance pattern

Hi.
I'm working on my first big project using OpenGL, and I've encountered a problem that's left me stumped.
I wanted to see how many sprites I could handle without going below 60 fps on my computer. It seems able to handle 10k sprites just fine, but at regular intervals (AFAICT, exactly once every second, regardless of framerate) my framerate drops by as much as 75%. I've narrowed it down to the call to glSwapBuffers() arbitrarily taking longer for no apparent reason. Even if all I do every frame is call my renderer, I still get lag.
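To illustrate how I'm spotting the spikes, here's a simplified sketch (the SpikeDetector type is made up for this post, not from my engine): feed it each frame's measured SwapBuffers duration and it flags frames that take far longer than the running average.

```cpp
#include <vector>

// Illustrative spike detector: record() takes one frame's SwapBuffers
// duration in milliseconds and returns true when it exceeds `factor`
// times the average of all previously recorded frames.
struct SpikeDetector{
    std::vector<double> samples;

    double average() const{
        if (samples.empty())
            return 0;
        double sum=0;
        for (double s:samples)
            sum+=s;
        return sum/samples.size();
    }

    bool record(double ms,double factor=3.0){
        bool spike=!samples.empty() && ms>factor*average();
        samples.push_back(ms);
        return spike;
    }
};
```

In the real loop, the duration would come from wrapping SDL_GL_SwapBuffers() with a high-resolution timer (e.g. QueryPerformanceCounter or std::chrono).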

I'll post my main rendering function, in the hope that someone can spot something I'm doing that's obviously wrong:
[code]
typedef GLfloat real_t; // assumption: real_t is a float, matching GL_FLOAT below
typedef GLuint texture_t;

struct Vertex{
    real_t x,y;
};

struct Color{
    real_t rgba[4];
};

struct ComplexVertex{
    Vertex screen_coordinate;
    Color color;
    Vertex texture_coordinate;
};

struct Quad{
    ComplexVertex vertices[4];
};

typedef Quad VertexArrayElement;

void Device::draw(){
    this->sprite_count=this->sprite_list.size();
    if (this->sprite_list.size()){
        size_t sprite_count=this->sprite_list.size();
        this->sorting_structure.resize(sprite_count);
        sprite *s_p=&this->sorting_structure[0];
        this->vertex_array.resize(sprite_count);
        OpenGL::VertexArrayElement *va_p=&this->vertex_array[0];
        boost::unordered_set<texture_t> textures_in_use;
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);

        // ... populate textures_in_use and sorting_structure ...

        if (textures_in_use.size()>1){
            std::sort(this->sorting_structure.begin(),this->sorting_structure.end(),spritecmp());
            texture_t last_texture=0;
            size_t interval_count=0;
            this->intervals.resize(textures_in_use.size());
            this->copy_array.resize(this->vertex_array.size());

            // ... populate intervals ...

            glVertexPointer(2,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->copy_array[0].vertices[0].screen_coordinate.x));
            glTexCoordPointer(2,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->copy_array[0].vertices[0].texture_coordinate.x));
            glColorPointer(4,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->copy_array[0].vertices[0].color.rgba[0]));
            for (size_t a=0;a<this->intervals.size();a++){
                glBindTexture(GL_TEXTURE_2D,this->intervals[a].texture);
                glDrawArrays(GL_QUADS,this->intervals[a].start*4,this->intervals[a].size*4);
            }
        }else{
            glVertexPointer(2,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->vertex_array[0].vertices[0].screen_coordinate.x));
            glTexCoordPointer(2,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->vertex_array[0].vertices[0].texture_coordinate.x));
            glColorPointer(4,GL_FLOAT,sizeof(OpenGL::ComplexVertex),&(this->vertex_array[0].vertices[0].color.rgba[0]));
            glBindTexture(GL_TEXTURE_2D,this->sorting_structure.front().texture);
            glDrawArrays(GL_QUADS,0,sprite_count*4);
        }
    }
    SDL_GL_SwapBuffers();
}
[/code]

Any ideas?

Update: I've determined that the problem doesn't appear to be my code. I observed the exact same performance pattern when I tried running Quake 3 Arena without vsync. Only OpenGL applications have this problem.
Well, now I really don't know what to do. If it's a driver issue, all I can do is hope it gets fixed eventually or try to find an older version that doesn't have this problem.

I'm completely guessing, but it could be because your CPU-side milliseconds-per-frame is too low and your GPU-side milliseconds-per-frame too high.

For example, say your CPU-side loop takes 1ms to complete, but the [font=courier new,courier,monospace]gl*[/font] calls that it made have resulted in 3ms worth of GPU work. These [font=courier new,courier,monospace]gl*[/font] calls push commands into a queue, which is consumed by the GPU. Usually, there is a decent amount of latency in the queue (e.g. 33ms) and this is normal.
However, if every 1ms, you're adding 3ms of work to the queue, then that latency will increase over time -- after 1 second of gameplay, the latency in that queue will have grown to 3 seconds!
If the driver detects that there is too much latency in this queue, then it can choose to stall the CPU completely (by blocking in [font=courier new,courier,monospace]SwapBuffers[/font]) until the GPU has caught up.

If this is the case, you can fix the problem by -
* not sending more frames worth of commands to the GPU than necessary. e.g. a 60Hz monitor can only display 60FPS, so running Q3 at 600FPS is useless.
* capping your CPU-side framerate yourself, so that the driver doesn't have to.
* profiling how much time the GPU is taking to actually process your [font=courier new,courier,monospace]gl[/font] commands ([i]N.B. you can't do this by timing [font=courier new,courier,monospace]gl[/font] API calls, you need to insert timing events or use a profiling tool[/i]) and ensure that your GPU milliseconds-per-frame is roughly equal to your CPU milliseconds-per-frame.
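As a rough sketch of the second option (plain C++11 chrono/thread here rather than SDL's timing functions; `run_capped` is a made-up name for illustration), make each loop iteration take at least a fixed budget so the CPU can't outrun the cap:

```cpp
#include <chrono>
#include <thread>

// Illustrative frame limiter: run `frames` loop iterations, sleeping so
// each one takes at least `budget`. Returns total elapsed milliseconds
// so the behaviour is easy to verify.
long run_capped(int frames,std::chrono::microseconds budget){
    using clock=std::chrono::steady_clock;
    const auto start=clock::now();
    auto next=start+budget;
    for (int i=0;i<frames;++i){
        // update(); render(); SDL_GL_SwapBuffers(); // real work goes here
        std::this_thread::sleep_until(next); // sleep off the remaining budget
        next+=budget;
    }
    return std::chrono::duration_cast<std::chrono::milliseconds>(clock::now()-start).count();
}
```

With a ~16.67ms budget this caps the loop near 60Hz, which also stops the driver's queue from growing without bound.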

Suppose I inserted an empty loop in the middle of my game loop such that I can be sure that the time spent outside of rendering amounts to over 95% of the run time of the program and the framerate never exceeds 40 fps. What does it mean if the framerate still drops quite visibly?
I can also monitor GPU usage from Process Explorer and it practically idles to run my program.

EDIT: If the answer to the above question is "it doesn't mean anything", is there any way I can improve my code? My target is being able to draw as many sprites as possible as efficiently as possible, considering that they'll be moving constantly and independently, and that sprites may be created or destroyed at any time. Edited by Helios_vmg

"My target is being able to draw as many sprites as possible as efficiently as possible, considering that they'll be moving constantly and independently, and that sprites may be created or destroyed at any time."


Surely you just need it to draw enough for your game? Otherwise, you're just getting side-tracked optimising something that doesn't need optimising.

I figured it out. Remember I said the lag spikes happened once every second regardless of framerate? Remember also I said I could monitor GPU usage from Process Explorer? Turns out, PE's probing of the GPU was adding a huge amount of delay (up to 15 ms) to a few frames once every second.
I still think it's strange that only OpenGL applications behave like this, though.

Hodgman: What you said about sending work to the GPU faster than it can handle it, how could I determine if this was happening? Not that I think it's happening now, but I would like to be prepared if the need arose. Could you recommend a profiler?

[quote name='Helios_vmg' timestamp='1353988791' post='5004411']I can also monitor GPU usage from Process Explorer and it practically idles to run my program[/quote]"Resource usage" stats from apps like these are a nice hint, but [i]generally[/i] shouldn't be trusted as a source of profiling data.[quote]Turns out, PE's probing of the GPU was adding a huge amount of delay (up to 15 ms) to a few frames once every second.[/quote]Hah! That's interesting... So it doesn't occur if you ensure PE isn't running?[quote]Hodgman: What you said about sending work to the GPU faster than it can handle it, how could I determine if this was happening?
[/quote]I'm a bit out of the loop with the GL side of things. I used [url="http://www.gremedy.com/"]gDEBugger[/url] a while ago, but can't remember how good its profiling tools were.

To build your own GPU-side frame-timer, you basically want to create a ring-buffer of "event"/"query" objects and submit one at the start of each frame
-- e.g. an array of 3 or more queries, and increment a counter (wrapping around to 0 at the top of the array) that selects which one you'll be submitting that frame.
You need more than one query because of the latency between the CPU and GPU.
Before you reuse an event/query, you can read its actual time value, which hopefully has actually been written by the time your array/ring wraps around.
I've not done this with OpenGL, but I believe you can use the [url="http://www.opengl.org/registry/specs/ARB/timer_query.txt"]ARB_timer_query[/url] extension.

Most of the game engines I've used have had some kind of on-screen display of this timer info, e.g. two bars at the bottom of the screen, one for CPU time in ms and one for GPU time in ms, with lines/markings on the bars showing 16.6 and 33.3ms (60FPS and 30FPS, respectively).
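The ring bookkeeping itself is tiny. Here's a standalone sketch (the GL calls are left as comments, `QueryRing` is a made-up name, and 3 frames of slack is an assumed queue depth) showing which slot to issue into each frame and when the oldest slot's result is safe to read:

```cpp
#include <cstddef>

// Bookkeeping for a ring of GPU timer queries. With kLatency slots we only
// read a slot kLatency frames after issuing into it, giving the GPU time
// to write the result before we look at it.
struct QueryRing{
    enum { kLatency=3 }; // assumed CPU/GPU latency, in frames
    std::size_t frame=0;

    // Slot to issue this frame's query into,
    // e.g. glQueryCounter(ids[ring.current()], GL_TIMESTAMP).
    std::size_t current() const{ return frame%kLatency; }

    // Once we've wrapped around once, the slot we're about to overwrite
    // holds a result that should be ready,
    // e.g. glGetQueryObjectui64v(ids[ring.current()], GL_QUERY_RESULT, &t).
    bool result_ready() const{ return frame>=kLatency; }

    void next_frame(){ ++frame; }
};
```

Each frame: if result_ready(), read the old timestamp out of the current slot, then issue a new query into that same slot and call next_frame().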

[quote name='Hodgman' timestamp='1354264230' post='5005622']Hah! That's interesting... So it doesn't occur if you ensure PE isn't running?[/quote]Yep. If I shut it off I can get a stable framerate with up to 15k sprites on the screen, which is well above my target. If I couldn't handle such a measly number without stuttering, I was going to be in trouble once I turned scripting and audio back on and added the 3D bits.

[quote name='Hodgman' timestamp='1354264230' post='5005622']To build your own GPU-side frame-timer, you basically want to create a ring-buffer of "event"/"query" objects and submit one at the start of each frame
-- e.g. an array of 3 or more queries, and increment a counter (wrapping around to 0 at the top of the array) that selects which one you'll be submitting that frame.
You need more than one query because of the latency between the CPU and GPU.
Before you reuse an event/query, you can read its actual time value, which hopefully has actually been written by the time your array/ring wraps around.
I've not done this with OpenGL, but I believe you can use the ARB_timer_query extension.
Most of the game engines I've used have had some kind of on-screen display of this timer info, e.g. two bars at the bottom of the screen, one for CPU time in ms and one for GPU time in ms, with lines/markings on the bars showing 16.6 and 33.3ms (60FPS and 30FPS, respectively).[/quote]Alright, thanks. I'll look into it later.
