BlueSpud

OpenGL
Possible Causes of Performance Degradation Over Time


Hey,

So I was working on my OpenGL engine today and I noticed that if I leave it running for about a minute and a half, I start to experience some significant framerate drops. The first thing I did was make sure that there were no OpenGL allocations happening every frame. After both a code review and a check with a separate profiling application, I verified that I am not calling anything like glGenBuffers() per frame. Then I checked for possible issues with dynamic allocations per frame. The code executed every frame is relatively large, but there don't seem to be any dynamic allocations in the rendering pipeline. The memory usage graph shows almost no allocation except for a little here and there, which I assume is coming from PhysX or OpenAL, both of which are in the program.
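For reference, the kind of per-frame check I mean could be as simple as routing buffer creation through a counting wrapper (just a sketch, not my actual engine code; debugGenBuffers and the counter are made up for illustration):

#include <GL/glew.h>
#include <cstdio>

static int g_buffersCreatedThisFrame = 0;

// Call this instead of glGenBuffers() directly so creations get counted.
inline void debugGenBuffers(GLsizei n, GLuint* ids) {
    glGenBuffers(n, ids);
    g_buffersCreatedThisFrame += n;
}

// Run at the end of every frame; any nonzero count is a per-frame allocation.
void endOfFrameAllocationCheck(long frame) {
    if (g_buffersCreatedThisFrame > 0)
        std::fprintf(stderr, "frame %ld: %d buffer(s) created\n",
                     frame, g_buffersCreatedThisFrame);
    g_buffersCreatedThisFrame = 0;
}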

 

I thought that perhaps my CPU or GPU could be thermal throttling (I'm on a laptop). I ran the program until I experienced the performance issues, closed it, and immediately ran it again; the issues stopped once the application was re-run. What also strikes me as odd is that when I alt-tab out and back in, the performance issues stop for a short period of time.

 

I'm a bit puzzled about what is happening here. I highly doubt it's a driver issue since other games work fine, although I haven't tried it on another computer. I've run some standard time-elapsed CPU profiling to try to figure out where the code gets slow (I know these timings are generally unreliable because the GPU operates asynchronously), and it seems like anything that touches OpenGL just runs slower after the game has been left running for a while. Any ideas as to what might be causing this? I've noticed this issue for a while but always ignored it, assuming it was something else running on the computer, so now that I've finally tested it extensively, I can't point to a specific code change that introduced it.
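For what it's worth, this is roughly how GPU time can be measured independently of CPU wall-clock timing, using an OpenGL timer query (a minimal sketch, not what my profiler currently does):

#include <GL/glew.h>
#include <cstdio>

static GLuint gpuTimer = 0;

void initGpuTimer() {
    glGenQueries(1, &gpuTimer);  // one-time setup, not per frame
}

void timedFrame() {
    glBeginQuery(GL_TIME_ELAPSED, gpuTimer);
    // ... issue all draw calls for the frame here ...
    glEndQuery(GL_TIME_ELAPSED);

    // Reading the result waits for the GPU to finish; a real profiler would
    // double-buffer queries to avoid this synchronization point.
    GLuint64 ns = 0;
    glGetQueryObjectui64v(gpuTimer, GL_QUERY_RESULT, &ns);
    std::printf("GPU time: %.3f ms\n", ns / 1.0e6);
}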

 

Thanks


Nypyren

The alt-tabbing behavior sounds like thermal throttling to me, or possibly a flush of accumulated GPU resources (though I'm only familiar with what DirectX does when it loses its device - not sure if OpenGL does that as well).

Have you tried using GPU-Z or a similar GPU monitoring tool to check whether it shows anything interesting in the "PerfCap Reason" field?

The alt-tabbing behavior sounds like thermal throttling to me.

 

Yeah - I suspect that too. I wouldn't be surprised if CPU/GPU clock-speed management is dynamic enough that launching a new program or alt-tabbing out could lead to a temporary speed recovery.


 


 

 

Here are some graphs of the thermals while running the game for ~7 minutes.

http://imgur.com/a/zZ1iq

It looks like there could be some throttling, but during the long stretch where the temperature remained constant, the game was still running at 60 FPS (V-sync was enabled). This was after a fresh reboot, so it could still be a temperature issue, but I'll have to do more extensive tests on that later if there aren't any other ideas.



  • Similar Content

    • By recp
      Hi,
      I'm working on a new asset importer (https://github.com/recp/assetkit) based on the COLLADA specs; the question is not about COLLADA directly.
      I'm also working on a new renderer (https://github.com/recp/libgk) to render the imported document.
      I'll spend more time on this renderer in the future, of course; for now, rendering the imported (implemented) parts is enough for me.
      assetkit imports COLLADA documents (it will support glTF too);
      importing scenes, geometries, effects/materials, and 2D textures and rendering them all seem to be working.
      My actual confusion is about shaders. COLLADA has a COMMON profile and a GLSL profile.
      The GLSL profile provides shaders for effects, so I don't need to worry about those; I just compile, link, and group them before rendering.

      The problem occurs with the COMMON profile, because there I need to write the shaders myself.
      I've written them for basic materials, and another version for 2D textures.
      I would like to create multiple programs, but I am not sure how to split this shader into smaller ones.

      Basic material version (only colors):
      https://github.com/recp/libgk/blob/master/src/default/shader/gk_default.frag
      Texture version:
      https://gist.github.com/recp/b0368c74c35d9d6912f524624bfbf5a3
      I used subroutines to bind materials; I actually liked that approach.
      In the scene graph every node can have a different program, and the renderer switches between them if parentNode->program != node->program, roughly as in the sketch below.
      (I'll do scene graph optimizations, e.g. view frustum culling and grouping shaders, later.)
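      The switch during traversal looks roughly like this (Node is simplified for illustration; it is not the actual libgk type):

      #include <GL/glew.h>

      struct Node {
          GLuint program;   // linked program for this node's material
          Node** children;
          int childCount;
      };

      void renderNode(const Node* node, GLuint bound) {
          if (node->program != bound) {
              glUseProgram(node->program);  // switch only when it changes
              bound = node->program;
          }
          // ... bind material subroutines, set uniforms, draw ...
          for (int i = 0; i < node->childCount; ++i)
              renderNode(node->children[i], bound);
      }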

      I'm going to implement transparency, but I'm considering creating separate shaders for it,
      because the default shader is becoming a branching hell.
      I can't generate a shader for every node, because I don't know how many nodes there may be; there is no limit.
      I don't know how to write a good uber-shader for the different cases:

      Here is the material struct:
      struct Material {
          ColorOrTexture emission;
          ColorOrTexture ambient;
          ColorOrTexture specular;
          ColorOrTexture reflective;
          ColorOrTexture transparent;
          ColorOrTexture diffuse;
          float shininess;
          float reflectivity;
          float transparency;
          float indexOfRefraction;
      };
      ColorOrTexture can hold either a color or a 2D texture. If there were only a single ColorOrTexture member, I could split this into two programs.
      Also, since I'm going to implement transparency, I am not sure how many programs I will need.

      I'm considering maintaining a few default shaders for the COMMON profile:
      1) no texture, 2) one of the ColorOrTexture members contains a texture, 3) ... (see the sketch below).
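      For reference, a common way to get such variants without runtime branching is to compile one source with different #define flags prepended; a minimal sketch (HAS_DIFFUSE_TEX is a made-up example flag, and error checking is omitted):

      #include <GL/glew.h>
      #include <string>

      GLuint compileVariant(const char* fragSource, bool hasDiffuseTex) {
          std::string src = "#version 330 core\n";
          if (hasDiffuseTex)
              src += "#define HAS_DIFFUSE_TEX\n";  // shader uses #ifdef instead of a uniform branch
          src += fragSource;

          GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
          const char* ptr = src.c_str();
          glShaderSource(sh, 1, &ptr, nullptr);
          glCompileShader(sh);  // check GL_COMPILE_STATUS in real code
          return sh;            // attach and link into a program afterwards
      }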

      Any advice, in general or about how to optimize/split (if I need to) the shaders I linked above?
      What do you think of the shaders I wrote? I would like to write them without branching if possible.
      I hope I don't need to write 50+ or 100+ shaders and 100+ default programs.

      PS: These default shaders should be able to render any document; they are not specific, they are general purpose.
      I compile and link the default shaders when the app launches.

      Thanks
    • By CircleOfLight97
      Hi guys,
      I would like to contribute to a game project as a developer (open source, possibly). I have some experience with C/C++ in game development (personal projects). I don't know either Unreal or Unity, but I have some knowledge of OpenGL, GLSL, and shading theory, as I took some courses on that at university. I have some knowledge of maths and basic physics. I know a little about using Blender for modelling, texturing, and simple game assets (no characters, no animation, no skinning/rigging). I have no genre preferences, but I like adventure games, dungeon crawlers, platformers, and randomly generated things. I know these kinds of projects involve a lot of time and I'd really like to work on one, but if there are no clearly defined design goals/stories/gameplay mechanics I would rather not be part of it, and I would prefer a smaller but well-defined project over a huge, unfinishable one.
      CircleOfLight97
    • By gamesthatcouldbeworse
      Hi, I finally released KILL COMMANDO on gamejolt for free. It is a retro fun-splatter shooter in C64 style. Give it a try.
    • By phil67rpg

      void TimerFunction(int value) {
          glutPostRedisplay();
          // Re-arm the timer so the scene keeps redrawing once per second
          // (the timer is armed once elsewhere, e.g. in main).
          glutTimerFunc(1000, TimerFunction, 1);
      }

      void drawScene() {
          glClear(GL_COLOR_BUFFER_BIT);
          drawScene_bug();
          eraseScene_bug();
          // drawScene_bug_two();
          // eraseScene_bug_two();
          drawScene_ship();
          drawScene_bullet();
          glutSwapBuffers();
      }