
OpenGL Video RAM fragmentation and resource management for continuously running system?


Prune    224
Basically I don't know how to handle this situation: consider a continuously running system that loads and unloads modules on a schedule, very frequently (on average every minute), while the OpenGL rendering never leaves full-screen; the module switch is completely seamless (maybe a half-second fade between one module ceasing to draw on the OpenGL render thread and the next starting). Since I can't preload all the data at system startup, things like textures and VBOs will have to be loaded dynamically. How do I keep video memory fragmentation from eventually killing performance (and possibly stability)? The system needs to be very stable, and I can't restart it more often than once every 24 hours (preferably no more than once a week).

Numsgil    501
First load all the data that won't be changing between module loads/unloads, if any. After that, load a module's texture data. When the module is done, unload it completely before loading the next module's data. If you do it this way, there won't be any fragmentation.
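Roughly, in code; a minimal sketch, assuming GLEW (or similar) for the VBO entry points, with a hypothetical Module struct standing in for however you track a module's GL objects:

#include <GL/glew.h>
#include <vector>

// Hypothetical bookkeeping: every GL object a module creates gets recorded
// here so the whole set can be destroyed in one sweep.
struct Module {
    std::vector<GLuint> textures;
    std::vector<GLuint> vbos;
};

void unloadModule(Module& m)
{
    // Deletion is cheap: the driver releases storage without moving any
    // data across the bus.
    if (!m.textures.empty())
        glDeleteTextures((GLsizei)m.textures.size(), &m.textures[0]);
    if (!m.vbos.empty())
        glDeleteBuffers((GLsizei)m.vbos.size(), &m.vbos[0]);
    m.textures.clear();
    m.vbos.clear();
}

void switchModules(Module& current /*, next module's source data */)
{
    unloadModule(current);
    // Only now run the glGenTextures/glTexImage2D/glGenBuffers/glBufferData
    // calls for the next module, so the driver's heap is back to the same
    // baseline state at every switch and per-module allocations never
    // interleave.
}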

Prune    224
Quote:
Original post by Promit
Is this actually an observed problem, or just paranoia?

My boss wants a system that is all but unconditionally stable. I'd better be paranoid if I want to keep my job.

Quote:
Original post by Numsgil
First load all the data that won't be changing between module loads/unloads, if any. After that, load a module's texture data. When the module is done, unload it completely before loading the next module's data. If you do it this way, there won't be any fragmentation.

Thanks for the suggestion. I'm wondering what the best way is to load things so that I can do a quick switchover. I'd have to preload the next module's data before the first finishes, but I'm worried that a burst of glTexImage, glBufferData, and glMapBuffer calls will hurt the framerate of whatever's currently running. I could have the next module queue messages to the render thread so the uploads are spread out in time, but that hardly seems like the simplest solution.
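Something along these lines is what I have in mind; a rough sketch only, where PendingTexture and the per-frame cap are made up:

#include <GL/glew.h>
#include <deque>

// A texture whose pixels have already been decoded into RAM by the
// next module's loader (assuming RGBA8 for brevity).
struct PendingTexture {
    GLuint      tex;     // name from glGenTextures
    GLsizei     w, h;
    const void* pixels;  // owned by the loading code
};

static std::deque<PendingTexture> g_uploadQueue;

// Called once per frame on the render thread. maxUploads caps how much
// bus traffic a single frame absorbs; it would have to be tuned against
// the measured frame time of the currently running module.
void drainUploads(int maxUploads)
{
    for (int i = 0; i < maxUploads && !g_uploadQueue.empty(); ++i) {
        PendingTexture p = g_uploadQueue.front();
        g_uploadQueue.pop_front();
        glBindTexture(GL_TEXTURE_2D, p.tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, p.w, p.h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, p.pixels);
    }
}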

Promit    13246
In that case, you're going to have to do heavy stress testing anyway. Why not write a simpler first-pass implementation and then refine based on what you see?

Prune    224
I guess it's just that, in my experience, major refinements end up being rewrites of half the code, heh...

Numsgil's suggestion requires that I unload a module completely before loading the next one. That's a problem for making the transition seamless, where at most a half-second black screen or logo between modules is acceptable. I'd really need to start loading the next one before unloading the current one.

Now imagine doing such a switch roughly every minute for 24 hours. How likely is it that video memory gets fragmented and we end up constantly swapping with main memory?
At least with system RAM there's a clear solution: preallocate memory pools. But there's no void* analogue for VRAM, so memory management there would be a pain. I'd have to preallocate 'free' lists for every distinct kind of object, e.g. textures with x channels at y resolution (created with glTexImage2D), check the list for a match when loading an actual texture and fill the recycled one with glTexSubImage2D, then return it to the free list afterwards (example at http://www.codeguru.com/cpp/g-m/opengl/texturemapping/article.php/c5573/ ), then do something similar for VBOs, and so on. It would be horrible...
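Just to illustrate how involved even the texture half gets, here's a sketch of that free-list scheme (untested, keyed only on size and format, and assuming plain RGBA8 uploads for brevity):

#include <GL/glew.h>
#include <map>
#include <vector>

// Key: the exact 'shape' of a texture. Only textures of identical shape
// can recycle each other's storage via glTexSubImage2D.
struct TexShape {
    GLsizei w, h;
    GLenum  fmt;  // internal format passed to glTexImage2D
    bool operator<(const TexShape& o) const {
        if (w != o.w) return w < o.w;
        if (h != o.h) return h < o.h;
        return fmt < o.fmt;
    }
};

static std::map<TexShape, std::vector<GLuint> > g_freeTextures;

GLuint acquireTexture(const TexShape& s, const void* pixels)
{
    std::vector<GLuint>& list = g_freeTextures[s];
    GLuint tex;
    if (!list.empty()) {
        // Recycle: storage already exists, only replace the contents.
        tex = list.back();
        list.pop_back();
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, s.w, s.h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    } else {
        // No match: allocate fresh storage (filtering setup omitted).
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, s.fmt, s.w, s.h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
    return tex;
}

void releaseTexture(const TexShape& s, GLuint tex)
{
    // Back to the free list; the storage itself is never released.
    g_freeTextures[s].push_back(tex);
}

And that's before repeating the exercise for every VBO size...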

Numsgil    501
Completely unloading one module before loading the next is the method we're using at work. In our experience, unloading a level/module/whatever is nearly instantaneous, certainly nowhere near as long as loading one. This makes sense when you consider that the driver doesn't have to upload any data to the card when it unloads a texture; it just deletes a handle somewhere.

Prune    224
Sure, the unloading is near-instantaneous, but the loading isn't. That's the problem. Consider the timeline:

<--0.5sFadeIn----1minModule1Draws-----0.5sFadeOut--><--0.5sSplash--><--0.5sFadeIn----1minModule2Draws-----0.5sFadeOut-->... [to module n then repeat]

After one module finishes running, I can't go much over half a second before the second starts drawing, so completely unloading the first before loading the second doesn't seem viable.

gjaegy    126
What you could do is load the resources you need (textures, VBOs, etc.) into system memory buffers only (you can do this asynchronously, in a loading thread).

Once all the resources have been loaded into RAM, unload your previous module (including its GPU objects), then simply create your GPU buffers and fill them from the RAM buffers. That should be pretty fast; in fact it should be the fastest solution, apart from sharing resources between modules, i.e. re-using texture objects/VBOs, but that is nearly impossible since they would all have to be the same size...
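Roughly like this (a sketch; the file I/O and thread plumbing are omitted and the names are invented):

#include <GL/glew.h>
#include <cstddef>
#include <vector>

// Phase 1 (loading thread, runs while the current module still draws):
// decode everything into plain RAM. No GL calls here, so it cannot stall
// the render thread.
struct StagedTexture {
    std::vector<unsigned char> pixels;  // decoded RGBA8
    GLsizei w, h;
    GLuint  tex;  // filled in during phase 2
};

// Phase 2 (render thread, during the half-second splash, after the
// previous module's GL objects have been deleted): pure uploads, which
// are fast compared to disk access plus decoding.
void uploadStaged(std::vector<StagedTexture>& staged)
{
    for (std::size_t i = 0; i < staged.size(); ++i) {
        StagedTexture& s = staged[i];
        glGenTextures(1, &s.tex);
        glBindTexture(GL_TEXTURE_2D, s.tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, s.w, s.h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, &s.pixels[0]);
    }
}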

Prune    224
gjaegy, that's the approach I intend to start with. I thought about virtual texturing, but the complexity and shader overhead are a turnoff...

Quote:
Original post by V-man
module?

Mostly independent media content packages: games, videos, etc. Since the functionality of each is different (game logic, animation, simulation, rendering and GI algorithms, etc.), they're individual dynamically linked (at runtime) libraries with associated data, and in general only a small subset will fit in memory at a time. The system is non-deterministic due to user interaction, but it needs to run without supervision; users aren't intended to have any supervisory role whatsoever and can only interact with the content, not the system it runs on. That's about as descriptive as I can be without violating my NDA LOL

Prune    224
Some modules will be small, and all of those could be preloaded. But some will use around a GB of RAM (I already have one that does).
There will be around a dozen modules running one after another, and then the cycle repeats indefinitely.
I'm ignoring for now the problem of system memory fragmentation when loading from the HD (very likely, given the memory limit of a 32-bit system), but I might write a custom new/delete for that if it turns out to be a problem.
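If it comes to that, I'd probably start from a fixed-size block pool along these lines (sketch only; alignment and growth are glossed over, and the sizes would be placeholders):

#include <cstddef>
#include <vector>

// One big up-front allocation, carved into equal blocks, so module churn
// never touches the system heap after startup.
class BlockPool {
public:
    BlockPool(std::size_t blockSize, std::size_t blockCount)
        : slab_(blockSize * blockCount)
    {
        for (std::size_t i = 0; i < blockCount; ++i)
            free_.push_back(&slab_[i * blockSize]);
    }
    void* alloc()
    {
        if (free_.empty()) return 0;  // pool exhausted; caller decides
        void* p = free_.back();
        free_.pop_back();
        return p;
    }
    void release(void* p) { free_.push_back(p); }
private:
    std::vector<char>  slab_;  // the single preallocated slab
    std::vector<void*> free_;
};

A class-level operator new/delete could then forward to a pool of the matching block size.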

swiftcoder    18432
Quote:
Original post by Prune
Some modules will be small, and all of those could be preloaded. But some will use around a GB of RAM (I already have one that does).
And how much data will be shared between modules?

It sounds like you should be able to load all the new assets into RAM, then flush all of the data out of the GPU and submit the new data. That way you can preload from the HD, and you shouldn't have any fragmentation on the GPU.

