Prune

OpenGL Video RAM fragmentation and resource management for continuously running system?



Basically I don't know how to handle this situation: consider a continuously running system which loads and unloads modules on a schedule very frequently (on average every minute), but the OpenGL rendering never leaves full-screen; the module switch is completely seamless (maybe a half-second fade between one module ceasing to draw on the OpenGL render thread and the next starting). Since I can't preload all the data simultaneously at system startup, things like textures and VBOs will have to be loaded dynamically. How do I keep video memory fragmentation from eventually killing performance (and possibly stability)? The system needs to be very stable, and I can't restart it more often than once every 24 hours (preferably once a week).

First load in all the data that won't be changing between module loads/unloads, if any. After that, load in a module's texture data. When the module is done, unload it completely before loading the next module data. If you do it this way, there won't be any fragmentation.
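If it helps, the ordering constraint in that suggestion can be sketched as plain bookkeeping. The `Vram` struct and names below are hypothetical (strings stand in for GPU objects); the point is just that shared data sits at the bottom, each module allocates strictly after it, and the whole module tail is freed before anything new is allocated, so no holes are ever left behind:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch: allocation order stands in for address order in VRAM.
struct Vram {
    std::vector<std::string> live;   // all currently allocated objects
    size_t shared_count = 0;         // resources loaded once, never freed

    void load_shared(const std::string& name) {
        assert(live.size() == shared_count);  // only before any module loads
        live.push_back(name);
        ++shared_count;
    }
    void load_module(const std::vector<std::string>& resources) {
        assert(live.size() == shared_count);  // previous module fully unloaded
        for (const auto& r : resources) live.push_back(r);
    }
    void unload_module() {
        live.resize(shared_count);  // frees a contiguous tail: no holes remain
    }
};
```

Each new module then reuses exactly the region the previous one vacated, which is why the pattern avoids fragmentation.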

Quote:
Original post by Promit
Is this actually an observed problem, or just paranoia?

My boss wants a system that is near-unconditionally stable. I'd better be paranoid if I want to keep my job.

Quote:
Original post by Numsgil
First load in all the data that won't be changing between module loads/unloads, if any. After that, load in a module's texture data. When the module is done, unload it completely before loading the next module data. If you do it this way, there won't be any fragmentation.

Thanks for the suggestion. I'm wondering what the best way is to load things so that I can do a quick switchover. I'd have to preload the next module's data before the first one finishes, but I'm wondering whether a whole bunch of glTexImage, glBufferData, and glMapBuffer calls would wreck the framerate of whatever's currently running. I could have the next module queue messages to the render thread to do these calls spread out over time, but that doesn't seem like the simplest solution.
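For what it's worth, that "queue messages to the render thread, spread out over time" idea could look roughly like the time-budgeted queue below. `UploadQueue` and the budget value are made-up names, not from any post here; each queued job would wrap one glTexImage2D or glBufferData call, so no single frame stalls on a whole module's worth of uploads:

```cpp
#include <chrono>
#include <deque>
#include <functional>

// Hypothetical sketch of time-sliced GPU uploads.
class UploadQueue {
public:
    void enqueue(std::function<void()> job) { jobs_.push_back(std::move(job)); }

    // Call once per frame from the render thread; drains jobs until the
    // per-frame budget is used up or the queue is empty.
    void drain(std::chrono::microseconds budget) {
        auto start = std::chrono::steady_clock::now();
        while (!jobs_.empty() &&
               std::chrono::steady_clock::now() - start < budget) {
            jobs_.front()();  // e.g. one glTexImage2D(...) call per job
            jobs_.pop_front();
        }
    }
    bool empty() const { return jobs_.empty(); }

private:
    std::deque<std::function<void()>> jobs_;
};
```

The budget would be tuned against the frame time (e.g. a millisecond or two per frame at 60 Hz), and the loading thread only ever touches the queue, never GL.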

In that case, you're going to have to do heavy stress testing anyway. Why not write a simpler first pass implementation, and then refine based on what you see?

I guess it's just that, in my experience, major refinements end up being rewrites of half the code, heh...

Numsgil's suggestion requires that I unload a module completely before loading the next one. That's a problem for going seamlessly from one to the other; at most a half-second black screen or logo between them is acceptable. I'd really need to start loading the next one before unloading the current one.

Now imagine doing such a switch roughly every minute for 24 hours. How likely is it that video memory gets fragmented and the system ends up constantly swapping to main memory?
At least with system RAM there is a clear solution: preallocate memory pools. But there's no void* analogue for VRAM, so memory management there would be a pain. I'd have to preallocate 'free' lists for every distinct object type, e.g. textures with x channels and y resolution (created with glTexImage2D), check the list for a matching entry and fill it with glTexSubImage2D when loading an actual texture, then send it back to the free list (example at http://www.codeguru.com/cpp/g-m/opengl/texturemapping/article.php/c5573/ ), and then do something similar for VBOs, and so on. It would be horrible...
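That free-list scheme isn't quite as bad as it sounds when sketched out. Below, the pool is keyed on (width, height, channels), with a fake integer handle standing in for a real GL texture name; a real version would call glGenTextures/glTexImage2D on a cache miss and glTexSubImage2D when recycling an existing texture. All names are hypothetical:

```cpp
#include <map>
#include <tuple>
#include <vector>

using TexHandle = int;                         // stand-in for a GLuint name
using TexFormat = std::tuple<int, int, int>;   // width, height, channels

// Hypothetical sketch: recycle texture storage by exact format match.
class TexturePool {
public:
    TexHandle acquire(int w, int h, int channels) {
        auto& bucket = free_[{w, h, channels}];
        if (!bucket.empty()) {     // reuse existing storage: glTexSubImage2D path
            TexHandle t = bucket.back();
            bucket.pop_back();
            return t;
        }
        return next_++;            // fresh allocation: glGenTextures + glTexImage2D
    }
    void release(int w, int h, int channels, TexHandle t) {
        free_[{w, h, channels}].push_back(t);  // storage stays allocated on the GPU
    }

private:
    std::map<TexFormat, std::vector<TexHandle>> free_;
    TexHandle next_ = 1;
};
```

The catch the post already hints at: reuse only happens when formats match exactly, so it pays off most when modules standardize on a few texture sizes.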

The completely-unload-one-before-loading-the-next approach is the method we're using at work. In our experience, unloading a level/module/whatever should be nearly instantaneous, certainly nowhere near as long as loading one. That makes sense when you consider that the driver doesn't have to upload any data to the card when it unloads a texture; it just deletes a handle somewhere.

Sure, the unloading is near-instantaneous, but the loading isn't. That's the problem. Consider the timeline:

<-- 0.5s fade in --><-- 1 min: Module 1 draws --><-- 0.5s fade out --><-- 0.5s splash --><-- 0.5s fade in --><-- 1 min: Module 2 draws --><-- 0.5s fade out --> ... [continue to module n, then repeat]

After one module finishes running, I can't go much over half a second before the next one starts drawing. So completely unloading the first before loading the second doesn't seem viable.

What you could do is load the resources you need (textures, VBOs, etc.) into system-memory buffers only (you can do this asynchronously, in a loading thread).

Once all the resources have been loaded into RAM, unload your previous module (including its GPU objects), then simply create your GPU buffers and fill them from your RAM buffers. That should be pretty fast; at least it should be the fastest solution (apart from sharing resources between modules, i.e. reusing texture objects/VBOs, but that's nearly impossible since they'd all have to be the same size...).
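A rough sketch of that two-phase approach, with all names hypothetical and the GL calls stubbed out: phase 1 runs on a loader thread and only touches system memory; phase 2 runs on the render thread after the old module's GPU objects are deleted, and is where the glGenTextures/glTexImage2D/glBufferData calls would actually go:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical staging structure: decoded bytes sitting in system RAM.
struct StagedResource {
    std::vector<std::uint8_t> bytes;   // decoded pixels / vertex data
};

struct NextModule {
    std::vector<StagedResource> staged;
    std::atomic<bool> ready{false};
};

// Phase 1 (loader thread): decode from disk into RAM only, no GL calls.
void loader_thread(NextModule& next) {
    next.staged.push_back({std::vector<std::uint8_t>(1024, 0xAB)});  // stub data
    next.ready.store(true, std::memory_order_release);
}

// Phase 2 (render thread): previous module's GPU objects are already gone;
// uploading from RAM is the only remaining cost. Returns the upload count.
int commit_on_render_thread(NextModule& next) {
    int uploads = 0;
    for (const auto& r : next.staged) {
        (void)r.bytes;   // real code: glBufferData / glTexImage2D here
        ++uploads;
    }
    return uploads;
}
```

Since phase 2 uploads from RAM with no disk I/O or decoding in the way, it's the part most likely to fit inside the half-second splash window.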
