OpenGL Cleanup at end of program

Does one need to clean up OpenGL resources when a program finishes?

I can't think of any not-annoying way to clean up global resources in an SFML project, since the window (and thus context) gets destroyed before my resources.

Do they leak VRAM, or does the context take them with it?

Is there any decent design to delete global resources before a program ends (and before the window is closed)?


Not 'cleaning up' after oneself is simply bad practice. Destructors are a very good way of dealing with such issues.

Personally, as a matter of course, for every byte allocated I make sure there is an appropriate 'clean up' routine, without exception. It's almost pathological: every time I new or malloc, I write a matching delete/free. Edited by mark ds
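The destructor approach above can be sketched as a small RAII wrapper. This is a sketch only: it assumes a valid OpenGL context is current for the object's entire lifetime, and `GlTexture` is an illustrative name, not a type from any real library.

```cpp
#include <GL/gl.h>

// Minimal RAII wrapper: the texture name is created in the constructor and
// deleted in the destructor, so every allocation has a matching release.
class GlTexture {
public:
    GlTexture()  { glGenTextures(1, &id_); }
    ~GlTexture() { glDeleteTextures(1, &id_); }

    // Non-copyable: two copies would delete the same GL name twice.
    GlTexture(const GlTexture&) = delete;
    GlTexture& operator=(const GlTexture&) = delete;

    GLuint id() const { return id_; }

private:
    GLuint id_ = 0;
};
```

As long as every `GlTexture` is destroyed before the context, nothing is leaked and nothing is released twice.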

Like I wrote above, my resources get deleted (as in, the destructor is called) after the window is closed, so there is no context to delete them from.

Guess I'll just use heap memory (new) and delete them manually before the program ends.

If you get to the point that the resources are released at the wrong time, then it's a question of design and not so much about how to nicely release them at the correct point.

For example, if you're using the destructor to automatically release the resource but the resources are released too late, then the scope of the object owning the resources is simply too wide. You say that your resources are global, and that is probably the reason: they are global and therefore have the widest possible scope. If you define your resources in a narrower scope than the window, then you don't have this problem anymore (in reality it may not be as easy as just sticking the objects in a tighter scope; you need to adapt the program to not use global resources anymore). Edited by Brother Bob

The context will take everything with it when it's deleted. If you want to clean up before destroying the context then capture the close-window message and do your cleanup before actually destroying the window. I don't know how to do that in SFML but I'm sure there's some kind of event callback or something that you can register for to do things when the user tries to exit.
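In SFML 2 this is straightforward, because window events are polled rather than delivered through callbacks. A sketch of the pattern, where `Resources` is a hypothetical stand-in for whatever owns your GL objects:

```cpp
#include <SFML/Window.hpp>
#include <memory>

// Hypothetical holder for GL textures, buffers, shaders, etc.
struct Resources { /* ... */ };

int main() {
    sf::Window window(sf::VideoMode(800, 600), "demo");
    auto resources = std::make_unique<Resources>();

    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            if (event.type == sf::Event::Closed) {
                resources.reset(); // delete GL resources while the context is alive
                window.close();    // now the context can go with the window
            }
        }
        // ... render ...
        window.display();
    }
    return 0;
}
```

The key point is simply ordering: react to `sf::Event::Closed` yourself, release first, close second.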

It's important to distinguish between OpenGL resources and other resources. OpenGL resources get freed automatically when your drawing context is deleted, that is, when you call Window::close(). You only need to delete them manually if you want to unload resources at runtime and load something else, like streaming textures. If you have time for it, though, it is advisable to do the manual deletion anyway, just for the sake of good practice.
Other resources, such as CPU-side memory, are usually freed by your OS upon exit, but unusual OSes may not do this. You still want to delete this stuff yourself to make sure you're not leaking memory.
Edit: the non-annoying way would be to encapsulate your objects in classes and give each class an on_delete function. Then, when you react to sf::Event::Closed, call the on_delete function of each object, and at the end call window.close(). You may want to employ some kind of global class that holds all these objects, so that you can access them anywhere in your code. By making the global class lightweight and the actual objects heavyweight, you can easily manage memory later (i.e. the global object lives until your app is closed; everything else dies as soon as it is deleted or goes out of scope). Edited by Yours3!f

This is an issue of debate and I usually stay out of these, but in this case I have a strong opinion because I know that there is actually one side that is correct.

So firstly all your classes should know to delete whatever they allocate. Why would there be an exception to the rule?
If your texture class creates an OpenGL texture ID, it should also delete it. Simple logic. This should happen in the destructor, though it could live in a general-purpose Reset() function that the destructor calls, so it still runs at destruction but can also serve other points of release.
Should your texture class destructor have some knowledge of why it is being destructed?
Obviously no. Not only does that never make sense, it just takes more code.

From there we can extrapolate.
Should a certain array of objects (etc.) be deleted when you shut down the game?
Well, according to #1, that doesn’t make sense.
Deleting all of your global objects (or whatever) cascades into deleting all of the OpenGL resources you have allocated, and some people seem to think that deleting these objects is not necessary at shut-down.
It is basically a fallacy in every sense of the word.
Yes, the memory will not be leaked, because of the OS, but by the logic of #1 it should never have ended up in the hands of the OS. The basic principle behind #1 is that there are no special cases: what is created is deleted. Saying you will not delete something, OpenGL or otherwise, just because the game is shutting down, is a raw violation of pure and simple logic.

But there is more to it.
If your game is designed such that shutting down is not a special case, it can be assumed that anything reported as a leak is not just a shut-down leak but a run-time leak as well.
In other words, if shutting down produces a spam of error messages related to leaks, you are likely to just ignore them as just shut-down leaks even though some of them really are run-time leaks.

There is basically no excuse, for any reason, no exceptions, for leaking any form of memory.
Not only is it bad practice, it makes no sense whatsoever, and it ultimately hinders your ability to track down actual run-time leaks.

Asking this question is basically looking for an excuse to be lazy.
Products made by lazy people are garbage. Do you want to be lazy just because you *can*? That is the attitude of the "just let the memory go" advocates.

L. Spiro
