Brother Bob

Posted 09 July 2013 - 03:34 AM

Regarding glGetError: each type of error has an associated flag that is set when the error occurs, and cleared and reported by the call to glGetError. That means it is enough to call it a couple of times: if GL_INVALID_OPERATION comes back twice in a row without an intermediate GL_NO_ERROR, there is no current context, because with a valid context the first call would have cleared the flag.
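For illustration, here is a minimal sketch of that heuristic. It assumes an implementation that keeps returning GL_INVALID_OPERATION when glGetError is called without a current context, which, as noted below, is not something the specification guarantees:

```c
#include <GL/gl.h>
#include <stdbool.h>

/* Heuristic only: calling glGetError without a current context is undefined
 * behavior, so this merely detects the common case where the driver keeps
 * returning GL_INVALID_OPERATION. */
static bool probably_no_context(void)
{
    GLenum first  = glGetError();
    GLenum second = glGetError();

    /* With a valid context the first call clears the error flag, so the
     * second call would return GL_NO_ERROR. Two identical errors in a row
     * suggest there is no context at all. */
    return first == GL_INVALID_OPERATION && second == GL_INVALID_OPERATION;
}
```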

 

But I also agree that it is not an elegant solution. The best choice is, like it or not, to use the platform API you're running on and ask it for the current context directly. Is it really that bad to wrap 2-3 lines of code in an #if per platform, and how many platforms with distinct OpenGL layers are you really targeting? A sketch of that approach follows below.
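As a rough sketch of the #if-per-platform approach, using WGL on Windows, CGL on macOS, and GLX elsewhere (an EGL-based target would use eglGetCurrentContext instead):

```c
#include <stdbool.h>

#if defined(_WIN32)
  #include <windows.h>        /* wglGetCurrentContext */
#elif defined(__APPLE__)
  #include <OpenGL/OpenGL.h>  /* CGLGetCurrentContext */
#else
  #include <GL/glx.h>         /* glXGetCurrentContext */
#endif

/* Ask the platform layer whether a GL context is current on this thread.
 * Unlike probing GL itself, this is defined behavior. */
static bool have_current_context(void)
{
#if defined(_WIN32)
    return wglGetCurrentContext() != NULL;
#elif defined(__APPLE__)
    return CGLGetCurrentContext() != NULL;
#else
    return glXGetCurrentContext() != NULL;
#endif
}
```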

 

edit: And if you insist on using OpenGL to determine whether it is itself initialized, you should be aware of why that is a bad idea and why you should stick to the API that is actually responsible for initializing OpenGL. From the OpenGL specification:


Issuing GL commands when the program is not connected to a context results in undefined behavior.

Thus, using OpenGL to determine its own state when no context is current is undefined. It is then up to you to decide whether relying on undefined behavior is better than relying on platform-dependent, but defined, behavior.

