

Ubik

Member Since 07 Nov 2002
Offline Last Active Sep 21 2014 08:59 AM

Topics I've Started

Cases for multithreading OpenGL code?

15 May 2014 - 03:15 PM

I have wanted to support multiple contexts used in separate threads, with shared resources, in this convenience GL wrapper I've been fiddling with. My goal hasn't been to expose everything OpenGL can do, but rather to expose things in a nice way; still, multi-context support has seemed like a good thing to have, since it aligns the wrapper with a pretty big aspect of the underlying system. However, multithreaded stuff is hard, so I've finally started to question whether supporting multiple contexts with resource sharing is even worth it.

 

If the intended use is PC gaming - a single simple window and so on (a single-person project too, to put things to scale, and currently targeting version 3.3 if that has any relevance) - what reasons would there be to take the harder route? My understanding is that the benefits might actually be pretty limited, but my knowledge of the various implementations and their capabilities is definitely limited as well.
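To make it concrete, this is the kind of setup I mean - sketched with GLFW here, though the library choice is incidental and the names below are just illustrative:

#include <GLFW/glfw3.h>

// Main window and context, targeting 3.3 as mentioned above.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
GLFWwindow* mainWindow = glfwCreateWindow(1280, 720, "Game", nullptr, nullptr);

// A hidden window whose context shares objects (buffers, textures, shaders)
// with the main one, meant to be made current on e.g. a resource loader thread.
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
GLFWwindow* loaderWindow = glfwCreateWindow(1, 1, "loader", nullptr, mainWindow); // last parameter = context to share with

// Main thread:   glfwMakeContextCurrent(mainWindow);
// Loader thread: glfwMakeContextCurrent(loaderWindow);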


Relying on thread_local variables bad?

29 March 2014 - 04:46 PM

I have my personal hobby probl^H^H^H^H^Hproject of making a convenience wrapper for OpenGL in C++. I'm posting in this subforum, however, because this isn't all that much about OpenGL itself.
 
First, usually OpenGL code looks a bit like this:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, dataSize, pointerToData, GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, anotherBufferId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, moreDataSize, pointerToMoreData, GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferId);
glDrawElements(/* mode, index count, index type and offset here */); // Drawing uses the last bound object

My thought was to make that look more like this:
buffer.data(pointerToData);
buffer.use();
anotherBuffer.data(pointerToMoreData);
context.drawElements(/* glDraw* parameters here */); // Draws using buffer, not anotherBuffer
The actual OpenGL bind commands have been moved behind the scenes; instead there is a use() method that marks an object to be used when drawing, and glBind* gets called only when necessary. This requires some state tracking of my own: what is "bound for drawing" versus what is actually bound at the moment. The OpenGL context that the gl* functions operate on is very much a thread-local concept itself, so it would make sense to have a state tracker object hidden from the library user's perspective, and using thread_local to store a pointer to my own state tracker seems like the obvious choice.

The thread local pointer to the tracker would have to be used pretty much once for every OpenGL call that manipulates GL state. Does this seem like a bad idea to you?
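To make that concrete, here's a minimal sketch of what the hidden tracker could look like - all the names are hypothetical, and a real version would track every bind target rather than just one:

#include <glad/glad.h> // or any loader header providing GLuint and glBindBuffer

struct StateTracker {
    GLuint actuallyBound = 0;   // what glBindBuffer was last called with
    GLuint boundForDrawing = 0; // what use() has marked for the next draw

    void bindIfNeeded(GLuint id) {
        if (actuallyBound != id) { // skip redundant glBind* calls
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, id);
            actuallyBound = id;
        }
    }
};

// One tracker pointer per thread, mirroring the thread-locality of GL contexts.
thread_local StateTracker* currentTracker = nullptr;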

---------------------

After some rubber ducking (partly while trying to figure out how to lay things out in this post), I've started to think that it's after all better to make the user call bind explicitly, and then just validate the calls against the thread-local state. The validation could be disabled in release mode, as the program doesn't actually rely on it - well, except maybe for skipping unnecessary bind calls. (Sort of annoyingly, shaders in OpenGL work like in the use() example above, though.)
buffer.bind();
anotherBuffer.data(pointerToData); // Throws or does some other terrible thing when validation is on!
I wish there were a compile-time way to enforce the ordering of calls that isn't terribly kludgy, but I guess runtime validation will have to do.
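A sketch of what that validation could look like, reusing the hypothetical tracker from above (again, every name here is made up, and only a single bind target is checked):

#include <stdexcept>

void Buffer::bind() {
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, id);
    currentTracker->actuallyBound = id;
}

void Buffer::data(const void* ptr, GLsizeiptr size) {
#ifndef NDEBUG
    // Debug-only validation; the release build doesn't rely on it.
    if (currentTracker->actuallyBound != id)
        throw std::logic_error("Buffer::data() called while another buffer is bound");
#endif
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, size, ptr, GL_STATIC_DRAW);
}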

The Object initialization design issue thread has been going over at least tangentially related issues, so hopefully some of that interest and insight could leak into this thread too.

My thoughts on error handling, what are yours?

12 May 2013 - 09:03 AM

I've been thinking about error handling in OpenGL lately, as I'm slowly rewriting my hobby project, a C++ OpenGL wrapper. Error handling is definitely one of those less exciting things, but still worth talking about in my opinion. I don't have any specific question here, other than maybe the very generic "how to do error handling?" or "how do you do it?", so I would be happy to hear your thoughts and what approaches you have taken in your projects. Of course, comments on the text below are very welcome also.
 
--------------
 
So, here are my current thoughts on the subject. As I see it, these are the four overall options for how to do the error handling:
 
1. No error handling in the application. Possibly the option to use in release mode, but with external tools to intercept calls this could work during development too. I'm not knowledgeable about those tools at all, which probably explains why these options are centered not on what tool to use but on how to handle errors within the code.
 
2. The "old-style" glGetError after every call to OpenGL. Use of glGet* functions is apparently not very good idea performance-wise, so probably will not be used in release configuration. Here I should add that I don't like having builds that differ from each other much, but in this case it seems justified. Maybe this could be done behind an if instead of #ifdef?
 
3. Synchronous use of the newer debug callback functionality, when using a debug context (see http://www.opengl.org/registry/specs/ARB/debug_output.txt). I guess this roughly equates to the driver doing the glGetError internally, except that it can give much more in-depth error messages.
 
4. Asynchronous use of the debug callback functionality. Set up similarly to the previous option, but it has some implications, which is why I separated it into its own point. More about this below.
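For options 3 and 4 the setup phase boils down to registering the callback on a debug context. A minimal sketch against ARB_debug_output - assuming a loader header is already included, and with synchronous mode enabled for option 3 (omit that for option 4):

#include <cstdio>

void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                            GLenum severity, GLsizei length,
                            const GLchar* message, const void* userParam) {
    std::fprintf(stderr, "GL debug: %s\n", message);
}

// During initialization, with a debug context current:
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB); // option 3; leave disabled for option 4
glDebugMessageCallbackARB(debugCallback, nullptr);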
 
 
There's also the more concrete level, which could be split into a setup phase and then the handling of each actual OpenGL call. The setup phase is not very relevant to options 1 and 2, but options 3 and 4 obviously need the debug callback set up. Now, finally, the part that really makes me think - let's go through error handling options 2 to 4:
 
2. glGetError: Error handling can be done just by adding a check after every call. However, just outputting GL_INVALID_VALUE is hardly useful, so some context needs to be added. In C/C++ there are macros like __FILE__ and __LINE__ that can be used for this. Going this way ends up with code where every GL call is followed by something like CHECK_GL_ERROR();. There could sometimes be more than that (in some cases also throwing an exception, etc.), but mostly there's going to be just the macro.
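A minimal sketch of what that macro could expand to (the loop drains all queued errors, since more than one can be pending):

#include <cstdio>

#define CHECK_GL_ERROR()                                        \
    do {                                                        \
        GLenum checkGlErr;                                      \
        while ((checkGlErr = glGetError()) != GL_NO_ERROR)      \
            std::fprintf(stderr, "GL error 0x%04X at %s:%d\n",  \
                         checkGlErr, __FILE__, __LINE__);       \
    } while (0)

// Usage:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferId);
CHECK_GL_ERROR();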
 
3. Synchronous debug callback: This does not require any special code where the OpenGL calls are made (at least not when nothing needs to happen in case of an error), but the source context will be missing. I'm unsure whether the vendor-specific debug messages contain the function that was called - though that would make a lot of sense. Avoiding calling the same function from many places is not always possible anyway (see glEnable).
 
Anyway, to add context to the callback, I could store the __FILE__ and __LINE__ somewhere before making the call, and the callback could pull the information from there and add it to the output. So now the macro would be situated before the GL call. At this point I started to think about wrapping the call entirely into a macro, because then the same macro could expand to either option 2 or option 3. It could be done as something like CHECKED_GL_CALL(glBindBuffer(GL_ARRAY_BUFFER, bufferId)); and variadic macros, as far as I know, would make it possible to do something like CALL_GL(glBindBuffer, GL_ARRAY_BUFFER, bufferId);. For some reason the second option appeals to me a bit more, but the first one is probably easier to reason about when seeing the code for the first time.
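A sketch of the variadic form, with the context stashed in thread-local variables that a synchronous callback can read (the variable names are made up):

thread_local const char* g_lastGLCallFile = nullptr;
thread_local int g_lastGLCallLine = 0;

#define CALL_GL(func, ...)               \
    do {                                 \
        g_lastGLCallFile = __FILE__;     \
        g_lastGLCallLine = __LINE__;     \
        func(__VA_ARGS__);               \
    } while (0)

// Usage, as in the text above:
CALL_GL(glBindBuffer, GL_ARRAY_BUFFER, bufferId);
// The synchronous callback can then prepend g_lastGLCallFile and
// g_lastGLCallLine to the driver's message before printing it.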
 
4. Asynchronous debug callback: This is the preferred way to use the debug callback according to the debug output extension text. The downside is that storing the file and line information somewhere doesn't really work because of the asynchronicity. They'd be nice to have, but maybe that's just the price that has to be paid for the performance? All the macro trickery is therefore unnecessary, but I'm still wondering whether wrapping the calls in macros would be worth doing just to keep the other options available.

Yet another batching thread

24 August 2004 - 07:38 AM

I came up with an idea to use for batching, so, as briefly as I can write it: first you give all resource types a numerical weight, where more "important" resource types have larger values, e.g.

Vertex buffers: 1
Textures: 10
Shaders: 100

When a resource of some type is created, it is given a value that is simply the number of objects of that type at creation time (simply 1, 2, 3 etc. - damn my limited expression abilities). After loading, the values are scaled so that the last-created object's value equals the weight of its resource type; the highest scaled texture ID would thus be 10.

You have probably guessed that this approach assumes floating-point numbers, but I think that makes it more flexible, as you can insert new resources between the old ones without going over the highest value.

When a rendering call is finally made, it gets a value that is the sum of its resources' scaled IDs. The batching can now be done by sorting on this single value, which should be correct, or at least correct enough. I admit that I haven't done any deep analysis of the idea (I got it yesterday), but it seems to work. The troubles I can think of could be solved by making the differences between the resource weights larger.
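A sketch of the whole scheme in code, with the weights from above (everything else here is illustrative naming):

#include <algorithm>
#include <vector>

const float kVertexBufferWeight = 1.0f;
const float kTextureWeight      = 10.0f;
const float kShaderWeight       = 100.0f;

// After loading: creation indices 1..count are scaled so the last-created
// resource of a type gets exactly its type's weight.
float scaledId(int creationIndex, int count, float typeWeight) {
    return typeWeight * float(creationIndex) / float(count);
}

struct DrawCall {
    float sortKey; // sum of the scaled IDs of the resources this call uses
};

// Sorting on the single key groups calls that share the "heaviest"
// (highest-weighted) resources: shaders before textures before buffers.
void sortForBatching(std::vector<DrawCall>& calls) {
    std::sort(calls.begin(), calls.end(),
              [](const DrawCall& a, const DrawCall& b) { return a.sortKey < b.sortKey; });
}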

*sigh* A long post again - I hope that I didn't take away any (if any) of your willingness to reply. [I should put a smiley here]
