Multiple contexts vs single


I'm adding multi-window support to my game engine. All drawing will be done on the same render thread, so is there any reason why an OpenGL context per window would be a better idea than sharing a single context between all windows? The latter seems like the simplest solution, but I haven't found a concrete example of when one approach would be preferred over the other, so I don't have an answer.


You should use one shared context for multiple windows if your windows share objects (e.g. multiple views of the same scene), and use a per-window context if they are completely independent (e.g. one window with a 3D model viewer and another one showing video playback).

Or in other words: All the resources you load into GL (textures, vertex arrays, shaders, and whatnot) are bound to a context, so if you want to use them in more than one window, you have to share the context between them.
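To make that concrete, here is a minimal sketch (Win32/WGL, error handling omitted) of one context driving two windows. The window handles hwndA/hwndB and the loadSharedResources/drawScene helpers are hypothetical, both DCs are assumed to have been given the same pixel format with SetPixelFormat, and the usual headers (<windows.h>, <GL/gl.h>) are assumed to be included:

HDC dcA = GetDC(hwndA);
HDC dcB = GetDC(hwndB);

HGLRC ctx = wglCreateContext(dcA);   /* created against the first window's DC */

/* Textures, buffers and shaders loaded now live in 'ctx', not in a window,
   so both windows can draw with them. */
wglMakeCurrent(dcA, ctx);
loadSharedResources();               /* hypothetical helper */

/* Render both windows with the same context, one after the other. */
wglMakeCurrent(dcA, ctx);
drawScene(viewA);                    /* hypothetical helpers */
SwapBuffers(dcA);

wglMakeCurrent(dcB, ctx);
drawScene(viewB);
SwapBuffers(dcB);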

The only real advantages I see in using a separate context per window are a) vertical sync and b) not having to maintain the viewport (set it once and forget it, unless the window is resized). Resources for anything you draw must be loaded into the GL either way, whether into one context or several separate ones, and anything you draw must be processed somehow.

On the other hand, with a single context there is a chance that you can reuse some resources across windows (in a game, chances are you can reuse the bigger part), whereas with multiple contexts you must duplicate that data (or use shared contexts, but that adds noticeable overhead).

In theory, with vertical sync enabled, your thread is blocked within SwapBuffers. In practice, drivers will usually either block only at the next command that draws something after SwapBuffers, or they will even let you submit 2-3 frames ahead.

Now, drawing into several windows, say 4 or 5 of them, might result in you blocking after having drawn 2 or 3 windows, and only refreshing the others next frame (or some time later). If they had separate contexts, you could keep submitting draw commands for all of them, and hopefully (if you do not exceed the frame time) they would all update consistently.

But of course a 1-frame lag is probably not very noticeable at all, and you can likely get it done right with a single context too: submit the drawing commands for all windows first, and only swap each buffer after that. I have never tried this, so I can't say for sure, but it should work just fine.
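A rough sketch of that ordering with a single context (again WGL; 'dc', 'numWindows', 'ctx' and 'drawWindow' are hypothetical, and the usual headers are assumed):

/* Submit the draw commands for every window first... */
for (int i = 0; i < numWindows; ++i) {
    wglMakeCurrent(dc[i], ctx);
    drawWindow(i);               /* clear + draw calls, but no swap yet */
}
glFlush();                       /* make sure everything has been submitted */

/* ...then swap all the buffers; any vsync blocking happens here. */
for (int i = 0; i < numWindows; ++i) {
    SwapBuffers(dc[i]);
}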

So, all in all, the advantages of many contexts versus only one aren't that great; I'd stick with one.

Thanks guys. I was heading towards a single context anyway, since I want to display the same world in these windows and that avoids reloading data into multiple contexts, but I never thought about the vsync stuff; interesting! I wonder: if you call SwapBuffers after all windows have been drawn for that frame, and you are using displays with different refresh rates with vsync enabled, how long would the graphics driver block for? Either way, I'll stick to a single context and have some fun experimenting. Thanks! :)

Also, a quick question: on Windows I create the context when the first window is created, with wglCreateContext(HDC), so that I have a device context to pass in. But when destroying the context I don't need a window to exist, right? I can destroy my windows and then, separately, call wglMakeCurrent(NULL, NULL) followed by wglDeleteContext(HGLRC)? So in theory I could open a window (creating a context), delete the window so the context exists with no windows open, then create another window which can use the context again (as long as the device context is compatible)? Not that I'd want to; I just want to decouple my GL context deletion from window deletion.

[...] using displays with different refresh rates [...] delete window after creating context and use with another window

Behold, you found another case where different contexts may be an advantage, or even necessary. You can only (successfully) make a context current on an HDC that lives on the same device and has the same pixel format as the DC the context was created with (see the wglMakeCurrent documentation on MSDN for more info). This is not a problem for the theoretical case you described about destroying the original window.
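A hedged sketch of that reuse (hwndNew and existingCtx are placeholders; the new window must live on the same device, and its DC must be given the same pixel format that the original window's DC had):

/* Give the new window's DC the same pixel format as the original one. */
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;

HDC newDC = GetDC(hwndNew);
SetPixelFormat(newDC, ChoosePixelFormat(newDC, &pfd), &pfd);

wglMakeCurrent(newDC, existingCtx);   /* reuse the context created earlier */

/* Later, teardown does not need any window to still exist: */
wglMakeCurrent(NULL, NULL);
wglDeleteContext(existingCtx);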

However, in a multi-display setup, it is in principle possible that different displays are attached to different devices, so that wouldn't work by the wording of the documentation.

Now don't ask me what happens when you move a window from one screen to another... this works fine in my single-GPU-multiple-display setup here (obviously, same device), but who knows what happens if you have 2 GPUs on 2 displays. Still, I figure that moving a window to another desktop is a legitimate thing to do.

Whoa, difficult topic! ;-)

I talked to a PNY representative at the CeBIT trade fair once, asking the same question. He took the demo CD I had brought him, stuffed it into a computer with two displays attached to two Quadro cards, started our single-context program and moved it to the other display. It worked flawlessly.

That said, I really think that this works only - if it works at all - with cards from the same vendor.

[Edit: typos]

