Multiple GPU/Window framework design


Hello, I'm stuck with a design problem regarding an OpenGL rendering framework. There is unfortunately very little information about multi-context OpenGL for some reason.

Here is a short description of the setup: multiple NVIDIA GPUs and multiple screens (one viewer split up into several perspective viewports on each screen). There is one GL context for loading textures, buffers and shaders, shared with one GL context per GPU (created with NV GPU affinity, WGL_NV_gpu_affinity). Each GPU context renders in its own thread. Resources (buffers, textures) are created in a dedicated resource thread. Window/input message handling and initialization of "everything" happen in the main thread. So far so good; creating and sharing the contexts for each GPU is not much of a problem.

Here is the problem: let's say I have a model representing some entity in the "world". The model consists of texture(s) and render surfaces, and each render surface is a block of a vertex/index buffer. The engine knows how to render each surface by setting the correct state and shaders, so eventually, after culling, we have a list of render surfaces sorted by type/textures/etc., and GL commands are issued for each surface. Using the core profile (OpenGL 4 and up) I'm forced to use VAOs, and VAOs are container objects that are not shared between contexts. So when a model is loaded and its vertex/index buffers are created in the resource thread/context, the buffers are available in all the other GPU contexts, but for each newly loaded model (model surface) I still have to create a separate VAO per GPU. So basically my model must look something like this:

struct Surface {
    GLuint vertexBuffer;    // buffer objects are shared between the contexts
    GLuint indexBuffer;
    GLuint vao[NUM_GPUS];   // one VAO per GPU context... omg
};

struct Model {
    std::vector<Surface> surfaces;
};
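For what it's worth, here is a minimal sketch of the per-GPU VAO setup I mean, assuming a GL function loader (GLEW/glad or similar) is already initialized. The function name createSurfaceVAO and the vertex layout (position/normal/texcoord, 8 floats per vertex) are just placeholders for illustration, not actual engine code; the buffer names come from the shared resource context.

// Hypothetical sketch: called once per surface from each GPU's render thread,
// with that GPU's context current. The buffer names are valid in every shared
// context, but the VAO is container state and only exists in this context.
GLuint createSurfaceVAO(GLuint vertexBuffer, GLuint indexBuffer)
{
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glEnableVertexAttribArray(0);   // position (example layout)
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void*)0);
    glEnableVertexAttribArray(1);   // normal
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void*)(sizeof(float) * 3));
    glEnableVertexAttribArray(2);   // texcoord
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void*)(sizeof(float) * 6));

    // The element buffer binding is also stored in the VAO.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);

    glBindVertexArray(0);
    return vao;
}

// In the render thread for GPU gpuIndex, after the model finished loading:
//   surface.vao[gpuIndex] = createSurfaceVAO(surface.vertexBuffer, surface.indexBuffer);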

I might have got it all wrong here, but if not, it is terrible! On top of that, you can read that "shared" GPU data is not actually shared between GPUs but copied to each one, so shared contexts are a rather questionable feature in this setup.

To continue: each GPU GL context has a frame context structure holding the current state (view/projection matrices) and FBOs for shadow/HDR/etc. rendering. I realized that sharing FBO textures and renderbuffers across the GPU render threads is not a good idea, since you never know whether an FBO texture is currently being read or written. For each GPU, views/windows are rendered sequentially (only switching DCs). A lot of examples recommend creating a context per window, but I don't think they grasp all the problems involved in that.
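To make the per-GPU frame context idea concrete, here is a rough sketch of what I have in mind. The struct members and renderFrameForGpu are invented for illustration; only wglMakeCurrent/SwapBuffers are real API, and this assumes all windows driven by one GPU share a compatible pixel format.

#include <windows.h>
#include <GL/gl.h>
#include <vector>

// Hypothetical per-GPU frame data: everything here is private to one GPU's
// render thread, including its FBOs, so there are no cross-thread read/write hazards.
struct GpuFrameContext {
    HGLRC glContext;             // affinity context for this GPU
    std::vector<HDC> windowDCs;  // the windows/views this GPU drives
    GLuint shadowFBO;
    GLuint hdrFBO;
    // view/projection matrices for the current frame, etc.
};

// Render thread for one GPU: the same GL context is made current against each
// window's DC in turn, so the FBOs and VAOs stay valid for every window.
void renderFrameForGpu(GpuFrameContext& gpu)
{
    for (HDC dc : gpu.windowDCs) {
        wglMakeCurrent(dc, gpu.glContext);
        // ... render the viewports belonging to this window ...
        SwapBuffers(dc);
    }
}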

One possible solution that would simplify the design a little is to start a separate process for each GPU (several application instances). Other problems arise, though: you have to send the view rotation offsets and position to each process (if the viewer camera is not fixed), some kind of inter-process frame sync might be needed, and any procedural content also has to be kept in sync (that one can probably be solved with a global random seed).
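If I went the multi-process route, the per-frame data that has to be pushed to each GPU process would be fairly small; something like the packet below (field names entirely made up), broadcast from a master process over shared memory or a local socket every frame.

#include <cstdint>

// Hypothetical per-frame sync packet for the one-process-per-GPU variant.
struct FrameSyncPacket {
    uint32_t frameNumber;      // for frame/swap lockstep between processes
    float    viewPosition[3];  // shared viewer position
    float    viewRotation[4];  // shared viewer orientation (quaternion)
    uint64_t randomSeed;       // so procedural content stays identical everywhere
};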

Any suggestions on this would be very much appreciated. I would not like to start implementing anything until these problems are solved.

Crush your enemies, see them driven before you, and hear the lamentations of their women!

