Members - Reputation: 123
Posted 06 June 2012 - 04:23 AM
Also, calling one of your gpus a 'slave' is not a nice thing to do, it makes gpu sad.
Crossbones+ - Reputation: 4912
Posted 06 June 2012 - 05:33 AM
That said, there are vendor-specific extensions to (kind of) circumvent the abstraction, such as WGL_NV_gpu_affinity or AMD_gpu_association, which aim to let you associate a context with a GPU (much the same way CPU affinity works). I cannot tell how well they work in the presence of WDDM. Also note that most IHVs ship some kind of physics middleware (e.g. PhysX) and support for GPU computing (OpenCL, CUDA), both of which compete with graphics for resources. Some let the user configure which GPU to use for what, some may not, but in any case you don't know. If for no other reason, I would not touch affinity with a 3 meter pole because of this.
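For reference, a minimal sketch of what using the NVIDIA affinity extension looks like (assuming WGL_NV_gpu_affinity is actually exposed by the driver, which in practice tends to be Quadro-only; loading of the extension function pointers via wglGetProcAddress and pixel format setup are omitted):

```c
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* for HGPUNV and the PFNWGL...NVPROC typedefs */

/* Sketch: create a context bound to the second GPU via WGL_NV_gpu_affinity.
   The function pointers are assumed to have been fetched with
   wglGetProcAddress beforehand. */
HGLRC createAffinityContext(PFNWGLENUMGPUSNVPROC wglEnumGpusNV,
                            PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV)
{
    HGPUNV gpu = NULL;
    if (!wglEnumGpusNV(1, &gpu))           /* index 1 = second GPU */
        return NULL;

    HGPUNV gpuList[2] = { gpu, NULL };     /* list must be NULL-terminated */
    HDC affinityDC = wglCreateAffinityDCNV(gpuList);
    if (!affinityDC)
        return NULL;

    /* ChoosePixelFormat/SetPixelFormat on the affinity DC omitted; then: */
    return wglCreateContext(affinityDC);
}
```

All GL commands issued on a context created this way are restricted to the GPUs in the affinity mask, which is exactly why the sharing restriction quoted below bites.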
Lastly, it is noteworthy that the documentation explicitly says:
Sharing OpenGL objects between affinity-contexts, by calling wglShareLists, will only succeed if the contexts have identical affinity masks.
Thus, there is no notion of "magical sharing": if you don't rely on the OS/driver/API abstraction to move the texture where it's needed without you knowing, you need to do an explicit roundtrip as Shuun said.
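The explicit roundtrip goes through client memory, along these lines (a sketch; the DCs, contexts, FBO/texture handles and the `pixels` buffer are placeholders, and in a real implementation you would want a PBO so the readback can overlap with other work):

```c
/* Context on the shadow GPU: read the rendered shadow map back to the CPU */
wglMakeCurrent(dcShadow, ctxShadow);
glBindFramebuffer(GL_READ_FRAMEBUFFER, shadowFbo);
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);

/* Context on the main GPU: upload into the texture sampled during shading */
wglMakeCurrent(dcMain, ctxMain);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
```

Note that this crosses the PCIe bus twice (device-to-host, then host-to-device), which is the cost you pay for not letting the driver handle placement.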
Edited by samoth, 06 June 2012 - 05:35 AM.
Members - Reputation: 116
Posted 06 June 2012 - 11:21 AM
The point is that there are currently lots of systems with a dual-GPU configuration, thanks to Intel/AMD and their integrated GPU cores in processors. I want to exploit a dual-GPU config for shadow mapping, where one GPU would render the scene itself and the other would be responsible for shadows, i.e. creating the shadow map and transferring it to the primary GPU. I will try to look into it a bit more...
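One way to keep both GPUs busy despite the transfer cost is to pipeline with one frame of latency: the main GPU renders frame N with the shadow map produced for frame N-1 while the shadow GPU already works on frame N. A rough sketch of the frame loop (all helper functions and handles here are hypothetical placeholders, not a real API):

```c
/* Hypothetical frame loop: double-buffered shadow-map transfer with
   one frame of latency between the shadow GPU and the main GPU. */
for (;;) {
    /* Shadow GPU: render the shadow map for the current frame and start
       reading it back (ideally asynchronously via a PBO). */
    wglMakeCurrent(dcShadow, ctxShadow);
    renderShadowMap(frameCurrent);
    readBackShadowMap(pixelsWrite);

    /* Main GPU: meanwhile render the previous frame using the shadow map
       that finished transferring last iteration. */
    wglMakeCurrent(dcMain, ctxMain);
    uploadShadowMap(pixelsRead);
    renderScene(framePrevious);
    SwapBuffers(dcMain);

    /* Swap the CPU-side staging buffers for the next iteration. */
    swapBuffers(&pixelsRead, &pixelsWrite);
}
```

Whether this wins anything depends heavily on how fast the integrated GPU is relative to the PCIe/shared-memory copy; it is worth profiling before committing to the extra complexity.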