Is there any way to transfer textures between two GPUs in DX or OpenGL? Let's say I render something off-screen on a slave GPU and use the result on the primary GPU.
Thank You!
Texture transfer between two GPUs
I doubt there is any way of doing that other than transferring the texture data from one GPU to CPU memory and then creating a texture on the second GPU from that memory.
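To illustrate, here is a minimal sketch of that roundtrip in OpenGL. It assumes two contexts (one per GPU) have already been created; the device-context/context handle names are illustrative, and pixel-format and error handling are omitted:

```cpp
// Sketch of the GPU -> CPU -> GPU roundtrip. Assumes ctxA/ctxB are
// OpenGL contexts created on the two GPUs (hypothetical handles).
#include <windows.h>
#include <GL/gl.h>
#include <vector>

GLuint copyTextureAcrossContexts(HDC dcA, HGLRC ctxA,
                                 HDC dcB, HGLRC ctxB,
                                 GLuint srcTex, int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 4);

    // 1) Read the texture back into system memory on the first GPU.
    wglMakeCurrent(dcA, ctxA);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // 2) Upload the same pixels as a new texture on the second GPU.
    wglMakeCurrent(dcB, ctxB);
    GLuint dstTex = 0;
    glGenTextures(1, &dstTex);
    glBindTexture(GL_TEXTURE_2D, dstTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    return dstTex;
}
```

Note that the readback stalls the pipeline; in a real renderer you would use a PBO to make the transfer asynchronous.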
Also, calling one of your GPUs a 'slave' is not a nice thing to do, it makes the GPU sad.
All modern APIs (and, since the introduction of WDDM, operating systems too) try very hard to abstract all of this away and give you as little information and as few guarantees as possible. It is hard for a program to predict with any certainty which graphics card it will run on (or what GPU memory may be available) at any time. As such, there is no notion of a "slave GPU".
That said, there are vendor-specific extensions to (kind of) circumvent the abstraction, such as [font=courier new,courier,monospace]WGL_NV_gpu_affinity[/font] or [font=courier new,courier,monospace]WGL_AMD_gpu_association[/font], which aim to let you associate a context with a GPU (much the same way CPU affinity works). I cannot tell how well they work in the presence of WDDM. Also note that most IHVs ship some kind of physics middleware (e.g. PhysX) and support for GPU computing (OpenCL, CUDA), both competing with graphics over resources. Some let the user configure which GPU to use for what, some may not, but in any case you don't know. If for no other reason than this, I would not touch affinity with a 3 meter pole.
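For reference, a rough sketch of how the NVIDIA affinity extension is typically used (hedged: this assumes the [font=courier new,courier,monospace]WGL_NV_gpu_affinity[/font] entry points have already been loaded via wglGetProcAddress, and the function/variable names are illustrative):

```cpp
// Sketch: create a context pinned to a specific GPU with
// WGL_NV_gpu_affinity. wglEnumGpusNV and wglCreateAffinityDCNV are
// extension entry points fetched with wglGetProcAddress (not shown).
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // HGPUNV and extension typedefs

HGLRC createContextOnGpu(unsigned gpuIndex)
{
    HGPUNV gpu = nullptr;
    if (!wglEnumGpusNV(gpuIndex, &gpu))     // enumerate the requested GPU
        return nullptr;

    HGPUNV gpuList[2] = { gpu, nullptr };   // null-terminated GPU list
    HDC affinityDC = wglCreateAffinityDCNV(gpuList);
    if (!affinityDC)
        return nullptr;

    // Pixel format setup omitted for brevity; a real program must call
    // ChoosePixelFormat/SetPixelFormat on the affinity DC before this.
    return wglCreateContext(affinityDC);
}
```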
Lastly, it is noteworthy that the documentation explicitly says:
[quote]Sharing OpenGL objects between affinity-contexts, by calling wglShareLists, will only succeed if the contexts have identical affinity masks.[/quote]
Thus, there is no "magical sharing": if you don't rely on the OS/driver/API abstraction to move the texture where it's needed without your knowledge, you need to do an explicit roundtrip, as Shuun said.
Thank you for the replies!
The point is that there are currently lots of systems with a dual-GPU configuration, thanks to Intel/AMD and their integrated GPU cores in processors. I want to exploit a dual-GPU config for shadow mapping, where one GPU would render the scene itself and the other would be responsible for shadows, i.e. creating the shadow map and transferring it to the primary GPU. I will try to look into it a bit more...