Hi.
I wonder what's happening at the low level when I have several GPUs and many displays connected to them. For example, I have 4 video cards and 16 displays. The cards are not connected with SLI/CrossFire, and I want to render something on all these screens, for example a window that has a part on every display. Let's say this window uses Direct3D.
Which video card does the actual rendering? Can I change that?
Is it possible to render the same scene on every GPU without transferring lots of data between them each frame?
Is it possible to use GPGPU on one card and use another one for rendering?
Can anyone point me to docs/examples that show how all this works?
Thank you.
What's happening under the hood when several GPUs are used?
Is it possible to use GPGPU on one card and use another one for rendering?
Yes, you can absolutely use one graphics card for desktop rendering and games (that's the one recognized by DirectX/OpenGL/etc.) and have another graphics card, completely separate from the first, possibly not even connected to a monitor (i.e. a pure compute device), and work with it via OpenCL/CUDA/other APIs. Look in your Device Manager; it shows clearly what the OS sees.
Is it possible to render the same scene on every GPU without transferring lots of data between them each frame?
Sure, you can give each GPU the same scene data in its memory and let them render it independently (they'll all render the same thing... but I don't think that's what you meant to ask).
Nvidia has compute-only drivers, but they only work for their Tesla cards (which are very expensive!). I'm not sure if AMD has anything similar.
If you have two GPUs with SLI enabled, adapter enumeration shows a "single" adapter; with SLI disabled, it shows two. If Adapter0 has the monitor attached and Adapter1 has none, and you make draw calls on Adapter1, you'll see a noticeable drop in frame rate compared to the SLI-enabled case, because the shader output on Adapter1 has to make its way to the monitor somehow. The implication, of course, is that you can make rendering calls on both adapters even though only one has a monitor attached.
If you have to read back from one GPU and upload to another, you'll see a performance hit. On a single GPU, you can share a 2D texture created by one device with another device (on my AMD Radeon HD cards I can actually share structured buffers, but that is not part of the DirectX documentation, and it does not work on NVIDIA cards). I believe DX11.1 has improved support for sharing resources, but I don't recall the details off the top of my head (they are covered in the MSDN docs).
I tend to use the primary GPU for rendering (visuals) and the other for compute shaders, but the output of my compute shaders is read back to the CPU and never used for visual display (on the machine that generated that data).
An experiment I have not yet tried is to have SLI disabled and two monitors, one per graphics card, and examine the performance.