What's happening under the hood when several GPUs are used?


3 replies to this topic

#1 valyard   Members   -  Reputation: 102


Posted 12 October 2012 - 09:05 AM

Hi.


I wonder what's happening at the low level when I have several GPUs and many displays connected to them. For example, I have 4 video cards and 16 displays. The cards are not connected with SLI/CrossFire, and I want to render something on all of these screens, for example a window that has a part on every display. Let's say this window uses Direct3D.

Which video card does the actual rendering? Can I change that?
Is it possible to render the same scene on every GPU without transferring lots of data between them each frame?
Is it possible to use GPGPU on one card and use another one for rendering?


Can anyone point me to docs/examples which show how all this stuff works?

Thank you.


#2 Bacterius   Crossbones+   -  Reputation: 8947


Posted 12 October 2012 - 10:48 PM

Is it possible to use GPGPU on one card and use another one for rendering?

Yes, absolutely. You can have one graphics card for desktop rendering and games, which will be the one recognized by DirectX/OpenGL/etc., and another graphics card, completely separate from the first and possibly not even connected to a monitor (i.e. a pure compute device), and work with it via OpenCL/CUDA/etc. Look in your Device Manager; it shows clearly what the OS sees.
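
On the compute side it can be as simple as the following rough sketch using the CUDA runtime API. The device index 1 is just an assumption; pick whichever card has no monitor attached.

// Minimal sketch: use one CUDA-capable card purely for compute
// while the primary card keeps doing the D3D/OpenGL rendering.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
    }

    // Assumption: device 1 is the card without a monitor attached.
    cudaSetDevice(1);

    // From here on, allocations and kernel launches go to that card.
    float* d_data = nullptr;
    cudaMalloc((void**)&d_data, 1024 * sizeof(float));
    // ... launch kernels, read results back with cudaMemcpy ...
    cudaFree(d_data);
    return 0;
}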

Is it possible to render the same scene on every GPU without transferring lots of data between them each frame?

Sure, you can give each GPU the same scene data in its own memory and let them render it independently (they'll all render the same thing, but I don't think that's what you meant to ask).
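
For example, with D3D11 you could create one device per physical adapter and upload the scene to each one. A rough sketch, with error handling omitted:

// Sketch: create an independent D3D11 device on every adapter DXGI reports.
// Each device gets its own copy of the scene resources in its own VRAM,
// so no per-frame transfer between cards is needed.
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D11Device>> CreateDevicePerAdapter()
{
    std::vector<ComPtr<ID3D11Device>> devices;
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ComPtr<ID3D11Device> device;
        // D3D_DRIVER_TYPE_UNKNOWN is required when an explicit adapter is passed.
        D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                          nullptr, 0, D3D11_SDK_VERSION,
                          &device, nullptr, nullptr);
        devices.push_back(device);
        adapter.Reset();
    }
    return devices;
}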



#3 MJP   Moderators   -  Reputation: 11438


Posted 13 October 2012 - 12:00 PM

Nvidia has compute-only drivers, but they only work for their Tesla cards (which are very expensive!). I'm not sure if AMD has anything similar.

#4 Dave Eberly   Members   -  Reputation: 1161


Posted 15 October 2012 - 12:58 AM



If you have two GPUs with SLI enabled, enumerating adapters shows a "single" adapter. If you disable SLI, enumeration shows two adapters. If Adapter0 has the monitor attached and Adapter1 has no monitor, and you make "draw" calls to Adapter1, you'll see a noticeable decrease in frame rate compared to the SLI-enabled case, because the shader output on Adapter1 has to make its way to the monitor somehow. The implication, of course, is that you can make rendering calls on both adapters even though only one has a monitor attached.
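
A quick way to see what DXGI reports on your own machine is to enumerate the adapters and the outputs (monitors) attached to each one; with SLI enabled this list collapses to a single adapter. A small sketch, error handling omitted:

// Sketch: list adapters and the monitors attached to each one.
#include <dxgi.h>
#include <wrl/client.h>
#include <cwchar>
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"Adapter %u: %s\n", a, desc.Description);

        ComPtr<IDXGIOutput> output;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o)
        {
            DXGI_OUTPUT_DESC odesc;
            output->GetDesc(&odesc);
            wprintf(L"  Output %u: %s\n", o, odesc.DeviceName);
            output.Reset();
        }
        adapter.Reset();
    }
    return 0;
}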

If you have to read back from one GPU and upload to another, you'll see a performance hit. On a single GPU, you can share a 2D texture created by one device with another device (on my AMD Radeon HD cards I can actually share structured buffers, but that is not part of the DirectX documentation, and it does not work on NVIDIA cards). I believe DX11.1 has improved support for sharing resources, but I don't recall the details off the top of my head (they are mentioned in the MSDN docs online).
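
A rough sketch of that single-GPU texture sharing; deviceA and deviceB are placeholders for two D3D11 devices created on the same adapter, and most error checks are omitted:

// Sketch: share a 2D texture created on deviceA with deviceB (same physical GPU).
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT ShareTexture(ID3D11Device* deviceA, ID3D11Device* deviceB,
                     ComPtr<ID3D11Texture2D>& sharedOnB)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 1024;
    desc.Height = 1024;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;   // make the resource shareable

    ComPtr<ID3D11Texture2D> texA;
    HRESULT hr = deviceA->CreateTexture2D(&desc, nullptr, &texA);
    if (FAILED(hr)) return hr;

    // Get the shared handle from the DXGI side of the resource.
    ComPtr<IDXGIResource> dxgiRes;
    texA.As(&dxgiRes);
    HANDLE handle = nullptr;
    dxgiRes->GetSharedHandle(&handle);

    // Open the same texture on the second device.
    return deviceB->OpenSharedResource(handle, IID_PPV_ARGS(&sharedOnB));
}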

I tend to use the primary GPU for rendering (visuals) and the other for compute shaders, but the output from my compute shaders is read back and never used for visual display (on the machine that generated that data).
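
The read-back itself is just the usual staging-buffer pattern, something like the following sketch (names are placeholders, error handling omitted):

// Sketch: read compute shader output back to the CPU through a staging buffer.
// 'device' and 'context' belong to the compute card; 'gpuBuffer' is the
// UAV-bound output buffer the compute shader wrote to.
#include <d3d11.h>
#include <wrl/client.h>
#include <vector>
#include <cstring>
using Microsoft::WRL::ComPtr;

std::vector<float> ReadBack(ID3D11Device* device, ID3D11DeviceContext* context,
                            ID3D11Buffer* gpuBuffer)
{
    // Mirror the source buffer's size, but make the copy CPU-readable.
    D3D11_BUFFER_DESC desc = {};
    gpuBuffer->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.MiscFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

    ComPtr<ID3D11Buffer> staging;
    device->CreateBuffer(&desc, nullptr, &staging);

    // Copy GPU -> staging, then map the staging buffer on the CPU.
    context->CopyResource(staging.Get(), gpuBuffer);

    std::vector<float> result(desc.ByteWidth / sizeof(float));
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &mapped)))
    {
        memcpy(result.data(), mapped.pData, desc.ByteWidth);
        context->Unmap(staging.Get(), 0);
    }
    return result;
}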

An experiment I have not yet tried is to have SLI disabled and two monitors, one per graphics card, and examine the performance.



