CDProp

OpenGL Making good use of multiple video cards on Windows 7 or 8


I'm sorry if this is the wrong forum for this.

 

I was wondering if someone could point me to some information on how best to make use of multiple video cards, as with Crossfire and SLI, specifically on Windows 7 or 8. I understand that the API probably abstracts much of this away, but any information about a) how the resources are managed between the two cards, and b) how a graphics programmer might be able to influence that resource management would be hugely helpful.

 

I am also specifically interested in knowing:

1) If one could build a machine that has two video cards (not necessarily with an SLI/Crossfire link), each driving a different display (with the desktop spanned across both).

2) If a graphics programmer, when creating their graphics context (OpenGL in my case), can enumerate the video cards somehow and decide which card to use.

 

I know this is a huge topic, but if anyone can point me to some good resources on the subject, that would be a huge help.


Neither D3D nor OpenGL has any concept of multi-GPU collaboration via SLI or Crossfire. In both cases it is completely up to the driver to interpret your commands and distribute them among the multiple GPUs, and any data dependencies between the two GPUs need to be inferred from those commands and handled transparently by the driver. I'm not sure about AMD, but historically Nvidia won't even enable SLI for your app unless it's on an approved whitelist. They have means of letting you programmatically specify that you want SLI enabled (and also the SLI rendering mode), but only if you're an approved developer under their NDA program.

As for your follow up questions:

 

1) You can definitely do this under Windows.

2) You can definitely do this in D3D11, so I would assume that you would be able to do it in OpenGL as well.
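For the D3D11 route, the per-GPU enumeration happens through DXGI rather than D3D itself. A minimal sketch of the idea (Windows-only and not tested here; link against dxgi.lib, error handling trimmed):

```cpp
// List the video adapters DXGI knows about, before creating a D3D11 device.
#include <dxgi.h>
#include <cstdio>

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        wprintf(L"Adapter %u: %s\n", i, desc.Description);  // e.g. "GeForce GTX 680"
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```

Passing the chosen IDXGIAdapter as the first argument to D3D11CreateDevice (with D3D_DRIVER_TYPE_UNKNOWN) then creates the device on that specific card.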



2) You can definitely do this in D3D11, so I would assume that you would be able to do it in OpenGL as well.

 

Unfortunately, you usually can't in OpenGL. It requires extensions that are only available with drivers for the workstation cards, such as the Nvidia Quadro line. I've read that AMD drivers will be nice and use the correct card depending on which monitor your window is created on, if both cards use the same driver, but I haven't tested it.

If the cards are from different vendors, it always uses the card that has the primary monitor. Standard WGL does not have functionality for this.

 

(Which is completely dumb, because you can actually create one OpenGL context on one card, then go into the display settings dialog and change the primary monitor to be on the other card, and then create the next context, and both contexts work side by side correctly using the different cards. I've tested that on Win7 with AMD + Nvidia and it works well; it's just that the pixel formats and driver are always chosen from the primary monitor.)

For OpenGL, these vendor-specific extensions are available. I didn't see anything for Intel or for cases where you have two different vendors' cards installed (though the driver setup on at least one popular OS makes it currently impossible to have different vendors' drivers installed):

http://developer.download.nvidia.com/opengl/specs/WGL_nv_gpu_affinity.txt

https://www.opengl.org/registry/specs/AMD/wgl_gpu_association.txt
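To illustrate, the Nvidia extension above is used roughly as follows. This is an untested sketch based on the linked spec: it only works on Quadro-class drivers, and the entry points (wglEnumGpusNV, wglCreateAffinityDCNV, wglDeleteDCNV) must first be loaded with wglGetProcAddress through an ordinary dummy context.

```cpp
// Sketch: bind an OpenGL context to a specific GPU via WGL_NV_gpu_affinity.
#include <windows.h>
#include <GL/wglext.h>   // HGPUNV and the PFNWGL... typedefs

// Loaded elsewhere via wglGetProcAddress on a throwaway context.
extern PFNWGLENUMGPUSNVPROC         wglEnumGpusNV;
extern PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV;
extern PFNWGLDELETEDCNVPROC         wglDeleteDCNV;

HGLRC createContextOnGpu(UINT gpuIndex) {
    HGPUNV gpu;
    if (!wglEnumGpusNV(gpuIndex, &gpu))   // fails when gpuIndex is out of range
        return NULL;

    HGPUNV list[] = { gpu, NULL };        // NULL-terminated GPU list
    HDC dc = wglCreateAffinityDCNV(list); // DC restricted to that one GPU

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
    pfd.dwFlags    = PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);      // context now lives on gpuIndex
    // Caller: wglMakeCurrent(dc, rc), render (typically to FBOs), and later
    // clean up with wglDeleteContext(rc) and wglDeleteDCNV(dc).
    return rc;  // dc is intentionally kept alive in this sketch
}
```

The AMD wgl_gpu_association extension achieves the same end differently, by associating already-created contexts with GPU ids rather than going through an affinity DC.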


This is just for my own computer, so luckily I have control over what video cards are being used, but unfortunately it looks like that extension is only available for the workstation cards, as Erik Rufelt said. At least, it isn't available on my GTX 680. Bummer!


You can easily make an SLI profile with Nvidia Inspector yourself, assuming you don't need any custom synchronization of render targets that Nvidia must implement in their driver. Most games don't require any synchronization, so just force AFR as the rendering mode and you're good to go. It's a bit more complicated for OpenGL, but I even have a working SLI profile for my Java game, although SLI only works in fullscreen for some reason.

The only drawback is that your users will also need Nvidia Inspector to import it, but I think it's safe to assume that anyone who's spent that much money on GPUs and has managed to get SLI working in the first place can manage the additional step of importing a profile, especially if you make a simple tutorial.

 

It seems like AMD supports something similar with Catalyst:

[Image: Catalyst Crossfire mode options (CFModes.png)]

 

A simpler but confusing solution is to abuse your executable's file name. For example, if I generate an executable for my game and name it "etqw.exe", which is the same name as the executable for Enemy Territory: Quake Wars, I get the same effect as making a custom profile for my game. For DirectX, I'd recommend "planetside.exe", which should work for simple games. This should work for both AMD and Nvidia.

Edited by theagentd


Nvidia won't even enable SLI for your app unless it's on an approved whitelist.

 

So that's why my GTX 690 won't utilize my second GPU in my games...

Edited by Tispe
