IDXGIAdapter with the best rendering performance

How do you obtain the IDXGIAdapter with the best rendering performance when multiple adapters are present?

// Enumerate all adapters and inspect their descriptions.
IDXGIFactory3 *factory = nullptr;
if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory3), (void **)&factory)))
    return;
IDXGIAdapter1 *a1 = nullptr;
for (UINT i = 0; factory->EnumAdapters1(i, &a1) != DXGI_ERROR_NOT_FOUND; ++i)
{
    IDXGIAdapter2 *a2 = nullptr;
    if (SUCCEEDED(a1->QueryInterface(__uuidof(IDXGIAdapter2), (void **)&a2)))
    {
        DXGI_ADAPTER_DESC2 desc;
        a2->GetDesc2(&desc);
        const wchar_t *d = desc.Description;   // e.g. one of the strings below
        a2->Release();
    }
    a1->Release();
}
factory->Release();

L"Intel(R) HD Graphics 5500"

L"NVIDIA GeForce GTX 960M"  -> best performance for rendering

L"Microsoft Basic Render Driver"

I had a Twitter conversation with Andrew Lauritzen at Intel on this topic just the other week!

 

The truth is there is no foolproof method of determining which of the available adapters has the best performance. A "proper" game would probably pick one based on a simple heuristic and then allow the user to override the default GPU using an options screen / saved setting.

 

For the first run you could do one of the following:

 

  1. Run a quick micro benchmark of fill-rate / vertex-rate / bandwidth (one or all) and see which card comes out on top.
  2. Pick adapters in the following order: AMD/NVIDIA, then Intel, then anything else.
  3. Pick the one with the most dedicated video memory from the adapter description. In a laptop with a "hybrid graphics" setup, it's almost certain that only one of the two hardware GPUs has any dedicated VRAM, in which case, pick that one.

Of course, methods 2 and 3 won't necessarily decide which GPU is faster in the case of two discrete desktop GPUs, but "amount of VRAM" isn't a bad heuristic for performance in general.
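
If it helps, here's a minimal sketch of heuristic 3 using DXGI 1.1 (the helper name PickAdapterByVram is just illustrative): skip the software adapter (the "Microsoft Basic Render Driver") via DXGI_ADAPTER_FLAG_SOFTWARE and keep the hardware adapter reporting the most dedicated VRAM.

IDXGIAdapter1 *PickAdapterByVram(IDXGIFactory1 *factory)
{
    IDXGIAdapter1 *best = nullptr;
    SIZE_T best_vram = 0;
    IDXGIAdapter1 *adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // Skip WARP / the Basic Render Driver.
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
        {
            adapter->Release();
            continue;
        }
        if (desc.DedicatedVideoMemory > best_vram)
        {
            if (best) best->Release();
            best = adapter;                       // keep this reference
            best_vram = desc.DedicatedVideoMemory;
        }
        else
        {
            adapter->Release();
        }
    }
    return best;                                  // caller releases; may be nullptr
}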

 

In short, do something sensible on first run and then allow the user to override your choice on future runs.

I used the third approach as a starting point:

  3. Pick the one with the most dedicated video memory from the adapter description. In a laptop with a "hybrid graphics" setup, it's almost certain that only one of the two hardware GPUs has any dedicated VRAM, in which case, pick that one.

Since I now pick the non-default adapter, however, I get a DXGI_ERROR_NOT_FOUND at index 0 while iterating its IDXGIOutputs. Do I need to attach an output somehow, since the dedicated GPU is not driving the display at this stage?

// Fails with DXGI_ERROR_NOT_FOUND for the selected (discrete) adapter.
const HRESULT result_output = m_adapter->EnumOutputs(0, &output);

If your render adapter is not connected to a given IDXGIOutput, you will not be able to enter fullscreen with that pairing. So if your users want to use exclusive fullscreen mode, you'll have to use the potentially less powerful adapter that's actually connected to the relevant output. It definitely doesn't make sense to show refresh rates for non-fullscreen modes. As for resolutions... if you're not in fullscreen, perhaps you just want to infer it from the window size.

 

There is one exception here, which is the hybrid laptop case, where the IHV control panel can make DXGI report that the discrete GPU is connected to monitors which it's not physically connected to, and the OS does some additional work to make fullscreen happen. The general case does not work though.
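
To illustrate (a sketch only; the helper name FindOutputOwner is mine, not a DXGI API): you can discover which adapter actually drives a display by walking every adapter and asking each for its first output.

IDXGIAdapter1 *FindOutputOwner(IDXGIFactory1 *factory, IDXGIOutput **out_output)
{
    *out_output = nullptr;
    IDXGIAdapter1 *adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        IDXGIOutput *output = nullptr;
        if (SUCCEEDED(adapter->EnumOutputs(0, &output)))
        {
            *out_output = output;                 // caller releases both
            return adapter;
        }
        adapter->Release();
    }
    return nullptr;                               // no adapter reports an output
}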

That exception is exactly my current situation. How do you enforce this behavior, where applicable, on a given machine?

Force? You can't - it's an end-user decision at the end of the day with any current implementation.

 

Suggest? There's an option for that. If your application exports specific symbols, the IHV control panels will default to the discrete GPU instead of the integrated one.

http://stackoverflow.com/questions/10535950/forcing-nvidia-gpu-programmatically-in-optimus-laptops 

http://stackoverflow.com/questions/17458803/amd-equivalent-to-nvoptimusenablement 

 

There's plenty of hits on this forum for more info:

http://www.gamedev.net/index.php?s=1658cb0cf03d48bf9b120bfea92c0e0e&app=googlecse#gsc.tab=0&gsc.q=nvoptimusenablement 

Thanks

 

P.S.: the "rename the .exe to some famous game's .exe" trick is a pretty original answer.

To conclude:

/**
 NVIDIA Optimus enablement: requests the high-performance NVIDIA GPU.

 @pre NVIDIA Control Panel > Preferred graphics processor > "Auto-select"
 */
extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

/**
 AMD "Optimus" (PowerXpress) enablement: requests the high-performance AMD GPU.
 */
extern "C" {
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
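
One caveat worth noting (as far as I know, per the IHV docs): these symbols have to be exported from the executable itself, not from a DLL it loads, or the drivers won't see them. You can confirm they made it into the export table with dumpbin /exports on the built .exe.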
