NightCreature83

Selecting the actual GPU


So ever since I switched my project to VS2015 and Windows 10 it won't actually select the NV GPU as the GPU to render with. I have code in the project like this, which should select the adapter that is not the Intel adapter:

IDXGIAdapter* adapter = nullptr;
IDXGIFactory* pFactory = nullptr;
HRESULT hr = CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)(&pFactory));
if (hr == S_OK)
{
    for (UINT counter = 0; hr == S_OK; ++counter)
    {
        adapter = nullptr;
        hr = pFactory->EnumAdapters(counter, &adapter); //Returns DXGI_ERROR_NOT_FOUND once there are no more adapters
        MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "%u", counter);
        if (adapter != nullptr)
        {
            DXGI_ADAPTER_DESC adapterDesc;
            adapter->GetDesc(&adapterDesc);
            MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "vendor id: %x", adapterDesc.VendorId);
            MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "device id: %x", adapterDesc.DeviceId);
            MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "subsystem id: %x", adapterDesc.SubSysId);
            MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "revision: %u", adapterDesc.Revision);
            MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "Dedicated VRAM: %llu MiB", adapterDesc.DedicatedVideoMemory / (1024 * 1024));
            MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "Dedicated RAM: %llu MiB", adapterDesc.DedicatedSystemMemory / (1024 * 1024));
            MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "Shared RAM: %llu MiB", adapterDesc.SharedSystemMemory / (1024 * 1024));
            std::string str;
            convertToCString(adapterDesc.Description, str); //Project helper: converts the wide-char description to a narrow string
            MSG_TRACE_CHANNEL("RENDER SYSTEM ADAPTER INFO:", "description: %s", str.c_str());

            if (adapterDesc.VendorId != 0x8086) //0x8086 is Intel's vendor ID; I run this on NV and AMD/ATi chips
            {
                break; //Keep this adapter and hand it to the device manager below
            }

            adapter->Release(); //Release skipped (Intel) adapters so they don't leak
        }
    }

    pFactory->Release();
}

if (!m_deviceManager.createDevice(adapter))
{
    ExitProcess(1); //FIXME: exit cleanly instead
}

 

However, when running this on a GTX 970M, even with the preferred GPU set to NVIDIA in the NVIDIA control panel, the game just won't render anything to the HWND. I have done RenderDoc traces that show me the D3D runtime is actually drawing stuff.

 

I have traced all the Windows and D3D calls and none of them are failing; they all return S_OK, which means everything should work.

Any idea how I can actually pick the NV GPU to run the application on? The debug build on the iGPU barely reaches 60 fps and the game is not drawing all that much to begin with, so going forward, keeping it on the iGPU is not going to be fast enough for me; debugging a slide show is a pain in the ass. (I was CPU bound and because of that had 3 fps, but I've fixed those things.)



Having the wrong GPU selected shouldn't stop your results from reaching the hwnd... it should just make them slower.

If you've got a blank window, the problem could lie elsewhere.


You should use the method that NVIDIA document for this, rather than trying to roll your own: http://docs.nvidia.com/gameworks/content/technologies/desktop/optimus.htm
 

extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}


AMD have a similar export, discussed here: http://stackoverflow.com/questions/17458803/amd-equivalent-to-nvoptimusenablement
 

extern "C" {
__declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}

 

To put it all together:
 

extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
__declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}


So you just declare these as globals at the top of one of your files and the NVIDIA (or AMD) GPU will be selected.



I think NVIDIA and AMD should let me decide what I want to run on and not make me jump through hoops. And I have tried the Optimus enablement; sadly it doesn't work, it still only wants to draw on the iGPU.

 

It enumerates the device, so it should let me tell the application which adapter to use and not just fall back on the iGPU anyway.

Edited by NightCreature83




The problem is that's not how Optimus works.  In an Optimus setup you don't have two GPUs and get to choose which one you wish to use.  The choice is instead:

  1. Use only the Intel.
  2. Use the NVIDIA for all rendering commands, following which the framebuffer is transferred to the Intel which then handles the Present.

Option 2 is what you want, but by enumerating GPUs and only selecting the NVIDIA you're not actually getting option 2.

 

This is all discussed in the Optimus whitepaper: http://www.nvidia.com/object/LO_optimus_whitepapers.html


 


The problem I am seeing, though, is that even with the extern set the performance doesn't actually get better; it's still stuck at around 50 fps in debug, which means it's actually running on the Intel chip and not the NV chip. Through tests I have seen that in release the Intel chip gives me ~600 fps, but when the adapter in code actually selects the NV chip it jumps to ~1700 fps.

So it still feels like, with the extern set, it isn't actually drawing on the NV chip, which is what I want; to me it seems the Optimus stuff is causing more problems than it is solving.

 

Edit: I get these FPS counts because I don't draw them through the renderer; I update my window's title bar every frame.
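
(Roughly, such a counter can be as simple as the following sketch; the timer and the title format here are assumptions for illustration, not the project's actual code. The point is that the number is measured on the CPU and pushed into the title with SetWindowText, so it does not depend on anything the renderer draws.)

#include <windows.h>
#include <cstdio>

// Hypothetical sketch of a title-bar FPS counter: call once per frame.
void updateTitleWithFps(HWND hwnd, LARGE_INTEGER& lastTime)
{
    LARGE_INTEGER now = {}, frequency = {};
    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&now);

    const double seconds = double(now.QuadPart - lastTime.QuadPart) / double(frequency.QuadPart);
    lastTime = now;

    char title[64];
    std::snprintf(title, sizeof(title), "FPS: %.1f", seconds > 0.0 ? 1.0 / seconds : 0.0);
    SetWindowTextA(hwnd, title); // independent of the renderer, so it works even if nothing is drawn
}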

Edited by NightCreature83


 


As of Windows 8.1, hybrid GPU configurations are officially supported by the OS, so that whitepaper is somewhat out of date. For D3D9, you are correct that your only choice is one or the other, and that the decision is made outside of the control of your application, but for DXGI (i.e. D3D11 or D3D12) there is no adapter hiding. The only thing that those exports will do is change the default enumeration order of the GPUs. Which, if you're enumerating explicitly, doesn't matter too much.
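
For reference, creating a device on an explicitly enumerated adapter looks roughly like this in D3D11 (a minimal sketch, not the original createDevice from the thread; the one gotcha is that a non-null adapter must be paired with D3D_DRIVER_TYPE_UNKNOWN, otherwise D3D11CreateDevice fails with E_INVALIDARG):

#include <d3d11.h>
#include <dxgi.h>

// Hypothetical sketch: create a D3D11 device on the adapter picked during enumeration.
bool createDeviceOnAdapter(IDXGIAdapter* adapter, ID3D11Device** outDevice, ID3D11DeviceContext** outContext)
{
    UINT flags = 0;
#if defined(_DEBUG)
    flags |= D3D11_CREATE_DEVICE_DEBUG; // enable the debug layer in debug builds
#endif
    D3D_FEATURE_LEVEL obtainedLevel = {};
    HRESULT hr = D3D11CreateDevice(
        adapter,                    // explicit adapter from EnumAdapters
        D3D_DRIVER_TYPE_UNKNOWN,    // required when adapter != nullptr
        nullptr,                    // no software rasterizer module
        flags,
        nullptr, 0,                 // default feature-level list
        D3D11_SDK_VERSION,
        outDevice,
        &obtainedLevel,
        outContext);
    return SUCCEEDED(hr);
}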

 

The OS also attempts to be smart and optimize the path for getting contents from the discrete GPU to the integrated display for the laptop common case. We attempt to eliminate as many copies as possible, and avoid CPU-side stalls or stalls in the desktop compositor by using advanced sync. We've recognized there are some latent issues here and are working to improve the situation.

 

So, some questions:

  1. What DXGI swap effect are you using for your swapchain? Try switching from blt model (i.e. SEQUENTIAL or DISCARD) to flip model (FLIP_SEQUENTIAL or FLIP_DISCARD); a flip-model example is sketched after this list. If you're already using flip, try going back.
  2. Are you using the internal laptop panel or an external monitor? Can you try using an external monitor?
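
A minimal sketch of a flip-model swap chain, assuming D3D11 and the classic DXGI_SWAP_CHAIN_DESC path (the factory, device, and window parameters here are placeholders, not names from the thread). Flip model needs at least two buffers, no MSAA on the swap chain itself, and one of the supported formats such as DXGI_FORMAT_R8G8B8A8_UNORM:

#include <d3d11.h>
#include <dxgi.h>

// Hypothetical helper: create a flip-model swap chain for an existing device and window.
IDXGISwapChain* createFlipSwapChain(IDXGIFactory* factory, ID3D11Device* device,
                                    HWND hwnd, UINT width, UINT height)
{
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferCount = 2;                                    // flip model requires at least 2 buffers
    desc.BufferDesc.Width = width;                           // client-area size, not the full window size
    desc.BufferDesc.Height = height;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow = hwnd;
    desc.SampleDesc.Count = 1;                               // flip model does not allow MSAA swap chains
    desc.Windowed = TRUE;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;         // Windows 10; use FLIP_SEQUENTIAL on Windows 8.x

    IDXGISwapChain* swapChain = nullptr;
    if (FAILED(factory->CreateSwapChain(device, &desc, &swapChain)))
    {
        return nullptr;
    }
    return swapChain;
}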


Yeah, I saw that all the NVIDIA control panel setting did was list the NV chip before the Intel chip. However, my current problem is that if I don't select the Intel chip, the draw calls still happen, except that it's not drawing to the HWND that was provided when the device was created. I am currently using DXGI_SWAP_EFFECT_DISCARD, so I'll give flip a try, and I will try an external monitor.


What Oberon_Command mentions seems to be the issue: once I used GetClientRect and used those values, everything started working again and it actually selects the NV chip as the one to render on. Thanks for that :)
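
Concretely, the fix boils down to something like this (a rough sketch; the project's actual swap-chain creation code will differ): size the back buffers from the client rectangle rather than the full window rectangle, which also includes borders and the title bar.

#include <windows.h>

// Hypothetical helper: back-buffer dimensions should come from the client area (GetClientRect),
// not from GetWindowRect, which includes the border and title bar.
void getBackBufferSize(HWND hwnd, UINT& width, UINT& height)
{
    RECT clientRect = {};
    GetClientRect(hwnd, &clientRect);   // left/top are always 0 for the client rect
    width  = static_cast<UINT>(clientRect.right - clientRect.left);
    height = static_cast<UINT>(clientRect.bottom - clientRect.top);
}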

 

It also means you can just use DXGI to select your render device; no need to deal with the extern variables like further up in the thread :).

Edited by NightCreature83
