
NightCreature83

Member Since 21 Feb 2008

Posts I've Made

In Topic: Selecting the actual GPU

31 May 2016 - 02:28 PM

This is the info I get from DXGI on Windows 10

[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(105) : 0
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(110) : vendor id: 8086
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(111) : device id: 416
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(112) : subsystem id: 11021462
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(113) : revision: 6
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(114) : Dedicated VRAM: 112 MiB
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(115) : Dedicated RAM: 0 MiB
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(116) : Shared RAM: 8150 MiB
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(119) : description: Intel(R) HD Graphics 4600
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(105) : 1
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(110) : vendor id: 10de
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(111) : device id: 13d8
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(112) : subsystem id: 11021462
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(113) : revision: 161
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(114) : Dedicated VRAM: 2991 MiB
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(115) : Dedicated RAM: 0 MiB
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(116) : Shared RAM: 8150 MiB
[RENDER SYSTEM ADAPTER INFO:] : ..\..\Graphics\RenderSystem.cpp(119) : description: NVIDIA GeForce GTX 970M

If you set the preferred adapter to NVIDIA in the driver's control panel, then DXGI will report that adapter first.
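
For context, adapter info like the above comes from a plain DXGI enumeration loop. A minimal sketch of one (not the actual RenderSystem.cpp code; the formatting here is illustrative):

#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc(&desc);

        // DXGI reports memory sizes in bytes; shift down to MiB for display.
        wprintf(L"%u: %s\n", i, desc.Description);
        wprintf(L"   vendor id: %04x, device id: %04x, revision: %u\n",
                desc.VendorId, desc.DeviceId, desc.Revision);
        wprintf(L"   Dedicated VRAM: %zu MiB, Shared RAM: %zu MiB\n",
                desc.DedicatedVideoMemory >> 20, desc.SharedSystemMemory >> 20);

        adapter->Release();
    }

    factory->Release();
    return 0;
}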


In Topic: Selecting the actual GPU

31 May 2016 - 12:21 PM

What Oberon_Command mentions seems to be the issue; once I used GetClientRect and used those values, everything started working again, and it actually selects the NV chip as the one to render on. Thanks for that :)

 

It also means you can just use DXGI to select your render device; no need to deal with the extern variables mentioned further up in the thread :).
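
For anyone hitting the same thing, a minimal sketch of the fix, assuming the swap chain was previously sized from the full window rect (the helper name is mine, not from the engine above):

#include <windows.h>
#include <dxgi.h>

// Build a swap chain description sized from the client area, not the full
// window rect (which includes borders and the title bar).
DXGI_SWAP_CHAIN_DESC MakeSwapChainDesc(HWND hwnd)
{
    RECT rc = {};
    GetClientRect(hwnd, &rc); // client area only; origin is always (0,0)

    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferDesc.Width  = rc.right - rc.left;
    desc.BufferDesc.Height = rc.bottom - rc.top;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count  = 1;
    desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount       = 1;
    desc.OutputWindow      = hwnd;
    desc.Windowed          = TRUE;
    desc.SwapEffect        = DXGI_SWAP_EFFECT_DISCARD;
    return desc;
}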


In Topic: Selecting the actual GPU

31 May 2016 - 10:05 AM

Yeah, I saw that all the NVIDIA control panel did was show me the NV chip before the Intel chip. However, my current problem is that if I don't select the Intel chip the draw calls still happen, except nothing is drawn to the HWND provided when the device was created. I am currently using DXGI_SWAP_EFFECT_DISCARD, so I'll give flip a try, and I will also try an external monitor.
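
(If it helps anyone later: flip is just a different SwapEffect in the swap chain description. A sketch of the relevant fields, with the size and format values assumed:)

// Flip-model swap chain description; only the flip-specific fields matter here.
DXGI_SWAP_CHAIN_DESC desc = {};
desc.BufferDesc.Width  = 1280;
desc.BufferDesc.Height = 720;
desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count  = 1;
desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount       = 2;    // flip model requires at least two buffers
desc.OutputWindow      = hwnd; // the hwnd passed at device creation
desc.Windowed          = TRUE;
desc.SwapEffect        = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // flip model (Windows 8+);
                                                           // FLIP_DISCARD needs Windows 10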


In Topic: Selecting the actual GPU

31 May 2016 - 03:49 AM

 

I think NVIDIA and AMD should let me decide what I want to run on and not make me jump through hoops. And I have tried the Optimus enablement export; sadly it doesn't work, it still only wants to draw on the iGPU.

It enumerates the device, so it should let me tell the application which adapter to use rather than just falling back on the iGPU anyway.


The problem is that's not how Optimus works. In an Optimus setup you don't have two GPUs and get to choose which one you wish to use. The choice is instead:

  1. Use only the Intel.
  2. Use the NVIDIA for all rendering commands, after which the framebuffer is transferred to the Intel, which then handles the Present.

Option 2 is what you want, but by enumerating GPUs and only selecting the NVIDIA you're not actually getting option 2.

 

This is all discussed in the Optimus whitepaper: http://www.nvidia.com/object/LO_optimus_whitepapers.html

 

The problem I am seeing, though, is that even with the extern set the performance doesn't actually improve; it's still stuck at 50 fps or so in debug, which means it's actually running on the Intel chip and not the NV chip. Through tests I have seen that in release the Intel chip gives me ~600 fps, but when the adapter in code actually selects the NV chip it jumps to 1700 fps.

So it still feels like, with the extern set, it isn't actually drawing on the NV chip, which is what I want; to me it seems the Optimus stuff is causing more problems than it solves.

 

Edit: I get these FPS counts because I don't draw them through the renderer; I update my window's title bar every frame.
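
(The title-bar update is just something like this; hwnd and fps are whatever the app already tracks:)

#include <windows.h>
#include <cstdio>

// Show the frame rate in the window title once per frame, so the
// measurement never goes through the renderer at all.
void UpdateTitleFps(HWND hwnd, float fps)
{
    wchar_t title[64];
    swprintf_s(title, 64, L"Engine - %.0f fps", fps);
    SetWindowTextW(hwnd, title);
}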


In Topic: Selecting the actual GPU

31 May 2016 - 01:21 AM

You should use the method that NVIDIA document for this, rather than trying to roll your own: http://docs.nvidia.com/gameworks/content/technologies/desktop/optimus.htm
 

extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}


AMD have similar, discussed here: http://stackoverflow.com/questions/17458803/amd-equivalent-to-nvoptimusenablement
 

extern "C" {
__declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}

 

To put it all together:
 

extern "C" {
__declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
__declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}


So you just declare these as globals at the top of one of your source files and the NVIDIA (or AMD) GPU will be selected.

I think NVIDIA and AMD should let me decide what I want to run on and not make me jump through hoops. And I have tried the Optimus enablement export; sadly it doesn't work, it still only wants to draw on the iGPU.

It enumerates the device, so it should let me tell the application which adapter to use rather than just falling back on the iGPU anyway.

