How do I detect the available video memory?


Well, that's the question. I HAD a method in DX9, but I can't seem to find my example. I don't even think it would work with DX11 anyway.


Copied from this MS sample code:


#include <cassert>
#include <cstdio>
#include <dxgi.h>

#ifndef SAFE_RELEASE
#define SAFE_RELEASE(p) { if(p) { (p)->Release(); (p) = nullptr; } }
#endif

// The factory passed in below is created like this:
//   IDXGIFactory* pFactory = nullptr;
//   HRESULT hr = CreateDXGIFactory( __uuidof(IDXGIFactory), (void**)&pFactory );

void EnumerateUsingDXGI( IDXGIFactory* pDXGIFactory )
{
    assert( pDXGIFactory != 0 );

    for( UINT index = 0; ; ++index )
    {
        IDXGIAdapter* pAdapter = nullptr;
        HRESULT hr = pDXGIFactory->EnumAdapters( index, &pAdapter );
        if( FAILED( hr ) ) // DXGIERR_NOT_FOUND is expected when the end of the list is hit
            break;

        DXGI_ADAPTER_DESC desc;
        memset( &desc, 0, sizeof( DXGI_ADAPTER_DESC ) );
        if( SUCCEEDED( pAdapter->GetDesc( &desc ) ) )
        {
            wprintf( L"\nDXGI Adapter: %u\nDescription: %s\n", index, desc.Description );

            for( UINT iOutput = 0; ; ++iOutput )
            {
                IDXGIOutput* pOutput = nullptr;
                hr = pAdapter->EnumOutputs( iOutput, &pOutput );
                if( FAILED( hr ) ) // DXGIERR_NOT_FOUND is expected when the end of the list is hit
                    break;

                DXGI_OUTPUT_DESC outputDesc;
                memset( &outputDesc, 0, sizeof( DXGI_OUTPUT_DESC ) );
                if( SUCCEEDED( pOutput->GetDesc( &outputDesc ) ) )
                {
                    wprintf( L"hMonitor: 0x%0.8Ix\n", ( DWORD_PTR )outputDesc.Monitor );
                    wprintf( L"hMonitor Device Name: %s\n", outputDesc.DeviceName );
                }

                SAFE_RELEASE( pOutput );
            }

            wprintf(
                L"\tGetVideoMemoryViaDXGI\n\t\tDedicatedVideoMemory: %Iu MB (%Iu)\n\t\tDedicatedSystemMemory: %Iu MB (%Iu)\n\t\tSharedSystemMemory: %Iu MB (%Iu)\n",
                desc.DedicatedVideoMemory / 1024 / 1024, desc.DedicatedVideoMemory,
                desc.DedicatedSystemMemory / 1024 / 1024, desc.DedicatedSystemMemory,
                desc.SharedSystemMemory / 1024 / 1024, desc.SharedSystemMemory );
        }

        SAFE_RELEASE( pAdapter );
    }
}
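
For completeness, a minimal way to drive that helper might look like the sketch below (error handling trimmed):

#include <dxgi.h>

void PrintAdapterInfo()
{
    IDXGIFactory* pFactory = nullptr;
    if( SUCCEEDED( CreateDXGIFactory( __uuidof(IDXGIFactory), (void**)&pFactory ) ) )
    {
        EnumerateUsingDXGI( pFactory );
        pFactory->Release();
    }
}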



That's a place to start, but I would like to have the available amount as the program runs, while creating and destroying textures. I do need what you showed me, but I need more.

I compared the memory info with what the DirectX Caps Viewer was showing me. It turns out my program is using the on-board video card and not my NVIDIA card. I have tried to change which adapter it uses, but it keeps crashing at my EnumOutputs() call.

I do need what you showed me, but I need more.

I think you need to include what else you are looking for or need.


What I'm after is the on-the-fly amount of available video memory.

NEW: my laptop has two video cards available: the on-board one and the NVIDIA card, which is set as the one to use for all games (and it seems to be used for them, except by my program).

I need to learn how to tell the program that I want to use the NVIDIA card instead of the on-board one. My attempts so far are failing at the EnumOutputs() call. I need more info......STUDY!

There's no standard way to do this prior to D3D12 :(

You can use NVAPI (for NVIDIA) and AGS (for AMD) to talk to the user's driver directly instead of using a Windows API.

e.g. NvAPI_GPU_GetMemoryInfo
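
To make that concrete, here is a rough sketch of querying the currently available dedicated memory through NVAPI. It assumes the NVAPI SDK is linked, and the exact struct fields can differ between NVAPI releases, so treat the names as illustrative rather than definitive (NVAPI reports the values in kilobytes):

#include <nvapi.h>

// Returns true on success; outputs are in kilobytes, as NVAPI reports them.
bool QueryNvidiaMemoryKB( NvU32* pCurrentAvailableKB, NvU32* pTotalDedicatedKB )
{
    if( NvAPI_Initialize() != NVAPI_OK )
        return false;

    NvPhysicalGpuHandle gpus[NVAPI_MAX_PHYSICAL_GPUS] = {};
    NvU32 gpuCount = 0;
    if( NvAPI_EnumPhysicalGPUs( gpus, &gpuCount ) != NVAPI_OK || gpuCount == 0 )
        return false;

    NV_DISPLAY_DRIVER_MEMORY_INFO memInfo = {};
    memInfo.version = NV_DISPLAY_DRIVER_MEMORY_INFO_VER;
    if( NvAPI_GPU_GetMemoryInfo( gpus[0], &memInfo ) != NVAPI_OK )
        return false;

    *pCurrentAvailableKB = memInfo.curAvailableDedicatedVideoMemory;
    *pTotalDedicatedKB   = memInfo.dedicatedVideoMemory;
    return true;
}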

I suppose I could just keep track of the memory usage myself and not exceed it. The reason I wanted to do it on the fly is that I would scale down images as needed to stay under the limit.
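
A minimal sketch of that bookkeeping idea (the class name and the mip-chain estimate are illustrative, not from any library):

#include <cstddef>

// Rough manual VRAM budget: charge an estimated size per texture and refuse
// (or downscale) anything that would push usage past the budget.
class VideoMemoryBudget
{
public:
    explicit VideoMemoryBudget( size_t budgetBytes ) : m_budget( budgetBytes ), m_used( 0 ) {}

    // Rough estimate for a mipmapped texture: a full mip chain adds about one third.
    static size_t EstimateTextureBytes( size_t width, size_t height, size_t bytesPerPixel )
    {
        return ( width * height * bytesPerPixel * 4 ) / 3;
    }

    // Returns true if the allocation fits; on false, scale the image down and retry.
    bool TryCharge( size_t bytes )
    {
        if( m_used + bytes > m_budget )
            return false;
        m_used += bytes;
        return true;
    }

    void Release( size_t bytes ) { m_used -= bytes; }

private:
    size_t m_budget;
    size_t m_used;
};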

Hodgman, do you know of an easy-to-read example that selects the second video card? I know it can be done because the DX examples can do it. The problem with the DX examples is that they are VERY hard to follow. Here is a code snippet of the problem area:


	result = CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
	if(FAILED(result))
	{
		MessageBox(hWnd, "create factory.", "Error", MB_OK);
		return false;
	}

	UINT WhichAdapter=1;
	UINT q=0;
	//vector <IDXGIAdapter*> vAdapter;
	while (factory->EnumAdapters(q,&adapter)!=DXGI_ERROR_NOT_FOUND){
		DXGI_ADAPTER_DESC pDesc;
		adapter->GetDesc(&pDesc);
		testString+=*vString(q)+" ";	// vString() is presumably an index-to-string helper
		for (int z=0;z<128;z++){
			if (pDesc.Description[z]!='\0')
				testString+=pDesc.Description[z];
			else break;
		}
		testString+="\n";
		adapter->Release();	// release each enumerated adapter before moving on
		q++;
	}



	// Use the factory to create an adapter for the primary graphics interface (video card).
	result = factory->EnumAdapters(0, &adapter);
	if(FAILED(result))
	{
		MessageBox(hWnd, "enum adapters.", "Error", MB_OK);
		return false;
	}

	// Enumerate the primary adapter output (monitor).
	result = adapter->EnumOutputs(0, &adapterOutput);
	if(FAILED(result))
	{
		MessageBox(hWnd, "enum outputs.", "Error", MB_OK);
		return false;
	}


I can view the available video adapters by way of the string I display once D3D is running: 0 is the on-board video and 1 is the NVIDIA card. If I put a 1 in the EnumAdapters(1, &adapter) line, it fails at EnumOutputs. I'm stumped.......

It fails because the NVIDIA card doesn't have any outputs. That's how those dual-GPU setups work: the final output has to go through the integrated card no matter what.
Both NVIDIA and AMD provide ways to force using the dedicated card; the easiest one is adding this to your program:
extern "C"
{
	__declspec(dllexport) DWORD NvOptimusEnablement = 1;
	__declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
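
As an aside, if you want to select a specific adapter explicitly rather than rely on those driver exports, you can enumerate the adapters, pick the one with the most dedicated video memory, and pass it to D3D11CreateDevice with D3D_DRIVER_TYPE_UNKNOWN (that driver type is required whenever an explicit adapter is supplied). On an Optimus laptop the exports above are still the simpler route, since the dedicated GPU has no outputs of its own. A sketch:

#include <dxgi.h>
#include <d3d11.h>

// Pick the adapter that reports the most dedicated video memory. Caller releases it.
IDXGIAdapter* PickAdapterWithMostVRAM( IDXGIFactory* pFactory )
{
    IDXGIAdapter* best = nullptr;
    SIZE_T bestMemory = 0;

    for( UINT i = 0; ; ++i )
    {
        IDXGIAdapter* adapter = nullptr;
        if( pFactory->EnumAdapters( i, &adapter ) == DXGI_ERROR_NOT_FOUND )
            break;

        DXGI_ADAPTER_DESC desc = {};
        adapter->GetDesc( &desc );

        if( best == nullptr || desc.DedicatedVideoMemory > bestMemory )
        {
            if( best ) best->Release();
            best = adapter;
            bestMemory = desc.DedicatedVideoMemory;
        }
        else
        {
            adapter->Release();
        }
    }
    return best;
}

// Usage sketch:
//   IDXGIAdapter* adapter = PickAdapterWithMostVRAM( factory );
//   D3D11CreateDevice( adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
//                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context );
//   adapter->Release();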

