Hawkblood

DX11 Converting resources GPU/CPU


bool TextureClass::Load(ENGINE *ge, WCHAR* filename)
{
	HRESULT result;

	// Load the texture from disk and create a shader resource view for it.
	result = D3DX11CreateShaderResourceViewFromFileW(ge->Graphics.dev, filename, NULL, NULL, &m_texture, NULL);
	if(FAILED(result))
	{
		return false;
	}

	// Grab the underlying resource so the class can keep a pointer to the 2D texture itself.
	ID3D11Resource *ppResource;
	m_texture->GetResource(&ppResource);
	D3D11_RESOURCE_DIMENSION pResourceDimension;
	ppResource->GetType(&pResourceDimension);

	auto txt = reinterpret_cast<ID3D11Texture2D*>( ppResource );
	pTexture = txt;

	// Cache the texture description; odesc keeps the original GPU-side settings.
	txt->GetDesc(&odesc);
	desc = odesc;
	Width = (float)desc.Width;
	Height = (float)desc.Height;
	OnGPU = true;

	return true;
}

void TextureClass::Shutdown()
{
	// Release the texture resource and its shader resource view.
	if(pTexture != NULL)
	{
		pTexture->Release();
		pTexture = NULL;
	}
	if(m_texture != NULL)
	{
		m_texture->Release();
		m_texture = NULL;
	}
}
bool TextureClass::TextureCreate(ENGINE *ge, int sx, int sy){
	// Describe a dynamic, CPU-writable texture that can be bound as a shader resource.
	ZeroMemory(&desc, sizeof(desc));
	desc.Width = sx;
	desc.Height = sy;
	desc.MipLevels = 1;
	desc.ArraySize = 1;
	desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	desc.SampleDesc.Count = 1;
	desc.Usage = D3D11_USAGE_DYNAMIC;
	desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
	desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
	desc.MiscFlags = 0;
	odesc = desc;	// remember the original GPU description

	pTexture = NULL;
	HRESULT result = ge->Graphics.dev->CreateTexture2D( &desc, NULL, &pTexture );	// this works, doesn't fail
	if(FAILED(result)) {
		return false;
	}

	result = ge->Graphics.dev->CreateShaderResourceView(pTexture, NULL, &m_texture);
	if(FAILED(result)){
		return false;
	}

	Width = (float)desc.Width;
	Height = (float)desc.Height;
	OnGPU = true;

	return true;
}
void TextureClass::ConvertToCPUMem(ENGINE *ge, bool CPUAccess){
	HRESULT hr;

	// Re-describe the texture as a staging resource that lives in system memory.
	desc = odesc;
	desc.Usage = D3D11_USAGE_STAGING;
	desc.BindFlags = 0;
	if (CPUAccess) desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
	else desc.CPUAccessFlags = 0;

	// Create the staging copy and copy the current contents into it.
	ID3D11Texture2D *pTexture2 = NULL;
	hr = ge->Graphics.dev->CreateTexture2D( &desc, NULL, &pTexture2 );
	ge->Graphics.devcon->CopyResource(pTexture2, pTexture);

	// Release the old texture/view and point the class at the staging copy.
	Shutdown();
	ge->Graphics.dev->CreateShaderResourceView(pTexture2, NULL, &m_texture);
	pTexture = pTexture2;
	OnGPU = false;
}
void TextureClass::ConvertToGraphicsMem(ENGINE *ge){
	HRESULT hr;

	// Restore the original GPU-side description.
	desc = odesc;

	// Create a new GPU texture and copy the staging contents back into it.
	ID3D11Texture2D *pTexture2 = NULL;
	hr = ge->Graphics.dev->CreateTexture2D( &desc, NULL, &pTexture2 );
	ge->Graphics.devcon->CopyResource(pTexture2, pTexture);

	// Release the staging copy and point the class at the new GPU texture.
	Shutdown();
	ge->Graphics.dev->CreateShaderResourceView(pTexture2, NULL, &m_texture);
	pTexture = pTexture2;
	OnGPU = true;
}

I am trying to minimize my GPU memory use by putting unused resources in CPU memory. The code above is what I came up with, but it has problems.

 

I tested it by converting to CPU memory and then back, followed by a render. It works fine for a few frames, then the texture disappears.

 

pTexture is an ID3D11Texture2D*.

desc and odesc are D3D11_TEXTURE2D_DESC structures.

 

 

Before you ask: I will eventually need this so that I can keep textures in CPU memory when not in use. I will have LOTS of textures and I don't want to load from disk every time I need to swap one back in.
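
For reference, the test looked roughly like this (a sketch; the engine instance and file name are placeholders):

// Rough outline of the test described above (placeholder names for the engine instance and file).
TextureClass tex;
if (tex.Load(&engine, L"test.dds"))
{
	tex.ConvertToCPUMem(&engine, true);	// copy into a staging texture in system memory
	tex.ConvertToGraphicsMem(&engine);	// copy back into a GPU texture
	// ...then render with the texture every frame
}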


FYI the Windows video memory manager will do this for you automatically. Every time the driver submits a command buffer to the GPU, it includes a list of all resources that are referenced by those commands. The OS will then try to make sure that those resources are available in GPU memory, potentially moving things to or from system memory in order to make this happen. WDDM calls this concept "residency", and in D3D12 it's actually handled manually by the application.
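
In practice that means calls like ID3D12Device::MakeResident and ID3D12Device::Evict. A minimal sketch, assuming device is your ID3D12Device and pTexture12 is an existing ID3D12Resource* (D3D12 only, requires d3d12.h):

// An ID3D12Resource is an ID3D12Pageable, so it can be passed to the residency calls directly.
ID3D12Pageable* pageables[] = { pTexture12 };

// Tell the OS this resource is not needed right now; it may be paged out of GPU memory.
device->Evict(_countof(pageables), pageables);

// Before using it again, ask the OS to bring it back into GPU memory.
device->MakeResident(_countof(pageables), pageables);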


So the shader resources are not in GPU memory already?

 

No, they are.

 

What will happen is that the move from GPU memory to CPU memory and back is handled automatically.

 

What "The OS will then try to make sure that those resources are available in GPU memory, potentially moving things to or from system memory" means is that if the resource is already in GPU memory then it doesn't need to be moved.


OK. Say I have 2 GB on my video card. If I load 3 GB of images into my program, will it all be in CPU memory and be passed to the video card when needed?


OK. Say I have 2 GB on my video card. If I load 3 GB of images into my program, will it all be in CPU memory and be passed to the video card when needed?

 

No.

 

What will likely happen is that each resource will go into video memory as it is loaded. When you run out of video memory, the least-recently-used resource(s) will be swapped out of video memory to make room.


If you have a 2 GB video card, you should definitely make sure that you don't use more than 2 GB of resources within any one frame, as that will cause the OS to move resources between GPU/CPU in the middle of your frame, which can add dozens of milliseconds of stalling :)

 

If you only ever use 1 GB of resources per frame, but use 3 GB of resources in total, then it's not too bad. Hopefully the OS will move resources between GPU/CPU only occasionally, as required.

 

If the OS is doing a bad job though, then yes, you can micro-manage it yourself by destroying resources and recreating them later.

BTW, a lot of games do destroy GPU resources when they're not required and reload them from disk later. That's basically how all open-world games work ;)
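
With your TextureClass, that could look something like this (a sketch using your existing Load and Shutdown; the file name is just a placeholder):

// Free the GPU copy when the texture hasn't been needed for a while...
tex.Shutdown();

// ...and stream it back in from disk when it's needed again.
if (!tex.Load(&engine, L"test.dds"))
{
	// handle the load failure
}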


Thanks guys/gals, that info is what I needed. Now I have to come up with a way to scale resource usage according to the available GPU memory. I already have a detection function for the available memory, so now I need to use it.
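
In case it helps anyone, the detection part boils down to something like this (a sketch; error checks omitted, requires dxgi.h, and ge->Graphics.dev is the ID3D11Device from the code above):

// Query how much dedicated video memory the adapter reports via DXGI.
IDXGIDevice* dxgiDevice = NULL;
IDXGIAdapter* adapter = NULL;
DXGI_ADAPTER_DESC adapterDesc;

ge->Graphics.dev->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
dxgiDevice->GetAdapter(&adapter);
adapter->GetDesc(&adapterDesc);

SIZE_T videoMemoryBytes = adapterDesc.DedicatedVideoMemory;	// budget to scale resource usage against

adapter->Release();
dxgiDevice->Release();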
