DwarvesH

DX11 How to use D3DX11CreateShaderResourceViewFromFile to create a CPU readable texture?


Hello everybody!

 

I have a small problem: I can't figure out how to create a texture that is readable on the CPU side after loading. I have googled the issue, and the usual advice is to change the Usage and CpuAccessFlags, but as soon as I touch those flags, D3DX11CreateShaderResourceViewFromFile returns a failing HRESULT.

 

I need to read the texture for two things:

1. I have finally transitioned to VTF (vertex texture fetch). Thousands of lines of CPU code for building and managing terrain LOD have moved to the GPU. This means I need my terrain data in a GPU-readable format: textures. I read my terrain map into a texture and would like to split it up into small chunks of 128x128 (the exact size is not important). So I want to read one or more large textures, lock them, use the data on the CPU to build all mip levels for each chunk, create a default-usage texture with the normal bind flags for each chunk, and then discard the large texture (see the sketch after this list).

2. I have several special partitioning schemes whose data is saved on disk as a texture. I need to read that texture, process it, and then discard it.

 

Both steps are done once at load time, so performance is not a problem.
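For illustration, the per-chunk copy I have in mind would look something like this. It is just a minimal sketch that assumes a 16-bit single-channel heightmap (DXGI_FORMAT_R16_UNORM) and a texture that has already been successfully mapped; CopyChunk is a made-up name:

#include <cstdint>
#include <cstring>
#include <vector>
#include <d3d11.h>

// Minimal sketch: copy one 128x128 chunk out of a mapped texture.
// 'mapped' comes from a successful ID3D11DeviceContext::Map() call on a
// 16-bit single-channel (DXGI_FORMAT_R16_UNORM) heightmap texture.
std::vector<uint16_t> CopyChunk(const D3D11_MAPPED_SUBRESOURCE& mapped,
                                UINT chunkX, UINT chunkY, UINT chunkSize = 128)
{
    std::vector<uint16_t> chunk(chunkSize * chunkSize);
    const uint8_t* src = static_cast<const uint8_t*>(mapped.pData);
    for (UINT row = 0; row < chunkSize; ++row)
    {
        // RowPitch is the byte stride of one texture row, which can be
        // larger than width * bytes-per-pixel.
        const uint8_t* srcRow = src
            + (chunkY * chunkSize + row) * mapped.RowPitch
            + chunkX * chunkSize * sizeof(uint16_t);
        std::memcpy(&chunk[row * chunkSize], srcRow,
                    chunkSize * sizeof(uint16_t));
    }
    return chunk;
}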

 

I have a simple wrapper for the texture class:

bool Create(wchar_t* path) {
    HRESULT result;

    // Describe how the file should be loaded.
    D3DX11_IMAGE_LOAD_INFO info;
    info.Width = D3DX11_DEFAULT;
    info.Height = D3DX11_DEFAULT;
    info.Depth = D3DX11_DEFAULT;
    info.FirstMipLevel = D3DX11_DEFAULT;
    info.MipLevels = D3DX11_DEFAULT;
    info.Usage = (D3D11_USAGE)D3DX11_DEFAULT;
    info.BindFlags = D3DX11_DEFAULT;
    info.CpuAccessFlags = D3D11_CPU_ACCESS_READ;
    info.MiscFlags = D3DX11_DEFAULT;
    info.Format = DXGI_FORMAT_FROM_FILE;
    info.Filter = D3DX11_DEFAULT;
    info.MipFilter = D3DX11_DEFAULT;
    info.pSrcInfo = NULL;
    //info.CpuAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;

    // Load the texture and create the shader resource view.
    result = D3DX11CreateShaderResourceViewFromFile(DeviceSingleton, path, &info, NULL, &Handle, NULL);
    if (FAILED(result))
        return false;

    // Query the underlying texture to read back its dimensions.
    ID3D11Texture2D* tex;
    Handle->GetResource((ID3D11Resource**)&tex);
    D3D11_TEXTURE2D_DESC desc;
    tex->GetDesc(&desc);
    tex->Release();

    Width = desc.Width;
    Height = desc.Height;

    return true;
}

I tried all combinations of Usage and CpuAccessFlags and the creation fails. It only works with D3DX11_DEFAULT for all values.

 

And if I leave everything at the defaults, my Lock method fails:

void* Lock() {
    ID3D11Resource* res = nullptr;

    Handle->GetResource(&res);
    res->QueryInterface(&TexHandle);

    // Map the texture so the CPU can access its data.
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    HRESULT result = DeviceContextSingleton->Map(TexHandle, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    if (FAILED(result))
        return nullptr;

    return mappedResource.pData;
}

The fields are:

ID3D11ShaderResourceView* Handle;
ID3D11Texture2D* TexHandle;

Thank you for your time reading this!


In the DX11 docs it states:

D3D11_CPU_ACCESS_READ
The resource is to be mappable so that the CPU can read its contents. Resources created with this flag cannot be set as either inputs or outputs to the pipeline and must be created with staging usage (see D3D11_USAGE).

So, have you tried with D3D11_USAGE_STAGING?

Edit: Bind flags may need to be set to zero (no flags at all), since staging resources cannot be bound...
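A rough, untested sketch of what I mean, reusing your DeviceSingleton and load-info setup. Note that it loads into a plain texture with D3DX11CreateTextureFromFile rather than creating a shader resource view, since a view can't be created on a staging resource anyway:

// Untested sketch: load the file straight into a STAGING texture so the
// CPU can Map() and read it. BindFlags must be zero because staging
// resources cannot be bound to the pipeline.
D3DX11_IMAGE_LOAD_INFO info;
info.Width = D3DX11_DEFAULT;
info.Height = D3DX11_DEFAULT;
info.Depth = D3DX11_DEFAULT;
info.FirstMipLevel = D3DX11_DEFAULT;
info.MipLevels = D3DX11_DEFAULT;
info.Usage = D3D11_USAGE_STAGING;          // CPU-readable usage
info.BindFlags = 0;                        // staging => no pipeline binding
info.CpuAccessFlags = D3D11_CPU_ACCESS_READ;
info.MiscFlags = 0;
info.Format = DXGI_FORMAT_FROM_FILE;
info.Filter = D3DX11_DEFAULT;
info.MipFilter = D3DX11_DEFAULT;
info.pSrcInfo = NULL;

// Load into a plain resource instead of a shader resource view.
ID3D11Resource* staging = NULL;
HRESULT hr = D3DX11CreateTextureFromFile(DeviceSingleton, path, &info,
                                         NULL, &staging, NULL);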

Edited by vinterberg



So I want to read one or more large textures, lock them, use the data on the CPU to build all mip levels for each chunk, create a default-usage texture with the normal bind flags for each chunk, and then discard the large texture.

I have several special partitioning schemes whose data is saved on disk as a texture. I need to read that texture, process it, and then discard it.

 

Why do you need to create a GPU resource then? Just save your "large textures" as raw height values and read them directly from disk to CPU. That would make much more sense (and would be faster). 
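For example, something like this rough sketch; the file layout (a width/height header followed by raw 16-bit samples) is hypothetical, so adjust it to whatever your exporter writes:

#include <cstdint>
#include <fstream>
#include <vector>

// Rough sketch: read a raw 16-bit heightmap straight from disk into memory.
// Assumes a hypothetical layout: uint32 width, uint32 height, then samples.
bool LoadRawHeights(const wchar_t* path, std::vector<uint16_t>& heights,
                    uint32_t& width, uint32_t& height)
{
    // MSVC's fstream accepts wide paths; use a narrow path elsewhere.
    std::ifstream file(path, std::ios::binary);
    if (!file)
        return false;

    file.read(reinterpret_cast<char*>(&width), sizeof(width));
    file.read(reinterpret_cast<char*>(&height), sizeof(height));

    heights.resize(size_t(width) * height);
    file.read(reinterpret_cast<char*>(heights.data()),
              heights.size() * sizeof(uint16_t));
    return !file.fail();
}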


So, have you tried with D3D11_USAGE_STAGING? Bind flags may need to be set to zero (no flags at all) since staging resources cannot be bound...

 

Thanks for the help!

 

Unfortunately, even with Usage, BindFlags, and CpuAccessFlags set as suggested, the texture still fails to create. I even tried setting MiscFlags to zero, as one link suggested.

 

Right now I'm trying to create a second staging texture, copy the first one into it, and lock that one instead. This is far more complicated than it should be, with the maze of textures, resources, and resource views in DirectX 11.

 

I created the texture and the ShaderResourceView, and I'm trying to figure out how to get a Resource from the objects I created to pass to CopyResource.
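In other words, something like this (rough, untested sketch; GetResource is how I pull the ID3D11Resource out of the view):

// Rough sketch: copy the loaded GPU texture into a CPU-readable staging
// texture, then Map() the staging copy for reading.
ID3D11Resource* srcRes = NULL;
Handle->GetResource(&srcRes);        // Handle is the ID3D11ShaderResourceView*

ID3D11Texture2D* srcTex = NULL;
srcRes->QueryInterface(&srcTex);

D3D11_TEXTURE2D_DESC desc;
srcTex->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;            // CPU-readable copy
desc.BindFlags = 0;                          // staging resources cannot be bound
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* stagingTex = NULL;
HRESULT hr = DeviceSingleton->CreateTexture2D(&desc, NULL, &stagingTex);
if (SUCCEEDED(hr))
{
    // A texture is an ID3D11Resource, so it can be passed directly.
    DeviceContextSingleton->CopyResource(stagingTex, srcTex);

    D3D11_MAPPED_SUBRESOURCE mapped;
    hr = DeviceContextSingleton->Map(stagingTex, 0, D3D11_MAP_READ, 0, &mapped);
    // ... read mapped.pData / mapped.RowPitch, then Unmap() and Release().
}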

 

 

 


Why do you need to create a GPU resource then? Just save your "large textures" as raw height values and read them directly from disk to CPU.

 

 

For the terrain there are two stages.

 

The first stage is preprocessing. I need the input height map to be GPU-readable and in texture format because:

1. Artists provide textures.

2. The output of the GPU simplex noise shader is a texture.

3. The input is passed on to the normal-map calculation shader, the self-shadowing shader, etc.

The output of these stages is one or more textures.

 

These I need to split into small tiles for streaming and for the second stage, which is level loading. The tiles are optionally cached to disk. This stage does not work with the big textures, only the small ones.

 

Performance is not an issue. The whole thing should take fractions of a second, except for the disk access of course.
