DX11 How to use D3DX11CreateShaderResourceViewFromFile to create a CPU readable texture?

Recommended Posts

Hello everybody!

I have a small problem: I can't figure out how to create a texture that is readable on the CPU side after loading. I have googled the issue and the usual advice is to change the Usage and CpuAccessFlags, but as soon as I touch those flags, D3DX11CreateShaderResourceViewFromFile returns a failing HRESULT.

I need to read the texture for two things:

1. I have finally transitioned to VTF. Thousands of lines of CPU code for building and managing terrain LOD have been moved to the GPU. This means I need my terrain data in a GPU readable format: textures. I read my terrain map into a texture and I would like to split it up into small chunks of 128x128 (the exact size is not important). So I want to read one or multiple large textures, lock them, use the data on the CPU to build all mip levels for each chunk, create a texture with default usage and bind flags for each chunk, and then discard the large texture (rough sketch below).

2. I have several special partitioning schemes that have their data saved on disk as a texture. I need to read that texture, process it, and then discard it.

Both steps are done once at load time, so performance is not a problem.
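
For point 1, the CPU-side splitting itself is simple once the texture can be mapped; a rough sketch of what I mean (TILE, the 16-bit format, and ExtractTile are placeholders, not my actual code):

#include <cstdint>
#include <cstring>
#include <d3d11.h>

static const int TILE = 128;

// Copy one TILE x TILE block out of a mapped 16-bit texture.
void ExtractTile(const D3D11_MAPPED_SUBRESOURCE& mapped,
                 int tileX, int tileY, uint16_t* outTile)
{
	const uint8_t* src = static_cast<const uint8_t*>(mapped.pData);
	for (int row = 0; row < TILE; ++row) {
		// RowPitch is the byte stride between texture rows; it may be
		// padded, so don't assume Width * sizeof(uint16_t).
		const uint8_t* srcRow = src
			+ (size_t)(tileY * TILE + row) * mapped.RowPitch
			+ (size_t)tileX * TILE * sizeof(uint16_t);
		memcpy(outTile + (size_t)row * TILE, srcRow, TILE * sizeof(uint16_t));
	}
}
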
I have a simple wrapper for the texture class:

bool Create(wchar_t* path) {
		HRESULT result;

		// Describe how the file should be loaded. D3DX11_DEFAULT lets
		// D3DX11 pick the value from the file or a sensible default.
		D3DX11_IMAGE_LOAD_INFO info;
		info.Width = D3DX11_DEFAULT;
		info.Height = D3DX11_DEFAULT;
		info.Depth = D3DX11_DEFAULT;
		info.FirstMipLevel = D3DX11_DEFAULT;
		info.MipLevels = D3DX11_DEFAULT;
		info.Usage = (D3D11_USAGE) D3DX11_DEFAULT;
		info.BindFlags = D3DX11_DEFAULT;
		info.CpuAccessFlags = D3D11_CPU_ACCESS_READ; // as soon as this is set, creation fails
		info.MiscFlags = D3DX11_DEFAULT;
		info.Format = DXGI_FORMAT_FROM_FILE;
		info.Filter = D3DX11_DEFAULT;
		info.MipFilter = D3DX11_DEFAULT;
		info.pSrcInfo = NULL;
		//info.CpuAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;

		result = D3DX11CreateShaderResourceViewFromFile(DeviceSingleton, path, &info, NULL, &Handle, NULL);
		if(FAILED(result))
			return false;

		// Query the underlying texture to read back its dimensions.
		ID3D11Texture2D* tex;
		Handle->GetResource((ID3D11Resource**)&tex);
		D3D11_TEXTURE2D_DESC desc;
		tex->GetDesc(&desc);
		tex->Release();

		Width = desc.Width;
		Height = desc.Height;

		return true;
	}

I tried all combinations of Usage and CpuAccessFlags, and creation always fails; it only works with D3DX11_DEFAULT for all values.

And if I leave all at default, my Lock method fails:

void* Lock() {
		ID3D11Resource* res = nullptr;

		// Fetch the texture interface behind the shader resource view.
		Handle->GetResource(&res);
		res->QueryInterface(&TexHandle);
		res->Release();

		// Map the texture so its data can be accessed; this is the call that fails.
		D3D11_MAPPED_SUBRESOURCE mappedResource;
		HRESULT result = DeviceContextSingleton->Map(TexHandle, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
		if(FAILED(result))
			return nullptr;

		return mappedResource.pData;
	}

The fields are:

ID3D11ShaderResourceView* Handle;
ID3D11Texture2D* TexHandle;

Thank you for taking the time to read this!


In the DX11 docs it states:

D3D11_CPU_ACCESS_READ
The resource is to be mappable so that the CPU can read its contents. Resources created with this flag cannot be set as either inputs or outputs to the pipeline and must be created with staging usage (see D3D11_USAGE).

So, have you tried D3D11_USAGE_STAGING?

Edit: BindFlags may need to be set to zero (no flags at all), since staging resources cannot be bound to the pipeline...
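
Something like this, perhaps (untested, and borrowing DeviceSingleton and path from your snippet). Note it has to go through D3DX11CreateTextureFromFile rather than the ShaderResourceView variant, because a staging resource cannot have a shader resource view:

// Untested sketch: load the file straight into a staging texture.
// The C++ constructor of D3DX11_IMAGE_LOAD_INFO initializes every
// field to a sensible default, so only the relevant ones are set here.
D3DX11_IMAGE_LOAD_INFO info;
info.Usage = D3D11_USAGE_STAGING;
info.BindFlags = 0;                          // staging resources cannot be bound
info.CpuAccessFlags = D3D11_CPU_ACCESS_READ;
info.Format = DXGI_FORMAT_FROM_FILE;

ID3D11Resource* staging = nullptr;
HRESULT hr = D3DX11CreateTextureFromFile(DeviceSingleton, path, &info, NULL, &staging, NULL);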

Edited by vinterberg



Quoting the OP:

"So I want to read one or multiple large textures, lock them, use the data on the CPU to build all mip levels for each chunk, create a texture with default usage and bind flags for each chunk, and then discard the large texture."

"I have several special partitioning schemes that have their data saved on disk as a texture. I need to read that texture, process it, and then discard it."

Why do you need to create a GPU resource then? Just save your "large textures" as raw height values and read them directly from disk to the CPU. That would make much more sense (and would be faster).
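
Something like this (the file name and dimensions are placeholders):

#include <cstdint>
#include <fstream>
#include <vector>

// Read raw 16-bit height values straight from disk; no GPU resource involved.
std::vector<uint16_t> LoadRawHeights(const char* file, int width, int height)
{
	std::vector<uint16_t> heights((size_t)width * height);
	std::ifstream in(file, std::ios::binary);
	in.read(reinterpret_cast<char*>(heights.data()),
	        heights.size() * sizeof(uint16_t));
	return heights;
}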


Quoting vinterberg:

"So, have you tried D3D11_USAGE_STAGING? ... BindFlags may need to be set to zero (no flags at all), since staging resources cannot be bound to the pipeline..."

Thanks for the help!

Unfortunately, even with Usage, BindFlags, and CpuAccessFlags set as suggested, the texture still fails to create. I even tried setting MiscFlags to zero, as one link suggested.

Right now I'm trying to create a second staging texture, copy the first one into it, and map that one instead. This is far more complicated than it should be, with the maze of textures, resources, and resource views in DirectX 11.

I created the texture and the ShaderResourceView, and I'm trying to figure out how to get an ID3D11Resource from the created views so I can pass it to CopyResource.
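
What I'm attempting looks roughly like this (the staging part is my guess based on the replies above, not working code):

ID3D11Resource* src = nullptr;
Handle->GetResource(&src);                    // the resource behind the SRV

ID3D11Texture2D* srcTex = nullptr;
src->QueryInterface(&srcTex);
src->Release();

// Second texture: same description, but staging usage, CPU read, no binding.
D3D11_TEXTURE2D_DESC desc;
srcTex->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
HRESULT hr = DeviceSingleton->CreateTexture2D(&desc, NULL, &staging);
if (SUCCEEDED(hr)) {
	DeviceContextSingleton->CopyResource(staging, srcTex);

	D3D11_MAPPED_SUBRESOURCE mapped;
	hr = DeviceContextSingleton->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
	// ... read mapped.pData / mapped.RowPitch, then Unmap and Release everything ...
}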

Quoting the earlier reply:

"Why do you need to create a GPU resource then? Just save your 'large textures' as raw height values and read them directly from disk to the CPU."

For the terrain there are two stages.

The first stage is preprocessing. I need the input height map to be GPU readable and in texture format because:

1. Artists provide textures.

2. The output of the GPU simplex noise shader is a texture.

3. The input is passed on to the normal map calculation shader, the self shadowing shader, etc.

The output of these stages is one or more textures.

These I need to split into small tiles for streaming and for the second stage, which is level loading. They are optionally cached to disk. This stage works only with the small tiles, never the big textures.

Performance is not an issue. The whole thing should take fractions of a second, except for the disk access, of course.
