DX10 CreateShaderResourceView bug

Started by
2 comments, last by sirob 15 years, 9 months ago
Hi, first I'll describe my problem. In the context of a larger DX10 project I want to create a Texture2DArray of size Width x Height x SliceCount with n mip levels. The texture array is bound as both shader resource and render target. Easy.

Now comes the tricky part. I have a shader which performs 3D maximum-value mip-mapping, mip level by mip level in parallel and slice by slice serially. Nothing interesting. To realize this I create shader resource views that each cover all texture array elements but only a single mip slice.

The DX10 documentation states something about this for CreateRenderTargetView, but not for CreateShaderResourceView. From Texture Views (Direct3D 10) on MSDN: "Create a view object for a render target by calling CreateRenderTargetView. ... You can use a subresource (a mipmap level, array index combination) to bind to any array of subresources. So you could bind to the second mipmap level and only update this particular mipmap level."

So I ask myself whether this works for CreateShaderResourceView, too. The example code appended below realizes my allocation task in its simplest form. The runtime returns an E_OUTOFMEMORY error.

The key argument to CreateShaderResourceView is the D3D10_SHADER_RESOURCE_VIEW_DESC structure. For a Texture2DArray it uses four fields to control the view binding:

FirstArraySlice: first slice of the array
ArraySize: number of array slices to bind, starting from FirstArraySlice
MostDetailedMip: index of the mip level you want to use
MipLevels: number of mip levels to use, starting from MostDetailedMip

For further comments, please see the source code. I encourage you to copy it into a test project and try it yourself. It's ready to use; you just have to initialize the D3D10Device correctly.
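As background on the "(mipmap level, array index) combination" mentioned above: Direct3D numbers subresources mip-major within each array slice, which the SDK exposes as the D3D10CalcSubresource helper in d3d10.h. A minimal re-implementation of that mapping (same formula as the SDK inline, reproduced here just for illustration):

```cpp
#include <cassert>

// Mirrors the SDK's D3D10CalcSubresource helper: subresources are
// numbered mip-major within each array slice of a texture array.
inline unsigned int CalcSubresource(unsigned int mipSlice,
                                    unsigned int arraySlice,
                                    unsigned int mipLevels)
{
    return mipSlice + arraySlice * mipLevels;
}
```

For example, with 4 mip levels per slice, mip 1 of array slice 2 is subresource index 9.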

  ID3D10Device *dev;  //get a device from somewhere

  ID3D10Texture2D *m_texture;  //our test texture
  ID3D10ShaderResourceView **m_srv;  //our array of views, one per mip level, each bound to all slices

  UINT width=8;  //texture width
  UINT height=8;  //texture height
  UINT sliceCount=8;  //number of array elements in texture array
  UINT mipCount=4;  //number of all mip levels for size 8x8 (1+log(8)/log(2))

  m_srv=new ID3D10ShaderResourceView*[mipCount];  //create shader resource view array, one view per mip level

  D3D10_TEXTURE2D_DESC desc;
  ZeroMemory(&desc, sizeof(desc));  //zero out description of texture
  desc.Format=DXGI_FORMAT_R32_FLOAT;
  desc.Width=width;
  desc.Height=height;  
  desc.ArraySize=sliceCount;
  desc.MipLevels=0; //automatic calculation of all mip levels

  desc.BindFlags=D3D10_BIND_SHADER_RESOURCE|D3D10_BIND_RENDER_TARGET;  //shader resource and render target
  //desc.CPUAccessFlags=0;
  //desc.MiscFlags=0;
  //desc.Usage=0;

  desc.SampleDesc.Count=1;  //default sampling
  //desc.SampleDesc.Quality=0; 

  dev->CreateTexture2D(&desc, NULL, &m_texture);  //create texture

  D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc;
  ZeroMemory(&srvDesc, sizeof(srvDesc));  //zero out shader resource view description
  srvDesc.Format=DXGI_FORMAT_R32_FLOAT;
  srvDesc.ViewDimension=D3D10_SRV_DIMENSION_TEXTURE2DARRAY;  //it's a 2D texture array
  srvDesc.Texture2DArray.FirstArraySlice=0;  //we want all slices, so begin at 0
  srvDesc.Texture2DArray.ArraySize=sliceCount;  //we want all slices, so take all

  for(UINT s=0;s<mipCount;++s)  //create a view for each mip slice/mip level
  {
    srvDesc.Texture2DArray.MostDetailedMip=s;  //this mip level is the most detailed one for the bound resource
    srvDesc.Texture2DArray.MipLevels=1;  //we just want one mip level
    dev->CreateShaderResourceView(m_texture, &srvDesc, &m_srv[s]);  //create view for mip level s
  }
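As a sanity check on the numbers above: the full mip count for a width x height texture is 1 + floor(log2(max(width, height))), which gives the 4 levels used for the 8x8 case, and is also what desc.MipLevels = 0 asks the runtime to generate automatically. A small standalone sketch of that calculation:

```cpp
#include <algorithm>
#include <cassert>

// Number of mip levels down to 1x1 for a width x height texture,
// i.e. 1 + floor(log2(max(width, height))).
unsigned int FullMipCount(unsigned int width, unsigned int height)
{
    unsigned int count = 1;
    unsigned int dim = std::max(width, height);
    while (dim > 1)
    {
        dim >>= 1;  //halve the largest dimension per mip level
        ++count;
    }
    return count;
}
```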



So what do you guys say… my error or something else strange?

Cheers
Maik
Mmm, it looks like you're confusing a texture array resource with an array of separate resources.

You are making a mistake with the targets.

If you need your pixel shader to return a float4 per target, you have to create a real array of textures:
ID3D10Texture2D     *textures[10];

Then you create 10 render target views and 10 depth stencil views, and set all 10 with OMSetRenderTargets.

You use the TEXTURE2DARRAY flag in resource creation only if you want indexed multiple render targets.

In that case there is only one render target, but an index that determines which slice your draw goes to (the uint SV_RenderTargetArrayIndex semantic).

You are forced to use a geometry shader in order to set, per primitive, the index of the slice you want to render to. In HLSL, the Texture2DArray object has a Sample() method that takes a float3 as texcoord: two components for the coordinates, and one for the slice index.

If you need more information....
Thanks for your answer. Em, no... I'm making no mistake with render targets or views. That's not the main problem... the main problem is that the DX runtime isn't able to allocate the resource.

I discussed the problem over at another DirectX forum. We found out that with the reference device everything is fine, no runtime E_OUTOFMEMORY. Somehow the hardware device has a problem with this special case of view creation.

The bug can be reproduced on other computers as well.

My Hardware:
GeForce 8800GTX 768MB
ForceWare 175.19
DirectX10 SDK June 2008
Windows Vista 64Bit

I reported the bug to Microsoft DirectX support.

Cheers
Maik
How many iterations of the loop pass before it fails?

And how many resource views do you have in general, in your application?
Sirob Yes.» - status: Work-O-Rama.

This topic is closed to new replies.
