
[SlimDX] DX11 Structured buffer creation error

Recommended Posts

Sieras    316
Hi,

First of all, I'd like to say thanks for all the hard work and support you guys are giving here. I really appreciate all the work you've put into SlimDX, keep it up. I've also learned a lot from this site simply by searching and reading the forums and articles. Cheers!

Now, I'm a beginner in 3D programming (I started learning not that long ago), but I'm not new to programming in general. I put together a basic DX10 rendering engine in C# to get familiar with these things; it will be used for my project later. This evening I decided to play around with compute shaders (I've always been excited about this DX11 feature) and to add some basic GPU compute support to my engine. My starting point was the BasicCompute11 sample from the latest DirectX SDK, which I'm trying to port to SlimDX with C#, but it hasn't worked out as I expected: I'm getting an E_INVALIDARG error when I try to create a structured buffer. After some more testing and debugging I'm out of ideas, and I hope you'll be able to tell me what I'm missing here.

Here's the code relevant to my problem. The struct element that will fill the buffer:
          struct test_buff
          {
              public int ii;
              public float ff;        
          }
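For reference, here is the same struct with an explicit layout attribute, just as a sketch. It shouldn't change anything, since two 4-byte fields already give a sequential 8-byte layout, but it makes the intended stride explicit:

          using System.Runtime.InteropServices;

          // Same element type with an explicit sequential layout; Marshal.SizeOf
          // still reports 8 bytes (int + float), the stride the shader side expects.
          [StructLayout(LayoutKind.Sequential)]
          struct test_buff
          {
              public int ii;
              public float ff;
          }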


And the part where I create the buffer:
            BufferDescription b_desc = new BufferDescription();
            b_desc.BindFlags = (BindFlags)0x80L | BindFlags.ShaderResource; // 0x80 = D3D11_BIND_UNORDERED_ACCESS (the UnorderedAccess bind flag)
            b_desc.SizeInBytes = Marshal.SizeOf(typeof(test_buff)) * 1024;
            b_desc.OptionFlags = ResourceOptionFlags.StructuredBuffer;
            b_desc.StructureByteStride = Marshal.SizeOf(typeof(test_buff));
            b_desc.Usage = ResourceUsage.Default;
            b_desc.CpuAccessFlags = CpuAccessFlags.None;            

            Buffer buf1 = new Buffer(device, b_desc);


After enabling the debug layer I get this error output:
D3D11: ERROR: ID3D11Device::CreateBuffer: When creating a buffer with the MiscFlag D3D11_RESOURCE_MISC_BUFFER_STRUCTURED specified, the StructureByteStride must be greater than zero, no greater than 2048, and a multiple of 4. [ STATE_CREATION ERROR #2097340: CREATEBUFFER_INVALIDSTRUCTURESTRIDE ]
First-chance exception at 0x74e8b727 in Demo.exe: Microsoft C++ exception: _com_error at memory location 0x0032ed3c..
D3D11: ERROR: ID3D11Device::CreateBuffer: CreateBuffer returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #69: CREATEBUFFER_INVALIDARG_RETURN ]


It seems like my b_desc.StructureByteStride is invalid or something... although I've checked: Marshal.SizeOf(typeof(test_buff)) returns 8, as it's supposed to in this case (which is indeed a multiple of 4). I've even tried setting StructureByteStride to other values, which didn't change anything. Any help or tips would be great :) If you need any more info, just ask. Thanks for your time. [Edited by - Sieras on November 6, 2009 4:53:03 PM]

sirob    1181
You've said it yourself:
Quote:

Marshal.SizeOf(typeof(test_buff)) returns 8, as it's supposed to in this case
So 8 * 1024 = 8192, which is well above the 2048 upper limit stated in the error message. Looks like your buffer is simply too large.

Sieras    316
Quote:
Original post by sirob
You've said it yourself:
Quote:

Marshal.SizeOf(typeof(test_buff)) returns 8, as it's supposed to in this case
So 8 * 1024 = 8192, which is well above the 2048 upper limit stated in the error message. Looks like your buffer is simply too large.


Thanks for your reply.

I thought so too at first, and I tried changing that value so the whole buffer size would be smaller than 2048, but it still gives me the same error. In fact, the BasicCompute11 SDK sample works fine with 1024 elements times the 8-byte struct size, so the total size shouldn't be a problem here.

It seems like the debug layer complains about the StructureByteStride value, not the whole buffer size.
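Just to rule the stride out completely, I checked it against the exact constraints quoted in the debug output (greater than zero, no greater than 2048, a multiple of 4). Here is a quick sketch of that check; a value of 8 passes all three, and as far as I can tell the 2048 limit applies only to the stride, not to SizeInBytes:

            // Checking the stride against the constraints from the error message:
            // "greater than zero, no greater than 2048, and a multiple of 4".
            int stride = Marshal.SizeOf(typeof(test_buff));     // 8 here
            bool strideOk = stride > 0 && stride <= 2048 && (stride % 4) == 0;
            Console.WriteLine("stride = {0}, valid = {1}", stride, strideOk);   // prints: stride = 8, valid = True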

Any more ideas? :)

Dysprosium88    100
Hi Sieras

I've also translated BasicCompute11 from the DirectX SDK to C# / SlimDX to get familiar with GPU computing.

There were a few bugs in SlimDX that were fixed in revisions r1237 (StructureByteStride not used) and r1242 (BufferEx properties missing).

So make sure you have the latest SVN trunk of SlimDX, especially when you are using DirectX 11 features.
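For reference, this is roughly what the buffer and UAV creation look like against a post-r1242 trunk build. Treat it as a sketch only: I'm writing the UnorderedAccessViewDescription property names from memory, and I'm assuming the build exposes BindFlags.UnorderedAccess (the same 0x80 value cast by hand in the first post):

            // Sketch: structured buffer plus its UAV, assuming a post-r1242 SlimDX trunk build.
            BufferDescription desc = new BufferDescription();
            desc.BindFlags = BindFlags.ShaderResource | BindFlags.UnorderedAccess; // 0x80
            desc.SizeInBytes = Marshal.SizeOf(typeof(test_buff)) * 1024;
            desc.OptionFlags = ResourceOptionFlags.StructuredBuffer;
            desc.StructureByteStride = Marshal.SizeOf(typeof(test_buff));
            desc.Usage = ResourceUsage.Default;
            desc.CpuAccessFlags = CpuAccessFlags.None;

            Buffer buffer = new Buffer(device, desc);

            // A structured-buffer UAV uses Format.Unknown and counts elements, not bytes.
            UnorderedAccessViewDescription uavDesc = new UnorderedAccessViewDescription();
            uavDesc.Format = SlimDX.DXGI.Format.Unknown;
            uavDesc.Dimension = UnorderedAccessViewDimension.Buffer;
            uavDesc.FirstElement = 0;
            uavDesc.ElementCount = 1024;

            UnorderedAccessView uav = new UnorderedAccessView(device, buffer, uavDesc);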

Sieras    316
Quote:
Original post by Dysprosium88
Hi Sieras

I've also translated BasicCompute11 from the DirectX SDK to C# / SlimDX to get familiar with GPU computing.

There were a few bugs in SlimDX that were fixed in revisions r1237 (StructureByteStride not used) and r1242 (BufferEx properties missing).

So make sure you have the latest SVN trunk of SlimDX, especially when you are using DirectX 11 features.


Thanks for your reply. I've downloaded and compiled the latest HEAD version from SVN and now it's working as it's supposed to. Now I can get back to work :)
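In case it helps anyone else porting BasicCompute11, this is roughly how the dispatch side looks once the buffer and its UAV are created. It is only a sketch: the shader file name, entry point and thread-group size are placeholders, and I'm writing the SlimDX wrapper calls (ComputeShader.Set, SetUnorderedAccessView, Dispatch) from memory, so double-check them against your build.

            // Sketch: compile the compute shader, bind the UAV and dispatch.
            // File name, entry point and thread-group size are placeholders.
            ShaderBytecode bytecode = ShaderBytecode.CompileFromFile(
                "BasicCompute.hlsl", "CSMain", "cs_5_0", ShaderFlags.None, EffectFlags.None);
            ComputeShader cs = new ComputeShader(device, bytecode);

            DeviceContext context = device.ImmediateContext;
            context.ComputeShader.Set(cs);
            context.ComputeShader.SetUnorderedAccessView(uav, 0);
            context.Dispatch(1024 / 64, 1, 1); // assuming [numthreads(64, 1, 1)] in the shader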

Thanks again. I'll try to keep SlimDX up-to-date next time.


