I'm converting my OpenGL 4 renderer to DirectX 11. Once I got past rendering a simple textured triangle, the rest went very smoothly. I now have a deferred shading setup with ambient/point/directional lights and am reimplementing the shadow mapping parts, starting with point lights (omnidirectional shadowmaps).
Here's how I create my texture cubemap:
// create shadowmap texture/view/srv
D3D11_TEXTURE2D_DESC depthBufferDesc;
ZeroMemory(&depthBufferDesc, sizeof(D3D11_TEXTURE2D_DESC));
depthBufferDesc.ArraySize = 6;
// R32_TYPELESS rather than D32_FLOAT, since the same resource is used
// both as a depth-stencil view and as a shader resource view
depthBufferDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthBufferDesc.Width = shadowmapSize;
depthBufferDesc.Height = shadowmapSize;
depthBufferDesc.MipLevels = 1;
depthBufferDesc.SampleDesc.Count = 1;
depthBufferDesc.Usage = D3D11_USAGE_DEFAULT;
depthBufferDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
depthBufferDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
DXCALL(device->CreateTexture2D(&depthBufferDesc, NULL, &mShadowmapTexture));

// explicit view descriptions are required since the resource format is typeless
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
ZeroMemory(&dsvDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
dsvDesc.Texture2DArray.ArraySize = 6;
DXCALL(device->CreateDepthStencilView(mShadowmapTexture, &dsvDesc, &mShadowmapView));

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
srvDesc.TextureCube.MipLevels = 1;
DXCALL(device->CreateShaderResourceView(mShadowmapTexture, &srvDesc, &mShadowmapSRV));
I'm currently at a loss as to how to select which face of the cubemap to use as the depth buffer. In OpenGL I would iterate six times and select each cubemap face with:
GLCALL(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, mShadowmap, 0));
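For context, the surrounding loop looks roughly like this (simplified; mFramebuffer, RenderDepthPass and faceViewMatrices are placeholders for my actual renderer code):

GLCALL(glBindFramebuffer(GL_FRAMEBUFFER, mFramebuffer));
for (int face = 0; face < 6; face++)
{
    // attach one cube face as the depth attachment, then do a depth-only pass
    GLCALL(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, mShadowmap, 0));
    GLCALL(glClear(GL_DEPTH_BUFFER_BIT));
    RenderDepthPass(faceViewMatrices[face]);    // placeholder for the real scene pass
}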
But in DirectX 11, a cubemap seems to be defined as a Texture2D with an array size of 6. Looking at OMSetRenderTargets(), there doesn't seem to be any way to specify which cubemap face to use there either.
Any pointers on how to select the proper face to use as the depth target?
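For what it's worth, the closest thing I can come up with is creating six depth-stencil views up front, one per array slice, through the Texture2DArray member of D3D11_DEPTH_STENCIL_VIEW_DESC, and binding one of them each pass. Something like this (untested, so I may well be off):

// untested guess: one DSV per cube face, selected via FirstArraySlice
ID3D11DepthStencilView* mShadowmapFaceViews[6];
for (UINT face = 0; face < 6; face++)
{
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc;
    ZeroMemory(&dsvDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
    dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
    dsvDesc.Texture2DArray.MipSlice = 0;
    dsvDesc.Texture2DArray.FirstArraySlice = face;  // pick out a single cube face
    dsvDesc.Texture2DArray.ArraySize = 1;
    DXCALL(device->CreateDepthStencilView(mShadowmapTexture, &dsvDesc, &mShadowmapFaceViews[face]));
}

// ...then per shadow pass, bind that face as the only depth target:
context->ClearDepthStencilView(mShadowmapFaceViews[face], D3D11_CLEAR_DEPTH, 1.0f, 0);
context->OMSetRenderTargets(0, NULL, mShadowmapFaceViews[face]);

Is that the intended approach, or is there some other way to do it?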
I've been browsing around like mad on MSDN; it's fine as a reference for structures and functions, but awful for anything else.