
Husbj

Members
  • Content count

    133
  • Joined

  • Last visited

Community Reputation

658 Good

About Husbj

  • Rank
    Member
  1. No worries, thank you for taking the time; I truly appreciate it :) So yeah, that's spot on. Thanks for the in-depth explanations, which helped a lot (I've got to admit I really should have signed up for more math courses at uni). Using your "quick fix" solution I get this much more pleasing result:

Now if I could just get the light bleeding at the base (where the meshes and ground connect in this case) reduced, so that the beginnings of the shadows don't disappear like in the above screenshot, I think this should almost be ready to deploy :)

I will look into your suggestions about using a linear shadow map further when I get some time off. I have to ask though, will it really accomplish much? As far as I understand it, the precision is pretty much lost as soon as the depth maps are rendered, as they are non-linear in the first place. So converting back into view-space depth in the EVSM generation (post-processing) step should not really improve the resolution, or am I missing something here? For the record, my depth map is D32_FLOAT and the EVSM map R32G32B32A32_FLOAT. I can see how it would have a higher impact if the latter used 16-bit channels instead.

Again, thanks a lot for your time and effort MJP. Cheers!
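For reference, the view-space conversion discussed above can be undone analytically. The following is a minimal sketch, assuming the same LH projection convention used later in this thread (depth = matProj._43 / viewZ + matProj._33); the function and parameter names are illustrative, not from the original post:

// Minimal sketch (illustrative names, not from the original post): recovering linear view-space
// depth from a non-linear depth-buffer sample, assuming the LH perspective projection convention
// used elsewhere in this thread, i.e. depth = matProj._43 / viewZ + matProj._33.
float LinearizeShadowDepth(float storedDepth, float4x4 matProj)
{
    // Invert depth = _43 / viewZ + _33  =>  viewZ = _43 / (depth - _33)
    return matProj._43 / (storedDepth - matProj._33);
}

// The linearized value could then be rescaled to 0..1 over the light's range before the EVSM warp,
// e.g. saturate((viewZ - nearZ) / (farZ - nearZ)), so the exponent budget is spent more evenly.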
  2. Certainly. As a matter of fact my EVSM conversion is based on your Shadows example implementation :) I have quite a few different functions coming together in my lighting system, and I'm also using various wrapper classes for D3D functionality, but I hope it will make sense.

So first out, the shadow cube faces are rendered in a single pass using a geometry shader which copies the scene meshes into 6 different spaces and writes them to different render target array indices. The main program code for this looks like so:

// This is a temporary Texture2DArray with 8 slices. Initial depth maps are rendered here until
// it fits no more (ie. 2x4 for 2 directional lights, can pad out with single slices for spot lights).
// For point lights I'm currently just rendering to slices 0..5.
gGlob.pShadowCamera->SetDepthStencilBuffer(evsmDepthBuffer);

// Provide cube mapping view-projection transforms to the shader technique
const float radius = light->GetRange();
const XMMATRIX matView = XMMatrixInverse(nullptr, light->GetTransform().GetFinalTransform());
const XMMATRIX matProj = XMMatrixPerspectiveFovLH((float)D3DXToRadian(90.0f), 1.0f, 0.1f, radius);
{
    XMFLOAT3 lightPos;
    light->GetTransform().GetFinalPosition(lightPos);
    const XMVECTOR origin = XMLoadFloat3(&lightPos);
    static const XMVECTOR lookat[6] {
        origin + XMVectorSet(1.0f, 0.0f, 0.0f, 0.0f),
        origin + XMVectorSet(-1.0f, 0.0f, 0.0f, 0.0f),
        origin + XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f),
        origin + XMVectorSet(0.0f, -1.0f, 0.0f, 0.0f),
        origin + XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f),
        origin + XMVectorSet(0.0f, 0.0f, -1.0f, 0.0f),
    };
    static const XMVECTOR up[6] {
        XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f),
        XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f),
        XMVectorSet(0.0f, 0.0f, -1.0f, 0.0f),
        XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f),
        XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f),
        XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f)
    };
    for (size_t n = 0; n < 6; n++)
        cbCubeMatrix->Set<XMMATRIX>(sizeof(XMMATRIX) * n, XMMatrixLookAtLH(origin, lookat[n], up[n]) * matProj);
    auto pBuffer = cbCubeMatrix->ResolveD3DBuffer();
    gGlob.pDevCon->GSSetConstantBuffers(ResourceSlotBindings::CB_CUBEMAP_TRANSFORM, 1, &pBuffer); // Only needed by the geometry shader
}

/* MESH CULLING CODE OMITTED */

// Render the depth (shadow) map (note that this automatically clears the associated depth-stencil view)
gGlob.pShadowCamera->RenderShadowMap(SHADOWTECH_POINT, gGlob.pCullingCamera, true); // Only a single render pass here so we can reset the per-pass data

// And store the projection matrix so that it is available for transforming sampled depths on the GPU
light->SetProjectionMatrix(matProj); // This one is transposed and written into a StructuredBuffer if different from the previous value

And here are the shaders used in the above render:

struct gs_depth_in {
    float4 pos : SV_POSITION;
    float4 posW : WORLD_POSITION;
    float2 texcoord : TEXCOORD;
};

struct gs_depth_out {
    float4 pos : SV_POSITION;
    float2 texcoord : TEXCOORD;
    uint rtIndex : SV_RENDERTARGETARRAYINDEX;
};

cbuffer CubeMatrix : register(CBID_CUBEMAP_TRANSFORM) {
    float4x4 CubeViewProj[6];
};

gs_depth_in VS_CubicalShadowMapping(vs_in IN) {
    gs_depth_in OUT;
    OUT.posW = mul(IN.pos, Transform[IN.iID].World);
    OUT.pos = mul(OUT.posW, ViewProj);
    OUT.texcoord = IN.texcoord;
    return OUT;
}

[maxvertexcount(18)]
void GS_CubicalShadowMapping(triangle gs_depth_in input[3], inout TriangleStream<gs_depth_out> TriStream) {
    gs_depth_out output;
    for (uint rt = 0; rt < 6; rt++) {     // Should be automatically unrolled by the
        for (uint v = 0; v < 3; v++) {    // shader compiler as the range is constant
            output.pos = mul(input[v].posW, CubeViewProj[rt]);
            output.texcoord = input[v].texcoord;
            output.rtIndex = rt;
            TriStream.Append(output);
        }
        TriStream.RestartStrip();
    }
}

void PS_CubicalShadowMapping(gs_depth_out IN) {
    clip(DiffuseMap.Sample(DiffuseSampler, IN.texcoord).a - 0.1);
    return;
}

The pixel shader is only used for objects with transparent textures, ie. in my example scene I'm using a null pixel shader.

Following this the depth map arrays are converted to the exponential variance map. This is the actual TextureCube, or TextureCubeArray if using feature level 4.1 or above (the temporary depth map is just a texture array without the cubemap flag set). The CPU-side code for this is as follows:

// This version handles multiple simultaneous render targets
void LightManager::GenerateExponentialShadowMaps(PImage depthArray, PImage output, size_t firstDepthSlice, size_t firstOutputSlice, size_t numSlices) {
#ifndef NDEBUG
    assert(numSlices <= 8);
#endif
    // Assume that the input (depth buffer) is currently bound to the output depth-stencil buffer slot; as such we'll need to unbind it.
    gGlob.pShadowCamera->SetDepthStencilBuffer(nullptr); // We don't need any depth stencil buffer for this; indeed the shader technique disables depth writes / reads

    // Set output render targets
    for(size_t n = 0; n < numSlices; n++) {
        gGlob.pShadowCamera->SetRenderTarget(output, firstOutputSlice + n, n, true);
        cbAddressEVSM->Set<UINT>(n * sizeof(XMUINT4), firstDepthSlice + n); // NOTE: HLSL interprets each array element as a 4-component (128-bit) vector in order to speed up indexing
    }
    cbAddressEVSM->ResolveD3DBuffer(); // Update technique cbuffer to reflect the target input depth slices to sample

    // Begin scene drawing
    gGlob.BeginRenderPass(gGlob.pShadowCamera, false, true, true); // We don't have to clear the render target or set the frame buffer as the shaders do not rely on these

    // Set input layout
    Mesh::SetCurrentTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);

    // Apply necessary shader technique changes (TODO: consider changing this to only update the data that actually *needs* to be changed)
    techEVSMGenerator[numSlices - 1]->Set();

    // Set the input texture
    gGlob.pCurrentTexture[0] = depthArray.operator->();
    gGlob.pCurrentTexResourceView[0] = depthArray->GetResourceView();
    gGlob.pDevCon->PSSetShaderResources(0, 1, gGlob.pCurrentTexResourceView);

    // Draw screen quad
    gGlob.pDevCon->DrawInstanced(4, 1, 0, 0);

    // Unbind the texture again in anticipation of this probably being rendered to again in the (likely immediately) following depth pass
    gGlob.pCurrentTexture[0] = nullptr;
    gGlob.pCurrentTexResourceView[0] = nullptr;
    gGlob.pDevCon->PSSetShaderResources(0, 1, gGlob.pCurrentTexResourceView);

    // Conclude draw pass (the render target should be overwritten before there is any reason to assume it would be bound as input)
    gGlob.EndRenderPass(gGlob.pShadowCamera, true);
}

The "shader technique" mentioned in the comments above is just a collection of shaders, constant buffers and various states.
The states used for the techEVSMGenerators disable blending (BLEND_ONE for source, BLEND_ZERO for target), set AlphaLineAntiAliasingEnabled = true and disable depth reads / writes like so:

dss.disabled = PDepthStencilState(new DepthStencilState());
dss.disabled->SetDepthEnabled(false);
dss.disabled->SetStencilEnabled(false);
dss.disabled->SetDepthFunction(D3D11_COMPARISON_LESS);
dss.disabled->SetStencilWriteMask(0xff);
dss.disabled->SetStencilReadMask(0xff);
dss.disabled->SetFrontFaceStencilFunction(D3D11_COMPARISON_ALWAYS);
dss.disabled->SetFrontFaceStencilPassOperation(D3D11_STENCIL_OP_KEEP);
dss.disabled->SetFrontFaceStencilFailOperation(D3D11_STENCIL_OP_KEEP);
dss.disabled->SetFrontFaceStencilDepthFailOperation(D3D11_STENCIL_OP_KEEP);
dss.disabled->SetBackFaceStencilFunction(D3D11_COMPARISON_ALWAYS);
dss.disabled->SetBackFaceStencilPassOperation(D3D11_STENCIL_OP_KEEP);
dss.disabled->SetBackFaceStencilFailOperation(D3D11_STENCIL_OP_KEEP);
dss.disabled->SetBackFaceStencilDepthFailOperation(D3D11_STENCIL_OP_KEEP);

The vertex shader creates a "screen quad" like so (the viewport size is set to the dimensions of the EVSM, which is the render target at this point):

// Vertex locations in clip space for a screen quad
static const float4x4 Vertex = {
    -1, -1, 0, 1,
    -1,  1, 0, 1,
     1, -1, 0, 1,
     1,  1, 0, 1
};

// Likewise we can set up a constant array of the texture coordinates for the corner vertices of the screen quad
static const float4x2 Texcoord = {
    0, 1,
    0, 0,
    1, 1,
    1, 0
};

// Simple vertex shader output struct
struct vs_out {
    float4 pos : SV_POSITION;
    float2 texcoord : TEXCOORD;
};

// The vertex shader program
vs_out VS_ScreenQuad(uint id : SV_VertexId) {
    vs_out OUT;
    OUT.pos = Vertex[id];
    OUT.texcoord = Texcoord[id];
    return OUT;
}

The pixel shader is set based on MSAA settings and the number of simultaneous inputs / outputs. Corresponding multisampling is enabled on the temporary depth buffer array described above. Here is the source for the version that converts 6 maps with 8x MSAA, which is the one used in my previously posted images. Take note that changing the MSAA level / disabling it in no way affects the shadow artifacts.

struct sOutput6 {
    float4 channel1 : SV_Target0;
    float4 channel2 : SV_Target1;
    float4 channel3 : SV_Target2;
    float4 channel4 : SV_Target3;
    float4 channel5 : SV_Target4;
    float4 channel6 : SV_Target5;
};

// ....

sOutput6 PS_EVSM_6_MSAA8(vs_out IN) : SV_Target0 {
    sOutput6 OUT;
    const float weight = 1.0 / 8.0;
    const int3 coord[6] = {
        int3(IN.pos.xy, SourceSlice[0]),
        int3(IN.pos.xy, SourceSlice[1]),
        int3(IN.pos.xy, SourceSlice[2]),
        int3(IN.pos.xy, SourceSlice[3]),
        int3(IN.pos.xy, SourceSlice[4]),
        int3(IN.pos.xy, SourceSlice[5])
    };
    float4 average[6] = {
        float4(0, 0, 0, 0), float4(0, 0, 0, 0), float4(0, 0, 0, 0),
        float4(0, 0, 0, 0), float4(0, 0, 0, 0), float4(0, 0, 0, 0)
    };
    [unroll]
    for (uint s = 0; s < 6; s++) {
        [unroll]
        for (uint n = 0; n < 8; n++) {
            average[s] += evsm::ComputeMoments(DepthMSAA8.Load(coord[s], n)) * weight;
        }
    }
    OUT.channel1 = average[0];
    OUT.channel2 = average[1];
    OUT.channel3 = average[2];
    OUT.channel4 = average[3];
    OUT.channel5 = average[4];
    OUT.channel6 = average[5];
    return OUT;
}

And the EVSM helper functions:

namespace evsm {
    /**
     * Helper function; applies exponential warp to the specified depth value (given in the 0..1 range)
     **/
    float2 WarpDepth(float depth, float2 exponents) {
        depth = 2.0 * depth - 1.0; // Rescale to (-1..+1) range
        return float2(exp(exponents.x * depth), -exp(-exponents.y * depth));
    }

    /**
     * Helper function; retrieves the negative and positive exponents used for warping depth values.
     * The returned exponents will be clamped to fit in a 32-bit floating point value.
     **/
    float2 GetSafeExponents(float positiveExponent, float negativeExponent) {
        float2 res = float2(positiveExponent, negativeExponent);
        return min(res, 42.0);
    }

    /**
     * Reduces light bleeding
     **/
    float ReduceLightBleeding(float pMax, float amount) {
        // Remove the [0, amount] tail and linearly rescale [amount, 1].
        return Linstep(amount, 1.0f, pMax);
    }

    /**
     * Helper function; computes probabilistic upper bound
     **/
    float ChebyshevUpperBound(float2 moments, float mean, float minVariance, float lightBleedingReduction) {
        // Compute variance
        float variance = moments.y - (moments.x * moments.x);
        variance = max(variance, minVariance);

        // Compute probabilistic upper bound
        float d = mean - moments.x;
        float pMax = variance / (variance + (d * d));
        pMax = ReduceLightBleeding(pMax, lightBleedingReduction);

        // One-tailed Chebyshev
        return (mean <= moments.x ? 1.0f : pMax);
    }

    /**
     * Helper function; computes the EVSM moments based on the provided (clip) depth value (range 0..1).
     **/
    float4 ComputeMoments(float depth) {
        float2 vsmDepth = evsm::WarpDepth(depth, VarianceExponents); // UPDATE: exponents provided through cbuffer, pre-clamped
        return float4(vsmDepth.xy, vsmDepth.xy * vsmDepth.xy);
    }
};

And the Linstep utility function:

/**
 * Linear "stepping" function, akin to the smoothstep intrinsic.
 * Returns 0 if value < minValue, 1 if value > maxValue and otherwise linearly
 * rescales the value such that minValue yields 0 and maxValue yields 1.
 **/
float Linstep(float minValue, float maxValue, float value) {
    return saturate((value - minValue) / (maxValue - minValue));
}

The EVSM cube can then be blurred in two steps via an intermediary texture. Disabling this blurring does not affect the artifacts, but here are the pixel shaders all the same.
The same screen quad vertex shader from above is used:

/* Six render targets */
ps_out6 PS_MultiBlur6_Horizontal(vs_out IN) {
    uint2 coords = uint2(IN.pos.xy);
    ps_out6 res; // Zero-initialized by default
    [loop]
    for(int i = -SampleRadius; i <= SampleRadius; i++) {
        uint2 tc = uint2((uint)clamp((int)coords.x + i, 0, MapSize), coords.y);
        float weight = Kernel[KernelOffset + i].weight;
        res.result1 += TextureArray[uint3(tc, ArraySlice)] * weight;
        res.result2 += TextureArray[uint3(tc, ArraySlice + 1)] * weight;
        res.result3 += TextureArray[uint3(tc, ArraySlice + 2)] * weight;
        res.result4 += TextureArray[uint3(tc, ArraySlice + 3)] * weight;
        res.result5 += TextureArray[uint3(tc, ArraySlice + 4)] * weight;
        res.result6 += TextureArray[uint3(tc, ArraySlice + 5)] * weight;
    }
    return res;
}

ps_out6 PS_MultiBlur6_Vertical(vs_out IN) {
    uint2 coords = uint2(IN.pos.xy);
    ps_out6 res; // Zero-initialized by default
    [loop]
    for(int i = -SampleRadius; i <= SampleRadius; i++) {
        uint2 tc = uint2(coords.x, (uint)clamp((int)coords.y + i, 0, MapSize));
        float weight = Kernel[KernelOffset + i].weight;
        res.result1 += TextureArray[uint3(tc.xy, ArraySlice)] * weight;
        res.result2 += TextureArray[uint3(tc.xy, ArraySlice + 1)] * weight;
        res.result3 += TextureArray[uint3(tc.xy, ArraySlice + 2)] * weight;
        res.result4 += TextureArray[uint3(tc.xy, ArraySlice + 3)] * weight;
        res.result5 += TextureArray[uint3(tc.xy, ArraySlice + 4)] * weight;
        res.result6 += TextureArray[uint3(tc.xy, ArraySlice + 5)] * weight;
    }
    return res;
}

That concludes the shadow map generation. Finally the scene is rendered "normally". This is the relevant part of the pixel shader that samples the shadow map. The PointLight[x].matProj matrix is the same one that was set near the beginning of this post and is used to translate the distance to the light into the appropriate light space depth range used by the shadow maps. P is the world-space position of the fragment currently being shaded and PointLight[lightId].wPos is the world-space position of the light source. The PointLightTable is used to only process visible point lights in the scene.

/* Process point lights */
tableIndex = 0;
lightId = pointLightData.y > 0 ? PointLightTable.Load(pointLightData.x) : 0xffffffff; // Used to offset by + tableIndex * 4, but that is always 0 here
[loop]
for(n = 0; n < NumPointLights; n++) {
    // Figure out sample location vector (normal) and partial derivatives
    float3 shadowPos = -(PointLight[lightId].wPos - P);
    float3 shadowPosDDX = ddx(shadowPos);
    float3 shadowPosDDY = ddy(shadowPos);
    [branch]
    if(n == lightId) {
        sLightContrib contrib;
        contrib = ComputePointLightContribNew(PointLight[lightId], V, P.xyz, N, shadowPos, shadowPosDDX, shadowPosDDY);
        total.diffuse += contrib.diffuse;
        total.specular += contrib.specular;

        // Look for next light table index if applicable
        if(++tableIndex < pointLightData.y)
            lightId = PointLightTable.Load(pointLightData.x + (tableIndex * 4));
        else
            lightId = 0xffffffff;
    }
}

/**
 * New, simplified implementation for use with cubical exponential variance shadow maps.
 **/
sLightContrib ComputePointLightContribNew(sPointLight light, float3 V, float3 P, float3 N, float3 shadowPos, float3 shadowPosDDX, float3 shadowPosDDY) {
    sLightContrib res = (sLightContrib)0;

    // Compute distance and a direction vector between the currently rendered pixel's world position (P) and the light's position
    float3 L = light.wPos - P;
    float dist = length(L);
    L /= dist;

    // Sample shadow map if applicable; no need to do lighting calculations if in (complete) shadow
    float depth = (light.matProj._43 / length(shadowPos)) + light.matProj._33;
    float shadowFactor = SampleExponentialVarianceShadowCube(depth, shadowPos, shadowPosDDX, shadowPosDDY, light.lightBleedingReduction, light.minVarianceBias, light.shadowMapId);
    [branch]
    if(shadowFactor != 0.0) {
        // Compute attenuation
        float att = saturate(CalculateAttenuation(dist, light.attCoef.x, light.attCoef.y, light.attCoef.z)) * light.intensity * shadowFactor;

        // Compute diffuse and specular contributions of this light source
        res.diffuse = CalculateDiffuseContrib(L, N, light.colour) * att;
        res.specular = CalculateSpecularContrib(V, L, N, light.colour, 20.0) * att; // TODO: Allow changing the specular intensity later on, probably through another argument to this function
    }
    return res;
}

And finally, the actual shadow map sampling function. The EVSM functions are the same as those presented above:

/**
 * Samples a cubical exponential variance shadow map.
 * As the sampled texture is a cubemap, it is sampled using the provided normal vector (which doesn't need to
 * be normalized). The light space depth should correspond to what is stored in the shadow map, ie. be a
 * non-linear light projection clip space coordinate.
 **/
float SampleExponentialVarianceShadowCube(
    float lightSpaceDepth,
    float3 normal,
    float3 normalDDX,
    float3 normalDDY,
    float lightBleedingReduction,
    float minVarianceBias,
    uint arraySlice
) {
    // Sample shadow map
    float2 warpedDepth = evsm::WarpDepth(lightSpaceDepth, VarianceExponents);
#if SHADER_MODEL >= 41
    float4 moments = ShadowCubeEV.SampleGrad(ShadowMapEVSampler, float4(normal, arraySlice), normalDDX, normalDDY);
#else
    float4 moments = ShadowCubeEV.SampleGrad(ShadowMapEVSampler, normal, normalDDX, normalDDY);
#endif

    float2 depthScale = minVarianceBias * 0.01 * VarianceExponents * warpedDepth;
    float2 minVariance = depthScale * depthScale;
    float posContrib = evsm::ChebyshevUpperBound(moments.xz, warpedDepth.x, minVariance.x, lightBleedingReduction);
    float negContrib = evsm::ChebyshevUpperBound(moments.yw, warpedDepth.y, minVariance.y, lightBleedingReduction);
    return min(posContrib, negContrib);
}

That should be about it, let me know if I've missed anything or if something is unclear (I bet lots of things are ^_^).

Edit: I apologize for the bad spacing; apparently the editor insists on removing empty lines between code snippets.
Edit 2: Added the evsm::ComputeMoments HLSL function that I accidentally left out before.
  3. No worries. The artifact would be the large "4-pointed star" false shadow on the floor mesh. I didn't make it clear, but the light source is in the middle of the ring of digit meshes, a bit above their top heights, so it isn't at floor level. Here are two new screenshots, including a decal to indicate the light source's position in the scene:

[Screenshot: floor self-shadowing with harder edges to better show the outline.]

As you can see here there is significant self-shadowing on the '4' and '1' meshes, which align with the corners of the shadow map faces (the shadowed areas in the first image), while the other meshes (which end up more in the middle of their respective shadow map faces) do not show this. The floor mesh has been removed from the shadow mapping calculations to better emphasize that a similar self-shadowing effect is present on other meshes as well.

Edit: the camera is facing north (+Z) in all screenshots, while the shadow map is rendered along the +X, -X, +Y, -Y, +Z and -Z directions from the middle of the decal's position above.
  4. Shameless bump here, but could it be that my way of calculating the light space depth (see the first post) does not hold up as the cube face edges are approached? The self-shadowing certainly increases the closer to such an edge the shaded surface is.
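For context, that hypothesis can be made concrete. The depth pass writes matProj._43 / zFace + matProj._33, where zFace is the view-space depth along the relevant cube face's forward axis, whereas the sampling code above reconstructs the comparison value from length(shadowVector), i.e. the radial distance; towards a face corner the radial distance exceeds zFace by up to a factor of sqrt(3), so the two values drift apart exactly where the edge-aligned self-shadowing appears. Below is a hedged sketch (not code from this thread; names are illustrative) of the usual alternative, deriving zFace from the dominant axis of the cube direction vector:

// Illustrative sketch, not from the original posts: reconstructing the comparison depth from the
// dominant axis of the cube-map direction vector, matching what the rasterizer wrote during the
// depth pass (depth = matProj._43 / zFace + matProj._33 under the projection convention used in
// this thread).
float CubeFaceDepth(float3 shadowVector, float4x4 matProj)
{
    // View-space depth along the forward axis of whichever cube face the vector projects onto
    float zFace = max(abs(shadowVector.x), max(abs(shadowVector.y), abs(shadowVector.z)));

    return matProj._43 / zFace + matProj._33;
}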
  5. Yep, it's time for even more shadow mapping issues on my end... :rolleyes:

I'm encountering a rather weird artifact with my omnidirectional light's shadow mapping, as can be seen in the following image. It appears to align with the corners where the shadow cube's faces meet. I'm sampling the skybox (also a cubemap) in the same way though, so I don't think my sampling direction is faulty:

#if SHADER_MODEL >= 41
    float4 moments = ShadowCubeEV.SampleGrad(ShadowMapEVSampler, float4(shadowVector, arraySlice), shadowVectorDDX, shadowVectorDDY);
#else
    float4 moments = ShadowCubeEV.SampleGrad(ShadowMapEVSampler, shadowVector, shadowVectorDDX, shadowVectorDDY);
#endif

The shadow "direction" vector (it is actually a non-unit vector, which works fine with TextureCube sampling) is simply computed as the world-space offset between the light source and the pixel being shaded:

float3 shadowVector = -(PointLight[lightId].wPos - P); // World-space offset between light source and fragment being shaded
float3 shadowVectorDDX = ddx(shadowVector); // These may possibly be wrong, but shouldn't cause the aforementioned issues?
float3 shadowVectorDDY = ddy(shadowVector); // Need to be supplied as different code branches provide manual partial derivatives

Furthermore I'm using exponential variance shadow maps. The shadow maps are generated from normal depth maps, rendered using a 90° FOV perspective camera with its range set to the radius of the light source and generated for each cardinal direction in the scene (east, west, north, south, up and down). I'm translating the shaded pixel's world-space coordinate into the light space of these maps like so:

float lightSpaceDepth = (light.matProj._43 / length(shadowVector)) + light.matProj._33;

light.matProj is the projection matrix used when rendering the depth maps as described above. There is nothing in the shadow maps that would, to me, hint at the experienced artifact either; feel free to have a look for yourself:

Positive moment (x component), range ~1.85×10^17 .. 2.35×10^17
Negative moment (y component), range ~6.94×10^-3 .. 6.74×10^-3
Positive quadratic moment (z component), range ~3.42×10^34 .. 5.54×10^34
Negative quadratic moment (w component), range 4.50×10^5 .. 5.00×10^5

So my question then is whether anybody happens to recognize the artifact and / or has any ideas what may be causing it? More code / images can be provided on request.

Update 1: It appears that this is a self-shadowing issue near the edges of the cubemap faces. If the ground object is excluded from the shadow pass, the floor artifacts go away; however, the digit meshes aligning with these corner directions (1/4/6/9) then show severe self-shadowing as well. It is not (realistically) rectifiable by using various depth / variance biases.
  6. Thanks guys. I was starting to think cubemaps may be the way to go after all, and I think that I will indeed switch to that route. The cube-to-paraboloid map approach brought up by Hodgman sounds like it could be worth a look though; it could probably be combined with the last step of my variance map post-processing.

Thanks for your input. I'm leaning that way as well now, however out of interest, could you perhaps explain in more detail how you went about implementing this? I'm still having trouble understanding exactly what the referenced paper is suggesting and how / where it would be applied.
  7. After some time away I'm revisiting my lighting system and, in trying to shape up my point light shadow mapping, came across the paper Practical Implementation of Dual Paraboloid Shadow Maps by Osman, Bukowski & McEvoy. The paper itself doesn't go into very much depth, but the following passage caught my eye:

This sounds good, but I cannot quite figure out what they mean by "doing the paraboloid projection in the pixel shader instead". Surely this cannot be done during shadow map generation (the pixel coordinate is determined by the vertex shader's output). The other interpretation is that it is done when sampling the shadow map during scene rendering, but if you just render the depth map via a normal orthographic projection, won't you lose data that doesn't "fit" the projection but that would be captured through the paraboloid projection? Also, if this really worked, you wouldn't need to have tessellated shadow casters either, would you?

Finally a bonus question: it would seem that the most recent papers / articles I can find on the DPSM technique are from 2008, with most dating as far back as 2002. Has this approach generally been made obsolete by modern hardware being more readily capable of cubemapping, or is there yet another (superior) technique that I'm unaware of?

Edit: I sort of messed up with the title here; I was originally going to state that I'm investigating this because applying VSM and blurring to DPSMs exaggerates their inherent problems, so it would indeed be very nice to be able to do this in linear space.
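For readers unfamiliar with the technique, the projection being discussed is the standard dual-paraboloid warp; the sketch below is the textbook formulation, not code from the paper or from this thread, and the names are illustrative. Because the warp is non-linear, straight triangle edges become curved after it, which is why applying it per-vertex requires reasonably tessellated shadow casters; evaluating it per-fragment side-steps that, which appears to be what the quoted passage is getting at.

// Standard dual-paraboloid projection for the front hemisphere (the back hemisphere uses the
// negated view-space z). Illustrative sketch only; parameter names are not from the thread.
float4 ParaboloidProject(float3 posLightView, float nearZ, float farZ)
{
    float len = length(posLightView);             // Distance from the light
    float3 dir = posLightView / len;              // Unit direction; dir.z >= 0 for this hemisphere

    float2 xy = dir.xy / (dir.z + 1.0);           // Paraboloid warp onto the unit disc
    float depth = (len - nearZ) / (farZ - nearZ); // Linear 0..1 depth over the light's range

    return float4(xy, depth, 1.0);                // w = 1: no perspective divide
}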
  8. I see... indeed, IDXGIKeyedMutex snuck into my D3D9 project through the Windows 7 API rather than D3D9 itself, leading me to assume it was available.  Thanks for pointing that out, as well as the links! Now all that is left is to wait for the next weekend so that I can dig into it. :)
  9. Ah that's a typo haha. I cut the relevant part out of a function, it was originally an argument identifier, so I thought I'd replace it with DXGI_FORMAT_B8G8R8A8_UNORM (which is indeed what it should have been) for clarity.
  10. I'm trying to set up a shared texture (render target) such that it is created on a D3D11 device and subsequently opened and drawn to from a D3D9Ex device. Unfortunately I've found the information on how to actually go about this to be rather scarce, and the debug layers don't seem to want to be more helpful than to tell me that the "opened and created resources don't match". Therefore I thought I would ask and see if anybody around here would be able and willing to shed some more light on how resource sharing is supposed to be carried out.

So what I do is to first create a Texture2D from my ID3D11Device like so:

tex2DDesc.Width = width;   // Tested with 256
tex2DDesc.Height = height; // Tested with 256
tex2DDesc.MipLevels = 1;
tex2DDesc.ArraySize = 1;
tex2DDesc.Format = DXGI_FORMAT_A8G8B8R8_UNORM;
tex2DDesc.SampleDesc.Count = 1;
tex2DDesc.SampleDesc.Quality = 0;
tex2DDesc.Usage = D3D11_USAGE_DEFAULT;
tex2DDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
tex2DDesc.CPUAccessFlags = 0;
tex2DDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

// Create render target
if(FAILED(gGlob.pDevice->CreateTexture2D(&tex2DDesc, nullptr, &pTexture2D)))
    Fail("Failed to create shared texture", "Could not create the shared texture!", 1);

// Create shader resource view
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Format = DXGI_FORMAT_A8G8B8R8_UNORM;
srvDesc.Texture2D.MipLevels = 1;
srvDesc.Texture2D.MostDetailedMip = 0;
if(FAILED(gGlob.pDevice->CreateShaderResourceView(pTexture2D, &srvDesc, &pResourceView)))
    Fail("Failed to create shared texture", "Could not create SRV for shared texture!", 2);

// Obtain the keyed mutex interface
if(FAILED(pTexture2D->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&pMutex)))
    Fail("Failed to create shared texture", "Could not obtain resource locking interface!", 3);

// Obtain IDXGIResource interface for the texture; this is needed to get its shared handle
IDXGIResource *pResource = nullptr;
if(FAILED(pTexture2D->QueryInterface(__uuidof(IDXGIResource), (void**)&pResource)))
    Fail("Failed to create shared texture", "Could not obtain resource interface!", 4);
if(FAILED(pResource->GetSharedHandle(&hSharedResource)))
    Fail("Failed to create shared texture", "Could not obtain shared resource handle!", 5);

// Shouldn't need this open anymore; the handle should presumably be valid for as long as the
// resource (the texture2d) remains in existence
SAFE_RELEASE(pResource);

After this, an attempt is made to access it from an IDirect3DDevice9Ex:

hr = m_pD3DEx->CreateTexture(iWidth, iHeight, 1, D3DUSAGE_RENDERTARGET, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pTexture, &hSharedResource);
// The above returns E_INVALIDARG

The D3D9 debug layer reports the following when invoking CreateTexture as per above:

Direct3D9: (ERROR) :Opened and created resources don't match, unable to open the shared resource.
Direct3D9: (ERROR) :Error during initialization of texture. CreateTexture failed.
Direct3D9: (ERROR) :Failure trying to create a texture

The D3D11 debug layer remains silent and all D3D11 calls above succeed.

From what I've been able to gather while searching for info on this, this seems to be how you are supposed to set it up. Most places that mention it seem to make a point of having D3D9Ex create the resource and then open it using ID3D11Device::OpenSharedResource, however. But certainly this can't be a requirement?
In my project it makes a lot more sense to have the D3D11 device create the shared textures than the other way around. Furthermore there is the slightly ambiguous route of getting the IDXGIResource interface from the ID3D11Texture2D in order to obtain the shared handle. Could it be that you are in fact supposed to get this from somewhere else? The MSDN documentation seems prone to referring to surfaces when talking about resource sharing; are you in fact supposed to be sharing the IDXGISurface view of the resource? But while D3D9 uses surfaces, what would the corresponding thing be in D3D11 if so, since it is apparently imperative that both resources be created / opened using the same type? (Furthermore it appears that texture resources should indeed be shareable out of the box, going by some mentions here: https://msdn.microsoft.com/en-us/library/windows/desktop/bb219800(v=vs.85).aspx#Sharing_Resources).

Grateful for any pointers :)
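For comparison, here is a minimal sketch of the flow the documentation examples mentioned above typically describe, i.e. D3D9Ex creates the shared texture and D3D11 opens it via ID3D11Device::OpenSharedResource. This is an illustration rather than the poster's code; device creation is omitted and the variable names are made up. Note also that, as the poster mentions elsewhere in this thread, IDXGIKeyedMutex is not part of D3D9 itself, so a resource intended to be shared with a D3D9Ex device would presumably need plain D3D11_RESOURCE_MISC_SHARED rather than D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX.

// Hedged illustration (not the poster's code): the commonly documented direction of sharing,
// where D3D9Ex creates the texture and D3D11 opens it through the shared handle.
// pDevice9Ex / pDevice11 are assumed to be already-created devices; names are made up.
HANDLE sharedHandle = nullptr;
IDirect3DTexture9 *pTex9 = nullptr;

// Passing a non-null pSharedHandle makes the D3D9Ex texture shareable.
// D3DFMT_A8R8G8B8 corresponds to DXGI_FORMAT_B8G8R8A8_UNORM on the D3D11 side.
HRESULT hr = pDevice9Ex->CreateTexture(256, 256, 1, D3DUSAGE_RENDERTARGET,
                                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pTex9, &sharedHandle);

// D3D11 then opens the same resource via the handle.
ID3D11Texture2D *pTex11 = nullptr;
if(SUCCEEDED(hr))
    hr = pDevice11->OpenSharedResource(sharedHandle, __uuidof(ID3D11Texture2D), (void**)&pTex11);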
  11. I'm going to bump this on the off chance that somebody who hasn't already seen it might have some ideas. Sorry if this isn't allowed.
  12. Hm, I don't think that is what's causing the seams though. It isn't that things line up incorrectly between the hemispheres, but rather that a line of "nothingness" (the depth clear value) ends up around it. The most obvious cause I can imagine is that the texels at the very edge of the projected circle blend together with the cleared outside. Since the shadow map is circular there will always be this "invalid" outline, so that's where I was thinking that maybe you could try to project a bit beyond a hemisphere and then only sample the middle 95% or so, such that you get a more proper "clamping" behaviour at the edge texels.

Another thing I thought of just now is that perhaps it would be better (if more expensive) to perform the paraboloid space transform in the pixel shader instead of the vertex shader? I imagine there will be more edges in the vertex shader approach, especially if the geometry isn't that detailed. I'm not sure if it would fix this particular issue though, but it could be worth a try.
  13. Ah yes, that should work; there are only two functions being imported from there so should be an easy thing, thanks.   I probably should see about making the switch and look further into what that will entail in regards to older OS support one of these days though.
  14. Hah, yeah I suppose I really should try for that sooner or later.

Mainly a somewhat large pre-existing codebase and laziness, but also the need to support Windows 7 and Vista. I'm not sure whether upgrading to the Windows SDK would affect that, though I do recall reading that this was more for Windows 8+ (i.e. DirectX 11.1 and above)? I may very well be wrong there though.

As your linked-to article says however:

This would also be a problem. Not an insurmountable one though, as I don't have that much audio functionality yet; it would probably be possible to change to another library without too much of a hassle.
  15. Yet another shadow mapping issue post from me, this is starting to feel like it's on a loop... :rolleyes:

Anyway, I'm experiencing edge shimmering / swimming artifacts for my directional light shadows when "moving" (or rather rotating) the light source. From what I've gathered this is more or less to be expected and not something that is easily dealt with. However, I noticed that the most intrusive artifacts seem to be somewhat periodic rather than random, as you would expect from "roughly the same" view matrix being recalculated for very similar, yet slightly different light directions in succession. I have a video clip showcasing this available here: https://youtu.be/2CAKeGPgkUk

Is this a common artifact? And what may be causing it? As said, it doesn't look like the random pixel changes you would normally expect from edge shimmering. The green rectangle on the left shows a comparison between the current and last frame's shadow maps (only for the first cascade, but that is what is clearly visible on the columns in the video); for green pixels the stored depth matches within 0.01, and if it goes beyond this the pixel at that location is displayed as red instead. As can be seen, the red pixels jump around pretty much arbitrarily around the edges of objects for the most part; then there is a significant change where most edges get a full red pixel outline every so often, when the light direction changes enough for things to move a pixel in general on the depth map.

Finally (going kind of hand-in-hand with the question about what may be causing it), are there any ideas on how to alleviate it? I suppose I can live with "normal" edge shimmering if need be, but when shadows "jump back" every so often it gets very abrasive in my opinion. I don't think I've seen this particular issue in any games I've played either. Using variance shadow mapping and applying MSAA to it actually makes the issue less apparent, even though it doesn't cover it up too well. Simple blurring of the shadow map has no effect however.
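As an editorial aside: a common mitigation for cascade shimmering, not mentioned in the post above, is to snap the light-space projection to whole shadow-map texels so the cascade only ever shifts in texel-sized steps between frames. This mainly addresses translation-induced crawling; with a light whose direction changes every frame some shimmer will remain, so the following is only a hedged sketch of the usual approach, written with DirectXMath like the rest of the thread. The function and parameter names ("StabilizeCascade", "shadowMapSize") are illustrative.

// Hedged sketch (not from the original post) of the usual cascade stabilisation trick:
// snap the light-space origin to whole shadow-map texels so the cascade only moves in
// texel-sized steps. Assumes an orthographic light projection (w stays 1 after transform).
#include <DirectXMath.h>
using namespace DirectX;

XMMATRIX StabilizeCascade(XMMATRIX lightViewProj, float shadowMapSize)
{
    // Project the world origin into light clip space and express its offset in texel units
    XMVECTOR origin = XMVector4Transform(XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f), lightViewProj);
    origin = XMVectorScale(origin, shadowMapSize * 0.5f);

    // Round to the nearest texel and compute the correction, converted back to clip-space units
    XMVECTOR rounded = XMVectorRound(origin);
    XMVECTOR offset = XMVectorScale(XMVectorSubtract(rounded, origin), 2.0f / shadowMapSize);

    // Only x / y need snapping; depth is left untouched
    XMMATRIX correction = XMMatrixTranslation(XMVectorGetX(offset), XMVectorGetY(offset), 0.0f);
    return lightViewProj * correction;
}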