DX11 RPG Shadows, CSM, TSM, what else?


Hi, what shadowing methods are all the rage these days?

 

What shadowing method would you consider best for DX9 and for DX11 hardware under these conditions:

-Large Area, multiple actors and objects, One Sun Light, First person perspective and Isometric perspective

-Room, multiple actors and objects, Several Lights, First person perspective and Isometric perspective

 

Cheers!


Variance shadow maps can look quite nice if you tweak the parameters a little, and they are very resource-friendly. They also allow you to filter and blur the shadow map to produce nice soft-edged shadows at the cost of a single blur pass.
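To illustrate the core of VSM (a minimal CPU-side sketch in Python; the function name and constants are mine, not from any particular engine): the shadow map stores the mean and mean-square of depth, and Chebyshev's inequality turns the filtered moments into an upper bound on the lit fraction.

```python
def vsm_shadow(mean, mean_sq, receiver_depth, min_variance=1e-5):
    """Chebyshev upper bound on the fraction of lit samples, given the
    filtered depth moments E[z] and E[z^2] read from the shadow map."""
    if receiver_depth <= mean:
        return 1.0  # receiver is in front of the average occluder: fully lit
    variance = max(mean_sq - mean * mean, min_variance)
    d = receiver_depth - mean
    return variance / (variance + d * d)
```

Blurring the two-channel moment texture is what produces the soft edges; the clamp on the variance hides numerical noise in flat regions.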

Share this post


Link to post
Share on other sites

For DX9 I would go with standard PSSM. For DX11 and more powerful GPUs I would try to use EVSM with SDSM. Of course for local lights (spotlights etc.) use a single cascade.

SDSM ("Sample Distribution Shadow Maps") is an interesting PSSM extension which computes near-optimal shadow map frusta from the depth distribution actually visible in the frame.
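The core idea can be sketched in a few lines of Python (illustrative only; real SDSM does the depth-range reduction on the GPU): once you know the min/max depth visible this frame, partition that range, e.g. logarithmically, instead of the camera's whole near/far range.

```python
def sdsm_cascade_splits(z_min, z_max, num_cascades):
    """Logarithmic split positions over the *observed* depth range
    [z_min, z_max]. Classic PSSM partitions the camera's full near/far
    instead, wasting resolution on empty depth ranges."""
    ratio = z_max / z_min
    return [z_min * ratio ** (i / num_cascades) for i in range(num_cascades + 1)]
```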

EVSM (Chen, Tatarchuk - "Lighting Research at Bungie", SIGGRAPH09) combines ESM and VSM in order to fix their respective light bleeding issues.
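A sketch of the EVSM warp (Python, illustrative; the exponent constants are typical values, not taken from the talk): depth is warped with a positive and a negative exponential, moments of both warps are stored and filtered exactly like VSM, and the final shadow term is the minimum of the two Chebyshev bounds.

```python
import math

def evsm_warp(z, pos_c=40.0, neg_c=5.0):
    """Warp a [0,1] depth into the two EVSM channels. The positive
    exponential suppresses VSM-style light bleeding; the negative one
    covers the cases where the positive warp itself bleeds."""
    return math.exp(pos_c * z), -math.exp(-neg_c * z)
```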

Edited by Krzysztof Narkowicz


Is it possible to get SDSM using Shader Model 3.0?

 

Edit:

The TSM paper does a comparison against PSM, and the pictures suggest TSM is better. Is that accurate, or do all shadowing papers make competing techniques look bad in comparison?

Edited by Tispe


Sorry, I meant PSSM, not PSM. PSM/TSM and similar algorithms suffer from a lot of temporal aliasing when the camera moves. IMHO "normal" shadows look much better.

 

Yes, it's possible to implement SDSM on SM 3.0. Just calculate the depth range on the GPU using pixel shaders and then read it back on the CPU. For the read-back, use a few textures in round-robin fashion so the GPU won't be stalled by the CPU. Results will be lagged by a frame or two, but it should look OK.
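The round-robin readback can be sketched like this (Python standing in for the D3D9 calls; the class and method names are mine): with N buffers you always read the result written N-1 frames ago, so the CPU never touches the buffer the GPU is currently filling.

```python
class RoundRobinReadback:
    """Ring of staging buffers: write slot (frame % n), read the oldest slot."""
    def __init__(self, num_slots=3):
        self.slots = [None] * num_slots
        self.frame = 0

    def end_frame(self, gpu_result):
        n = len(self.slots)
        self.slots[self.frame % n] = gpu_result   # copy from the GPU goes here
        oldest = (self.frame + 1) % n             # written n-1 frames ago
        self.frame += 1
        return self.slots[oldest]                 # safe to read; None until warm
```

With three slots the depth range you act on is two frames old, which matches the "lagged by a frame or two" above.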


Is there a difference between PSSM and CSM? I've toyed with CascadedShadowMaps11.exe in the SDK but I can't really find a good fit.


You can probably use RTW. It's one of the best shadow mapping algorithms.

 

Wow, that's really incredible. Probably not useful for the OP's isometric scenes, but for first person I can't imagine that there's much out there that works better.


RTW is definitely an eye-opener, though I'm not familiar with how to use a warping map, or even create one. Please correct me if I'm wrong about anything I'm about to say.

 

From what I've gathered so far, using the forward method, a regular shadow map is colored brighter for regions of interest and darker for unimportant regions. I'm not sure whether the depth channel itself is colored or a separate channel is used. This colored shadow map is then collapsed into 1D textures: all pixels in a row are summed (or the brightest is picked) into one pixel of the first 1D map, and the same is done per column into a second 1D texture. These 1D textures are blurred and then used to construct a warping map; how, I don't know. The warping map is then used to render the shadow map once more, this time with the regions of interest magnified, producing the final shadow map. Use it together with the warping map to transform each rendered pixel from view space into the shadow map and test whether the depth is greater.

 

The expensive part, I think, is the CPU analysis of the original shadow map to color it. Then maybe (I'm guessing) the result can be collapsed on the GPU: render the colored shadow map to a render target with 1 px height for the columns, and again to another render target with 1 px width for the rows. A Gaussian blur on these 1D render targets could also be done, I guess. Then render the warping map (is this a texture?) by sampling each of the 1D textures once per corresponding pixel. I'm not sure how the warping map is then used in the vertex and pixel shaders...

 

 

Am I wrong?

Edited by Tispe


I'm currently working on a demo app for RTW shadows, so maybe my experience is helpful, even though I've only implemented backwards projection so far and still have to fix a few things (and probably add some VSM variant on top).

 


From what I've gathered so far, using the forward method, a regular shadow map is colored brighter for regions of interest and darker for unimportant regions. I'm not sure whether the depth channel itself is colored or a separate channel is used. This colored shadow map is then collapsed into 1D textures: all pixels in a row are summed (or the brightest is picked) into one pixel of the first 1D map, and the same is done per column into a second 1D texture. These 1D textures are blurred and then used to construct a warping map; how, I don't know. The warping map is then used to render the shadow map once more, this time with the regions of interest magnified, producing the final shadow map. Use it together with the warping map to transform each rendered pixel from view space into the shadow map and test whether the depth is greater.

The coloring is only used for easier visualization in the paper; you just need a standard low-res shadow map from which you can detect depth discontinuities, which then get assigned higher importance values. That's no problem in a small scene with a single light, but I'm worried about the performance hit in a real scene with potentially more lights.

The process is basically shadow map -> 2D importance map -> 2x 1D importance maps -> blur -> 2x 1D warping maps. The warping maps are then used to render the full-res shadow map and again during final rendering.
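The "1D importance map -> warping map" step is essentially a normalized prefix sum. A Python sketch (illustrative; the paper builds this per row/column in a shader, and the function name is mine):

```python
def build_warp_map(importance):
    """Cumulative distribution of a 1D importance row: warp[i] is where
    the boundary of texel i lands in [0,1]. High-importance spans get
    wider output intervals, i.e. more shadow-map texels."""
    total = float(sum(importance))
    warp, acc = [], 0.0
    for w in importance:
        warp.append(acc / total)
        acc += w
    warp.append(1.0)
    return warp
```

Rendering the shadow map through this mapping magnifies the important regions; the same map is applied when looking up shadows at shading time.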

 


The expensive part, I think, is the CPU analysis of the original shadow map to color it. Then maybe (I'm guessing) the result can be collapsed on the GPU: render the colored shadow map to a render target with 1 px height for the columns, and again to another render target with 1 px width for the rows. A Gaussian blur on these 1D render targets could also be done, I guess. Then render the warping map (is this a texture?) by sampling each of the 1D textures once per corresponding pixel. I'm not sure how the warping map is then used in the vertex and pixel shaders...

There is no CPU analysis necessary; everything can be done in pixel shaders on the GPU. Also, with square shadow maps the two 1D maps can be combined into a single 2-pixel-wide texture.

 

A sample implementation of the whole thing is available on Paul Rosen's homepage, in the software section. Very helpful.

 

For the criteria used in the paper, the backwards projection also needs a view-space depth buffer and normals, so you'd want to use these for your general lighting as well.

 

Then there's the tessellation issue. It's only mentioned as an afterthought in the paper, but for good results you'll need decently tessellated geometry, either via hardware tessellation or during content creation.

Another thing I still have to work out: shadow acne actually increased in some areas with RTW, but this should be easy to fix by scaling the z-bias with the local shadow resolution / importance values.
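One way that bias scaling could work (my own guess sketched in Python, consistent with the idea above but not from the paper): the warp's local derivative is the local shadow resolution, so divide the base bias by it.

```python
def scaled_bias(warp, i, base_bias, eps=1e-6):
    """Scale the depth bias by the inverse of the local texel density.
    warp is the 1D warping map from the importance pass; a steep segment
    there means high shadow resolution, so less bias is needed."""
    density = (warp[i + 1] - warp[i]) * (len(warp) - 1)
    return base_bias / max(density, eps)
```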

 

Also some WIP pictures:

Projective aliasing is still causing self-shadowing artifacts, even with hand-tuned bias values, so these areas still need some work.

 

[image: 2WwNPU0l.png]

 

Artifacts on an untessellated cube:

[image: mrA2SDMl.png]

 

Fixed:

[image: VxPL22dl.png]

 

 


You can probably use RTW. It's one of the best shadow mapping algorithms.

Got any real-world experience with RTW you want to share? :)


I was hoping to implement RTW on Shader Model 3.0, so no tessellation. Will it be possible / worth it?


For RTW is it possible to use the previous frame's shadow map to compute the warping maps?


Could you share your demo source? :)

I'll try to :p I'd just have to finish it first, which will take some time. In the meantime you can look at the original demo source here: http://www.cspaul.com/wiki/doku.php?id=software:rtw

 

 

I was hoping to implement RTW on Shader Model 3.0, so no tessellation. Will it be possible / worth it?

Possible? Sure. Worth it? I don't know. Personally I'd start with CSM/PSSM for SM 3.0 and then compare it to RTW if you decide to add it later.

 

 

For RTW is it possible to use the previous frame's shadow map to compute the warping maps?

I think that should be possible with some more shader work.


It seems you want dynamic illumination with everything casting shadows on everything (i.e. global direct illumination).

Cascaded shadow mapping is the current gold standard for large-scale environments.

Use CSM with PCF over a Poisson kernel. It works well and is used in many shipped titles. Some time ago I tested many alternatives, and this gave the best results for scalability and performance.

You should also use a deferred approach instead of forward rendering, even in a large-scale scene.

And keep in mind which types of BRDF you want to see in your game. Maybe just diffuse and specular terms, maybe a complex anisotropic BRDF, or maybe you'll decide to add local subsurface scattering via spherical harmonics or wavelets. Choose a good solution at the beginning, because Direct3D 9 only gives you 4 simultaneous render targets.
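For reference, PCF over a Poisson kernel boils down to averaging a binary depth comparison over a fixed set of disk offsets. A CPU-side Python sketch (the offset values are a typical hand-picked set, not canonical; on the GPU each tap would be a SampleCmp):

```python
# Hypothetical Poisson-disk offsets; a real set comes from an offline generator.
POISSON_TAPS = [(-0.94, -0.40), (0.94, -0.77), (-0.09, -0.93), (0.34, 0.29),
                (-0.81, 0.27), (0.27, -0.30), (-0.33, 0.58), (0.79, 0.19)]

def pcf_poisson(sample_depth, receiver_depth, radius, bias=0.001):
    """sample_depth(dx, dy) returns the stored shadow-map depth at that
    offset from the receiver's projected position; the result is the
    fraction of taps that consider the receiver lit."""
    lit = sum(1 for dx, dy in POISSON_TAPS
              if receiver_depth - bias <= sample_depth(dx * radius, dy * radius))
    return lit / len(POISSON_TAPS)
```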

