Screen-space shadowing

14 comments, last by Jason Z 9 years, 10 months ago

We are using screen-space shadow tracing. It's quite cheap. I use 24 x gather4 samples from a quarter-resolution 16-bit depth buffer, then I just count the intersecting samples and use an exponential shadow term: pow(shadowTerm, intersectedSamples).

It works really well, actually.
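For anyone curious how that might look in code, here is a minimal HLSL sketch of that kind of trace. It assumes a linear quarter-resolution depth buffer; the resource names, the ray parameterization, and shadowBase (standing in for the poster's shadowTerm) are my own illustrative choices, and details like a depth bias are omitted.

```hlsl
Texture2D<float> gDepthQuarter;   // quarter-resolution, 16-bit linear depth (assumed)
SamplerState     gPointClamp;

#define NUM_STEPS 24

// Returns the shadow term for one pixel / one light.
// uvToLight: screen-space offset from the pixel toward the light,
// depthToLight: the matching depth delta along that ray.
float ScreenSpaceShadow(float2 uv, float pixelDepth,
                        float2 uvToLight, float depthToLight,
                        float shadowBase)
{
    float blockers = 0.0;

    [unroll]
    for (int i = 1; i <= NUM_STEPS; ++i)
    {
        float t = i / (float)NUM_STEPS;

        // Depth of the ray and the four stored depths around the sample point.
        float  rayDepth = pixelDepth + depthToLight * t;
        float4 depths   = gDepthQuarter.Gather(gPointClamp, uv + uvToLight * t);

        // step() is 1 where the stored surface lies in front of the ray,
        // i.e. that sample counts as an intersection.
        blockers += dot(float4(1, 1, 1, 1), step(depths, rayDepth.xxxx));
    }

    // Exponential shadow term: more intersections -> darker.
    return pow(shadowBase, blockers);
}
```

Note that the 24 gathers touch 96 depth values in total, which is what lets the exponential term respond fairly smoothly to partial occlusion.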


That's awesome :D

Is all the shadowing done in screen-space, or are there traditional techniques used as well?


The typical cascaded shadow maps might be showing from the moonlight, but I can't be sure because those point lights are so much brighter than anything else. There is also a temporally smoothed SAO variant with multi-bounce lighting that contributes to fully shadowed areas quite well. https://www.dropbox.com/s/x7tvd8bags5x3pj/GI.png

I think it's best to combine screen-space shadow tracing with tiled shading. For directional lights we surely need shadow maps, but point and spot lights usually have small ranges, and we can assume most of the shadow casters that are going to affect the final result are already in the G-buffer.

This way we can create raytrace jobs for each light and each pixel. Consider the pixel at (0,0): we know from tiled shading that, say, 16 lights may light this pixel, so we create 16 trace jobs (each with a direction and start point). We can then dispatch a thread for each trace job in a compute shader, write the results to a buffer, and use that data when shading.
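Sketching that idea in HLSL (purely illustrative: the buffer layouts, the 16x16 tile size, and the TraceJob struct are my own assumptions, not anything from a shipped implementation), the job-building dispatch could look like this; a second dispatch would then run one thread per job and write the shadow term back per pixel/light:

```hlsl
struct TraceJob
{
    uint2  pixel;    // which pixel the shadow result belongs to
    uint   light;    // which light this job traces toward
    float3 origin;   // view-space position of the pixel
    float3 dir;      // normalized direction toward the light
    float  maxT;     // distance to the light (limits the march)
};

Texture2D<float4>                gViewPos;           // reconstructed view-space position
StructuredBuffer<float4>         gLightPosRange;     // xyz = position, w = range
StructuredBuffer<uint>           gTileLightIndices;  // flattened per-tile light lists
StructuredBuffer<uint2>          gTileLightRange;    // per tile: (first index, count)
AppendStructuredBuffer<TraceJob> gTraceJobs;

cbuffer Params { uint2 gScreenSize; uint2 gNumTiles; };

[numthreads(16, 16, 1)]
void BuildTraceJobs(uint3 dtid : SV_DispatchThreadID)
{
    if (any(dtid.xy >= gScreenSize)) return;

    float3 P     = gViewPos[dtid.xy].xyz;
    uint2  tile  = dtid.xy / 16;
    uint2  range = gTileLightRange[tile.y * gNumTiles.x + tile.x];

    for (uint i = 0; i < range.y; ++i)
    {
        uint   li      = gTileLightIndices[range.x + i];
        float4 light   = gLightPosRange[li];
        float3 toLight = light.xyz - P;
        float  dist    = length(toLight);
        if (dist > light.w) continue;   // pixel is outside this light's range

        TraceJob job;
        job.pixel  = dtid.xy;
        job.light  = li;
        job.origin = P;
        job.dir    = toLight / dist;
        job.maxT   = dist;
        gTraceJobs.Append(job);         // a second dispatch consumes one job per thread
    }
}
```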

Take a look at the AMD Leo demo; I think they used a somewhat similar approach.

I also believe a combination is best. In games there are usually many static light sources, which can be handled efficiently by CSMs that are only updated every n-th frame, so they won't consume much time. Animated objects like players and enemies are usually small and completely visible in the scene, so screen-space shadows could be efficient there, rather than adding another render pass.

Why not voxels? The idea is not so crazy anymore and is certainly used for real-time GI in games (e.g. Crysis 3).

It may sound crazy due to the memory requirements. However, for shadowing you just need 1 bit per voxel.

A 1024x1024x1024 voxel volume would need 128 MB, and suddenly it starts feeling appealing.
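The arithmetic checks out: 1024^3 voxels x 1 bit = 2^30 bits = 128 MiB. As a small illustrative HLSL sketch (the buffer name and layout are my own), such a bit-packed grid could be addressed like this:

```hlsl
// 1-bit-per-voxel occlusion grid packed into a byte-address buffer
// (1024^3 bits = 128 MiB). Names and layout are illustrative only.
RWByteAddressBuffer gVoxelBits;

static const uint3 GRID_DIM = uint3(1024, 1024, 1024);

uint FlattenIndex(uint3 v)
{
    return (v.z * GRID_DIM.y + v.y) * GRID_DIM.x + v.x;   // linear voxel index
}

void SetOccupied(uint3 v)
{
    uint bit = FlattenIndex(v);
    // Atomically set this voxel's bit inside its 32-bit word.
    gVoxelBits.InterlockedOr((bit >> 5) * 4, 1u << (bit & 31));
}

bool IsOccupied(uint3 v)
{
    uint bit  = FlattenIndex(v);
    uint word = gVoxelBits.Load((bit >> 5) * 4);
    return (word & (1u << (bit & 31))) != 0;
}
```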

Perhaps the biggest blocker right now is that there is no way to fill this volume with occlusion data in real time.

The most efficient way I see would be regular rasterization, but where the shader (or the rasterizer) decides on the fly which layer of the 3D texture the pixel should be rendered to, based on its interpolated (quantized) depth. However, I'm not aware of any API or GPU that has this capability. It would be highly parallel.

Geometry shaders allow selecting which render target a triangle should be rendered to, but there is no way to select which render target a pixel should be rendered to (which could be fixed function; it doesn't necessarily need shaders).
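To make the distinction concrete, here is roughly what the per-triangle path that does exist looks like: a geometry shader writing SV_RenderTargetArrayIndex, with the slice picked from the triangle's (not the pixel's) depth. The vertex layout and quantization constants below are assumptions:

```hlsl
// Per-triangle slice selection via SV_RenderTargetArrayIndex when rendering into
// a texture array / volume. Per-pixel selection, as noted above, is not available;
// the triangle's average depth picks the slice instead.
struct GSInput  { float4 pos : SV_Position; float viewDepth : DEPTH; };
struct GSOutput { float4 pos : SV_Position; uint slice : SV_RenderTargetArrayIndex; };

cbuffer Slicing { float gDepthToSlice; uint gNumSlices; };

[maxvertexcount(3)]
void SliceGS(triangle GSInput tri[3], inout TriangleStream<GSOutput> stream)
{
    // Quantize the triangle's average view depth into a slice index.
    float avgDepth = (tri[0].viewDepth + tri[1].viewDepth + tri[2].viewDepth) / 3.0;
    uint  slice    = min((uint)(avgDepth * gDepthToSlice), gNumSlices - 1);

    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        GSOutput o;
        o.pos   = tri[i].pos;
        o.slice = slice;
        stream.Append(o);
    }
}
```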

You could use a variant of the KinectFusion algorithm to build a volumetric representation of the scene. The basic idea is to get a depth image (or a depth buffer in the rendering case) and then you find the camera location relative to your volume representation. Then for each pixel of the depth image you trace through the volume, updating each voxel as you go with the distance information you have from the depth image. The volume representation is the signed distance from a surface at each voxel. For the next frame, the volume representation is used to find out where the Kinect moved to and the process is repeated. The distances are updated over a time constant to eliminate the noise from the sensor and to allow for moving objects.

This is a bit of a heavy algorithm to run in addition to everything else you do to render a scene, but there are key parts of it that wouldn't be needed anymore. For example, you don't need to solve for the camera location; you already have it. That signed-distance voxel representation could easily be modified and/or used to calculate occlusion. It might be worth investigating further to see whether it could run in real time...
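For reference, a very rough HLSL sketch of the per-voxel integration step of such a truncated-signed-distance update, with the camera-tracking part dropped as described above (the matrix names, truncation band, and weight cap are assumptions, not KinectFusion's actual code):

```hlsl
Texture2D<float>           gDepth;   // current frame's linear depth (metres, assumed)
RWStructuredBuffer<float2> gTSDF;    // per voxel: x = signed distance, y = weight

cbuffer FusionParams
{
    float4x4 gVoxelToView;  // voxel grid -> camera/view space (known, no tracking needed)
    float4x4 gViewToClip;   // view space -> clip space (projection)
    float    gTruncation;   // truncation band, e.g. a few voxels wide
    uint3    gGridDim;
};

[numthreads(8, 8, 8)]
void IntegrateTSDF(uint3 voxel : SV_DispatchThreadID)
{
    if (any(voxel >= gGridDim)) return;

    // Voxel centre in view space, then projected into the depth image.
    float4 viewPos = mul(gVoxelToView, float4(voxel + 0.5, 1.0));
    float4 clipPos = mul(gViewToClip, viewPos);
    if (clipPos.w <= 0.0) return;

    float2 uv = clipPos.xy / clipPos.w * float2(0.5, -0.5) + 0.5;
    if (any(uv < 0.0) || any(uv > 1.0)) return;

    uint2 dim;
    gDepth.GetDimensions(dim.x, dim.y);
    float measured = gDepth[uint2(uv * dim)];
    if (measured <= 0.0) return;                 // no depth data for this pixel

    // Signed distance to the measured surface, truncated to +-gTruncation.
    float sdf = clamp(measured - viewPos.z, -gTruncation, gTruncation);

    // Running weighted average: this is the "update over a time constant"
    // that smooths sensor noise and tolerates moving objects.
    uint   idx  = (voxel.z * gGridDim.y + voxel.y) * gGridDim.x + voxel.x;
    float2 prev = gTSDF[idx];
    float  w    = prev.y + 1.0;
    gTSDF[idx]  = float2((prev.x * prev.y + sdf) / w, min(w, 64.0));
}
```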

This topic is closed to new replies.
