Screen-space shadowing
(thread started by Hodgman)

FreneticPonE

You could just use imperfect shadow maps: http://www.mpi-inf.mpg.de/~ritschel/Papers/ISM.pdf

 

But any screen-space technique is just asking to be unstable; wide search areas for SSAO already end up looking like a weird kind of unsharp mask as it is. Not to mention you'd get light bleeding everywhere, since you only have screen-space data to work off of.

 

I mean, it's a neat idea for some sort of "better than SSAO" or directional-SSAO technique. But I'd be skeptical of doing any more in screen space than is already done, beyond what's inherently screen-space. Even SSR looks weird and kind of wonky in practice, e.g. in Crysis 3 and Killzone: Shadow Fall.

Bummel

The 'problem' with SSAO is, imo, not a problem with the technique at all, but with how it is commonly used in current games (e.g. the Far Cry 3 case).

I think it all started when some modders exaggerated the subtle SSAO in Crysis 1 and, at first, everyone was raving about how awesome it looked.

It can work very well for short-range AO.

Matias Goldberg

Why not voxels? The idea is not so crazy anymore, and it's already used for real-time GI in games (e.g. Crysis 3).

 

It may sound crazy due to the memory requirements. However, for shadowing you just need one bit per voxel.

A 1024x1024x1024 voxel volume would then need 128 MB, which suddenly starts feeling appealing.
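As a sanity check on that figure, here's a minimal CPU-side sketch of such a 1-bit occupancy volume (the class and method names are hypothetical, purely for illustration); bit-packing 1024^3 voxels into 32-bit words comes out to exactly 128 MiB:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical 1-bit-per-voxel occupancy volume, bit-packed into 32-bit words.
// A 1024^3 grid is 2^30 bits = 128 MiB, matching the figure quoted above.
class OccupancyVolume {
public:
    explicit OccupancyVolume(uint32_t dim)
        : dim_(dim), words_((std::size_t(dim) * dim * dim + 31) / 32, 0u) {}

    void set(uint32_t x, uint32_t y, uint32_t z) {
        const std::size_t bit = index(x, y, z);
        words_[bit >> 5] |= 1u << (bit & 31);
    }

    bool occupied(uint32_t x, uint32_t y, uint32_t z) const {
        const std::size_t bit = index(x, y, z);
        return (words_[bit >> 5] >> (bit & 31)) & 1u;
    }

    // 128 MiB when dim == 1024.
    std::size_t memoryBytes() const { return words_.size() * sizeof(uint32_t); }

private:
    std::size_t index(uint32_t x, uint32_t y, uint32_t z) const {
        return (std::size_t(z) * dim_ + y) * dim_ + x;
    }

    uint32_t dim_;
    std::vector<uint32_t> words_;
};
```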

 

Perhaps the biggest blocker right now is that there is no way to fill this voxel volume with occlusion data in real time.

The most efficient way I can see would be regular rasterization, but where the shader (or the rasterizer) decides on the fly which layer of the 3D texture a pixel should be rendered to, based on its interpolated depth (quantized). However, I'm not aware of any API or GPU that has this capability. It would be highly parallel.

 

Geometry shaders allow selecting which render target a triangle is rendered to, but there is no way to select which render target a pixel is rendered to (which could be fixed function; it doesn't necessarily need shaders).
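Lacking that rasterizer feature, here's a rough CPU-side illustration of the binning itself, assuming we're handed each covered sample's (x, y, interpolated depth). The function names and the writer callback are made up for illustration; on a GPU this would more likely be a fragment or compute shader scattering via image load/store into the occupancy volume sketched earlier.

```cpp
#include <algorithm>
#include <functional>

// Quantize a fragment's linear depth into one of `numSlices` layers of a 3D texture,
// emulating on the CPU the "pixel chooses its own render-target layer" idea above.
inline int depthToSlice(float linearDepth, float nearZ, float farZ, int numSlices) {
    const float t = std::clamp((linearDepth - nearZ) / (farZ - nearZ), 0.0f, 1.0f);
    return std::min(int(t * numSlices), numSlices - 1);
}

// The rasterizer hands us (x, y, interpolatedDepth) for every covered sample; we pick
// the slice from the quantized depth and mark that voxel occupied via the provided
// writer, e.g. the set() method of the occupancy volume above.
void binFragment(int x, int y, float interpolatedDepth,
                 float nearZ, float farZ, int numSlices,
                 const std::function<void(int, int, int)>& setVoxel) {
    setVoxel(x, y, depthToSlice(interpolatedDepth, nearZ, farZ, numSlices));
}
```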

MJP

FreneticPonE wrote: "But any screenspace technique is just asking to be unstable"

 

I tend to agree. Not to mention the fact that the information you need might not even be present in the screen-space data. Personally I feel like experimenting with voxels and other optimized data formats for occluders would be a much more promising avenue to explore, but I'm okay with being proven wrong. :)

SimmerD

(quoting Matias Goldberg's voxel suggestion above)

 

I like this line of thinking.

 

You probably don't need 1024 vertical voxels, so it's possible to spend more on horizontal ones, or to store, say, an 8-bit distance instead of a 1-bit solid/empty flag.

 

You could keep two separate voxel structures: a static one, and a dynamic one that is based on slices. You could sample both of them when required (when a surface is near both a light and a dynamic object).
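A hedged sketch of what that combined lookup might look like, assuming both volumes store the 8-bit quantized distance-to-nearest-occluder suggested above. The struct and the encoding are illustrative, not an actual implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Hypothetical 8-bit distance volume: 0 = on a surface, 255 = at least maxDistance away.
struct DistanceVolume {
    const uint8_t* voxels;   // dim*dim*dim entries
    uint32_t dim;
    float maxDistance;       // world-space distance mapped to the value 255

    float sample(uint32_t x, uint32_t y, uint32_t z) const {
        const std::size_t i = (std::size_t(z) * dim + y) * dim + x;
        return (voxels[i] / 255.0f) * maxDistance;
    }
};

// Conservative occlusion query: take the smaller distance of the static scene volume
// and the per-frame dynamic volume, only consulting the dynamic one when the shaded
// point is near both a light and a dynamic object, as suggested above.
float occluderDistance(const DistanceVolume& staticVol,
                       const DistanceVolume& dynamicVol,
                       uint32_t x, uint32_t y, uint32_t z,
                       bool nearDynamicObject) {
    float d = staticVol.sample(x, y, z);
    if (nearDynamicObject)
        d = std::min(d, dynamicVol.sample(x, y, z));
    return d;
}
```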

 

You could also do some tricks so that highly dense but noisy things like leaves are faked rather than traced directly, say with an appropriate noise function.

Sirisian

I've thought about the voxel approaches. Even with an octree they are extremely complicated, especially if you want self-shadowing. There are lots of ideas for speeding them up, like varying quality based on distance to the camera, but even then you end up having to voxelize geometry for occluders or find other ways to mark which voxels are shaded and which aren't.

 

That said, I think a really fast theoretical approach (as in, I made this up a while ago) would be to use RTW (rectilinear texture warping) in a single pass with a low-resolution texture (like 64x64). You find all the objects within the radius of your light source, then generate a frustum at your point light. Point the frustum at (1, 0, 0) and cut the world into 8 quadrants. For all the objects in front of the near plane of the frustum, do nothing. For the 4 quadrants behind the near plane, assign each object to a quadrant; if an object overlaps two (or all four) quadrants, duplicate it into every quadrant it touches. In 2D:

[Image: pointshadowmap.png]

 

Now render all the geometry in each quadrant, passing the center of the point light into the shader and transforming the vertices into world space for each quadrant. Then normalize each vertex's angle from the 0-180 degree range into 0-45 degrees, so they're all inside the frustum. If your triangles are small enough there should be no real artifacts. Here's a 2D example of what I mean by artifacts: the red line represents our geometry, and we normalize the angle so it's squished into the frustum. This distorts the line (if we look at all the points along it) into the blue line; if we only look at the vertices, we get the magenta line. You then render a depth map using RTW. If you're good with math you can probably write a fragment shader that correctly interpolates the vertices and calculates the correct depth (removing the artifact). What you'd end up with is an RTW'ed low-resolution spherical map. To check whether a point is in shadow, you'd do a look-up into that texture for each light source.
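For concreteness, here's a small 2D sketch of that per-vertex angle compression under my reading of the description. The function name and the exact wedge placement are assumptions, not taken from the post; a real version would live in the vertex shader.

```cpp
#include <cmath>

// 2D sketch of the angle compression described above: the vertex's angle around the
// light, covering a 180-degree span, is scaled down by a factor of four so everything
// lands inside a 45-degree wedge of the frustum. Remapping only the vertices (rather
// than every point along an edge) is exactly what produces the straight-line
// distortion mentioned in the post.
struct Vec2 { float x, y; };

Vec2 compressVertexAngle(Vec2 lightPos, Vec2 vertex) {
    const float dx = vertex.x - lightPos.x;
    const float dy = vertex.y - lightPos.y;
    const float radius = std::sqrt(dx * dx + dy * dy);
    const float angle  = std::atan2(dy, dx);             // vertex angle around the light
    const float compressed = angle * (45.0f / 180.0f);   // 180-degree span -> 45 degrees

    // Rebuild the vertex at the same distance from the light, at the compressed angle.
    return { lightPos.x + radius * std::cos(compressed),
             lightPos.y + radius * std::sin(compressed) };
}
```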

 

You'd only need a texture for lights that collide with geometry, and you can choose the texture size based on the distance to the camera (RTW will also correctly warp to give higher resolution closer to the camera). I hope that makes sense. I worked it out mostly on paper a few months ago and haven't been able to run it past anyone to see if it's viable.

Hodgman

I've been meaning to come back to this, but have been working full time on stuff that pays the bills.

 

Here are some gifs that I actually produced months ago. Most of the lighting in the scene comes from a cube-map, with a few (green) dynamic lights in there too. There are no shadows otherwise, so the "SSSVRT" adds all of the shadowing seen in the odd frames of the gifs:

http://imgur.com/a/k3L78

 

It seems to work really well on shiny env-map-lit (IBL) objects, to 'ground' them.

 

 

Re voxels: that's a challenge for another day.

I imagine you could use both. Screen-space stuff like this is great for capturing really fine details (which would require an insane amount of memory in a voxel system), so you could combine it with a voxel method for coarser-detail / longer-distance rays, and/or image probes for really long-range rays.
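As a rough sketch of that layering idea, assuming the three tracers exist and each returns a visibility value for its segment of the ray; all of the names and range constants below are made up, not taken from the post.

```cpp
#include <algorithm>
#include <functional>

struct Ray { float origin[3]; float dir[3]; };

// Hypothetical layering of occlusion sources by ray distance: screen-space rays handle
// fine nearby detail, a voxel trace covers the mid range, and image probes approximate
// anything beyond the voxel volume. Each tracer returns visibility in [0, 1].
float traceOcclusion(const Ray& ray, float maxDistance,
                     const std::function<float(const Ray&, float)>& traceScreenSpace,
                     const std::function<float(const Ray&, float, float)>& traceVoxels,
                     const std::function<float(const Ray&)>& sampleProbes) {
    const float screenSpaceRange = 0.5f;   // metres; tuning value, not from the post
    const float voxelRange       = 30.0f;

    float visibility = traceScreenSpace(ray, std::min(maxDistance, screenSpaceRange));
    if (visibility > 0.0f && maxDistance > screenSpaceRange)
        visibility *= traceVoxels(ray, screenSpaceRange, std::min(maxDistance, voxelRange));
    if (visibility > 0.0f && maxDistance > voxelRange)
        visibility *= sampleProbes(ray);   // really long-range rays

    return visibility;                     // 1 = unshadowed, 0 = fully occluded
}
```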

kalle_h

That's awesome! :D

Is all the shadowing done in screen-space, or are there traditional techniques used as well?

 

Typical cascaded shadow maps might be showing from the moonlight, but I can't be sure because those point lights are so much brighter than everything else. There is also a temporally smoothed SAO variation with multi-bounce lighting that contributes to the fully shadowed areas quite well: https://www.dropbox.com/s/x7tvd8bags5x3pj/GI.png

BlackBrain

I think it's best to combine screen-space shadow tracing with tiled shading. For directional lights we surely need shadow maps, but point and spot lights usually have small ranges, and we can assume that most of the shadow casters affecting the final result are present in the G-buffer.

This way we can create ray-trace jobs for each light and each pixel. Consider the pixel at (0,0): we know from tiled shading that, say, 16 lights may light this pixel, so we create 16 trace jobs (each a start point and a direction). We can then dispatch one compute-shader thread per trace job, write the results into a buffer, and use that data when shading.
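Here's a rough CPU-side sketch of what building those trace jobs could look like, assuming the tiled pass has already produced a per-tile light list. The structures and names are invented for illustration; the real version would be one compute-shader thread per (pixel, light) pair writing into a buffer.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct float3 { float x, y, z; };

// One screen-space shadow ray to march: from the pixel's reconstructed position
// towards one of the lights that the tiled pass assigned to this pixel's tile.
struct TraceJob {
    uint32_t pixelX, pixelY;
    uint32_t lightIndex;
    float3   origin;     // view/world-space position of the pixel
    float3   direction;  // normalized direction towards the light
};

// Build the job list for a single pixel given its tile's light list.
void buildTraceJobs(uint32_t px, uint32_t py, const float3& pixelPos,
                    const std::vector<uint32_t>& tileLights,
                    const std::vector<float3>& lightPositions,
                    std::vector<TraceJob>& outJobs) {
    for (uint32_t lightIndex : tileLights) {
        const float3& lp = lightPositions[lightIndex];
        float3 d { lp.x - pixelPos.x, lp.y - pixelPos.y, lp.z - pixelPos.z };
        const float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        if (len > 0.0f) { d.x /= len; d.y /= len; d.z /= len; }
        outJobs.push_back({ px, py, lightIndex, pixelPos, d });
    }
}
```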

 

Take a look at the AMD Leo demo; I think they used a somewhat similar approach.

spacerat

I also believe a combination is best. In games there are usually many static light sources, which can be handled efficiently by CSMs that are updated only every n'th frame, so they won't consume much time. The animated objects like players and enemies are usually small and completely visible in the scene, so screen-space shadows could be efficient there, rather than adding another render pass.
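A tiny sketch of that scheduling idea, assuming a round-robin refresh with four cascades; the post only says "every n'th frame", so the numbers here are just an example.

```cpp
#include <cstdint>

// Round-robin cascade refresh: with four cascades and one refresh per frame, each
// static-geometry shadow cascade is re-rendered only every fourth frame. Dynamic
// casters would be handled separately, e.g. with the screen-space shadows above.
constexpr int kNumCascades = 4;

bool shouldUpdateCascade(int cascade, uint64_t frameIndex) {
    return int(frameIndex % kNumCascades) == cascade;
}
```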

Jason Z

(quoting Matias Goldberg's voxel suggestion above)

 

You could use a variant of the KinectFusion algorithm to build a volumetric representation of the scene. The basic idea is to take a depth image (or a depth buffer, in the rendering case), find the camera location relative to your volume representation, and then, for each pixel of the depth image, trace through the volume, updating each voxel as you go with the distance information from the depth image. The volume stores the signed distance from a surface at each voxel. For the next frame, the volume representation is used to find out where the Kinect has moved to, and the process is repeated. The distances are updated over a time constant to eliminate sensor noise and to allow for moving objects.

 

This is a bit of a heavy algorithm to run in addition to everything else you do to render a scene, but key parts of it wouldn't be needed anymore. For example, you don't need to solve for the camera location, because you already have it. That signed-distance voxel representation could easily be modified and/or used to calculate occlusion. It might be worth investigating further to see whether it could run in real time...
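A sketch of the per-voxel update step of that idea, using a truncated signed distance field (TSDF) integrated from the frame's depth buffer with KinectFusion-style weighted averaging; the names, parameters, and the weight cap are illustrative, not a definitive implementation.

```cpp
#include <algorithm>

// One voxel's running truncated-signed-distance update, KinectFusion style:
// sdf > 0 means the voxel lies in front of the observed surface, < 0 behind it,
// clamped to the truncation band. Blending measurements over time suppresses noise
// and lets moving objects fade in and out, as described above.
struct TsdfVoxel {
    float sdf    = 1.0f;  // normalized signed distance in [-1, 1]
    float weight = 0.0f;
};

void integrateSample(TsdfVoxel& voxel,
                     float voxelDepthAlongRay,  // depth of the voxel along the camera ray
                     float surfaceDepth,        // depth read from the depth buffer
                     float truncation,          // truncation band, same units as depth
                     float maxWeight = 64.0f) {
    // Signed distance from the voxel to the observed surface, truncated and normalized.
    const float sdf =
        std::clamp((surfaceDepth - voxelDepthAlongRay) / truncation, -1.0f, 1.0f);

    // Weighted running average of the old estimate and the new measurement.
    const float updatedWeight = voxel.weight + 1.0f;
    voxel.sdf    = (voxel.sdf * voxel.weight + sdf) / updatedWeight;
    voxel.weight = std::min(updatedWeight, maxWeight);
}
```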
