
Screen-space shadowing


You could just use imperfect shadow maps: http://www.mpi-inf.mpg.de/~ritschel/Papers/ISM.pdf

 

But any screen-space technique is just asking to be unstable; wide search areas for SSAO already end up looking like a weird kind of unsharp mask as it is. Not to mention you'd just get light bleeding everywhere, since you only have screen-space data to work from.

 

I mean, it's a neat idea for some sort of "better than SSAO" or directional-SSAO technique. But I'm skeptical of doing any more in screen space than is already done for effects that aren't inherently screen-space. Even SSR looks weird and kind of wonky in practice, e.g. in Crysis 3 and Killzone: Shadow Fall.


The 'problem' with SSAO is, in my opinion, not a problem with the technique at all, but with how it is commonly used in current games (e.g. the Far Cry 3 case).

I think it all started when some modders exaggerated the subtle SSAO in Crysis 1 and at first everyone was raving about how awesome it looked.

It can work very well for short-range AO.


Why not voxels? The idea is not so crazy anymore and is already used for real-time GI in games (e.g. Crysis 3).

 

It may sound crazy due to the memory requirements. However, for shadowing you only need 1 bit per voxel.

A 1024x1024x1024 volume at that rate needs 128 MB, and suddenly it starts to feel appealing.
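For what it's worth, the arithmetic behind that 128 MB figure checks out; a quick compile-time sanity check:

```cpp
#include <cstdint>

// 1024^3 voxels at 1 bit each: 2^30 bits = 2^27 bytes = 128 MB.
constexpr std::uint64_t kVoxels = 1024ull * 1024ull * 1024ull;
constexpr std::uint64_t kBytes  = kVoxels / 8;
static_assert(kBytes == 128ull * 1024ull * 1024ull, "exactly 128 MB");
```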

 

Perhaps the biggest obstacle right now is that there is no way to fill this voxel volume with occlusion data in real time.

The most efficient way I can see would be regular rasterization, but where the shader (or the rasterizer) decides on the fly which slice of the 3D texture each pixel should be written to, based on its interpolated (quantized) depth. I'm not aware of any API or GPU that has this capability, but it would be highly parallel.
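To make the idea concrete, here's a minimal C++-style sketch of what that per-pixel decision would amount to. The names (writeVoxel, DIM, grid) are made up, and no current rasterizer actually lets a fragment pick its slice like this:

```cpp
#include <algorithm>
#include <cstdint>

constexpr int DIM = 1024; // voxel resolution per axis

// Hypothetical per-fragment step: quantize the interpolated depth (in [0, 1])
// to a slice index and set one bit in a packed 1-bit-per-voxel grid.
// In a real GPU implementation this write would need to be atomic.
inline void writeVoxel(std::uint32_t* grid, int x, int y, float depth01)
{
    int slice = std::min(int(depth01 * DIM), DIM - 1);              // quantized depth -> slice
    std::uint64_t bit = (std::uint64_t(slice) * DIM + y) * DIM + x; // flatten (x, y, slice)
    grid[bit >> 5] |= 1u << (bit & 31);                             // mark the voxel as occupied
}
```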

 

Geometry shaders allow selecting which render target a triangle should be rendered to, but there is no way to select which render target an individual pixel should be rendered to (this could be fixed function; it doesn't necessarily need shaders).


But any screen-space technique is just asking to be unstable

 

I tend to agree. Not to mention the fact that the information you need might not even be there when working with screen-space data. Personally, I feel like experimenting with voxels and other specialized data formats for occluders would be a much more promising avenue to explore, but I'm okay with being proven wrong.


Why not voxels? The idea is not so crazy anymore and is already used for real-time GI in games (e.g. Crysis 3). [...]

 

I like this line of thinking.

 

You probably don't need 1024 vertical voxels, so it's possible to spend more on horizontal ones, or to store, say, an 8-bit distance instead of a 1-bit solid/empty flag.
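If you went the 8-bit-distance route, it could be as simple as a normalized distance to the nearest occupied voxel, which a tracer can then use to skip empty space. A sketch with made-up names; maxDist is whatever maximum trace range you pick:

```cpp
#include <algorithm>
#include <cstdint>

// Distance to the nearest occluder, normalized against a chosen maximum trace
// range (maxDist, in voxel units) and quantized to 8 bits.
inline std::uint8_t encodeDistance(float dist, float maxDist)
{
    float t = std::clamp(dist / maxDist, 0.0f, 1.0f);
    return static_cast<std::uint8_t>(t * 255.0f + 0.5f);
}

// Coarse, but enough for a ray march to step by the decoded distance
// (like a low-precision distance field) instead of going voxel by voxel.
inline float decodeDistance(std::uint8_t v, float maxDist)
{
    return (v / 255.0f) * maxDist;
}
```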

 

You could keep two separate voxel structures: the static one, and a dynamic one that is based on slices. You could sample both of them when required (when a surface is near a light and a dynamic object).
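A rough sketch of that split, with my own (hypothetical) data layout: a packed 1-bit occupancy volume, plus a shadow query that always consults the static volume and only touches the dynamic one when a coarse test says a dynamic object is nearby:

```cpp
#include <cstdint>
#include <vector>

// Packed 1-bit occupancy volume in the light/voxel space (layout is mine).
struct BitVolume {
    int dim = 0;
    std::vector<std::uint32_t> bits; // (dim*dim*dim + 31) / 32 words

    bool occupied(int x, int y, int z) const {
        if ((unsigned)x >= (unsigned)dim || (unsigned)y >= (unsigned)dim ||
            (unsigned)z >= (unsigned)dim)
            return false;                                   // outside the volume: no occluder
        std::uint64_t i = (std::uint64_t(z) * dim + y) * dim + x;
        return (bits[i >> 5] >> (i & 31)) & 1u;
    }
};

// March a short ray (step direction in voxel units) and report any blocker.
bool rayBlocked(const BitVolume& vol, float px, float py, float pz,
                float dx, float dy, float dz, int steps)
{
    for (int i = 1; i <= steps; ++i)
        if (vol.occupied(int(px + dx * i), int(py + dy * i), int(pz + dz * i)))
            return true;
    return false;
}

// Static volume is always consulted; the dynamic one only when a coarse test
// (e.g. bounding volumes) says a dynamic object is near the shaded point.
float shadowTerm(const BitVolume& staticVol, const BitVolume& dynamicVol,
                 bool dynamicNearby,
                 float px, float py, float pz,   // surface point (voxel space)
                 float lx, float ly, float lz,   // direction toward the light
                 int steps)
{
    if (rayBlocked(staticVol, px, py, pz, lx, ly, lz, steps)) return 0.0f;
    if (dynamicNearby && rayBlocked(dynamicVol, px, py, pz, lx, ly, lz, steps)) return 0.0f;
    return 1.0f;
}
```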

 

You could also do some tricks so that highly dense but noisy things like leaves are faked rather than traced directly, say with an appropriate noise function.


I've thought about the voxel approaches. Even with an octree they are extremely complicated, especially if you want self-shadowing. There are lots of ideas to speed them up, like varying quality based on distance to the camera, but even then you end up having to voxelize geometry for occluders or find other methods to mark which voxels are in shadow and which aren't.

 

That said, I think a really fast theoretical approach (as in, I made this up a while ago) would be to use RTW (rectilinear texture warping) in a single pass with a low-resolution texture (like 64x64). You find all the objects within the radius of your light source, then generate a frustum at your point light. Point the frustum at (1, 0, 0) and cut the world into eight quadrants. For all the objects in front of the near plane of the frustum, do nothing. For the four quadrants behind the near plane, assign each object to a quadrant; if an object overlaps two (or four) quadrants, duplicate it into all of the quadrants it touches. In 2D:

[attached image: pointshadowmap.png]

 

Now render all the geometry in each quadrant, passing the center of the point light into the shader and transforming the vertices into world space for each quadrant. Then remap each vertex's angle from the 0-180 degree range into 0-45 degrees so they all fit inside the frustum. If your triangles are small enough there should be no real artifacts.

Here's a 2D example of what I mean by artifacts. The red line represents our geometry, and we remap its angle so it's squished into the frustum. This distorts the line (if we look at all the points along it) into the blue line; if we only look at the vertices we see the magenta line. You then render a depth map using RTW. If you're good with math you can probably write a fragment shader that correctly interpolates the vertices and calculates the correct depth, removing the artifact. What you'd end up with is an RTW'ed, low-resolution spherical map. When you sample to see if a point is in shadow, you'd perform a look-up into that texture for each light source.
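Here's my reading of that angle-remapping step as a 2D sketch matching the diagram; the names are mine, and depth would come from the preserved distance to the light rather than the distorted position:

```cpp
#include <cmath>

// 2D sketch: a vertex at angle a in [0, 180] degrees from the frustum axis
// (+X here) is rotated to a * (45/180), per half-plane, so every direction
// around the light lands inside a single 90-degree frustum. Distance to the
// light is preserved. Remapping only the vertices is what bends long straight
// edges (the blue-vs-magenta artifact described above).
struct Vec2 { float x, y; };

Vec2 compressToQuadrantFrustum(Vec2 v) // v = vertex position relative to the light
{
    float r    = std::sqrt(v.x * v.x + v.y * v.y); // distance to the light
    float a    = std::atan2(std::fabs(v.y), v.x);  // angle from +X, in [0, pi]
    float a2   = a * (45.0f / 180.0f);             // squeeze [0, 180] deg into [0, 45] deg
    float sign = (v.y < 0.0f) ? -1.0f : 1.0f;      // stay in the original half-plane
    return { r * std::cos(a2), sign * r * std::sin(a2) };
}
```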

 

You'd only need a texture for lights that actually intersect geometry, and you can choose the texture size based on distance to the camera (RTW will also warp correctly to give higher resolution closer to the camera). I hope that makes sense. I worked it out mostly on paper a few months ago and haven't been able to run it past anyone to see whether it's viable.


I've been meaning to come back to this, but have been working full time on stuff that pays the bills.

 

Here are some GIFs that I actually produced months ago. Most of the lighting in the scene comes from a cube-map, with a few (green) dynamic lights in there too. There are no shadows otherwise, so the "SSSVRT" adds all of the shadowing seen in the odd frames of the GIFs:

http://imgur.com/a/k3L78

 

It seems to work really well on shiny, env-map-lit (IBL) objects, helping to 'ground' them.

 

 

Re voxels: that's a challenge for another day.

I imagine you could use both. Screen-space stuff like this is great for capturing really fine details (which would require an insane amount of memory in a voxel system), so you could combine it with a voxel method for coarser-detail / longer-distance rays, and/or image probes for really long-range rays.
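A trivial sketch of how that combination could be expressed, assuming each technique returns a visibility term for its own range of ray lengths (the function and parameter names are made up):

```cpp
#include <algorithm>

// Each technique covers its own range of ray lengths; the final shadow term is
// simply the most conservative of the three.
float combinedVisibility(float screenSpaceVis, // short rays: fine, nearby detail
                         float voxelVis,       // mid-range rays through the voxel data
                         float probeVis)       // very long range / sky visibility
{
    return std::min(screenSpaceVis, std::min(voxelVis, probeVis));
}
```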

Edited by Hodgman
