Depth Furthest Shadow Maps?

2 comments, last by InvalidPointer 13 years, 3 months ago
I've been dabbling in graphics programming and recently implemented Cascaded Shadow Maps for an environmental (directional) light source, and along the way I came up with an interesting way of determining camera placement for depth texture generation. It came about for two reasons: I was getting horrible shadow acne, because depth precision depended on how far away the depth texture camera sat, and I couldn't find an optimal distance to push the camera back so that it wouldn't miss potential shadow casters (say, a huge cliff 100 meters away).

My solution was to place the camera just beyond the viewer's frustum, on the side facing away from the light source, and then turn it around to face towards the light. I then changed the depth function to greater-than-or-equal instead of the standard less-than-or-equal and rendered the depth texture.
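
(For anyone wanting to try it, here's a minimal sketch of the host-side state changes. It assumes a depth-only FBO is already set up; shadowFbo, SHADOW_MAP_SIZE and renderShadowCasters() are made-up names.)

    glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
    glViewport(0, 0, SHADOW_MAP_SIZE, SHADOW_MAP_SIZE);

    /* Clear depth to 0.0 instead of the usual 1.0, since the reversed
       test keeps the GREATEST depth value per texel. */
    glClearDepth(0.0);
    glClear(GL_DEPTH_BUFFER_BIT);

    /* A fragment now wins if it is farther from the shadow camera,
       i.e. closer to the light, since the camera faces the light. */
    glDepthFunc(GL_GEQUAL);

    renderShadowCasters();

    /* Put back whatever the main pass normally uses. */
    glClearDepth(1.0);
    glDepthFunc(GL_LEQUAL);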

Then, when using that texture in a shader (GLSL in my case), I took the result of the regular shadow calculation (shadow2DProj in GLSL) and subtracted it from 1.0 to get the new shadow value.
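
(The fragment shader ends up looking something like this; it's old-style GLSL to match shadow2DProj, and the varying name plus the way the result is applied are just placeholders.)

    uniform sampler2DShadow shadowMap;
    varying vec4 shadowCoord; /* light-space position from the vertex shader */

    void main()
    {
        /* shadow2DProj does the usual hardware depth comparison; with
           the reversed map the sense of the test flips, so invert it. */
        float shadow = 1.0 - shadow2DProj(shadowMap, shadowCoord).r;

        /* Use the inverted value exactly where the raw result went
           before, e.g. as a multiplier on the lighting. */
        gl_FragColor = vec4(vec3(shadow), 1.0);
    }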

Basically, instead of determining the depth value a fragment has to be greater than to be considered in shadow (with the depth camera looking away from the light source), it determines the depth value a fragment has to be less than to be in shadow (with the depth camera looking towards the light source). This trick should only work for directional lights.

This gives you greater depth precision in your shadow map near the viewer, and it ensures you don't miss any potential shadow casters.

I'm sorry if what I'm saying doesn't make much sense; I'm kind of tired. I'm posting this mainly because it seems like such a simple trick, yet I can't find any articles detailing it, so I was wondering if there's a name for this technique that I'm not aware of.
Could you possibly post a video so we can see this in action?
I'm guessing that you're using a floating point buffer for storing depth? Because if you're using a fixed-point format, the precision is uniform for an orthographic projection and it shouldn't matter how far away geometry is from the shadow-caster camera. But for a floating point buffer what you're doing makes sense.
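
If you're not sure which case you're in, it comes down to how the depth texture was allocated; roughly (size made up, texture assumed bound):

    /* 24-bit fixed point: evenly spaced values, so precision is uniform
       under an orthographic projection. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

    /* 32-bit float (needs GL 3.0 / ARB_depth_buffer_float): representable
       values bunch up near 0.0, which is where the reversed test pays off. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, 1024, 1024, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
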
Yeah, if I'm understanding this correctly you're basically doing the 'reverse depth' precision trick that Humus suggested for z-buffers and applying it to shadow maps. Stupidly simple (and obvious!) extension in hindsight, but it makes a lot of sense.

