Hey guys, I'm looking for some theory help. I've been working on an omni-directional shadow mapping solution for a little while now, and I've got something that looks passable, but only in very specific circumstances. That worries me: it's not very flexible, and I don't know why. I think it's because I only have half a knowledge of the actual process, so I'm hoping you guys can help me fill in the gaps.
What I understand so far:
Shadow mapping works by rendering a map of depth values for the occluders in a scene from the light's perspective. Then, when rendering the scene, I sample the depth of the pixel in a space relative to the light; if that depth is greater than the depth stored in the map, the pixel is in shadow.
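In code, that comparison boils down to something like this (just a sketch; the names `depthFromLight`, `mapDepth`, and `bias` are mine, and `bias` is the usual small fudge factor against self-shadowing):

```cpp
#include <cmath>

// Shadow test: the pixel is lit only if it is not farther from the
// light than the nearest occluder recorded in the map.
// depthFromLight : this pixel's depth relative to the light
// mapDepth       : depth stored in the shadow map for this direction
// bias           : small offset to avoid self-shadowing ("shadow acne")
bool inShadow(float depthFromLight, float mapDepth, float bias = 0.0f)
{
    return depthFromLight - bias > mapDepth;
}
```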
I'm using a cube map approach to shadow mapping, so I've set up six cameras with 90-degree FOVs, one pointing down each axis.
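For reference, the six face orientations usually look something like the table below (a sketch: the up vectors follow the OpenGL cube-map convention, and other APIs may flip them; `Face` and `kCubeFaces` are my names):

```cpp
#include <array>

// One look-at setup per cube face, in the usual face order
// (+X, -X, +Y, -Y, +Z, -Z). Up vectors match the OpenGL
// cube-map convention; Direct3D-style setups may differ.
struct Face { float forward[3]; float up[3]; };

const std::array<Face, 6> kCubeFaces = {{
    {{ 1,  0,  0}, {0, -1,  0}},  // +X
    {{-1,  0,  0}, {0, -1,  0}},  // -X
    {{ 0,  1,  0}, {0,  0,  1}},  // +Y
    {{ 0, -1,  0}, {0,  0, -1}},  // -Y
    {{ 0,  0,  1}, {0, -1,  0}},  // +Z
    {{ 0,  0, -1}, {0, -1,  0}},  // -Z
}};
// Each camera uses a 90-degree FOV and a 1:1 aspect ratio, so the
// six frusta tile the full sphere around the light with no gaps.
```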
(There are a hundred different ways to implement shadow mapping, but for the sake of not throwing away my current code, I'd like to get cubic shadow mapping working.)
I'm not storing a post-perspective depth; I'm just taking the distance from the pixel to the light and dividing it by the light's attenuation range. Not the most optimized approach, but the easiest to wrap my head around.
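The value written into the map under that scheme would be something like this (a sketch of the idea as I read it; `storedDepth` and `lightRange` are my names, with `lightRange` standing in for the attenuation value):

```cpp
#include <cmath>

// Depth stored in the cube shadow map for a surface point:
// plain world-space distance to the light, normalized into [0, 1]
// by the light's attenuation range (no perspective divide involved).
float storedDepth(const float point[3], const float lightPos[3],
                  float lightRange)
{
    float dx = point[0] - lightPos[0];
    float dy = point[1] - lightPos[1];
    float dz = point[2] - lightPos[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz) / lightRange;
}
```

The same normalization has to be applied to the pixel's distance at lookup time, or the comparison is between values on different scales.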
Currently I have two issues:
1.) Peter-panning: the shadows don't reach up to the object that is doing the occluding. I've read that this problem stems from the near/far values, but the solutions I've found don't really make sense to me. If someone could explain the problem like I'm five, I'd really appreciate it.
2.) Map size: I can get a somewhat realistic shadow with a 512x512 shadow map, a near of 50, a far of 3000, and a light attenuation of 3000. But if I keep the same near, far, and attenuation and bump the map up to 1024x1024, everything ends up in shadow.
If you guys could help me figure this out, I'd be much obliged, thank you!
Omni-directional Shadow Mapping