Shadow mapping without hardware support

Started by PfhorSlayer
9 comments, last by PfhorSlayer 19 years, 11 months ago
Hey, all. I'm currently working on trying to implement shadow mapping on my old GF2MX (personal project, just to see if I can), and I've run into some trouble. I understand how shadow mapping is supposed to work, but I'm slightly hung up on getting the distance from the light to each fragment into the color buffer, which I can then compare, using register combiner operations, against the light's depth-map texture that I've projected onto the scene.

After thinking for a while, I realized that a vertex shader may be able to solve my problem. I pass the light position and the near and far distances of the light's view frustum into a vertex shader that does the following:

1) Calculate the vertex's position in clip (is that the right one?) space
2) Subtract this position from the light's position, to get a vector that goes from the vertex to the light
3) Calculate the length of this vector
4) Subtract the near distance of the light's frustum from the far distance
5) Divide the distance from the light to the vertex by the (far - near) from above
6) Set the output vertex's colors to this value

That way I should get a color buffer filled with depth values from each fragment to my light. I implemented this shader, and it seems to work alright, except that when I compare the values in the color buffer after using this shader to the values in the color buffer after projecting the depth buffer from the light's perspective, the vertex-shaded values seem quite off. See what I mean (screenshots):

- Distance (vertex shader) color buffer
- Depth map projected onto scene color buffer

The balls in the distance image are far too white (the light is above, to the left, and a bit behind the scene shown); since they are closer to the light's near frustum plane than most of the rest of the scene, they should be darkest, but they are almost white.

I'm pretty certain that my idea here, using the vertex shader to fill the color buffer with distance values, should work for what I'm trying to do, and my register combiner code is not the issue (I'm testing this by outputting the depth map and distance textures to file and subtracting the two).

Any assistance would be greatly appreciated.

Thanks,
Dylan Barrie
Postpose

[edit] fixed the links, oops! [/edit]
[edited by - PfhorSlayer on May 20, 2004 8:45:50 PM]
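A minimal CPU-side sketch of the math the six steps above describe, assuming the vertex position and the light position are expressed in the same space; the names vertexPos, lightPos, lightNear, and lightFar are illustrative, not taken from the original shader:

#include <math.h>

/* Per-vertex depth as described in steps 1-6: distance from the vertex
 * to the light, divided by (far - near) of the light's frustum. */
float vertex_light_depth(const float vertexPos[3], const float lightPos[3],
                         float lightNear, float lightFar)
{
    /* Step 2: vector from the vertex to the light. */
    float dx = lightPos[0] - vertexPos[0];
    float dy = lightPos[1] - vertexPos[1];
    float dz = lightPos[2] - vertexPos[2];

    /* Step 3: length of that vector. */
    float dist = sqrtf(dx * dx + dy * dy + dz * dz);

    /* Steps 4-6: divide by (far - near) and write the result out as the
     * vertex color.  Note that mapping [near, far] onto [0, 1] would be
     * (dist - near) / (far - near); dividing the raw distance as described
     * in the steps does not subtract near first. */
    return dist / (lightFar - lightNear);
}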
What you are seeing is common. When you add N dot L (diffuse) lighting, these artifacts on the sides of the objects should go away to some degree.

This is caused by a combination of depth aliasing and projection aliasing, and can't be avoided, but can be reduced and hidden by lighting, biasing, and over-sampling.
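One common way to apply the biasing mentioned here is polygon offset during the pass that renders the scene from the light's point of view; a rough sketch (the factor/units values are illustrative and need per-scene tuning):

/* Push depth values written in the light's depth pass slightly away from
 * the light, to reduce self-shadowing ("surface acne"). */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f);   /* illustrative values, tune per scene */

/* ... render the scene from the light's point of view here ... */

glDisable(GL_POLYGON_OFFSET_FILL);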
That still doesn't solve my immediate problem, but it does give me something to work on for a while. Thanks!
I figured something out: depth mapping isn't linear.

I implemented how it really works, but it's still a bit off.

I've begun to suspect that it has to do with the fact that the modelview matrix may include things like rotations and scales and translations and such that should take effect on the input vertex BEFORE the distance to the light is computed, but I'm not sure yet. Still thinking...
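For reference, the non-linearity in question: with a standard OpenGL perspective projection, the depth-buffer value for a point at eye-space distance d (near plane n, far plane f, depth range [0, 1]) is hyperbolic in d, along these lines:

/* Window-space depth as a function of eye-space distance d for a standard
 * perspective projection: 0 at d == n, 1 at d == f, but most of the
 * precision is bunched up near the near plane. */
float window_depth(float d, float n, float f)
{
    return (f * (d - n)) / (d * (f - n));
}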
If you want a linear depth, use post-projection matrix w, which is just z in view space - actually light space in this case. Then scale and bias it to match the light position and direction.

This is linear, and will give better results than the traditional z-buffer equation.
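A sketch of that suggestion with illustrative names: take the vertex's z in light (view) space, which is what the projection matrix copies into w, and scale/bias it linearly over the light's near/far range:

/* Linear depth from light-space z.  In OpenGL eye/light space, points in
 * front of the light have negative z, so negate to get a positive distance
 * along the view axis before remapping to [0, 1]. */
float linear_light_depth(float zLightSpace, float lightNear, float lightFar)
{
    float dist = -zLightSpace;
    return (dist - lightNear) / (lightFar - lightNear);
}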

Your best bet is to use an alpha texture that contains a 0x00-0xFF alpha ramp. Map this to the depth range instead of calculating a depth in the vertex shader itself.

The reason this is preferred is because textures give you proper clamping, whereas vertex calculations won't. This prevents objects that cross the light plane from having strange depth values. It will also prevent objects behind the light from receiving shadows from objects in the light's view.
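A sketch of one way to set that up in OpenGL, assuming the texgen plane is specified while the light's modelview matrix is current (so "eye-linear" here effectively means light space); rampTex, lightNear, and lightFar are illustrative names:

/* 1D alpha ramp texture: value goes 0..255 across the texel range.  The
 * clamped wrap mode is what provides the clamping mentioned above. */
GLubyte ramp[256];
int i;
for (i = 0; i < 256; ++i)
    ramp[i] = (GLubyte)i;

GLuint rampTex;
glGenTextures(1, &rampTex);
glBindTexture(GL_TEXTURE_1D, rampTex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage1D(GL_TEXTURE_1D, 0, GL_ALPHA8, 256, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, ramp);

/* Generate s = (-z_light - near) / (far - near) with an eye-linear plane,
 * so the ramp texture is looked up by linear depth in the light's frustum. */
GLfloat sPlane[4] = { 0.0f, 0.0f,
                      -1.0f / (lightFar - lightNear),
                      -lightNear / (lightFar - lightNear) };
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGenfv(GL_S, GL_EYE_PLANE, sPlane);
glEnable(GL_TEXTURE_GEN_S);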


Since I'm getting the depth values from the light's perspective using the depth buffer, won't I need to implement the distance function in the exact same way that the depth buffer works in order for everything to line up properly?

I'm confused about this post-projection W. Would that be the W value of the vertex in question after it's been transformed by the mvp matrix? Also, how would I bias it?

Thanks!
You can compute depth any way you want to, as long as it's monotonic. I recommend straight linear. The typical z buffer gives more precision close to the light - doesn't make a lot of sense.

It's not post-projection w. After projection, w is 1.

It's a common mistake, but remember that the projection matrix doesn't actually do any projection, only scales and offsets to get x & y, modifies view space z to get screen space z, and copies view space z to the w component.

Matrices can't do divisions, only things such as dp3, dp4, muls, and mads. Just affine transformations.

The projective divide is done after the vertex shader completes, perhaps during triangle setup.
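A small sketch of what that means for the z and w outputs of a gluPerspective-style matrix (n and f are the near and far planes):

/* Clip-space z and w produced by the projection matrix itself: a scale and
 * offset of eye-space z, plus a copy of (negated) eye-space z into w.
 * The divide by w happens afterwards, outside the vertex program. */
void project_zw(float zEye, float n, float f, float *zClip, float *wClip)
{
    *zClip = -(f + n) / (f - n) * zEye - (2.0f * f * n) / (f - n);
    *wClip = -zEye;
}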

quote: Original post by SimmerD
You can compute depth any way you want to, as long as it's monotonic. I recommend straight linear. The typical z buffer gives more precision close to the light - doesn't make a lot of sense.


But doesn't whatever method I use to compute this depth have to match what OpenGL is doing for the light-perspective depth? I'm copying the depth buffer from OpenGL after rendering from the light's perspective.

Now that I think of it, though, I guess I could just use the same vertex shader to compute the depth in both instances, and do it however I like, right?
Ok, don't do that. Instead, use a texture for depth during both passes.
Okay, I'm now using a ramp texture to get the depth values, based on a linear scale inside the light's view frustum. The difference is MUCH closer to what it should be!

Still, there are a few issues:

In my vertex shader, the modelview state matrix contains all the transformations done on the vertex, like scaling, rotating, translating, and what not, and ALSO the transformations done by the camera.

If I have a vertex that is rotated 180 degrees around an axis, then its distance to the light is going to be different than if it were not rotated. But since the camera's transformations are also baked into the modelview matrix, bad things happen when I simply multiply the vertex by the modelview state matrix in the vertex shader: the vertex ends up in camera space rather than world space (I think, not so good with all these spaces). So I'd need some way to get the model matrix BEFORE the camera's transformations are applied to it, so that I could get the correct distance from where the vertex REALLY is to the light's world position.

Any easy way to do this?
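One common way to sidestep that, shown only as a hedged sketch and not something suggested in the thread: rather than separating the model matrix from the camera transform, transform the light's world-space position by the camera's view matrix on the CPU and hand that eye-space light position to the vertex shader, so the modelview-transformed vertex and the light end up in the same space:

/* Sketch: bring the light into eye space so distances can be computed in
 * the same space the modelview matrix puts the vertices into.
 * 'view' is the camera's view matrix in OpenGL column-major order and
 * 'lightWorld' is the light's world-space position with w = 1;
 * both names are illustrative. */
void light_to_eye_space(const float view[16], const float lightWorld[4],
                        float lightEye[4])
{
    int row;
    for (row = 0; row < 4; ++row)
        lightEye[row] = view[0 * 4 + row] * lightWorld[0] +
                        view[1 * 4 + row] * lightWorld[1] +
                        view[2 * 4 + row] * lightWorld[2] +
                        view[3 * 4 + row] * lightWorld[3];
}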

This topic is closed to new replies.
