
I am writing a toy software renderer (non-real-time) and I can't get shadow mapping to work. I think the problem lies where I transform positions to light space. While debugging, I saw that my code was sometimes getting back 1.0 as a depth value when sampling the shadow map. This is the maximum depth value, which implies it is sampling a position where there is no geometry in the shadow map. As a first step, I want to check that my theory is correct before posting a bunch of code. For reference, here is the scene being rendered with the problem:

Firstly, I think that my actual shadow map is correct. I store it as a big array of floats that represent depth values from -1 (closest to camera) to 1 (away from camera). Here is the shadow map output as an image (the scale has been inverted, so bright pixels indicate geometry that's closer to the camera):
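As a rough sketch, the inverted grayscale mapping described above might look like this (the function name and 8-bit output are my assumptions, not the poster's code):

```cpp
#include <cassert>

// Map a shadow-map depth in [-1, 1] (-1 = closest) to an 8-bit pixel,
// inverted so that nearer geometry comes out brighter.
unsigned char depth_to_pixel(float depth) {
    float t = (depth + 1.0f) * 0.5f;                     // [-1, 1] -> [0, 1]
    return (unsigned char)((1.0f - t) * 255.0f + 0.5f);  // invert and quantize
}
```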

For each triangle, I do the standard transformations on each vertex (model, view, projection, perspective divide, viewport). For each fragment, I calculate the barycentric coordinates using vertex coordinates in "device space". Device space means they have been transformed by the model, view and projection matrices, have had the perspective divide applied, and have been mapped to the 0-800 range (the output image size). I then use these barycentric coordinates in all subsequent calculations.
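For context, device-space barycentrics like these are commonly computed with edge functions; a minimal sketch, where the `Vec2` type and function names are illustrative rather than the poster's actual code. Note that weights computed this way are affine in screen space; perspective-correct attribute interpolation normally also divides each vertex attribute by its w first.

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Signed edge function: proportional to twice the signed area of
// the triangle (a, b, c).
float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Barycentric coordinates of point p with respect to the screen-space
// ("device space") triangle v0, v1, v2.
void barycentric(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                 const Vec2& p, float& bx, float& by, float& bz) {
    float area = edge(v0, v1, v2);
    bx = edge(v1, v2, p) / area; // weight of v0
    by = edge(v2, v0, p) / area; // weight of v1
    bz = edge(v0, v1, p) / area; // weight of v2
}
```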

For lighting, all calculations are done in eye space. I interpolate the eye space vertices and eye space normals and do calculations based on the phong reflectance model. The depth of each fragment (for z buffering) is calculated by interpolating over the vertices in clip space using the barycentric coordinates:

```cpp
float depth =
    ((vertex_clip[0].z / vertex_clip[0].w) * bary.x) +
    ((vertex_clip[1].z / vertex_clip[1].w) * bary.y) +
    ((vertex_clip[2].z / vertex_clip[2].w) * bary.z);
```

For shadow mapping, as far as I understand I need two values: the depth of the current fragment in light space, and the depth of the current fragment as sampled from the shadow map. If the current fragment in light space is "deeper" than the sampled value, then the point is in shadow. In addition to transforming the vertices by the camera's transformations, I also transform them by the light's matrices too. I store the vertices in "light clip space" and "light device space". Light clip space is achieved by transforming the vertices by the model, view_light and projection matrices. Light device space is achieved with the full chain (model, view_light, projection, perspective divide, viewport). This is exactly how I transform them to generate the shadow map. I then use the previously calculated barycentric coordinates to interpolate over the vertices in these two spaces. The depth of the current fragment in light space is calculated like this:

```cpp
float depth_light =
    ((vertex_clip_light[0].z / vertex_clip_light[0].w) * bary.x) +
    ((vertex_clip_light[1].z / vertex_clip_light[1].w) * bary.y) +
    ((vertex_clip_light[2].z / vertex_clip_light[2].w) * bary.z);
```
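For reference, the tail end of the chain described above (perspective divide, then viewport mapping to an 800x800 image) might be sketched like this; the struct and function names are illustrative, not the poster's actual code:

```cpp
#include <cassert>

struct V4f { float x, y, z, w; };

// Clip space -> NDC: divide through by w. With an OpenGL-style
// projection, z lands in [-1, 1].
V4f perspective_divide(const V4f& clip) {
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, clip.w };
}

// NDC -> device space: map x and y from [-1, 1] to [0, size] pixels.
V4f viewport(const V4f& ndc, float size) {
    return { (ndc.x + 1.0f) * 0.5f * size,
             (ndc.y + 1.0f) * 0.5f * size,
             ndc.z, ndc.w };
}
```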

And the position to sample in the shadow map is calculated by interpolating the vertices in "light device space":

```cpp
V4f pixel_position_device_light =
    vertex_device_light[0] * bary.x +
    vertex_device_light[1] * bary.y +
    vertex_device_light[2] * bary.z;
```

Is there any part of that whole process that I am fundamentally getting wrong or misunderstanding? Or does my theory check out, implying that my problem probably lies somewhere else?

Edited by yyam

1 hour ago, 1024 said:

> The problem looks like "shadow acne".

1 hour ago, David Lovegrove said:

> Have you tried applying a z-bias?

I am aware of shadow acne, and I don't think that's my problem (at least for now). I applied a bias of 0.1 (bigger depth = further away from the camera):

```cpp
float bias = 0.1f;
bool is_in_shadow = shadow_map < depth_light - bias;
```

This does seem to improve the image:

But as you can see, the parts of the foreground teapot that should definitely be in shadow still have the acne effect. If I increase the bias, then eventually no parts of the teapot are in shadow. I also tried varying the bias using the slope but that still did not fix the teapot handle having acne.
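For reference, a typical slope-scaled bias looks something like the sketch below; the constants are illustrative and would need tuning per scene:

```cpp
#include <algorithm>
#include <cassert>

// Slope-scaled bias: surfaces nearly parallel to the light direction
// (n_dot_l close to 0) get a larger bias than surfaces facing the light.
// The constants are illustrative, not tuned.
float slope_scaled_bias(float n_dot_l) {
    const float max_bias = 0.05f;
    const float min_bias = 0.005f;
    return std::max(max_bias * (1.0f - n_dot_l), min_bias);
}
```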

If my understanding is correct, every pixel in camera space that is part of the teapot should map to a pixel in light space that is also part of the teapot. From debugging, it looks like this is not always happening, which makes me think that my camera-to-light transform is incorrect. Does it make sense to use barycentric coordinates calculated from the projected, divided vertices in camera space to interpolate projected and non-projected vertices in light space?

1 hour ago, David Lovegrove said:

> I'm also curious why you're using -1 to 1 for the depth map values and not just 0 to 1.

I guess there's no particular reason for this. I was trying to loosely follow what OpenGL does with its projection matrix. I think it produces z values from -1 to 1.
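For what it's worth, if the 0-to-1 convention is ever wanted instead, the remap from OpenGL-style NDC depth is just an affine shift:

```cpp
#include <cassert>

// Remap OpenGL-style NDC depth in [-1, 1] to the [0, 1] range
// used by most depth-buffer conventions.
float ndc_depth_to_01(float z_ndc) {
    return z_ndc * 0.5f + 0.5f;
}
```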


So I got it working; it turned out to be a pretty dumb floating-point bug. At the end of the transformation pipeline, the calculated screen coordinates used to sample the shadow map are floats. This was the code I was using to sample the shadow map (`y * width + x`):

```cpp
int shadow_map_idx = (int)(pixel_position_device_light.y * output_buffer_width + pixel_position_device_light.x);
float shadow_map = shadow_buffer[shadow_map_idx];
```

When I changed it to this, it worked:

```cpp
int shadow_x = (int)pixel_position_device_light.x;
int shadow_y = (int)pixel_position_device_light.y;
int shadow_map_idx = shadow_y * output_buffer_width + shadow_x;
float shadow_map = shadow_buffer[shadow_map_idx];
```

When the screen coordinates were truncated to integers first, the correct pixel was sampled. It turns out the difference between the two calculated shadow_map_idx values was around 500, so a completely wrong pixel was being sampled.
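To illustrate with made-up coordinates (width 800, as in the renderer): folding the float x and y into one expression before truncating lets the fractional part of y, multiplied by the width, shift the index by hundreds of pixels.

```cpp
#include <cassert>

// Buggy version: truncates once, at the end, so y's fractional part
// (scaled by width) leaks into the index.
int buggy_index(float x, float y, int width) {
    return (int)(y * width + x);
}

// Fixed version: truncate each coordinate to a whole pixel first.
int fixed_index(float x, float y, int width) {
    return (int)y * width + (int)x;
}
```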
