
## Recommended Posts

Hi there,

My question is: how do you generate the orthographic projection matrices for the cascaded shadow maps?

Here is what I'm doing right now, but it doesn't seem to fully work:

1) Determine view space AABB of the scene, use the bounds for tight near/far planes

2) use logarithmic splitting scheme to create 4 intervals for 4 cascades

3) a) for each interval, create a perspective sub-frustum using the same FOV and aspect ratio as the main camera, but with the interval bounds as the near/far planes

b) transform the world-space sub-frustum corners to light camera space and construct a light-camera-space AABB there

c) use the bounds of the light-camera-space AABB to create the orthographic matrix of the cascade
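The splitting in step 2 can be sketched like this; it's the "practical" split scheme from the PSSM paper, where a blend weight (here called `lambda`, an assumed name, with `lambda = 1` giving the purely logarithmic scheme) mixes the logarithmic and uniform splits:

```cpp
#include <cmath>
#include <vector>

// Practical split scheme: a lambda-weighted blend of the logarithmic and
// uniform schemes. Returns the far plane of each of the numCascades intervals.
std::vector<float> ComputeSplitDistances(float nearZ, float farZ,
                                         int numCascades, float lambda)
{
    std::vector<float> splits(numCascades);
    for (int i = 1; i <= numCascades; ++i)
    {
        float f = (float)i / (float)numCascades;
        float logSplit = nearZ * std::pow(farZ / nearZ, f);
        float uniSplit = nearZ + (farZ - nearZ) * f;
        splits[i - 1] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits;
}
```

Each cascade i then uses `splits[i - 1]` of the previous cascade (or the camera near plane for the first) as its near plane and `splits[i]` as its far plane.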

See the attached picture for how the cascades currently look for me. As you can see, a lot of the geometry is cut off, as if the orthographic frustum were incorrect.

best regards,

Marton Tamas

##### Share on other sites

The frustum can be represented by a triangle which covers the whole visible scene from the camera.

You split this frustum, and for each split you compute an orthographic projection.

To get stable cascades, you can compute the bounding-sphere radius and use it as the width and height of the orthographic projection.

If you use the SDSM algorithm to find tighter cascades from the depth buffer, you don't need stable cascades, which waste space.

To compute the radius you only need simple trigonometry, because the frustum can be represented by a triangle.

Here is the code to compute the radius:

// Tangent values (the horizontal tangent is the vertical one scaled by the aspect ratio).
float TanFOVY = tan(0.5f * FieldOfViewRadian);
float TanFOVX = TanFOVY * AspectRatio;

// Compute the bounding sphere of the split (HalfDistance is half the near-to-far distance).
float HalfDistance = 0.5f * (Far - Near);
Vector3 Center = CameraPos + CameraForward * (Near + HalfDistance);
Vector3 CornerPoint = CameraPos + (CameraRight * TanFOVX + CameraUp * TanFOVY + CameraForward) * Far;
float Radius = (CornerPoint - Center).Length();

Here is the code to compute the AABB:

// Scale needed to extract frustum points (1-based matrix indexing).
float scaleXInv = 1.0f / cameraProj(1, 1);
float scaleYInv = 1.0f / cameraProj(2, 2);

// Matrix needed to transform frustum corners into light view space.
Matrix cameraViewToLightView = cameraViewInv * lightView;

// Compute corners in camera view space.
Vector3 corners[8];
float nearX = scaleXInv * nearZ;
float nearY = scaleYInv * nearZ;
corners[0] = Vector3(-nearX,  nearY, nearZ);
corners[1] = Vector3( nearX,  nearY, nearZ);
corners[2] = Vector3(-nearX, -nearY, nearZ);
corners[3] = Vector3( nearX, -nearY, nearZ);
float farX = scaleXInv * farZ;
float farY = scaleYInv * farZ;
corners[4] = Vector3(-farX,  farY, farZ);
corners[5] = Vector3( farX,  farY, farZ);
corners[6] = Vector3(-farX, -farY, farZ);
corners[7] = Vector3( farX, -farY, farZ);

// Transform the corners into light view space.
Vector3 cornersLightView[8];
for(int i = 0; i < 8; ++i)
    cornersLightView[i] = cameraViewToLightView.Transform(corners[i], 1.0f);

// Compute the AABB.
ComputeAABBFromPoints(cornersLightView, 8, outMin, outMax);
Edited by Alundra

##### Share on other sites

I didn't quite get what you were trying to convey. I do get the idea of computing the bounding sphere of each sub-frustum and using it as the width/height of the orthographic frustum. However, isn't it tighter to use the frustum corners and build a light-camera-space bounding box? I think that's what you're doing in the second code snippet.

I did realize a stupid mistake I made thanks to your post, and now I seem to have fixed the issue: I'm using the diameter of the scene's bounding sphere as the far plane of the orthographic matrix, so nothing is excluded from rendering, but the light camera still "zooms in" on the details. At least that's what I saw when visualizing the light camera.

Anyway, here's the complete source code & exe:

Is this a correct way of doing it?

I know I'm not doing anything fancy yet, like filtering between cascades, but I hope it's a good start.

##### Share on other sites

The problem with using the bounding box is that the projection changes as the camera moves, which causes artifacts; with a stable cascade based on the bounding sphere, this problem is removed.

There is also a problem called "the shimmering edge effect", caused by camera movement; to remove it, you have to round the cascade position to pixel-size increments.

You can see the problem here:
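The pixel-size rounding can be sketched like this (a hypothetical helper; `radius` is the stable cascade's bounding-sphere radius and `shadowMapSize` the map resolution in texels):

```cpp
#include <cmath>

// Snap the cascade's light-space center to texel-size increments so the
// shadow map samples the same world positions as the camera moves.
void SnapToTexelGrid(float centerX, float centerY,
                     float radius, float shadowMapSize,
                     float* snappedX, float* snappedY)
{
    // World units covered by one shadow-map texel across the
    // 2 * radius wide orthographic frustum.
    float texelSize = (2.0f * radius) / shadowMapSize;
    *snappedX = std::floor(centerX / texelSize) * texelSize;
    *snappedY = std::floor(centerY / texelSize) * texelSize;
}
```

Because the stable cascade's size never changes, snapping the center like this keeps the shadow texels fixed in world space while the camera translates.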

Edited by Alundra

##### Share on other sites

Thank you :) Now I get it; I'll try the method you suggested!

##### Share on other sites

... And after you get the bounding-sphere method working, you'll realize that your light frustums are now VERY conservative and waste a lot of space. So, as a better solution, you can quantize the scale (size) of your light's ortho frustums: compute the AABB like you were doing originally, then round the width and height of the frustum up to the next multiple of some value. (The actual value will depend on the scale of your world and how much shimmer you can tolerate.) This means you only get shimmer occasionally during camera rotation (when the AABB scale crosses a quantization threshold), but in exchange you get tighter-fitting bounds, which means higher effective shadow-map resolution.

... and you still need to snap translations to texel-size increments, like Alundra said.
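A minimal sketch of that rounding step (the `quantum` value is the hypothetical world-scale-dependent tuning constant mentioned above):

```cpp
#include <cmath>

// Round the ortho frustum's width or height up to the next multiple of
// 'quantum', so the projection scale only changes when the AABB extent
// crosses a quantization threshold.
float QuantizeExtent(float extent, float quantum)
{
    return std::ceil(extent / quantum) * quantum;
}
```

A larger `quantum` means fewer scale changes (less shimmer) but looser bounds, so it trades shadow resolution against stability.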

Edited by osmanb

##### Share on other sites

This sounds great, I'll try it, thank you! :)

##### Share on other sites

For our game Hardland I compute the smallest bounding box by just testing multiple different shadow-camera rotations. This usually reduces the triangle count a lot and increases the effective resolution substantially.
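I'm guessing at the details, but the brute-force search could look something like this: project the cascade's corner points onto the light's plane, test N rotations about the light's view axis, and keep the angle giving the smallest AABB area:

```cpp
#include <cfloat>
#include <cmath>

// Brute-force search over numAngles rotations about the light's view
// axis: rotate the 2D light-space points and return the angle that
// yields the smallest axis-aligned bounding-box area.
float FindBestRotation(const float* xs, const float* ys, int count,
                       int numAngles)
{
    float bestAngle = 0.0f;
    float bestArea = FLT_MAX;
    for (int a = 0; a < numAngles; ++a)
    {
        // Only [0, pi/2) needs testing: a 90 degree turn repeats the AABB.
        float angle = (float)a * (1.5707963f / (float)numAngles);
        float c = std::cos(angle), s = std::sin(angle);
        float minX = FLT_MAX, minY = FLT_MAX;
        float maxX = -FLT_MAX, maxY = -FLT_MAX;
        for (int i = 0; i < count; ++i)
        {
            float x = c * xs[i] - s * ys[i];
            float y = s * xs[i] + c * ys[i];
            if (x < minX) minX = x; if (x > maxX) maxX = x;
            if (y < minY) minY = y; if (y > maxY) maxY = y;
        }
        float area = (maxX - minX) * (maxY - minY);
        if (area < bestArea) { bestArea = area; bestAngle = angle; }
    }
    return bestAngle;
}
```

As discussed later in the thread, the chosen rotation has to stay stable across frames, or the tighter fit trades one kind of shimmer for another.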

##### Share on other sites

Do you mean that you try rotating the light's camera around its (local) forward vector? That's a good idea, actually - assuming that the result remains stable over multiple frames. I do what (I think) most people do and pick an arbitrary/constant up vector for the shadow camera, but that does lead to waste, like you're saying. It seems like, ultimately, you want your up vector to be based on the pitch of the view frustum? Not sure - there are quite a few different cases that might make a direct solution tricky, vs. your simple approach. Do you just test N quantized rotations?

##### Share on other sites

Rotating the cascades based on the camera rotation is called parallel-split shadow maps: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch10.html

Actually, my method always tests that rotation too, but it's not usually the best rotation.