Shadowmapping Geometry Clipmaps

So, I want to render my terrain from the point of view of my light source. It seems pretty excessive, however, to do all the vertex texture fetches associated with geoclipmapping twice.

That is what you want to do, and that is how it is done. The terrain is drawn from a different perspective in the shadow mapping pass (from the sun's perspective) and the texture fetches and results of the vertex transformations are different from when you draw the terrain from your camera's point of view.

However, you can render the terrain at a quarter of the polygon count for the shadow map pass, and use a filtering technique that softens the shadow edges (VSM/ESM and their variants), and the result will still look good.
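For reference, the VSM flavour stores depth moments rather than plain depth in the shadow map. A minimal sketch of the store-pass fragment shader, assuming the linear light-space depth is handed down from the vertex shader as v_lightDepth (an illustrative name, not from this thread):

[code]
// VSM store pass: write depth and depth^2 so the lookup pass can
// reconstruct mean and variance and soften edges with Chebyshev's
// inequality. v_lightDepth is assumed to be linear light-space depth.
varying float v_lightDepth;

void main()
{
    float d = v_lightDepth;
    gl_FragColor = vec4(d, d * d, 0.0, 0.0);
}
[/code]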


I'd also have to set up a special vertex shader for the shadow-mapped terrain portion (to transform the clip windows appropriately for the camera and sample the textures for the height values). This would be different from my normal shadow vertex shader (which just does the basic gl_ModelViewProjectionMatrix * gl_Vertex for regular objects).

To draw your terrain from a new perspective, you just pass in a different world-view-projection matrix. You don't need a different vertex shader (you do need a different pixel shader, because the only thing the shadow map pass's pixel shader outputs is a depth value).
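To be concrete: if the shadow map is a hardware depth texture attached to an FBO, that pixel shader can be as trivial as this sketch; with a depth-only attachment the body can even be empty, since the hardware writes depth for you.

[code]
// Shadow-map pass fragment shader: no texturing, no lighting.
// With a depth-only FBO attachment, depth is written automatically,
// so nothing needs to happen here.
void main()
{
}
[/code]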

However, I would make a different vertex shader anyway, because the only data you need from the shadow mapping terrain pass is per-pixel depth. So you can render your terrain with no textures applied, no normal map lookups, and so on. Depending on your needs, you may also choose to forgo ring fixups or whatever you use to avoid cracks in the terrain.
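As a rough sketch of what that stripped-down clipmap vertex shader could look like (uniform names such as u_heightmap, u_clipScale, u_clipOffset, u_invTerrainSize and u_lightMVP are illustrative, not from these posts). The height fetch is the same as in the camera pass; only the matrix changes:

[code]
// Depth-only clipmap vertex shader (GLSL 1.20 style, to match the
// gl_* built-ins mentioned above). The grid vertex arrives as a 2D
// position in gl_Vertex.xy; everything else comes from uniforms.
uniform sampler2D u_heightmap;   // clipmap height texture (vertex texture fetch)
uniform vec2  u_clipOffset;      // world-space offset of this clip ring
uniform float u_clipScale;       // world units per grid cell for this ring
uniform vec2  u_invTerrainSize;  // 1 / terrain extent, for UV mapping
uniform mat4  u_lightMVP;        // the LIGHT's view-projection, not the camera's

void main()
{
    vec2 worldXZ = gl_Vertex.xy * u_clipScale + u_clipOffset;

    // Same height lookup as the camera pass, so the shadow caster
    // geometry matches what the camera actually sees.
    float h = texture2DLod(u_heightmap, worldXZ * u_invTerrainSize, 0.0).r;

    gl_Position = u_lightMVP * vec4(worldXZ.x, h, worldXZ.y, 1.0);
}
[/code]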

Overall you should be able to get away with a much faster rendering of the terrain at the shadow map pass than your usual textured+lit terrain pass.

[quote name='Olhovsky' timestamp='1310273163' post='4833238']
That is what you want to do, and that is how it is done. The terrain is drawn from a different perspective in the shadow mapping pass (from the sun's perspective) and the texture fetches and results of the vertex transformations are different from when you draw the terrain from your camera's point of view.
[/quote]


Wouldn't this give me shadows that don't match the geometry I'm seeing? It seems like I want the geometry transformed for LOD around the camera, rendered from the perspective of the light. Furthermore, with Transform Feedback I can quickly render the same terrain three times: once for the eye pass, once for the shadow pass, and once inverted for a surface reflection pass...


Feedback will not help, because the POVs are different: the camera's view point is not the light's view point. So it is kind of a no-win if you use stream out (and I guess you will not be rendering the full terrain grid, or will at least cull away the unnecessary grids for each view point).
Rendering (and culling) from the light's POV for the shadow pass will not be performance intensive, since depth-only passes are quite fast on modern GPUs. The depth-only pass goes through a separate (read: optimized) path in the pipeline when rendering to the depth/floating-point texture.
I think parallel-split shadow maps or cascaded shadow maps will yield good results without any performance drop. Vertex transforms are pretty fast anyway.
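For what it's worth, the per-fragment cascade pick in the lighting shader is cheap too. A sketch assuming three cascades, with illustrative names (u_shadowMap0..2, u_splitDepths, v_viewDepth, v_shadowCoord0..2) not taken from any particular implementation:

[code]
// Cascaded shadow lookup: pick the split whose far plane covers this
// fragment's view-space depth, then do a regular shadow compare.
uniform sampler2DShadow u_shadowMap0;
uniform sampler2DShadow u_shadowMap1;
uniform sampler2DShadow u_shadowMap2;
uniform vec2 u_splitDepths;   // far planes of the first two cascades
varying float v_viewDepth;    // positive view-space depth from the vertex shader
varying vec4 v_shadowCoord0;  // fragment position in each cascade's clip space
varying vec4 v_shadowCoord1;
varying vec4 v_shadowCoord2;

void main()
{
    float s;
    if (v_viewDepth < u_splitDepths.x)
        s = shadow2DProj(u_shadowMap0, v_shadowCoord0).r;
    else if (v_viewDepth < u_splitDepths.y)
        s = shadow2DProj(u_shadowMap1, v_shadowCoord1).r;
    else
        s = shadow2DProj(u_shadowMap2, v_shadowCoord2).r;

    // Modulate however the rest of the lighting wants; shown as grey here.
    gl_FragColor = vec4(vec3(s), 1.0);
}
[/code]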

@bluntman

I haven't read much on SH, but I'm sure they work quite well with static point lights. However, wouldn't changing the direction of the directional light cause the SH coefficients to be invalidated? In that case I think (since the directional light could not be varied, at least not in motion) a baked light map would be better. Having said all that, I know the intensity can still be varied, so it's a good tradeoff.
What if everyone had a restart button behind their head ;P


No, in the case of dynamic directional lighting the SH represents the lighting environment for ALL directions that the light could come from. So you simply multiply the SH encoded into the vertices or texture by an SH that represents the dynamic light (i.e. an SH coefficient set representing a lit point in the direction of the directional light).
Really, just look at the video I posted. It uses SH to encode the light from all possible directions, then allows the light direction to be changed in real time, and all it costs is 9 muls in either the vertex or pixel shader, plus some extra storage in either the verts or the textures.
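Those "9 muls" come out to three dot products in the shader. A sketch, assuming the per-vertex transfer is packed into three vec3 attributes and the light's SH into three vec3 uniforms (all names illustrative):

[code]
// Per-vertex PRT: the stored transfer coefficients (how this vertex
// responds to light from every direction) dotted against the light's
// SH coefficients. Three dot products = the "9 muls".
attribute vec3 a_transfer0;  // transfer coefficients 0..2
attribute vec3 a_transfer1;  // 3..5
attribute vec3 a_transfer2;  // 6..8
uniform vec3 u_lightSH0;     // dynamic light projected into SH on the CPU
uniform vec3 u_lightSH1;
uniform vec3 u_lightSH2;
varying float v_irradiance;

void main()
{
    v_irradiance = dot(a_transfer0, u_lightSH0)
                 + dot(a_transfer1, u_lightSH1)
                 + dot(a_transfer2, u_lightSH2);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
[/code]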


That would be really nice. I am at work so I couldn't check the video. But I can see how that would work, as we can assume the directional light is a point light, calculate the coefficients, and then only use the coefficients depending on the direction of the light.

Thanks,
obhi
What if everyone had a restart button behind their head ;P




You don't need to assume it is a point light at all! Spherical harmonics are great at encoding complex lighting environments (not as great as Haar wavelets, apparently, but I haven't looked into those). Think of it as compressing a full environment map into just a few numbers (massively lossy, of course). Another way to think of an environment map is "what colour/brightness is the incoming light from each possible direction?".

So you can reverse this, and instead encode into the environment map for a single point (vertex or texel) what colour/brightness that point is when a directional light is cast on it from each possible direction. In some cases it will be lit, in some it will be shadowed by other geometry, and in some it will receive secondary illumination from ambient lighting and light bounces. Then you encode this environment into an SH with a limited number of coefficients, and hard-code it into the vertex data or textures.

When you want to simulate a directional light, you encode the directional light into the same number of SH coefficients and simply multiply all the environment coefficients by these, like a mask, in your shaders. The directional light's SH can be created by taking a cardinal-axis SH and rotating it (there is a fairly easy way to rotate SH) to the direction of the light. If you want, you can also create much more complex lighting environments and apply them instead.

Google for precomputed radiance transfer (PRT) and spherical harmonics and a few papers will turn up.
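For the "encode the directional light into the same number of SH coefficients" step, evaluating the order-2 real SH basis in the light's direction gives the mask. You would normally do this once on the CPU, but expressed in the same GLSL as the sketches above (constants are the standard ones from Ramamoorthi and Hanrahan's irradiance environment map work; packing matches the transfer sketch):

[code]
// Evaluate the nine order-2 real SH basis functions in direction d
// (assumed normalized). Scale the result by the light's colour and
// intensity to build the directional light's SH "mask" described above.
void evalSH9(in vec3 d, out vec3 sh0, out vec3 sh1, out vec3 sh2)
{
    sh0 = vec3(0.282095,
               0.488603 * d.y,
               0.488603 * d.z);
    sh1 = vec3(0.488603 * d.x,
               1.092548 * d.x * d.y,
               1.092548 * d.y * d.z);
    sh2 = vec3(0.315392 * (3.0 * d.z * d.z - 1.0),
               1.092548 * d.x * d.z,
               0.546274 * (d.x * d.x - d.y * d.y));
}
[/code]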



Ah, I saw the DX demo for PRT the other day, and the video too. Also your planet rendering videos; really great stuff. If I understand you correctly, then since the SH can be rotated, multiplying the directional light's SH coefficients with the precomputed per-vertex or per-texel transfers will work out even when the directional light's direction changes. I will dig up some articles on this once I get my renderer ready. You really got me interested in the procedural generation, though. Thanks.




What if everyone had a restart button behind their head ;P
I also saw your planet rendering videos, bluntman; very impressive stuff, and your atmospheric scattering is absolutely gorgeous. How do you manage the terrain generation? Do you do spherical geometry clipmaps? The cube method (six individual terrain algos sewn together)?

[quote]Wouldn't this give me shadows that don't match the geometry I'm seeing? It seems like I want the geometry transformed for LOD around the camera, rendered from the perspective of the light.[/quote]

Drawing the terrain from the perspective of the light is exactly what I was suggesting. Set the terrain LOD center to the same center that was used when drawing from the perspective of the main camera.

If you already understand this, then I guess I misunderstood what problem you're trying to solve. If you draw a terrain that outputs only depth, at a reduced LOD, from the light's perspective, it should draw very fast (I wouldn't be surprised if it draws 10 times faster than your regular terrain pass, assuming you have a reasonably sophisticated terrain), and it will produce accurate shadows. This is also the simplest solution that I know of.

So what else is there?

