
### #1 PolyVox (Members, Reputation: 579)


Posted 24 November 2009 - 02:08 AM

Hi all, I have a working implementation of Exponential Shadow Maps, but there are a couple of aspects which I don't fully understand and would like clarified. Firstly, I am currently writing linear depth (though multiplied by a depth scale factor) into the shadow map, which also appears to be what Marco Salvi does in his sample code. However, during his 'Exponential Shadow Maps Rendering Breakdown' he states:
Quote:
 1. Render the exponential shadow map as seen from the light. (Note: my emphasis)
 2. ...
This implies to me that maybe I should be rendering something else (perhaps exp(linearDepth)). I think my current approach is correct, but can anyone confirm?

My second question is: why do we have to perform the blur convolution in log space (especially given that I am writing linear depth)? Marco Salvi's example appears to go into log space, perform the convolution, and then come out of log space. But if I replace the log space convolution with a normal convolution I still get an acceptable result (discussed here but went off track). Is the log space convolution really necessary? Does it address real mathematical issues, or just precision/range issues?

Perhaps my two questions are related, and you only have to perform a log space convolution if you wrote exponential values into the shadow map, or something like that. But that doesn't seem to be the case from looking at his code. Anyway, thanks for any insight.

### #2 Marco_Salvi (Members, Reputation: 100)


Posted 24 November 2009 - 03:04 AM

PolyVox,

There are two variations of the algorithm.
The first one involves rendering exponential depth and applying a linear filter/blur, while the second one requires rendering linear depth and applying a log filter/blur. (Note: the final occlusion term is computed in slightly different ways.)
If you render exponential depth you might run out of the max available range for single precision floating point numbers. In that case you may want to switch to linear depth and log filtering; otherwise you really don't need to.

Marco
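The two variants Marco describes can be sketched numerically. Below is an illustrative Python sketch (not shader code); the constant `K`, the depths, and the kernel weights are made-up example values, and the log-space route uses the standard max-shift (log-sum-exp) trick to stay in range:

```python
import math

K = 30.0                      # ESM exponential constant (assumed example)
depths = [0.41, 0.42, 0.45]   # linear occluder depths in [0, 1] (made up)
weights = [0.25, 0.5, 0.25]   # blur kernel weights, sum to 1 (made up)

# Variant 1: store exp(k * d) in the shadow map and blur it linearly.
def blur_exponential(depths, weights, k):
    return sum(w * math.exp(k * d) for w, d in zip(weights, depths))

# Variant 2: store linear depth and blur in log space.  Shifting by the
# max exponent keeps every exp() argument <= 0, so the sum never
# overflows even when k * d itself is far too large for exp().
def blur_log_space(depths, weights, k):
    m = max(k * d for d in depths)
    s = sum(w * math.exp(k * d - m) for w, d in zip(weights, depths))
    return m + math.log(s)    # equals the log of the variant-1 result

# Both routes give the same filtered value, up to float error:
assert abs(math.log(blur_exponential(depths, weights, K))
           - blur_log_space(depths, weights, K)) < 1e-9
```

With a large exponent (say k * d around 700), variant 1 would overflow even a 32-bit float, while `blur_log_space` still returns a finite answer, which is the range issue Marco mentions.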

### #3 PolyVox (Members, Reputation: 579)


Posted 24 November 2009 - 04:23 AM

Thanks Marco,

That's just the kind of information I was looking for. I am hoping to get away with a 16-bit floating point target for the shadow map, so I guess it will make most sense to use linear depth and a log blur. I will do some experiments...

David

p.s. Thanks for joining GameDev just to answer my question!

### #4 Marco_Salvi (Members, Reputation: 100)


Posted 24 November 2009 - 05:08 AM

Half precision linear depth and log filtering might be enough to avoid issues with the max representable floating point value, but you might also get artifacts due to insufficient accuracy.
If I were you I'd start with full single precision exponential depth and linear filtering (easier to get up and running) and then move to more sophisticated implementations, knowing that you have a golden output to compare to.

Regarding log filtering, Natalya Tatarchuk has written down a nice derivation (which is more accurate than what I originally came up with for all practical cases) and you can find it in this presentation, along with a lot of very good material about pre-filterable shadow map representations and other topics:

http://www.bungie.net/images/Inside/publications/siggraph/Bungie/SIGGRAPH09_LightingResearch.pptx

Marco

### #5 wolf (Members, Reputation: 823)


Posted 24 November 2009 - 05:59 PM

Hey Marco,
Quote:
 The first one involves rendering exponential depth
You mean you render out an exponential depth value into the shadow map. How would you do this?

The following source looks like the linear approach you describe. It writes linear depth into the shadow map.

output.depth = length(input.light_vec) * depth_scale;

Then later you apply the exponent like this on the shadow map values:

occlusion4 = exp( over_darkening_factor.xxxx * (exponent - receiver.xxxx) );
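To connect the two snippets: the exp() above turns the filtered shadow-map value and the receiver depth into an approximate visibility term. A minimal Python illustration of that term (the constant 30.0 and the depth values are assumed examples, not taken from the shader above):

```python
import math

def esm_occlusion(filtered_occluder_depth, receiver_depth, k=30.0):
    # ~1.0 when the receiver sits at the stored occluder depth (lit);
    # decays toward 0 as the receiver moves behind it (shadowed).
    # The clamp plays the role of saturate() in shader code.
    return min(1.0, math.exp(k * (filtered_occluder_depth - receiver_depth)))

print(esm_occlusion(0.5, 0.5))   # receiver at the occluder depth: 1.0 (lit)
print(esm_occlusion(0.5, 0.6))   # receiver 0.1 behind: exp(-3), about 0.05
```

The exponential gives a soft falloff instead of the binary step of classic shadow mapping, which is what makes the map pre-filterable in the first place.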

### #6 PolyVox (Members, Reputation: 579)


Posted 25 November 2009 - 01:55 AM

Quote:
 Original post by Marco_Salvi
 Half precision linear depth and log filtering might be enough to avoid issues with the max representable floating point value but you might also get artifacts due to insufficient accuracy. If I were you I'd start with full single precision exponential depth and linear filtering (easier to get up and running..) and then you can move to more sophisticated implementations knowing that you have a golden output to compare to.

Thanks, but I've already got the linear depth and log filtering working, so I'll probably stick with that approach. I'm hoping to allow artists to choose the render target precision on a per-light basis, so hopefully we can use 16 bits when possible and 32 when really necessary.
Quote:
 Original post by Marco_Salvi
 Regarding log filtering, Natalya Tatarchuk has written down a nice derivation (which is more accurate than what I originally came up with for all practical cases) and you can find it in this presentation, along with a lot of very good material about pre-filterable shadow map representations and other topics:
 http://www.bungie.net/images/Inside/publications/siggraph/Bungie/SIGGRAPH09_LightingResearch.pptx

That's a useful paper, thanks, especially as I will be trying to support multiple cascades as well.
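The 16-bit precision trade-off discussed above can be sketched numerically. An illustrative check (plain Python; the k = 30 constant is an assumed example, not a value from the thread):

```python
import math

# The largest finite half-precision (16-bit) float is 65504, so storing
# exp(k * d) directly overflows once k * d exceeds log(65504) ~= 11.09.
HALF_MAX = 65504.0
k = 30.0  # assumed ESM exponential constant

overflow_depth = math.log(HALF_MAX) / k
print(overflow_depth)  # roughly 0.37: larger depths overflow a half float

# Storing k * d (scaled linear depth) instead stays far below HALF_MAX
# for depths in [0, 1], which is why linear depth plus a log-space blur
# suits a 16-bit shadow map target.
assert k * 1.0 < HALF_MAX
```

This is the range problem Marco raised in post #2 made concrete: exponential depth only fits a half-float target for a small slice of the depth range, while scaled linear depth fits the whole range comfortably.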
