full-scene luminosity sampling

Hey. I need to implement some kind of luminosity sampling of a rendered scene. That sampling would produce a ratio (typically in the 0.0 - 1.0 range) which I would reuse over the next couple of frames before sampling again. Currently I've got three options on the table, and I'd like to know whether someone could suggest others. I assume I have my rendered scene as a texture that I can sample.

- The first method is pretty bad on a PC. It consists of downsampling the scene several times (2 or 3 passes) down to something like a 12*9 texture (or even smaller), then LockRect() it and read back some value which you set as a vertex/pixel shader constant during further rendering. This is very easy to do, but very poor in terms of performance.

- The second method is close to the first one. The difference is that instead of locking the surface and reading the pixel on the CPU, I would output the luminosity to a 1x1 render target which I sample during further rendering. The inconvenience is that it forces per-pixel luminosity adjustment (unless you can sample a texture from a vertex shader, which isn't widely supported yet).

- The last possibility I see is downsampling the scene to a 12*9 texture. I would then perform N renderings of that texture using a pixel shader that texkills every pixel above a certain luminosity threshold, and use occlusion queries (asynchronously) to retrieve the number of pixels written, which yields a pretty good measure of the overall scene luminosity (this method was explained in the nVidia SDK; a sketch of such a pass follows below).

Do you have any thoughts or advice? Thanks for reading.

JA
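A minimal HLSL sketch of one pass of that third approach, under the assumption that the downsampled scene is bound to a sampler called g_SceneTex and the per-pass threshold arrives in a constant g_Threshold (both names are placeholders, not from the original post). Each pass is drawn with a different threshold while an occlusion query counts the pixels that survive; clip() is the HLSL form of texkill:

```hlsl
// One pass of the occlusion-query luminance measurement (hypothetical names).
sampler2D g_SceneTex  : register(s0);   // downsampled scene, e.g. 12*9
float     g_Threshold : register(c0);   // luminance cutoff for this pass

float4 PS_CountAboveThreshold(float2 uv : TEXCOORD0) : COLOR0
{
    float3 color = tex2D(g_SceneTex, uv).rgb;
    // Rec.601-style luma weights; swap in whatever weighting you prefer.
    float luma = dot(color, float3(0.299, 0.587, 0.114));
    // texkill: pixels below the threshold are discarded, so the occlusion
    // query's pixel count tells you how much of the scene exceeds it.
    clip(luma - g_Threshold);
    return float4(luma.xxx, 1.0);
}
```

On the CPU side you would issue one occlusion query per pass (begin/draw/end) and poll the results a few frames later so the GPU is never stalled.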

I've done something like that. You can copy the texture to a 512x512 render target and let the GPU auto-generate mipmaps. You then sample the lowest mip (the 1x1 level) in a pixel or vertex shader, e.g. via the tex2Dbias intrinsic (or tex2Dlod where vertex texture fetch is available).
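A minimal sketch of that lookup, assuming the averaged scene lives in a 512x512 texture with autogenerated mips bound to g_AvgSampler (the sampler names and exposure constant here are placeholders). A 512x512 texture has mip levels 0 through 9, so a deliberately oversized bias in the w component clamps the fetch to the 1x1 level, whose single texel is the average of the whole chain:

```hlsl
sampler2D g_AvgSampler   : register(s0);  // 512x512 average texture with mips
sampler2D g_SceneSampler : register(s1);  // full-resolution scene color

float4 PS_ToneMap(float2 uv : TEXCOORD0) : COLOR0
{
    float3 color = tex2D(g_SceneSampler, uv).rgb;
    // A bias of +15 overshoots the 9 available levels on purpose, so the
    // sampler clamps to the lowest (1x1) mip regardless of the LOD computed
    // from the screen-space uv derivatives.
    float avgLuma = tex2Dbias(g_AvgSampler, float4(uv, 0.0, 15.0)).r;
    // Very rough exposure scale, just to show where the average would be used.
    return float4(color * (0.5 / max(avgLuma, 0.001)), 1.0);
}
```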

Instead of copying the texture to a standard 512x512 texture, you can do some aggressive pixel shading to compute more than just the scene's average luminance. Pixel shade the starting texture into a 512x512: instead of outputting colors, output luminance values. The mipmapping will average all of these using just the one color channel. You can use the other color channels to store log(luminance) or other values used by your tone mapping operators. I would suggest using the R16G16F format or something with fewer than all 4 color channels to save bandwidth.
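A minimal sketch of that extraction pass, under the assumption that the scene color is bound to a sampler called g_SceneTex and the render target is R16G16F (names are placeholders). Red stores luminance and green stores log(luminance), so the 1x1 mip ends up holding the average of both, the latter approximating the log-average used by common tone-mapping operators:

```hlsl
sampler2D g_SceneTex : register(s0);   // full-resolution scene color

float4 PS_ExtractLuminance(float2 uv : TEXCOORD0) : COLOR0
{
    float3 color = tex2D(g_SceneTex, uv).rgb;
    float luma   = dot(color, float3(0.299, 0.587, 0.114));
    // Small epsilon keeps log() finite for black pixels.
    return float4(luma, log(luma + 0.0001), 0.0, 0.0);
}
```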

G'luck.
