
Help on HDR rendering


Hello.
Let's say I have my light buffer in HDR and I now want to tone map it to LDR. My question is: why does everyone downscale the buffer repeatedly, down to a 1x1 texture, to get the average luminance?
The average luminance formula is:
(1/N) * (sum of the log luminances of all pixels)
where N is the total number of pixels. I have even seen people use compute shaders for the downscaling. Isn't it just possible to write a compute shader that goes over all pixels, sums all the log luminances, and puts the result in a single float? Then we can simply multiply that value by 1/N, on either the CPU or the GPU.
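For what it's worth, here is a minimal CUDA sketch of exactly that single-pass idea (an illustration of the approach in the question, not code from this thread; the kernel and buffer names are made up). Every thread adds the log luminance of one pixel into a single accumulator with an atomic, and the host divides by N:

```
#include <cmath>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Hypothetical kernel illustrating the single-pass idea: every thread adds
// the log luminance of one pixel into one global accumulator.
__global__ void sumLogLuminance(const float* luminance, int n, float* sum)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(sum, logf(luminance[i] + 1e-6f)); // epsilon guards against log(0)
}

int main()
{
    const int width = 1920, height = 1080, n = width * height;
    std::vector<float> lum(n, 0.5f); // stand-in for per-pixel luminance of the HDR light buffer

    float *dLum = nullptr, *dSum = nullptr;
    cudaMalloc(&dLum, n * sizeof(float));
    cudaMalloc(&dSum, sizeof(float));
    cudaMemcpy(dLum, lum.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(dSum, 0, sizeof(float));

    sumLogLuminance<<<(n + 255) / 256, 256>>>(dLum, n, dSum);

    float sum = 0.0f;
    cudaMemcpy(&sum, dSum, sizeof(float), cudaMemcpyDeviceToHost);

    float avgLog = sum / n;          // (1/N) * sum of log luminances, as in the formula above
    float logAverage = expf(avgLog); // exponentiating gives the usual log-average (geometric mean) luminance
    printf("log-average luminance: %f\n", logAverage);

    cudaFree(dLum);
    cudaFree(dSum);
    return 0;
}
```

It gives the right answer, but every pixel contends on the same atomic, which is the main practical drawback compared to a hierarchical reduction.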


That's exactly what downscaling does: it sums the pixel values in parallel.

 

Each thread sums a small number of pixels, so the work is spread across many threads. It wouldn't be nearly as fast if a single thread walked over every pixel and summed them up sequentially.

 

I guess you can change the number of pixels each thread processes (e.g. instead of 2x2 -> 1 pixel you could do 4x4 -> 1 pixel, but I don't know how that would work out in practice).
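As a rough illustration of that per-thread 2x2 reduction, one pass of the downscale chain could look like the following CUDA kernel (a sketch with my own names, not code from this thread):

```
// One downscale pass: each thread averages a 2x2 block of the source
// and writes a single value to the half-resolution destination.
__global__ void downscale2x2(const float* src, int srcW, int srcH,
                             float* dst, int dstW, int dstH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dstW || y >= dstH) return;

    // Clamp so odd source sizes don't read out of bounds.
    int x0 = min(2 * x,     srcW - 1);
    int x1 = min(2 * x + 1, srcW - 1);
    int y0 = min(2 * y,     srcH - 1);
    int y1 = min(2 * y + 1, srcH - 1);

    float sum = src[y0 * srcW + x0] + src[y0 * srcW + x1]
              + src[y1 * srcW + x0] + src[y1 * srcW + x1];
    dst[y * dstW + x] = 0.25f * sum;
}
```

The host launches this repeatedly, halving the destination size each time, until the output is 1x1; run on log-luminance values, the final texel is the mean of the logs (exactly so for power-of-two sizes, approximately so with the edge clamping).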


Isn't it just possible to write a compute shader that goes over all pixels, sums all the log luminances, and puts the result in a single float? Then we can simply multiply that value by 1/N, on either the CPU or the GPU.

 

This can be done with a parallel reduction algorithm:

http://developer.download.nvidia.com/compute/cuda/1.1-Beta/x86_website/projects/reduction/doc/reduction.pdf
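In case it is useful, here is a stripped-down version of the basic shared-memory reduction that paper describes (its later variants are considerably faster; this only shows the shape of the algorithm):

```
// Simplified tree reduction from the linked paper: each block reduces
// blockDim.x elements in shared memory and writes one partial sum.
// Assumes blockDim.x is a power of two.
__global__ void reduceSum(const float* in, float* out, int n)
{
    extern __shared__ float sdata[];
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

    sdata[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Halve the number of active threads each step.
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = sdata[0];
}
```

Each launch collapses blockDim.x values into one partial sum per block, so a handful of launches takes even a full-resolution buffer down to a single value, which you then divide by N.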

Which basically amounts to downsampling in the case of a 2D buffer. :)

You can store your 2D buffer as a 1D contiguous buffer of width*height elements and apply the parallel reduction algorithm directly to it.
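A hedged sketch of what the host side could look like with the reduceSum kernel above, treating the image as a flat row-major array (index = y * width + x); the driver function and buffer names are made up:

```
// Hypothetical host-side driver: repeatedly reduce n values down to 1.
// dA initially holds the width*height log-luminance values (row-major);
// dB is a scratch buffer of at least ceil(n / block) floats.
// Note: dA is overwritten once more than two passes are needed.
float reduceToScalar(float* dA, float* dB, int n, int block)
{
    float* src = dA;
    float* dst = dB;
    while (n > 1) {
        int blocks = (n + block - 1) / block;
        reduceSum<<<blocks, block, block * sizeof(float)>>>(src, dst, n);
        float* tmp = src; src = dst; dst = tmp; // partial sums feed the next pass
        n = blocks;
    }
    float total = 0.0f;
    cudaMemcpy(&total, src, sizeof(float), cudaMemcpyDeviceToHost);
    return total; // divide by width*height for the average log luminance
}
```

The two buffers are ping-ponged so a pass never reads and writes the same memory.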

Same end result, but you may be able to do it faster with a well-written reduction.
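For example, one of the bigger wins in the linked paper is having each thread accumulate many elements in registers with a grid-stride loop before the shared-memory tree step, so far fewer blocks (and launches) are needed. A hedged sketch adapting the reduceSum kernel above:

```
// Variant of reduceSum where each thread first accumulates many elements
// in registers (grid-stride loop), then the block does one tree reduction.
// Assumes blockDim.x is a power of two.
__global__ void reduceSumStrided(const float* in, float* out, int n)
{
    extern __shared__ float sdata[];
    unsigned int tid = threadIdx.x;
    float acc = 0.0f;

    // Each thread walks the input with a stride of the whole grid.
    for (unsigned int i = blockIdx.x * blockDim.x + tid;
         i < n;
         i += blockDim.x * gridDim.x)
        acc += in[i];

    sdata[tid] = acc;
    __syncthreads();

    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = sdata[0];
}
```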
