
Help on HDR rendering


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
5 replies to this topic

#1 BlackBrain   Members   -  Reputation: 311


Posted 05 April 2014 - 05:50 AM

Hello.
Let's say I have my light buffer in HDR and I now want to tonemap it down to LDR. My question is: why does everyone downscale multiple times to get a 1x1 texture holding the average luminance?
Say this is the average luminance formula:
(1/N) * (sum of the log luminances of all pixels)
where N is the total number of pixels. I have even seen people use compute shaders for the downscaling. Isn't it possible to just write a compute shader that goes over all pixels, sums all the log luminances, and stores the result in a single float? Then we can simply multiply that float by 1/N, on either the CPU or the GPU.
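For reference, here is the formula above computed naively on the CPU (a sketch, not production code; `luminances` is assumed to be a flat list of linear luminance values, and the `eps` term is a common guard against log(0) that the formula itself doesn't mention):

```python
import math

def average_log_luminance(luminances, eps=1e-4):
    """(1/N) * sum(log(eps + L)) over all pixels, as in the formula above.

    eps guards against log(0) on pure-black pixels; its exact value is
    an implementation choice, not part of the formula."""
    n = len(luminances)
    return sum(math.log(eps + lum) for lum in luminances) / n
```

Tone-mapping operators such as Reinhard's then typically take exp() of this value to recover the geometric-mean ("log-average") luminance.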


#2 Waterlimon   Crossbones+   -  Reputation: 2455


Posted 05 April 2014 - 05:59 AM

That's exactly what downscaling does: it sums the pixel values in parallel.

Every thread sums a small number of pixels, so you can run many threads at once. It wouldn't be nearly as fast if a single thread went through every pixel and summed them up sequentially.

You can also change the number of pixels each thread processes (e.g. instead of 2x2 -> 1 pixel you could do 4x4 -> 1 pixel, but I don't know how that would work out in practice).
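A minimal CPU-side sketch of that repeated 2x2 downscale (an illustration under assumptions: a square, power-of-two buffer of luminance values stored row-major; on the GPU each output texel would be computed by one thread):

```python
def downscale_pass(buf, size):
    """One 2x2 -> 1 reduction pass: each output cell holds the sum of a
    2x2 block of the input. buf is a flat row-major list of size*size
    values; returns a flat list of (size//2)*(size//2) partial sums."""
    half = size // 2
    out = [0.0] * (half * half)
    for y in range(half):
        for x in range(half):
            out[y * half + x] = (buf[(2 * y) * size + 2 * x]
                                 + buf[(2 * y) * size + 2 * x + 1]
                                 + buf[(2 * y + 1) * size + 2 * x]
                                 + buf[(2 * y + 1) * size + 2 * x + 1])
    return out

def reduce_to_1x1(buf, size):
    """Apply downscale passes until a single value (the total sum) remains."""
    while size > 1:
        buf = downscale_pass(buf, size)
        size //= 2
    return buf[0]
```

Each pass quarters the number of elements, so a WxW buffer needs log2(W) passes, and within a pass every output cell is independent of the others.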


o3o


#3 esoufiane   Members   -  Reputation: 504


Posted 05 April 2014 - 06:23 AM

> Isn't it just possible to write a compute shader that goes over all pixels and sum all of log luminances and put it in a float . then simply we can multiply this float value by 1/N either by cpu or gpu ?

 

This can be done with a parallel reduction algorithm:

http://developer.download.nvidia.com/compute/cuda/1.1-Beta/x86_website/projects/reduction/doc/reduction.pdf



#4 Bacterius   Crossbones+   -  Reputation: 8468


Posted 05 April 2014 - 06:26 AM

 

>> Isn't it just possible to write a compute shader that goes over all pixels and sum all of log luminances and put it in a float . then simply we can multiply this float value by 1/N either by cpu or gpu ?
>
> This can be done by using Parallel Reduction algorithm:
> http://developer.download.nvidia.com/compute/cuda/1.1-Beta/x86_website/projects/reduction/doc/reduction.pdf

Which basically amounts to downsampling in the case of a 2D buffer :)


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#5 esoufiane   Members   -  Reputation: 504


Posted 05 April 2014 - 07:13 AM

 

 

>>> Isn't it just possible to write a compute shader that goes over all pixels and sum all of log luminances and put it in a float . then simply we can multiply this float value by 1/N either by cpu or gpu ?
>>
>> This can be done by using Parallel Reduction algorithm:
>> http://developer.download.nvidia.com/compute/cuda/1.1-Beta/x86_website/projects/reduction/doc/reduction.pdf
>
> Which basically amounts to downsampling in the case of a 2D buffer :)

You can store your 2D buffer as a 1D contiguous buffer with width*height elements and apply the parallel reduction algorithm to that.
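A CPU sketch of that 1D pairwise reduction (a stand-in for the CUDA kernels in the linked PDF, assuming a power-of-two length; on the GPU, each addition in the inner loop would be performed by its own thread):

```python
def parallel_reduce_sum(values):
    """Tree reduction over a flat buffer: each step halves the active
    range by adding element i + stride into element i. On a GPU the
    inner loop's additions are independent and run as one thread per
    pair, so the whole sum takes log2(N) steps."""
    buf = list(values)  # copy so the caller's buffer is not mutated
    stride = len(buf) // 2
    while stride > 0:
        for i in range(stride):
            buf[i] += buf[i + stride]
        stride //= 2
    return buf[0]
```

This is the same tree-shaped summation as the repeated 2x2 downscale, just laid out over a 1D index range instead of a 2D texture.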



#6 MJP   Moderators   -  Reputation: 10779


Posted 05 April 2014 - 12:39 PM

 

 

>>> Isn't it just possible to write a compute shader that goes over all pixels and sum all of log luminances and put it in a float . then simply we can multiply this float value by 1/N either by cpu or gpu ?
>>
>> This can be done by using Parallel Reduction algorithm:
>> http://developer.download.nvidia.com/compute/cuda/1.1-Beta/x86_website/projects/reduction/doc/reduction.pdf
>
> Which basically amounts to downsampling in the case of a 2D buffer :)

Same end result, but you may be able to do it faster with a well-written reduction.





