# HDR - Reinhard's local operator and gaussian blur


## Recommended Posts

Hi! Yes, another HDR question :) I'm trying to understand and implement a local tone mapping operator in HLSL, but it isn't going too well. Recently I was pointed to "Perceptual Effects in Real-Time Tone Mapping", which describes an implementation of this algorithm in a somewhat trickier way than my previous attempt (Goodnight's "Interactive Time-Dependent Tone Mapping Using Programmable Graphics Hardware"). Maybe somebody has too much free time and can help me :)

The basic idea is that you must find the largest area around the pixel where there is no high luminance contrast. To do this, the image is blurred with a set of Gaussian blurs with varying standard deviation and kernel size. The difference between adjacent convolutions is calculated, and if it's bigger than a given threshold, the value of the previous convolution is taken as the local pixel luminance. Goodnight's implementation uses kernel sizes from 1x1 (or rather 3x3) up to 43x43, which is quite slow. In the new paper an approximation of the Gaussian distribution is used. I don't really get how a kernel size can be equal to 0.35, but I assume they meant that "s" is the standard deviation for the consecutive Gaussian blurs (is that correct?). Now here's the approximation: basically they downsample the image and then apply a Gaussian blur. And the question: what is the kernel size for all these blurs? Is it always 3x3? Is it different for every convolution? Also, they say that:
Quote:
 At each scale of the Gaussian pyramid, we render the successive scale by convolving the previous scale with the appropriate Gaussian.
I need all blur results to have the same size so I can calculate the difference between them. How do I do that? Do I have to create textures for all the downsampled versions so I can perform the blurs on them, or is there an easier way? Another thing: if I use Goodnight's method with kernels from 3x3 to 43x43, should each consecutive blur be applied to the result of the previous blur, or always to the input image (which makes more sense to me)? Thanks in advance
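For reference, the scale-selection idea described above can be sketched in plain Python/NumPy (not HLSL). This is only a sketch under assumptions: the parameter names follow Reinhard's 2002 paper (key value `a`, sharpening `phi`, threshold `eps`), and the scale ratio of 1.6 and the number of scales are conventional choices, not values confirmed by the thread:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1D Gaussian weights, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur with edge padding; radius ~ 3*sigma."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    pad = np.pad(img, radius, mode='edge')
    # horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, tmp)

def local_adaptation(lum, a=0.18, phi=8.0, eps=0.05, num_scales=8, s0=1.0, ratio=1.6):
    """Pick, per pixel, the largest scale with no strong contrast step.

    V_i = (blur_i - blur_{i+1}) / (2^phi * a / s_i^2 + blur_i); the first
    scale where |V_i| exceeds eps marks the contrast boundary, and the
    luminance of that (previous) convolution is used.
    """
    sigmas = [s0 * ratio**i for i in range(num_scales)]
    blurs = [blur(lum, s) for s in sigmas]
    result = blurs[-1].copy()                 # fallback: coarsest scale
    chosen = np.zeros(lum.shape, dtype=bool)
    for i in range(num_scales - 1):
        V = (blurs[i] - blurs[i + 1]) / (2.0**phi * a / sigmas[i]**2 + blurs[i])
        sel = (~chosen) & (np.abs(V) > eps)
        result[sel] = blurs[i][sel]
        chosen |= sel
    return result
```

Note that all blurs here are computed at full resolution (the Goodnight-style approach); the pyramid approximation discussed later in the thread replaces the growing kernels with downsampling.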

##### Share on other sites

I'm going to second this question. The paper gOnzo is talking about is definitely important as a piece of research, but very confusing as a guide or explanation for your own HDR implementation. If anyone has anything to say about this paper, please speak up.

To address your particular questions, gOnzo: as I mentioned in a previous thread, I have 'implemented' what is described in the paper. However, I had, and still have, all the same questions you have. My 'implementation' just filled in the holes with my best guesses; I came close, but I wasn't spot on.

##### Share on other sites
Nonoptimalrobot, could you send me your implementation, or describe it in detail?

If not, could you just answer me these questions?

- do you always use kernel of size 3x3?

- is the "s" parameter used for the standard deviation only? What about the "s" parameter in the equation for V (the difference between adjacent convolutions)? Is it the same value (the standard deviation), or just the index of the current iteration, or something else?

##### Share on other sites
Quote:
 - do you always use kernel of size 3x3?

I'll try to answer this question: you want a square kernel most of the time, so 2x2, 3x3, 4x4, etc. This makes it easier to make the kernel round :-).

For example, a simple 3x3 filter kernel can be rounded quite nicely by weighting the individual fetches and averaging the result ... you do not necessarily need to apply a full Gaussian blur. This is an advantage on non-PC platforms.
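As an illustration of that idea, here is a sketch in Python/NumPy rather than shader code. The corner taps are down-weighted so the footprint is roughly circular; the specific binomial weights (1-2-1) are my assumption, not from the post, though they also happen to be the standard cheap approximation of a small Gaussian:

```python
import numpy as np

# 3x3 kernel: corners weighted 1, edges 2, center 4; normalized so the
# nine weighted fetches average to the original brightness
kernel3x3 = np.array([[1., 2., 1.],
                      [2., 4., 2.],
                      [1., 2., 1.]]) / 16.0

def filter3x3(img, k):
    """Apply a 3x3 kernel by weighting all nine fetches and summing."""
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out
```

In a pixel shader the same thing is nine texture fetches at offset coordinates, multiplied by these weights.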

##### Share on other sites
Quote:
 I'll try to answer this question: you want a square kernel most of the time, so 2x2, 3x3, 4x4, etc. This makes it easier to make the kernel round :-).

Thanks for answering, but in this case the problem is a bit different :)
This implementation approximates the Gaussian convolutions to increase performance. So instead of computing Gaussian convolutions on a fixed-size image and increasing the kernel size with every iteration (3x3, 5x5, 7x7, 11x11 and so on, up to ~43x43), they downscale the image and use a fixed 3x3 kernel, which should give more or less the same result (see the last picture in my first post). At least I think that's what they do, but I'm not sure, which is why I'm asking :)
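If that reading is right, the pyramid could be sketched like this (Python/NumPy). The downsample factor of 2 and the nearest-neighbour upsample used to bring every level back to full resolution for differencing are my assumptions, chosen for simplicity; a real implementation would likely use bilinear sampling:

```python
import numpy as np

def blur3x3(img):
    """Fixed 3x3 binomial blur, the cheap Gaussian approximation."""
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out

def downsample(img):
    """Halve resolution by dropping every other row/column."""
    return img[::2, ::2]

def upsample_to(img, shape):
    """Nearest-neighbour upsample back to the reference resolution."""
    ys = np.arange(shape[0]) * img.shape[0] // shape[0]
    xs = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(ys, xs)]

def pyramid_levels(lum, levels=5):
    """Blur with a fixed 3x3 kernel at each scale, downsampling between
    scales; every level is upsampled back to full size so adjacent
    levels can be differenced directly."""
    out = []
    cur = lum
    for _ in range(levels):
        cur = blur3x3(cur)
        out.append(upsample_to(cur, lum.shape))
        cur = downsample(cur)
    return out
```

Because each level is blurred *after* the previous level was already blurred and downsampled, the effective Gaussian width roughly doubles per level, mimicking the growing 3x3...43x43 kernels at a fraction of the cost.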
