# HDR Questions


## Recommended Posts

From the different articles I have read, HDR is done like this:

1) First render the scene to an fp16 render target (I know this can be done with RGBE encoding into a 32-bit render target too).
2) Apply tone mapping, so that the HDR values end up in LDR.
3) Apply a bloom filter:
- Downsample the image
- Apply a horizontal bloom filter (I intend to use a standard Gaussian filter)
- Apply a vertical bloom filter
4) Blend the original scene with the downsampled blurred image.

Voila! apparently [smile]

But I have some questions regarding the different steps:

1) How does one specify the light intensity in the scene when using HDR? If using the Phong reflection model, would it be having the light intensity in the range 0.0f to 65536.0f, like this: ambient light = (240.0f, 3054.0f, 56042.0f), diffuse light = (32067.0f, 26094.0f, 60542.0f), specular light = (65021.0f, 1052.0f, 42.0f)?

2) As I don't have any experience with image/signal processing, bear with me on this one. What steps are involved when downsampling an image? If I am downsampling the image to 1/4 of the original size, how is it done? Do the downsampled values consist of averages, or are they values of the original pixels? Does it take the first pixel and use that value in the downsampled image, the 5th original pixel as the value of the 2nd pixel in the downsampled image, and so forth?

3) Does anybody have a visual explanation of how a Gaussian filter works?

4) How is the blending between the original image and the downsampled image done? Theoretically, not implementation-wise.

That would be all for now. If anyone can give me an explanation, or links to such, for each of these it would be appreciated a great deal [wink]

Best Regards

[Edited by - thallish on March 19, 2007 8:31:49 AM]

##### Share on other sites
Okay, let's have a go at answering some of your questions [smile]

Quote:
 Original post by thallish: 2) Apply tone mapping, so that the HDR values end up in LDR.
Yes, this is correct, but bear in mind that you aren't *required* to perform any other post-processing such as bloom/glare - a lot of the time it's just the new lens flare [lol].
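To make the tone-mapping step concrete, here's a minimal sketch of the Reinhard operator - one common choice, assumed here since no particular operator was named:

```python
def reinhard(luminance):
    # Compresses HDR luminance in [0, inf) into the LDR range [0, 1).
    # Dark values pass through almost unchanged; bright values are
    # squashed hard, so nothing ever clips above 1.0.
    return luminance / (1.0 + luminance)

reinhard(0.5)     # a dark pixel, barely changed
reinhard(1000.0)  # a very bright pixel, still below 1.0
```

A real implementation would do this per-pixel in the shader, usually with an exposure/key scale applied first, but the curve itself is this simple.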

Quote:
 Original post by thallish: 3) Apply a bloom filter: - Downsample the image - Apply a horizontal bloom filter (I intend to use a standard Gaussian filter) - Apply a vertical bloom filter
If you want a simple Gaussian bloom then this would be correct, streak filters can require many more passes and steps.

Quote:
 Original post by thallish: 4) Blend the original scene with the downsampled blurred image
Yes, but you might want to add the bloom during the tone-mapping pass so as to reduce any dependencies on hardware blending capabilities.
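As a sketch of what "add the bloom during the tone-mapping pass" means: instead of alpha-blending the blurred image over the back buffer in a separate pass, the pixel shader just sums the scene sample and the bloom sample before applying the curve. The Reinhard-style curve and the `bloom_strength` parameter are illustrative assumptions:

```python
def tonemap_with_bloom(hdr, bloom, bloom_strength=1.0):
    # Sum the scene sample and the blurred bloom sample first, then run
    # the combined value through the tone-mapping curve. Doing it inside
    # the tone-map pass means no fixed-function blend stage is needed.
    combined = hdr + bloom_strength * bloom
    return combined / (1.0 + combined)
```

Adding the bloom *before* tone mapping also keeps the final value inside the displayable range automatically, which is part of why folding it into that pass is convenient.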

Quote:
 Original post by thallish: 1) How does one specify the light intensity in the scene when using HDR?
However you want. If you're going for physical accuracy then there are lots of distribution graphs you can find (a lot require payment) that give correct initial energy and falloff curves for common light sources. Look into candela per square metre (cd/m²) data.

Alternatively you can just pick an arbitrary range of values - much of the point of HDRI is that you're no longer constrained to an arbitrary range of values [wink]

Quote:
 Original post by thallish: What steps are involved when downsampling an image?
The single most important part is understanding your API's rasterization rules - e.g. Direct3D having centre-texel addressing. You then have to set up your texture coordinates and various offset constants to allow you to fetch multiple samples from neighbouring texels. These constants will typically be 1/w and 1/h, and you can then multiply them by the number of 'steps' away you want to sample.
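A quick sketch of those offset constants, assuming normalized texture coordinates where one texel spans 1/width by 1/height (the function name and layout are illustrative, not any particular API's):

```python
def sample_offsets(width, height, steps=1):
    # One texel is 1/width by 1/height in normalized texture coordinates.
    # These four offsets reach 'steps' texels diagonally away from the
    # centre coordinate - the neighbouring fetch positions a downsample
    # shader would add to its interpolated texture coordinate.
    dx = steps / width
    dy = steps / height
    return [(-dx, -dy), (+dx, -dy), (-dx, +dy), (+dx, +dy)]

sample_offsets(256, 128)  # offsets for a 256x128 source texture
```

You'd upload these as shader constants and add them to the incoming texture coordinate to get the four fetch positions.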

If you've got hardware that can do linear filtering on HDR textures you can (ab)use this to take multiple samples with a single fetch - don't have the reference to hand, but this is covered in one of ATI/AMD's slide decks from a few years ago.

Quote:
 Original post by thallish: If I am downsampling the image to 1/4 of the original size, how is it done?
There are a lot of examples available online - I wrote one for the DirectX SDK that covers a lot of this sort of thing and visualizes it all on-screen for you.

Quote:
 Original post by thallish: Do the downsampled values consist of averages, or are they values of the original pixels?
You'd usually take a 2x2 area (for a 2x downsample) and average them to output a single 1x1 pixel in the new, smaller, render target. You can create some funky effects by doing non-linear weighting, but for the most part you needn't bother [smile]
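The 2x2 average described above, sketched on the CPU for clarity (a real implementation would do this in a pixel shader; the nested-list image format here is purely for illustration):

```python
def downsample_2x(image):
    # image: rows of luminance values, with even width and height.
    # Each output pixel is the plain average of one 2x2 input block,
    # so the result has half the width and half the height.
    h, w = len(image), len(image[0])
    return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
              image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

downsample_2x([[0, 4], [8, 4]])  # one 2x2 block averages to [[4.0]]
```

Repeating this (full size, then 1/2, then 1/4, ...) is how the successive downsample chain for bloom or luminance measurement is usually built.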

Quote:
 Original post by thallish: 3) Does anybody have a visual explanation of how a Gaussian filter works?
Yes - the aforementioned ATI paper has a few good diagrams. I wrote up the HDRPipeline/HDRDemo samples in the DirectX SDK that show the intermediary steps (might not be so clear), or you can refer to my IOTD: It's amazing what a GPU can do in 63ms... Pay particular attention to the two Visio diagrams linked in the main text, they're the interesting parts [wink]
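For a rough feel of the filter itself: the weights fall off as a bell curve around the centre tap, and because the 2-D Gaussian is separable you apply the same 1-D kernel horizontally and then vertically. A small sketch of building such a kernel (the radius and sigma values are arbitrary choices):

```python
import math

def gaussian_kernel(radius, sigma):
    # 1-D bell-curve weights for taps at offsets -radius..+radius,
    # normalized so they sum to 1 and the filter preserves brightness.
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

kernel = gaussian_kernel(2, 1.0)
# The centre weight is the largest and the kernel is symmetric,
# which is exactly the "bell" shape the diagrams show.
```

Each blurred pixel is then the weighted sum of its neighbours using these weights, once along each axis.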

Hope that's useful!
Jack

##### Share on other sites
Quote:
Original post by jollyjeffers
Okay, let's have a go at answering some of your questions [smile]

Quote:
 Original post by thallish: 2) Apply tone mapping, so that the HDR values end up in LDR.
Yes, this is correct, but bear in mind that you aren't *required* to perform any other post-processing such as bloom/glare - a lot of the time it's just the new lens flare [lol].

But it just looks so pretty [lol].

Quote:

Quote:
 Original post by thallish: 4) Blend the original scene with the downsampled blurred image

Yes, but you might want to add the bloom during the tone-mapping pass so as to reduce any dependencies on hardware blending capabilities.

Don't think I follow you here - how do you add bloom during the tone-mapping pass?

Quote:

Quote:
 Original post by thallish: 1) How does one specify the light intensity in the scene when using HDR?
However you want. If you're going for physical accuracy then there are lots of distribution graphs you can find (a lot require payment) that give correct initial energy and falloff curves for common light sources. Look into candela per square metre (cd/m²) data.

Alternatively you can just pick an arbitrary range of values - much of the point of HDRI is that you're no longer constrained to an arbitrary range of values [wink]

Got it. Had to think about how writing to a rendertarget actually worked.

Quote:

Quote:
 Original post by thallish: What steps are involved when downsampling an image?
The single most important part is understanding your API's rasterization rules - e.g. Direct3D having centre-texel addressing. You then have to set up your texture coordinates and various offset constants to allow you to fetch multiple samples from neighbouring texels. These constants will typically be 1/w and 1/h, and you can then multiply them by the number of 'steps' away you want to sample.

If you've got hardware that can do linear filtering on HDR textures you can (ab)use this to take multiple samples with a single fetch - don't have the reference to hand, but this is covered in one of ATI/AMD's slide decks from a few years ago.

Ok I'll look into this.

Quote:

Quote:
 Original post by thallish: If I am downsampling the image to 1/4 of the original size, how is it done?
There are a lot of examples available online - I wrote one for the DirectX SDK that covers a lot of this sort of thing and visualizes it all on-screen for you.

Quote:

Quote:
 Original post by thallish: Do the downsampled values consist of averages, or are they values of the original pixels?

You'd usually take a 2x2 area (for a 2x downsample) and average them to output a single 1x1 pixel in the new, smaller, render target. You can create some funky effects by doing non-linear weighting, but for the most part you needn't bother [smile]

Can't find it, Jack [wink] Maybe I'll have to ask my counselor about this one.

Quote:

Quote:
 Original post by thallish: 3) Does anybody have a visual explanation of how a Gaussian filter works?

Yes - the aforementioned ATI paper has a few good diagrams.

Ok, I'll see if I can find it.

Quote:
 I wrote up the HDRPipeline/HDRDemo samples in the DirectX SDK that show the intermediary steps (might not be so clear), or you can refer to my IOTD: It's amazing what a GPU can do in 63ms... Pay particular attention to the two Visio diagrams linked in the main text, they're the interesting parts [wink]

Yep, those charts helped a great deal.

Quote:
 Hope that's useful! Jack

Sure was, but I'll probably return with more questions soon [lol]
