Tell me if I got this whole HDR thing right?


The blurring, the extracting of pixels "greater than [arbitrary number]", and the mixing of the blurred image back over the original... isn't HDR, that's bloom.

HDR makes bloom better, so most HDR demos also implement bloom, but the concept you're describing is bloom, and it can be implemented in either HDR or LDR. You don't need HDR to implement it.


If you ignore all this blurring/mixing stuff for a moment ---
HDR is high-dynamic-range: it's the concept of storing real luminance/irradiance data in your frame buffer, instead of "colours". In the real world, the sky is 10,000 times brighter than an object in a day-time shadow... so in an HDR renderer, we might write the number 10000.0f into the frame-buffer when drawing the sky, and write the number 1.0f for an object in shadow.

HDR is a term that comes from photography, where they've been capturing images with very large differences in brightness for a long time.

This raw HDR data can't be directly displayed though, because your monitor only supports values in the range of 0 to 1. So by using another concept called 'tone-mapping', you can convert this high-range (usually floating point) image into a regular 8-bit-per-channel image that you can display. These tone-mapping algorithms mimic the human eye, which can easily take in objects that are thousands of times brighter than one another at the same time -- usually some kind of exponential-looking curve is involved in the mapping process.
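To make that concrete, here's a minimal sketch of a non-adaptive tone-mapping pixel shader in HLSL. The Reinhard-style x/(1+x) curve is just one common choice, and the resource names here are made up for illustration:

[code]
// Hypothetical names; the scene texture is assumed to be a
// floating-point (HDR) render target.
Texture2D<float4> HDRScene   : register(t0);
SamplerState      LinearSamp : register(s0);

float4 ToneMapPS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 hdr = HDRScene.Sample(LinearSamp, uv).rgb; // values may be far above 1.0
    float3 ldr = hdr / (1.0f + hdr); // Reinhard-style curve: [0, inf) -> [0, 1)
    return float4(ldr, 1.0f);       // written to a regular 8-bit back buffer
}
[/code]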

Notice that blurring, extracting bright pixels, etc., don't come into it. HDR is simply the concept of storing a large range of brightness values, and it's paired with the concept of tone-mapping, which lets you convert that large range into an LDR image.


Now, coming back to bloom... it goes hand in hand with HDR, because this "blurring of bright pixels" concept is a trick that fools your brain into thinking a pixel is brighter than it really is. We can't display the value of 10,000 to the monitor, but if we map it to 1.0f instead, and also blur it, the brain will perceive it to be brighter than a 1.0f pixel that doesn't also have blur. Meaning, bloom lets us better convey these really bright pixels, even though we can't display them properly.
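For illustration, a hypothetical bright-pass shader along those lines (the threshold value and all names are arbitrary); its output would be blurred and then additively blended back over the tone-mapped image:

[code]
// Hypothetical bright-pass: keep only the part of each pixel above a
// threshold, so the result can be blurred and added back over the image.
Texture2D<float4> HDRScene   : register(t0);
SamplerState      LinearSamp : register(s0);

static const float BloomThreshold = 1.0f; // arbitrary: anything brighter than "white" blooms

float4 BrightPassPS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 c = HDRScene.Sample(LinearSamp, uv).rgb;
    float3 excess = max(c - BloomThreshold, 0.0f); // zero unless the pixel is "too bright"
    return float4(excess, 1.0f); // blur this target, then additively blend it back
}
[/code]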


I understand what HDR is, and my interest in it was to make lighting in my scene that is more intense than what can be displayed on the screen glow. I wasn't sure if the standard HDR glow effect used a different method to create its glow, but from what I'm understanding it does not. So is my idea of using mipmaps to create the blur suitable, or is there a flaw I'm overlooking?

[quote name='way2lazy2care' timestamp='1312901321' post='4846709']To my understanding basically HDR is just brightening and darkening parts of your image based off an average brightness level for the whole picture so you can enhance the range of brightness/darkness that you can experience beyond what you can display on screen.[/quote]
To keep up my pedantry, that's adaptive tone-mapping, which is used to display HDR data. You can also use a non-adaptive tone-mapper that doesn't take average brightness levels into account ;)
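A rough sketch of that difference, assuming the average luminance has already been measured somehow: the adaptive version scales the scene by a key value divided by that average before the curve is applied (this per-channel version is a simplification of Reinhard's calibration, and the names are hypothetical):

[code]
// Hypothetical constants; AvgLuminance would come from a reduction pass
// or a mip chain, as discussed further down the thread.
cbuffer ToneMapParams : register(b0)
{
    float AvgLuminance; // measured average scene luminance for this frame
    float Key;          // target "middle grey", classically around 0.18
};

float3 ExposeAndToneMap(float3 hdr)
{
    // adaptive part: normalise the scene so the average lands near Key
    float3 scaled = hdr * (Key / max(AvgLuminance, 0.0001f));
    // a non-adaptive tone-mapper would skip the scaling and go straight here
    return scaled / (1.0f + scaled);
}
[/code]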

Oh, that explains the difference! I was kinda confused about what tone mapping was, but now I understand. Thanks!

99% of the time when you hear HDR in relation to games it is actually about tonemapping - which, while related, is a different concept. Hodgman covered that though so I won't beat a dead horse - just saying that the two are very easily confused as most games seem to (erroneously) refer to tonemapping as being 'HDR'; which, again, it isn't (it's a way of displaying a High Dynamic Range image on a Low Dynamic Range screen, and arguably not the best one at that).


Do you mind elaborating? I'm just curious, as googling generally turns up just tonemapping solutions.
This is interesting, because I was just researching HDR and tonemapping today. So to summarize:

1. Render scene to HDR render target (say 32-bit float color channels). You can use lighting equations that give colors outside the [0,1] range.

2. Tonemapping pass. Somehow figure out how to map HDR range to [0,1] range.

For the tonemapping pass, the DX11 sample HDRToneMappingCS11 "shows the use of the CS to do fast parallel reduction on the image to calculate average illuminance, and then use the calculated illuminance to do tone mapping."

My question is, why use the CS? Couldn't you just have the hardware generate the mipmap levels, so that the last mip level (a single pixel) contains the average illuminance?

Also does anyone agree the HDRToneMappingCS11 looks kind of weird when you move the camera around? The color intensity changes unnaturally in my opinion.
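A minimal sketch of what that mip-chain idea could look like in the tone-mapping shader, with hypothetical names, assuming GenerateMips has been called on a luminance render target:

[code]
// Assumes GenerateMips was called on a luminance render target,
// so the bottom 1x1 mip holds the average of the whole map.
Texture2D<float> LuminanceMap : register(t0);
SamplerState     PointSamp    : register(s0);

float GetAverageLuminance()
{
    uint width, height, mipCount;
    LuminanceMap.GetDimensions(0, width, height, mipCount);
    // the last mip level is a single texel: the mean of the whole map
    return LuminanceMap.SampleLevel(PointSamp, float2(0.5f, 0.5f), mipCount - 1);
}
[/code]

One caveat: box-filtered mips give an arithmetic mean, whereas Reinhard's calibration uses a log-average, so for the latter you'd store log-luminance in the map and exp() the sampled result.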

[quote]My question is, why use the CS? Couldn't you just have the hardware generate the mipmap levels, so that the last mip level (a single pixel) contains the average illuminance?

Also does anyone agree the HDRToneMappingCS11 looks kind of weird when you move the camera around? The color intensity changes unnaturally in my opinion.[/quote]


In theory, a parallel reduction implemented with shared memory in a compute shader should have significantly fewer memory reads and writes compared to generating a mip chain. In practice it depends on how well the compute shader is implemented, since there are all kinds of subtle performance pitfalls.
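Not code from any actual sample, but a sketch of the general shape of such a reduction: each 8x8 thread group averages one tile of the input into a partial result, and a second pass (not shown) would reduce those partials down to a single value. The dispatch dimensions are assumptions:

[code]
// Sketch only: one pass of a shared-memory reduction. Assumes a
// 1024x1024 Texture2D<float> of luminance and a 128x128 dispatch
// of 8x8 thread groups.
Texture2D<float>          Input  : register(t0);
RWStructuredBuffer<float> Output : register(u0);

groupshared float SharedMem[64];

[numthreads(8, 8, 1)]
void ReduceCS(uint3 groupID : SV_GroupID,
              uint  groupIndex : SV_GroupIndex,
              uint3 dispatchID : SV_DispatchThreadID)
{
    // each of the 64 threads loads one texel into shared memory
    SharedMem[groupIndex] = Input[dispatchID.xy];
    GroupMemoryBarrierWithGroupSync();

    // tree reduction inside the group: 64 -> 32 -> 16 -> ... -> 1
    [unroll]
    for (uint s = 32; s > 0; s >>= 1)
    {
        if (groupIndex < s)
            SharedMem[groupIndex] += SharedMem[groupIndex + s];
        GroupMemoryBarrierWithGroupSync();
    }

    // thread 0 writes this tile's average; 128 = groups per row (assumed)
    if (groupIndex == 0)
        Output[groupID.y * 128 + groupID.x] = SharedMem[0] / 64.0f;
}
[/code]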

If I recall correctly, that sample implements Reinhard's calibration technique, which is basically "auto-exposure". However, when games use auto-exposure they normally put in an adaptation factor to make the camera's exposure adjust slowly to different lighting conditions. If you don't use adaptation it adapts instantly, which looks very unnatural.
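That adaptation factor is typically an exponential blend toward the measured value, something like the sketch below (the parameter names are made-up tuning values, loosely in the spirit of the old DirectX SDK HDR samples):

[code]
// Hypothetical parameters; DeltaTime is the frame time in seconds and
// AdaptationRate is a tuning constant (bigger adapts faster).
cbuffer AdaptationParams : register(b0)
{
    float DeltaTime;
    float AdaptationRate;
};

float AdaptLuminance(float lastAdaptedLum, float measuredLum)
{
    // exponential blend toward the measured value instead of snapping to it
    float t = 1.0f - exp(-DeltaTime * AdaptationRate);
    return lerp(lastAdaptedLum, measuredLum, t);
}
[/code]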

[quote]In theory, a parallel reduction implemented with shared memory in a compute shader should have significantly fewer memory reads and writes compared to generating a mip chain. In practice it depends on how well the compute shader is implemented, since there are all kinds of subtle performance pitfalls.[/quote]

I see. I guess I sort of assumed that since graphics hardware has had auto mipmap chain support for a while it would be quite good at it, and perhaps optimized for it. I'll have to do some comparisons if I get time.


[quote]In theory, a parallel reduction implemented with shared memory in a compute shader should have significantly fewer memory reads and writes compared to generating a mip chain. In practice it depends on how well the compute shader is implemented, since there are all kinds of subtle performance pitfalls.

I see. I guess I sort of assumed that since graphics hardware has had auto mipmap chain support for a while it would be quite good at it, and perhaps optimized for it. I'll have to do some comparisons if I get time.[/quote]

The luminance mip-map approach is the most obvious one, and what I first thought of; I'd be interested to see your results. I'm particularly lazy when it comes down to it :P I, too, looked at the demo when I first started looking at HDR; I didn't find the compute shader code particularly hard to understand, but it's certainly more complex.
I made a quick sample for my blog that calculates the average luminance using a compute shader reduction, and on my hardware (AMD 6950) it's about twice as fast as GenerateMips for a 1024x1024 luminance map. I'd be interested to see what the results are on other hardware, if you want to try it out. At the very least it's a better starting point than the SDK sample if you're targeting FEATURE_LEVEL_11 hardware.

[quote]I made a quick sample for my blog that calculates the average luminance using a compute shader reduction, and on my hardware (AMD 6950) it's about twice as fast as GenerateMips for a 1024x1024 luminance map. I'd be interested to see what the results are on other hardware, if you want to try it out. At the very least it's a better starting point than the SDK sample if you're targeting FEATURE_LEVEL_11 hardware.[/quote]


Cheers MJP, just what I needed as usual :)

