SteveDeFacto

Tell me if I got this whole HDR thing right?

Recommended Posts

SteveDeFacto    109
So basically, as I understand it, the concept goes like this: render the scene to a frame buffer object, extract the pixels that are over 1.0 into another frame buffer, blur that HDR frame buffer object, mix it with the LDR frame buffer object, and display the mixed result to the screen.

I've never used a frame buffer object before, so I'm kind of confused about how to process it with the pixel shader to extract the pixels that are over 1.0. I'm guessing the way to do it would be to draw a full-screen quad with its texture set to the frame buffer object, then render that into the other frame buffer, but I'm not sure.

Also, I would have thought that the brighter a pixel got, the more it would blur, but with this it sounds like all pixels over 1.0 would blur by the same amount?
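To make the pipeline concrete, here's a minimal CPU sketch of those steps in Python, with a tiny 1D list of floats standing in for the GPU's FBO textures; each function stands in for a full-screen shader pass, and all names and the 1.0 threshold are just illustrative:

```python
# Hypothetical CPU sketch of the bloom pipeline described above.
# Each function stands in for one full-screen shader pass over an FBO texture.

THRESHOLD = 1.0  # the "over 1.0" bright-pass cutoff from the post

def bright_pass(hdr, threshold=THRESHOLD):
    """Keep only the energy above the threshold; everything else goes to 0."""
    return [max(v - threshold, 0.0) for v in hdr]

def box_blur(img):
    """Crude 3-tap box blur standing in for a separable Gaussian blur pass."""
    n = len(img)
    out = []
    for i in range(n):
        left = img[max(i - 1, 0)]
        right = img[min(i + 1, n - 1)]
        out.append((left + img[i] + right) / 3.0)
    return out

def compose(ldr, bloom):
    """Add the blurred bright-pass back onto the base image, clamped for display."""
    return [min(a + b, 1.0) for a, b in zip(ldr, bloom)]

hdr_scene = [0.2, 0.5, 3.0, 0.4, 0.1]        # one pixel far brighter than 1.0
ldr_scene = [min(v, 1.0) for v in hdr_scene]  # naive clamp standing in for tone mapping
bloom = box_blur(bright_pass(hdr_scene))
final = compose(ldr_scene, bloom)
```

Note the bright pixel's neighbours end up brighter in `final` than in `ldr_scene`: that spill is the whole point of the blur/mix step.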

deftware    1778
I have yet to actually implement this, but I was just going to use an FBO: render my scene to a texture (not the screen), render the texture as a quad to display it, then render it again using some magical GLSL shaders to do the blurring and highlighting and all that. My biggest fear (and misunderstanding) for a while was that I would have to render the scene to the screen once, then render it a second time to a texture to be able to do my post-processing stuff... but I later realized I can just render to a texture the size of the screen and use that for multiple purposes, perhaps rendering it just once and having all the post-processing done in that pass in a shader.

ryan20fun    2635
What about looking at the DirectX sample on HDR rendering?
That way you can see how it is done from a DX perspective, then port it over to OpenGL for your engine :)
I'm sure the Nvidia site has a few samples on this (when I looked there they had [b]lots[/b]).

Xest    140
IIRC there's a decent explanation and example of this in the latest edition of The OpenGL SuperBible; it may be worth a look.

SteveDeFacto    109
[quote name='radioteeth' timestamp='1312885777' post='4846617']
...I later realized I can just render to a texture the size of the screen and use that for multiple purposes, perhaps actually rendering it just once and having all your post-processing done in that pass in a shader.
[/quote]

Your statement actually may have helped me figure out how it works. Basically, I think I just render to a texture and then render that texture to the screen on a quad using a pixel shader. I think it could be really simple, since I believe I could just read a lower-resolution level of the texture from the mipmaps and only add the values over 1.0 to the final resulting colour. In my mind this makes perfect sense, but we'll see what happens!

Hodgman    51234
[quote name='SteveDeFacto' timestamp='1312880756' post='4846595']
Extract pixels that are over 1.0 and put them in another frame buffer? Blur the HDR frame buffer object and mix it with the LDR frame buffer object. Display mixed frame buffer object to the screen.[/quote]Firstly, there's nothing special about the number 1.0. You could use 0.87, or 5.0, and the concept is the same: this arbitrary number is the threshold for your "bright-pass", where you arbitrarily decide what brightness is bright enough to become bloom/glare.
Secondly, what you're describing is a bloom effect, which can be done with or without HDR.

So, you've not got this whole HDR thing right [img]http://public.gamedev.net/public/style_emoticons/default/sad.gif[/img]

SteveDeFacto    109
[quote name='Hodgman' timestamp='1312893614' post='4846654']
...there's nothing special about the number 1.0... what you're describing is a bloom effect, which can be done with or without HDR. So, you've not got this whole HDR thing right
[/quote]

What part did I not understand? Is it the method of blurring? I've been looking at HDR demos and they appear to do what I have described...

Hodgman    51234
The blurring, the extraction of pixels "greater than [arbitrary number]", and the mixing of the blurred image back over the original... isn't HDR; that's bloom.

HDR makes bloom better, so most HDR demos also implement bloom. But the concept you're describing is bloom, not HDR; it can be implemented in either HDR or LDR, and you don't need HDR to implement it.


If you ignore all this blurring/mixing stuff for a moment ---
HDR is high dynamic range: the concept of storing real luminance/irradiance data in your frame buffer, instead of "colours". In the real world, the sky is 10,000 times brighter than an object in a day-time shadow... so in an HDR renderer, we might write the number 10000.0f into the frame buffer when drawing the sky, and write the number 1.0f for an object in shadow.

HDR is a term that [url="http://en.wikipedia.org/wiki/High_dynamic_range_imaging"]comes from photography[/url], where they've been capturing images with very large differences in brightness for a long time.

This raw HDR data can't be directly displayed though, because your monitor only supports values in the range of 0 to 1. So by using another concept called 'tone-mapping', you can convert this high-range (usually floating point) image, into a regular 8-bit-per-channel image that you can display. These tone-mapping algorithms mimic the human eye, which can easily look at objects that are thousands of times brighter than each other at the same time -- usually some kind of exponential-looking curve is involved in the mapping process.
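As a sketch of one such curve, here is the simple Reinhard operator, a common example of that "exponential-looking" mapping (the function name is just illustrative); it compresses any non-negative luminance into the displayable [0, 1) range:

```python
def tonemap_reinhard(l):
    """Map a luminance in [0, inf) into [0, 1); very bright values compress hard."""
    return l / (1.0 + l)

# A shadowed object at 1.0 and a sky at 10000.0 both land in displayable range,
# yet the sky still maps to a brighter value than the shadow.
shadow = tonemap_reinhard(1.0)
sky = tonemap_reinhard(10000.0)
```

Note the mapping is monotonic, so relative ordering of brightness is preserved even though the absolute 10,000:1 ratio is not.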

Notice blurring, or extracting bright pixels, etc, doesn't come into it. HDR is simply the concept of storing a large range of brightness values, and it's paired with the concept of tone-mapping, which lets you convert that large range into an LDR image.


Now, coming back to bloom... it goes hand in hand with HDR, because this "blurring of bright pixels" concept is a trick that fools your brain into thinking a pixel is brighter than it really is. We can't display the value of 10,000 to the monitor, but if we map it to 1.0f instead, and also blur it, the brain will perceive it to be brighter than a 1.0f pixel that doesn't also have blur. Meaning, bloom lets us better convey these really bright pixels, even though we can't display them properly.

way2lazy2care    790
[quote name='SteveDeFacto' timestamp='1312899379' post='4846687']
What part did I not understand? Is it the method of blurring? I've been looking at HDR demos and they appear to do what I have described...
[/quote]
I believe blurring/bloom is added to HDR to make it seem more realistic; it is not inherently part of HDR at all, though. To my understanding, HDR is basically just brightening and darkening parts of your image based on an average brightness level for the whole picture, so you can extend the range of brightness/darkness you can convey beyond what the screen can display, similar to how your pupil expands and contracts. Bloom is just used to make the super-bright parts more interesting than huge flat bright spots.

I think the effect uses a similar mental trick to how gradients are perceived as flat backgrounds in art so you can use them to make things appear brighter or darker than they actually are:
[img]http://www.123opticalillusions.com/pages/gradient_illusion.jpg[/img]

edit: I think Hodgman's answer is better <_<

Hodgman    51234
[quote name='way2lazy2care' timestamp='1312901321' post='4846709']To my understanding basically HDR is just brightening and darkening parts of your image based off an average brightness level for the whole picture so you can enhance the range of brightness/darkness that you can experience beyond what you can display on screen.[/quote]To keep up my pedantry, that's adaptive tone-mapping, which is used to display HDR data. You can also use a non-adaptive tone-mapper that doesn't take average brightness levels into account [img]http://public.gamedev.net/public/style_emoticons/default/wink.gif[/img]
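To illustrate that distinction with a sketch (all names, and the 0.18 middle-grey "key" value, are just illustrative): an adaptive tone-mapper normalises each pixel by the frame's average luminance before applying the curve, so two frames that differ only in overall brightness map to the same output, whereas a non-adaptive curve would not:

```python
def average_luminance(pixels):
    """Frame-wide average, the quantity an adaptive tone-mapper measures."""
    return sum(pixels) / len(pixels)

def tonemap(l):
    """Fixed, non-adaptive curve (simple Reinhard)."""
    return l / (1.0 + l)

def tonemap_adaptive(pixels, key=0.18):
    """Scale by the frame average (mimicking eye adaptation), then apply the curve."""
    avg = average_luminance(pixels)
    return [tonemap(key * p / avg) for p in pixels]
```

A dim frame and a 10x brighter frame with the same relative structure come out identical from the adaptive version; that is exactly the "pupil" behaviour being described.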

DarklyDreaming    367
99% of the time when you hear HDR in relation to games it is actually about tonemapping - which, while related, is a different concept. Hodgman covered that though so I won't beat a dead horse - just saying that the two are very easily confused as most games seem to (erroneously) refer to tonemapping as being 'HDR'; which, again, it isn't (it's a way of [i]displaying [/i]a High Dynamic Range image on a Low Dynamic Range screen, and arguably not the best one at that).

SteveDeFacto    109
[quote name='Hodgman' timestamp='1312900681' post='4846705']
The blurring, the extracting of pixels "greater than [arbitrary number]", and the mixing of the blurring image back over the original... isn't HDR, that's bloom.

...HDR is simply the concept of storing a large range of brightness values, and it's paired with the concept of tone-mapping, which lets you convert that large range into an LDR image.
[/quote]

I understand what HDR is; my interest in it was to make lighting in my scene that is more intense than what can be displayed on the screen glow. I wasn't sure whether the standard HDR glow effect used a different method to create its glow, but from what I'm understanding it does not. So is my idea of using mipmaps to create the blur suitable, or is there a flaw I'm overlooking?

SteveDeFacto    109
[quote name='Hodgman' timestamp='1312901563' post='4846715']
[quote name='way2lazy2care' timestamp='1312901321' post='4846709']To my understanding basically HDR is just brightening and darkening parts of your image based off an average brightness level for the whole picture so you can enhance the range of brightness/darkness that you can experience beyond what you can display on screen.[/quote]To keep up my pedantry, that's adaptive tone-mapping, which is used to display HDR data. You can also use a non-adaptive tone-mapper that doesn't take average brightness levels into account [img]http://public.gamedev.net/public/style_emoticons/default/wink.gif[/img]
[/quote]

Oh that explains the difference! I was kinda confused what tone mapping was but now I understand. Thanks!

way2lazy2care    790
[quote name='DarklyDreaming' timestamp='1312902585' post='4846724']
99% of the time when you hear HDR in relation to games it is actually about tonemapping - which, while related, is a different concept. Hodgman covered that though so I won't beat a dead horse - just saying that the two are very easily confused as most games seem to (erroneously) refer to tonemapping as being 'HDR'; which, again, it isn't (it's a way of [i]displaying [/i]a High Dynamic Range image on a Low Dynamic Range screen,[b] and arguably not the best one at that[/b]).
[/quote]

Do you mind elaborating? I'm just curious as googling comes up with generally just tonemapping solutions.

Quat    568
This is interesting because I was just researching HDR today and tonemapping. So to summarize:

1. Render scene to HDR render target (say 32-bit float color channels). You can use lighting equations that give colors outside the [0,1] range.

2. Tonemapping pass. Somehow figure out how to map HDR range to [0,1] range.

For the tonemapping pass, the DX11 sample HDRToneMappingCS11 "shows the use of the CS to do fast parallel reduction on the image to calculate average illuminance, and then use the calculated illuminance to do tone mapping."

My question is: why use the CS? Couldn't you just have the hardware generate the mipmap levels, so that the last mip level, a single pixel, contains the average illuminance?
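The mip-chain idea can be emulated on the CPU to see why it works (a sketch, not how the hardware does it; names are illustrative): repeatedly averaging 2x2 blocks of a power-of-two image until one value remains yields exactly the overall mean, because every level weights all surviving values equally:

```python
def downsample_2x2(img):
    """Average each 2x2 block; img is a square list-of-lists, power-of-two size."""
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

def average_via_mips(img):
    """Reduce down the 'mip chain' until the 1x1 level holds the frame average."""
    while len(img) > 1:
        img = downsample_2x2(img)
    return img[0][0]
```

In 8-bit fixed-point mips the repeated rounding would bias this, which is one practical reason the reduction is done in floating point.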

Also does anyone agree the HDRToneMappingCS11 looks kind of weird when you move the camera around? The color intensity changes unnaturally in my opinion.

MJP    19754
[quote name='Quat' timestamp='1312917663' post='4846827']
My question is, why use the CS? Couldn't you just have the hardware generate the mipmap levels, and the last mipmap level, 1 pixel will contain the average illuminance?

Also does anyone agree the HDRToneMappingCS11 looks kind of weird when you move the camera around? The color intensity changes unnaturally in my opinion.
[/quote]

In theory, a parallel reduction implemented with shared memory in a compute shader should need significantly fewer memory reads and writes than generating a mip chain. In practice it depends on how well the compute shader is implemented, since there are all kinds of subtle performance pitfalls.

If I recall correctly, that sample implements Reinhard's calibration technique, which is basically "auto-exposure". However, normally when games use auto-exposure, they put in an adaptation factor so that the camera's exposure changes slowly as the lighting conditions change. If you don't use adaptation the exposure adapts instantly, which looks very unnatural.
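That adaptation factor is often implemented as a frame-rate-independent exponential lerp toward the current frame's average luminance; a sketch (the function name and the `tau` constant are illustrative, not from the sample):

```python
import math

def adapt_luminance(adapted, current, dt, tau=1.1):
    """Move last frame's adapted luminance toward the current frame's average.
    dt is the frame time in seconds; larger tau means faster adaptation."""
    return adapted + (current - adapted) * (1.0 - math.exp(-dt * tau))
```

Each frame you feed in the previous result, so the exposure eases toward the new lighting over a second or so instead of snapping instantly.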

Quat    568
[quote]
In theory, a parallel reduction implemented with shared memory in a compute shader should have significantly less memory reads and writes compared to generating a mip chain. In practice it depends on how well the compute shader is implemented since there are all kinds of subtle performance pitfalls.
[/quote]

I see. I guess I sort of assumed that since graphics hardware has had auto mipmap chain support for a while it would be quite good at it, and perhaps optimized for it. I'll have to do some comparisons if I get time.

Magmatwister    109
[quote name='Quat' timestamp='1312926701' post='4846916']
I see. I guess I sort of assumed that since graphics hardware has had auto mipmap chain support for a while it would be quite good at it, and perhaps optimized for it. I'll have to do some comparisons if I get time.
[/quote]

The whole luminance mip-map thing is the most obvious approach and what I first thought of; I'd be interested to see your results, as I'm particularly lazy when it comes down to it :P I, too, looked at the demo when I first started looking at HDR, and found the code needed to utilise compute shaders not particularly hard to understand, but certainly more complex.

MJP    19754
I made a quick [url="http://mynameismjp.wordpress.com/2011/08/10/average-luminance-compute-shader/"]sample[/url] for my blog that calculates the average luminance using a compute shader reduction, and on my hardware (AMD 6950) it's about twice as fast as GenerateMips for a 1024x1024 luminance map. I'd be interested to see what the results are on other hardware, if you want to try it out. At the very least it's a better starting point than the SDK sample if you're targeting FEATURE_LEVEL_11 hardware.

Magmatwister    109
[quote name='MJP' timestamp='1312969392' post='4847095']
I made a quick [url="http://mynameismjp.wordpress.com/2011/08/10/average-luminance-compute-shader/"]sample[/url] for my blog that calculates the average luminance using a compute shader reduction, and on my hardware (AMD 6950) it's about twice as fast as GenerateMips for a 1024x1024 luminance map. I'd be interested to see what the results are on other hardware, if you want to try it out. At the very least it's a better starting point than the SDK sample if you're targeting FEATURE_LEVEL_11 hardware.
[/quote]

Cheers MJP, just what I needed as usual :)
