Tell me if I got this whole HDR thing right?

So basically, as I understand it, the concept goes like this: Render the scene to a frame buffer object. Extract pixels that are over 1.0 and put them in another frame buffer? Blur the HDR frame buffer object and mix it with the LDR frame buffer object. Display the mixed frame buffer object to the screen.

I've never used a frame buffer object before, so I'm kinda confused about how to process it with the pixel shader to extract the pixels that are over 1.0. I'm guessing maybe the way to do it would be to use a fullscreen quad with its texture as the frame buffer object, then render that to the other frame buffer, but I'm not sure.

Also, I would have thought that the brighter a pixel got, the more it would blur, but with this it sounds like all pixels over 1.0 would blur the same?
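For what it's worth, the "extract pixels over 1.0" step is normally just a fragment shader run over a fullscreen quad that samples the scene texture and keeps whatever exceeds the threshold. A minimal sketch is below (GLSL 1.20; the names sceneTex, texCoord and threshold are made up for the example, not anything from this thread):

[code]
#version 120
// Bright-pass sketch: keep only the part of each pixel above a threshold.
// Names and values are illustrative, not from any particular engine.

uniform sampler2D sceneTex;   // colour attachment of the scene FBO
uniform float threshold;      // e.g. 1.0 -- anything brighter contributes to bloom

varying vec2 texCoord;        // passed through from the fullscreen quad's vertex shader

void main()
{
    vec3 colour = texture2D(sceneTex, texCoord).rgb;

    // Subtract the threshold and clamp at zero, so only the "excess"
    // brightness ends up in the bloom buffer.
    vec3 bright = max(colour - vec3(threshold), vec3(0.0));

    gl_FragColor = vec4(bright, 1.0);
}
[/code]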
I have yet to actually implement this, but I was just going to use an FBO: render my scene to a texture (not the screen), render the texture on a quad to display it, then render it again using some magical GLSL shaders to manipulate it to do the blurring and highlighting and all that. My biggest fear for a while, and misunderstanding, was that I would have to render the scene to the screen once, then render it a second time to a texture to be able to do my post-processing stuff... but I later realized I can just render to a texture the size of the screen and use that for multiple purposes, perhaps actually rendering it just once and having all your post-processing done in that pass in a shader.
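To illustrate the "blurring in a shader" part of that plan, a common approach is a separable blur done in two fullscreen passes over the bright-pass result, one horizontal and one vertical. A rough sketch follows (GLSL 1.20; bloomTex and texelOffset are invented names):

[code]
#version 120
// Separable blur sketch: run once with texelOffset = (1/width, 0),
// then again with texelOffset = (0, 1/height).

uniform sampler2D bloomTex;   // e.g. the bright-pass output
uniform vec2 texelOffset;     // one-texel step along the blur direction

varying vec2 texCoord;

void main()
{
    // Small Gaussian-like kernel; the weights sum to 1.0.
    const float w0 = 0.38774;  // centre tap
    const float w1 = 0.24477;  // +/- 1 texel
    const float w2 = 0.06136;  // +/- 2 texels

    vec3 sum = texture2D(bloomTex, texCoord).rgb * w0;
    sum += texture2D(bloomTex, texCoord + texelOffset).rgb * w1;
    sum += texture2D(bloomTex, texCoord - texelOffset).rgb * w1;
    sum += texture2D(bloomTex, texCoord + 2.0 * texelOffset).rgb * w2;
    sum += texture2D(bloomTex, texCoord - 2.0 * texelOffset).rgb * w2;

    gl_FragColor = vec4(sum, 1.0);
}
[/code]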
What about looking at the DirectX sample on HDR rendering?
That way you can see how it is done from a DX perspective, then port it over to OpenGL for your engine :)
I'm sure the Nvidia site has a few samples on this (when I looked there, they had lots).

Never say Never, Because Never comes too soon. - ryan20fun

Disclaimer: Each post of mine is intended as an attempt of helping and/or bringing some meaningful insight to the topic at hand. Due to my nature, my good intentions will not always be plainly visible. I apologise in advance and assure you I mean no harm and do not intend to insult anyone.

IIRC there's a decent explanation and example of this in the latest edition of The OpenGL SuperBible. May be worth a look.

[quote]
I have yet to actually implement this, but I was just going to use an FBO: render my scene to a texture (not the screen), render the texture on a quad to display it, then render it again using some magical GLSL shaders to manipulate it to do the blurring and highlighting and all that. My biggest fear for a while, and misunderstanding, was that I would have to render the scene to the screen once, then render it a second time to a texture to be able to do my post-processing stuff... but I later realized I can just render to a texture the size of the screen and use that for multiple purposes, perhaps actually rendering it just once and having all your post-processing done in that pass in a shader.
[/quote]


Your statement actually may have helped me figure out how it works. Basically, I think I just render to a texture and render that texture to the screen on a quad using a pixel shader. I think it could be really simple, since I believe I could just read the lower-resolution texture from the mipmaps and only add the values over 1.0 to the final resulting color. In my mind this makes perfect sense, but we will see what happens!
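If the mip-based idea pans out, it could look something like the sketch below (GLSL 1.20). The third argument to texture2D is an LOD bias, so a larger value samples a lower, effectively blurred mip; this assumes mipmaps have been generated for the scene texture, and sceneTex and the bias of 4.0 are just illustrative choices (large biases may be clamped by the driver's GL_MAX_TEXTURE_LOD_BIAS):

[code]
#version 120
// Sketch of "read a low mip and add back only what's above 1.0".

uniform sampler2D sceneTex;   // scene texture with a full mip chain

varying vec2 texCoord;

void main()
{
    vec3 scene = texture2D(sceneTex, texCoord).rgb;

    // Biased fetch: a cheap stand-in for a blurred copy of the scene.
    vec3 blurred = texture2D(sceneTex, texCoord, 4.0).rgb;

    // Only the portion above 1.0 is added back, as proposed above.
    vec3 bloom = max(blurred - vec3(1.0), vec3(0.0));

    gl_FragColor = vec4(scene + bloom, 1.0);
}
[/code]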

[quote name='SteveDeFacto' timestamp='1312880756' post='4846595']
Extract pixels that are over 1.0 and put them in another frame buffer? Blur the HDR frame buffer object and mix it with the LDR frame buffer object. Display the mixed frame buffer object to the screen.
[/quote]
Firstly, there's nothing special about the number 1.0. You could use 0.87, or 5.0, and the concept is the same: this arbitrary number is the threshold for your "bright-pass", where you decide what brightness is bright enough to become bloom/glare.
Secondly, what you're describing is a bloom effect, which can be done with or without HDR.

So, you've not got this whole HDR thing right :(

[quote]
[quote name='SteveDeFacto' timestamp='1312880756' post='4846595']
Extract pixels that are over 1.0 and put them in another frame buffer? Blur the HDR frame buffer object and mix it with the LDR frame buffer object. Display the mixed frame buffer object to the screen.
[/quote]
Firstly, there's nothing special about the number 1.0. You could use 0.87, or 5.0, and the concept is the same: this arbitrary number is the threshold for your "bright-pass", where you decide what brightness is bright enough to become bloom/glare.
Secondly, what you're describing is a bloom effect, which can be done with or without HDR.

So, you've not got this whole HDR thing right :(
[/quote]

What part did I not understand? Is it the method of blurring? I've been looking at HDR demos and they appear to do what I have described...
The blurring, the extracting of pixels "greater than [arbitrary number]", and the mixing of the blurred image back over the original... that isn't HDR, that's bloom.

HDR makes bloom better, so most HDR demos will also implement bloom. But the concept you're describing is not HDR. It's bloom, and it can be implemented in either HDR or LDR. You don't need HDR to implement it.


If you ignore all this blurring/mixing stuff for a moment ---
HDR is high-dynamic-range: it's the concept of storing real luminance/irradiance data in your frame buffer, instead of "colours". In the real world, the sky is 10,000 times brighter than an object in a day-time shadow... so in an HDR renderer, we might write the number 10000.0f into the frame buffer when drawing the sky, and write the number 1.0f for an object in shadow.
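As a tiny illustration of that point: with a floating-point colour attachment (say GL_RGBA16F, an assumption for this sketch) the fragment shader's output isn't clamped to [0, 1], so luminance-scale numbers can be written straight into the buffer. The uniforms here are made up:

[code]
#version 120
// Writing unclamped luminance into a floating-point render target.

uniform vec3 lightColour;      // e.g. the sky or surface colour
uniform float lightIntensity;  // could be 1.0 for a shadowed object, 10000.0 for the sky

void main()
{
    // Stored as-is in the float target; no [0, 1] clamp applies.
    gl_FragColor = vec4(lightColour * lightIntensity, 1.0);
}
[/code]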

HDR is a term that comes from photography, where they've been capturing images with very large differences in brightness for a long time.

This raw HDR data can't be directly displayed though, because your monitor only supports values in the range of 0 to 1. So by using another concept called 'tone-mapping', you can convert this high-range (usually floating point) image into a regular 8-bit-per-channel image that you can display. These tone-mapping algorithms mimic the human eye, which can easily look at objects that are thousands of times brighter than each other at the same time -- usually some kind of exponential-looking curve is involved in the mapping process.
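As a concrete (if overly simple) example of such a curve, an exponential tone-mapper with an exposure control might look like this in GLSL 1.20; hdrTex and exposure are invented names, and real implementations typically also handle gamma:

[code]
#version 120
// Exponential tone-mapping sketch: maps [0, infinity) into [0, 1).

uniform sampler2D hdrTex;   // floating-point HDR scene texture
uniform float exposure;     // e.g. 1.0; larger values brighten the result

varying vec2 texCoord;

void main()
{
    vec3 hdr = texture2D(hdrTex, texCoord).rgb;

    // 1 - e^(-x * exposure): bright values compress smoothly towards 1.0.
    vec3 ldr = vec3(1.0) - exp(-hdr * exposure);

    gl_FragColor = vec4(ldr, 1.0);
}
[/code]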

Notice that blurring, extracting bright pixels, etc., don't come into it. HDR is simply the concept of storing a large range of brightness values, and it's paired with the concept of tone-mapping, which lets you convert that large range into an LDR image.


Now, coming back to bloom... it goes hand in hand with HDR, because this "blurring of bright pixels" concept is a trick that fools your brain into thinking a pixel is brighter than it really is. We can't display the value of 10,000 on the monitor, but if we map it to 1.0f instead and also blur it, the brain will perceive it to be brighter than a 1.0f pixel that doesn't also have blur. In other words, bloom lets us better convey these really bright pixels, even though we can't display them properly.
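Putting the two ideas together, a final composite pass might tone-map the HDR scene and then layer the blurred bright-pass on top, roughly like this sketch (GLSL 1.20; all the uniform names and the bloomStrength value are assumptions):

[code]
#version 120
// Composite sketch: tone-mapped scene plus blurred bloom.

uniform sampler2D hdrTex;      // HDR scene
uniform sampler2D bloomTex;    // blurred bright-pass result
uniform float exposure;        // e.g. 1.0
uniform float bloomStrength;   // e.g. 0.5

varying vec2 texCoord;

void main()
{
    vec3 hdr   = texture2D(hdrTex, texCoord).rgb;
    vec3 bloom = texture2D(bloomTex, texCoord).rgb;

    // Bring the scene into displayable range first...
    vec3 ldr = vec3(1.0) - exp(-hdr * exposure);

    // ...then add the blur, so very bright pixels read as brighter than
    // the 1.0 the monitor can actually show.
    ldr += bloom * bloomStrength;

    gl_FragColor = vec4(ldr, 1.0);
}
[/code]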

[quote]
What part did I not understand? Is it the method of blurring? I've been looking at HDR demos and they appear to do what I have described...
[/quote]

I believe blurring/bloom is added to HDR to make HDR seem more realistic; it is not inherently part of HDR at all, though. To my understanding, basically HDR is just brightening and darkening parts of your image based on an average brightness level for the whole picture, so you can enhance the range of brightness/darkness that you can experience beyond what you can display on screen, similar to how your pupil expands and contracts. Bloom is just used to make the super-bright parts more interesting than just huge bright spots.

I think the effect uses a mental trick similar to the way gradients are perceived as flat backgrounds in art, so you can use them to make things appear brighter or darker than they actually are:
[image: gradient_illusion.jpg]

edit: I think Hodgman's answer is better <_<
[quote]
To my understanding, basically HDR is just brightening and darkening parts of your image based on an average brightness level for the whole picture, so you can enhance the range of brightness/darkness that you can experience beyond what you can display on screen.
[/quote]
To keep up my pedantry, that's adaptive tone-mapping, which is used to display HDR data. You can also use a non-adaptive tone-mapper that doesn't take average brightness levels into account ;)
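For the curious, one crude way to make a tone-mapper adaptive is to drive the exposure from the scene's average luminance. In the sketch below (GLSL 1.20) that average is faked with a heavily biased mip fetch; the names hdrTex and key, the bias of 10.0, and the Reinhard-style curve are illustrative choices rather than anything from this thread. Real engines usually average log-luminance in a separate pass and smooth it over time to imitate the eye adjusting.

[code]
#version 120
// Adaptive tone-mapping sketch: exposure scales with average scene luminance.

uniform sampler2D hdrTex;   // HDR scene with a full mip chain
uniform float key;          // target mid-grey, e.g. 0.18

varying vec2 texCoord;

void main()
{
    vec3 hdr = texture2D(hdrTex, texCoord).rgb;

    // Rough average scene colour: a large LOD bias lands on a tiny mip
    // (subject to the driver's GL_MAX_TEXTURE_LOD_BIAS limit).
    vec3 avgColour = texture2D(hdrTex, texCoord, 10.0).rgb;
    float avgLum = dot(avgColour, vec3(0.2126, 0.7152, 0.0722)) + 0.0001;

    // Scale so the average maps to the chosen key, then apply a simple
    // Reinhard-style curve to squeeze the result into [0, 1).
    vec3 scaled = hdr * (key / avgLum);
    vec3 ldr = scaled / (vec3(1.0) + scaled);

    gl_FragColor = vec4(ldr, 1.0);
}
[/code]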
