Ben Bowen

Physically Correct "Bloom"


I've been keeping an eye on the real world... hah. I noticed that, apart from subtle HDR-ish effects, what this generic term "bloom" describes is really caused by, if I can find the correct word, aberration. I see a lot of chromatic distortion, and it behaves mostly like a specular phenomenon. I don't see this kind of bloom bleeding around the windows or at the horizon unless I'm in a comparatively dark room (i.e. HDR is at work, with a much more subtle kind of "bloom"). To speak roughly in graphics terms, this definitely is not screen-space. It's like when light is diffracted within a semi-transparent layer of a material, strongest along tangents (toward the camera): it appears to exit such a surface with various distortions. In other words, chromatic aberration occurring not in a lens, as I was introduced to it, but at a surface in "the world."

Edited by Reflexus


> But games often use bloom to simulate the first part as well.


I think this must be handled more carefully. Maybe deferred rendering provides opportunities to improve in this area.

 

> the air will scatter some of the light, causing a 'fog' around the source.


I may need to take some pictures to show what I mean. I'm not talking about scattering through fog, atmospheres, or other thick participating media. I'm talking about surfaces. I'm pretty sure it's not simply chromatic aberration in the lens of my eye. I've been looking for good pictures to demonstrate it, but nothing shows exactly what I mean quite so clearly.

Edited by Reflexus


Bloom is sometimes (ab)used to simulate phenomena that it's not really suited for. E.g. on my last project, we used a really wide bloom filter to make up for a lack of atmospheric scattering; that effect should technically happen in the world, but it was cheap/easy to do it in screen-space.

 

When talking about glare, there's more than one part to it:

Part of it happens near the light source itself -- the air will scatter some of the light, causing a 'fog' around the source.

Part of it happens near the sensor, which is actually the same effect as "lens flare"! Imperfections in the lenses cause diffraction/scattering/aberrations/etc., which causes some of the light not to land where the lens system intended it to land.

 

Both of these effects are most noticeable when the background is very dark, so there's a large amount of contrast between the light and background. If, for example, the lens causes 0.1% of the incoming light to become in-lens glare (or if the air causes 0.1% of the light to be scattered), then this won't be noticed with a dim light and bright background. But if the background is 0.01% as bright as the light source, then the glare will overpower the background and make it hard to see.
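To make that contrast argument concrete, here is the arithmetic with the figures quoted above (the scene brightness values are made up for illustration):

```python
# Illustrative numbers only, using the 0.1% / 0.01% figures from the discussion.
source = 1.0                 # normalized radiance of the light source
glare_fraction = 0.001       # 0.1% of the light scattered into glare

glare = source * glare_fraction

bright_background = 0.5 * source    # daytime scene behind the light
dark_background = 0.0001 * source   # background at 0.01% of the source

# Bright scene: the veiling glare is a tiny fraction of what's behind it.
# Dark scene: the very same glare is many times brighter than the background.
bright_ratio = glare / bright_background
dark_ratio = glare / dark_background
```

Same amount of scattered light in both cases; only the ratio to what's behind it changes, which is why glare is mostly a night-time/dark-room phenomenon.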

 

Physically speaking, it's correct to simulate the second part using a screen-space bloom filter. But games often use bloom to simulate the first part as well.
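A screen-space bloom filter in the spirit described here is just a bright-pass followed by a blur, recombined additively. A minimal grayscale sketch in Python (function and parameter names are my own, not taken from any particular engine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bloom(hdr, threshold=1.0, sigma=4.0, strength=0.5):
    """Minimal screen-space bloom: bright-pass, blur, additive recombine.

    `hdr` is a 2-D float image in linear radiance units. The threshold,
    blur width, and strength are illustrative tuning knobs.
    """
    bright = np.maximum(hdr - threshold, 0.0)    # keep only over-threshold energy
    halo = gaussian_filter(bright, sigma=sigma)  # spread it over neighbouring pixels
    return hdr + strength * halo                 # recombine before tonemapping

# A single very bright pixel on a dim background grows a soft halo.
img = np.full((64, 64), 0.1)
img[32, 32] = 100.0
out = bloom(img)
```

In a real renderer this runs on the GPU over the HDR buffer, usually as a chain of downsampled blurs rather than one wide Gaussian, but the structure is the same.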

 

n.b. even with a pinhole camera, which in theory has perfect focus (just like our rasterization with standard projection matrices gives us), not all of the light will land on the intended sensor pixel. When the light passes through the pinhole, a very small percentage will be bent outwards and land in rings around the intended sensor pixel. If the light is bright enough, these halos will become noticeable as glare/bloom.
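Those rings around the intended pixel are the Airy pattern of a circular aperture. A small sketch of its radial intensity profile (the helper name is mine; `scipy.special.j1` is the first-order Bessel function):

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def airy_intensity(x):
    """Relative intensity of the Airy pattern of a circular aperture at
    radial coordinate x = pi * d * sin(theta) / wavelength (d = diameter).
    Normalized so the on-axis intensity is 1."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)              # limit of (2*J1(x)/x)^2 as x -> 0
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

# Centre peak, first dark ring (first zero of J1), and first bright ring:
intensity = airy_intensity(np.array([0.0, 3.8317, 5.136]))
```

The first bright ring carries under 2% of the peak intensity, which is why these halos only become noticeable around sources that are orders of magnitude brighter than their surroundings.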

 

Combine all that with another, more traditional-looking source of "bloom": our eyes work much like a CCD camera. When either our eyes or a CCD-type sensor are sufficiently overexposed, the light (or rather the transmitted detection signal) bleeds over into nearby areas of the sensor (or eye). This doesn't often happen to us naturally; our eyes and brains are good at correcting for it, and even better at what a camera would call adaptation and dynamic range, so most of the time we rarely see anything overexposed (unless you stare directly at the sun). But you can see it more in CCD-type cameras (most any camera you have will be CCD unless you're a video professional), especially at low f-stops.


I disagree that biological eyes and CCD cameras work the same way in this regard. I think the opposite is the case: if you want to get photo- or eye-realistic, you should make up your mind which of the two you want. For example, I have never observed those beautiful lens flares you get from multi-lens cameras with my single-lens eyes.

 

Photo realism has the great benefit that you can .. well .. take pictures of it. :-)

 

In case you are interested in "eye realism" take a look at this paper. They try to model what is actually going on in biological eyes. And the results actually look the way the world looks to me when I'm drunk, so they can't be that far off ^^

 

http://www.mpi-inf.mpg.de/~ritschel/Papers/TemporalGlare.pdf


> In case you are interested in "eye realism" take a look at this paper. They try to model what is actually going on in biological eyes.

 

That's actually pretty old. I read into it a few years ago. I'm still just concerned with how specular highlights can be affected by material in a way that causes a blooming effect. My house has a lot of big windows, but today the cloud cover makes it shady. When I can I'll try to capture this effect.

 

 

Back on topic:

 

Here's a screenshot of BF4. I've manipulated it to look how I think is more physically accurate.


bf4scmcomp.png

Look for differences on:
> The Tires
> The Sun Flare
> The Explosions

 

Full-size links:
Original

Manipulated

I recommend you open the two full size images and swap between the tabs to see the differences better.

Edited by Reflexus


> I disagree that biological eyes and CCD cameras work the same way in this regard. I think the opposite is the case: if you want to get photo- or eye-realistic, you should make up your mind which of the two you want. For example, I have never observed those beautiful lens flares you get from multi-lens cameras with my single-lens eyes.
>
> Photo realism has the great benefit that you can .. well .. take pictures of it. :-)
>
> In case you are interested in "eye realism" take a look at this paper. They try to model what is actually going on in biological eyes. And the results actually look the way the world looks to me when I'm drunk, so they can't be that far off ^^
>
> http://www.mpi-inf.mpg.de/~ritschel/Papers/TemporalGlare.pdf

 

I remember that paper as well. It's certainly all very nice in hypothesis, but the actual results they produce aren't really true to life compared with what we actually see. Like I said, I highly suspect our eyes react very similarly to a CCD camera; in fact, our optic nerves getting overloaded has already been suggested as the cause of a different mechanism, the noted "blue shift" we perceive when the light level transitions between rods and cones. Too bad we can't intercept and interpret all those signals going from the eye to the brain to get an empirical sample of what's going on; but really, by that time it would just be easier to get a screen with a huge contrast ratio, 50,000:1 or something, and not have to do HDR in software at all.

Edited by Frenetic Pony


> I remember that paper as well. It's certainly all very nice in hypothesis, but the actual results they produce aren't really true to life compared with what we actually see. Like I said, I highly suspect our eyes react very similarly to a CCD camera... by that time it would just be easier to get a screen with a huge contrast ratio, 50,000:1 or something, and not have to do HDR in software at all.


Sure, full-blown diffraction effects are hard to see in everyday life, but have you never looked at a bright light source surrounded by darkness? Say, a street light at night? Our eyes perceive light in much the same way as any camera, collecting photons on the retina and transmitting intensity and color information across the optic nerve to the brain. There are some extra biological effects, for instance cones and rods shutting down when they are overloaded (like the discoloured or black halo you get when you look at a bright light for too long), and thermal noise that many people can perceive, especially in low light (not much different from sensor noise). The eye's design is a bit more complex than the average camera's, but overall it can be pretty accurately modelled by existing methods. We just need to apply those methods properly and not go over the top for cinematic effect.

As for the "halo" you see around lights, it is caused partly by scattering, and partly by diffraction at the eye's lens (the "rainbow" effect is due to the fact that different wavelengths are diffracted differently, and the overall result produces these multicolor "streaks" of light). What happens is light waves incident to the lens from that light source are diffracted at the aperture and "bleed" onto nearby locations on the sensor. Here's a mockup I did of the base diffraction pattern for a human eye with my lens diffraction tool:

pupil.png?raw=true


Of course it would need to be colorized, scaled, and convolved with the image, but, I don't know about you, it's pretty damn close to what I see when I look at a small white light, though obviously it depends on lots of factors. I should try convolving it with an HDR image and see what the result looks like.
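Convolving such a point-spread function with an HDR frame can be sketched with an FFT (circular boundary handling assumed for brevity; the function name is illustrative):

```python
import numpy as np

def apply_glare_psf(hdr, psf):
    """Convolve a 2-D float HDR image with a glare point-spread function
    via the FFT. The PSF is normalized so total light energy is conserved:
    glare redistributes light, it doesn't create it."""
    psf = psf / psf.sum()
    # Embed the kernel in a full-size frame and centre it on the origin
    # so the convolution doesn't shift the image.
    padded = np.zeros_like(hdr)
    h, w = psf.shape
    padded[:h, :w] = psf
    padded = np.roll(padded, (-(h // 2), -(w // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(hdr) * np.fft.fft2(padded)))

# Sanity check: a delta-function PSF must return the image unchanged.
image = np.arange(36.0).reshape(6, 6)
delta = np.zeros((5, 5))
delta[2, 2] = 1.0
result = apply_glare_psf(image, delta)
```

Because the PSF is normalized, the convolution only moves energy around; a glare kernel like the diffraction pattern above would smear each bright pixel into its characteristic streaks.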

That said, I agree with just developing a high-dynamic-range display. That would be so much easier than messing around with tonemapping algorithms. But then, by the same reasoning, we may as well just develop a holodeck and let nature do the graphics for us :)

Edited by Bacterius


> but the actual results they produce aren't really true to life compared with what we actually see.


Bacterius has produced a much more realistic-seeming result. Anyway, I've concluded that per-pixel flares should definitely operate in an HDR process, a deeper color space, or even a spectral process.

 

> but really, by that time it would just be easier to get a screen with a huge contrast ratio, 50,000:1 or something, and not have to do HDR in software at all.

 

Guess what monitor manufacturers are thinking... 50,000:1? Really, by that time it would just be easier to do it in software. Also, I think you're completely wrong about this. To say the least, I'm quite happy my monitor doesn't pain me with those numbers. Then there are other important technical issues you're forgetting...

Edited by Reflexus
