

Physically Correct "Bloom"



#1 Rhetorician   Members   -  Reputation: 119


Posted 24 May 2013 - 12:20 AM

I've been keeping an eye on the real world... hah. I noticed that, apart from subtle HDR-ish effects, this generic term, "bloom," is really caused by -- if I can find the correct word -- aberration. I see a lot of chromatic distortion, and it behaves mostly like a specular phenomenon. I don't see this kind of bloom bleeding around windows or at the horizon unless I'm in a comparatively dark room (i.e. HDR is at work, with a much more subtle kind of "bloom"). To speak roughly in graphical terms, this definitely is not screen-space. It's as if light diffracted within a semi-transparent layer of a material -- strongest along tangents, towards the camera -- exits the surface with various distortions; i.e. chromatic aberration occurring not in a lens, as I was introduced to it, but at a surface in "the world."


Edited by Reflexus, 24 May 2013 - 03:11 PM.



#2 Hodgman   Moderators   -  Reputation: 28613


Posted 24 May 2013 - 12:46 AM

Bloom is sometimes (ab)used to simulate phenomena that it's not really suited for. E.g. on my last project, we used a really wide bloom filter to make up for a lack of atmospheric scattering -- that effect should technically happen in the world, but it was cheap/easy to do it in screen-space.

 

When talking about glare, there's more than one part to it:

Part of it happens near the light source itself -- the air will scatter some of the light, causing a 'fog' around the source.

Part of it happens near the sensor, which is actually the same effect as "lens flare"! Imperfections in the lenses cause diffraction/scattering/aberrations/etc., which causes some of the light not to land where the lens system intended it to land.

 

Both of these effects are most noticeable when the background is very dark, so there's a large amount of contrast between the light and background. If, for example, the lens causes 0.1% of the incoming light to become in-lens glare (or if the air causes 0.1% of the light to be scattered), then this won't be noticed with a dim light and bright background. But if the background is 0.01% as bright as the light source, then the glare will overpower the background and make it hard to see.

 

Physically speaking, it's correct to simulate the second part using a screen-space bloom filter. But games often use bloom to simulate the first part as well.

 

n.b. even with a pinhole camera, which in theory has perfect focus (just like our rasterization with standard projection matrices gives us), not all of the light will land on the intended sensor pixel. When the light passes through the pinhole, a very small percentage will be bent outwards and land in rings around the intended sensor pixel. If the light is bright enough, these halos will become noticeable as glare/bloom.
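To make the screen-space version concrete, here's a minimal sketch of the usual bright-pass/blur/add pipeline in Python with numpy/scipy. The function name and all the tuning values are made up for illustration; `intensity=0.001` corresponds to the 0.1% in-lens glare figure above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bloom(hdr, threshold=1.0, sigma=8.0, intensity=0.001):
    """Naive screen-space bloom: bright-pass, wide blur, additive recombine.

    hdr: float array (H, W, 3) of linear HDR radiance.
    threshold/sigma/intensity are arbitrary tuning knobs, not values
    from any particular game.
    """
    # Bright pass: keep only the energy above the threshold.
    bright = np.maximum(hdr - threshold, 0.0)
    # A wide Gaussian stands in for the glare point-spread function.
    blurred = gaussian_filter(bright, sigma=(sigma, sigma, 0))
    # Only a small fraction of the light ends up as glare.
    return hdr + intensity * blurred
```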


Edited by Hodgman, 24 May 2013 - 12:56 AM.


#3 Rhetorician   Members   -  Reputation: 119


Posted 24 May 2013 - 12:51 AM

But games often use bloom to simulate the first part as well.


I think this must be handled more carefully. Maybe deferred rendering provides opportunities to improve in this area.

 

the air will scatter some of the light, causing a 'fog' around the source.


I may need to take some pictures to show what I mean. I'm not talking about scattering through fog, atmospheres, or other thick participating media. I'm talking about surfaces. I'm pretty sure it's not simply chromatic aberration in the lens of my eye. I've been looking for good pictures to demonstrate it, but nothing shows exactly what I mean clearly enough.


Edited by Reflexus, 24 May 2013 - 01:11 AM.


#4 Frenetic Pony   Members   -  Reputation: 1223


Posted 24 May 2013 - 01:16 AM

Bloom is sometimes (ab)used to simulate phenomena that it's not really suited for. [...] Physically speaking, it's correct to simulate the second part using a screen-space bloom filter. But games often use bloom to simulate the first part as well.

 

Combine all that with another, more traditional-looking source of "bloom": our eyes work like a CCD camera, and when either our eyes or a CCD-type sensor are overexposed enough, the light (or rather the transmitted detection signal) bleeds over into nearby areas of the sensor (or eye). This doesn't often happen to us naturally; our eyes and brains are good at correcting for it, and even better at what in a camera would be called adaptation and dynamic range, so we rarely see anything overexposed (unless you stare directly at the sun). But you can see it more in CCD-type cameras (most any camera you have will be CCD unless you're a video professional), especially at low f-stops.
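As an illustration of that sensor bleed, here's a toy model (nothing more) of CCD blooming: each well clips at its full-well capacity and spills the excess charge into its vertical neighbours, the way overflow travels along a CCD's transfer columns. The capacity and the 50/50 split are invented numbers:

```python
import numpy as np

def ccd_bloom(exposure, full_well=1.0, iterations=50):
    """Toy CCD blooming: saturated wells spill excess charge into
    their vertical neighbours until everything fits (or we give up
    after a fixed number of passes). exposure: float array (H, W)."""
    charge = exposure.copy()
    for _ in range(iterations):
        excess = np.maximum(charge - full_well, 0.0)
        if not excess.any():
            break
        charge -= excess
        # Split the overflow between the wells above and below,
        # mimicking bleed along the vertical transfer columns.
        charge[:-1, :] += 0.5 * excess[1:, :]
        charge[1:, :] += 0.5 * excess[:-1, :]
    return np.minimum(charge, full_well)
```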



#5 Ohforf sake   Members   -  Reputation: 1688


Posted 24 May 2013 - 03:09 AM

I disagree that biological eyes and CCD cameras work the same in this regard. I think the opposite is the case: if you want to get photo/eye realistic, you should make up your mind about which one of the two you want to have. For example, I have never observed those beautiful lens flares you get from multi-lens cameras with my single-lens eyes.

 

Photo realism has the great benefit that you can .. well .. take pictures of it. :-)

 

In case you are interested in "eye realism", take a look at this paper. They try to model what is actually going on in biological eyes. And the results actually look the way the world looks to me when I'm drunk, so they can't be that far off ^^

 

http://www.mpi-inf.mpg.de/~ritschel/Papers/TemporalGlare.pdf



#6 Rhetorician   Members   -  Reputation: 119


Posted 24 May 2013 - 02:04 PM

In case you are interested in "eye realism" take a look at this paper. They try to model what is actually going on in biological eyes.

 

That's actually pretty old; I read into it a few years ago. I'm still just concerned with how specular highlights can be affected by a material in a way that causes a blooming effect. My house has a lot of big windows, but today the cloud cover makes it shady. When I can, I'll try to capture this effect.

 

 

Back on topic:

 

Here's a screenshot of BF4. I've manipulated it toward what I think is more physically accurate.


[Image: bf4scmcomp.png]

Look for differences on:
> The tires
> The sun flare
> The explosions

 

Full-size links:
Original

Manipulated

I recommend you open the two full-size images and swap between the tabs to see the differences better.


Edited by Reflexus, 24 May 2013 - 02:58 PM.


#7 Frenetic Pony   Members   -  Reputation: 1223


Posted 24 May 2013 - 02:12 PM

I disagree that biological eyes and CCD cameras work the same in this regard. [...] In case you are interested in "eye realism", take a look at this paper. They try to model what is actually going on in biological eyes.

 

I remember that paper as well. It's all very nice in hypothesis, but the actual results they produce aren't really "true to life" compared with what we actually see. Like I said, I highly suspect our eyes react very similarly to a CCD camera; in fact, our optic nerves getting overloaded has already been suggested as the mechanism behind a different phenomenon, the noted "blue shift" we perceive when the light level transitions between rods and cones. Too bad we can't intercept and interpret all those signals going from the eye to the brain to get an empirical sample of what's going on; but really, by that time it would just be easier to get a screen with a huge contrast ratio of 50,000:1 or something and not have to do HDR in software at all.


Edited by Frenetic Pony, 24 May 2013 - 05:09 PM.


#8 Rhetorician   Members   -  Reputation: 119


Posted 24 May 2013 - 04:19 PM

Could someone please take a look at the pictures I've posted so far?



#9 Bacterius   Crossbones+   -  Reputation: 8316


Posted 24 May 2013 - 07:50 PM

I remember that paper as well. [...] I highly suspect our eyes react very similarly to a CCD camera [...] by that time it would just be easier to get a screen with a huge contrast ratio of 50,000:1 or something and not have to do HDR in software at all.


Sure, full-blown diffraction effects are hard to see in everyday life, but have you never looked at a bright light source surrounded by darkness? Say, a street light at night? Our eyes perceive light in much the same way as any camera, collecting photons on the retina and transmitting intensity and color information across the optic nerve to the brain. There are some extra biological effects, for instance cones and rods shutting down when they are overloaded (like the discoloured or black halo you get when you look at a bright light for too long) and thermal noise that many people can perceive, especially in low light (not much different from sensor noise). The eye's design is a bit more complex than the average camera's, but overall it can be modelled pretty accurately by existing methods. We just need to apply those methods properly and not go over the top for cinematic effect.

As for the "halo" you see around lights, it is caused partly by scattering, and partly by diffraction at the eye's lens (the "rainbow" effect is due to the fact that different wavelengths are diffracted differently, and the overall result produces these multicolor "streaks" of light). What happens is light waves incident to the lens from that light source are diffracted at the aperture and "bleed" onto nearby locations on the sensor. Here's a mockup I did of the base diffraction pattern for a human eye with my lens diffraction tool:

[Image: pupil.png]


Of course it would need to be colorized, scaled, and convolved with the image, but it's pretty damn close to what I see when I look at a small white light, though obviously it depends on lots of factors. I should try convolving it with an HDR image and see what the result looks like.
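For anyone who wants to try it, the convolution step could look something like this; a sketch assuming you already have the HDR image and a per-channel PSF as float arrays (the `glare_fraction` knob is invented):

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_glare(hdr, psf, glare_fraction=0.001):
    """Convolve an HDR image with a glare point-spread function.

    hdr: (H, W, 3) linear radiance; psf: (h, w, 3) colorized
    diffraction pattern. glare_fraction is an invented knob for
    how much of the incoming light ends up scattered.
    """
    out = np.empty_like(hdr)
    for c in range(3):  # one PSF slice per color channel
        kernel = psf[:, :, c] / psf[:, :, c].sum()  # conserve energy
        out[:, :, c] = fftconvolve(hdr[:, :, c], kernel, mode="same")
    return (1.0 - glare_fraction) * hdr + glare_fraction * out
```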

That said, I agree with just developing a high dynamic range display. That would be so much easier than messing around with tonemapping algorithms. But then, by the same reasoning, we may as well just develop a holodeck and let nature do the graphics for us :)

Edited by Bacterius, 24 May 2013 - 07:57 PM.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#10 Rhetorician   Members   -  Reputation: 119


Posted 24 May 2013 - 08:29 PM

but the actual results they produce aren't really "true to life" compared with what we actually see.


Bacterius has produced a much more realistic-seeming result. Anyway, I've concluded that per-pixel flares should definitely operate in an HDR process, a deeper color space, or even a spectral process.

 

but really, by that time it would just be easier to get a screen with a huge contrast ratio of 50,000:1 or something and not have to do HDR in software at all.

 

Guess what monitor manufacturers are thinking... 50,000:1? Really, by that time it would just be easier to do it in software. Also, I think you're completely wrong about this. To say the least, I'm quite happy my monitor doesn't pain me with those numbers. And there are other important technical issues you're forgetting...


Edited by Reflexus, 24 May 2013 - 08:37 PM.


#11 Rhetorician   Members   -  Reputation: 119


Posted 25 May 2013 - 07:08 PM

Bump:

I'm curious what people think about my changes here.

 

[Image: bf4scm.png]

 

You're probably thinking the bloom around the specular highlight on the tires looks unnatural, but keep in mind that it's just an illustration of a phenomenon I've noticed through observation. After more careful observation, I've concluded it is indeed a flaring effect which requires a deep range of luminosity, a requirement touched on in my post above. Without such a deep range it will not appear, and the specular highlight will look thin or even virtually invisible, i.e. "normal," as you'd expect. In this manipulation I also reduced the ridiculous bloom occurring skyward, and adjusted the color bleeding: there's none on the tires anymore, but there is more between the explosions' smoke clouds. I've been speculating that diffraction within the sub-layers of in-scene surfaces may contribute to this bloom effect in even more peculiar ways. It seems linked to chromatic aberration, as I noted in the original post.

Now that I recollect -- it has been many years since I first looked into bloom -- isn't this pretty much how bloom is actually modelled in formal research work? If so, I don't get why so many games implement it so damn incorrectly.


Edited by Reflexus, 25 May 2013 - 07:21 PM.


#12 Bacterius   Crossbones+   -  Reputation: 8316


Posted 25 May 2013 - 08:19 PM

You're probably thinking the bloom around the specular highlight on the tires looks unnatural [...] I've been speculating that diffraction within the sub-layers of in-scene surfaces may contribute to this bloom effect in even more peculiar ways. It seems linked to chromatic aberration, as I noted in the original post.

 

The only thing that appears unnatural to me (besides the subtle halos left where you removed the flares) is that the little puddle of water middle-left seems unnaturally dim. But it's really hard to tell. Plus, I've never been in an apocalyptic war zone with a nuke going off in the background :)

 

What do you mean by "diffraction within the sub-layers of surfaces"? Chromatic aberration is a property of the observer's lens, not of the scene itself. And, yes, any form of glare requires a large dynamic range to work. The sun can have a direct brightness more than ten million times higher than your average LCD monitor; you just can't handle that in an LDR framework. And it depends on the tonemapping algorithm used; not all of them accurately depict how the eye actually reacts to changes in brightness.
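As a concrete example of that last point: the simplest global operator, Reinhard's x/(1+x), will compress any dynamic range into [0, 1), but it models nothing about how the eye adapts. A minimal sketch, where the exposure parameter is a free knob standing in for adaptation:

```python
import numpy as np

def reinhard(hdr, exposure=1.0):
    """Global Reinhard tonemap: maps [0, inf) into [0, 1).
    'exposure' is a free scale factor, a rough analogue of the
    eye's (or camera's) adaptation level."""
    x = exposure * hdr
    ldr = x / (1.0 + x)
    # Gamma-encode for display (assuming an sRGB-ish monitor).
    return np.clip(ldr, 0.0, 1.0) ** (1.0 / 2.2)
```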

 

The problem is that perception is also subjective. Not everyone sees the world the same way; there are slight differences in color, brightness, and lots of other things (some people wear glasses or contacts), so what might look realistic to you might look completely wrong to someone else. There are baseline features common to everyone, but until we can tweak this to work for everyone, I assume games find it more cost-effective (both in terms of implementation and research) to simply assume the gamer sees the world through a camera instead of his own eyes. And as I said before, stuff often looks more spectacular when you add extra effects even when it's not physically correct, which sells :P




#13 Rhetorician   Members   -  Reputation: 119


Posted 25 May 2013 - 08:25 PM

"diffraction within the sub-layers of surfaces"? Chromatic aberration is a property of the observer's lens, not of the scene itself

 

Subtle caustics which look like chromatic aberration. Specifically, these "caustics" don't become evident after reflecting off another surface; instead they hit the eye directly. I know I'm not using the terms correctly, but at the moment I'm not really focused on exactly what to call it.

 

The only thing that appears unnatural to me (besides the subtle halos left where you removed the flares) is that the little puddle of water middle-left seems unnaturally dim.

 

Yes, I did notice I messed that up, and I fixed it soon after uploading the picture you see there. Since you've noticed it yourself already, I won't upload the fixed version.


Edited by Reflexus, 25 May 2013 - 08:34 PM.


#14 Bacterius   Crossbones+   -  Reputation: 8316


Posted 25 May 2013 - 09:12 PM

Subtle caustics which look like chromatic aberration. Specifically, these "caustics" don't become evident after reflecting off another surface; instead they hit the eye directly. I know I'm not using the terms correctly, but at the moment I'm not really focused on exactly what to call it.

 

I'm not sure I understand what you are referring to. Could you give an example, or maybe a picture of the effect? I know it's hard to find pictures of abstract stuff like this but anything would help. The only thing I can think of that comes close to what you describe is multicolored noise on various objects due to interference of light at the surface, like oil film reflections but on a much smaller scale. Or specular reflection/transmission of a bright light source which seems to change color depending on the surface orientation.




#15 Rhetorician   Members   -  Reputation: 119


Posted 25 May 2013 - 11:12 PM

The only thing I can think of that comes close to what you describe is multicolored noise on various objects due to interference of light at the surface, like oil film reflections but on a much smaller scale.

 

Interesting. Looking around my house, I actually discovered this while observing bright specular highlights on oiled woodwork, but something comparable also seemed to occur in the bright highlights produced by elaborate glass vases constructed like mosaics, with many discrete glass sub-faces glued together (lots of caustic effects). I'm still trying to reproduce it in images; I just need to find the right time of day to take pictures, but I'm usually busy/distracted with programming.


Edited by Reflexus, 25 May 2013 - 11:33 PM.


#16 LancerSolurus   Members   -  Reputation: 553


Posted 25 May 2013 - 11:49 PM

I don't see anything like what has been posted in any of the above pictures. All I see is a spiked halo around a bright light source if I'm not wearing my glasses. Wearing my glasses, I see no halo or lens flare at all; just a perfectly sharp image. As posted above, everyone sees things differently; all of the above would be unnatural to me. I would have to squint to get the above effect, getting a diffraction pattern from my eyelashes.




#17 Rhetorician   Members   -  Reputation: 119


Posted 26 May 2013 - 01:15 AM

I don't see anything like what has been posted in any of the above pictures. All I see is a spiked halo around a bright light source if I'm not wearing my glasses.


It also depends a lot on the light source and several other conditions.



#18 Bacterius   Crossbones+   -  Reputation: 8316


Posted 26 May 2013 - 01:47 AM

The only thing I can think of that comes close to what you describe is multicolored noise on various objects due to interference of light at the surface, like oil film reflections but on a much smaller scale.

 

Interesting. Looking around my house, I actually discovered this while observing bright specular highlights on oiled woodwork, but something comparable also seemed to occur in the bright highlights produced by elaborate glass vases constructed like mosaics, with many discrete glass sub-faces glued together (lots of caustic effects). I'm still trying to reproduce it in images; I just need to find the right time of day to take pictures, but I'm usually busy/distracted with programming.

 

At first I was thinking that you were simply observing dispersion, which makes sense for the glass vases, but that doesn't really happen on glossy wood. So it looks like a case of thin-film interference, which is compatible with both situations (in the first case the film is the oil; in the second case it's the glue -- or maybe it's actually dispersion in the second case and they just happen to look similar). I really cannot think of anything else at the moment. In either case, yes, what happens is that only certain wavelengths of light make it to your eye depending on how you are looking at the object (the others are either deflected away from your line of sight or eliminated by destructive interference), so if you move slightly, the highlight will seem to change color. This effect is of course independent of the lens system used to observe it. Is this what you see?
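For what it's worth, the two-beam model of thin-film interference is easy to play with numerically. A rough sketch that keeps only the phase term and ignores the Fresnel amplitudes (the film thickness and index are illustrative values for an oil film, not measurements):

```python
import numpy as np

def thin_film_reflectance(wavelength_nm, thickness_nm=400.0,
                          n_film=1.47, cos_theta_t=1.0,
                          phase_shift=np.pi):
    """Two-beam thin-film interference, phase term only.

    Ignores the Fresnel amplitudes; phase_shift is pi when only one
    of the two boundary reflections picks up a half-wave shift
    (which depends on the ordering of the refractive indices).
    """
    # Optical path difference between the two reflected waves.
    delta = 4.0 * np.pi * n_film * thickness_nm * cos_theta_t / wavelength_nm
    return 0.5 * (1.0 + np.cos(delta + phase_shift))

# Relative reflectance across the visible spectrum, one viewing angle:
lam = np.linspace(380.0, 740.0, 19)
print(thin_film_reflectance(lam).round(2))
```

Varying cos_theta_t (i.e. tilting the film relative to your eye) shifts which wavelengths interfere constructively, which is exactly the color change with viewing angle described above.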




#19 Rhetorician   Members   -  Reputation: 119


Posted 26 May 2013 - 02:46 AM

You completely nailed it. I'm very new to this field of study, but I just looked up dispersion and now I'm absolutely sure you're correct about both situations. Thin-film interference seems to be the cause for the oiled wood, and dispersion for the vases. I feel stupid because I've completely failed to associate the phenomena with their proper terms (slaughtered it), but I think this topic is completely solved now. The vases do have an interesting gloss coating, so I think thin-film interference is also playing a role there. Thanks :)

 

 

By the way, what happens when the coating is thicker than the model in your article allows, and cos θ is very near zero (think dispersion rather than absorption)?


Edited by Reflexus, 26 May 2013 - 03:03 AM.


#20 Bacterius   Crossbones+   -  Reputation: 8316


Posted 26 May 2013 - 04:19 AM

By the way, what happens when the coating is thicker than the model in your article allows, and cos θ is very near zero (think dispersion rather than absorption)?

 

When the coating becomes too thick and the local entry/exit point approximation breaks down, the following happens:

 

1. Interference effects disappear. In fact, the fringes become so thin that they average out to white; this isn't really captured by the BRDF, but if you do any kind of anti-aliasing that's what would happen. In practice I do not recommend relying on this unless you are working within a wave rendering framework, which is probably not the case.

2. Dispersion effects in the coating start to dominate (assuming the coating is a dispersive medium), as the waves travel longer -- and hence diverge more -- inside the coating.

3. You actually need some sort of method to have the exit point of the light differ from the entry point. It is possible and fairly easy to calculate the distribution of exit points so that you can sample them randomly, but you'll need something akin to subsurface scattering to actually handle that feature in your rendering pipeline.

 

In fact, it probably makes little sense to use this model when the coating becomes so thick as to not exhibit interference effects, since those are the model's raison d'être. At that point it's probably best to actually use two parallel pieces of geometry :)

 

When cos θ tends to zero (grazing incidence), almost all light is reflected off the coating per Fresnel's laws, so no appreciable interference or dispersion is observed. You may even get total internal reflection, depending on the refractive indices of the coating and of the medium the incident wave is in.
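To make that concrete, here are the unpolarized Fresnel equations in a small sketch; reflectance climbs toward 1 as cos θ goes to zero, and Snell's law failing to give a real transmitted angle is exactly the total internal reflection case:

```python
import numpy as np

def fresnel_reflectance(cos_theta_i, n1=1.5, n2=1.0):
    """Unpolarized Fresnel reflectance going from medium n1 into n2.
    Returns 1.0 beyond the critical angle (total internal reflection)."""
    sin_t_sq = (n1 / n2) ** 2 * (1.0 - cos_theta_i ** 2)  # Snell's law
    if sin_t_sq >= 1.0:
        return 1.0  # total internal reflection
    cos_theta_t = np.sqrt(1.0 - sin_t_sq)
    r_s = ((n1 * cos_theta_i - n2 * cos_theta_t) /
           (n1 * cos_theta_i + n2 * cos_theta_t)) ** 2
    r_p = ((n1 * cos_theta_t - n2 * cos_theta_i) /
           (n1 * cos_theta_t + n2 * cos_theta_i)) ** 2
    return 0.5 * (r_s + r_p)  # average of the two polarizations

# Grazing incidence: reflectance approaches 1 as cos(theta) -> 0
# (air into a coating with an illustrative index of 1.47).
for c in (1.0, 0.5, 0.1, 0.01):
    print(c, round(fresnel_reflectance(c, n1=1.0, n2=1.47), 3))
```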






