lipsryme

Some questions about bloom


1. Should I use bloom & tonemapping on the specular component? (To me it seems to distort the form and "energy" of my specular quite a bit.)
2. How do I combine the bloomed lightmap with the rest of the scene?

It doesn't seem to affect anything much. When I use a texture that is basically black with bright white dots and combine that with the blurred lighting, what I get isn't really "glowy" at all. Or am I confusing the result of bloom with bright parts of the image glowing?

Here are some shots of this:
Diffuse only.
lightmap with bloom: http://cl.ly/image/382p0h321e0S
compose with the bloomed lightmap: http://cl.ly/image/1g0q1h2m2M1U

3. Having the albedo multiplied with the blurred lighting seems to light parts of the image that shouldn't be lit (light leaking), as you can see in this picture:
http://cl.ly/image/102y261x0w1o

Notice the light leaking through the borders of the cube. Edited by lipsryme

I'm quite confused about what you're doing, so I'm going to describe how your engine should be structured if you want it to be as realistic as possible. I'll divide it into ordered stages:

1. Incoming Radiance Simulation: In this stage you approximate the lighting that shines into your virtual camera / eye. This value should be completely unclamped and can range from 0.0001 to 1000000.0 (high dynamic range).
Typical Passes in this stage are: G-Buffer Generation, Light Accumulation, Shadow Mapping, SSAO, Sky Rendering, Volumetric Light Scattering, ...

2. Lens Simulation: In this stage you simulate how the incoming lighting gets modified by travelling through the lenses. Lenses typically cause interreflections of the incoming radiance, which appear as lens flares in the final image; lenses also need to focus on a specific distance (depth of field). Bloom occurs because the glass is not perfectly pure. Lenses also refract the light, which causes chromatic aberration on cheap lenses.
Typical Passes in this stage are: Lens Flares, Bloom, Depth of Field, Chromatic Aberration

3. Aperture Simulation: The camera / eye needs to adapt to the current average luminance of the incoming lighting, unless you want to set the exposure manually; that adaptation is simulated in this stage.
Typical Passes in this stage are: Average Luminance Calculation, Exposure Adjustment

4. Retina Simulation: In this stage you simulate how the adjusted incoming lighting affects the retina. You need to translate the incoming high-dynamic-range lighting into the range [0, 1]. The resulting image can be shown on the screen, though you might want to add HUD elements first.
Typical Passes in this stage are: Tone Mapping

You could swap the second and third stages, because the iris of an eye and the aperture of a camera actually sit in front of the lenses. But the result should be the same, and this is typically how it's implemented, because exposure adjustment is often done inside the tone mapper.
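The four stages above can be sketched per pixel. Everything here (the function names, the Reinhard curve standing in for the tone mapper, the fixed exposure value) is an illustrative assumption, not any specific engine's code; the lens-simulation stage is omitted because it needs neighbouring pixels, not a single value:

```python
# Hypothetical per-pixel sketch of the ordered post-processing stages.

def adjust_exposure(radiance, exposure):
    """Aperture simulation: scale unclamped HDR radiance by an exposure."""
    return radiance * exposure

def tonemap_reinhard(x):
    """Retina simulation: map unclamped HDR radiance into [0, 1]."""
    return x / (1.0 + x)

def post_process(hdr_pixel, exposure=0.18):
    # Stage 2 (lens simulation: bloom, flares, DoF) would go here,
    # but it operates on the whole image, not a single pixel.
    x = adjust_exposure(hdr_pixel, exposure)   # stage 3
    return tonemap_reinhard(x)                 # stage 4

# Even a sun-bright HDR value still lands inside [0, 1]:
print(post_process(1000000.0))
```

The point of the sketch is only the ordering: nothing is clamped until the tone map at the very end.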

Also, in your case, you don't want to apply bloom to an untextured render of your scene. Bloom happens inside your eye / camera in the real world, and your eyes don't see the world untextured.


1. Should I use bloom & tonemapping on the specular component? (To me it seems to distort the form and "energy" of my specular quite a bit.)

You should apply it on everything the virtual eye sees. Edited by CryZe

First of all, thanks for taking the time to write this. While it helps me understand the general idea, it doesn't answer all my questions; maybe what I wanted to know was a little confusing. I think I got the answer to the first question, but questions 2 and 3 are still not a hundred percent clear to me.
What I showed in the first screenshot is only the diffuse lighting output (already with bloom & tonemapping) so you could see how it looks. What I compose as the final image is of course what you see in the second screenshot. I was using this white-dots-on-black texture as a test to see the bloom working. It seems odd to me that no matter what I do, after combining the lighting with the texture color (albedo) the white parts don't appear to "glow", because of course only the lighting is blurred, not the texture color. So I guess I confused the effect of bloom with a glow (glow map). (Question 2 solved?)
Still, for the sake of correctness, here's how I do it in my engine:
I calculate my lighting and write it to a light accumulation texture. This texture gets filtered in a bright pass for values >= a threshold, which is written to a separate RT that gets blurred a few times. That texture is then added back to the original light accumulation texture (originalColor + blurredColor) and the result is sent to the tonemapper.
If there's anything wrong with that, tell me.
Which leaves question 3: there still appears to be a problem when combining the blurred lightmap with the albedo at the end. The corners seem to bleed with the texture color, which should not happen, as only the front face of the cube should be lit. Is there any way to fix this? Edited by lipsryme


I calculate my lighting and write it to a light accumulation texture. This texture gets filtered in a bright pass for values >= a threshold, which is written to a separate RT that gets blurred a few times. That texture is then added back to the original light accumulation texture (originalColor + blurredColor) and the result is sent to the tonemapper.
If there's anything wrong with that, tell me.

Yes, there is something wrong. Like I said, the light accumulation texture is just an intermediate texture in your "Incoming Radiance Simulation" stage. Bloom is what you apply after you're done with everything else. That means you want to combine your Light Accumulation Texture with your Albedo first, in your Light Composite Pass (I = LightAccumulationDiffuse * Albedo + LightAccumulationSpecular).

Also, there's no difference between bloom and glow. What you probably mean is using a texture as a glow map for an object. You can do that: you basically extend your Light Composite Pass so that the glow map can increase the visible radiance without the need for a light, and the bloom will make it look glowy afterwards. I = LightAccumulationDiffuse * Albedo + LightAccumulationSpecular + Emissive
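The composite formula above, as a minimal per-pixel sketch; the names follow the formula and the sample values are made up:

```python
# Sketch of the light composite described above:
# I = LightAccumulationDiffuse * Albedo + LightAccumulationSpecular + Emissive

def composite(light_diffuse, albedo, light_specular, emissive):
    """Combine per-channel (r, g, b) tuples; all inputs are HDR values."""
    return tuple(d * a + s + e
                 for d, a, s, e in zip(light_diffuse, albedo,
                                       light_specular, emissive))

# A glow-mapped pixel emits even with no incoming light at all:
pixel = composite((0.0, 0.0, 0.0),   # diffuse light accumulation
                  (0.5, 0.5, 0.5),   # albedo
                  (0.0, 0.0, 0.0),   # specular light accumulation
                  (2.0, 1.0, 0.0))   # emissive / glow map
print(pixel)  # (2.0, 1.0, 0.0)
```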


Which leaves me to question 3. There still appears to be a problem when combining the blurred lightmap with the albedo in the end. The corners seem to bleed with the texture color, which should not happen, as only the front face of the cube should be lit. Is there any way to fix this problem?

This problem won't occur anymore if you apply bloom after you've combined the light accumulation and albedo. Edited by CryZe

Edit: Ah, so I do the bloom separately and then add that to the final composite, right? Edited by lipsryme


So, thinking of a classic deferred approach: I render my lights into the light accumulation buffer, then combine that with albedo and everything else (SSAO) in the composition pass, and after that apply bloom & tonemapping as a quasi post-effect? Is that correct?

Yes, they are separate passes.
G-Buffer Generation -> Light Accumulation -> Light Composite (combine light accumulation with albedo) -> ... (some other passes) -> Bloom (and lens flares, ... ) -> Average Luminance Calculation -> Exposure Adjustment -> Tone Mapping Edited by CryZe


So, thinking of a classic deferred approach: I render my lights into the light accumulation buffer, then combine that with albedo and everything else (SSAO) in the composition pass, and after that apply bloom & tonemapping as a quasi post-effect? Is that correct?

Yes, they are separate passes.
G-Buffer Generation -> Light Accumulation -> Light Composite (combine light accumulation with albedo) -> ... (some other passes) -> Bloom (and lens flares, ...) -> Average Luminance Calculation -> Exposure Adjustment -> Tone Mapping

Just one side-track question on bloom: is there a standard way to generate bloom? I know we perform a bright pass to extract pixels above a luminance threshold and then blur.

Is a Gaussian blur with just a few iterations enough? And do we really need to downsample the buffer we blur? Gaussian looks too standard to me. I know Epic uses some kind of exponential filter, and because it's not separable they approximate it with multiple Gaussian filters of varying size to emulate the exponential falloff...
Any other ideas for a more convincing glare that looks really cool?

Thanks in advance


Just one side-track question on bloom: is there a standard way to generate bloom? I know we perform a bright pass to extract pixels above a luminance threshold and then blur.

I wouldn't recommend using a threshold value; you only need one when you don't have a high enough dynamic range. By simply multiplying the blurred image with something like 0.0075 (the value I use; it depends on your implementation), only really bright pixels produce a visible bloom effect, so there's no need for a threshold. You don't even need a bright pass at all. If you actually want a bright pass with a threshold value, for whatever reason, you shouldn't do it as a separate pass: just apply the threshold when reading the initial values in the blur pass. That reduces the bandwidth needed and removes the overhead of the additional pass. I'd still not recommend a threshold value, though.
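The threshold-free approach can be sketched like this. The 0.0075 scale is the value mentioned in the post; treating pixels as scalar HDR luminances (instead of RGB) is a simplification for illustration:

```python
# Sketch of threshold-free bloom in an HDR pipeline: no bright pass,
# just scale the blurred image down so only very bright pixels show.

BLOOM_SCALE = 0.0075  # implementation-dependent, per the post

def add_bloom(scene, blurred):
    """scene and blurred are flat lists of HDR luminance values."""
    return [s + BLOOM_SCALE * b for s, b in zip(scene, blurred)]

# An ordinary pixel (blurred value 1.0) gains almost nothing, while a
# sun-bright pixel (blurred value 10000) spills a clearly visible amount.
out = add_bloom([1.0, 0.1], [1.0, 10000.0])
print(out)
```

Because the scene is unclamped HDR, the scale factor does the job a threshold would otherwise do: dim pixels contribute negligibly on their own.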


Is a Gaussian blur with just a few iterations enough? And do we really need to downsample the buffer we blur?

The downsampling is needed to effectively double the kernel size each iteration. This way you can use a kernel that affects the whole screen (so the sun can bloom across the whole screen, for example) with logarithmic runtime O(log n).
With a clever compute shader you might not need to downsample and might be able to do it all in a single pass while still having logarithmic runtime, but I'm not sure whether the implementation I'm currently thinking of would actually result in a Gaussian filter. Who knows, maybe there's a way to do it in a single pass with logarithmic runtime. For the exponential filter I did come up with a compute shader implementation with logarithmic runtime; I use it as if the filter were separable, so it's really fast.
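The kernel-doubling argument can be illustrated with a toy calculation; the base radius and iteration count here are arbitrary:

```python
# Why downsampling doubles the effective kernel each iteration: a blur of
# fixed pixel width applied at half resolution covers twice as many
# full-resolution pixels. This toy 1D version only counts coverage.

def effective_radius(base_radius, iterations):
    """Total blur radius in full-resolution pixels after n mip iterations."""
    total = 0
    for i in range(iterations):
        total += base_radius * (2 ** i)  # same kernel, half the resolution
    return total

# A 4-pixel kernel reaches 4 + 8 + 16 + 32 + 64 = 124 pixels after five
# iterations, so roughly log2(screen width) iterations span the screen.
print(effective_radius(4, 5))  # 124
```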


Gaussian looks too standard to me. I know Epic uses some kind of exponential filter, and because it's not separable they approximate it with multiple Gaussian filters of varying size to emulate the exponential falloff...

I'm still using my exponential filter at the moment. The "non-separableness" produces a somewhat star-like shape, and I don't think that actually looks bad. You can't even see these artifacts in most cases; the star shape only becomes visible on small bright pixels. Stars, for example.

Other than Epic Games' implementation, I could only think of doing basically the same thing in just 2 compute shader passes (a horizontal one, and a vertical one that also combines the intermediate results). You do the downsamples in groupshared memory and write the intermediate results out to global memory, then combine them later to get the approximated exponential filter. The thing is that both their implementation and this theoretical one are really heavy on global memory accesses (though the theoretical one should be lighter). That's why I still prefer my "non-separable exponential filter done separably" implementation over theirs, since it only accesses global memory 4 times per pixel (2 reads, 2 writes).
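For illustration, the horizontal-then-vertical structure that separable filtering relies on looks like this in a minimal CPU sketch; a box kernel stands in for the actual exponential or Gaussian weights, and nothing here is CryZe's actual compute shader:

```python
# Separable blur as two 1D passes: blur every row, then every column.
# Cost is 2 passes of width (2r+1) instead of one (2r+1)^2 2D kernel.

def blur_1d(row, radius):
    """Box-blur one row of values, clamping the window at the edges."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n - 1, i + radius)
        out.append(sum(row[lo:hi + 1]) / (hi - lo + 1))
    return out

def blur_2d_separable(image, radius):
    """Horizontal pass over rows, then vertical pass over columns."""
    rows = [blur_1d(r, radius) for r in image]
    cols = [blur_1d(list(c), radius) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# A single bright pixel spreads into a soft square patch:
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
blurred = blur_2d_separable(img, 1)
```

A true exponential falloff is not separable this way, which is exactly the trade-off the posts above are discussing; this sketch only shows the two-pass skeleton.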


Any other ideas for a more convincing glare that looks really cool?

I can't think of a filter that would look better than an exponential filter.


In that case the composite has to be done in high dynamic range, correct?

Yes, the composite should be done in high dynamic range. Basically everything involving lighting should be high dynamic range, up until the tone mapper. Not SSAO, though, as it represents the percentage of ambient light reaching a point, so you only need unsigned normalized values. Edited by CryZe


I wouldn't recommend using a threshold value; you only need one when you don't have a high enough dynamic range. By simply multiplying the blurred image with something like 0.0075 (the value I use; it depends on your implementation), only really bright pixels produce a visible bloom effect, so there's no need for a threshold. You don't even need a bright pass at all. If you actually want a bright pass with a threshold value, for whatever reason, you shouldn't do it as a separate pass: just apply the threshold when reading the initial values in the blur pass. That reduces the bandwidth needed and removes the overhead of the additional pass. I'd still not recommend a threshold value, though.


OK, that's a good point. Just some questions about your luminance extraction:

1) Do you extract luminance (xyY, so Y) using the usual row of the RGB -> CIE conversion matrix? So you basically convert back and forth between RGB <-> CIE when you need to perform calculations on RGB values? And when you tonemap, you work out the right luminance and then use that tonemapped luminance to go from CIE back to RGB before writing to the frame buffer?

2) As for the blur, I just want to be sure I perform all the steps correctly, which means:

2a) Downsample the luminance scene buffer to half size.
2b) Blur the halved buffer. (When you blur, you take the samples at the texel edges, I guess, right?)
2c) Repeat 2b on the halved buffer that was blurred in the previous iteration? (What I mean is: do I downsample the scene luminance buffer just the very first time, then blur it repeatedly until satisfied, and only at the end stretch it to fullscreen?)
Since you said "the downsampling is needed to effectively double the kernel size each iteration", it sounds like I also have to downsample the blurred buffer each iteration... could you please clarify this?
Does the doubling of the kernel happen when you stretch to fullscreen at the end of the whole process, or during the blur iterations?

3) The exponential distribution is just weight = λ·exp(−λx), where x is the i-th sample to weight and λ is the rate of the distribution? (And obviously I divide by the sum of all weights so they sum to 1, otherwise the filter would darken my image.)
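The normalization asked about in point 3 could look like this as a sketch; the radius and rate values are arbitrary:

```python
# Exponential filter weights w_i = exp(-rate * |i|), normalized so they
# sum to 1 and the filter neither darkens nor brightens the image.
import math

def exp_weights(radius, rate):
    raw = [math.exp(-rate * abs(i)) for i in range(-radius, radius + 1)]
    total = sum(raw)
    return [w / total for w in raw]

weights = exp_weights(radius=3, rate=0.8)
# The weights sum to 1, so a constant image passes through unchanged:
print(sum(weights))
```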

Thanks in advance. Edited by MegaPixel
