
Separable gaussian blur too dark?


20 replies to this topic

#1 lipsryme   Members   -  Reputation: 1006


Posted 08 September 2012 - 10:57 AM

So I implemented a 9-tap (5-tap linear sampled) separable gaussian blur, which you can see here:

float4 PS(VSO input) : SV_TARGET0
{
const float offset[3] = {  0.0, 1.3846153846, 3.2307692308 };
const float weight[3] = { 0.2270270270, 0.3162162162, 0.0702702703 };
const float pixelSize = float2(1.0f / _ScreenSize.x, 1.0f / _ScreenSize.y);


float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV * pixelSize).xyz * weight[0];

for (int i = 1; i < 3; i++)
{
	texColor += TargetTexture.Sample(TargetTextureSampler, input.UV + float2(0.0f, offset[i] * pixelSize)).xyz * weight[i];
	texColor += TargetTexture.Sample(TargetTextureSampler, input.UV - float2(0.0f, offset[i] * pixelSize)).xyz * weight[i];
}

return float4(texColor.rgb, 1.0f);
}

I'm just not sure if the result is correct, since it's quite a lot darker than the original image:
original : http://cl.ly/image/093q0Z2c0m1a
blurred: http://cl.ly/image/3E0P2d3t0U1g

I'm trying to achieve a bloom effect, so making the result darker is quite the opposite of what I want...
Am I doing something wrong?
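
For reference, a minimal sketch of where those offsets and weights come from, assuming the usual trick of collapsing a 9-tap discrete kernel into 5 bilinear fetches (the underlying one-sided 9-tap weights below are an assumption, taken from the commonly used reference kernel):

// Hypothetical derivation sketch, not part of the shader above.
// One-sided weights of the underlying 9-tap kernel (center + 4 taps to one side):
//   w = { 0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162 }
// Each pair of neighbouring taps is merged into one bilinear fetch:
//   combined weight  W1 = w[1] + w[2] = 0.3162162162
//   combined weight  W2 = w[3] + w[4] = 0.0702702703
//   combined offset  O1 = (1 * w[1] + 2 * w[2]) / W1 = 1.3846153846
//   combined offset  O2 = (3 * w[3] + 4 * w[4]) / W2 = 3.2307692308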

Edited by lipsryme, 10 September 2012 - 07:10 PM.



#2 Madhed   Crossbones+   -  Reputation: 2812


Posted 08 September 2012 - 11:06 AM

I think your weights are off.
Shouldn't they add up to 1?


Never mind, I didn't really read through the algorithm first.

Edited by Madhed, 08 September 2012 - 11:09 AM.


#3 Madhed   Crossbones+   -  Reputation: 2812


Posted 08 September 2012 - 11:11 AM

float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV * pixelSize).xyz * weight[0];


Doesn't that always sample the top-left corner (i.e. black)?

I think it should be:
float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV).xyz * weight[0];
Right?

Edited by Madhed, 08 September 2012 - 11:12 AM.


#4 CryZe   Members   -  Reputation: 768


Posted 08 September 2012 - 11:11 AM

Your first weight seems to be about half the weight it should be.

Also, I recommend not using a hardcoded array like this. Use a function that evaluates the gaussian filter for you instead:

float gaussianKernel(float x, float standardDeviation)
{
    return exp(-(x * x) / (2 * standardDeviation * standardDeviation)) / (sqrt(2 * 3.14159265) * standardDeviation);
}

The compiler will compile this into constant weights, and it makes it easier for you to manage your code or to compile this shader for multiple standard deviations.

Also, shouldn't your pixel size be a float2?
const float pixelSize = float2(1.0f / _ScreenSize.x, 1.0f / _ScreenSize.y);

And yes, Madhed is right too.

Edited by CryZe, 08 September 2012 - 11:27 AM.


#5 lipsryme   Members   -  Reputation: 1006


Posted 08 September 2012 - 11:43 AM

Well, I based my implementation on this one here:
http://rastergrid.co...inear-sampling/

or this one, which seems to do the same:
http://www.geeks3d.c...filter-in-glsl/


Update: you were right, the initial texColor was wrong.
This code now seems to give me a correct image:
float4 PS(VSO input) : SV_TARGET0
{
const float offset[3] = {  0.0, 1.3846153846, 3.2307692308 };
const float weight[3] = { 0.2270270270, 0.3162162162, 0.0702702703 };
float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV).xyz * weight[0];

for (int i = 1; i < 3; i++)
{
    texColor += TargetTexture.Sample(TargetTextureSampler, input.UV + float2(offset[i], 0.0f) / _ScreenSize.x).rgb * weight[i];
    texColor += TargetTexture.Sample(TargetTextureSampler, input.UV - float2(offset[i], 0.0f) / _ScreenSize.x).rgb * weight[i];
}

//return TargetTexture.Sample(TargetTextureSampler, input.UV);
return float4(texColor.rgb, 1.0f);

}

@CryZe, what are those inputs 'x' and 'standardDeviation'?
Is x = i in my case?

Edited by lipsryme, 08 September 2012 - 11:56 AM.


#6 CryZe   Members   -  Reputation: 768


Posted 08 September 2012 - 12:05 PM

Yes, x is your i and standardDeviation correlates to the width of the filter (it should be about a third of the maximum i). Your weights still seem off though. I fixed them:
const float weight[3] = { 0.40261952, 0.2442015368, 0.0544886997 };
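
To show where those numbers might come from, a quick worked sketch using the gaussianKernel function above with standardDeviation = 1 (the final normalization step is an assumption about how they were derived):

// gaussianKernel(0, 1) ≈ 0.3989423, gaussianKernel(1, 1) ≈ 0.2419707, gaussianKernel(2, 1) ≈ 0.0539910
// sum over all five taps: 0.3989423 + 2 * 0.2419707 + 2 * 0.0539910 ≈ 0.9908657
// dividing each value by that sum gives ≈ { 0.4026195, 0.2442015, 0.0544887 }, i.e. weights that sum to 1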

Edited by CryZe, 08 September 2012 - 12:22 PM.


#7 lipsryme   Members   -  Reputation: 1006


Posted 08 September 2012 - 12:17 PM

With that in mind, would this be correct?
float gaussianKernel(float x, float standardDeviation)
{
	return exp(-(x * x) / (2 * standardDeviation * standardDeviation)) / (sqrt(2 * 3.14159265) * standardDeviation);
}

float4 PS(VSO input) : SV_TARGET0
{
   const float offset[3] = {  0.0, 1.3846153846, 3.2307692308 };
   const float weight[3] = { 0.45405405404, 0.3162162162, 0.0702702703 };
   float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV).xyz * weight[0];

   for (int i = 1; i < 3; i++)
   {
	  float weight = gaussianKernel(i, 3 / 3); // (2 / 3) also gives different results
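	  // Note: 3 / 3 and 2 / 3 are integer divisions here (1 and 0); passing 0 would make standardDeviation zero and divide by zero inside gaussianKernel.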
	  texColor += TargetTexture.Sample(TargetTextureSampler, input.UV + float2(offset[i], 0.0f) / _ScreenSize.x).rgb * weight;
	  texColor += TargetTexture.Sample(TargetTextureSampler, input.UV - float2(offset[i], 0.0f) / _ScreenSize.x).rgb * weight;
   }
   return float4(texColor.rgb, 1.0f);
}

Evaluating the function with those numbers gives me different weights than my predefined ones.

Edited by lipsryme, 08 September 2012 - 12:28 PM.


#8 CryZe   Members   -  Reputation: 768


Posted 08 September 2012 - 12:33 PM

float gaussianKernel(float x, float standardDeviation)
{
	return exp(-(x * x) / (2 * standardDeviation * standardDeviation)) / (sqrt(2 * 3.14159265) * standardDeviation);
}

float4 PS(VSO input) : SV_TARGET0
{
   const int numSamples = 3;
   const float standardDeviation = numSamples / 3.0;

   const float offset[numSamples] = {  0.0, 1.3846153846, 3.2307692308 };
   const float weight[numSamples] = { 0.40261952, 0.2442015368, 0.0544886997 }; //Either use these or the gaussianKernel function
   float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV).xyz * gaussianKernel(0, standardDeviation); //You forgot about this weight here

   for (int i = 1; i < numSamples; i++)
   {
	  float weight = gaussianKernel(i, standardDeviation);
	  texColor += TargetTexture.Sample(TargetTextureSampler, input.UV + float2(offset[i], 0.0f) / _ScreenSize.x).rgb * weight;
	  texColor += TargetTexture.Sample(TargetTextureSampler, input.UV - float2(offset[i], 0.0f) / _ScreenSize.x).rgb * weight;
   }
   return float4(texColor.rgb, 1.0f);
}

You might also want to check out the implementation I'm currently using in my engine (even though I'm currently switching to a more optimized compute shader implementation with a runtime of O(log n) per pixel):
#ifndef MIN_WEIGHT
	 #define MIN_WEIGHT 0.0001f
#endif
#ifndef FILTER
	 #error You have to define the filter. (FILTER = (GAUSSIAN|EXPONENTIAL))
#endif
#define GAUSSIAN 0
#define EXPONENTIAL 1
#if FILTER == GAUSSIAN
	 #ifndef STANDARD_DEVIATION
		  #error You have to define the standard deviation when using a gaussian kernel. (STANDARD_DEVIATION = float)
	 #endif
#elif FILTER == EXPONENTIAL
	 #ifndef MEAN_VALUE
		  #error You have to define the mean value when using an exponential kernel. (MEAN_VALUE = float)
	 #endif
#endif
#ifndef DIRECTION
	 #error You have to define the direction. (DIRECTION = (HORIZONTAL|VERTICAL|int2(x,y)))
#endif
#ifndef MIP
	 #define MIP 0
#endif
#define HORIZONTAL int2(1, 0)
#define VERTICAL int2(0, 1)
Texture2D SourceTexture : register(t0);
cbuffer InfoBuffer : register(b0)
{
	 float Width;
	 float Height;
};
struct PSIn
{
	 float4 Position : SV_POSITION;
	 float2 TexCoord : TEXCOORD0;
	 float2 ScreenPos : SCREEN_POSITION;
};
float gaussianKernel(int x, float standardDeviation)
{
	 return exp(-(x * x) / (2 * standardDeviation * standardDeviation)) / (sqrt(2 * 3.14159265) * standardDeviation);
}
float integratedExponentialKernel(float x, float m)
{
	 return 0.5 * (1 - exp(-x / m) / 2) * (sign(x) + 1) - 0.25 * exp(x / m) * (sign(x) - 1);
}
float exponentialKernel(int x, float m)
{
	 return integratedExponentialKernel(x + 0.5, m) - integratedExponentialKernel(x - 0.5, m);
}
float filter(int x)
{
	 #if FILTER == GAUSSIAN
		  return gaussianKernel(x, STANDARD_DEVIATION);
	 #elif FILTER == EXPONENTIAL
		  return exponentialKernel(x, MEAN_VALUE);
	 #endif
}
float3 sample(int2 position, int offset)
{
	 float3 textureColor = 0.0f;
	 float2 newOffset = offset * DIRECTION;
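	 // Note: the offset argument of Texture2D.Load must be a compile-time constant in [-8, 7] per component,
	 // so larger offsets fall back to adding the offset to the coordinate itself below.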

	 if (newOffset.x >= -8 && newOffset.x <= 7 && newOffset.y >= -8 && newOffset.y <= 7)
		  textureColor = SourceTexture.Load(
		   int3(position, MIP),
		   newOffset);
	 else
		  textureColor = SourceTexture.Load(int3(position + newOffset, MIP));
	
	 return textureColor;
}
float4 PSMain(PSIn Input) : SV_Target
{
	 float3 accumulatedColor = 0.0f;
 
	 float accumulatedWeight = 0, weight = 0;
	 [unroll]
	 for (int x = 0; (weight = filter(x)) > MIN_WEIGHT; ++x)
	 {
		  accumulatedWeight += (x != 0) ? (2 * weight) : weight;
	 }
	 [unroll]
	 for (int x = 0; (weight = filter(x)) > MIN_WEIGHT; ++x)
	 {
		  accumulatedColor += weight / accumulatedWeight * sample((int2)Input.ScreenPos, x);
		  if (x != 0)
			   accumulatedColor += weight / accumulatedWeight * sample((int2)Input.ScreenPos, -x);
	 }
	 return float4(accumulatedColor, 1);
}
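
A minimal usage sketch of how this file might be configured (the file name and the use of #include are assumptions; the macros are the ones checked by the #error directives above):

// Hypothetical wrapper that selects a horizontal gaussian pass:
#define FILTER GAUSSIAN
#define STANDARD_DEVIATION 2.0
#define DIRECTION HORIZONTAL
#include "SeparableBlur.hlsl"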

Edited by CryZe, 08 September 2012 - 12:45 PM.


#9 Madhed   Crossbones+   -  Reputation: 2812


Posted 08 September 2012 - 02:09 PM

@CryZe

The OP's initial weights were alright. I stumbled over this too the first time I looked at the code. He is doing 2 samples per loop iteration, so the weights add up to 1 as they should.
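
A quick arithmetic check of that, counting the center tap once and each of the other two taps twice:

0.2270270270 + 2 * 0.3162162162 + 2 * 0.0702702703 = 1.0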

#10 CryZe   Members   -  Reputation: 768


Posted 08 September 2012 - 03:30 PM

He was / is not doing that for the sample at index 0. Both samples at index 1 had higher weights than the center sample, which would not have resulted in a proper gaussian blur.

Edited by CryZe, 08 September 2012 - 03:30 PM.


#11 Madhed   Crossbones+   -  Reputation: 2812


Posted 08 September 2012 - 04:02 PM

You're right. Good catch.

#12 toasterthegamer   Members   -  Reputation: 205


Posted 08 September 2012 - 07:52 PM

One of my friends wrote a good blog post on this issue; you can check it out here:
http://theinstructionlimit.com/gaussian-blur-experiments

Hope this helps! :)

#13 MegaPixel   Members   -  Reputation: 241


Posted 09 September 2012 - 04:11 AM

[Quoted: CryZe's shader code from post #8 above.]

Hi,

I have a few questions about your approach:

1) Why do you use the custom semantic SCREEN_POSITION and not SV_Position, which is effectively the same thing once you receive it as input in the pixel shader?

2) What kind of kernel is the exponential one compared to the standard gaussian, what visual results does it give, and what is it best suited for?

3) Also, do you support a Poisson sampling kernel as well? If yes, what's the best way to express it, and what is it good for (I mean, in what situations)?

4) Do you apply your filter to a downsampled version of your buffer? If yes, by how much, and how does that impact the correctness of the end result? I guess not, because I see you don't sample at the pixel edges... (that should help bilinear sampling when you stretch to fullscreen, right?)

5) Last :D, do you know of any good resource on the web with a list of filters along with their use cases (something like when it's good to use one filter or another, in which cases, and what each looks like, etc.)?

Thanks in advance for any replies.

#14 CryZe   Members   -  Reputation: 768


Posted 09 September 2012 - 05:37 AM

1) Why do you use the custom semantic SCREEN_POSITION and not SV_Position, which is effectively the same thing once you receive it as input in the pixel shader?

Oh, I didn't even know about that, thanks.

2) What kind of kernel is the exponential one compared to the standard gaussian, what visual results does it give, and what is it best suited for?

The exponential kernel is the exponential distribution from probability theory. It's sharper in the center than a gaussian kernel, which is why it's better suited for bloom. In the Unreal Engine 4 Elemental demo they used multiple gaussian kernels of different widths and summed them together to approximate an exponential distribution. The thing is, the exponential kernel is not really separable; that's why they chose to sum multiple gaussians. I implemented it before knowing that it was not separable, which is why it's in my code at all. I still don't know which one to choose for my engine. Here's a visual comparison of all of these filters:
[Image: visual comparison of the filter kernels]

3) Also, do you support a Poisson sampling kernel as well? If yes, what's the best way to express it, and what is it good for (I mean, in what situations)?

No, I can't think of a situation where it would make any sense.

4) Do you apply your filter to a downsampled version of your buffer? If yes, by how much, and how does that impact the correctness of the end result? I guess not, because I see you don't sample at the pixel edges... (that should help bilinear sampling when you stretch to fullscreen, right?)

I point-sample mip 1 and render the result to a texture of half the size. I didn't really do any benchmarks, but it doesn't look much worse. I'm currently implementing a compute shader version with logarithmic runtime; it should be fast enough to run at full resolution.

5) Last, do you know of any good resource on the web with a list of filters along with their use cases (something like when it's good to use one filter or another, in which cases, and what each looks like, etc.)?

I don't know of a good website that shows the different kernels. I came up with the exponential distribution as a filter myself after seeing that Epic Games summed multiple gaussians together, which results in roughly the shape of an exponential distribution; that made me wonder why one doesn't simply implement a filter with an exponential distribution instead of summing gaussians together.

Edited by CryZe, 09 September 2012 - 05:43 AM.


#15 lipsryme   Members   -  Reputation: 1006


Posted 09 September 2012 - 06:29 AM

So to get a good looking bloom, do I have to blur my lighting image several times, or is there something else I have to change?
Because as it is now it's just slightly blurred, but you can't even tell the difference when it's added to the rest of the scene (albedo, ...).

#16 MegaPixel   Members   -  Reputation: 241


Posted 09 September 2012 - 02:04 PM



5) Last, do you know of any good resource on the web with a list of filters along with their use cases (something like when it's good to use one filter or another, in which cases, and what each looks like, etc.)?

I don't know of a good website that shows the different kernels. I came up with the exponential distribution as a filter myself after seeing that Epic Games summed multiple gaussians together, which results in roughly the shape of an exponential distribution; that made me wonder why one doesn't simply implement a filter with an exponential distribution instead of summing gaussians together.


Well, because it's not separable! You said that yourself :). Gaussian blur, on the other hand, is separable.

#17 MegaPixel   Members   -  Reputation: 241


Posted 09 September 2012 - 02:27 PM

So to get a good looking bloom, do I have to blur my lighting image several times, or is there something else I have to change?
Because as it is now it's just slightly blurred, but you can't even tell the difference when it's added to the rest of the scene (albedo, ...).


You blur multiple times until you are satisfied! 3 or 4 passes will give you good results. You do that by ping-ponging, repeatedly blurring the already blurred image. You can also use more or fewer taps, or vary the standard deviation, and see what happens. Effects are not really rocket science; the rule is always tweak, tweak, tweak until it looks good :)
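
As a rough rule of thumb, assuming each pass is a gaussian blur with standard deviation sigma: variances add under repeated convolution, so

sigma_effective = sigma * sqrt(numPasses)   // e.g. 4 passes with sigma = 2 behave like one pass with sigma = 4

which is why a handful of ping-pong passes noticeably widens the blur.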

#18 lipsryme   Members   -  Reputation: 1006


Posted 10 September 2012 - 02:29 PM


So to get a good looking bloom, do I have to blur my lighting image several times, or is there something else I have to change? Because as it is now it's just slightly blurred, but you can't even tell the difference when it's added to the rest of the scene (albedo, ...).


You blur multiple times until you are satisfied! 3 or 4 passes will give you good results. You do that by ping-ponging, repeatedly blurring the already blurred image. You can also use more or fewer taps, or vary the standard deviation, and see what happens. Effects are not really rocket science; the rule is always tweak, tweak, tweak until it looks good.


OK, it's quite a bit better now with 4x blur, but I still seem to be putting it together wrong. What I currently do is: on the last blur pass I return the blurred image + the original lighting image, and then in my deferred composition shader I combine it like so:

   float4 albedo = ToLinear(AlbedoTarget.Sample(LinearTargetSampler, input.UV));
   float4 lighting = LightMapTarget.Sample(LinearTargetSampler, input.UV);
   float AO = SSAOTarget.Sample(LinearTargetSampler, input.UV).r;
   output = albedo * (float4(lighting.rgb, 1.0f) + AO);

The lighting itself does look okay, but when it's combined with the rest it doesn't seem to do anything except give it a blurry border.
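
For reference, a minimal sketch of one common way to composite bloom, assuming the blurred result is kept in its own texture and contains only the bright (thresholded) parts of the lighting; BloomTarget and the bright-pass step are assumptions, not something from the code above:

   float4 albedo   = ToLinear(AlbedoTarget.Sample(LinearTargetSampler, input.UV));
   float4 lighting = LightMapTarget.Sample(LinearTargetSampler, input.UV);
   float3 bloom    = BloomTarget.Sample(LinearTargetSampler, input.UV).rgb; // hypothetical: blurred bright areas only
   float3 color    = albedo.rgb * lighting.rgb; // composed scene (AO handling left out here)
   color += bloom;                              // bloom is added on top of the composed scene
   output = float4(color, 1.0f);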

#19 MegaPixel   Members   -  Reputation: 241


Posted 11 September 2012 - 03:10 AM



OK, it's quite a bit better now with 4x blur, but I still seem to be putting it together wrong. What I currently do is: on the last blur pass I return the blurred image + the original lighting image, and then in my deferred composition shader I combine it like so:

   float4 albedo = ToLinear(AlbedoTarget.Sample(LinearTargetSampler, input.UV));
   float4 lighting = LightMapTarget.Sample(LinearTargetSampler, input.UV);
   float AO = SSAOTarget.Sample(LinearTargetSampler, input.UV).r;
   output = albedo * (float4(lighting.rgb, 1.0f) + AO);

The lighting itself does look okay, but when it's combined with the rest it doesn't seem to do anything except give it a blurry border.


You do blur the AO buffer, right?... The AO term should be part of the ambient part of the lighting equation, so it should be:

output = (albedo * float4(lighting.rgb, 1.0f) ) + AO;

because AO is part of the global illumination interaction, so it should be added as part of the ambient term...

#20 lipsryme   Members   -  Reputation: 1006


Posted 11 September 2012 - 04:32 AM

I always thought the ambient term only has to be added to the diffuse/specular light.
Edit: actually no, if I add it to the albedo * lighting I get a weird merged SSAO (white/grey) and albedo picture.
But anyway, the AO term isn't the problem.

I guess showing it in pictures is easier:

Note: The blurred images already include tonemapping and BG lighting is turned down (0.2f)

Lighting only (blur off): http://cl.ly/image/3x0o1w451W2x
Compose (blur off): http://cl.ly/image/3K022S2a2Y0g

Lighting only (4x blur): http://cl.ly/image/0g0q0z1n0937
Compose (4x blur): http://cl.ly/image/3D2j0K04413Y

As you can also see, the back of the cube is leaking light from the background, which is a problem...

Edited by lipsryme, 11 September 2012 - 04:54 AM.




