Fog as posteffect


8 replies to this topic

#1 Lleran   Members   -  Reputation: 100

Posted 13 February 2012 - 12:42 PM

How can I make this using the depth buffer and the color buffer? I understand the general idea, but I can't implement it well. Any ideas, or GLSL / Cg / HLSL code?


#2 TiagoCosta   Crossbones+   -  Reputation: 1864

Posted 13 February 2012 - 02:01 PM

If you just want simple linear fog, render a fullscreen quad, sample the depth of each pixel, and calculate the amount of fog you want to add. Then sample the color buffer and apply the fog to it.

Texture2D gDepthMap;
Texture2D gColorMap;
SamplerState gTrilinearSampler;

float4 gFogColor;
float  gFogStart;
float  gFogEnd;

struct PS_IN
{
    float4 posH : SV_Position;
    float2 texC : TEXCOORD0;
};

float4 PS(PS_IN pIn) : SV_Target
{
    float depth = gDepthMap.Sample(gTrilinearSampler, pIn.texC).r;

    // Calculate fog (between 0-1); e.g. linear fog, with gFogStart/gFogEnd in the same units as 'depth'
    float fog = saturate((depth - gFogStart) / (gFogEnd - gFogStart));

    float4 color = gColorMap.Sample(gTrilinearSampler, pIn.texC);

    return lerp(color, gFogColor, fog);
}

If you want more advanced fog effects, you can reconstruct the world-space position of each pixel from its depth.
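
As a rough sketch of that reconstruction for distance-based fog (a sketch only, assuming a standard perspective projection; gProjInv and the helper names below are illustrative, not taken from this thread):

float4x4 gProjInv;   // inverse of the camera projection matrix (assumed to be set by the application)

// Rebuild the view-space position of a pixel from its texture coordinate and
// the non-linear value sampled from the hardware depth buffer.
float3 ViewPositionFromDepth(float2 texC, float hardwareDepth)
{
    // Texture coords [0,1] -> NDC [-1,1], with y flipped for D3D conventions
    float2 ndc = float2(texC.x * 2.0f - 1.0f, 1.0f - texC.y * 2.0f);

    float4 viewPos = mul(float4(ndc, hardwareDepth, 1.0f), gProjInv);
    return viewPos.xyz / viewPos.w;   // the divide by w undoes the projection
}

// Radial (distance-based) fog factor from the reconstructed position
float FogFactor(float3 viewPos, float fogStart, float fogEnd)
{
    return saturate((length(viewPos) - fogStart) / (fogEnd - fogStart));
}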
Tiago Costa
Aqua Engine - my DirectX 11 game "engine" - In development

#3 Lleran   Members   -  Reputation: 100

Posted 14 February 2012 - 10:09 AM

Thanks for the reply. Yes, I want to implement simple linear fog first. I know the general idea - I have implemented some post-effects before. My problem is the fog factor calculation. I used the formula f = (fogEnd - d) / (fogEnd - fogStart), but it doesn't work properly. My fragment shader code (Cg):

//-----------------------------------------------------------------------------
struct FS_INPUT
{
    float4 Position : POSITION; 
    float2 TexCoord : TEXCOORD0;
};
//-----------------------------------------------------------------------------
float4 main(FS_INPUT IN,
   uniform sampler2D ColorMap,
   uniform sampler2D DepthMap,
   uniform float4    fogColor,
   uniform float	 fogStart,
   uniform float	 fogEnd  
   ) : COLOR
{
  float4 fragmentColor = f4tex2D(ColorMap, IN.TexCoord);
  float4 fragmentDepth = f4tex2D(DepthMap, IN.TexCoord);

  float fogFactor = (fogEnd - length(fragmentDepth)) / (fogEnd - fogStart);
  fogFactor = clamp(fogFactor, 0.0, 1.0);

  return lerp(fragmentColor, fogColor, fogFactor);
}


#4 ATEFred   Members   -  Reputation: 1001

Posted 14 February 2012 - 10:33 AM

How can I make this using the depth buffer and the color buffer? I understand the general idea, but I can't implement it well. Any ideas, or GLSL / Cg / HLSL code?


One thing to remember when doing this is potential issues with alpha-blended objects / particles / anything that does not write into the depth buffer. There are plenty of ways of getting around it (alpha-blended geometry doing its own fogging and being rendered after the fullscreen pass, a dithered depth buffer as in inferred lighting, etc.), but it will require some thought of course.
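
A minimal sketch of the first of those options (the names ParticlePS, gFogColor, etc. are illustrative, and the view-space distance is assumed to be passed down from the particle's vertex shader):

float4 gFogColor;
float  gFogStart;
float  gFogEnd;

// Alpha-blended particles fog themselves in their own pixel shader and are
// drawn after the fullscreen fog pass, so they are never fogged twice.
float4 ParticlePS(float4 color : COLOR0, float viewDist : DISTANCE) : SV_Target
{
    float fog = saturate((viewDist - gFogStart) / (gFogEnd - gFogStart));
    color.rgb = lerp(color.rgb, gFogColor.rgb, fog);   // fog the color, leave alpha untouched
    return color;
}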

#5 Lleran   Members   -  Reputation: 100

Posted 15 February 2012 - 03:06 AM

One thing to remember when doing this is potential issues with alpha-blended objects / particles / anything that does not write into the depth buffer. There are plenty of ways of getting around it (alpha-blended geometry doing its own fogging and being rendered after the fullscreen pass, a dithered depth buffer as in inferred lighting, etc.), but it will require some thought of course.

Yes, you're right. I want to use fog not as a weather element, but only to cover far objects in fog (e.g. Morrowind). I'm talking about outdoor scenes now. In this case I can neglect some aspects like particles and other transparent objects - only the nearest objects are drawn. Well... I think so.

#6 samoth   Crossbones+   -  Reputation: 4505

Posted 15 February 2012 - 05:20 AM

So basically you want a poor man's atmospheric attenuation to hide geometry popping into existence at the far plane. Is there an urgent reason to do this as a post-process, which burns fillrate for nothing and means one extra pipeline stall?

You could just as well do this kind of thing by interpolating the diffuse color with your fog color according to vertex z (possible in both forward and deferred shading), and by reducing the lighting intensity of objects.
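
A minimal sketch of that per-vertex variant (forward shading, left-handed view space where z grows with distance; the matrix and constant names are assumptions, not from this thread):

float4x4 gWorldView;
float4x4 gWorldViewProj;
float4   gFogColor;
float    gFogStart;
float    gFogEnd;

struct VS_OUT
{
    float4 posH : SV_Position;
    float  fog  : FOGFACTOR;    // interpolated linear fog factor
};

VS_OUT VS(float4 posL : POSITION)
{
    VS_OUT vOut;
    vOut.posH = mul(posL, gWorldViewProj);
    float viewDepth = mul(posL, gWorldView).z;                       // view-space z of the vertex
    vOut.fog = saturate((viewDepth - gFogStart) / (gFogEnd - gFogStart));
    return vOut;
}

float4 PS(VS_OUT pIn) : SV_Target
{
    float4 lit = float4(0.5f, 0.5f, 0.5f, 1.0f);                     // stand-in for the usual lighting/texturing result
    return float4(lerp(lit.rgb, gFogColor.rgb, pIn.fog), lit.a);
}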

You could probably even do a believable approximation without any extra logic at all, simply by reducing saturation and luminosity in the diffuse texture mipmaps.
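
As an illustration of the per-texel transform that approach implies (applied offline while generating each mip level; the function name and the tuning constants are assumptions, not from this thread):

// Pull a texel of mip level N toward a darker, desaturated value so that
// heavily minified (i.e. distant) surfaces already look attenuated.
float3 AttenuateMipTexel(float3 texel, uint mipLevel)
{
    float k = saturate(0.15f * mipLevel);                        // strength grows with the mip level (tuning value)
    float luma = dot(texel, float3(0.299f, 0.587f, 0.114f));     // perceptual luminance
    float3 desaturated = lerp(texel, float3(luma, luma, luma), k);   // reduce saturation
    return desaturated * (1.0f - 0.5f * k);                      // reduce luminosity
}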

#7 InvalidPointer   Members   -  Reputation: 1365

Posted 15 February 2012 - 12:19 PM

You could probably even do a believable approximation without any extra logic at all, simply by reducing saturation and luminosity in the diffuse texture mipmaps.


Interesting line of thinking, but what about nearby triangles at oblique angles?

EDIT: I know Super Mario Sunshine created an ingenious water rendering hack by doing something like this for the water surface, but there it wasn't so much of a problem-- in fact, the shimmering was probably a desirable artifact.
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.

#8 samoth   Crossbones+   -  Reputation: 4505

Posted 16 February 2012 - 03:48 AM

Interesting line of thinking, but what about nearby triangles at oblique angles?

Since you would only do the effect on the rather small mipmaps, it would mostly (only?) affect extremely small triangles at a very tessellated object's silhouette. Other "typically oblique" triangles (house walls, floor) aren't normally so small.
Though I'm not sure if, and how pronounced, you might see saturation changes on the floor when rotating the camera. Maybe you won't notice at all, and maybe you'll think you should cut down on the LSD. Also, I am not sure to what extent anisotropic sampling might make the effect more visible, e.g. on terrain (or, conversely, hide it by pulling in a lot of texels from a higher detail level?). One would really have to try :)

All in all, it would mostly mean that tessellated object silhouettes might appear darker and less saturated. Whether that's disturbing, I can't tell. Again, one would have to try. Darker, pronounced silhouettes might look totally wrong, might not be noticeable unless someone tells you, might look surreal but interesting, or might even give a "somewhat fake AO" impression.

#9 samoth   Crossbones+   -  Reputation: 4505

Posted 16 February 2012 - 04:17 AM

A few more thoughts: On this landscape photo, which I found via Google, you can see some trees with moderate atmospheric attenuation in the left half, and with some gentle as well as clearly visible attenuation in the right half. Compared to the more-or-less-foreground trees in the middle, these are about 1/12 and 1/30 the size (I'm assuming those trees are more or less the same size in reality), which, since each mip level halves the on-screen size, would translate to somewhere between 4 and 5 mip levels (log2 12 ≈ 3.6, log2 30 ≈ 4.9; pulling a marquee over some random photo is not a terribly accurate measurement).

So, for the effect to be clearly visible where you don't want it (oblique nearby triangles), that would be triangles which use 4-5 mip levels above normal. Let's say 4 to be on the conservative side. Going by gut feeling, 4 mip levels across the same object sounds like a lot to me.



