larspensjo

Using adaptive distance fog color


I want the distance fog to have a whiteness that depends on the surrounding luminance. How can this be implemented efficiently?

 

There is a light map with luminance information for every pixel. However, this map has the same size as the display, so some kind of blurring or averaging is needed. I think downsampling the light map to something really small, maybe 8x8 pixels, would make a good input. My worry is efficiency: though I haven't measured it, I suspect most approaches will be costly.

 

A very simple algorithm would be to sample 8x8 pixels from the original light map. But that would give a high degree of randomness, and flickering effects when the player turns around. A slightly better solution would be to read 16x16 pixels and manually downsample to 4x4. However, I would prefer to avoid reading pixel data back to the CPU and then transferring it to the GPU again.

 

Any suggestions on how best to proceed?


I tried glReadPixels() to read out the values. It turned out to be prohibitively slow, taking about 10 ms regardless of the number of calls to glReadPixels. I think the problem is that the call has to wait for the pipeline to finish.
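For reference, the readback looked roughly like this (an illustrative sketch only; the framebuffer setup is omitted and the names are made up). The call blocks until every pending draw that writes to the framebuffer has completed, which is where the fixed ~10 ms goes:

```cpp
#include <GL/glew.h>
#include <vector>

// Synchronous readback of a GL_RED light map (illustrative sketch).
// glReadPixels cannot return until the GPU has finished all rendering
// that targets this framebuffer, so it acts as a full pipeline flush.
std::vector<GLubyte> readLuminance(GLuint fbo, int width, int height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);          // 1-byte rows, no padding
    std::vector<GLubyte> pixels(width * height);  // one byte per GL_RED texel
    glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;
}
```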


Why not do the downsampling on the GPU instead of CPU?

 

Yes, this seems to be the best way.

 

In my first attempt (based on deferred shading), I did the following (a sketch follows the list):

  1. Render parts of the scene to a GL_RED texture at lower resolution than the screen (a factor of 4 in each dimension), saving only the weighted luminosity (luminance = fragColor.r*0.3 + fragColor.g*0.6 + fragColor.b*0.1).
  2. Apply a Gaussian blur to this light map.
  3. Downsample using glGenerateMipmap().
  4. Sample the result at a specific LOD using textureLod() to set the white point of the distance fog and local fog.
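A sketch of how these steps could fit together. This is illustrative only: the texture and uniform names are made up, the FBO plumbing and the blur pass (step 2) are omitted, the luminance pass is written here as a full-screen pass over the lit scene rather than during scene rendering, and the fog model is a simple exponential:

```cpp
#include <GL/glew.h>

// Step 1: fragment shader for the low-resolution luminance pass.
// Writes the weighted luminosity into the GL_RED render target.
const char* luminanceFrag = R"glsl(
    #version 330 core
    uniform sampler2D sceneColor;   // lit scene from the deferred pass
    in vec2 uv;
    out float luminance;
    void main() {
        vec3 c = texture(sceneColor, uv).rgb;
        luminance = dot(c, vec3(0.3, 0.6, 0.1));
    }
)glsl";

// Step 3: after the Gaussian blur (step 2, omitted here), build the
// mipmap chain so that coarse LODs hold wide-area luminance averages.
void downsampleLightMap(GLuint lightMapTex)
{
    glBindTexture(GL_TEXTURE_2D, lightMapTex);
    glGenerateMipmap(GL_TEXTURE_2D);
}

// Step 4: fog pass fetching a coarse LOD to set the fog white point.
const char* fogFrag = R"glsl(
    #version 330 core
    uniform sampler2D sceneColor;   // lit scene
    uniform sampler2D lightMap;     // blurred + mipmapped luminance
    uniform sampler2D linearDepth;  // per-pixel distance (assumed available)
    uniform float coarseLod;        // LOD whose texels form the zones
    uniform float fogDensity;
    in vec2 uv;
    out vec4 fragColor;
    void main() {
        float whiteness = textureLod(lightMap, uv, coarseLod).r;
        vec3 fogColor = vec3(whiteness);
        float dist = texture(linearDepth, uv).r;
        float fogFactor = 1.0 - exp(-fogDensity * dist);
        fragColor = vec4(mix(texture(sceneColor, uv).rgb, fogColor, fogFactor), 1.0);
    }
)glsl";
```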

This gives a nice average light map, but there are some artifacts. For example, using a LOD that divides the screen into 3x2 pixels from the downsampled bitmap gives 3x2 zones of adapted whiteness in the fog. However, as these zones are positioned relative to the screen, not the scene, the average luminosity in each zone gradually changes as the camera turns. For example, slowly turning the camera up toward the sky makes the pixels lighter in a zone that is transitioning from ground to sky. The fog in that zone grows lighter until it is moved out of the zone and into another.

 

I think it should be possible to add an offset (in x and y) to each zone, to make the zones stationary under camera rotation. They would still have to move with camera translation. The picture below shows local fog, but rendered with GL_NEAREST_MIPMAP_NEAREST instead of GL_LINEAR_MIPMAP_NEAREST to make the zones obvious.
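A rough sketch of the offset idea (untested, and just my guess at how it could work): it assumes a simple proportional mapping from rotation angle to texture offset, ignoring perspective nonlinearity, and the sign depends on coordinate conventions. The offset would be added to the lookup coordinate in the fog shader, e.g. textureLod(lightMap, uv + offset, coarseLod), with the light map's wrap mode set to GL_REPEAT:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Map camera rotation to a texture-coordinate offset so the zone grid
// stays (approximately) fixed relative to world directions. A rotation
// of one full field of view shifts the screen content by one full
// texture width/height, so we take the fractional part to keep the
// offset in [0, 1). yaw/pitch and fovX/fovY are in radians.
Vec2 fogZoneOffset(float yaw, float pitch, float fovX, float fovY)
{
    Vec2 offset;
    offset.x = yaw / fovX - std::floor(yaw / fovX);
    offset.y = pitch / fovY - std::floor(pitch / fovY);
    return offset;
}
```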

 

[Image: AdapativeFog1_2013-02-07.png]

