Using adaptive distance fog color



#1 larspensjo   Members   -  Reputation: 1540


Posted 04 February 2013 - 03:38 AM

I want the distance fog to have a whiteness that depends on the surrounding luminance. How can this be implemented efficiently?

 

There is a light map with luminance information for every pixel. However, this map has the same size as the display, so some kind of blurring is needed. I think downsampling the light map to something really small, maybe 8x8 pixels, would make a good input. However, I worry about efficiency here; though I haven't measured it, I suspect most downsampling algorithms would be costly.

 

A very simple algorithm would be to sample 8x8 pixels from the original light map. But that would give a high degree of randomness, and flickering effects when the player turns around. A slightly better solution would be to read 16x16 pixels and manually downsample them to 4x4. However, I would prefer not having to read pixel data back to the CPU and then transfer it to the GPU again.
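To be concrete, by manually downsampling I mean something like this on the CPU (sketch only, names are just examples), after reading the 16x16 block back:

/* Box-filter a 16x16 luminance block down to 4x4 by averaging each 4x4 sub-block. */
float src[16][16];   /* filled from the light map, e.g. via glReadPixels */
float dst[4][4];

for (int by = 0; by < 4; ++by) {
    for (int bx = 0; bx < 4; ++bx) {
        float sum = 0.0f;
        for (int y = 0; y < 4; ++y)
            for (int x = 0; x < 4; ++x)
                sum += src[by * 4 + y][bx * 4 + x];
        dst[by][bx] = sum / 16.0f;
    }
}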

 

Any suggestions on how best to proceed?


Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/


#2 larspensjo   Members   -  Reputation: 1540


Posted 04 February 2013 - 12:17 PM

I did some glReadPixels() calls to read out values. It turned out to be prohibitively slow, taking about 10 ms regardless of the number of calls to glReadPixels(). I think the problem is that the driver has to wait for the whole pipeline to finish before the pixels can be read back.
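For the record, a standard way to hide that stall (had I continued with read-back) would be an asynchronous transfer through a pixel buffer object, mapping the buffer a frame later. A sketch, with sizes and formats just as examples:

/* Asynchronous read-back sketch: glReadPixels into a bound GL_PIXEL_PACK_BUFFER
 * returns immediately, and the data is mapped the following frame, so the
 * pipeline does not have to be drained. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, 16 * 16 * sizeof(GLfloat), NULL, GL_STREAM_READ);

/* Frame N: start the copy; the last argument is an offset into the PBO. */
glReadPixels(0, 0, 16, 16, GL_RED, GL_FLOAT, (void *)0);

/* Frame N+1: the transfer should have finished, so mapping is cheap. */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
const GLfloat *lum = (const GLfloat *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (lum) {
    /* ... average the 16x16 block on the CPU ... */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);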


Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#3 Hodgman   Moderators   -  Reputation: 30387


Posted 04 February 2013 - 03:17 PM

Why not do the downsampling on the GPU instead of CPU?

#4 larspensjo   Members   -  Reputation: 1540


Posted 07 February 2013 - 01:08 AM

Why not do the downsampling on the GPU instead of CPU?

 

Yes, this seems to be the best way.

 

In my first attempt (based on a deferred shader), I did:

  1. Render parts of the scene to a GL_RED texture of lower resolution than the screen (a factor of 4 in each dimension), saving only the weighted luminance (luminance = fragColor.r*0.3 + fragColor.g*0.6 + fragColor.b*0.1).
  2. Apply a Gaussian blur to this light map.
  3. Downsample using glGenerateMipmap.
  4. Use the resulting texture at a specific LOD, via textureLod(), to set the white point of the distance fog and local fog (a rough sketch of these steps follows below).
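For anyone finding this later, here is a minimal sketch of steps 1, 3 and 4, assuming a deferred pipeline; the names are illustrative (not from the actual Ephenation code) and the Gaussian blur pass is left out. Step 1 is a fragment shader writing into the low-resolution GL_RED target:

// Step 1 (sketch): write weighted luminance into the low-resolution GL_RED target.
#version 330
uniform sampler2D uSceneColor;   // lit scene colour from the deferred pass
in vec2 vTexCoord;
out float oLuminance;            // bound to the GL_RED texture

void main()
{
    vec3 c = texture(uSceneColor, vTexCoord).rgb;
    oLuminance = dot(c, vec3(0.3, 0.6, 0.1));   // same weights as above
}

After the blur pass, step 3 is just glGenerateMipmap(GL_TEXTURE_2D) on that texture, and step 4 reads a coarse mip level in the fog shader. Exactly how the luminance maps to whiteness is a matter of taste; here it simply blends the base fog colour toward white:

// Step 4 (sketch): sample a coarse mip and use it as the fog white point.
uniform sampler2D uLuminanceMips;   // the blurred GL_RED texture, with mipmaps
uniform float uLod;                 // chosen so this mip level is only a few texels

vec3 adaptiveFog(vec3 baseFogColor, vec2 screenUv)
{
    float localLum = textureLod(uLuminanceMips, screenUv, uLod).r;
    return mix(baseFogColor, vec3(1.0), clamp(localLum, 0.0, 1.0));
}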

It gives a nice average light map, but there are some artifacts. For example, using a LOD that divides the screen into 3x2 pixels from the downsampled bitmap gives 3x2 zones of adapted whiteness on the fog. However, as these 3x2 zones are positioned relative to the screen, not the scene, the average luminosity in each zone will gradually change as the camera turns. For example, slowly turning the camera up toward the sky will make pixels lighter in the zone that is transitioning from ground to sky. That means the fog in this zone will grow lighter, until the fog moves out of that zone and into another.

 

I think it should be possible to add an offset (in x and y) to each zone, to keep the zones stationary under camera rotation. However, the zones will still have to move with camera translation. The picture below shows local fog, but rendered with GL_NEAREST_MIPMAP_NEAREST instead of GL_LINEAR_MIPMAP_NEAREST to make the zones obvious.
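Something along these lines is what I have in mind (untested sketch only). The uniform uZoneOffset would be computed on the CPU from the camera's yaw and pitch, expressed as a fraction of the horizontal and vertical field of view, so that the sampling grid follows the view direction instead of the screen:

// Sketch only: shift the lookup so the zones stay fixed under camera rotation.
// uZoneOffset = vec2(yaw / horizontalFov, pitch / verticalFov), updated per frame.
uniform sampler2D uLuminanceMips;
uniform vec2 uZoneOffset;
uniform float uLod;

float zoneLuminance(vec2 screenUv)
{
    vec2 uv = fract(screenUv + uZoneOffset);   // wrap-around at the edges is ignored here
    return textureLod(uLuminanceMips, uv, uLod).r;
}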

 

[Image: AdapativeFog1_2013-02-07.png]


Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/



