Chris_F

Environment maps and visibility


I'm working with diffuse and specular irradiance environment maps, and the results look great some of the time. Unfortunately, on some types of objects the lack of any visibility calculation is overwhelmingly apparent and completely destroys the effect. It's most obvious on complex objects when the environment has strong contrast, e.g. in my image you can see the head with a very bright light source on the opposite side.

 

What are some ways in which this can be handled?

You could look into bent normals or bent cones for sampling the environment, either pre-baked into vertices/textures or computed in screen space. These replace the actual surface normal with a fudged version (roughly, the average unoccluded direction) to produce more plausible lighting.
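For the baked variant, the bent normal at a sample point is essentially the average unoccluded direction over the hemisphere. Here's a minimal Monte-Carlo sketch in Python, assuming some is_occluded(direction) hook into your ray tracer (hypothetical name):

```python
import numpy as np

def bake_bent_normal(is_occluded, normal, n_samples=256, rng=None):
    """Bake a bent normal (plus a scalar AO term) at one vertex/texel.

    is_occluded: callable(direction) -> bool, e.g. a ray cast against the
    mesh from this sample point (hypothetical hook, plug in your tracer).
    normal: unit geometric normal at the sample point.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = np.zeros(3)
    unoccluded = 0
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)            # uniform direction on the sphere
        if np.dot(d, normal) <= 0.0:      # fold onto the upper hemisphere
            d = -d
        if not is_occluded(d):
            total += d
            unoccluded += 1
    if unoccluded == 0:
        return normal, 0.0                # fully occluded: fall back to n
    return total / np.linalg.norm(total), unoccluded / n_samples
```

At shading time you then look up the (pre-convolved) env-map with the bent normal instead of the surface normal, optionally scaling by the AO term.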

You could also try baking a visibility map per vertex/texel, stored in a spherical-harmonics basis (or similar), and then using the visibility term to modulate the env-map. This won't be strictly correct when using pre-convolved env-maps, but it might be better than nothing...
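To make the modulation concrete: if you project both the environment radiance and the (cosine-weighted) visibility into the same SH basis, then by orthonormality the sphere integral of the product of the two band-limited functions reduces to a dot product of coefficient vectors. A sketch, assuming a 9-coefficient (band-2) basis:

```python
import numpy as np

def modulated_diffuse(env_sh, vis_sh):
    """Occluded diffuse term from SH coefficients.

    env_sh: (3, 9) array, RGB environment radiance projected into 9 SH
            coefficients (band 2).
    vis_sh: (9,) array, cosine-weighted visibility baked at this
            vertex/texel (see the baking sketch further down the thread).
    Orthonormality of the SH basis turns the integral of the product of
    the two band-limited functions into a plain dot product.
    """
    return env_sh @ vis_sh    # one dot product per color channel
```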


I think I got similar artifacts when using env-maps. Here's my journal entry about how I handled it in my engine; it might be helpful for you.

 

I don't think that really helps in my case.

 

 


 

I generated a bent normal map in xNormal, and it made nearly no difference. I'd like to know more about using SH visibility maps, but I haven't found much material on it.

 

You could approximate the head with a sphere in the shader and compute a soft occlusion term from it. It's not perfect, but it could look fine in most cases, while being fast, needing minimal storage, and requiring no precomputation.
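For reference, the analytic term for a single proxy sphere can be as small as this; a rough sketch that treats the sphere's solid angle as (r/d)^2 and weights it by the cosine to the sphere, which is an approximation rather than an exact visibility integral:

```python
import numpy as np

def sphere_occlusion(pos, normal, center, radius):
    """Soft occlusion of a shading point by a proxy sphere.

    Returns a multiplier in [0, 1] to scale the env-map contribution by.
    Assumes the shading point lies outside the sphere.
    """
    to_sphere = center - pos
    dist = np.linalg.norm(to_sphere)
    cos_term = max(0.0, float(np.dot(normal, to_sphere)) / dist)
    occlusion = cos_term * (radius / dist) ** 2   # ~solid angle * cosine
    return 1.0 - min(1.0, occlusion)
```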

 

This head is just an example. I need a solution that works for all objects, some much more complicated than a head.
 


OK, I thought that was the use case.

In that case, I'd suggest tracing in screen space for occlusion. The rim-lighting effect you're getting usually means occlusion is missing from surfaces further away into the screen. You could trace rays and test for occlusion (depth-buffer z closer than the ray depth), similar to screen-space reflections.
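A minimal CPU-side sketch of that march in Python, with a numpy array standing in for the depth buffer; the step count and the thickness heuristic are assumptions:

```python
import numpy as np

def screen_space_occluded(depth, start_px, start_z, step_px, step_z,
                          n_steps=16, thickness=0.05):
    """March a ray through screen space and test it against stored depth.

    depth: (H, W) array of linear view-space depths (stand-in for the
           depth buffer). start_px/step_px are pixel-space position and
           per-step offset; start_z/step_z track the ray's view depth.
    Returns True when the stored depth is closer than the ray depth
    (within `thickness`, to avoid counting surfaces far behind the ray).
    """
    h, w = depth.shape
    px = np.asarray(start_px, dtype=float)
    z = start_z
    for _ in range(n_steps):
        px = px + step_px
        z += step_z
        ix, iy = int(px[0]), int(px[1])
        if not (0 <= ix < w and 0 <= iy < h):
            return False          # ray left the screen: assume unoccluded
        if depth[iy, ix] < z and z - depth[iy, ix] < thickness:
            return True           # occluder in front of the ray
    return False
```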
 


 

 

I'd love to try that out, but I'm using a forward-shaded engine (closed), so I don't have access to the full scene's depth when shading objects.

In that case, there are still some options -
• You can add a z-pre-pass (which is often a good idea in forward renderers anyway), which will give you full scene depth when shading.
• Or, in your forward shading pass, you can output to 2 render targets, with your highlights going to the 2nd one. Then, in a post-processing step, you can add the highlights back in based on depth (which you now have, because forward shading is complete) -- see the sketch below.
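A toy illustration of the second option's final composite, with numpy arrays standing in for the render targets; the occlusion term could come from something like the depth march sketched earlier, and all names here are assumptions:

```python
import numpy as np

def composite_highlights(base, highlights, occlusion):
    """Post-process step: add the env-map highlights (written to the 2nd
    render target during forward shading) back onto the base image,
    attenuated per pixel by an occlusion term computed from the
    now-complete depth buffer.

    base, highlights: (H, W, 3) arrays; occlusion: (H, W) array in [0, 1].
    """
    return base + occlusion[..., None] * highlights
```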

(edit) oh, does (closed) mean you can't edit the render pipeline at all?




 

Yup. This is UDK I'm working in at the moment. As far as I know it doesn't implement a z-pre-pass, and even if it does, I don't believe I can get access to the depth buffer while rendering opaque objects.


With spherical harmonics you can approximate essentially any function defined over a sphere. Normally people use them to approximate lighting by integrating radiance in every direction, but you can also approximate visibility. Basically, you need a set of sample points on your mesh (they can be vertices or lightmap texels), and then at each one you evaluate visibility in every direction over the hemisphere surrounding that sample point's normal (a ray tracer is good for this, but you can also rasterize a hemicube).
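A minimal ray-traced version of that bake in Python; is_occluded(direction) is a hypothetical hook into whatever ray tracer you use, and the projection is cosine-weighted so that, at shading time, a dot product against the environment's SH coefficients (as in the earlier snippet) directly yields the occluded diffuse term:

```python
import numpy as np

def sh_basis_l2(d):
    """Real spherical-harmonics basis up to band 2 (9 values), d = unit dir."""
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def bake_sh_visibility(is_occluded, normal, n_samples=1024, rng=None):
    """Project cosine-weighted visibility at one sample point into SH.

    Uniform sphere sampling with pdf 1/(4*pi), hence the 4*pi/N weight on
    the Monte-Carlo estimate. Run this per vertex or per lightmap texel.
    """
    rng = np.random.default_rng() if rng is None else rng
    coeffs = np.zeros(9)
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                 # uniform direction on sphere
        cos_term = max(0.0, float(np.dot(d, normal)))
        if cos_term > 0.0 and not is_occluded(d):
            coeffs += cos_term * sh_basis_l2(d)
    return coeffs * (4.0 * np.pi / n_samples)
```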

