

Environment maps and visibility


9 replies to this topic

#1 Chris_F   Members   -  Reputation: 2438


Posted 29 January 2013 - 11:20 PM

I'm working with diffuse and specular irradiance environment maps, and the results look great some of the time. Unfortunately, on some types of objects the lack of a visibility calculation is overwhelmingly apparent and completely destroys the effect. It's especially noticeable on complex objects when the environment has strong contrast, e.g. in my image you can see the head with a very bright light source on the opposite side.

 

What are some ways in which this can be handled?


Edited by Chris_F, 29 January 2013 - 11:33 PM.



#2 Ashaman73   Crossbones+   -  Reputation: 7837


Posted 30 January 2013 - 12:31 AM

I think I got similar artifacts when using env-maps. Here's my journal entry about how I handled it in my engine; it might be helpful for you.



#3 Hodgman   Moderators   -  Reputation: 31100


Posted 30 January 2013 - 01:08 AM

You could look into bent normals or bent cones for sampling the environment, either pre-baked into vertices/textures, or a screen-space version. These replace the actual surface normal with a fudged version to produce more feasible lighting.

You could also try baking a visibility map per vertex/texel, stored in a spherical-harmonics basis (or similar), and then using the visibility term to modulate the env-map. This won't be correct if using pre-convolved env-maps, but might be better than nothing...
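The bent-normal idea above can be sketched offline like this (a minimal illustrative Python version, not any engine's actual baking code; the `visible` predicate is a hypothetical stand-in for a real ray cast against the mesh):

```python
import math

def bent_normal(normal, sample_dirs, visible):
    """Return the normalized average of the unoccluded directions in the
    hemisphere around `normal` -- the classic "bent normal"."""
    sx = sy = sz = 0.0
    for d in sample_dirs:
        # Only consider directions in the hemisphere above the surface.
        if d[0]*normal[0] + d[1]*normal[1] + d[2]*normal[2] <= 0.0:
            continue
        if visible(d):
            sx += d[0]; sy += d[1]; sz += d[2]
    length = math.sqrt(sx*sx + sy*sy + sz*sz)
    if length == 0.0:
        return normal  # fully occluded: fall back to the geometric normal
    return (sx/length, sy/length, sz/length)
```

At runtime you would then sample the (pre-convolved) env-map with the bent normal instead of the surface normal, so the lookup leans toward the open part of the hemisphere.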

#4 Krypt0n   Crossbones+   -  Reputation: 2606


Posted 30 January 2013 - 03:17 AM

You could approximate the head with a sphere in the shader and calculate a soft occlusion from it. Not perfect, but it could look quite fine in most cases, while being fast, with minimal storage and no precomputation.
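A minimal sketch of that sphere-occluder idea (in Python for illustration; in practice this is a few lines of shader code). It uses the widely known analytic approximation occ ≈ max(dot(n, dir), 0) · r²/d², i.e. the sphere's solid angle weighted by how much it faces the surface:

```python
import math

def sphere_occlusion(p, n, center, radius):
    """Approximate cosine-weighted occlusion of the environment at point `p`
    (with unit normal `n`) by a single sphere occluder.
    Returns a value in [0, 1] to subtract from the ambient/env term."""
    dx, dy, dz = center[0] - p[0], center[1] - p[1], center[2] - p[2]
    d2 = dx*dx + dy*dy + dz*dz          # squared distance to sphere center
    d = math.sqrt(d2)
    cos_theta = (n[0]*dx + n[1]*dy + n[2]*dz) / d
    return max(cos_theta, 0.0) * (radius * radius) / d2
```

For example, a unit sphere two units away, directly along the normal, occludes about a quarter of the cosine-weighted hemisphere; a sphere behind the surface occludes nothing.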



#5 Chris_F   Members   -  Reputation: 2438


Posted 30 January 2013 - 05:13 AM

I think I got similar artifacts when using env-maps. Here's my journal entry about how I handled it in my engine; it might be helpful for you.

 

I don't think that really helps in my case.

 

 

You could look into bent normals or bent cones for sampling the environment, either pre-baked into vertices/textures, or a screen-space version. These replace the actual surface normal with a fudged version to produce more feasible lighting.

You could also try baking a visibility map per vertex/texel, stored in a spherical-harmonics basis (or similar), and then using the visibility term to modulate the env-map. This won't be correct if using pre-convolved env-maps, but might be better than nothing...

 

I generated a bent normal map in xNormal, and it made nearly no difference. I'd like to know more about using SH visibility maps, but I haven't found much material on it.

 

You could approximate the head with a sphere in the shader and calculate a soft occlusion from it. Not perfect, but it could look quite fine in most cases, while being fast, with minimal storage and no precomputation.

 

This head is just an example. I would need a solution that would work for all objects, some much more complicated than a head.
 



#6 Krypt0n   Crossbones+   -  Reputation: 2606


Posted 30 January 2013 - 06:20 AM

OK, I thought that was your use case.

 

In that case, I'd suggest tracing in screen space for occlusion; the rim-lighting artifact you get usually comes from missing occlusion by surfaces further away into the screen. You could trace rays and check for occlusion (depth-buffer z closer than the ray depth), similar to screen-space reflections.
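A toy version of that screen-space occlusion march (Python for illustration; the buffer layout, texel-space stepping, and `bias` are assumptions for the sketch, not any particular engine's implementation):

```python
def trace_occlusion(depth, x, y, z, dx, dy, dz, steps=16, bias=1e-3):
    """March a ray through a depth buffer and report a hit when the stored
    depth is closer than the ray, as in screen-space reflections.
    `depth` is a 2D list indexed [y][x]; x, y are in texel units and
    z grows away from the camera."""
    h, w = len(depth), len(depth[0])
    for _ in range(steps):
        x += dx; y += dy; z += dz
        xi, yi = int(x), int(y)
        if not (0 <= xi < w and 0 <= yi < h):
            return False            # ray left the screen: assume unoccluded
        if depth[yi][xi] < z - bias:
            return True             # a scene surface lies in front of the ray
    return False
```

Casting a handful of such rays over the hemisphere and averaging the hits gives a directional occlusion term you can use to attenuate the env-map contribution.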
 



#7 Chris_F   Members   -  Reputation: 2438


Posted 30 January 2013 - 07:19 AM

OK, I thought that was your use case.

 

In that case, I'd suggest tracing in screen space for occlusion; the rim-lighting artifact you get usually comes from missing occlusion by surfaces further away into the screen. You could trace rays and check for occlusion (depth-buffer z closer than the ray depth), similar to screen-space reflections.
 

 

I'd love to try that out, but I'm using a forward-shaded engine (closed), so I don't have access to the full scene's depth when shading objects.



#8 Hodgman   Moderators   -  Reputation: 31100


Posted 30 January 2013 - 07:32 AM

In that case, there are still some options:
• You can add a z-pre-pass (which is often a good idea in forward renderers anyway), which will give you full scene depth when shading.
• Or, in your forward shading pass, you can output to two render targets, with your highlights going to the second one. Then, in a post-processing step, you can add the highlights back in based on depth (which you now have, because forward shading is complete).

(edit) oh, does (closed) mean you can't edit the render pipeline at all?

#9 Chris_F   Members   -  Reputation: 2438


Posted 30 January 2013 - 07:41 AM


(edit) oh, does (closed) mean you can't edit the render pipeline at all?

 

Yup. It's UDK that I'm working in at the moment. As far as I know, it doesn't implement a z-pre-pass, and even if it did, I don't believe I could get access to the depth buffer while rendering opaque objects.



#10 MJP   Moderators   -  Reputation: 11585


Posted 30 January 2013 - 01:47 PM

With spherical harmonics you can approximate essentially any function defined over a sphere. Normally people approximate lighting by integrating radiance in every direction, but you can also approximate visibility. Basically, you just need a bunch of sample points over your mesh (they can be vertices or lightmap texels), and then you evaluate visibility in every direction over the hemisphere surrounding each sample point's normal (a ray tracer is good for this, but you can also rasterize a hemicube).
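A minimal sketch of that SH projection, assuming bands 0–1 (4 coefficients) of the real SH basis and Monte Carlo integration over uniform sphere samples. This is illustrative Python, not MJP's actual tooling; `visible` is a hypothetical ray-cast predicate supplied by the baker:

```python
import math
import random

def sh_basis(d):
    """Real spherical-harmonic basis, bands 0-1, for unit direction d."""
    x, y, z = d
    return (0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x)

def uniform_sphere(rng):
    """Uniformly distributed unit direction on the sphere."""
    z = 1.0 - 2.0 * rng.random()
    phi = 2.0 * math.pi * rng.random()
    s = math.sqrt(max(0.0, 1.0 - z * z))
    return (s * math.cos(phi), s * math.sin(phi), z)

def project_visibility(visible, n_samples=20000, seed=1):
    """Monte Carlo projection of a 0/1 visibility function onto SH:
    c_lm = integral of V(w) * Y_lm(w) dw ~= (4*pi/N) * sum V(w_i) Y_lm(w_i)."""
    rng = random.Random(seed)
    coeffs = [0.0] * 4
    for _ in range(n_samples):
        d = uniform_sphere(rng)
        if visible(d):
            for i, y in enumerate(sh_basis(d)):
                coeffs[i] += y
    scale = 4.0 * math.pi / n_samples
    return [c * scale for c in coeffs]

def eval_sh(coeffs, d):
    """Reconstruct the projected function in direction d."""
    return sum(c * y for c, y in zip(coeffs, sh_basis(d)))
```

The baked per-vertex/per-texel coefficients can then modulate the env-map lookup at runtime; with more bands you capture sharper occlusion, at the cost of more storage per sample point.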





