maxest

[Deferred Shading] Check whether a camera is inside or outside a volume


Many techniques like deferred shading can be nicely optimized (stencil culling etc.) if we can check whether the camera is inside or outside some volume, say, a sphere volume for a point light. We can easily check whether a point (the camera) is inside or outside a sphere, but the problem is that the sphere geometry used for stencil masking is coarser than the idealized sphere against which we test the camera's position. See the attached picture. The camera's position is the red point, and the polygonal shape inside the sphere is the approximate mesh of the sphere.
In that case we can experience artefacts in lighting: the camera is classified as inside the volume, so only the back faces of the sphere mesh are rendered, even though in this case the front faces should be rendered as well. How would you overcome this problem?

Switching to backfaces when the camera is slightly outside the light volume won't cause visual artifacts, as long as the light isn't so big that it intersects the camera's far clipping plane (in which case you should just fall back to using a fullscreen quad). You might get slightly worse performance than if you stayed with front faces + stenciling, but in practice probably not, since nearly every pixel of the front faces is going to pass the depth test when the volume is that close to the camera. So there really isn't much of an issue unless you have really big lights that can intersect both the near and far clipping planes, which is usually rare.

Oh and another tip: a point-in-sphere test doesn't really work for this, since you need to account for the camera's near clipping plane.
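
A minimal sketch of such a conservative test (illustrative C++-style pseudocode in the spirit of the snippet later in this thread; the helper names and the padding scheme are assumptions, not actual engine code). The idea is to treat the camera as inside whenever the near plane's corners could possibly reach the volume:

#include <cmath>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Distance from the eye to a corner of the near plane (fovY in radians).
float nearPlaneCornerDistance(float nearDist, float fovY, float aspect)
{
    const float halfH = nearDist * std::tan(0.5f * fovY);
    const float halfW = halfH * aspect;
    return std::sqrt(nearDist * nearDist + halfW * halfW + halfH * halfH);
}

// Conservative test: if the eye is farther from the sphere center than
// radius plus the near-plane reach, the near plane cannot touch the
// volume (the coarse mesh lies inside the ideal sphere), so it is safe
// to treat the camera as outside. Otherwise treat it as inside.
bool cameraInsideLightVolume(const Vec3& eye, const Vec3& center, float radius,
                             float nearDist, float fovY, float aspect)
{
    return distance(eye, center) < radius + nearPlaneCornerDistance(nearDist, fovY, aspect);
}

Flipping to "inside" a little early is harmless here, per the point above: rendering back faces while slightly outside the volume doesn't produce artifacts.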

@rdragon1: Unfortunately it's not that easy.

@MJP: I think I get it. Correct me if I'm wrong, but I guess we could equally well render back faces *always*. This would produce correct shading results. The only downside is that objects between the camera and the light source (and not enclosed by the volume of that light source) would also get shaded.

However, this polygonization of the bounding sphere is a problem when doing volumetric fog. I made a simple demo where a sphere is used as the volume. When we are outside the volume, we want to subtract the distances to the front faces from the distances to the back faces, thus getting the distance that light travels through the volume. When we are inside the volume, on the other hand, we only want to render the back faces. I can see quite severe artefacts when rendering only back faces while the camera is outside the volume, or when rendering both front and back faces while the camera is inside the volume. How to tackle that problem? I think a solution could be to prevent the camera, or actually its near plane, from ever crossing the volume's geometry. To do this we could extend the volume a little when the camera gets close enough to it, so that the camera ends up inside the volume. But that sounds like a dirty hack. Is there a more elegant solution to this?
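
A minimal sketch of that "extend the volume" idea (illustrative only; it reuses the Vec3, distance, and nearPlaneCornerDistance helpers from the sketch above and assumes a spherical volume): grow the fog volume whenever the near plane could straddle its boundary, so the camera ends up cleanly inside.

#include <cmath> // std::fabs

// Returns a uniform scale to apply to the fog-volume mesh this frame.
float fogVolumeScale(const Vec3& eye, const Vec3& center,
                     float radius, float nearPlaneCornerDist)
{
    const float d = distance(eye, center);
    // The near plane could straddle the (coarse) sphere boundary:
    // grow the sphere just past the eye-to-corner reach.
    if (std::fabs(d - radius) < nearPlaneCornerDist)
        return (d + nearPlaneCornerDist) * 1.01f / radius;
    return 1.0f; // camera is safely inside or safely outside
}

The downside is exactly the one noted above: the grown volume accumulates slightly too much fog near the boundary, which is what makes it feel like a hack.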

[quote]
@MJP: I think I get it. Correct me if I'm wrong, but I guess we could equally well render back faces *always*. This would produce correct shading results. The only downside is that objects between the camera and the light source (and not enclosed by the volume of that light source) would also get shaded.
[/quote]

Indeed, you won't get any artifacts as long as the volume doesn't intersect the far clipping plane. And yes, you can't reject surfaces between the camera and the light volume when you're only rendering back faces. However, this isn't a concern when you're inside the volume, or very close to being inside it.


[quote]
However, this polygonization of the bounding sphere is a problem when doing volumetric fog. I made a simple demo where a sphere is used as the volume. When we are outside the volume, we want to subtract the distances to the front faces from the distances to the back faces, thus getting the distance that light travels through the volume. When we are inside the volume, on the other hand, we only want to render the back faces. I can see quite severe artefacts when rendering only back faces while the camera is outside the volume, or when rendering both front and back faces while the camera is inside the volume. How to tackle that problem? I think a solution could be to prevent the camera, or actually its near plane, from ever crossing the volume's geometry. To do this we could extend the volume a little when the camera gets close enough to it, so that the camera ends up inside the volume. But that sounds like a dirty hack. Is there a more elegant solution to this?
[/quote]


Perhaps I'm not fully understanding what you're trying to do here, but couldn't you just compute the distance analytically in the pixel shader?


[quote]
Perhaps I'm not fully understanding what you're trying to do here, but couldn't you just compute the distance analytically in the pixel shader?
[/quote]
That wouldn't change anything.

I will briefly describe what I'm doing. To compute volumetric fog (in this case the volume is a sphere), I compute how far light travels through the volume. To do so, I render the objects and the volume's front faces into one render target, RT1, and the objects and the volume's back faces into another render target, RT2. Now, if the camera is outside the volume I want to compute RT2 - RT1, and if I am inside the volume I only need RT2. Since the volume is a tessellated sphere, a test like this to differentiate between the two cases:

if (getDistanceBetweenPoints(eye, sphereCenter) > sphereRadius)
    renderer.setTexture(0, RTDifference); // outside: use RT2 - RT1
else
    renderer.setTexture(0, RT2);          // inside: back-face distances only

does not work when the camera is positioned close to the tessellated sphere yet classified as inside the sphere by the if-statement above (the situation in the picture I posted before).

As far as I understand your suggestion, you want to put that if-statement in the pixel shader. It can be done that way, but it doesn't solve the problem.

What I meant was to analytically determine the length of the segment of the view ray that intersects the sphere, rather than rendering the back-face/front-face depths and comparing them.
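
A minimal sketch of that analytic approach (illustrative C++-style pseudocode; in practice this would run in the pixel shader, and the function name and parameters are illustrative, not anyone's actual code). It subtracts the ray's entry and exit distances for the sphere, with clamps that handle both the camera-inside case and opaque geometry:

#include <algorithm> // std::max, std::min
#include <cmath>     // std::sqrt

struct Vec3 { float x, y, z; };

// rayDir must be normalized; sceneDist is the distance to the nearest
// opaque surface along the ray. Returns how far the ray travels inside
// the sphere, i.e. the distance over which fog should accumulate.
float fogPathLength(const Vec3& rayOrigin, const Vec3& rayDir,
                    const Vec3& sphereCenter, float sphereRadius,
                    float sceneDist)
{
    const Vec3 m = { rayOrigin.x - sphereCenter.x,
                     rayOrigin.y - sphereCenter.y,
                     rayOrigin.z - sphereCenter.z };
    const float b = m.x * rayDir.x + m.y * rayDir.y + m.z * rayDir.z;
    const float c = m.x * m.x + m.y * m.y + m.z * m.z - sphereRadius * sphereRadius;
    const float disc = b * b - c;
    if (disc < 0.0f)
        return 0.0f; // the ray misses the sphere entirely

    const float s = std::sqrt(disc);
    const float tEnter = std::max(-b - s, 0.0f);      // clamp: camera may be inside
    const float tExit  = std::min(-b + s, sceneDist); // clamp: stop at opaque geometry
    return std::max(tExit - tEnter, 0.0f);
}

Since no proxy mesh is involved, the coarse-tessellation problem disappears entirely; the inside/outside distinction reduces to the tEnter clamp.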

I guess it is possible. I even found an interesting article, "Ray Traced Fog Volumes" in ShaderX4, which does that on a per-vertex basis. It can also be done per-pixel. However, note that the back faces of the volume (or, to be more precise, of the volume enclosing the fog volume) must still be rendered to draw *volumetric* fog where there are no actual geometry pixels.

Nevertheless, I am still thinking about using arbitrary meshes for fog volumes, since that gives more flexibility. Besides, the problem of detecting whether the camera is inside or outside the volume mesh (or should I say, whether the near plane crosses the volume mesh) seems to be more pervasive. I think more algorithms need that, although right now I cannot think of anything other than deferred shading and volumetric fog.

Provided the geometry representing the volume is convex, you can render from inside or outside the volume without needing to test which case you are in:

Make sure all your polygons face the same way (all inward or all outward).
Clear the stencil buffer to zero.
Set up the two-sided stencil to increment on front faces and decrement on back faces, in both cases on depth fail (with wrapping increment/decrement).
Render your light, keeping pixels with a non-zero stencil.

This handles all the odd cases, including the convex volume geometry intersecting the near plane. Just never let the convex volume hit the far plane (or use an infinite projection that avoids far clipping).
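
A minimal sketch of that setup (illustrative, not the poster's code; it assumes OpenGL 2.0+ two-sided stencil via glStencilOpSeparate, and drawLightVolume() is a hypothetical helper that submits the convex volume mesh):

// Pass 1: mark pixels whose visible surface lies inside the convex volume.
glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // stencil-only pass
glDepthMask(GL_FALSE);                               // read depth, don't write it
glEnable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);                             // submit front and back faces together
glStencilFuncSeparate(GL_FRONT_AND_BACK, GL_ALWAYS, 0, 0xFF);
// On depth fail: front faces increment, back faces decrement (wrapping).
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_INCR_WRAP, GL_KEEP);
glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_DECR_WRAP, GL_KEEP);
drawLightVolume(); // hypothetical helper: draws the convex volume mesh

// Pass 2: shade only where the stencil ended up non-zero.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_DEPTH_TEST);  // the stencil already encodes the intersection
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);      // draw back faces so this works with the camera inside
glStencilFuncSeparate(GL_FRONT_AND_BACK, GL_NOTEQUAL, 0, 0xFF);
glStencilOpSeparate(GL_FRONT_AND_BACK, GL_KEEP, GL_KEEP, GL_KEEP);
drawLightVolume(); // same mesh, now with the lighting shader bound
glDisable(GL_STENCIL_TEST);

Note that pixels in front of the volume increment on the front face and decrement on the back face, netting zero, while pixels inside the volume only decrement on the back face, which is what makes the non-zero stencil test work.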
