Localizing image-based reflections (issues / questions)


I'm currently trying to localize my IBL probes but have come across some issues that I couldn't find any answers to in any paper or otherwise.

1. How do I localize my diffuse IBL (irradiance map)? The same method as for specular doesn't really seem to work.

The locality does not really seem to work for the light bleed.

[Screenshot: reddish light bleed spread across the plane next to the thin red object]

As you can see here, the light bleed is spread over a large portion of the plane even though the rectangular object is very thin.

Also, the reddish tone doesn't get any stronger as the sphere is moved closer to the red object.

If I move the sphere off to the side of the thin red object, the red reflection is still visible. So there's no real locality to it.

2. How do I handle objects that aren't rectangular, or objects that aren't right at the edge of the AABB I intersect? (Or am I missing a line or two to do that?)

Example picture:

[Screenshot: the red object's reflection projected flat onto the side of the AABB]

As you can see here, the rectangular red object's reflection works perfectly (but then again, only if it's exactly at the edge of the AABB).

If an object is something like a sphere, or is moved closer to the probe (so not at the edge), the reflection is still completely flat and projected onto the side of the AABB.

Here's the code snippet showing how I localize my probe reflection vector...



float3 LocalizeReflectionVector(in float3 R, in float3 PosWS, 
                                in float3 AABB_Max, in float3 AABB_Min, 
                                in float3 ProbePosition)
{
	// Find the ray intersection with box plane
	float3 FirstPlaneIntersect = (AABB_Max - PosWS) / R;
	float3 SecondPlaneIntersect = (AABB_Min - PosWS) / R;

	// Get the furthest of these intersections along the ray
	float3 FurthestPlane = max(FirstPlaneIntersect, SecondPlaneIntersect);

	// Find the closest far intersection
	float distance = min(min(FurthestPlane.x, FurthestPlane.y), FurthestPlane.z);

	// Get the intersection position
	float3 IntersectPositionWS = PosWS + distance * R;

	return IntersectPositionWS - ProbePosition;
}
The basic idea here is "parallax correction", though judging by your code you may already be doing that: https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/

There are, as you've found, fundamental problems with using cubemaps for all the things you're trying to do with them. You can also try getting the "distance" to each object in your cubemap by tracing a signed distance field, and the above blog has some ideas on blending between cubemaps, but the basic problem of being limited to essentially projecting from the 2D walls of the cubemap's AABB will always be there. You could also try screen-space raytracing, which can be pretty cheap today, e.g. http://jcgt.org/published/0003/04/04/ but that comes with all the problems of being screen-space, and fundamentally you're always going to have errors with cubemaps.

There are, as you've found, fundamental problems with using cubemaps for all the things you're trying to do with them.

This.

You'll never get an accurate approximation of your scene geometry by using standard parallax-corrected cubemaps. The whole idea is to reproject your cubemap onto "proxy geometry", which is usually either an AABB or a sphere, but of course a game's scene will almost never be made up of axis-aligned rectangular or spherical areas.

Try to think of this approach as a way to very roughly approximate reflections, not as a solution for getting actual correct reflections for your scene. If you strategically choose locations for capturing your reflection data this will usually hold up quite nicely.

A lot of current-gen games combine this method with screen-space reflections, as Frenetic Pony mentioned. For this you determine a maximum length for your reflection ray when you're sampling in screen-space, and if no hit is found you fall back to sampling from your cubemap. This can give some very convincing results.
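Roughly like this (a hedged sketch only — TraceScreenSpaceRay is a stand-in for whatever SSR routine you use, such as the JCGT method linked above, and the resource names are made up):

TextureCube  ProbeCube;     // pre-filtered specular probe
SamplerState LinearSampler;

// Stub: replace with a real screen-space trace; this one always misses,
// so the sketch compiles on its own and takes the cubemap path.
bool TraceScreenSpaceRay(float3 Origin, float3 Dir, float MaxDist, out float3 HitColor)
{
	HitColor = 0;
	return false;
}

float3 SampleReflection(float3 R, float3 PosWS, float Roughness,
                        float3 AABB_Max, float3 AABB_Min, float3 ProbePosition)
{
	float3 HitColor;
	if (TraceScreenSpaceRay(PosWS, R, 10.0f, HitColor))
		return HitColor; // screen-space hit wins

	// Miss: fall back to the parallax-corrected cubemap
	float3 LocalR = LocalizeReflectionVector(R, PosWS, AABB_Max, AABB_Min, ProbePosition);
	const float NumMips = 8.0f; // assumption: 8 pre-filtered mip levels
	return ProbeCube.SampleLevel(LinearSampler, LocalR, Roughness * (NumMips - 1)).rgb;
}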

As for your diffuse reflection problem, the same rules as for specular reflections apply, so you'll never really get perfect bounce lighting using parallax-corrected cubemaps alone. Luckily, this doesn't really matter in a lot of cases: due to the low-frequency nature of diffuse lighting, the illusion of diffuse bounce lighting is much easier to uphold.

I wouldn't worry too much about your bounce lighting not being 100% correct, especially if you're working with outdoor scenes. If you're working with environments where your bounce lighting will be much more noticeable (e.g. dark indoor scenes with smaller splashes of bright light) I'd recommend you look into different approaches to solving this.


One of the (many) problems of pre-filtered, parallax-corrected cubemaps is that applying the pre-filtering before the parallax correction will result in errors, even if the proxy geo is a perfect fit for the actual surfaces being captured in the cubemap. I believe the DICE presentation touched on this in the context of specular reflections, but the problem is much worse for diffuse/irradiance, since the filter kernel is much wider compared to a typical specular BRDF.

Honestly though, a cubemap is really overkill for storing irradiance at a sample point. You can store a pretty good approximation of irradiance in a fairly small footprint by using something like spherical harmonics, SRBFs, or even something simple like Valve's ambient cube format (which is basically a 1x1 cubemap). If you do this, then you can store diffuse samples at a much higher density relative to your specular probes, which will mitigate the issues caused by incorrect parallax. You can even use your dense irradiance samples to improve the quality of your specular probes by normalizing the specular intensity, in a manner similar to what was presented by Activision at SIGGRAPH 2013.
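For illustration, evaluating an ambient cube is tiny (a sketch; the array layout here is my own choice, not necessarily Valve's exact format):

// Cube[0..5] = +X, -X, +Y, -Y, +Z, -Z irradiance colours.
// The squared normal components sum to 1, so the blend is normalized.
float3 EvaluateAmbientCube(float3 N, float3 Cube[6])
{
	float3 nSq = N * N;
	int3 isNeg = int3(N.x < 0.0f, N.y < 0.0f, N.z < 0.0f);
	return nSq.x * Cube[0 + isNeg.x]
	     + nSq.y * Cube[2 + isNeg.y]
	     + nSq.z * Cube[4 + isNeg.z];
}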

As you can see here, the light bleed is spread over a large portion of the plane even though the rectangular object is very thin.
Also, the reddish tone doesn't get any stronger as the sphere is moved closer to the red object.
If I move the sphere off to the side of the thin red object, the red reflection is still visible. So there's no real locality to it.

There are probably some neat hacks that could be developed for this case.
e.g. measure the distance from the shaded point to the edges of the bounding volume. As that distance approaches zero, you could skew the lookup direction towards the nearest edge of the volume, and perhaps even bias the mip level towards a slightly higher resolution. That should produce a stronger light-bleed gradient towards the edges of the room. It kinda sorta makes a flimsy bit of physical sense: if you were doing proper sampling instead of pre-filtering, you would be taking many more samples from that edge as surfaces get close to it.
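Something like this, perhaps (a completely untested sketch — EdgeRange and the blend curves are invented tuning values):

// Bend the lookup direction toward the nearest wall of the probe volume
// as the shaded point approaches it, and sharpen the mip level there.
float3 SkewTowardNearestFace(float3 R, float3 PosWS,
                             float3 AABB_Min, float3 AABB_Max,
                             float EdgeRange, out float MipBias)
{
	float3 ToMin = PosWS - AABB_Min; // distances to the three "min" faces
	float3 ToMax = AABB_Max - PosWS; // distances to the three "max" faces

	// Find the nearest face and a unit normal pointing at it
	float  d       = ToMin.x;
	float3 FaceDir = float3(-1, 0, 0);
	if (ToMin.y < d) { d = ToMin.y; FaceDir = float3(0, -1, 0); }
	if (ToMin.z < d) { d = ToMin.z; FaceDir = float3(0, 0, -1); }
	if (ToMax.x < d) { d = ToMax.x; FaceDir = float3(1, 0, 0); }
	if (ToMax.y < d) { d = ToMax.y; FaceDir = float3(0, 1, 0); }
	if (ToMax.z < d) { d = ToMax.z; FaceDir = float3(0, 0, 1); }

	float t = saturate(d / EdgeRange);     // 0 at the wall, 1 once EdgeRange away
	MipBias = lerp(-2.0f, 0.0f, t);        // sharper mip near the wall
	return normalize(lerp(FaceDir, R, t)); // skew the lookup toward the wall
}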

As you can see here, the rectangular red object's reflection works perfectly (but then again, only if it's exactly at the edge of the AABB).
If an object is something like a sphere, or is moved closer to the probe (so not at the edge), the reflection is still completely flat and projected onto the side of the AABB.

Yeah that case simply doesn't work with the bounding-shape or convex-hull parallax correction techniques. For objects within the room (like your sphere), you can omit them from your cubemaps and then composite them over the top using a different technique (like screen-space reflections, planar mirrors, etc).

I've been experimenting with a more robust parallax correction technique where you bake the scene into a 3D distance field (or a BVH of distance-field cubes), ray/sphere trace through the BVH/fields to find where the reflection cone intersects the scene, and then use that point to do the parallax correction.
I'm not yet at the stage where I have any results to show off, or where I could recommend going down this path though...

Alright thanks for clearing that up ;)

I feel a little disappointed in IBL now. :(

By the way, do you guys know of any good way to do local cubemap blending in a forward renderer? I've been reading up on some articles but have only come to the conclusion that it's really crappy if you're not doing your lighting deferred...

The only thing I've found was Lagarde's blog (https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/), where he introduces his method of having a point of interest (e.g. the camera) and performing a pre-pass to blend the 4 closest cubemaps together, but this approach falls apart when it comes to parallax correction. He then proposes a hacky way (which I don't quite understand) to work around this, but it limits everything to planar surfaces, which I don't really like.

I'm also having trouble deciding what method to use to calculate the blend weights between the probes. Most of the methods I've found seem really complicated (Unity's Delaunay triangulation, ...).

For IBL you can just take the n closest cubemaps and blend between them in the shader (for example, Killzone Shadow Fall does it this way). For diffuse probes you can use a regular grid or octree and trilinearly blend between the 8 nearest probes (Natalya Tatarchuk, "Irradiance Volumes").
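A rough sketch of the first idea (the inverse-distance weighting here is my own guess, not necessarily what Killzone does, and the resource names are made up):

TextureCubeArray ProbeCubes;   // one cubemap per probe
SamplerState     LinearSampler;

// Blend up to 4 probes, weighted by inverse distance from the shaded
// point to each probe centre, with the weights normalized to sum to 1.
float3 BlendNearestProbes(float3 R, float3 PosWS, float3 ProbePos[4], int Count)
{
	float3 Result = 0;
	float  TotalW = 0;
	for (int i = 0; i < Count; ++i)
	{
		float w = 1.0f / (distance(PosWS, ProbePos[i]) + 1e-4f);
		Result += w * ProbeCubes.Sample(LinearSampler, float4(R, i)).rgb;
		TotalW += w;
	}
	return Result / TotalW;
}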

If you don't want to use Delaunay triangulation, the simplest solution is a uniform grid.
You could sample it on the CPU and do a linear interpolation per object.
Alternatively, you can store the grid in a texture and sample it per vertex or per pixel.

The grid can store diffuse SH directly, or it can store indices of probes.
If you store all your cubemaps in a texture array or atlas, then you can use the indices fetched from the grid to sample the probes.
That would work for forward, or for doing all probes in a single deferred pass.
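For example (a sketch — the texture names, index encoding, and grid resolution are all assumptions):

Texture3D<uint>  ProbeIndexGrid; // one probe index per grid cell
TextureCubeArray ProbeCubes;
SamplerState     LinearSampler;

static const uint3 GridResolution = uint3(32, 8, 32); // example resolution

float3 SampleGridProbe(float3 R, float3 PosWS, float3 GridMin, float3 GridInvSize)
{
	// Map the world position into [0,1] grid space, then to an integer cell
	float3 uvw   = saturate((PosWS - GridMin) * GridInvSize);
	uint3  cell  = min((uint3)(uvw * GridResolution), GridResolution - 1);
	uint   index = ProbeIndexGrid.Load(int4(cell, 0));
	return ProbeCubes.Sample(LinearSampler, float4(R, index)).rgb;
}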


The FarCry 3/4 presentations have a lot of details on their SH grid systems.


Also, lots of games don't stick with just traditional forward or traditional deferred; most games are hybrids.
e.g. you can do a depth pre-pass which also writes roughness and normal to a colour target, and use them to calculate your IBL contributions as you would for a deferred renderer. Then you can do a regular forward shading pass, which samples your "deferred IBL" texture.
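The forward pass then just fetches the pre-computed IBL at its own pixel, something like (names made up):

Texture2D DeferredIBL; // written by the IBL pass that ran after the depth pre-pass

float3 ForwardShade(float2 PixelCoord, float3 Albedo, float3 DirectLighting)
{
	// Fetch the "deferred IBL" computed earlier for this exact pixel
	float3 ibl = DeferredIBL.Load(int3(int2(PixelCoord), 0)).rgb;
	return Albedo * DirectLighting + ibl;
}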

