IBL cubemap blending and dynamic time of day


Hey guys,

I'm curious what you think about this problem.

Let's say I want to blend light probes together in a deferred way.

The way I read it, you find out which probes are visible on screen (basically frustum culling them) and then blend K light probes together.

Now I guess we obviously need to limit this to a fairly low number, let's say 4. How would I determine which 4 light probes these are? Compare the distance of the shaded pixel in world space to each probe's world-space position?
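For illustration, a minimal CPU-side sketch of that distance comparison - a plain nearest-K selection over a hypothetical Probe struct (both the struct and the fixed K are assumptions for the example, not anything from an engine):

```cpp
// Minimal sketch: pick the K probes nearest to a shaded world-space point.
#include <algorithm>
#include <vector>

struct Float3 { float x, y, z; };

struct Probe {
    Float3 position;   // world-space probe position
    int    cubemapId;  // index into the cubemap array
};

static float DistanceSq(const Float3& a, const Float3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Returns the indices of the k probes closest to the shaded position.
std::vector<size_t> SelectNearestProbes(const std::vector<Probe>& probes,
                                        const Float3& shadedPosWS,
                                        size_t k) {
    std::vector<size_t> indices(probes.size());
    for (size_t i = 0; i < indices.size(); ++i) indices[i] = i;

    k = std::min(k, indices.size());
    // Partial sort: only the first k entries need to be in order.
    std::partial_sort(indices.begin(), indices.begin() + k, indices.end(),
                      [&](size_t a, size_t b) {
                          return DistanceSq(probes[a].position, shadedPosWS) <
                                 DistanceSq(probes[b].position, shadedPosWS);
                      });
    indices.resize(k);
    return indices;
}
```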

Having captured a cubemap each for specular and for diffuse, that already means sampling 8 cubemaps just to blend 4 probes.

However, let's assume I want to capture 4 cubemaps for key daytime intervals (sunrise / noon / sunset / night). That's 4 variations of the same cubemap, meaning in total we would have to sample 2 * 4 * 4 = 32 cubemaps just to blend 4 light probes together. I haven't actually tested real-world performance yet, but in theory it seems a bit insane to me. That's also 32 shader resource slots I'd have to fill...

Is there a better / faster way to do this?


Most engines re-light the probes in realtime (The Witcher 3, coming to CryEngine, etc.). You just store what would normally go into your G-buffer into the cubemap and then light that: no polys, no draw calls, arbitrary lighting changes, and so on.
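To make the idea concrete, here is a minimal sketch of relighting a single cubemap texel, assuming each texel stores albedo + normal and using a plain Lambert sun term - the GBufferTexel layout and the lighting model are illustrative assumptions, not the actual Witcher 3 / CryEngine scheme:

```cpp
// Minimal sketch: relight one G-buffer cubemap texel with the current sun.
// Because only stored surface data is lit, no geometry is re-rendered,
// so this can run every frame (or amortized) as the time of day changes.
#include <algorithm>

struct Float3 { float x, y, z; };

static Float3 Mul(const Float3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Float3 Mul(const Float3& a, const Float3& b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static float  Dot(const Float3& a, const Float3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct GBufferTexel {
    Float3 albedo;  // surface color captured at bake time
    Float3 normal;  // world-space normal captured at bake time
};

Float3 RelightTexel(const GBufferTexel& texel,
                    const Float3& sunDirWS,   // direction toward the sun
                    const Float3& sunColor,
                    const Float3& ambient) {
    const float  nDotL  = std::max(0.0f, Dot(texel.normal, sunDirWS));
    const Float3 direct = Mul(Mul(sunColor, texel.albedo), nDotL);
    const Float3 amb    = Mul(ambient, texel.albedo);
    return {direct.x + amb.x, direct.y + amb.y, direct.z + amb.z};
}
```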

That being said, why would you have all 4 time-of-day probes loaded at the same time? Surely, if you're using pre-calculated probes, you're just blending from one to the next over time. I'd also suggest that only 4 times of day is a ridiculously low temporal resolution.

You don't need all 4 time-of-day probes for blending - the current time can only ever lie between two of those probes at once, since time is 1D... unless you're rendering such a large area that it spans multiple timezones ;)

Also, consider blending the probes into a new cubemap instead - you can then cache the result of the blend. Depending on resolution, you end up paying the blending cost on far fewer pixels.
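As a sketch of the "time is 1D" point, here's one way to find the two keyframe probes bracketing the current hour plus the blend factor between them (the four key times are the ones from the original post; the wrap-around handling is an assumption). The result can then be blended once into a cached cubemap that shading actually samples:

```cpp
// Minimal sketch: bracket the current hour between two keyframe probes.
struct TimeBlend {
    int   probeA;  // index of the earlier keyframe probe
    int   probeB;  // index of the later keyframe probe
    float t;       // blend factor from probeA toward probeB
};

// keyTimes are in hours, in chronological order around the day;
// the search wraps past the last key (e.g. night -> sunrise).
TimeBlend BracketTimeOfDay(const float* keyTimes, int keyCount, float hour) {
    for (int i = 0; i < keyCount; ++i) {
        const int next   = (i + 1) % keyCount;
        float     span   = keyTimes[next] - keyTimes[i];
        float     offset = hour - keyTimes[i];
        if (span < 0.0f)   span += 24.0f;   // wrap across midnight
        if (offset < 0.0f) offset += 24.0f;
        if (offset <= span) return {i, next, offset / span};
    }
    return {0, 0, 0.0f};  // unreachable if the keys cover the full day
}

// Usage: blend the two bracketing cubemaps once into a cached cubemap,
// then sample only the cached result during shading.
// const float keys[4] = {6.0f, 12.0f, 18.0f, 0.0f};  // sunrise/noon/sunset/night
// TimeBlend tb = BracketTimeOfDay(keys, 4, 16.5f);   // -> noon..sunset, t = 0.75
```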

Now I guess we obviously need to limit this to a fairly low number, let's say 4. How would I determine which 4 light probes these are? Compare the distance of the shaded pixel in world space to each probe's world-space position?

You can do robust blending if you have a tetrahedralization of the probe positions (a Delaunay tetrahedralization, the dual of the Voronoi diagram). There is a paper about it from a Unity developer, but I lost the link.

The 2D case is easy to explain: take a point at each probe position, build a Delaunay triangulation over them, find the triangle that contains the sampling point, and lerp its 3 probes using barycentric coordinates.
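A minimal sketch of that 2D case, with the weights computed via the standard signed-area form of barycentric coordinates (the probe payloads themselves are left abstract):

```cpp
// Minimal sketch: barycentric weights of a point inside one triangle of
// the triangulation, used to lerp the 3 probes at its corners.
#include <cstdio>

struct Float2 { float x, y; };

struct Barycentric { float w0, w1, w2; };  // weights for the 3 corner probes

Barycentric ComputeBarycentric(const Float2& p, const Float2& a,
                               const Float2& b, const Float2& c) {
    const float denom = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    const float w0 = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / denom;
    const float w1 = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / denom;
    return {w0, w1, 1.0f - w0 - w1};
}

int main() {
    // Probe positions at the triangle corners; the sampling point lies
    // inside, so all three weights are positive and sum to 1.
    const Float2 a{0, 0}, b{1, 0}, c{0, 1}, p{0.25f, 0.25f};
    const Barycentric w = ComputeBarycentric(p, a, b, c);
    std::printf("w0=%.2f w1=%.2f w2=%.2f\n", w.w0, w.w1, w.w2);  // 0.50 0.25 0.25
    return 0;
}
```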

Edit: found it again: http://gdcvault.com/play/1015312/Light-Probe-Interpolation-Using-Tetrahedral

Sebastien Lagarde briefly mentions a deferred blending method in his blog post here:

https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/

– Only with a deferred or light-prepass engine: Apply K cubemaps by weighted additive blending. Each cubemap bounding volume is rendered to the screen and normal + roughness from the G-buffer is used to sample the cubemap.

Global and local cubemap overlapping
Global (infinite in this case) and local cubemaps affect only objects in their range (defined by artists). Objects can be affected by several overlapping cubemaps, like a global and a local cubemap.
This strategy can only be implemented efficiently with a deferred or light-prepass rendering architecture.
This is the method used by CryEngine 3 (Crysis 2) [6]. This method is simple to implement in a deferred context; the cubemaps are blended in a deferred way. There can be lighting seams at the boundary of the cubemaps.

But he doesn't go into detail, only stating that it is "simple" to implement in a deferred context.

Can anyone tell me how this works exactly?


Render each cubemap using either a fullscreen pass or its bounding volume. Output to an RGBA16F target with additive blending: output the weighted reflection to the RGB channels and the weight itself to the A channel. When using this reflection buffer, you simply normalize the reflections by dividing by A. Be careful not to divide by zero.
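Spelling that out, here is a CPU-side sketch of the same accumulation - on the GPU the additions are just additive blend state into the RGBA16F target, and SampleProbe is a stand-in for the actual prefiltered cubemap fetch driven by the G-buffer's normal + roughness:

```cpp
// Minimal sketch: weighted additive accumulation of probe reflections,
// followed by the normalize-by-weight resolve with a divide-by-zero guard.
#include <algorithm>

struct Float3 { float x, y, z; };
struct Float4 { float r, g, b, a; };

// Stand-in for the prefiltered cubemap fetch (direction and roughness
// would select the face/mip on the GPU); returns a constant here.
Float3 SampleProbe(int /*probeIndex*/, const Float3& /*reflectionDir*/,
                   float /*roughness*/) {
    return {0.5f, 0.5f, 0.5f};
}

// One probe's contribution, as the additive-blend pass would emit it:
// RGB = weight * reflection, A = weight.
Float4 AccumulateProbe(Float4 target, int probeIndex,
                       const Float3& reflectionDir, float roughness,
                       float weight) {
    const Float3 refl = SampleProbe(probeIndex, reflectionDir, roughness);
    target.r += weight * refl.x;
    target.g += weight * refl.y;
    target.b += weight * refl.z;
    target.a += weight;
    return target;
}

// Resolve: divide by the accumulated weight, guarding against zero.
Float3 ResolveReflection(const Float4& accum) {
    const float invW = 1.0f / std::max(accum.a, 1e-4f);
    return {accum.r * invW, accum.g * invW, accum.b * invW};
}
```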


The artist-defined bounding/local cubemap blending can be a pain, though. The guys at Remedy (Quantum Break) came up with a great way to automate cubemap placement and to automate/store cubemap blending via voxel information: http://advances.realtimerendering.com/s2015/SIGGRAPH_2015_Remedy_Notes.pdf

@frenetic

I'm okay for now with manually defining the boundaries.

@kalle_h

So I'd have to do a final pass that divides by the accumulated alpha value?

Additionally, I'm curious how to do the blend weight calculation...

You don't divide by the accumulated alpha value - just use vanilla alpha blending. There is no single correct formula for the blend weights; people just apply some falloff to hide the transitions.
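For example, one possible falloff (the inner/outer radii are illustrative artist-tunable parameters, not values from this thread): full weight inside an inner radius, fading smoothly to zero at the outer radius.

```cpp
// Minimal sketch: smoothstep-style falloff of a probe's influence with
// distance, to hide the transition at the probe volume's boundary.
#include <algorithm>

float ProbeBlendWeight(float distToProbeCenter,
                       float innerRadius,   // full influence inside this
                       float outerRadius) { // no influence beyond this
    const float t = (distToProbeCenter - innerRadius) /
                    std::max(outerRadius - innerRadius, 1e-4f);
    const float x = std::clamp(1.0f - t, 0.0f, 1.0f);
    return x * x * (3.0f - 2.0f * x);  // smoothstep for a soft transition
}
```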

What would be the advantage of running a post-FX pass for each light probe and blending with alpha blending, instead of just sampling all probes in one pass and applying the weights via constant buffer variables? Wouldn't that be more efficient?

