Hypothesizing a new lighting method.

While I can see how image-based lighting might work great on mobile devices, how does it translate to desktops, where the ALU:TEX ratio is much higher? I would guess that using the abundant ALU to compute a dozen lights visible from one fragment may well be faster than doing yet another texture lookup.

In console/PC games, it's standard to use IBL for the background/fill/bounce/ambient lighting (GI) instead of a flat ambient colour, and then to compute the direct lighting analytically.
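
To make that split concrete, here's a minimal sketch of the structure (Python; the three callables are hypothetical stand-ins for real shader code, not any particular engine's implementation):

    # Direct lighting evaluated analytically per light; ambient/GI pulled
    # from IBL probes instead of a flat ambient colour.
    def shade(surface, lights, eval_direct, eval_diffuse_ibl, eval_specular_ibl):
        color = sum(eval_direct(surface, l) for l in lights)  # analytic direct
        color = color + eval_diffuse_ibl(surface)   # diffuse bounce from probe
        color = color + eval_specular_ibl(surface)  # glossy reflections from probe
        return color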

For film-quality IBL, you don't pre-convolve the probes, and each pixel has to read thousands of importance-sampled values from the IBL probes and integrate them using the BRDF (which is both ALU- and TEX-heavy)...
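
For reference, a minimal sketch of that brute-force integration (Python/numpy rather than shader code, along the lines of Karis's SIGGRAPH 2013 course notes; sample_environment is a caller-supplied stand-in for reading the un-convolved probe, and the full Fresnel/shadowing weighting is omitted for brevity):

    import numpy as np

    def importance_sample_ggx(xi1, xi2, roughness, n):
        """Sample a half-vector around normal n, distributed per the GGX NDF."""
        a = roughness * roughness
        phi = 2.0 * np.pi * xi1
        cos_theta = np.sqrt((1.0 - xi2) / (1.0 + (a * a - 1.0) * xi2))
        sin_theta = np.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
        h = np.array([sin_theta * np.cos(phi), sin_theta * np.sin(phi), cos_theta])
        # Rotate the tangent-space sample into the frame around n.
        up = np.array([1.0, 0.0, 0.0]) if abs(n[2]) > 0.999 else np.array([0.0, 0.0, 1.0])
        tx = np.cross(up, n); tx /= np.linalg.norm(tx)
        ty = np.cross(n, tx)
        return h[0] * tx + h[1] * ty + h[2] * n

    def integrate_specular_ibl(n, v, roughness, sample_environment, num_samples=1024):
        """Monte Carlo estimate of the specular lobe: one probe read per sample."""
        rng = np.random.default_rng(0)
        total, weight = np.zeros(3), 0.0
        for _ in range(num_samples):
            h = importance_sample_ggx(rng.random(), rng.random(), roughness, n)
            l = 2.0 * np.dot(v, h) * h - v   # reflect the view vector about h
            n_dot_l = np.dot(n, l)
            if n_dot_l > 0.0:
                total += sample_environment(l) * n_dot_l
                weight += n_dot_l
        return total / max(weight, 1e-6)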

Which is why most people pre-convolve the probes. I'm not sure whether UE4 does; then again, I'm not sure exactly what they're doing. All I've gathered is "cube map array" and their ability to relight the probes in real time.

They pre-convolve their specular probes, by convolving the environment with the specular BRDF at a given roughness, assuming V = N. They take the standard approach of storing the results for different roughness values in the mip levels of the cubemap (higher roughness goes into lower-res mip levels), and then selecting the mip per-pixel based on the surface roughness. They've also refactored things a bit so that they can approximate the appropriate BRDF response at different viewing angles, by precomputing values into lookup textures and indexing them at runtime. This sort of approach is becoming fairly common.
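
As a rough illustration of the runtime side of that (a Python sketch; the texture objects and the linear roughness-to-mip mapping are assumptions for illustration, not UE4's actual code):

    import numpy as np

    def prefiltered_specular(n, v, roughness, specular_color, cubemap, brdf_lut):
        """cubemap: pre-convolved probe whose mips hold increasing roughness.
        brdf_lut: 2D lookup of the pre-integrated BRDF response (scale, bias)."""
        r = 2.0 * np.dot(n, v) * n - v            # reflection vector
        mip = roughness * (cubemap.num_mips - 1)  # select mip from roughness
        prefiltered = cubemap.sample_lod(r, mip)  # one trilinear fetch
        n_dot_v = max(np.dot(n, v), 0.0)
        scale, bias = brdf_lut.sample(n_dot_v, roughness)  # viewing-angle terms
        return prefiltered * (specular_color * scale + bias)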

I'm not sure what they do for diffuse, but I would assume that they store SH probes which is what most people do these days.
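
Evaluating such a probe is cheap; here's a sketch of the usual 9-coefficient (order-2) SH evaluation, assuming sh is a (9, 3) array of RGB coefficients already convolved with the cosine lobe (a la Ramamoorthi & Hanrahan):

    import numpy as np

    def eval_sh_irradiance(sh, n):
        """Diffuse irradiance for unit normal n from 9 SH coefficients."""
        x, y, z = n
        basis = np.array([
            0.282095,                         # l=0, m= 0
            0.488603 * y,                     # l=1, m=-1
            0.488603 * z,                     # l=1, m= 0
            0.488603 * x,                     # l=1, m= 1
            1.092548 * x * y,                 # l=2, m=-2
            1.092548 * y * z,                 # l=2, m=-1
            0.315392 * (3.0 * z * z - 1.0),   # l=2, m= 0
            1.092548 * x * z,                 # l=2, m= 1
            0.546274 * (x * x - y * y),       # l=2, m= 2
        ])
        return basis @ sh                     # -> RGB irradiance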
