spek

Some Crytek VPL questions


Hi. I was wondering a few things about Crytek's LPV technique. But before asking, let me check that I understood the basics of "propagating" correctly:

* When spreading light through the grid:
- render a point on each cell (32x32x32 = 32,768 points)
- let that point gather light from 6 direct neighbour cells, then reproject it for the next iteration

* Any idea how big their cells are (in meters)? They have 3 nested grids, but I have no clue what kind of sizes to think about.


* How many iterations do they use to spread the light? Basically, a 32x32x32 grid needs up to 64 iterations to get light from one corner to another.
Since their grids overlap, you might transfer it halfway only, then hope your surfaces will sample from the 2nd grid (distant pixels will lerp with a second grid). But... won't that give transition problems? And even with "only" 32 iterations, you still need to render 32 x 32,768 points. Three times, actually (when having 3 grids). Isn't that an awful lot?

* Is there any form of attenuation, aside from blocking geometry? I mean, a weak source still gets its light spread all over the grid, unless its intensity is reduced with each spread iteration. But how would you do that if you don't know which source the light comes from (what you read from a neighbour cell is possibly a summation of multiple lights)? I know this solution doesn't have to produce physically accurate results, but...

* Last one. I read something about "texel snapping", but I don't really know what they do. If a light shines on a nearby wall, it might inject a lot of VPLs into a single cell (what kind of sizes (meters) do they use anyway?). When the light moves away, fewer VPLs are injected. Just summing up everything might result in extremely bright indirect light...

Greets


* When spreading light through the grid:
- render a point on each cell (32x32x32 = 32,768 points)
- let that point gather light from 6 direct neighbour cells, then reproject it for the next iteration


You lay down a screen-aligned quad to make sure every pixel gets executed in the pixel shader. For every pixel you compute the flux through your cell's faces by evaluating the spherical harmonics coefficients of the neighbouring cells in the direction of those faces, and then you simply add the results together.
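To make that concrete, here's a minimal single-channel CPU sketch of one gather iteration, assuming 2-band SH per cell. It's my own simplification, not Crytek's shader: the real thing runs as a pixel shader over volume-texture slices, handles RGB, and also weights each destination face by its solid angle.

```cpp
#include <algorithm>
#include <array>
#include <vector>

// First-order (2-band) SH: 4 coefficients per cell, single colour channel
// for brevity. Names and layout are illustrative, not Crytek's.
using SH4 = std::array<float, 4>;

constexpr int N = 32;  // grid resolution per axis
inline int idx(int x, int y, int z) { return (z * N + y) * N + x; }

// Evaluate the 2-band SH basis in direction d (must be normalized).
SH4 shBasis(float dx, float dy, float dz) {
    return { 0.282095f,        // Y_0,0
             0.488603f * dy,   // Y_1,-1
             0.488603f * dz,   // Y_1,0
             0.488603f * dx }; // Y_1,1
}

// One propagation iteration: every cell gathers flux from its 6 neighbours
// and re-projects it along the neighbour->cell direction.
void propagate(const std::vector<SH4>& src, std::vector<SH4>& dst) {
    const int dirs[6][3] = { {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1} };
    for (int z = 0; z < N; ++z)
    for (int y = 0; y < N; ++y)
    for (int x = 0; x < N; ++x) {
        SH4 sum = { 0.f, 0.f, 0.f, 0.f };
        for (const auto& d : dirs) {
            // d points from the neighbour towards this cell.
            const int nx = x - d[0], ny = y - d[1], nz = z - d[2];
            if (nx < 0 || nx >= N || ny < 0 || ny >= N || nz < 0 || nz >= N)
                continue;
            const SH4& nb = src[idx(nx, ny, nz)];
            const SH4 toward = shBasis((float)d[0], (float)d[1], (float)d[2]);
            // Flux arriving from the neighbour in our direction...
            float flux = 0.f;
            for (int i = 0; i < 4; ++i) flux += nb[i] * toward[i];
            flux = std::max(flux, 0.f);
            // ...re-projected as a directional lobe into this cell's SH.
            for (int i = 0; i < 4; ++i) sum[i] += flux * toward[i];
        }
        dst[idx(x, y, z)] = sum;
    }
}
```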



* Any idea how big their cells are (in meters)? They have 3 nested grids, but I have no clue what kind of sizes to think about.


That's irrelevant, as they are using different cascades of LPVs that all have different sizes but the same resolution. The largest basically covers the whole scene.



* How many iterations do they use to spread the light? Basically, a 32x32x32 grid needs up to 64 iterations to get light from one corner to another.
Since their grids overlap, you might transfer it halfway only, then hope your surfaces will sample from the 2nd grid (distant pixels will lerp with a second grid). But... won't that give transition problems? And even with "only" 32 iterations, you still need to render 32 x 32,768 points. Three times, actually (when having 3 grids). Isn't that an awful lot?


They are using 8 iterations.


* Is there any form of attenuation, aside from blocking geometry? I mean, a weak source still gets its light spread all over the grid, unless its intensity is reduced with each spread iteration. But how would you do that if you don't know which source the light comes from (what you read from a neighbour cell is possibly a summation of multiple lights)? I know this solution doesn't have to produce physically accurate results, but...


I think you misunderstood something. They don't store actual point lights; they store spherical harmonics coefficients, which are capable of representing the whole irradiance (low-frequency radiance). They don't attenuate anything. They use transfer coefficients stored in the geometry volume (GV) to block off the light.
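As a rough illustration of how the GV can gate the propagation (my own sketch; the paper's fuzzy-occlusion math differs in detail): if each GV cell stores directional occlusion as 2-band SH, the flux crossing a cell boundary can simply be scaled by the visibility in the propagation direction.

```cpp
#include <algorithm>
#include <array>

using SH4 = std::array<float, 4>;

// gvCell: occlusion SH of the cell being crossed; dirBasis: the 2-band SH
// basis evaluated in the propagation direction. Both names are mine.
float visibility(const SH4& gvCell, const SH4& dirBasis) {
    float occlusion = 0.0f;
    for (int i = 0; i < 4; ++i) occlusion += gvCell[i] * dirBasis[i];
    return std::clamp(1.0f - occlusion, 0.0f, 1.0f); // 1 = clear, 0 = blocked
}

// Usage during the gather step: flux *= visibility(gvCell, towardBasis);
```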


* Last one. I read something about "texel snapping", but I don't really know what they do.


I don't know what it is either, but it's probably nothing you need to worry about, since it's just an optimization to get somewhat better results.


If a light shines on a nearby wall, it might inject a lot of VPLs into a single cell (what kind of sizes (meters) do they use anyway?). When the light moves away, fewer VPLs are injected. Just summing up everything might result in extremely bright indirect light...


They don't add up the results. They intelligently solve the integral to calculate the SH coefficients, probably kind of like Monte Carlo integration. Isn't there an appendix?

I understand the cascaded approach; the total amount of cells remains the same. But size does matter for accuracy. The (32x32x32) grid could cover 1, 10, or 100 meters per cell, or maybe something different? Just curious what kind of scales to think about. The paper also mentions it skips small objects to avoid flickering problems. "Small" would depend on how big their cells are.


8 iterations... sounds reasonable. But let's say I have (direct) light on a wall on the left side of the (smallest) grid. In case each grid cell covers 1 cubic meter, the light would reflect up to 8 cells (meters). If you were exactly in the center (cell {16,16,16}), the light wouldn't even reach you.
Now this is partially solved with the bigger grids. The big grid might transport the light 800 meters (in case each cell covers 100 meters). But... the cells around the camera don't sample from this big grid, right? Only from the smallest grid (at the borders they sample from both grids to make a smooth transition, though). Well, I made a picture. In that case the orange light would disappear around the camera.


Probably I'm indeed missing some things. Spherical harmonics are magic to me :) From what I know, it's just a way to pack data, in this case light fluxes from multiple directions. You could do the same with cubemaps, except that cubemaps take more space.

With attenuation I don't mean blocking by geometry (I get the concept of injecting geometry surfels to calculate an occlusion factor later on). Basic lights in games often use a (fake) method to let the intensity fall off over meters: strength *= 1 - (distance / light.maxRange), that stuff. When propagating, you basically collect the light from all directions and store it with SH. But how does light "fade out" after X meters? Excuse the stupid question maybe, but again, SH is hard to understand for a math idiot like me.

Regarding each cascade propagating over different distances - maybe they don't mind this artifact :) Or they could use a different number of propagation steps on each cascade.
Regarding attenuation - when collecting incoming light, you could scale it by 1/(cell_size^2) to account for distance attenuation. This will result in too much attenuation when applied over multiple propagation steps, but as usual you can just use a fudged attenuation function (see the sketch below).
If you're mathematically inclined (and aren't happy with hacks), you could try to find a way to properly evaluate the real distance attenuation via numerical integration (i.e. over multiple propagation steps)... ;)
Or maybe distance attenuation isn't even needed to get decent looking results?
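For instance, a sketch of such a fudged per-step attenuation; the constant k and the function shape are made up and would need tuning per game:

```cpp
// Hypothetical fudged attenuation, applied to the gathered flux once per
// propagation step. A strict 1/(cellSize^2) compounds too aggressively
// across steps, so a tunable falloff constant 'k' is used instead.
float attenuate(float flux, float cellSize, float k = 0.1f) {
    return flux / (1.0f + k * cellSize * cellSize);
}
```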

"Don't mind" is often the best remedy when it comes to solving brain crunchers :) I'd like to have a better look at the Crysis lighting, but I get killed each 4 seconds in their multiplayer demo while observing. Using different iteration amounts could work pretty well, but I doubt if they really do that.

Probably their attenuation model won't be very realistic either, but the light has to fade out somehow, right? Otherwise a shitty little candle could spread its light far away and then suddenly stop at the last cell. Maybe they base it on light intensity and just reduce the intensity each step, no matter what kind of range a light source has. The problem with packing multiple light fluxes into SH (or a cubemap) is that you can't really tell anymore where the light came from (or at least which wall it collided with, and how strongly).

Cheers!


- render a point on each cell (32x32x32 = 32,768 points)
- let that point gather light from 6 direct neighbour cells, then reproject it for the next iteration


Let me clarify the process:
- Render the scene from the sun's point of view and create a set of points that represent the light being bounced off walls. Each point contains just 3 vectors: the position where light is bounced, the light color, and the light direction.

- Render the points to the LPV, converting them to SH. The spherical harmonics they use are really simple: the first coefficient L00 represents a constant light color that comes from all directions, while the L1-1, L10, and L11 coefficients form a vector that represents a spot of light coming from a single direction with cosine attenuation (see the sketch after this list).

- Then an 8-step iterative process is performed on the LPV, where each texel fetches the irradiance coming from its 6 adjacent cells. This is also pretty easy, since you can get the irradiance that comes from a neighbour's direction with simple math once you understand the SH representation (let me know if you need some clarification on this).
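To illustrate the injection step, here's a minimal single-channel sketch: one VPL (flux plus surface normal) becomes a clamped-cosine lobe in 2-band SH, accumulated into its cell. Names are mine, and the real shader does this per RGB channel on a volume texture.

```cpp
#include <array>

using SH4 = std::array<float, 4>;

// Standard projection of a clamped cosine lobe around direction n onto the
// 2-band SH basis (constants sqrt(pi)/2 ~ 0.886227 and sqrt(pi/3) ~ 1.023328).
SH4 shCosineLobe(float nx, float ny, float nz) {
    return { 0.886227f,
             1.023328f * ny,
             1.023328f * nz,
             1.023328f * nx };
}

// Accumulate one VPL into the SH coefficients of the cell it falls in.
void injectVpl(SH4& cell, float flux, float nx, float ny, float nz) {
    const SH4 lobe = shCosineLobe(nx, ny, nz);
    for (int i = 0; i < 4; ++i) cell[i] += flux * lobe[i];
}
```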


* Any idea how big their cells are (in meters)? They have 3 nested grids, but I have no clue what kind of sizes to think about.


You can parametrize this to get the best tradeoff for your scene, as a larger LPV will increase the range of the lighting but also cause low-resolution artifacts.
Unlike what DarkChris said, the largest LPV does not cover the whole scene; it covers a wide area of the scene around the player.


* How many iterations do they use to spread the light? Basically, a 32x32x32 grid needs up to 64 iterations to get light from one corner to another.
Since their grids overlap, you might transfer it halfway only, then hope your surfaces will sample from the 2nd grid (distant pixels will lerp with a second grid). But... won't that give transition problems? And even with "only" 32 iterations, you still need to render 32 x 32,768 points. Three times, actually (when having 3 grids). Isn't that an awful lot?


Yup, you're right on all that. The number of iterations limits the range of the bounced lighting, and that's a major problem of the technique, because the lighting bouncing off one wall may not reach the opposite wall of the same room. Crytek uses 8 iterations, which is a small amount, but relies on the cascaded approach to get a higher range throughout the scene.
There's also the transition problem, but you can use a simple blending method to get a smooth transition between cascades (see the sketch below). Even though the cascaded approach and its blending are physically incorrect, they provide good results in practice due to the smooth nature of the LPV lighting.
The propagation step isn't all that expensive, because it's just a simple image filter applied to the LPV's texture, and you can use many tricks to improve the performance a lot.
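One possible blending scheme (my own sketch, not necessarily Crytek's exact method): fade linearly toward the coarser cascade near the fine cascade's border.

```cpp
#include <algorithm>

// 'd' is the sample's normalized distance from the cascade centre
// (0 = centre, 1 = border); 'fadeStart' marks where the fade begins.
// Both names are illustrative.
float coarseWeight(float d, float fadeStart = 0.8f) {
    return std::clamp((d - fadeStart) / (1.0f - fadeStart), 0.0f, 1.0f);
}

// Usage: radiance = lerp(fineSample, coarseSample, coarseWeight(d));
```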


* Is there any form of attenuation, aside from blocking geometry? I mean, a weak source still gets its light spread all over the grid, unless its intensity is reduced with each spread iteration. But how would you do that if you don't know which source the light comes from (what you read from a neighbour cell is possibly a summation of multiple lights)? I know this solution doesn't have to produce physically accurate results, but...


The propagation process tends to attenuate the lighting by itself, sometimes too much.


* Last one. I read something about "texel snapping", but I don't really know what they do. If a light shines on a nearby wall, it might inject a lot of VPLs into a single cell (what kind of sizes (meters) do they use anyway?). When the light moves away, fewer VPLs are injected. Just summing up everything might result in extremely bright indirect light...


When the camera moves, the LPV also moves, and since the LPV is low resolution you end up seeing a lot of flickering and unstable lighting, because each texel of the LPV ends up mapping to different positions in the scene as the player moves. Hence, texel snapping is used to force texels to assume the same positions in the scene regardless of the position and orientation of the camera.
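A minimal sketch of the idea, assuming the LPV is centred on the camera (struct and function names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Quantise the LPV origin to whole cells so each texel keeps covering the
// same world-space region while the camera moves between cell boundaries.
Vec3 snapLpvOrigin(const Vec3& cameraPos, float cellSize) {
    return { std::floor(cameraPos.x / cellSize) * cellSize,
             std::floor(cameraPos.y / cellSize) * cellSize,
             std::floor(cameraPos.z / cellSize) * cellSize };
}
```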

I implemented this technique a while back for my master's thesis; you can find a lot of information about it in my thesis document here.
