Volumetric lighting


Hey, it has been a little while.

I was remaking some of the volumetric light effects I have, and came across the little CryEngine tutorial below. The article says you can create volumetric light for point lights without needing a shadowMap. Really?

http://www.crydev.net/viewtopic.php?f=126&t=69239&start=15

Unless I'm missing something, there are 2 ways to do volumetric lighting in general:

A- Raymarch through a volume; for each position, test whether that coordinate would be lit or not by using a shadowMap test

B- Post-screen FX: render light sources to a buffer, then generate blurry streaks in the direction of that light

Possibly that tutorial uses method B, which is very cheap but doesn't work very well as soon as the light source isn't visible to the camera (= nothing to smear out). But maybe there are some other smart tricks nowadays?
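For reference, this is roughly what method A looks like in a fragment shader. It's just a sketch; all names (uShadowMap, uLightViewProj, NUM_STEPS, the constant-density scatter term) are illustrative, not from any particular engine:

```glsl
// Rough sketch of method A: march from the camera through the light volume and
// only accumulate in-scattering where the shadowMap says the sample is lit.
// All names (uShadowMap, uLightViewProj, NUM_STEPS, ...) are illustrative.
uniform sampler2DShadow uShadowMap;
uniform mat4  uLightViewProj;
uniform vec3  uLightColor;
uniform float uScatterStrength;
const int NUM_STEPS = 64;

vec3 raymarchShaft(vec3 rayStart, vec3 rayEnd)
{
    vec3  stepVec = (rayEnd - rayStart) / float(NUM_STEPS);
    float stepLen = length(stepVec);
    vec3  pos     = rayStart;
    vec3  result  = vec3(0.0);

    for (int i = 0; i < NUM_STEPS; ++i)
    {
        // Project the sample into light space and do an ordinary shadow test
        vec4 ls = uLightViewProj * vec4(pos, 1.0);
        ls.xyz  = ls.xyz / ls.w * 0.5 + 0.5;
        float lit = texture(uShadowMap, ls.xyz);   // 1 = lit, 0 = in shadow

        // Constant-density in-scatter term for lit samples
        result += lit * uLightColor * uScatterStrength * stepLen;
        pos    += stepVec;
    }
    return result;
}
```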

Furthermore, I believe UDK has some sort of volumes you can use for, well, a volumetric effect (render a 3D texture, local fog, et cetera). So you would place a cube or something, and then a shader renders the "inside". I assume that works with raymarching as well. But how do you determine where to start or end the ray?

With the lightshafts, I render the backsides of a light volume (a cone or sphere for example). Then a ray flies from the camera towards the backside. I can skip the gap between the camera and the actual volume a bit by approximating where the volume (foreground triangles) could start, but I still have to test for each sample if it really falls inside the volume.

It works, but I bet there are faster/smarter ways. Maybe by rendering both the front- and backsides to a buffer so I have 2 coordinates? But that won't work very well if volumes overlap each other...
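To make that front/back-depth idea concrete, a minimal sketch, assuming the volume's front- and back-face depths were rendered (linearized) into two textures; all names here are illustrative:

```glsl
// Hedged sketch of the "render front and back faces to get two coordinates" idea.
// uFrontDepth / uBackDepth are assumed to hold linear view-space depth of the
// volume's front and back faces; uSceneDepth is the regular depth buffer.
uniform sampler2D uFrontDepth;
uniform sampler2D uBackDepth;
uniform sampler2D uSceneDepth;

void getMarchInterval(vec2 uv, out float tEnter, out float tExit)
{
    float front = texture(uFrontDepth, uv).r;   // entry of the closest volume
    float back  = texture(uBackDepth,  uv).r;   // exit of that same volume
    float scene = texture(uSceneDepth, uv).r;   // opaque geometry in front?

    tEnter = front;
    tExit  = min(back, scene);    // stop the march at solid geometry
    // Note: with several volumes stacked behind each other, a single
    // front/back pair isn't enough; you'd need per-volume passes or an
    // analytic intersection per volume.
}
```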


Looks like B is your answer, or so I'd guess from the screenshots.

If you want to go expensive, there's new stuff involving min/max maps etc. that looks fantastic and is expensive as all get out (but still realtime).

I don't know CryEngine well enough to confirm, but casting lightshafts without raymarching/shadowMaps indeed sounds like this:

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html

Although the shot shows areas that are still a bit greenish or orange behind one of those reflecting balls. With the post-screen blur-streak effect that shouldn't be possible... Anyhow, I'll just implement both techniques, I think. Method A is more accurate but costs a lot; method B can't be used in all situations but is cheap.
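For completeness, the post-screen streak pass (method B, i.e. the GPU Gems 3 chapter 13 approach linked above) boils down to roughly this; uniform names and constants are illustrative:

```glsl
// Rough sketch of the GPU Gems 3, chapter 13 style post-process:
// smear bright pixels radially away from the light's screen-space position.
// Uniform names and constants here are illustrative, not CryEngine's.
uniform sampler2D uBrightPass;      // occluded light source / bright pass buffer
uniform vec2  uLightScreenPos;      // light position in [0,1] screen space
uniform float uDensity;             // how far the streaks reach
uniform float uDecay;               // falloff per sample
uniform float uExposure;
const int NUM_SAMPLES = 64;

vec3 radialBlur(vec2 uv)
{
    vec2 delta   = (uv - uLightScreenPos) * (uDensity / float(NUM_SAMPLES));
    float weight = 1.0;
    vec3 color   = vec3(0.0);

    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        uv     -= delta;                          // step toward the light
        color  += texture(uBrightPass, uv).rgb * weight;
        weight *= uDecay;                         // later samples contribute less
    }
    return color * uExposure;
}
```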

Min/Max maps...?

Is that something new, or just 2 ordinary depthMaps? Basically I want to place volumes and raymarch through them. To avoid useless samples, I'll need to know the entrance and exit point of a ray in that volume. When rendering the volume normally, you only know one of those (usually the exit point, as I render the backsides). I could render the same volumes into a depthMap to find the other coordinate, but that won't work very well if volumes are placed behind each other. Which is quite likely going to happen in my case...

Another way might be calculating the entrance or exit point mathematically inside the shader, given the type of shape & size, a matrix, and the camera. But I wouldn't know exactly how; maybe something like the sketch below?
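A rough, untested sketch for the sphere case (all names are illustrative): transform the camera ray into the volume's local space, where the volume is a unit sphere at the origin, and solve the quadratic for the entry and exit distances.

```glsl
// Hedged sketch of the analytic route for a sphere light volume.
// uWorldToVolume is assumed to be the inverse of the volume's world matrix,
// so the volume becomes a unit sphere at the local-space origin.
uniform mat4 uWorldToVolume;

// Returns true if the ray hits the volume; tNear/tFar are the entry and exit
// distances along the ray. Note: distances come out in the volume's local
// space, so rescale them if the matrix contains scaling.
bool intersectSphereVolume(vec3 rayOriginWS, vec3 rayDirWS,
                           out float tNear, out float tFar)
{
    vec3 ro = (uWorldToVolume * vec4(rayOriginWS, 1.0)).xyz;
    vec3 rd = normalize((uWorldToVolume * vec4(rayDirWS, 0.0)).xyz);

    // Quadratic for |ro + t*rd| = 1 with normalized rd (a = 1)
    float b    = dot(ro, rd);
    float c    = dot(ro, ro) - 1.0;
    float disc = b * b - c;
    if (disc < 0.0) return false;      // ray misses the volume entirely

    float s = sqrt(disc);
    tNear = max(-b - s, 0.0);          // clamp: the camera may sit inside
    tFar  = -b + s;
    return tFar > 0.0;                 // volume is behind the camera otherwise
}
```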

Thanks!

Pre-filtered single scattering: http://www.mpi-inf.mpg.de/~oklehm/publications/2014/i3d/prefiltered_single_scattering-i3DKlehm2014.pdf

Similarly, and with explicit point light support: http://software.intel.com/en-us/blogs/2013/03/18/gtd-light-scattering-sample-updated

Thanks!

Didn't read the papers yet, but do they work fundamentally differently from the techniques described earlier? And if so, what are the major differences/improvements compared with the others?

Basically I can already do the stuff shown in the movie at the 2nd link, but at a relatively high cost (needs a shadowMap, lots of sampling, etc.). And balancing is a pain in the ass: sometimes the effect is barely visible, from another viewing angle it might be too strong, or the light core is too bright, et cetera. But I guess parameter tweaking goes along with any technique you choose.

Epipolar sampling is one of the main speedups. Basically, instead of raymarching naively, you raymarch in a regular fashion with samples radiating from the screenspace position of the light source out to the edges of the screen. Then take into account edge detection, which can again be done in screenspace, for high contrast variations, and you suddenly have a lot fewer samples to go through.

1D min/max maps take advantage of the above. Epipolar sampling gives you what looks sort of like a 1D heightmap, which is then used to speed things up again.

A gross simplification, but I hope I just wrote something coherent enough. The Intel paper ends up at only a little over 2 ms on a GTX 680, at least with their lowest quality setting.
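To make the sample placement part concrete, a tiny sketch of where epipolar samples end up: each "slice" is a line from the light's screen position to a point on the screen border, with samples spread along it. Constants and names are purely illustrative.

```glsl
// Very rough sketch of epipolar sample placement: for each slice pick a point
// on the screen border, then distribute the ray-march samples on the line
// between the light's screen-space position and that border point.
// Edge detection later decides which samples really need the expensive march.
const int NUM_SLICES  = 512;   // rays radiating from the light
const int NUM_SAMPLES = 128;   // samples along each ray

// Map a slice index to a point on the [0,1]^2 screen border (counter-clockwise).
vec2 sliceEndPoint(int slice)
{
    float t = float(slice) / float(NUM_SLICES) * 4.0;
    if (t < 1.0) return vec2(t, 0.0);           // bottom edge
    if (t < 2.0) return vec2(1.0, t - 1.0);     // right edge
    if (t < 3.0) return vec2(3.0 - t, 1.0);     // top edge
    return vec2(0.0, 4.0 - t);                  // left edge
}

// UV of sample 'i' on slice 'slice', given the light's screen-space position.
vec2 epipolarSampleUV(vec2 lightScreenPos, int slice, int i)
{
    vec2 end = sliceEndPoint(slice);
    return mix(lightScreenPos, end, float(i) / float(NUM_SAMPLES - 1));
}
```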

