Lighting in caves

18 comments, last by Lightness1024 11 years, 7 months ago

I think I would use a trigger plane at the entrance, which controls the scene's ambience. As you approach, the trigger activates and records your distance from the plane; as this distance decreases (you approach the entrance), dim the ambient... at a certain point, perhaps when you hit some -r distance from the plane, leave it alone, and reverse it on the way out.
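Something like this, as a minimal sketch (assumptions: the plane is given as a point plus a unit normal pointing out of the cave, and the falloff radius r is a tuning knob; all names are made up):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Signed distance from the player to the trigger plane at the cave mouth.
// Positive = outside the cave, negative = inside.
float signedDistance(const Vec3& playerPos, const Vec3& planePoint, const Vec3& planeNormal) {
    Vec3 d = { playerPos.x - planePoint.x, playerPos.y - planePoint.y, playerPos.z - planePoint.z };
    return dot(d, planeNormal); // planeNormal assumed unit length, pointing out of the cave
}

// Ambient scale: 1 at distance >= +r outside, 0 at distance <= -r inside,
// with a smooth ramp in between. Works the same way in and out.
float ambientScale(float signedDist, float r) {
    float t = std::clamp((signedDist + r) / (2.0f * r), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t); // smoothstep, to avoid a visible pop
}
```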


That would work in a non-dynamic world. The issue is how you get a trigger plane to exist in a world that is laid out a certain way and then modified by the player into something else. Conventional methods cannot apply, since we have no linear control over where the player goes.
I can think of ways, but how heavyweight they are is a different question. I appreciate that directional lighting with occlusion would give too drastic an effect, e.g. the area next to a cliff in mid-afternoon would be pitch black. I doubt that you want to do a full realtime radiosity/global illumination solution, but you could perhaps cannibalise the techniques for a very coarse voxel grid and a limited maximum range for the radiosity calculations. A couple of compendiums of links follow:

http://realtimeradiosity.com/
http://raytracey.blogspot.co.nz/2011/06/sparse-voxel-octree-with-real-time.html
What I did here: http://pic.twitter.com/foX0EZvS was trace a bunch of rays for each surface. The Minecraft thing is essentially a special case of the ray tracing where you only use a single ray pointing straight up. The problem with this is that it's pretty expensive, and it becomes fairly hard to determine what has to be updated when a single block is removed; you end up updating a whole bunch of surrounding blocks, etc.
From a results point of view this looks very nice though. I spent a lot of time thinking about optimizations like mipmapping the terrain and using that to accelerate the ray tracing. But the problem is always that you will miss some sort of detail when doing that (thin walls become transparent, etc.).
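For reference, the single-upward-ray special case is cheap enough to sketch in a few lines (the grid layout and names here are made up):

```cpp
#include <vector>

// A sketch of the Minecraft-style special case: one ray straight up per column.
// 'solid' is a hypothetical dense occupancy grid, indexed as [x][z][y].
struct VoxelGrid {
    int sizeX, sizeY, sizeZ;
    std::vector<bool> solid; // sizeX * sizeZ * sizeY entries
    bool isSolid(int x, int y, int z) const {
        return solid[(x * sizeZ + z) * sizeY + y];
    }
};

// Returns 1.0 if the block at (x, y, z) can see the sky straight up, else 0.0.
// Removing one block only affects the column below it, which is why this
// special case is cheap; with many rays per surface the set of blocks to
// re-update grows quickly, as described above.
float skyVisibility(const VoxelGrid& g, int x, int y, int z) {
    for (int yy = y + 1; yy < g.sizeY; ++yy)
        if (g.isSolid(x, yy, z))
            return 0.0f;
    return 1.0f;
}
```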
Interesting. Would you be willing to share concepts at some point? What you are doing could be of use, perhaps.
I think the best and fastest way is this. We already have light propagating for each block. In the GBuffer, I can pass those lighting values to a channel. Then, in the directional lighting pass, apply the ambient lighting according to what's in the channel.
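The combine itself could be as simple as this (sketched as plain C++ mirroring the shader math; every name here is made up):

```cpp
struct Vec3 { float x, y, z; };

static Vec3 operator*(const Vec3& a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 operator*(const Vec3& a, const Vec3& b) { return { a.x * b.x, a.y * b.y, a.z * b.z }; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// blockLight: the per-block propagated value stored in a spare GBuffer channel (0..1).
// ndotl: the usual clamped N.L for the directional (sun) light.
// shadow: the shadow-map visibility term.
Vec3 shadeDirectional(Vec3 albedo, Vec3 sunColor, Vec3 ambientColor,
                      float ndotl, float shadow, float blockLight) {
    Vec3 direct  = sunColor * (ndotl * shadow);
    Vec3 ambient = ambientColor * blockLight; // caves go dark because blockLight ~ 0
    return albedo * (direct + ambient);
}
```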
What you want is real-time global illumination, and this is a tough topic.
You can try to go with the Crytek method, Light Propagation Volumes, for instance, but it does not handle multiple light bounces well, and those are almost the only thing that physically lights a cave.

The ray idea is a simplification of the "final gather" step in a radiosity calculation. Radiosity is traditionally calculated this way:
use the light source to create a photon map (the equivalent of a stochastic shadow map), then for each surfel gather photons over a hemisphere; this allows you to evaluate indirect occlusion.
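To make the hemisphere gather concrete, here is a minimal sketch of the visibility part for a single surfel (the rayEscapes scene query is hypothetical and supplied by the caller):

```cpp
#include <cmath>
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };

// Cosine-weighted hemisphere gather around the unit normal n. The fraction
// of rays that escape to the sky approximates the surfel's diffuse sky
// visibility; with few rays the result is noisy, as noted later in this post.
float gatherSkyVisibility(const Vec3& p, const Vec3& n, int numRays,
                          const std::function<bool(const Vec3&, const Vec3&)>& rayEscapes) {
    std::mt19937 rng(1234u);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    // Build an orthonormal frame {t, b, n} around the normal.
    Vec3 a = std::fabs(n.x) > 0.5f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 b = { n.y * a.z - n.z * a.y, n.z * a.x - n.x * a.z, n.x * a.y - n.y * a.x };
    float len = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
    b = { b.x / len, b.y / len, b.z / len };
    Vec3 t = { b.y * n.z - b.z * n.y, b.z * n.x - b.x * n.z, b.x * n.y - b.y * n.x };

    int escaped = 0;
    for (int i = 0; i < numRays; ++i) {
        float u1 = uni(rng), u2 = uni(rng);
        float r = std::sqrt(u1), phi = 6.2831853f * u2;
        float lx = r * std::cos(phi), ly = r * std::sin(phi);
        float lz = std::sqrt(1.0f - u1); // cosine-weighted sampling
        Vec3 dir = { lx * t.x + ly * b.x + lz * n.x,
                     lx * t.y + ly * b.y + lz * n.y,
                     lx * t.z + ly * b.z + lz * n.z };
        if (rayEscapes(p, dir)) ++escaped;
    }
    return float(escaped) / float(numRays);
}
```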

In a less correct way, there is another method using spherical harmonics: they encode the irradiance at each vertex of the scene; in your case you could go with each voxel. But encoding irradiance requires integrating the environment over the sphere, multiple times, which is not real-time at all.
The DirectX SDK includes some samples of that method (e.g. the little spaceship in a Greek-Parthenon-like temple, based on a paper from ATI).
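The projection onto SH is compact enough to show; a sketch (monochrome for brevity, with samples assumed uniformly distributed over the sphere; basis constants are the standard ones from the Ramamoorthi paper listed further down):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// First 9 real SH basis functions (bands 0..2), evaluated at unit direction d.
void shEvaluate(const Vec3& d, float out[9]) {
    out[0] = 0.282095f;
    out[1] = 0.488603f * d.y;
    out[2] = 0.488603f * d.z;
    out[3] = 0.488603f * d.x;
    out[4] = 1.092548f * d.x * d.y;
    out[5] = 1.092548f * d.y * d.z;
    out[6] = 0.315392f * (3.0f * d.z * d.z - 1.0f);
    out[7] = 1.092548f * d.x * d.z;
    out[8] = 0.546274f * (d.x * d.x - d.y * d.y);
}

struct RadianceSample { Vec3 dir; float value; }; // one env-map texel, monochrome

// Monte Carlo projection of the sampled environment onto 9 SH coefficients:
// coeff_i ~= (4*pi / N) * sum( L(dir) * Y_i(dir) ).
void shProject(const std::vector<RadianceSample>& samples, float coeffs[9]) {
    for (int i = 0; i < 9; ++i) coeffs[i] = 0.0f;
    float sh[9];
    for (const RadianceSample& s : samples) {
        shEvaluate(s.dir, sh);
        for (int i = 0; i < 9; ++i) coeffs[i] += s.value * sh[i];
    }
    const float sphereArea = 4.0f * 3.14159265f;
    for (int i = 0; i < 9; ++i) coeffs[i] *= sphereArea / float(samples.size());
}
```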

Other real-time GI methods: the first paper from Dachsbacher, Reflective Shadow Maps. But this requires screen-space gathering with 400 samples per pixel, which is crazy slow. In the paper they do it at a reduced resolution, but still...
CryEngine 3 has an implementation of that last method for everything that is outside the light propagation volume cascades (far world, thus a small gathering disk on screen, thus fast).

The most complex real-time GI method: Imperfect Shadow Maps.
This consists of rendering approximated scene geometry with point rendering, from the point of view of virtual point lights distributed throughout the scene volume, and applying a 2D reconstruction algorithm to those little (cubic) shadow maps. This gives good volumetric information about occlusion in any direction; like a final gather in the classical ray-traced way, it can deduce the ambient occlusion in real time.

SSAO: you already have it; nothing to say on that.

In my opinion, you should go with a volume partitioning that is coarser than your basic cell, say a grid of sectors that each contain 1024 cells; for those sectors you can cheaply update a flag that records whether any cells are actually present, and propagate it up the octree that stores this grid structure.
Then, when there is an invalidation in your world (elements moving/deleted/created), you can quickly locate which sector needs to be re-gathered, and run a little custom, simplified ray-traced photon map/final gather system for that sector (which is much faster than having to do it for every surfel); the bookkeeping might look like the sketch below.
Then you sample this sector as you would a volume texture, with real trilinear sampling.
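Roughly (sizes and names here are made up):

```cpp
#include <vector>

// A sketch of the sector bookkeeping (1024 cells per sector could be
// e.g. a 16x8x8 block of cells; pick whatever matches your chunk layout).
struct Sector {
    int  occupiedCells = 0;          // incremental count, updated on edits
    bool dirty = false;              // needs its mini final-gather re-run
    float irradiance[4][4][4] = {};  // coarse gathered result, sampled trilinearly
};

struct SectorGrid {
    int nx, ny, nz;      // grid dimensions, in sectors
    int cellsPerAxis;    // sector edge length, in cells
    std::vector<Sector> sectors;

    Sector& at(int sx, int sy, int sz) {
        return sectors[(sx * ny + sy) * nz + sz];
    }

    // Call whenever a cell changes (moved/deleted/created): only the owning
    // sector is flagged, so the re-gather work stays local. A real version
    // would also propagate the occupancy flag up the enclosing octree and
    // flag neighbouring sectors to limit bleeding at sector borders.
    void onCellChanged(int cx, int cy, int cz, bool nowSolid) {
        Sector& s = at(cx / cellsPerAxis, cy / cellsPerAxis, cz / cellsPerAxis);
        s.occupiedCells += nowSolid ? 1 : -1;
        s.dirty = true;
    }
};
```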
You will need something to improve the frequency of the information at interfaces when your geometry has thin layers/walls/ceilings, because you will get "light bleeding" and also "shadow bleeding" on the outside.
Also, a final gather is always noisy because of the limited number of rays (samples) => lack of information <=> aliasing.

Or you could invent a totally new method, why not with a compute shader, that takes advantage of the blocky structure of your world... needs thinking.
Has anyone considered an approach like Instant Radiosity with deferred rendering? Maybe one virtual point light could exist per voxel?

With several dynamic lights, I could possibly see this getting expensive though...
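For anyone who wants to prototype it, a back-of-the-envelope sketch of the generation side (every name here is hypothetical; only upward-facing faces are handled, and the sun is assumed roughly overhead so the cosine term is dropped):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct VirtualPointLight {
    Vec3 position; // just above the lit voxel face
    Vec3 normal;   // face normal (for cosine-weighted emission)
    Vec3 flux;     // reflected sunlight this VPL redistributes
};

// Every sun-lit, upward-facing voxel face becomes one point light that the
// deferred renderer then draws like any other. 'litFacePositions' would
// come from a shadow map or from the ray casts discussed in this thread.
std::vector<VirtualPointLight> makeVpls(const std::vector<Vec3>& litFacePositions,
                                        const Vec3& sunIrradiance, const Vec3& albedo,
                                        float faceArea) {
    std::vector<VirtualPointLight> vpls;
    vpls.reserve(litFacePositions.size());
    for (const Vec3& p : litFacePositions) {
        VirtualPointLight v;
        v.position = { p.x, p.y + 0.01f, p.z }; // small offset avoids self-shadowing
        v.normal   = { 0.0f, 1.0f, 0.0f };
        // Each face reflects (irradiance * area * albedo) as its flux.
        v.flux = { sunIrradiance.x * albedo.x * faceArea,
                   sunIrradiance.y * albedo.y * faceArea,
                   sunIrradiance.z * albedo.z * faceArea };
        vpls.push_back(v);
    }
    return vpls;
}
```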
The first thought that comes to mind to define a cave system is to cast rays from the edges of the blocks in the area in question towards (and away from) the sun. You'll end up with an odd-shaped cone volume of sorts defining the area in the cave that's visible to the light source.

This can be used to define which blocks receive light and can be used to define the cave volume by saying "the volume that is not in the lit cone volume and below the ray cast point".
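A fixed-step march is probably the simplest way to prototype that classification (a sketch; the isSolid occupancy query is hypothetical and supplied by the caller):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// March from a point toward the sun in small steps. If no solid block is
// hit within maxDist, the point lies in the sun-lit "cone" volume;
// otherwise it belongs to the shadowed cave volume.
bool litBySun(const Vec3& p, const Vec3& sunDir /* unit, toward the sun */,
              float maxDist, bool (*isSolid)(int, int, int), float step = 0.25f) {
    for (float t = step; t < maxDist; t += step) {
        int x = int(std::floor(p.x + sunDir.x * t));
        int y = int(std::floor(p.y + sunDir.y * t));
        int z = int(std::floor(p.z + sunDir.z * t));
        if (isSolid(x, y, z))
            return false;
    }
    return true;
}
```

A proper voxel traversal (3D DDA) would visit each block exactly once instead of relying on a step size small enough not to skip thin walls.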

Possibly a brutish approach to it. I'm not really familiar with the current techniques used today.
We have been busy with moving this week so nothing has been touched on this. I can see some seriously great ideas going on here and I am excited to get working on them. I will keep you posted as to what we find and what we end up doing.

For those who are curious, we are using a deferred rendering system right now. We do not have point lights set up because we are having issues with how we are doing our depth buffer. We are using a logarithmic depth buffer and it doesn't seem to want to render the lighting properly... anyway... I'll keep you posted.
I just had an idea that could serve you here:
You could use environment maps. You don't need a high resolution (like 32x6), but they will give you the information "how much of the sky is seen from this point in space".
Then you can use a shader to convert them to irradiance maps, and keep only the spherical harmonics coefficients at the position where each environment map was rendered.

The issue is that you will need many of those to cover your whole world.
You could prepare those coefficients in a structure that places the points near interfaces (between empty space and the presence of voxels),
then invalidate the SH near any voxel that changed, and re-render the envmaps when needed.

All in all, it is exactly a final gather, just purely GPU-friendly and very light in storage.
The problem is that it could still take several seconds to prepare all the SH for a map of a few hundred meters.
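The invalidation side could be as simple as this sketch (probe layout, radius, and names are made up; the SH projection itself would be the standard 9-coefficient one from the Ramamoorthi paper below):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// One probe = one tiny env map (e.g. 32x32x6) reduced to 9 SH coefficients.
struct Probe {
    Vec3  position;      // placed near a void/solid interface
    float shR[9], shG[9], shB[9];
    bool  dirty = true;  // env map needs re-rendering + re-projection
};

struct ProbeCache {
    std::vector<Probe> probes;
    float invalidateRadius = 8.0f; // in voxels; tune to the envmap's useful range

    // Call when a voxel is added or removed: flag every probe whose lighting
    // could have changed, then re-render the dirty ones lazily, a few per
    // frame, instead of stalling on all of them at once.
    void onVoxelChanged(const Vec3& voxelPos) {
        for (Probe& p : probes) {
            float dx = p.position.x - voxelPos.x;
            float dy = p.position.y - voxelPos.y;
            float dz = p.position.z - voxelPos.z;
            if (dx * dx + dy * dy + dz * dz < invalidateRadius * invalidateRadius)
                p.dirty = true;
        }
    }
};
```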

edit:
At least 3 papers you need to check:
Spherical Harmonic Lighting: The Gritty Details, by Robin Green
Irradiance Volumes for Games, by Natalya Tatarchuk
An Efficient Representation for Irradiance Environment Maps, by Ravi Ramamoorthi and Pat Hanrahan
