dcteris

Lightmap on dynamic objects


Hello, everyone! I made a level with static lightmaps. My question is: how can I cast lightmaps onto dynamic objects, such as my characters walking around the level, so that they receive shadows? A lot of games implement this. I mean using only lightmaps (precalculated soft shadows): when a moving object enters the shadow of some other object, it receives a completely proper shadow projection, not just a darkening of the full mesh, because the soft shadow edges are also visible on it. It looks like projective shadowing. How is this done? Thanks very much!

I'm not so sure many games use that approach. Most modern games combine dynamic lights with an (ambient) lightMap, so the objects get direct lighting from, for example, the 2 closest lightsources. As for the ambient light portion, one way to do it is to place 'ambient nodes' throughout your map. These nodes collect the incoming light at that point. When objects move around, they pick the closest node(s). In the end each node contains 6 colors, just like a cubeMap, which allows normalMapping. Simple and quite effective; see the sketch below.
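A rough sketch of how a shader could blend those 6 node colors with the surface normal (an 'ambient cube'; the function and array names here are made up):

float3 AmbientCube( float3 n, float3 ambientCube[6] )
{
    // colors are stored in the order -x, +x, -y, +y, -z, +z
    float3 nSq = n * n; // squared components of a unit normal sum to 1
    return nSq.x * (n.x < 0.0 ? ambientCube[0] : ambientCube[1])
         + nSq.y * (n.y < 0.0 ? ambientCube[2] : ambientCube[3])
         + nSq.z * (n.z < 0.0 ? ambientCube[4] : ambientCube[5]);
}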


Older games probably had a fixed (ambient) color value per sector/area. This is somewhat the same as using the nodes from above, but on a bigger scale. You can also use the level lightMap for your objects. If you know where your object stands, you can check which polygon it stands on, then calculate the 2D lightMap uv coordinates for that 3D position by interpolating between the 3 polygon vertex positions (see the sketch below). Now you have a 2D coordinate that you can use in the (vertex)shader to pick the floor color. Keep in mind, though, that the floor is not always representative of the light value at that point. Maybe your object floats 2 meters above the floor, or maybe the floor is in shadow while there is light 50 cm above it.
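For example, the interpolation could be done with barycentric weights, something like this sketch (it assumes the engine already found the triangle under the object, with positions p0..p2 and lightMap uvs uv0..uv2):

float2 LightmapUvAtPosition( float3 p, float3 p0, float3 p1, float3 p2,
                             float2 uv0, float2 uv1, float2 uv2 )
{
    // standard barycentric coordinates of p within the triangle
    float3 v0 = p1 - p0, v1 = p2 - p0, v2 = p - p0;
    float d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
    float d20 = dot(v2, v0), d21 = dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    float b1 = (d11 * d20 - d01 * d21) / denom;
    float b2 = (d00 * d21 - d01 * d20) / denom;
    float b0 = 1.0 - b1 - b2;
    return b0 * uv0 + b1 * uv1 + b2 * uv2; // uv to sample the level lightMap with
}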


You can also make a lightMap for your objects. You could, for example, create a volume (3D) texture that covers your map. This is basically the same as a lightMap. Create a uniform 3D grid of points and pre-calculate the incoming light for each point (using the lightsources and the static map with its lightMap). Store these colors in the volume texture and then use it in the object vertex/fragment shader. Newer cards can do a texture lookup in the vertex-shader:

// Convert the object world position to 3D texture coordinates.
// offset and factor map the level bounds into the [0,1] range.
float3 uvw = offset + objectWorldPos.xyz * factor; // or something like it
float3 color = tex3D( objectLightMap, uvw ).rgb;   // tex3D returns a float4

Optionally, use multiple (6) textures in case you want to do normalMapping; each map then contains the incoming light from a certain direction (-x, +x, -y, +y, -z, +z), as sketched below. The good thing about 3D textures is that your color also mixes with the neighbour pixels, so you get smooth transitions while walking for free.
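A sketch of that 6-texture lookup, weighting the 3 relevant directions with the squared normal components (the sampler names are invented):

float3 SampleDirectionalVolume( float3 uvw, float3 n,
                                sampler3D mapNX, sampler3D mapPX,
                                sampler3D mapNY, sampler3D mapPY,
                                sampler3D mapNZ, sampler3D mapPZ )
{
    float3 nSq = n * n;
    float3 cx = n.x < 0.0 ? tex3D(mapNX, uvw).rgb : tex3D(mapPX, uvw).rgb;
    float3 cy = n.y < 0.0 ? tex3D(mapNY, uvw).rgb : tex3D(mapPY, uvw).rgb;
    float3 cz = n.z < 0.0 ? tex3D(mapNZ, uvw).rgb : tex3D(mapPZ, uvw).rgb;
    return nSq.x * cx + nSq.y * cy + nSq.z * cz;
}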

This approach has some memory issues, though. Big levels = big textures, and 3D textures can grow explosively fast. For example, if you place a node for each cubic meter, a 500x500x20 meter level already needs at least 14.3 MB (5,000,000 texels at 3 bytes each for RGB, 8 bits per channel), and even more when you want HDR lightMaps. And the resolution is still not that high for a map that size. An annoying problem is that probes might end up outside the level (below the floor, behind a wall, etc.). That's not only a waste of memory, it also gives wrong values to objects near those points. You can fix this by shifting the nodes when creating the lightMap, but the textures remain large nevertheless.

On the other hand, indoor levels are usually not that big, and large outdoor levels can often do with low-res lightMaps, as the light value is roughly the same everywhere.

Greetings,
Rick

spek's option (3D textures) is a very good way to go, and it looks quite good when implemented. But if you want well-defined shadows cast from your lightmapped geometry onto the dynamic objects, you may also want to check this option:

In my engine, when the camera enters the bounding volume of a light source, a shadow map is generated (only once) for the lightmapped geometry. I then use this shadow map to cast static shadows onto dynamic objects. With hardware shadow map acceleration turned on, the FPS drop is almost unnoticeable. I use this only for lights marked with a "static shadows" flag.
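Roughly, the dynamic object's shader then tests its pixels against that cached map; a sketch (the names are made up, and the matrix/bias conventions depend on your API):

float StaticShadowTerm( float3 worldPos, float4x4 lightViewProj,
                        sampler2D staticShadowMap )
{
    float4 lp = mul( lightViewProj, float4(worldPos, 1.0) );
    lp.xyz /= lp.w;                    // to normalized device coordinates
    float2 uv = lp.xy * 0.5 + 0.5;     // to [0,1] texture space (y may need a flip)
    float stored = tex2D( staticShadowMap, uv ).r;
    return (lp.z - 0.002) > stored ? 0.0 : 1.0; // small depth bias; 0 = in shadow
}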

I also created an "ambient source" entity that generates a dynamic ambient color from the radiosity solution, in order to apply an ambient term to dynamic objects that are inside the bounding volume of the source.

In this screenshot, the walls and floor are using lightmaps, and the balls are the dynamic geometry. Note the shadows projected from the wall onto the balls and from the balls onto the floor.



HTH. Regards,

jpventoso
The technique you have described doesn't explain the shadows on the wall behind the dynamic spheres. Is it SSAO?

Quote:
Original post by Viik
jpventoso
The technique you have described doesn't explain the shadows on the wall behind the dynamic spheres. Is it SSAO?


I forgot to mention that the screenshot was taken with SSAO enabled ;)
When HQ SSAO is enabled, the engine uses SSAO for dynamic objects and both SSAO and precomputed AO for static objects (with a write mask on the depth/normal buffer to tell which pixels are static and which are dynamic).

Here's a screenshot without SSAO (with precomputed AO for lightmapped objects only).



Regards,

That shot is well done! How do you implement that "ambient source"?

In my engine I create a low-res 3D grid of probes. After that, probes outside the world are removed or shifted inside. Each probe collects the incoming light from 6 directions (like a cubeMap), dynamically, with some sort of raytracing done on the CPU. The results are written into 6 3D textures, where each sector has its own set of textures. This enables realtime ambient lighting, but to limit the processing needs and memory usage, I only place a few probes. Distant sectors also disable realtime lighting and switch over to a simpler technique, using pre-calculated ambient occlusion values combined with an overall sector light color.

Greetings,
Rick

Thanks!

Well, my "ambient sources" are very similar to your probes. But, instead of saving it on a 3D texture, I do the interpolation on the CPU and pass the result to the shader via constants.

For the rest, it works quite similarly to your implementation. A minimal set of ambient sources is automatically generated, and the designer can add more sources in the areas he considers necessary.

Another difference I see is that you're using 6 colors from 6 directions (I assume you're doing that in order to achieve normal mapping). So I imagine you're using your probes to get some direct lighting as well...

Regards,
Juan Pablo

>> Another difference I see is that you're using 6 colors from 6 directions (I assume you're doing that in order to achieve normal mapping).

Yep, normalMapping.

The direct lighting is done with shadowMaps (spotlights, pointlights and cascaded directional lightsources). Well, not all lightsources use shadowMaps, just to gain speed. ShadowMaps are updated when the lightSource moves or when the objects in its volume are moving; some others only update once, when created. It depends on the settings in the editor. I think it's the same as what you're doing, and SSAO is also used for some fine ambient detail.

The ambient lighting is done via a realtime lightMap. Each sector has a (small) lightMap that is updated continuously when it's close to the camera. To make this process fast, the maps are small and the CPU does most of the work, using pre-calculated relations between the patches. Each patch in the lightMap knows from which other patches it can collect (indirect) light, so the CPU can easily update an entire map with multiple bounces.

First the GPU renders the world with direct lighting/shadowMaps/emissive colors into a flat 2D atlas texture (= a lightMap with direct lighting only). The CPU reads this map back and then spreads the light out to the other patches over multiple bounces. Then the final map is written back to the GPU. Actually 3 textures are written, as each patch collects incoming light from 3 directions (like Half-Life 2's radiosity normalMapping, sketched below).
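The pixel shader blend for those 3 directions is typically something like this (one common formulation of the Half-Life 2 basis; not necessarily exactly what I do):

// the 3 fixed tangent-space basis directions from Half-Life 2
static const float3 HL2Basis[3] =
{
    float3( 0.81650,  0.00000, 0.57735), // ( sqrt(2/3),          0, 1/sqrt(3))
    float3(-0.40825,  0.70711, 0.57735), // (-1/sqrt(6),  1/sqrt(2), 1/sqrt(3))
    float3(-0.40825, -0.70711, 0.57735)  // (-1/sqrt(6), -1/sqrt(2), 1/sqrt(3))
};

float3 RadiosityNormalMap( float3 tangentNormal, float3 lm0, float3 lm1, float3 lm2 )
{
    float3 w;
    w.x = saturate( dot(tangentNormal, HL2Basis[0]) );
    w.y = saturate( dot(tangentNormal, HL2Basis[1]) );
    w.z = saturate( dot(tangentNormal, HL2Basis[2]) );
    w *= w;                              // squared weights soften the blend
    w /= max( w.x + w.y + w.z, 0.0001 ); // renormalize so the weights sum to 1
    return w.x * lm0 + w.y * lm1 + w.z * lm2;
}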

The probes used for the dynamic objects receive their indirect light in the same way, although they do not contribute to the lightMap. That means a dynamic object cannot reflect or block indirect light (although its shadow in the first pass, with direct lighting, does block light and influences the ambient result).


It's quite low-res, but SSAO improves that. I can update ~10 to ~20 lightMaps per second (depending on the sector complexity) on a dual-core CPU. Not too bad; my attempts to get realtime GI on the GPU were MUCH slower. I can't see the dynamic object results yet, though. My computer is out of order for the millionth time, and my laptop can't do 3D textures.

Greetings,
Rick
