Directional Radiosity?

Started by
17 comments, last by Lightrocker 19 years, 1 month ago
Hi, I was looking at the 'HL2/Source Shading' paper and wondering about the 'Radiosity Normal Mapping' technique it describes. They say it combines radiosity and normal mapping to achieve realistic lighting results, but as I read on I get confused by the 'Basis for Radiosity Normal Mapping'. They transform that basis into tangent space and compute the lighting according to it. Later in the paper it looks like they use three different lightmaps computed by radiosity, so I assume that relates to the basis: three vectors, three orientations for some kind of directional radiance computation, right? If I'm correct, how is that directional radiosity computed? I thought radiosity was global, not directional, yet you can pick three vectors and get three different radiosity maps. Enlighten me, please :)
YengaMatiC for the people [Pandora's Box project]
Hi,

I have similar questions. Are they using three lightmaps (one per basis vector), or three different samples from a single lightmap, with the images shown for each basis vector only demonstrating the different result for every vector? If it is only one lightmap, how do they calculate the texture coordinates? That seems impossible for maps with complex UV layouts. So I think there are three different lightmaps, but then why are all of them sampled with the same sampler?
Disclaimer: All I know is what I read in the same paper you guys did.

I believe that they are using 3 separate lightmaps. What they are doing is basically computing a weighted average of the lightmaps according to the surface normal. The idea is that light impinging on a point doesn't come equally from all directions, but that there are different light fluxes depending on the direction of interest. So if there's a reddish light to the left and a bluish one to the right, the color of a surface between them varies depending on where the facets of that surface are facing.

So they are doing something like:

outputColor = dot(bumpNormal, basisVec1) * lightmap1color
            + dot(bumpNormal, basisVec2) * lightmap2color
            + dot(bumpNormal, basisVec3) * lightmap3color

In effect, the lightmap color varies depending on the surface normal.
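
For concreteness, here is a rough ps_2_0-style HLSL sketch of that blend. The sampler and variable names are mine, not Valve's, and the exact weighting (I've just clamped the dot products) is a guess at what the paper does:

// Three lightmaps, one per basis direction, plus the tangent-space normal map.
sampler LightmapSampler0;   // radiosity gathered along basis vector 1
sampler LightmapSampler1;   // radiosity gathered along basis vector 2
sampler LightmapSampler2;   // radiosity gathered along basis vector 3
sampler NormalMapSampler;

// The three basis directions in tangent space, supplied by the application.
float3 g_BasisVec1;
float3 g_BasisVec2;
float3 g_BasisVec3;

float4 main( float2 lightmapUV : TEXCOORD0,
             float2 normalUV   : TEXCOORD1 ) : COLOR
{
    // Unpack the tangent-space normal from [0,1] to [-1,1].
    float3 bumpNormal = tex2D( NormalMapSampler, normalUV ).xyz * 2.0f - 1.0f;

    // Weight each lightmap by how much the bumped normal faces its basis vector.
    float w1 = saturate( dot( bumpNormal, g_BasisVec1 ) );
    float w2 = saturate( dot( bumpNormal, g_BasisVec2 ) );
    float w3 = saturate( dot( bumpNormal, g_BasisVec3 ) );

    float3 outputColor = w1 * tex2D( LightmapSampler0, lightmapUV ).rgb
                       + w2 * tex2D( LightmapSampler1, lightmapUV ).rgb
                       + w3 * tex2D( LightmapSampler2, lightmapUV ).rgb;

    return float4( outputColor, 1.0f );
}

Multiply that by the diffuse (albedo) texture and you have the bumped static lighting.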

This is a pretty elaborate system for lightmapping; after looking at the game demo, it seems like they are not getting enough out of this to be worth the effort (and cycles and texture memory), but maybe I'm wrong, especially since I didn't play the actual game. In fact it seemed like a lot of surfaces weren't even bump mapped. Does anyone else have an opinion about this?


Quote:Original post by ganchmaster
Disclaimer: All I know is what I read in the same paper you guys did.
This is a pretty elaborate system for lightmapping; after looking at the game demo, it seems like they are not getting enough out of this to be worth the effort (and cycles and texture memory), but maybe I'm wrong, especially since I didn't play the actual game. In fact it seemed like a lot of surfaces weren't even bump mapped. Does anyone else have an opinion about this?

I've played the game.

There are some objects [rocks, notably] that are normal mapped, and several scenes where you can see the benefit of this technique [for example, bump-mapped walls and floors in Counter-Strike: Source].

Before I get to what this is so good at, I'll re-explain the technique [ganchmaster seems correct].

A surface has three lightmaps and a normal map [in addition to a "diffuse" map and others...]. The shader uses the normal map to blend between the colors in the three lightmaps.

So... this essentially allows much more detailed static lighting without requiring a higher-resolution lightmap.

The normal map can be high resolution and mapped so it repeats, while the lightmap doesn't repeat, yet the result looks as if the lightmap had the resolution of the normal map. You get the tiny details of the normal map and the broad details of the static lighting combined [as if you had a lightmap with the same real-world resolution as the normal map].

Get it?


The reason you wouldn't notice much is that it still just looks like a high-resolution texture [it is still static lighting]. But if you see something like a lamp right next to a wall, you can make out the bumps on the wall picked out by the position of the light [not possible with plain static lighting, and it wouldn't look as good with only dynamic bump mapping].
The lightmaps are constructed as follows:

For ordinary radiosity lightmapping, each texel of the lightmap is calculated by gathering light over a hemisphere (or hemicube) oriented around the surface normal of the wall.

In the case of the three basis vectors (they're shown in a diagram in the paper), they use three hemispheres/hemicubes, each oriented along a different basis vector instead of the surface normal. This gives a different lighting result for each, which can then be blended using the normal map, as described by others.
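
For reference, if I remember the diagram right, the three basis vectors form an orthonormal set in tangent space, each tilted the same amount away from the surface normal; double-check the paper before trusting the exact numbers:

// Tangent-space bump basis as I recall it from the paper (orthonormal,
// z is the unbumped surface normal direction):
static const float3 bumpBasis[3] =
{
    float3(  0.816497f,  0.000000f, 0.577350f ),   // (  sqrt(2/3),          0, 1/sqrt(3) )
    float3( -0.408248f,  0.707107f, 0.577350f ),   // ( -1/sqrt(6),  1/sqrt(2), 1/sqrt(3) )
    float3( -0.408248f, -0.707107f, 0.577350f )    // ( -1/sqrt(6), -1/sqrt(2), 1/sqrt(3) )
};

During the bake, each of these would be rotated into world space per lightmap texel and the incoming light gathered around that direction instead of around the face normal, giving the three lightmaps.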

I imagine a decent speedup would be to render normal radiosity until the system stabilizes, then create the radiosity normal maps last, using that data.

That way, you only have to render one radiosity map per object while gathering light, then do the full-on bit at the end. I don't think there should be any (noticeable) drop in quality by doing it that way, but there'll be a hell of a speed increase by not doing a full radiosity normal mapping pass each time.
It is not possible to use the same sampler register for three different textures, is it? But that is what they do in the HLSL shader function GetDiffuseLightingBumped(). Is it just a mistake in the paper? Using three different radiosity textures makes sense, but it also eats a lot of texture stages, especially on first-generation shader cards like the GeForce 3, which only has four of them.
They do multi-pass rendering on the earlier cards (DX8 generation). Their little combination diagram details the passes, if I remember correctly. The simple way would be to do the static radiosity in one pass (using three texture stages for the lightmaps and one for the normal map), then layer on the diffuse color (the base texture map/detail/whatever).

Edit: This didn't answer the question, hopefully I've answered it in my next post down :)

[Edited by - Drilian on March 15, 2005 11:30:44 AM]
It is clear to me that they use multi-pass rendering for older cards, but the question about the lightmap sampler register still stands.
Ah, I see, I misread your question AND I answered wrong.

The answer is, they've packed all three lightmaps into one texture, and they compute different texture coordinates for each of the packed lightmaps.

So it's using three images packed into one texture.

Sorry about the confusion!
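
To make that concrete, here is how sampling one packed texture might look. I'm assuming the three lightmap pages are simply stacked vertically inside the texture, so you offset V by a third per page; the names and the actual packing are my guesses, not necessarily what Source does:

sampler PackedLightmapSampler;   // one texture containing all three lightmap pages

// page = 0, 1 or 2; I'm assuming the UV chart layout is identical in every
// page, only shifted by a constant, so the offset stays valid per triangle.
float3 SamplePackedLightmap( float2 lightmapUV, float page )
{
    float2 packedUV = float2( lightmapUV.x, ( lightmapUV.y + page ) / 3.0f );
    return tex2D( PackedLightmapSampler, packedUV ).rgb;
}

If that is roughly how it works, it would also explain why a single sampler register is enough.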
That raises other questions: how are they able to pack three colored textures into one? And, much more interesting: how can a UV coordinate be offset for a texture in which neighboring triangles can lie in different charts of the UV map?

This topic is closed to new replies.
