
Directional Radiosity?



Hi, I was looking at the 'HL2/Source Shading' paper and I was wondering about the 'Radiosity Normal Mapping' technique it describes. They say it combines radiosity and normal mapping to achieve realistic lighting results. But as I read on, I get confused by the 'basis for Radiosity Normal Mapping'. They transform that basis into tangent space and compute the lighting according to it. Later in the paper, it seems like they use three different lightmaps computed by radiosity, so I figured it has to do with that basis: three vectors, three orientations for some directional radiance computation, right? If I'm correct, how is that directional radiosity computed? I thought radiosity was global, not something directional where you can pick three vectors and get three different radiosity maps. Enlighten me, please :)

Hi,

I have similar questions. Are they using three lightmaps (one for each basis vector), or are they using three different samples from one lightmap, where the images shown for each basis vector only demonstrate the different result for each vector? If it is only one lightmap, how do they calculate the texture coordinates? That seems impossible for maps with complex UV layouts. So I think there are three different lightmaps, but then why are all of them sampled with the same sampler?

Disclaimer: All I know is what I read in the same paper you guys did.

I believe that they are using 3 separate lightmaps. What they are doing is basically computing a weighted average of the lightmaps according to the surface normal. The idea is that light impinging on a point doesn't come equally from all directions, but that there are different light fluxes depending on the direction of interest. So if there's a reddish light to the left and a bluish one to the right, the color of a surface between them varies depending on where the facets of that surface are facing.

So they are doing something like:

outputColor = dot(bumpNormal, basisVec1) * lightmap1color
            + dot(bumpNormal, basisVec2) * lightmap2color
            + dot(bumpNormal, basisVec3) * lightmap3color

In effect, the lightmap color varies depending on the surface normal.
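For concreteness, here's a minimal HLSL-style sketch of that blend, assuming the bumped normal and the three basis vectors are both expressed in tangent space (the function name is mine, not Valve's; it only illustrates the weighted combination described above):

// Hedged sketch of the blend above, not Valve's actual shader code.
// bumpNormal:        tangent-space normal fetched from the normal map
// basisVec1..3:      the three bump basis vectors, also in tangent space
// lightmap1..3color: the colors sampled from the three lightmaps
float3 BlendDirectionalLightmaps(float3 bumpNormal,
                                 float3 basisVec1, float3 basisVec2, float3 basisVec3,
                                 float3 lightmap1color, float3 lightmap2color, float3 lightmap3color)
{
    // Weight each lightmap by how much the bumped normal faces its basis direction.
    // (Clamping negative dot products is discussed further down the thread.)
    return dot(bumpNormal, basisVec1) * lightmap1color
         + dot(bumpNormal, basisVec2) * lightmap2color
         + dot(bumpNormal, basisVec3) * lightmap3color;
}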

This is a pretty elaborate system for lightmapping; after looking at the game demo, it seems like they are not getting enough out of this to be worth the effort (and cycles and texture memory), but maybe I'm wrong, especially since I didn't play the actual game. In fact it seemed like a lot of surfaces weren't even bump mapped. Does anyone else have an opinion about this?


Quote:
Original post by ganchmaster
Disclaimer: All I know is what I read in the same paper you guys did.
This is a pretty elaborate system for lightmapping; after looking at the game demo, it seems like they are not getting enough out of this to be worth the effort (and cycles and texture memory), but maybe I'm wrong, especially since I didn't play the actual game. In fact it seemed like a lot of surfaces weren't even bump mapped. Does anyone else have an opinion about this?

I've played the game.

There are some objects [rocks, notably] which are normal mapped, and several scenes where you can see the benefit of this technique [for example, bump-mapped walls and floors in Counter-Strike: Source].

Before I get to what this is so good at, I'll re-explain the technique [ganchmaster seems correct].

A surface has 3 lightmaps and a normal map [in addition to a "diffuse" map and others...]. The shader uses the normal map to blend between the colors in the 3 lightmaps.

So this essentially allows much more detailed static lighting without requiring a higher-resolution lightmap.

A normal map can be high resolution and mapped so it repeats, while the lightmap doesn't repeat, yet the result looks as if the lightmap had the resolution of the normal map. This way you get the tiny details of the normal map and the broad details of the static lighting combined together [as if you had a lightmap with the same real-world resolution as the normal map].

Get it?


The reason you might not notice much is that it just looks like a high-resolution texture [it is still static lighting], but if you see something like a lamp right next to a wall, you can make out the bumps on the wall picked out by the position of the light [not possible with a plain static lightmap, and it wouldn't look as good with only dynamic bump mapping].

The lightmaps are constructed as follows:

For ordinary radiosity lightmapping, each texel of the lightmap is calculated using a hemisphere (or hemicube) centered on the surface normal.

In the case of the three normals, they use three basis vectors (shown in a diagram in the paper), and thus three hemispheres/hemicubes, each centered on a different direction. This gives a different lighting result for each, which can then be blended using the normal map, as described by others.
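As a rough illustration of that per-texel gather (my own sketch, not from the paper; it assumes a simple hemisphere-sampling bake and uses made-up names):

// Hypothetical per-texel gather for the three directional lightmaps.
// basis1..3 are the bump basis vectors expressed in the surface's tangent frame,
// and sampleDir is an incoming sample direction in the same frame carrying
// radiance 'incomingLight'. Each lightmap accumulates a cosine-weighted
// contribution relative to its own basis direction instead of the surface normal.
void AccumulateSample(float3 sampleDir, float3 incomingLight,
                      float3 basis1, float3 basis2, float3 basis3,
                      inout float3 lightmap1, inout float3 lightmap2, inout float3 lightmap3)
{
    lightmap1 += incomingLight * saturate(dot(basis1, sampleDir));
    lightmap2 += incomingLight * saturate(dot(basis2, sampleDir));
    lightmap3 += incomingLight * saturate(dot(basis3, sampleDir));
}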

I imagine a decent speedup would be to run ordinary radiosity until the system stabilizes, then create the radiosity normal maps last, using that data.

That way, you only have to maintain one radiosity map per object while gathering light, then do the full three-map gather at the end. I don't think there should be any (noticeable) drop in quality doing it that way, but there'll be a hell of a speed increase from not doing a full radiosity-normal-mapping pass on every iteration.

It is not possible to use the same sampler register for three different textures, is it? But that's what they do in the HLSL shader function GetDiffuseLightingBumped(). Is it just a mistake in the tutorial? It makes sense to use three different radiosity textures, but that also wastes a lot of texture stages, especially on first-generation shader cards like the GeForce 3, which has only four of them.

They do multi-pass rendering on the earlier cards (DX8 generation). Their little combination diagram details the passes, if I remember correctly. The simple way would be to do the static radiosity in one pass (three texture stages for the lightmaps and one for the bump map), then layer on the diffuse color (the base texture map/detail/whatever).

Edit: This didn't answer the question, hopefully I've answered it in my next post down :)

[Edited by - Drilian on March 15, 2005 11:30:44 AM]

It is clear to me that they use multi-pass rendering for older cards, but my question about the lightmap sampler register still stands.

Ah, I see, I misread your question AND I answered wrong.

The answer is, they've packed all three lightmaps into one texture, and they compute separate coordinates into that packed texture for each of them.

So it's three images packed into one texture.

Sorry about the confusion!

That raises other questions: how are they able to pack three colored textures into one? And, more interestingly, how can a UV coordinate be offset for a texture in which neighboring triangles can lie in different charts of the UV map?

And how would you calculate a texture-coordinate offset to sample the three different lightmaps from such a packed texture? It seems impossible, because you cannot access other vertex data in the pixel shader. I think there have to be three different lightmaps and that there is a mistake in the tutorial shader.

You don't need other vertex data.

Well, if you look, they have two variables:

i.lightmapTexCoord1And2
i.lightmapTexCoord3

So, basically, it's something like:
bumpCoord1.xy = i.lightmapTexCoord1And2.xy
bumpCoord2.xy = i.lightmapTexCoord1And2.zw
bumpCoord3.xy = i.lightmapTexCoord3.xy

Then, using those three sets of coordinates (taken, effectively, DIRECTLY from the pixel shader's input, which comes straight from the vertex shader), it looks up into the lightmap texture (which contains at least all of the lightmaps for the current object) at the given coordinates (bumpCoord1, 2, and 3, respectively).
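In other words, something along these lines (a sketch only, with made-up struct, semantic, and sampler names, not the shipped shader):

// Illustrative sketch: one sampler, one packed lightmap page,
// three coordinate sets interpolated from the vertex shader.
sampler LightmapSampler;

struct PS_INPUT
{
    float4 lightmapTexCoord1And2 : TEXCOORD1; // coords for lightmaps 1 and 2
    float2 lightmapTexCoord3     : TEXCOORD2; // coords for lightmap 3
};

void SamplePackedLightmaps(PS_INPUT i, out float3 lm1, out float3 lm2, out float3 lm3)
{
    lm1 = tex2D(LightmapSampler, i.lightmapTexCoord1And2.xy).rgb;
    lm2 = tex2D(LightmapSampler, i.lightmapTexCoord1And2.zw).rgb;
    lm3 = tex2D(LightmapSampler, i.lightmapTexCoord3).rgb;
}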

The rest, as they say, is history.

It's not a bug in the shader.

There was a great PDF on the HL2 rendering technology but I cannot find the link. Google got no hits either. Does anybody know the link to it?
I think it was a slide presentation covering the rendering tech with lots of images.

Drilian, you are right. All this time I thought the texture coordinates had to be manipulated in the pixel shader, but the pre-computed ones are all we need. But which coordinates are used for each basis vector? Is it the next point of the mesh intersected by the basis vector? Or is it a lightmap texel situated near the destination one?

Are you referring to the basis vectors for the actual light color calculation?

If so, those are simply:


(-1/sqrt(6), 1/sqrt(2), 1/sqrt(3))
(-1/sqrt(6), -1/sqrt(2), 1/sqrt(3))
(sqrt(2/3), 0, 1/sqrt(3))


The actual coordinates that are stored in the vertex are essentially lightmap coordinates, it's just that there are three sets of them per vertex.

The bump basis vectors are simply the three vectors described above.

Each of those basis vectors is the surface-relative "center" vector used to calculate the radiosity. So, instead of using an up vector of (0,0,1) relative to the surface, each of the three radiosity maps uses the corresponding basis vector for the light-gathering pass.

Thus, you can use the same vectors to estimate the directional components of the lighting by taking the dot product of the bump-map texel with the corresponding bumpBasis vector, then multiplying by the corresponding lightmap's color (and saturating, to keep the result from going negative when the surface normal faces away from the basis).
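Putting that together, a hedged sketch of the final combine using those basis vectors (the constant table, array layout, and function name are mine):

// Sketch of the final combine (tangent space), not Valve's shipped code.
static const float3 bumpBasis[3] =
{
    float3(-0.40825,  0.70711, 0.57735), // (-1/sqrt(6),  1/sqrt(2), 1/sqrt(3))
    float3(-0.40825, -0.70711, 0.57735), // (-1/sqrt(6), -1/sqrt(2), 1/sqrt(3))
    float3( 0.81650,  0.0,     0.57735)  // ( sqrt(2/3),  0,         1/sqrt(3))
};

float3 CombineDirectionalLightmaps(float3 bumpNormal, float3 lm[3])
{
    float3 result = 0;
    for (int k = 0; k < 3; ++k)
        result += saturate(dot(bumpNormal, bumpBasis[k])) * lm[k];
    return result;
}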

Does that answer the question? I'm not sure I'm reading it right :)

No, I "only" want to know what could be the best way to calculate the three light map coordinates for each vertex in the pre-processing step.

It seems that there are three lightmaps, which are all packed into one texture. I asked Gary McTaggart, who wrote the article, and he answered the following:


"The 3 lightmaps are packed right next to each other in a page. The vertex shader deals with calculating the proper offset into the page for the texture coordinates.

If you grab the latest sdk off of steam, we shipped some sample shaders (sdk_lightmappedgeneric I believe) that are exactly what we do for the game."


So the question of how to calculate the lightmap coordinates has disappeared: the three different texture coordinate sets exist only to sample the right parts of the packed texture.
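For illustration, here's a hypothetical vertex-shader helper in the same spirit, assuming the three lightmaps sit side by side horizontally in the page (the real layout and offsets are engine-specific, and every name here is made up):

// Hypothetical sketch: compute the three packed-lightmap coordinate sets
// in the vertex shader, assuming a simple horizontal side-by-side packing.
float2 gLightmapPageSize;   // size of the packed page, in texels
float2 gLightmapBlockSize;  // size of one lightmap block, in texels

void ComputePackedLightmapCoords(float2 baseCoord,    // 0..1 coord within one block
                                 float2 blockOrigin,  // texel origin of block 1 in the page
                                 out float4 coord1And2, out float2 coord3)
{
    float2 blockStep = float2(gLightmapBlockSize.x, 0) / gLightmapPageSize;
    float2 c0 = (blockOrigin + baseCoord * gLightmapBlockSize) / gLightmapPageSize;
    coord1And2.xy = c0;                    // lightmap 1
    coord1And2.zw = c0 + blockStep;        // lightmap 2
    coord3        = c0 + 2.0 * blockStep;  // lightmap 3
}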
