Sphet

Lightmapper tutorials or help?

Hey all, Happy New Year. I'd like to write a lightmapper but don't really know where to start. Can anyone give me any links to tutorials? I haven't had much luck looking myself. I don't understand how to do the mapping for the second UV channel without destroying any coherency in the vertex data. Also, since the mapper uses ray tracing, should I be supersampling the geometry so that I don't miss small pieces of geometry? Any tips or hints or pointers on this - I'm just getting started.

I've been thinking about this problem some more - once I map all the faces into the shadow map, do I walk across the shadow map and cast rays from the light source to those faces at some resolution, or is it the other way around?

Quote:
Original post by Sphet
I've been thinking about this problem some more - once I map all the faces into the shadow map, do I walk across the shadow map and cast rays from the light source to those faces at some resolution, or is it the other way around?


I assume that by "shadow map" you mean "lightmap".

Theoretically, the two approaches are equivalent and should yield the same result, BUT it might be beneficial for you to start from the light source rather than the face, or vice versa. The reason is that you want your rays to intersect your geometry as soon as possible, so as to minimize the number of collision checks. If you implement a data structure that tells you where it is best to start from, you will see performance gains.
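As a minimal sketch of the per-texel visibility test (scene_intersects_segment is a hypothetical query against whatever spatial structure you end up building; the important part is that it returns as soon as it finds any occluder between the two points):

typedef struct { float x, y, z; } vec3;

/* Hypothetical early-out query against your spatial data structure. */
extern int scene_intersects_segment(vec3 from, vec3 to);

/* Returns the texel's luminance for a point light with 1/d^2 falloff. */
float shade_texel(vec3 texel_pos, vec3 light_pos, float intensity)
{
    if (scene_intersects_segment(texel_pos, light_pos))
        return 0.0f;                          /* texel is in shadow */
    float dx = light_pos.x - texel_pos.x;
    float dy = light_pos.y - texel_pos.y;
    float dz = light_pos.z - texel_pos.z;
    return intensity / (dx * dx + dy * dy + dz * dz);
}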

When I was going through lightmapping, I found GameDev and www.flipcode.com to be the best pools of knowledge, so search them as much as you can.

After you have understood the theory behind lightmapping, I suggest you study Quake's source code to see how the id team did it.

Quote:
Original post by head_hunter
I assume that by "shadow map" you mean "lightmap". [...]

Yes, of course I mean light maps.

I'll keep digging. Thanks for the tip about minimizing collision tests, that makes sense.

So I am continuing with this work and have got my triangles planar mapped. Now I need to walk through my lightmap and find world space coordinates for each texel.

Given that I have a 2D lightmap that contains some triangle in it, what is the process for finding the world space coordinate for that 2D point? I can obviously interpolate the UV associated with the texel, but I am having a hard time figuring out how to transform a UV back to the triangle and from there into world space.

Any help on this would be appreciated.

Once you have found which face the given texel belongs to in texture space, you also know which face it belongs to in world space. The next step is to find a measure of "where" the texel lies on the planar mapped face and apply it to the world space coordinates of that face to project the texel in 3D.

This is done by constructing a bounding rectangle around the 2D face and normalizing the texel's UV coordinates like so:

norm_u = (u - min_u) / (max_u - min_u);
norm_v = (v - min_v) / (max_v - min_v);

min_u, min_v, max_u and max_v pertain to the face and u, v should be within the [min_u, max_u] and [min_v, max_v] ranges.


Next, just as in 2D, the face to which the texel belongs is also bounded by a rectangle in 3D. The only problem now is that, being in 3D space, you will have to calculate the "missing" third coordinate so as to invert the planar mapping. To do this, you only need to store which axis was the primary axis of the face. If it was, say, Z then:

tex_x = min_x + norm_u * (max_x - min_x);
tex_y = min_y + norm_v * (max_y - min_y);
tex_z = origin + norm_u * uvec + norm_v * vvec;

where, as before, min_x, min_y, max_x and max_y pertain to the face and origin, uvec and vvec are:

origin = -(face_a * min_x + face_b * min_y + face_d) / face_c;
uvec = -(face_a * max_x + face_b * min_y + face_d) / face_c - origin;
vvec = -(face_a * min_x + face_b * max_y + face_d) / face_c - origin;

Note that face_a, face_b, face_c and face_d are the plane parameters of the face.

The above was typed from memory so I can't guarantee correctness. The principle should be sound though...
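Putting it together, a minimal sketch in C (the Face struct is just a hypothetical container for the per-face data; since z is affine in x and y over the plane, evaluating the plane equation directly at (x, y) gives the same result as the origin/uvec/vvec form above):

typedef struct {
    float min_u, max_u, min_v, max_v;  /* face bounds in lightmap space */
    float min_x, max_x, min_y, max_y;  /* face bounds on the XY plane   */
    float a, b, c, d;                  /* plane: ax + by + cz + d = 0   */
} Face;

/* Valid for faces whose primary (dropped) axis is Z, so c != 0. */
void texel_to_world(const Face *f, float u, float v,
                    float *x, float *y, float *z)
{
    float norm_u = (u - f->min_u) / (f->max_u - f->min_u);
    float norm_v = (v - f->min_v) / (f->max_v - f->min_v);

    *x = f->min_x + norm_u * (f->max_x - f->min_x);
    *y = f->min_y + norm_v * (f->max_y - f->min_y);
    *z = -(f->a * *x + f->b * *y + f->d) / f->c;
}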

Quote:
Original post by head_hunter
Once you have found which face the given texel belongs to in texture space, you also know which face it belongs to in world space. [...]


head_hunter, thank you for the reply.

I will try this because, although I found a working solution using barycentric coordinates (sketched below), I am getting some seam problems that I think are either related to interpolation of position or, most likely, to the UV generation of each face not lining up with texel centers. The problem only happens in corners where a surface faces a different direction than its neighbour. My illumination routine doesn't take normals into account, so I don't understand why I'm getting these seams.
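For reference, a rough sketch of the barycentric route I'm using (vec2/vec3 are assumed to be plain structs; typed as a sketch, so no guarantees):

typedef struct { float x, y; } vec2;
typedef struct { float x, y, z; } vec3;

/* Given the triangle's lightmap UVs (uv0, uv1, uv2) and world-space
   vertices (p0, p1, p2), map a texel's (u, v) back to world space.
   Returns 0 if the UV triangle is degenerate. */
int uv_to_world(vec2 uv0, vec2 uv1, vec2 uv2,
                vec3 p0, vec3 p1, vec3 p2,
                vec2 uv, vec3 *out)
{
    vec2 e1 = { uv1.x - uv0.x, uv1.y - uv0.y };
    vec2 e2 = { uv2.x - uv0.x, uv2.y - uv0.y };
    vec2 q  = { uv.x  - uv0.x, uv.y  - uv0.y };

    float det = e1.x * e2.y - e1.y * e2.x;
    if (det == 0.0f)
        return 0;

    float s = (q.x * e2.y - q.y * e2.x) / det;
    float t = (e1.x * q.y - e1.y * q.x) / det;

    /* The texel is inside the triangle when s >= 0, t >= 0, s + t <= 1. */
    out->x = p0.x + s * (p1.x - p0.x) + t * (p2.x - p0.x);
    out->y = p0.y + s * (p1.y - p0.y) + t * (p2.y - p0.y);
    out->z = p0.z + s * (p1.z - p0.z) + t * (p2.z - p0.z);
    return 1;
}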

Nonetheless, I'll give yours a try.

Well, things have been moving along pretty well. I've got my UV generation code working and multiple triangles glued together. Now I've hit the next stumbling block.

If I render everything without bilinear filtering I get the correct results - no pixel leaks, the UVs are correct, the seams line up, everything. When I turn on filtering I get bleeding. Of course this is expected behaviour; I understand how bilinear filtering works and I know what I need to do. Essentially, I want to grow my lightmap around the edges so that bilinear filtering works correctly. Unfortunately, I'm not sure of the best way to do this. Does anyone have any ideas?

- S

Quote:
Original post by Hydrael
Hi there,

you might want to look at this article - there is a part that deals with bleeding.

Greets,

Chris



Chris,

Thanks for the reply. I've already read this article and implemented a fill routine similar to the one it describes. I still get subtle edges that I can't explain. Perhaps no one will notice them, but I'd like to understand where they come from. I am using a test case where the inside of a box (normals flipped inward) is lightmapped with a single point light. The lighting calculation for the lightmap only includes attenuation, not normals, so I should see a consistent colour gradient across edges and creases in the walls, but I don't; I see subtle seams. This is, I think, because I fill missing pixels with an average of the surrounding legal pixels rather than with a valid lighting result from adjacent triangles that were projected onto other planes.
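For what it's worth, the fill pass looks roughly like this (a minimal sketch; coverage marks which texels were actually rasterized by some face, and one pass pads a one-texel border - for wider padding, mark the filled texels as covered and run it again):

/* Fill every uncovered texel with the average of its covered
   8-neighbours so bilinear filtering blends with sane values.
   Only covered texels are read, so the pass is order-independent. */
void dilate_lightmap(float *lum, const unsigned char *coverage,
                     int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (coverage[y * w + x])
                continue;
            float sum = 0.0f;
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= w || sy >= h)
                        continue;
                    if (!coverage[sy * w + sx])
                        continue;
                    sum += lum[sy * w + sx];
                    ++n;
                }
            }
            if (n > 0)
                lum[y * w + x] = sum / n;
        }
    }
}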

Nonetheless, I will simply continue and see if the artifact is reduced when I increase my texture resolution.
