Lightmaps on tristripped geometry?

5 comments, last by felonius 21 years, 1 month ago
Hi, I am in the process of researching light maps for our engine. In the tutorials it all seems pretty straightforward, but there is one problem I have yet to solve, and I hope I can get some help here.

The problem is that in all the light map tutorials I have seen, each triangle is mapped to new UV coordinates on the light map texture. This is fine for geometry stored as a plain list of vertices. But in our case we use indexed/tristripped geometry, which means the index list is built to reuse vertices between adjacent triangles in the mesh. That, however, requires the UV coordinates to be the same for all triangles that share a vertex. How do I get around this problem?

I can't just reuse the mapping from the original texture, since it may be wrapping the texture or even covering the same area more than once - making it impossible to guarantee that each light map is unique.

Should I just accept life and send the light map as a trilist? Any suggestions would be appreciated.

Jacob Marner, M.Sc.
Console Programmer, Deadline Games
Well, you're never going to get as good tri-stripping with lightmapping, as you'll have to split and duplicate vertices at some point in order to get a unique mapping.

But, you don't need to individually map each triangle. This is what I do to try and keep connected triangles connected:

Classify triangles into one of 6 planar directions (the +ve and -ve x, y, z axes).

Then create sub-surfaces. These are regions that are in the same planar direction as each other and are connected.

Then planar map the lightmap uv's based on each sub-surface's planar direction.

Now pack those uv sub-surfaces into a texture.

Hence, you do manage to keep some vertex sharing. E.g. a sphere will be split into 6 connected regions rather than one for each triangle (still not as good as a single region, though).
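As a rough sketch, the classification and planar projection steps might look something like this in C++ (the Vec3/Vec2 types and function names are just placeholders, not from any real code in this thread):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Face normal from three vertices (not normalised -- only the dominant axis matters here).
Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c)
{
    Vec3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
    return { e1.y * e2.z - e1.z * e2.y,
             e1.z * e2.x - e1.x * e2.z,
             e1.x * e2.y - e1.y * e2.x };
}

// 0..5 = +X, -X, +Y, -Y, +Z, -Z : pick the dominant axis of the normal.
int classifyPlanarDirection(const Vec3& n)
{
    float ax = std::fabs(n.x), ay = std::fabs(n.y), az = std::fabs(n.z);
    if (ax >= ay && ax >= az) return n.x >= 0.0f ? 0 : 1;
    if (ay >= ax && ay >= az) return n.y >= 0.0f ? 2 : 3;
    return n.z >= 0.0f ? 4 : 5;
}

// Planar mapping: drop the dominant axis to get raw (unpacked) lightmap uv's.
Vec2 planarProject(const Vec3& p, int dir)
{
    switch (dir / 2) {
        case 0:  return { p.y, p.z };  // +/-X
        case 1:  return { p.x, p.z };  // +/-Y
        default: return { p.x, p.y };  // +/-Z
    }
}
```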

I can't say whether it will still be worth tri-stripping rather than using tri-lists as I always used tri-lists, but it might be worth a try.

HTH

Matt Halpin
Hi Matt,

Thanks for the suggestion. Using tristrips is quite essential in our case, since you get a big performance boost from them on the PS2.

It is a good idea to do planar mapping on your models, and in the case of the sphere this works especially well. But what do you do when more than one triangle maps into the same area of the light map? Let's say you want to map a donut; what do you do?



Jacob Marner, M.Sc.
Console Programmer, Deadline Games
You're talking about the parts on the inside and outside of the donut that map sideways and hence have the same planar projection?

That's not actually a problem. The important point is that the triangles are split up into *connected* sub-surfaces that have the same planar mapping. This means that these 2 parts of the donut will be in different sub-surfaces, as they are not directly connected (they *are* connected, but only via triangles that are in a different planar direction, hence there will be a split). Once the sub-surfaces have been determined, you calculate projected uv's (at this point the 2 sub-surfaces will have overlapping uv's) but you don't stop there:
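A rough sketch of the flood fill that builds those connected sub-surfaces, assuming each triangle already has a planar direction from the previous step and an edge-adjacency list is available (both inputs are placeholders):

```cpp
#include <vector>

// Flood-fill triangles into sub-surfaces: two triangles end up in the same
// sub-surface only if they share an edge *and* have the same planar direction.
// Returns a sub-surface id per triangle.
std::vector<int> buildSubSurfaces(const std::vector<int>& planarDir,
                                  const std::vector<std::vector<int>>& adjacency)
{
    const int triCount = static_cast<int>(planarDir.size());
    std::vector<int> subSurface(triCount, -1);
    int nextId = 0;

    for (int seed = 0; seed < triCount; ++seed) {
        if (subSurface[seed] != -1) continue;   // already assigned
        std::vector<int> stack{ seed };
        subSurface[seed] = nextId;
        while (!stack.empty()) {
            int tri = stack.back();
            stack.pop_back();
            for (int nb : adjacency[tri]) {
                if (subSurface[nb] == -1 && planarDir[nb] == planarDir[tri]) {
                    subSurface[nb] = nextId;
                    stack.push_back(nb);
                }
            }
        }
        ++nextId;
    }
    return subSurface;
}
```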

You then work out a bounding rectangle for each sub-surface (a 2D bounding rectangle in uv space). You do this by working out the 2D convex hull in uv space and then applying a 'rotating callipers' algorithm to determine the best fit rectangle. Now rotate the uv's (and bounding rectangle) so that the rectangle is axis aligned. Then do a standard bin packing algorithm on each rectangle. Then transform the uv's so they fit in the packed rectangle. Hence each sub-surface will get a unique part of the texture.

E.g. there will be 10 sub-surfaces for a donut (1 top, 1 bottom, 2 left, 2 right, 2 up, 2 down), but each will be packed into different regions of the texture, so no overlap will occur...
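As a much simplified sketch of the packing step (using plain axis-aligned UV bounds and a naive shelf packer instead of the rotating-callipers fit and a proper bin packer; all names are placeholders):

```cpp
#include <algorithm>
#include <vector>

struct Vec2 { float u, v; };

struct Chart {                          // one sub-surface's UVs in its own projected space
    std::vector<Vec2> uvs;
    float minU, minV, width, height;    // axis-aligned bounds (filled in below)
    float packedU, packedV;             // offset in the final lightmap (filled in below)
};

// Very naive shelf packer: place charts left to right in rows of a unit square.
// Assumes the charts have already been scaled so they all fit into [0,1]x[0,1];
// a real bin packer and oriented bounding rectangles would waste far less space.
void packCharts(std::vector<Chart>& charts, float padding)
{
    for (Chart& c : charts) {
        c.minU = c.minV = 1e30f;
        float maxU = -1e30f, maxV = -1e30f;
        for (const Vec2& t : c.uvs) {
            c.minU = std::min(c.minU, t.u);  c.minV = std::min(c.minV, t.v);
            maxU   = std::max(maxU,  t.u);   maxV   = std::max(maxV,  t.v);
        }
        c.width  = (maxU - c.minU) + padding;   // padding leaves a gap between charts
        c.height = (maxV - c.minV) + padding;
    }

    float cursorU = 0.0f, cursorV = 0.0f, shelfHeight = 0.0f;
    for (Chart& c : charts) {
        if (cursorU + c.width > 1.0f) {         // start a new shelf
            cursorU = 0.0f;
            cursorV += shelfHeight;
            shelfHeight = 0.0f;
        }
        c.packedU = cursorU;
        c.packedV = cursorV;
        cursorU += c.width;
        shelfHeight = std::max(shelfHeight, c.height);

        // Translate the chart's uv's into its packed rectangle.
        for (Vec2& t : c.uvs) {
            t.u = (t.u - c.minU) + c.packedU;
            t.v = (t.v - c.minV) + c.packedV;
        }
    }
}
```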

HTH

Matt Halpin
Hi Matt,

Thanks for your suggestion. That was exactly the kind of thing I was looking for.

Do you have any experience with whether it pays off to try alternative mappings, e.g. pyramid style, or planar mapping using axes other than the principal ones?

Jacob Marner, M.Sc.
Console Programmer, Deadline Games
I have a polygon-soup-to-unwrapped-UV mapper for lightmapping and it works fine. The problem I have is that seams sometimes occur between clusters in different parts of the lightmap texture (or between clusters in different lightmap textures). What is the best way of dealing with these lighting seams?
Thanks
felonius:
Nope, I haven't tried anything else. I guess it might be possible to take the set of triangle normals and determine a set of axes that would give the best grouping, but I haven't tried that... With fewer axes, you get a greater maximum possible angle between triangle normal and projection direction, so the lightmap could look a bit sheared. But with more axes, you obviously have less chance of keeping connected triangles connected. So I'd say the 6 primary directions is a good bet...
Maybe try using a palettising algorithm on the set of triangle normals, and only allow 6 (or whatever) palette entries. Might actually work pretty well...
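As a sketch, that palettising idea could be done as a small k-means-style clustering of the triangle normals (completely untested placeholder code, assuming unit-length normals and at least k of them):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Cluster triangle normals into k projection directions, k-means style.
// Seeds are simply the first k normals; a real tool would seed more carefully.
std::vector<Vec3> clusterNormals(const std::vector<Vec3>& normals, int k, int iterations)
{
    std::vector<Vec3> centres(normals.begin(), normals.begin() + k);
    std::vector<int> assignment(normals.size(), 0);

    for (int it = 0; it < iterations; ++it) {
        // Assign each normal to the closest centre (largest dot product).
        for (size_t i = 0; i < normals.size(); ++i) {
            float best = -2.0f;
            for (int c = 0; c < k; ++c) {
                float d = normals[i].x * centres[c].x +
                          normals[i].y * centres[c].y +
                          normals[i].z * centres[c].z;
                if (d > best) { best = d; assignment[i] = c; }
            }
        }
        // Move each centre to the renormalised mean of its assigned normals.
        std::vector<Vec3> sums(k, Vec3{ 0.0f, 0.0f, 0.0f });
        for (size_t i = 0; i < normals.size(); ++i) {
            sums[assignment[i]].x += normals[i].x;
            sums[assignment[i]].y += normals[i].y;
            sums[assignment[i]].z += normals[i].z;
        }
        for (int c = 0; c < k; ++c) {
            float len = std::sqrt(sums[c].x * sums[c].x +
                                  sums[c].y * sums[c].y +
                                  sums[c].z * sums[c].z);
            if (len > 1e-6f)   // keep the old centre if its cluster went empty
                centres[c] = { sums[c].x / len, sums[c].y / len, sums[c].z / len };
        }
    }
    return centres;
}
```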

blue_knight:
Firstly, you need to duplicate edge texels in your lightmap texture. This stops bilinear filtering from sampling neighbouring (probably black) texels. Also, you need to leave a gap between packed rectangles. This is to stop bilinear filtering from sampling a different sub-surface. It can also stop smaller mip-levels from bleeding, though I don't personally enable mip-mapping for lightmaps so that's not a problem.
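A sketch of that edge-texel duplication as a simple dilation pass (the float RGB buffer and coverage-mask layout here are just assumptions):

```cpp
#include <vector>

// One dilation pass: any unlit texel that touches a lit texel copies its
// colour, so bilinear filtering at chart borders samples sensible values.
// Run this a few times to grow the border by several texels.
void dilateLightmap(std::vector<float>& rgb,         // width*height*3
                    std::vector<unsigned char>& lit, // width*height, 1 = rasterised
                    int width, int height)
{
    std::vector<unsigned char> newLit = lit;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (lit[y * width + x]) continue;
            static const int offs[4][2] = { {1,0}, {-1,0}, {0,1}, {0,-1} };
            for (const auto& o : offs) {
                int nx = x + o[0], ny = y + o[1];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                if (!lit[ny * width + nx]) continue;
                for (int c = 0; c < 3; ++c)
                    rgb[(y * width + x) * 3 + c] = rgb[(ny * width + nx) * 3 + c];
                newLit[y * width + x] = 1;
                break;
            }
        }
    }
    lit = newLit;
}
```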
Lastly, you need to make sure triangle edges that are actually connected in the mesh, but have been split due to the uv mapping, have smooth lightmapping between them (otherwise there will be a visible seam along the triangle edge). I did this by storing a list of 'split edges'. Each of these has 4 uv coordinates (2 for each vertex - 1 for each sub-surface the edge connects). Then, once the lightmapping has finished, I run through each split edge and average texel values from the 2 different sub-surfaces, so the seam is lessened. NB: you can't completely remove the seam without forcing the texel values to be equal. This is due to the different uv mapping on each side of the edge causing different bilinear interpolation values for different pixels. But with a base texture it's normally not very visible...
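And a sketch of that split-edge averaging, assuming each split edge stores the two uv pairs described above and the lightmap is a float RGB buffer (struct layout and texel sampling are placeholders):

```cpp
#include <algorithm>
#include <vector>

struct Vec2 { float u, v; };

// One edge that exists twice in the lightmap because the mapping was split
// along it: uvA0->uvA1 in one sub-surface, uvB0->uvB1 in the other.
struct SplitEdge { Vec2 uvA0, uvA1, uvB0, uvB1; };

struct Lightmap {
    int width, height;
    std::vector<float> rgb;                  // width*height*3
    float* texel(float u, float v) {         // nearest texel for a uv
        int x = std::min(width  - 1, std::max(0, int(u * width)));
        int y = std::min(height - 1, std::max(0, int(v * height)));
        return &rgb[(y * width + x) * 3];
    }
};

// Walk along both copies of each split edge and average the texels they map
// to, which softens (but does not fully remove) the visible seam.
void averageSplitEdges(Lightmap& lm, const std::vector<SplitEdge>& edges, int samples)
{
    for (const SplitEdge& e : edges) {
        for (int i = 0; i <= samples; ++i) {
            float t = float(i) / float(samples);
            float* a = lm.texel(e.uvA0.u + t * (e.uvA1.u - e.uvA0.u),
                                e.uvA0.v + t * (e.uvA1.v - e.uvA0.v));
            float* b = lm.texel(e.uvB0.u + t * (e.uvB1.u - e.uvB0.u),
                                e.uvB0.v + t * (e.uvB1.v - e.uvB0.v));
            for (int c = 0; c < 3; ++c) {
                float avg = 0.5f * (a[c] + b[c]);
                a[c] = avg;
                b[c] = avg;
            }
        }
    }
}
```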

HTH

Matt Halpin

