
OpenGL light mapping help


Recommended Posts

I've run into a problem with a light mapping program I'm trying to write. The light mapping works for the most part. Here are a couple of screen shots of the output: example b (note: I run the output of my program through ImageMagick to apply a Gaussian blur to the light maps; it takes the edge off).

The problem is the method I'm using. For each object to be light mapped, the program currently tessellates the object recursively by subdividing each triangle at its edge midpoints. A simple box goes from 12 triangles to over 200K triangles. At this point I loop through the tessellated vertex list, translating each vertex away from the object by 0.01% of the unit normal of its face. This list is used as the end points for the ray-triangle intersection tests (obviously, the light source is the other end point of each ray). The texture coordinate from the tessellated vertex is used to plot the calculated intensity.

Long story short, I'd like to use a traditional algorithm instead: iterate across the texture, map the UV coords onto the plane of the triangle, do a simple point-in-triangle test, translate out by a fraction of the normal, and shoot the ray from there. This would greatly speed up my program and get rid of the edge bleeding and aliasing I'm getting with the current method.

I searched Google and here but I can't find (or don't get) the answer. I did see some stuff on flipcode, but it seemed everything being mapped had to be axis aligned. It shouldn't matter, but I'm writing the program in Python with the math-intensive stuff embedded in C. A second Python program I wrote displays the scene using OpenGL. Anyway, any help would be appreciated.
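For what it's worth, the "traditional" lumel-rasterisation loop described above can be sketched in plain Python. This is only an illustration of the idea, not anyone's actual code: the function names, the `1e-4` normal offset, and the `shade` callback are all my own assumptions.

```python
# Sketch: walk the lightmap texels, map each texel's UV back onto the
# triangle via barycentric coordinates, and only shade texels whose
# centres fall inside the triangle. All names here are illustrative.

def barycentric(uv, uv0, uv1, uv2):
    """Barycentric coords of 2D point uv w.r.t. triangle (uv0, uv1, uv2)."""
    (x, y), (x0, y0), (x1, y1), (x2, y2) = uv, uv0, uv1, uv2
    denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    if abs(denom) < 1e-12:
        return None  # triangle is degenerate in UV space
    a = ((y1 - y2) * (x - x2) + (x2 - x1) * (y - y2)) / denom
    b = ((y2 - y0) * (x - x2) + (x0 - x2) * (y - y2)) / denom
    return a, b, 1.0 - a - b

def rasterise_lumels(size, uvs, verts, normal, shade):
    """For each texel centre inside the UV triangle, reconstruct the
    world-space point, nudge it out along the normal, and shade it."""
    lightmap = [[0.0] * size for _ in range(size)]
    for ty in range(size):
        for tx in range(size):
            uv = ((tx + 0.5) / size, (ty + 0.5) / size)
            bc = barycentric(uv, *uvs)
            if bc is None or min(bc) < 0.0:
                continue  # texel centre lies outside the triangle
            a, b, c = bc
            # interpolate world position, then offset along the face normal
            p = tuple(a * verts[0][i] + b * verts[1][i] + c * verts[2][i]
                      + 1e-4 * normal[i] for i in range(3))
            lightmap[ty][tx] = shade(p)  # e.g. cast a shadow ray to the light
    return lightmap
```

The same barycentric weights used for the inside test also give the world-space sample point for free, which is what makes this faster than tessellating first.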


I dunno if these are of any help, but they are some light mapping tutorials I know of.

3Ddrome.com - Dynamic Lightmaps in OpenGL

Apron - OpenGL Lightmap

flipcode.com - Light Mapping - Theory and Implementation

flipcode.com - Advanced Lightmapping

flipcode.com - Lightmaps (Static Shadowmaps)

flipcode.com QA - Lightmaps

Tobias Johansson - Lightmaps

Alan Baylis - Lightmapping Tutorial

BTW, I'm not sure if all of these are about the generation of lightmaps, but I think I will go through them and edit my post later. Ohh, and I included the flipcode ones even though you might have already seen them.

Anyhows I've never made a light map generator before, but I would like to some day :\.

I'm no expert on things, but I think mapping to the closest axis helps to avoid seams in the lightmap.


[Edited by - yosh64 on January 13, 2008 3:25:45 AM]


I was just thinking about this subject again, and looking to make a lightmap generator myself.

Anyhows I was thinking that instead of just mapping the UVs to the strongest/nearest axis, maybe you could build a transformation matrix that transforms the vertices into the new texture-space coordinate system.

I'm not sure how this would go in practice, as I've yet to try... but I think it would be interesting to find out, and it would be great to hear from anyone who has tried and/or knows about such things :). I wonder if there would be issues with texture seams and such?

Anyhows I was thinking about how to go about building the matrix to transform each vertex of the face/triangle into texture space. For the rotation part of the matrix, you could use the face's normal as the Z axis, align the Y or X axis to an edge of the face/triangle, and take the cross product for the remaining axis (which axis you align with which edge may depend on how you're packing your lightmaps, and on the length of the edge). Then you could build the translation part by calculating the bounding box of the face/triangle and getting its distance to the origin. Then I think you would need to invert this matrix ;), or maybe just invert the translation part. As the rotation part is an orthogonal matrix, I think you could just take the transpose instead.
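A minimal sketch of that basis idea, assuming the edge-aligned X axis and the transpose-as-inverse shortcut described above (pure Python, and every function name here is mine, just for illustration):

```python
# Build an orthonormal basis with Z along the face normal and X along
# one edge, then project vertices into it to get 2D texture-space coords.

def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def norm(a):
    l = dot(a, a) ** 0.5
    return tuple(c / l for c in a)

def face_basis(v0, v1, v2):
    """Orthonormal (x_axis, y_axis, z_axis) for the triangle's plane."""
    x_axis = norm(sub(v1, v0))                      # X along edge v0 -> v1
    z_axis = norm(cross(sub(v1, v0), sub(v2, v0)))  # Z is the face normal
    y_axis = cross(z_axis, x_axis)                  # completes the basis
    return x_axis, y_axis, z_axis

def to_texture_space(verts):
    """Since the basis is orthonormal, the inverse rotation is just the
    transpose, i.e. these dot products; translating by v0 handles the
    offset part instead of a bounding-box translation."""
    x_axis, y_axis, _ = face_basis(*verts)
    return [(dot(sub(v, verts[0]), x_axis),
             dot(sub(v, verts[0]), y_axis)) for v in verts]
```

Note this uses the first vertex rather than the bounding box for the translation part; either works, it only shifts where the triangle lands in the lightmap.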

Hmm, I hope I make some sense here, hehe :).

Sorry to kinda hijack your thread, but does anyone know more about this? All the light mapping tutorials I've come across seem to just map to the nearest/strongest axis, and I'm not exactly sure why, nor do I know the benefits :\.

Anyhows I might dig up and look through a few of those tutorials and see if there is any mention of such things.

@hostileturtle: How did you end up going with it all?

I just took a quick look at the 3Ddrome.com - Dynamic Lightmaps in OpenGL tutorial, and they seem to do it kinda like I mention :). But I'm not sure if they go into building the surface's rotation matrix used to transform from world space to texture space. Yep, just had another quick look, and they do explain how to build the matrix, although I haven't looked long enough yet to see how they do it, hehe.

another edit...

This bothered me for a while because I originally wanted an "exact" solution, using the exact orthogonal uv vectors for arbitrary surfaces; but as it turns out, an "exact" solution in many cases wouldn't look right because the textures don't line up. You want a uniform mapping across all of your surfaces. If you use the 'exact' texture plane of every surface to generate your lightmaps, it'll probably look "sort of" right since the 3D sample points are in the right world position, but you'll also find that its extremely difficult, if not entirely impossible, to make sure the lumels on every adjacent polygon line up. An offset of even a few pixels completely ruins the illusion of smooth and pretty lighting across surfaces. So then, how is it done?

This is from the flipcode.com - Lightmaps (Static Shadowmaps) article. I'm quite sure I've read this before, and this is probably where I got the idea that there were issues with seams and such.

and another...

To accomplish this seamless mapping across planar as well as non-planar surfaces, we'll need a shared frame of reference (a frame of reference that all polygons fit into) for all mapping. I'll use world-space coordinate system as a frame of reference for this example, but feel free to find your own.

World-space mapping is done by using two of the three world-space components (X&Y, Y&Z or Z&X) and applying them directly to the UV coordinates. By using these world-space coordinates we effectively apply a planar mapping to everything in the scene using the plane defined by the coordinates chosen. And magically, any polygons that share a common edge (and hence, vertices that define those edges) will end up with the same UV values at those edges.

This is from flipcode.com QA - Lightmaps, and I think I've read this before also :).
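The world-space planar mapping that quote describes is tiny in code. As a rough illustration (a hypothetical helper, not from the article): pick the plane by the dominant component of the face normal, then use the other two world coordinates directly as UVs, so vertices shared between adjacent polygons automatically get identical UVs.

```python
# Project a world-space vertex onto one of the three axis planes,
# chosen by the largest component of the face normal. Because the
# dropped axis is simply ignored, shared edges map to shared UVs.

def planar_uv(vertex, normal):
    x, y, z = vertex
    ax, ay, az = (abs(c) for c in normal)
    if ax >= ay and ax >= az:
        return (y, z)   # normal mostly along X: project onto the YZ plane
    if ay >= ax and ay >= az:
        return (z, x)   # normal mostly along Y: project onto the ZX plane
    return (x, y)       # normal mostly along Z: project onto the XY plane
```

This is presumably why the tutorials "just map to the strongest axis": it trades a bit of stretching on steep faces for guaranteed seam-free UVs.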

Anyhows it would still be nice to hear back from folks who have tried :).


[Edited by - yosh64 on January 27, 2008 9:01:01 PM]

