Lightmapping triangles


Decided to start a new post. I've been working from this article, http://www.flipcode.com/articles/article_lightmapping.shtml but it only shows how to lightmap one triangle at a time, and I don't know how to modify the example to lightmap two or more triangles together. Some things still don't make sense. How do you lightmap two triangles that share an edge but have different normals? Wouldn't the lightmap be distorted? For example, the top triangles of a sphere: I'd like to lightmap all of them together, but they have different normals. I can see how putting two or more triangles together might save some texture space, but lightmapping a single triangle gives perfect results.
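For reference, the single-triangle mapping in that article boils down to a planar projection: drop the coordinate along the dominant axis of the face normal, then stretch the remaining two coordinates to fill the lightmap. A minimal sketch of that idea (function name and structure are my own, not the article's code):

```python
# Sketch of flipcode-style planar mapping for ONE triangle: project onto the
# plane perpendicular to the dominant axis of the face normal, then normalize
# the projected 2D coordinates into the [0, 1] lightmap square.

def planar_uvs(verts, normal):
    """verts: three (x, y, z) tuples; normal: the face normal (x, y, z)."""
    # The dominant axis of the normal decides which coordinate to drop.
    drop = max(range(3), key=lambda i: abs(normal[i]))
    keep = [i for i in range(3) if i != drop]
    pts = [(v[keep[0]], v[keep[1]]) for v in verts]
    # Normalize the projected points to fill [0, 1] x [0, 1].
    min_u = min(p[0] for p in pts); max_u = max(p[0] for p in pts)
    min_v = min(p[1] for p in pts); max_v = max(p[1] for p in pts)
    du = (max_u - min_u) or 1.0
    dv = (max_v - min_v) or 1.0
    return [((p[0] - min_u) / du, (p[1] - min_v) / dv) for p in pts]
```

For a triangle lying in the XY plane with normal (0, 0, 1), this drops z and returns UVs that fill the unit square.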

You might want to google with the keyword "texture atlas".

Some papers:
"Generation of Radiosity Texture Atlas for Realistic Real-Time Rendering"
"Least Squares Conformal Maps for Automatic Texture Atlas Generation"
"Signal-Specialized Parametrization"

HTH.

Well, to make things easier, I'm not interested in packing techniques, nor am I interested
in Least Squares Conformal Maps (LSCMs). That's way too complicated for me. I don't have a PhD.

I just want to lightmap a group of triangles together in the simplest sense.

What I do is accept some distortion, and create one projection for each of the 6 major axis directions.

Then I build a neighbor structure based on the xyz position of each vertex. Each face tries to insert its 3 edges into a set; if an edge is already there, the face checks the face normal of the neighboring triangle.

If the neighbor's face normal faces towards the same major axis, I treat it as a mutual neighbor.

Once all faces are added, I start a chart for the first triangle, then recursively add any of its neighbors to the same chart. I keep going through the different neighbor groups until all charts are done, then I pack them.

If you want no distortion, then you could do the above, but treat each face normal as a different chart. So, flat walls and floors would still group together.
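The grouping described above might be sketched roughly like this (a minimal sketch with my own names, not code from this thread): faces that share an edge and project along the same dominant normal axis are flood-filled into one group.

```python
# Sketch of the neighbor-grouping idea: shared edge + same dominant normal
# axis => same group ("chart"). Names and structure are illustrative only.

def dominant_axis(n):
    # (axis index, sign) of the largest component of the face normal.
    i = max(range(3), key=lambda k: abs(n[k]))
    return (i, n[i] >= 0)

def group_faces(faces, normals):
    """faces: list of (i0, i1, i2) vertex-index triples;
       normals: one face normal per face."""
    # Map each undirected edge to the faces that use it.
    edge_faces = {}
    for f, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(frozenset(e), []).append(f)
    group = [None] * len(faces)
    next_id = 0
    for start in range(len(faces)):
        if group[start] is not None:
            continue
        group[start] = next_id
        stack = [start]
        while stack:  # flood-fill mutual neighbors into the same group
            f = stack.pop()
            a, b, c = faces[f]
            for e in ((a, b), (b, c), (c, a)):
                for n in edge_faces[frozenset(e)]:
                    if group[n] is None and dominant_axis(normals[n]) == dominant_axis(normals[f]):
                        group[n] = next_id
                        stack.append(n)
        next_id += 1
    return group
```

With two coplanar triangles sharing an edge and a third one facing a different axis, the first two land in one group and the third in its own, which matches the "flat walls and floors still group together" behavior described above.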

Thanks, you've given me some ideas.

By the way, I'd prefer not to use terminology like 'charts'; the meaning doesn't connect for me.

Yeah, well, that's what people call them...

I guess a group of triangles that sits connected together on a lightmap is called a chart, and a group of charts on one texture is called an atlas.

If you generate a true per-vertex tangent space matrix to perform your lighting, it should give a smoother look to things. This is similar to using the interpolated vertex normal instead of the face normal when doing lighting for each lumel.

Just move your light position (not a unit vector) into tangent space, then renormalize per pixel and perform N.L and attenuation.
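For reference, moving a world-space vector into tangent space, assuming an orthonormal per-vertex basis (tangent T, bitangent B, normal N), is just three dot products. A sketch under that assumption, not anyone's actual code:

```python
# Transform a world-space vector into tangent space, assuming T, B, N form
# an orthonormal basis. Each tangent-space component is the projection of
# the vector onto the corresponding basis vector.

def to_tangent_space(v, T, B, N):
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    return (dot(v, T), dot(v, B), dot(v, N))
```

With the identity basis the vector comes back unchanged; a rotated basis permutes or mixes the components accordingly.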

quote:

If you generate a true per-vertex tangent space matrix to perform your lighting, it should give a smoother look to things. This is similar to using the interpolated vertex normal instead of the face normal when doing lighting for each lumel.



Well, now you've got me curious, since I don't know. How is tangent space defined for lightmapping?

Also, don't I need UV coordinates first, before I can do anything with tangent space?



[edited by - raydog on February 10, 2004 3:36:59 AM]

Actually tangent space is overkill for this, although I still think using the interpolated, renormalized vertex normal will give better results than using the face normal for smooth geometry.

I understand the basics of how to get a vertex normal by averaging the neighboring face normals,
but how is the 'interpolated, renormalized vertex normal' computed?

Doing this would give me a normal for every lumel?

No, you need the barycentric weights of the lumel, and then use those weights to interpolate between the three vertex normals.
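The barycentric step above can be sketched like this, assuming the lumel and triangle corners are in 2D lightmap space (a rough sketch, names mine):

```python
# Barycentric weights of a 2D point inside a triangle, then a normal per
# lumel by blending the three vertex normals with those weights and
# renormalizing the result.
import math

def barycentric(p, a, b, c):
    """Barycentric weights (wa, wb, wc) of point p in triangle (a, b, c)."""
    v0 = (b[0] - a[0], b[1] - a[1])
    v1 = (c[0] - a[0], c[1] - a[1])
    v2 = (p[0] - a[0], p[1] - a[1])
    d00 = v0[0] * v0[0] + v0[1] * v0[1]
    d01 = v0[0] * v1[0] + v0[1] * v1[1]
    d11 = v1[0] * v1[0] + v1[1] * v1[1]
    d20 = v2[0] * v0[0] + v2[1] * v0[1]
    d21 = v2[0] * v1[0] + v2[1] * v1[1]
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1.0 - v - w, v, w)

def lumel_normal(weights, n0, n1, n2):
    """Interpolate three vertex normals by barycentric weights, renormalize."""
    n = [weights[0] * n0[i] + weights[1] * n1[i] + weights[2] * n2[i]
         for i in range(3)]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    return tuple(x / length for x in n)
```

A lumel sitting exactly on a vertex gets weight 1 for that vertex and reproduces that vertex normal exactly.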

An interpolated normal per lumel seems to help somewhat, but it's still not good enough for low-poly models.

I'm trying to lightmap a low-poly sphere (32 faces or so), but the lit faces are very faceted. The lighting abruptly stops at the edges: the visible faces are lit, but the adjacent faces are completely black.

Is there any way to fix this, other than the obvious (increasing the poly count)?

quote:
Original post by raydog
An interpolated normal per lumel seems to help somewhat, but it's still not good enough for low-poly models.

I'm trying to lightmap a low-poly sphere (32 faces or so), but the lit faces are very faceted. The lighting abruptly stops at the edges: the visible faces are lit, but the adjacent faces are completely black.

Is there any way to fix this, other than the obvious (increasing the poly count)?

Sounds like you still have face normals to me. What you need to interpolate are the vertex normals (each vertex normal being the average of the normals of all the faces that vertex belongs to). Per-pixel falloff, using the lumel's distance from the light position for example, could also improve the quality. Now, if your lighting doesn't look like standard per-face lighting and you're just getting either black or white faces, there could be something wrong with your chart, your lighting equation, etc.

[edited by - impossible on February 10, 2004 7:45:19 PM]
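The vertex-normal averaging mentioned above might be sketched like this (a rough sketch, names mine; unnormalized face normals would additionally weight the average by triangle area):

```python
# Smoothed vertex normals: sum the face normals of every face touching each
# vertex, then renormalize each sum.
import math

def vertex_normals(vertex_count, faces, face_normals):
    """faces: list of (i0, i1, i2) triples; face_normals: one per face."""
    sums = [[0.0, 0.0, 0.0] for _ in range(vertex_count)]
    for f, tri in enumerate(faces):
        for v in tri:
            for i in range(3):
                sums[v][i] += face_normals[f][i]
    out = []
    for s in sums:
        length = math.sqrt(s[0] ** 2 + s[1] ** 2 + s[2] ** 2) or 1.0
        out.append((s[0] / length, s[1] / length, s[2] / length))
    return out
```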

Do lighting on all faces, even those facing away from the light; that way you correctly get the light to wrap around.

Another idea is to take the 'least backfacing' vertex normal, and if that one is > 90 degrees from the light, then you can cull the whole face from lighting.

I think I'm using the correct interpolated vertex normals.

Take a look here at my problem:

lightmapping problem

And I won't know in advance whether the model is a sphere or not, so it has to work for any kind of model.

How are you deciding which faces get lit? Are you culling them out somehow? Perhaps via the face normal?

It looks like the top of the 'sphere' should be partially lit.

Just using dot(N, L), where L is the normalized vector from the lumel to the light position, and N is the face or lumel normal.

From that angle, the top sphere faces are actually not visible. It may be hard to see from that camera position.
[edited by - raydog on February 11, 2004 9:57:12 PM]
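The dot(N, L) lighting being discussed, combined with the per-pixel distance falloff suggested earlier in the thread, might be sketched per lumel like this (function name and the light_radius parameter are my own):

```python
# Per-lumel point-light diffuse: max(0, dot(N, L)) with a simple linear
# distance falloff out to light_radius.
import math

def light_lumel(lumel_pos, normal, light_pos, light_radius):
    # Vector from the lumel to the light, normalized.
    L = [light_pos[i] - lumel_pos[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in L)) or 1.0
    L = [x / dist for x in L]
    # Diffuse term, clamped so back-facing lumels go to zero.
    ndotl = max(0.0, sum(normal[i] * L[i] for i in range(3)))
    # Linear attenuation: full brightness at the light, zero at the radius.
    atten = max(0.0, 1.0 - dist / light_radius)
    return ndotl * atten
```

A lumel one unit below a light of radius two, facing straight up, comes out at half brightness.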

"Face or Lumel Normal"? These two are different. Which is it?

You want to always do the lumel normal - which is an interpolated version of the shared and smoothed vertex normal.

Is your 'sphere' so low-poly that you're not getting smoothed vertex normals?

For the first pic, I use the same face normal for each lumel, and for the second pic, I use an interpolated vertex normal for each lumel. The sphere is 48 triangles.

I added more pics to show face and vertex normals:

lightmap problem

[edited by - raydog on February 12, 2004 4:52:20 AM]

OK, you're using a point light. You just need to tessellate more. The top faces' vertex normals are > 90 degrees from the light direction.

