
raydog

Lightmapping triangles


Decided to start a new post. I've been working from this article, http://www.flipcode.com/articles/article_lightmapping.shtml but it only shows how to lightmap one triangle at a time. I don't know how to modify the above example to lightmap two or more triangles together, and some things still don't make sense. How do you lightmap two triangles that share an edge but have different normals? Wouldn't the lightmap be distorted? Take, for example, the top triangles of a sphere: I would like to lightmap all of them together, but they have different normals. I can see how putting two or more triangles together might save some texture space, but lightmapping a single triangle gives perfect results.
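For reference, the article's single-triangle approach boils down to a planar projection: drop the coordinate of the triangle's dominant normal axis and normalize what's left into [0, 1]. This is my own reconstruction of that idea, not the article's code, and the function name is mine:

```python
import numpy as np

def triangle_lightmap_uvs(verts, normal):
    """Planar-project one triangle's vertices into [0,1]^2 lightmap UVs.
    verts: (3, 3) array of vertex positions; normal: the face normal."""
    # Drop the coordinate along the dominant axis of the normal.
    axis = int(np.argmax(np.abs(normal)))
    keep = [i for i in range(3) if i != axis]
    proj = verts[:, keep]
    # Normalize the projected coords into [0, 1] for the lightmap.
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    extent = np.where(hi - lo > 0, hi - lo, 1.0)  # avoid divide-by-zero
    return (proj - lo) / extent

# An upward-facing triangle in the XZ plane maps onto the full UV square.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
uvs = triangle_lightmap_uvs(tri, np.array([0.0, 1.0, 0.0]))
```

Because each triangle gets its own projection, distortion is zero per triangle, which is exactly why grouping triangles with different normals reintroduces it.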

You might want to Google the keyword "texture atlas".

Some papers:
"Generation of Radiosity Texture Atlas for Realistic Real-Time Rendering"
"Least Squares Conformal Maps for Automatic Texture Atlas Generation"
"Signal-Specialized Parametrization"

HTH.

Well, to make things easier, I'm not interested in packing techniques, nor am I interested
in Least Squares Conformal Maps (LSCMs). That's way too complicated for me. I don't have a PhD.

I just want to lightmap a group of triangles together in the simplest sense.

What I do is accept some distortion and create six projection directions, one for each major axis direction (±X, ±Y, ±Z).

Then I create a neighbor structure based on the xyz position of each vertex. Each face tries to insert its 3 edges into a set. If it finds an edge already there, it checks the face normal of the neighboring triangle.

If the neighbor's face normal faces towards the same major axis, I treat it as a mutual neighbor.

Once all faces are added, I start a chart for the first triangle, then recursively add any of its neighbors to the same chart. I keep going through the different neighbor groups until all charts are done, then I pack them.

If you want no distortion, then you could do the above, but treat each face normal as a different chart. So, flat walls and floors would still group together.
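The steps above (bucket faces by major axis, link faces that share an edge and face the same way, then flood-fill charts) can be sketched roughly like this. This is my own reading of the description, with made-up names, and it skips the packing step entirely:

```python
import numpy as np
from collections import defaultdict

def major_axis(normal):
    """One of 6 projection directions: (axis index, positive?)."""
    a = int(np.argmax(np.abs(normal)))
    return (a, bool(normal[a] >= 0))

def build_charts(faces, normals):
    """faces: list of 3-tuples of vertex positions (hashable tuples);
    normals: per-face normals. Returns a list of charts (face-index lists)."""
    # Edge set keyed by the two endpoint positions, order-independent.
    edge_to_faces = defaultdict(list)
    for fi, tri in enumerate(faces):
        for i in range(3):
            edge = frozenset((tri[i], tri[(i + 1) % 3]))
            edge_to_faces[edge].append(fi)
    # Mutual neighbors: share an edge AND face the same major axis.
    neighbors = defaultdict(set)
    for fs in edge_to_faces.values():
        for a in fs:
            for b in fs:
                if a != b and major_axis(normals[a]) == major_axis(normals[b]):
                    neighbors[a].add(b)
    # Flood-fill: start a chart at an unvisited face, pull in all neighbors.
    charts, seen = [], set()
    for start in range(len(faces)):
        if start in seen:
            continue
        chart, stack = [], [start]
        while stack:
            f = stack.pop()
            if f in seen:
                continue
            seen.add(f)
            chart.append(f)
            stack.extend(neighbors[f])
        charts.append(chart)
    return charts

# Two coplanar up-facing triangles share an edge and merge into one chart;
# a third triangle facing -Z ends up in its own chart.
faces = [((0, 0, 0), (1, 0, 0), (0, 0, 1)),
         ((1, 0, 0), (1, 0, 1), (0, 0, 1)),
         ((0, 0, 0), (1, 0, 0), (0, 1, 0))]
normals = [np.array([0.0, 1, 0]), np.array([0.0, 1, 0]), np.array([0.0, 0, -1])]
charts = build_charts(faces, normals)
```

Keying edges on vertex *positions* rather than indices is what makes this work across triangles that duplicate vertices, as mentioned above.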

Thanks, you've given me some ideas.

btw, I would prefer not to use terminology like 'charts', as the meaning doesn't connect for me.

Yeah, well, that's what people call them...

I guess one group of triangles that live connected together on a lightmap are called a chart, and a group of charts on one texture is called an atlas.

If you generate a true per-vertex tangent space matrix to perform your lighting, it should give a smoother look to things. This is similar to using the interpolated vertex normal instead of the face normal when doing lighting for each lumel.

Just move your light position ( not unit vector ) into tangent space, then renormalize per pixel and perform N.L, and attenuation.
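As a sketch of that last step, assuming a per-lumel TBN (tangent, bitangent, normal) matrix is already available; the function name and the simple linear falloff are my own choices, not something from the thread:

```python
import numpy as np

def lumel_lighting(lumel_pos, light_pos, tbn, light_radius):
    """Diffuse lighting for one lumel in tangent space.
    tbn: 3x3 matrix whose rows are tangent, bitangent, normal."""
    # Move the light position (not a unit vector) into tangent space.
    to_light = tbn @ (light_pos - lumel_pos)
    dist = np.linalg.norm(to_light)
    l = to_light / dist                 # renormalize per lumel
    n = np.array([0.0, 0.0, 1.0])       # surface normal in tangent space
    atten = max(0.0, 1.0 - dist / light_radius)  # assumed linear falloff
    return max(0.0, float(n @ l)) * atten

# Light directly above a lumel at half the falloff radius.
result = lumel_lighting(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                        np.eye(3), 2.0)
```

The key point from the post survives here: the light *position* is transformed, and the direction is renormalized per lumel rather than interpolated.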

quote:

If you generate a true per-vertex tangent space matrix to perform your lighting, it should give a smoother look to things. This is similar to using the interpolated vertex normal instead of the face normal when doing lighting for each lumel.



Well, now you got me curious since I don't know. How is tangent space defined for lightmapping?

Also, don't I need UV coordinates first, before I can do anything with tangent space?



[edited by - raydog on February 10, 2004 3:36:59 AM]

Actually tangent space is overkill for this, although I still think using the interpolated, renormalized vertex normal will give better results than using the face normal for smooth geometry.

I understand the basics of how to get a vertex normal by averaging the neighboring face normals,
but how is the 'interpolated, renormalized vertex normal' computed?

Doing this would give me a normal for every lumel?
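The averaging step mentioned above can be sketched like this (my own minimal version; summing the unnormalized cross products gives area-weighted averaging as a side effect):

```python
import numpy as np

def vertex_normals(vertices, triangles):
    """Average each vertex's adjacent face normals, then normalize."""
    normals = np.zeros_like(vertices, dtype=float)
    for tri in triangles:
        a, b, c = vertices[tri[0]], vertices[tri[1]], vertices[tri[2]]
        face_n = np.cross(b - a, c - a)  # unnormalized face normal
        for vi in tri:
            normals[vi] += face_n        # accumulate onto each corner
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)

# One triangle in the XZ plane: every vertex normal is the face normal.
vertices = np.array([[0.0, 0, 0], [1, 0, 0], [0, 0, 1]])
vn = vertex_normals(vertices, [(0, 1, 2)])
```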

No, you need the barycentric weights of the lumel, and then use those weights to interpolate between the three vertex normals.
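A sketch of that, assuming the lumel's world position p lies inside the triangle (the barycentric solve is the standard dot-product form; the function name is mine):

```python
import numpy as np

def lumel_normal(p, a, b, c, na, nb, nc):
    """Barycentric-interpolate the three vertex normals at point p,
    then renormalize. p is assumed to lie in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom   # weight of b
    w = (d00 * d21 - d01 * d20) / denom   # weight of c
    u = 1.0 - v - w                       # weight of a
    n = u * na + v * nb + w * nc          # interpolate...
    return n / np.linalg.norm(n)          # ...then renormalize

# At the centroid every weight is 1/3, so the result blends all three
# vertex normals equally before renormalizing.
a, b, c = (np.array([0.0, 0, 0]), np.array([1.0, 0, 0]),
           np.array([0.0, 1, 0]))
na = nb = np.array([0.0, 0, 1])
nc = np.array([1.0, 0, 0])
n = lumel_normal((a + b + c) / 3.0, a, b, c, na, nb, nc)
```

Running this once per lumel is what gives you "a normal for every lumel", as asked above.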
