ivy

Normal Map generation articles!



I want to figure out how to create normal maps from a high-polygon model and a low-polygon model. The problem is that I spent a few hours searching for articles that explain the exact algorithm and found none so far. I've read plenty of links about the general idea, about how to use the existing NVIDIA and ATI normal-mapper tools, and articles from artists showing how to do it in 3ds Max and so on. But come on, some programmers just want to understand things rather than take them for granted and use existing tools. So I'm looking for articles that explain the algorithm itself. I could probably come up with some improvised way to do it, but seriously, it would be much easier and better to learn the intelligent way of doing it.

EDIT: If no one knows of any articles, then as long as you know the algorithm, that's fine too. Just post it, and I'll reply with any questions I might have. Thanks

I'm not really qualified to answer factually, but I think I understand the general concept. Each UV-map pixel corresponds to an exact 3D coordinate on the model (actually it's the other way around, but you can look at it either way). For each pixel of your UV map, cast a ray at that point on the high-poly model, find the vertex/poly normal stored there, and encode the direction vector as that pixel's color in the normal map.

I believe the direction vector can be made relative in different ways. It can be relative to the vertex/poly normal of the same 3D coordinates on the low poly model, or relative to object space, world space, and so on.

The vector-to-color encoding process is pretty simple, if I remember correctly. Red, green, and blue represent the x, y, and z axes:

FLOAT    COLOR
-1.0         0
 0.0       127
 1.0       255
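A minimal sketch of that encoding in C++, assuming 8-bit channels and the usual color = (component + 1) / 2 * 255 mapping; all names here are my own, not from any particular tool:

```cpp
#include <algorithm>
#include <cstdint>

struct Vec3 { float x, y, z; };
struct Rgb8 { uint8_t r, g, b; };

// Map a unit-length normal from [-1, 1] per axis into an 8-bit color.
// -1.0 -> 0, 0.0 -> 127, 1.0 -> 255, matching the table above.
Rgb8 encodeNormal(const Vec3& n)
{
    auto toByte = [](float v) {
        float c = (v + 1.0f) * 0.5f * 255.0f;               // [-1,1] -> [0,255]
        return static_cast<uint8_t>(std::clamp(c, 0.0f, 255.0f));
    };
    return { toByte(n.x), toByte(n.y), toByte(n.z) };
}
```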


Edit:

Forgot to mention something. Use the low-poly model to find the UV-3D relationships. The high-poly model shouldn't even require UV mapping to generate the normal map. So for each UV pixel...

1. Figure out the 3D space origin of that pixel in the low poly model.
2. Record the low_surface_normal of the low-poly model at that point.
3. Move origin in the direction of low_surface_normal a little.
4. Launch a ray, from origin, pointed exactly opposite of low_surface_normal, at the high poly model.
5. Record the high_surface_normal where it makes contact.
6. Encode high_surface_normal into a color value.
7. Store color as the texture pixel.

I think those are the appropriate steps. The reason for step 3 is that the low-poly and high-poly models will obviously have spacing differences between their surfaces; the high-poly model is likely to be rounder, extruding further. It doesn't really matter how far back you push the origin, as long as you don't accidentally intersect another part of the mesh.
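A rough sketch of that per-texel loop, reusing the Vec3/Rgb8/encodeNormal from the sketch above; the Mesh type and the raycast/uvToSurface helpers are hypothetical placeholders for your own mesh and intersection code, not a real API:

```cpp
struct Mesh;                                            // your triangle-mesh type

Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
Vec3 operator-(Vec3 a)          { return { -a.x, -a.y, -a.z }; }

struct Hit { bool found; Vec3 normal; };
Hit  raycast(const Mesh& mesh, Vec3 origin, Vec3 dir);  // nearest intersection
bool uvToSurface(const Mesh& lowPoly, float u, float v, // step 1
                 Vec3& outPos, Vec3& outNormal);

void bakeNormalMap(const Mesh& lowPoly, const Mesh& highPoly,
                   Rgb8* texels, int width, int height, float pushBack)
{
    for (int y = 0; y < height; ++y)
    for (int x = 0; x < width;  ++x)
    {
        float u = (x + 0.5f) / width;                   // texel center in UV space
        float v = (y + 0.5f) / height;

        Vec3 origin, lowNormal;
        if (!uvToSurface(lowPoly, u, v, origin, lowNormal))
            continue;                                   // texel maps to no triangle

        // Step 3: back the origin off the surface, then fire the ray
        // back through it at the high-poly mesh (steps 4-5).
        origin = origin + lowNormal * pushBack;
        Hit hit = raycast(highPoly, origin, -lowNormal);
        if (!hit.found)
            continue;

        // Steps 6-7: encode the high-poly normal and store the texel.
        texels[y * width + x] = encodeNormal(hit.normal);
    }
}
```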

[Edited by - Kest on July 30, 2008 8:53:46 PM]

Just want to comment that you should generate the surface map normals in planar space, not model or world space; that is, the normal of a given texel should be relative to the low-poly surface normal it maps to. This is so that as the model moves, your normal map stays correct. If you do it in model space, then even as the model deforms, your normals stay pointing in the same direction, which is probably not the effect you're after.
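For illustration, a minimal sketch of re-expressing a sampled normal in the low-poly surface's own frame (this "planar space" is more commonly called tangent space). It assumes a simple Vec3 like the earlier sketch, and that T, B, N are the orthonormal tangent, bitangent, and normal of the low-poly triangle:

```cpp
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Project the world/object-space normal onto the surface's own axes.
Vec3 toTangentSpace(Vec3 worldNormal, Vec3 T, Vec3 B, Vec3 N)
{
    return { dot(worldNormal, T),       // U deflection
             dot(worldNormal, B),       // V deflection
             dot(worldNormal, N) };     // how closely it matches the surface
}
```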

Off the top of my head, I'm not certain if that prohibits using parts of the texture map on more than one surface, like mirroring a texture across a symmetric model.

However, it does mean you only really need two channels for the normal map: U deflection and V deflection. You can use the third for a parallax-mapping height field, or something... which would probably be both overkill and awesome. Have the parallax map extrude or penetrate from the low-poly surface to the high-poly surface it came from. For when fragments are cheap but the polygon budget is already used up.
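A sketch of rebuilding the dropped channel, relying only on the normal being unit length; for a tangent-space map z is always non-negative, so the sign is unambiguous:

```cpp
#include <cmath>

// z = sqrt(1 - x^2 - y^2), clamped against rounding error.
Vec3 reconstructNormal(float x, float y)
{
    float z2 = 1.0f - x * x - y * y;
    float z  = z2 > 0.0f ? std::sqrt(z2) : 0.0f;
    return { x, y, z };
}
```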

Kest:

For step 1, calculating the 3D point of the pixel on the low-poly model: I'm guessing the way to do it is, given the UV coordinate of the current pixel, to loop through all triangles in the low-poly model, figure out which triangle contains that UV point, and then easily construct the 3D point from the vertices of the triangle I found. And if my model consists of subsets that each have their own texture, then I'd loop over the triangles of each subset and create a normal map for each subset's corresponding texture.
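That guess can be made concrete with barycentric coordinates; here is a sketch, reusing the Vec3 helpers from the earlier sketches. The Vertex/Triangle layouts are assumptions for illustration:

```cpp
struct Vertex   { Vec3 pos; float u, v; Vec3 normal; };
struct Triangle { Vertex a, b, c; };

// If (u, v) falls inside the triangle's UV footprint, interpolate the
// 3D position (and normal) with the same barycentric weights.
bool uvToPoint(const Triangle& t, float u, float v,
               Vec3& outPos, Vec3& outNormal)
{
    float e0u = t.b.u - t.a.u, e0v = t.b.v - t.a.v;     // UV edge a->b
    float e1u = t.c.u - t.a.u, e1v = t.c.v - t.a.v;     // UV edge a->c
    float det = e0u * e1v - e0v * e1u;
    if (det == 0.0f) return false;                      // degenerate UV triangle

    float pu = u - t.a.u, pv = v - t.a.v;
    float w1 = (pu * e1v - pv * e1u) / det;             // weight of vertex b
    float w2 = (e0u * pv - e0v * pu) / det;             // weight of vertex c
    float w0 = 1.0f - w1 - w2;                          // weight of vertex a
    if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) return false;

    outPos    = t.a.pos    * w0 + t.b.pos    * w1 + t.c.pos    * w2;
    outNormal = t.a.normal * w0 + t.b.normal * w1 + t.c.normal * w2;
    return true;
}
```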

For your remark in step 4, "Launch a ray, from origin, pointed exactly opposite of low_surface_normal, at the high poly model": I thought the low-poly model should be fully contained INSIDE the high-res model, so it seems the ray should point in the direction of low_surface_normal, not the opposite way. Unless you are suggesting the high-res model is the one that is inside the low-res model.

Wyrframe makes a good point that each UV coordinate may have to correspond to one surface. For mirroring you might be able to handle it as a special case, but what if two completely different surfaces share a few UV pixels? It sounds like that's prohibited, since a pixel can't hold two different surface normals.

Wyrframe, it's interesting that you suggest storing just two channels for the normal map, since the third axis can be recomputed because the normal is unit length. In the various sources I've read about normal maps, no one brought that up. I haven't read anything about parallax mapping yet, though, so I don't know whether that's how you would create normal maps for it. I'll save that for later reading.

I read that normal maps in planar space and in model/world space each have their pros and cons, so I plan to support both.

Quote:
Original post by ivy
For your remark in step 4, "Launch a ray, from origin, pointed exactly opposite of low_surface_normal, at the high poly model": I thought the low-poly model should be fully contained INSIDE the high-res model, so it seems the ray should point in the direction of low_surface_normal, not the opposite way. Unless you are suggesting the high-res model is the one that is inside the low-res model.

I was just picturing the ray being cast toward the surface from outside of the model, rather than from inside it. But it actually does seem like a better idea to cast it outward. Still, you might not want to rely on the low-poly model being completely contained within the high-poly model. If nothing else, you could cast in both directions and take whichever hit on the high-poly surface is closest.
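A sketch of that two-direction fallback, again with a hypothetical raycast helper, here assumed to also report the hit distance:

```cpp
struct HitD { bool found; float dist; Vec3 normal; };
HitD raycastNearest(const Mesh& mesh, Vec3 origin, Vec3 dir);   // placeholder

// Cast along +/- low_surface_normal and keep the nearer hit.
Vec3 sampleHighPoly(const Mesh& highPoly, Vec3 origin, Vec3 lowNormal,
                    bool& found)
{
    HitD out = raycastNearest(highPoly, origin,  lowNormal);
    HitD in  = raycastNearest(highPoly, origin, -lowNormal);

    found = out.found || in.found;
    if (out.found && (!in.found || out.dist < in.dist)) return out.normal;
    return in.normal;
}
```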

The hardest part of the whole routine is probably step 1.
