Max Power

Tangent Space computation for dummies?...



Hi, I'm having a hard time auto-generating vertex normals and tangents on my procedurally generated character meshes.

 

The normals look good except on a few vertices that produce visible seams. This is probably going to drive me mad as well, but first I want to take care of the tangents. I'm not a math genius or a graphics programmer, just an ordinary dummy and hobbyist programmer who dreams of making his own game and is happy when things work without the desire to understand everything in detail.

 

That said, can someone please provide me with some equations to calculate vertex tangents? I think I have all the data I need:

 

- vertex positions, normals (except for the few broken ones...), UV coordinates, triangle indices, and also information about which vertices are "on top of each other", which I used for the normal generation.

 

The sources I found turned out less than comprehensive for me :(


Tangents are orthogonal to normals. So get some random vector and apply the cross product three times. The last two vectors span a basis of the tangent space. If a division by zero occurs, rinse and repeat.
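A minimal sketch of that construction in C++ (a hypothetical Vec3 type with cross() and normalize() helpers is assumed, not from any particular engine). Note it only produces some orthonormal basis around the normal, not one aligned with the UVs:

```cpp
#include <cmath>

// Build an arbitrary orthonormal tangent basis around a unit normal n.
// Vec3, cross() and normalize() are assumed helper types/functions.
void ArbitraryTangentBasis(const Vec3& n, Vec3& outTangent, Vec3& outBitangent)
{
    // Pick a helper vector that is not (nearly) parallel to n, otherwise the
    // cross product degenerates to zero length ("rinse and repeat").
    Vec3 helper = (std::fabs(n.x) < 0.9f) ? Vec3(1, 0, 0) : Vec3(0, 1, 0);

    outTangent   = normalize(cross(helper, n)); // orthogonal to n
    outBitangent = cross(n, outTangent);        // orthogonal to both, already unit length
}
```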

I do not get how a vertex can have a normal. Triangles have a normal. At least in Blender. Then Blender also assigns a mean normal to subdivision patches...

Also, if the tangent-space vectors are to be aligned with the UV vectors, I would first do that on a tri.
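For reference, the per-triangle relation (my notation, not taken from this thread): with edge vectors $E_1 = p_1 - p_0$, $E_2 = p_2 - p_0$ and UV deltas $(\Delta u_1, \Delta v_1)$, $(\Delta u_2, \Delta v_2)$, the tangent $T$ (along U) and bitangent $B$ (along V) satisfy

$$E_1 = \Delta u_1\,T + \Delta v_1\,B, \qquad E_2 = \Delta u_2\,T + \Delta v_2\,B$$

which, solved per triangle, gives

$$T = \frac{\Delta v_2 E_1 - \Delta v_1 E_2}{\Delta u_1 \Delta v_2 - \Delta u_2 \Delta v_1}, \qquad B = \frac{\Delta u_1 E_2 - \Delta u_2 E_1}{\Delta u_1 \Delta v_2 - \Delta u_2 \Delta v_1}.$$

The per-vertex tangent is then typically the average of these over the triangles sharing the vertex, orthogonalized against the vertex normal.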


I am using UE4, where vertices do indeed have normals. Before that I used Irrlicht3D and NVidia PhysX (which uses normals for mesh collision shapes), both of which also have vertex normals. It makes sense to me, because I don't see how you can efficiently smooth normals without putting them into vertices (averaged over adjacent faces).

 

And yes, the tangents need to be aligned with the UV vectors. That's the whole problem.

 

At least I think I managed to fix the few broken normals by now. It was pretty much just a typo... the sort of thing that happens when you use arrays of indices to access arrays of indices to access arrays of indices...

 

So close now :D

Edited by Max Power


Tangents are orthogonal to normals. So get some random vector and apply the cross product three times. The last two vectors span a basis of the tangent space. If a division by zero occurs, rinse and repeat.

I do not get how a vertex can have a normal. Triangles have a normal. At least in Blender. Then Blender also assigns a mean normal to subdivision patches...

Also, if the tangent-space vectors are to be aligned with the UV vectors, I would first do that on a tri.

That's funny: you give a fairly deep mathematical explanation of tangents and then say "I do not get how a vertex can have a normal". :-)
 

A vertex isn't really just a point; it's a piece of data that has a position assigned to it. The position is a point, and it's the minimum amount of data a vertex carries. But you can assign other data to it, such as color, UV coordinates, normals, and tangents.
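In code that just means a vertex is a plain struct; a minimal, engine-agnostic sketch (the field names are illustrative, not any particular API):

```cpp
// A "vertex" is just a bundle of attributes; position is the only mandatory one.
struct Vertex
{
    float position[3]; // the point itself
    float normal[3];   // per-vertex facing, used for smooth shading
    float tangent[4];  // xyz = tangent direction, w = handedness sign for the bitangent
    float uv[2];       // texture coordinates
    float color[4];    // optional per-vertex color
};
```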

 

Blender does face normals. The graphics card does vertex normals. You can't really pass a face/triangle normal to the graphics card.

 

All the graphics card really does is draw triangles. It defines those triangles as three vertices. So, you can think of the vertices as the data that defines the corners of a triangle. Obviously a point cannot really "face" a direction since it has no surface. But you can assign it a facing/normal.

 

When you get the normals from Blender, you're going to have to break them up into vertex normals. Unless you can figure out a way to use a triangle strip in a model (which I've never been able to figure out), you're going to have to break the entire model up into triangles with no shared vertices.

 

You could assume the vertices point in the direction of their faces. But for smooth shading you'll want to average the face directions for every face that vertex originally was part of.
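A sketch of that averaging pass, assuming an indexed triangle list and the same hypothetical Vec3 helpers as above:

```cpp
#include <cstdint>
#include <vector>

// Accumulate each triangle's face normal into its three vertices, then normalize.
std::vector<Vec3> ComputeSmoothNormals(const std::vector<Vec3>& positions,
                                       const std::vector<uint32_t>& indices)
{
    std::vector<Vec3> normals(positions.size(), Vec3(0, 0, 0));

    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        uint32_t i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        // Unnormalized face normal; its length weights the average by triangle area.
        Vec3 faceNormal = cross(positions[i1] - positions[i0],
                                positions[i2] - positions[i0]);
        normals[i0] += faceNormal;
        normals[i1] += faceNormal;
        normals[i2] += faceNormal;
    }

    for (Vec3& n : normals)
        n = normalize(n); // average direction of all faces sharing the vertex

    return normals;
}
```

If the mesh has duplicated vertices sitting on top of each other (as in the original post), you would accumulate into all the duplicates before normalizing, otherwise you get exactly the kind of seams described there.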

 

What you end up with is a triangle with three vertex normals that point in different directions. None of them is the direction the triangle itself is facing; each is an average of this triangle's facing with that of the surrounding triangles.

 

This gets into how rasterization works. It has to shade in the pixels between these three points to draw a triangle. It interpolates the values of the three vertices between these 3 points. That's a weighted average. The closer you get to any vertex/point, the more you take on the characteristics of that point. That was the purpose of showing people those red, blue, green triangles where the colors blend.

 

Well, it does that with the normal too. So, what you end up with is a "pixel normal". It averages the color value (if you don't use a color map/texture to determine the color) and it also averages those three normals to give you a direction that one pixel is facing (pixel normal). So now every pixel on the face of the triangle is facing a slightly different direction. This is what gives smooth shading its smooth look; it's a gradual (gradient) shift between the facing of each corner of the triangle.
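As a rough illustration of that interpolation (just the math, not how any rasterizer is actually written), the pixel normal is a barycentric weighted average of the three corner normals, renormalized:

```cpp
// w0 + w1 + w2 == 1 are the pixel's barycentric weights inside the triangle.
Vec3 InterpolatePixelNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                            float w0, float w1, float w2)
{
    // The weighted average shrinks in length, so renormalize to get a unit pixel normal.
    return normalize(n0 * w0 + n1 * w1 + n2 * w2);
}
```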

 

Now, if you really want to control the facing of every pixel, rather than just using a weighted average between the corners, you can use a normal map. This works the same way a color map (texture) works, but instead of assigning a pixel's color based on a photo (texture) you are assigning the pixel's facing based on a photo (texture). Except the color information in the normal map photo describes a 3D normal for that one pixel on the model.
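For completeness, here is roughly what a shader does with a tangent-space normal map, written as plain C++ for illustration (the sampled texel and the interpolated T/B/N vectors are assumed inputs):

```cpp
// Turn a normal-map texel (RGB in [0,1]) into a surface-space pixel normal
// using the interpolated tangent (T), bitangent (B) and normal (N).
Vec3 ApplyNormalMap(const Vec3& texel, const Vec3& T, const Vec3& B, const Vec3& N)
{
    // Unpack from the [0,1] color range to a [-1,1] tangent-space vector.
    Vec3 tsNormal = texel * 2.0f - Vec3(1, 1, 1);

    // Transform from tangent space into the space T, B and N are expressed in.
    return normalize(T * tsNormal.x + B * tsNormal.y + N * tsNormal.z);
}
```

This is also why the tangents have to line up with the UV directions: the normal map's red and green channels are defined along U and V.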

 

But a vertex normal is just a normal at the corner of the triangle that lets you "bend" the apparent shape of the triangle by changing its value away from the direction of the face normal. The bending effect happens because the vertex normals are averaged across the face of the triangle to calculate a pixel normal for every pixel drawn. The vertex normal is usually generated from the face normals by averaging all the faces the original vertex was part of. Without vertex normals, everything would be flat shaded, because smooth shading would not be possible.

Edited by BBeck


If you're looking for a working implementation, you can have a look at mine. It's very basic, nothing fancy. It's based on Lengyel's method.

Several more modern, superior ones have appeared since then.

Should be enough to get you started.
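Roughly what that method boils down to, as a hedged C++ sketch (hypothetical Vec2/Vec3/Vec4 types with dot/cross/normalize helpers assumed; this is not the linked code, just the same idea):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Per-vertex tangents aligned with the UV directions: accumulate a per-triangle
// tangent/bitangent, then Gram-Schmidt orthogonalize against the vertex normal.
void ComputeTangents(const std::vector<Vec3>& positions,
                     const std::vector<Vec2>& uvs,
                     const std::vector<Vec3>& normals,
                     const std::vector<uint32_t>& indices,
                     std::vector<Vec4>& outTangents) // xyz = tangent, w = handedness
{
    std::vector<Vec3> tan(positions.size(), Vec3(0, 0, 0));
    std::vector<Vec3> bitan(positions.size(), Vec3(0, 0, 0));

    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        uint32_t i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];

        Vec3 e1 = positions[i1] - positions[i0];
        Vec3 e2 = positions[i2] - positions[i0];
        float du1 = uvs[i1].x - uvs[i0].x, dv1 = uvs[i1].y - uvs[i0].y;
        float du2 = uvs[i2].x - uvs[i0].x, dv2 = uvs[i2].y - uvs[i0].y;

        float det = du1 * dv2 - du2 * dv1;
        if (std::fabs(det) < 1e-8f) continue; // degenerate UVs, skip this triangle
        float r = 1.0f / det;

        Vec3 t = (e1 * dv2 - e2 * dv1) * r; // direction of increasing U
        Vec3 b = (e2 * du1 - e1 * du2) * r; // direction of increasing V

        tan[i0] += t;   tan[i1] += t;   tan[i2] += t;
        bitan[i0] += b; bitan[i1] += b; bitan[i2] += b;
    }

    outTangents.resize(positions.size());
    for (size_t v = 0; v < positions.size(); ++v)
    {
        const Vec3& n = normals[v];
        const Vec3& t = tan[v];
        // Gram-Schmidt: remove the component along the normal, then normalize.
        Vec3 tangent = normalize(t - n * dot(n, t));
        // Handedness: does (n x tangent) point the same way as the accumulated bitangent?
        float w = (dot(cross(n, tangent), bitan[v]) < 0.0f) ? -1.0f : 1.0f;
        outTangents[v] = Vec4(tangent.x, tangent.y, tangent.z, w);
    }
}
```

As with the normals, if duplicated vertices share a position you can accumulate into all of them to avoid seams (except across mirrored UV seams, where the handedness flips).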

 

 

I do not get how a vertex can have a normal. Triangles have a normal. At least in Blender. Then Blender also assigns a mean normal to subdivision patches...

See Polycount vs Vertex count


Tangents are orthogonal to normals. So get some random vector and apply the cross product three times. The last two vectors span a basis of the tangent space. If a division by zero occurs, rinse and repeat.

I do not get how a vertex can have a normal. Triangles have a normal. At least in Blender. Then Blender also assigns a mean normal to subdivision patches...

Also, if the tangent-space vectors are to be aligned with the UV vectors, I would first do that on a tri.

A vertex can have a normal, and there are a lot of reasons you'd want it to. It is usually the average of the normals of all the faces it is connected to.


Technically I can misuse any data structure. A vertex is a point with edges to other points. I was just reacting to the comprehension part. Premature optimization. I remember engines which could only do flat-shaded polygons. I do not know why current APIs make the hack of vertex normals easier to use than the mathematically sound way of using the normal of the polygon. And I do not think that this is the case.


Technically I can misuse any data structure. A vertex is a point with edges to other points. I was just reacting to the comprehension part. Premature optimization. I remember engines which could only do flat-shaded polygons. I do not know why current APIs make the hack of vertex normals easier to use than the mathematically sound way of using the normal of the polygon. And I do not think that this is the case.

The GPU's concept of a vertex is "a tuple of attributes", such as position/normal/texture-coordinate -- the mathematical definition doesn't really apply :(

A GPU vertex doesn't even need to include position! When drawing curved shapes, "GPU vertices" are actually "control points" and not vertices at all.

 

There's also no native way to supply per-primitive data to the GPU -- such as supplying positions per-vertex and normals per-face. Implementing per-face attributes is actually harder (and requires more computation time) than supplying attributes per vertex, because the native "input assembler" only has the concept of per-vertex and per-instance attributes.
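To make that last point concrete, a hedged sketch of how per-face normals usually end up being supplied anyway: every triangle gets its own three vertices, all carrying the face normal, so nothing is shared:

```cpp
#include <cstdint>
#include <vector>

// Emulate per-face normals with per-vertex data: duplicate the three corners of
// every triangle and give all three copies the triangle's face normal.
struct FlatVertex { Vec3 position; Vec3 normal; };

std::vector<FlatVertex> BuildFlatShadedVertices(const std::vector<Vec3>& positions,
                                                const std::vector<uint32_t>& indices)
{
    std::vector<FlatVertex> out;
    out.reserve(indices.size());

    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        const Vec3& p0 = positions[indices[i]];
        const Vec3& p1 = positions[indices[i + 1]];
        const Vec3& p2 = positions[indices[i + 2]];
        Vec3 faceNormal = normalize(cross(p1 - p0, p2 - p0));

        out.push_back({ p0, faceNormal });
        out.push_back({ p1, faceNormal });
        out.push_back({ p2, faceNormal });
    }
    return out; // drawn as a non-indexed triangle list
}
```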

Edited by Hodgman


 

Technically I can misuse any data structure. A vertex is a point with edges to other points. I was just reacting to the comprehension part. Premature optimization. I remember engines which could only do flat-shaded polygons. I do not know why current APIs make the hack of vertex normals easier to use than the mathematically sound way of using the normal of the polygon. And I do not think that this is the case.

The GPU's concept of a vertex is "a tuple of attributes", such as position/normal/texture-coordinate -- the mathematical definition doesn't really apply :(

A GPU vertex doesn't even need to include position! When drawing curved shapes, "GPU vertices" are actually "control points" and not vertices at all.

 

There's also no native way to supply per-primitive data to the GPU -- such as supplying positions per-vertex and normals per-face. Implementing per-face attributes is actually harder (and requires more computation time) than supplying attributes per vertex, because the native "input assembler" only has the concept of per-vertex and per-instance attributes.

 

You could treat a vertex as a triangle and expand it in a geometry shader. It's just a bit cumbersome, and performance would be awful.
