
# Best practice for vertex normal calculations?

Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

6 replies to this topic

### #1 Saldan  Members   -  Reputation: 115


Posted 29 May 2012 - 11:37 PM

I'm looking to create arbitrarily-shaped meshes and calculate the proper vertex normals for each vertex. As I understand it (and I may be wrong), if you want smooth shading, you need to ensure that vertices have a normal that represents the combined normals of all the triangles that vertex is involved in. For instance, if you have a straight edge along your model where the surfaces extend down along Y starting at 0, right along X starting at 0 and infinitely into and out of the Z axis, then your vertices need to have normals that point exactly in the +Y, -X direction. I'm sure that's not the best description, but you get my gist. Otherwise, if the vertices in each triangle merely represent that triangle's normal, you'll get entirely flat shading.

(EDIT: Just to be clear, I mean that given a triangle A, the vertices A1, A2 and A3 shouldn't all have identical normals - rather, their normals should be influenced by the normals of other triangles as well, such that, for example, normals A2 and A3 should have a normal also influenced by the normal of the separate triangle A2-A3-A4.)

So, first question - is that indeed the way to achieve smoother-looking shading?

If it is, I've got something of a problem, as the image below (which should be a pyramid) no doubt makes clear. The triangles themselves are all placed accurately, but I'm not calculating my normals properly. I should mention that the way I build my triangles internally is that I have a fixed set of vertices and, based on a number of parameters, choose some of these vertices as points in an arbitrary mesh (which are first stored as pointers, and then ultimately uploaded to a vertex buffer object as a series of floats). Since vertices store their normals as well as their position, I thought it would be easy to ensure that each time a vertex is used by a triangle, that triangle's normal influences the vertex's total normal. It appears that isn't the case.

Currently, every time I make a new triangle, I calculate its normal and then add that normal to the existing normals of the three vertices involved. After making all my triangles, I reduce the length of each vertex's normal to 1. I've also tried adding them and then dividing by the number of triangles involved, but that doesn't work and it doesn't sound like it should work, either.

This is my normal calculation. I run this for all triangles (which contain pointers to their Verts 1, 2, and 3).

//Edge vectors from Vert1 to Vert2 and from Vert1 to Vert3
float A[3] = {Vert2.x - Vert1.x, Vert2.y - Vert1.y, Vert2.z - Vert1.z};
float B[3] = {Vert3.x - Vert1.x, Vert3.y - Vert1.y, Vert3.z - Vert1.z};

//Cross product A x B: perpendicular to the triangle's plane
float nX = A[1] * B[2] - A[2] * B[1];
float nY = A[2] * B[0] - A[0] * B[2];
float nZ = A[0] * B[1] - A[1] * B[0];

//Flip the direction (equivalent to computing B x A) to match my winding order
nX *= -1;
nY *= -1;
nZ *= -1;

//Accumulate the (unnormalized) face normal into each shared vertex normal
Vert1.nx += nX;
Vert1.ny += nY;
Vert1.nz += nZ;

Vert2.nx += nX;
Vert2.ny += nY;
Vert2.nz += nZ;

Vert3.nx += nX;
Vert3.ny += nY;
Vert3.nz += nZ;


This is how I normalize the normals. I run this for all Verts after having run the above code for all triangles.

//Normalize!
float Length = sqrt(Vert.nx * Vert.nx + Vert.ny * Vert.ny + Vert.nz * Vert.nz);

if (Length > 0.0f) //Skip vertices no triangle touched, to avoid dividing by zero
{
    Vert.nx /= Length;
    Vert.ny /= Length;
    Vert.nz /= Length;
}


Clearly, the results aren't what I want. I was hoping somebody might be able to help me arrive at a better solution than this. Any advice?

#### Attached Thumbnails

Edited by Saldan, 29 May 2012 - 11:44 PM.

### #2 szecs  Members   -  Reputation: 2081


Posted 30 May 2012 - 12:13 AM

I'm a bit too sleepy to think it through, but I think you have to recalculate the vertex normals from the triangle normals every time; you can't just append the new triangle normal to an already-calculated vertex normal.

Are you trying to avoid storing or recalculating all the triangle normals?

Another thing: you should weight your triangle normals in the vertex normal calculation. Not all triangle normals affect the vertex normal by the same amount. Consider a cube. If you want to smooth-shade the cube (I know it's stupid), you know that the vertex normals should point in a 1,1,1 direction, for example (I didn't normalize it, but you get the idea). A vertex in a cube is shared by 3 sides, but that means four triangles in some cases. That means a side is represented by 2 triangles at some corners. Without weighting, it will distort the desired result because you add that normal twice.
That's where weighting comes in. I simply weight by the angle between the incoming edges; I guess there are faster tricks for that.
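For example, a rough sketch of what I mean (the Vec3 type and helper names are just illustrative, not anyone's actual code; the face normal is normalized first so the weighting is purely by angle, not angle times area):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v)     { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

// Angle at corner p0 of triangle (p0, p1, p2): the weight for that vertex.
static float angleAt(Vec3 p0, Vec3 p1, Vec3 p2)
{
    Vec3 e1 = normalize(sub(p1, p0));
    Vec3 e2 = normalize(sub(p2, p0));
    float c = dot(e1, e2);
    if (c > 1.0f) c = 1.0f; else if (c < -1.0f) c = -1.0f; // clamp for acos
    return std::acos(c);
}

// Add one triangle's unit face normal into its three vertex normals,
// each contribution scaled by the corner angle at that vertex.
static void accumulate(const Vec3 p[3], Vec3 n[3])
{
    Vec3 faceN = normalize(cross(sub(p[1], p[0]), sub(p[2], p[0])));
    for (int i = 0; i < 3; ++i) {
        float w = angleAt(p[i], p[(i + 1) % 3], p[(i + 2) % 3]);
        n[i].x += faceN.x * w;
        n[i].y += faceN.y * w;
        n[i].z += faceN.z * w;
    }
}
```

After running this over every triangle, you still normalize each vertex normal once at the end, as before.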

That said, your pyramid will look distorted anyway, because you smooth the sharp edges too. Take a look at a smooth-shaded pyramid in any 3D modelling software. It will look something like that.

That's where "smoothing groups" come in. A smoothing group is a group of triangles that should be considered smooth, and thus used together to calculate the vertex normal. Um... this is a different, lengthy topic; if you are interested, I'll try to explain more.

Edited by szecs, 30 May 2012 - 12:15 AM.

### #3 Saldan  Members   -  Reputation: 115


Posted 30 May 2012 - 01:14 AM

> Are you trying to avoid storing or recalculating all the triangle normals?

I'm not 100% sure about that, to be honest. The reason I was going for this approach is that the models can get quite large and complex, and I figured temporarily storing triangles as pointers to vertices, rather than as sets of new vertices, would reduce the amount of data being moved around in each model, as well as facilitate sharing of vertex normals between triangles. In the end I do store triangles as a series of floats, though, since the vertices are generated and discarded as needed, and since uploading floats to the GPU as VBOs seemed more straightforward.

Honestly, though, I'm new at programming meshes like this and I'm not sure what the best approaches are.

> Another thing: you should weight your triangle normals in the vertex normal calculation. Not all triangle normals affect the vertex normal by the same amount. Consider a cube. If you want to smooth-shade the cube (I know it's stupid), you know that the vertex normals should point in a 1,1,1 direction, for example (I didn't normalize it, but you get the idea). A vertex in a cube is shared by 3 sides, but that means four triangles in some cases. That means a side is represented by 2 triangles at some corners. Without weighting, it will distort the desired result because you add that normal twice.

Oh gods, that does make sense. That's surely at least a part of the problem. Hm... Could you elaborate a little on your angle-weighting? Do you mean the angle of the line defined by two vertices (or vertex locations) shared by two different triangles?

> That said, your pyramid will look distorted anyway, because you smooth sharp edges too. Take a look at a smooth-shaded pyramid in any 3D modelling software. It will look something like that.

Yes, I was expecting it to look weird since, as you said, it's got sharp edges. However, I was rather expecting the weirdness to be uniform along the edges of the pyramid (like the edges were highlighted or something), rather than jagged. Presumably, if the weirdness was uniform, it could be reduced by having a smoother model.

In some of the more complex models I have, there are sharp edges (90° or less) that I'd rather not have smoothed, but duller edges that I would like to smooth out. I read a little about smoothing groups on Wikipedia just now, but I'd need some kind of algorithm that can efficiently calculate and modify smoothing groups on the fly, because my meshes are supposed to be not only arbitrary but also, to an extent, deformable. Ideally, an algorithm that identifies whether a neighboring triangle is at too sharp an angle to be included in the normal modification process, or something similar. I suppose I could just compare triangle normals and, if they are too different, ignore the neighboring triangle in the calculations...
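Something like this, maybe (just a sketch; the function name and threshold are made up):

```cpp
#include <cmath>

// True if two unit face normals are within maxAngleDeg of each other,
// i.e. the neighbouring triangle is smooth enough to include.
bool withinSmoothingAngle(const float n1[3], const float n2[3], float maxAngleDeg)
{
    float d = n1[0]*n2[0] + n1[1]*n2[1] + n1[2]*n2[2];
    // cos() decreases on [0, 180], so "angle <= max" is the same as "dot >= cos(max)".
    return d >= std::cos(maxAngleDeg * 3.14159265f / 180.0f);
}
```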

### #4 szecs  Members   -  Reputation: 2081


Posted 30 May 2012 - 02:18 AM

Angles: yes, the angle between the two edges coming into the vertex. It should be straightforward, you can google "angle between two vectors".
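For example (a quick sketch, with an illustrative name):

```cpp
#include <cmath>

// Angle in radians between two 3D vectors: acos(a.b / (|a||b|)).
float angleBetween(const float a[3], const float b[3])
{
    float d  = a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    float la = std::sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
    float lb = std::sqrt(b[0]*b[0] + b[1]*b[1] + b[2]*b[2]);
    float c  = d / (la * lb);
    if (c > 1.0f) c = 1.0f; else if (c < -1.0f) c = -1.0f; // guard acos domain
    return std::acos(c);
}
```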

Jagged edges: it's hard to tell whether that's expected; it would be good to see the wire-frame model of your pyramid.

I have worked with this kind of thing and with sharp edges, but it's not so easy to explain, and I don't have much time at the moment. I will come back later and try to explain my approach.

Edited by szecs, 30 May 2012 - 02:21 AM.

### #5 Digitalfragment  Members   -  Reputation: 738


Posted 30 May 2012 - 02:22 AM

> Currently, every time I make a new triangle, I calculate its normal and then add that normal to the existing normals of the three vertices involved. After making all my triangles, I reduce the length of each vertex's normal to 1. I've also tried adding them and then dividing by the number of triangles involved, but that doesn't work and it doesn't sound like it should work, either.

That sounds like you are averaging the previously averaged result added to the new normal, which is wrong as it will always be biased towards the newest triangle.

ie. assuming A is the first normal, B is the second, C is the third (already normalized)

N = ||( ||( ||A|| + B )|| + C )||

whereas what you want is N = ||A + B + C||, so that all 3 have equal weighting.
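A quick sketch of the difference (hypothetical helpers, not anyone's actual code):

```cpp
#include <cmath>

struct V { float x, y, z; };

static V add(V a, V b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

static V normalize(V v)
{
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Wrong: renormalizing after every addition shrinks the earlier
// contributions, so the result leans toward the last normal added.
static V runningAverage(V a, V b, V c)
{
    return normalize(add(normalize(add(normalize(a), b)), c));
}

// Right: sum everything first, normalize once, equal weighting for all.
static V sumThenNormalize(V a, V b, V c)
{
    return normalize(add(add(a, b), c));
}
```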

The other issue is that if you want a hard edge, then you need to split the vertices. The triangle pattern in the shading is due to linear interpolation across each triangle, as the normals are stored per vertex, not per face. With smoothed normals you will never get just the edge highlighted unless you have incredibly high tessellation.

Edited by Digitalfragment, 30 May 2012 - 02:23 AM.

### #6 bwhiting  Members   -  Reputation: 618


Posted 30 May 2012 - 03:18 AM

here is my guide to super awesome normalness:

step1: destroy your mesh, explode it so there are 0 shared vertices (i.e. if you have 4 verts and two triangles, you should end up with 6 verts and 2 triangles; if the number of verts isn't 3 x the number of tris, something went wrong)

step2: calculate the face normals of each triangle (you could do this first if you wanted)

step3: make a hash for each vertex based on its location in 3D space and link it to each triangle that has a vertex that matches (you could use a threshold here, i.e. 0.001 world units or something)
for my hash I just used strings, i.e. "x1.21y5.5z-3", easy to calculate

step4: run through every triangle, and for every vertex use the hash to get the adjacent/shared triangles and sum up their normals (provided they are at similar angles; use a dot product to test), then normalize the aggregate normal

step5: run through the lot removing any left over duplicates
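Roughly, the steps above might look like this sketch (simplified and with made-up names; step 5's duplicate removal is left out, and positions are quantized into a string key like the one described in step 3):

```cpp
#include <cmath>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

struct P3 { float x, y, z; };

// Step 3: quantized position key, so nearly-coincident verts share a bucket.
static std::string posKey(P3 p, float cell = 0.001f)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "x%ldy%ldz%ld",
                  std::lround(p.x / cell), std::lround(p.y / cell), std::lround(p.z / cell));
    return buf;
}

// verts: exploded, 3 per triangle (step 1). faceNormals: one unit normal per
// triangle (step 2). Writes one smoothed unit normal per vertex (step 4).
static void smoothNormals(const std::vector<P3>& verts,
                          const std::vector<P3>& faceNormals,
                          std::vector<P3>& outNormals,
                          float minDot = 0.7071f)
{
    std::unordered_map<std::string, std::vector<int>> buckets;
    for (int i = 0; i < (int)verts.size(); ++i)
        buckets[posKey(verts[i])].push_back(i);

    outNormals.resize(verts.size());
    for (int i = 0; i < (int)verts.size(); ++i) {
        P3 own = faceNormals[i / 3];       // this vertex's own face normal
        P3 sum = { 0, 0, 0 };
        for (int j : buckets[posKey(verts[i])]) {
            P3 n = faceNormals[j / 3];
            float d = own.x*n.x + own.y*n.y + own.z*n.z;
            if (d >= minDot)               // dot-product angle test from step 4
                { sum.x += n.x; sum.y += n.y; sum.z += n.z; }
        }
        // Length is never zero: the vertex's own normal always passes the test.
        float len = std::sqrt(sum.x*sum.x + sum.y*sum.y + sum.z*sum.z);
        outNormals[i] = { sum.x / len, sum.y / len, sum.z / len };
    }
}
```

A minDot of about 0.7071 corresponds to a 45-degree smoothing threshold; raise it toward 1 to keep more edges hard.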

pros of this method:
it's awesome... and you can customise it (distance threshold and angle threshold)
for most cases it does a pretty good job

cons:
it's slow, but that doesn't matter as you can save the resultant mesh and load that next time
you will probably end up with a few extra vertices... but that shouldn't be a massive issue
as it is automated, all the triangles will adhere to the same rules, so there's no flexibility between individual triangles.

I made this up because I couldn't think of a better way to calculate normals that works just as well for a cube/sphere/smooth mesh/stairs.
It does the job, but there is probably a better way.

You could always try and add support for smoothing groups.
Could be a nice thing to have built into a brush that you can paint over a model to build better normals.

Hope it helps / makes any form of sense

### #7 szecs  Members   -  Reputation: 2081


Posted 30 May 2012 - 03:42 AM

I use something similar, though I don't explode the vertices; I just use a "triangle" data struct with the 3 vertex normals. I use this because I have an editor, so exploding the mesh in any way (except if it's the user's intention) is a no-no.

But for rendering, you have to explode, because modern rendering pipelines don't allow different indexing for the different vertex attributes.

