How much can normal averaging do for you? ;o)

Started by
25 comments, last by adder_noir 13 years, 9 months ago
Hi!

Quickie, chaps/chapesses: I'm rendering a low-poly humanoid, about 4000 faces. Great, fast etc., but obviously there's the odd rather *sharp* edge here and there, and my art skills aren't up to much so I won't be able to hide them easily with clever texturing.

So I looked up normal averaging. Looks good. Got an open source package that can supposedly do it, but no doubt I'll end up doing most of it myself - :oP

So just how good can these things be? Can they rescue the nasty edges off a 4000-face model? I can't export any normal averaging from my 3D tool as I'm working with the brass tacks here. Just 4 arrays containing vertex, index, normal and texture detail. That's it. I'm hoping a normal averager will just re-calc the normals in my normal array. Hopefully that'll be it.

Anyways! Thought I'd ask, cheers ;o)
Quote:Original post by adder_noir
Got an open source package that can supposedly do it, but no doubt I'll end up doing most of it myself - :oP

So just how good can these things be? Can they rescue the nasty edge off a 4000 face model? I can't export any normal averaging from my 3d tool as I'm working with the brass tacks here.


Now, you're talking about generating the per-vertex normals from just the vertex geometry data? This is something you usually want to avoid unless you have no better alternative, since you lose information the modeller generated if you regenerate the normals from scratch. Artists generally love to hate it when this is done (almost as much as they whine about DXT compression ;), and my recommendation is to avoid it.

Unless it takes you less than half an hour to try this out in code, I recommend just loading your data into a modelling package and checking it there visually.

For example in 3DSMax, smooth/hard edges can be controlled by what's called "smoothing groups" - the triangles of the model are categorized into different groups (artist-driven, manually tunable) and the adjacent triangles in the same group are taken to share a smooth edge, and adjacent triangles in different groups will share a hard edge.
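(As an aside, Max's smoothing groups are bitmasks under the hood, so the membership test for an edge is just a bitwise AND - the below is a sketch of the rule, not Max's actual code:)

```cpp
#include <cstdint>

// Each triangle carries a 32-bit smoothing-group mask. Two adjacent
// triangles shade smoothly across their shared edge if their masks
// have any bit in common; otherwise the edge is hard.
bool sharesSmoothEdge(std::uint32_t groupsA, std::uint32_t groupsB)
{
    return (groupsA & groupsB) != 0;
}
```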

Now, for generating these yourself, you have the naive method of just generating normals as if each edge is a smooth edge. For low-poly models, this does not work too well, since there are lots of sharp corners. Whenever you have a sharp edge in the geometry, having a smooth edge in the normals will look slightly odd.
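In code, that naive all-smooth method is roughly this - a self-contained sketch, where Vec3 and the function names are made up for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.f ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
}

// Every vertex gets the normalized sum of the face normals of all the
// triangles that reference it - i.e. every edge is treated as smooth.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& positions,
                                const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(positions.size(), Vec3{ 0, 0, 0 });
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        Vec3 a = positions[indices[i]];
        Vec3 b = positions[indices[i + 1]];
        Vec3 c = positions[indices[i + 2]];
        Vec3 faceNormal = cross(sub(b, a), sub(c, a)); // direction from winding order
        for (int k = 0; k < 3; ++k)
            normals[indices[i + k]] = add(normals[indices[i + k]], faceNormal);
    }
    for (Vec3& n : normals)
        n = normalize(n);
    return normals;
}
```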

The next method is to use some heuristic, e.g. an angle threshold, to automatically generate smoothing groups and to distinguish hard edges from smooth edges. But since the distinction of a smooth or a hard edge is something that depends on the model and the artistic eye, and especially for low-poly models the edge angles are quite large, finding a suitable angle threshold can be difficult.
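The threshold test itself is just a dot product between the two face normals; picking the threshold value is the hard part, as said. A sketch (the function and its signature are illustrative, both normals assumed unit-length):

```cpp
#include <cmath>

// Two adjacent faces share a smooth edge if the angle between their
// (unit-length) normals is below the threshold; otherwise the edge is hard.
bool isSmoothEdge(const float a[3], const float b[3], float thresholdDegrees)
{
    float dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    if (dot > 1.f) dot = 1.f;     // clamp against floating-point
    if (dot < -1.f) dot = -1.f;   // drift before acos
    float angleDegrees = std::acos(dot) * 180.f / 3.14159265f;
    return angleDegrees < thresholdDegrees;
}
```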

So, if you just have a few triangles that have their normals a bit wonky, I'd recommend the best approach is to just load the model up in Blender and adjust the few normals manually to show a better effect.
Well that sounds good to me! So if I get this right - can I just use a built-in Blender tool to adjust the normals for me, and they will automatically export when I run my script that exports the mesh data (normals, indices, etc.)?

Basically meaning I don't have to do anything after the export - except accept there'll be a few dodgy edges ;o)
Hmmm... not so easy. Blender won't export the smoothed mesh, even with the smooth flag set to true for each face in the export script, and even after saving the file beforehand too.

So it looks like it's back to muggins to deal with it again. I've had a read through MWE and I reckon that's good enough for me alright. Only issue is that up to now I've only seen folks use this on a quad-style mesh. My humanoid model is all triangles, as that's how I've set up my code to run. Great and all, but it means the tutorials don't cover everything I need.

I'm beginning to suspect I can see what a vertex duplication list is for. It's so you can note how many times the same vertex position is referenced in the index list, even though each index number is individual. Means you can I guess build a list of all triangles associated with any particular vertex.

What I must ask is, if I simply go through all my normals and turn them into unit vectors, will that help? Would be a lot easier than opening up a whole new project to deal with smoothing. I suspect there's no way round it though. I'll just have to do it.

I'm thinking of going through my original non-exclusive index list array and marking how often each number appears and where in the index array. Might be a good place to start. Probably best to start with a simple model too like say a cube or chess piece. Anyways... here goes another huge journey ;oP
In every project I've worked on, smoothing of normals has been done in the art tool (like blender), so the artists have complete control over which edges are hard ones and which ones are soft.

What kind of 'export script' are you using?
A custom one. Probably not so hot, perhaps. I did try everything I could, though - even setting smooth to 1 did not help.

From what you say it sounds as though even if you can post-operate the normals it might screw up the art-work = bad!

It seems perhaps I might want to look at writing or finding a better export script for Blender. What do you think? If not I can always do it manually. I'm working on it right now - hopefully if it comes to that it won't wreck any artwork. I hope :oP

Thanks for the reply.

**Edit**

Interesting. I went into Python in Blender and ran an iterator through every face in the mesh setting smooth = 1. NO EFFECT!

Hmm.. it is this exact operation which is replicated in my export script. Seems this is not enough to get Blender to smooth normals at all. I need to find out how it is really done so I can add this function into my script. Or just find another way of doing it.
Well I've now written code that can find where a face index exists in the old index array, searches through to find all the faces this is in and then retrieves the normals from every one of these faces ready for averaging.

Maybe it's not so simple though. Perhaps I can't just average a load of normals without knowing dimensionally where each vertex the indices point to are in relation to one being studied/searched for. No idea to be honest. Almost fun to do though (not really) so I'll continue. Here's the code:

int i = 0;
int j = 0;
int k = 0;
int l = 0;
int tempval = 0;
int tempval2 = 0;

for(i = 0; i < 1; i++) // start with the first number in the old array
{
    tempval = oldIndexArray[i]; // a temp variable to hold the number
    for(j = 0, k = 0, l = 0; j < INDEXARRAYSIZE; j++)
    {
        if(tempval == oldIndexArray[j])
        {
            // finds the face number that tempval is in.
            tempval2 = j/3; // face no. that '4' e.g. is in
            normalAverageTemp[l] = normalArray[(tempval2 * 3)]; // go to that same face no.
            normalAverageTemp[l+1] = normalArray[(tempval2 * 3)+1]; // in the old normal array and start copying data across
            normalAverageTemp[l+2] = normalArray[(tempval2 * 3)+2];
            l += 3;
            k = l;
        }
    }
    // PROPOSED FUNCTION CALL TO AVERAGE THE NORMALS GOES HERE - check!!
    // send k into the function call so it knows how big an array to process ;o)
    average_normals(k);
}


That's it. It means I now have an array called normalAverageTemp for each pass through the face index array. This is a global array which the function average_normals links to, to start doing stuff with it. At the moment average_normals just displays stuff. I need the actual computation algorithm in there next *sigh*. The idea here is that k is made equal to l and passed to the average_normals (AN) function so that it knows how far into the array to stop. I couldn't make the array dynamically sizeable obviously so it just has to be big enough to hold the max values likely to be in it.

But that obviously introduces the need to tell AN when to stop, hence it gets k passed to it. One last thing is the AN function will need to clear out the normalTemp array thing for the next pass. Anyways, it's all teaching me code anyway so....it's ok ;o)
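For the record, here's roughly what I'm thinking average_normals should do - just sum the copied normals and normalize the result. A sketch only; I've passed the array and output in as parameters here rather than using the globals, purely so it's self-contained:

```cpp
#include <cmath>

// normalAverageTemp holds packed x,y,z triples; k is how many floats
// of it are valid (a multiple of 3). result receives the normalized
// average. Names follow the post above but are otherwise illustrative.
void average_normals(const float* normalAverageTemp, int k, float result[3])
{
    result[0] = result[1] = result[2] = 0.f;
    for (int i = 0; i + 2 < k; i += 3)
    {
        result[0] += normalAverageTemp[i];
        result[1] += normalAverageTemp[i + 1];
        result[2] += normalAverageTemp[i + 2];
    }
    float len = std::sqrt(result[0] * result[0] +
                          result[1] * result[1] +
                          result[2] * result[2]);
    if (len > 0.f)
    {
        result[0] /= len;
        result[1] /= len;
        result[2] /= len;
    }
}
```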

Anyone with a better suggestion please do step forward, as you'll save me a lot of work - which I'm trying to demonstrate ;o)
Quote:Original post by adder_noir
Hmmm... not so easy. Blender won't export the smoothed mesh even when setting the smooth flag to true for each face in the export script and after having saved the file beforehand too.

Mind if I ask, can you post a screenshot of the problem? Does Blender show the model correctly? Can you post a screenshot of the proper appearance? I heartily recommend that you switch exporters or work to fix your exporter to get the proper data out.

Quote:Original post by adder_noir
What I must ask is, if I simply go through all my normals and turn them into unit vectors will that help?


Every modelling tool should by default produce unitary normals. Writing your own tool code that goes through your exported geometry and renormalizes all normals (and orthonormalizes your per-vertex tangent frames if you have any) is a nice sanity check, but shouldn't be required and shouldn't have an effect here - the normals should already be unitary.
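If you do want that sanity check, it's a few lines - a rough sketch, with the function name and signature just illustrative:

```cpp
#include <cmath>

// Normalizes a packed x,y,z float array in place and returns how many
// normals were not unit-length to begin with - a cheap exporter sanity
// check: a nonzero return means the source data was already suspect.
int renormalize(float* normals, int count, float tolerance = 1e-3f)
{
    int suspect = 0;
    for (int i = 0; i < count; ++i)
    {
        float* n = normals + i * 3;
        float len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len == 0.f)
            continue; // degenerate normal - leave it for the artist to fix
        if (std::fabs(len - 1.f) > tolerance)
            ++suspect;
        n[0] /= len;
        n[1] /= len;
        n[2] /= len;
    }
    return suspect;
}
```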

If you really want to go the route, this can be a very interesting read: S. Jin, R. Lewis and D. West. A Comparison of Algorithms for Vertex Normal Computation.

I'm not sure about your code, and what you refer to as the "old index array", or "vertex duplication list". When you regenerate geometry normals without changing the existing hard/smooth edge distinctions, you do not need to re-create any index structures. For indexed geometry, the algorithm goes roughly like this:

input:
    indexArray: a sequence of triplets of numbers indexing the vertexArray and defining a triangle list.
    vertexArray: the list of vertices to draw.
temporary work space:
    triNormals: the list of triangle normals.
    adjacent: a list of lists giving adjacency information. adjacent[v] is a list that stores the indices to the triangles that v is a part of.

foreach(triangle t in indexArray)
{
    // compute the normal of the triangle t, taking direction from triangle winding order
    triNormals[t] = cross product of two face edges;

    // mark which vertices this triangle is adjacent to. This step is so that we can
    // later answer queries like "given a vertex v, what are all the triangles
    // it is part of?"
    foreach(vertex v in t)
        adjacent[v].insert(t);
}

foreach(vertex v in vertexArray)
{
    v.normal = 0;
    foreach(triangle t in adjacent[v])
        v.normal += triNormals[t];
    normalize(v.normal);
}


Now, in the above we don't group together vertices which have the same position but different other attributes ("duplicated vertices"), which (depending on what kind of attributes you have) logically corresponds to preserving the hard/smooth edge distinction that was present in the original geometry. If you want to make all edges smooth, or to reconstruct the hard edge/smooth edge categorization by yourself, you should make adjacent a map of lists instead of a list of lists, where you use the vertex position as the key.
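To make that concrete, a compilable sketch of the position-keyed variant might look like this. Note that instead of storing per-position triangle lists, I accumulate the summed face normals directly into the map, which comes out the same for plain averaging (all names here are illustrative):

```cpp
#include <cmath>
#include <cstddef>
#include <map>
#include <tuple>
#include <vector>

struct Vec3 { float x, y, z; };

// Key on the vertex position so that duplicated vertices (same position,
// different UVs/normals) end up sharing one accumulator - this is what
// makes every edge smooth even across attribute seams.
using PosKey = std::tuple<float, float, float>;

std::vector<Vec3> smoothNormalsMerged(const std::vector<Vec3>& positions,
                                      const std::vector<unsigned>& indices)
{
    std::map<PosKey, Vec3> accumulated; // summed face normals per unique position
    auto key = [](const Vec3& p) { return PosKey{ p.x, p.y, p.z }; };

    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        const Vec3& a = positions[indices[i]];
        const Vec3& b = positions[indices[i + 1]];
        const Vec3& c = positions[indices[i + 2]];
        // face normal from winding order: cross product of two edges
        Vec3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
        Vec3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
        Vec3 fn{ e1.y * e2.z - e1.z * e2.y,
                 e1.z * e2.x - e1.x * e2.z,
                 e1.x * e2.y - e1.y * e2.x };
        for (int k = 0; k < 3; ++k)
        {
            Vec3& acc = accumulated[key(positions[indices[i + k]])];
            acc.x += fn.x; acc.y += fn.y; acc.z += fn.z;
        }
    }

    // Every vertex, duplicated or not, reads back the normal of its position.
    std::vector<Vec3> normals(positions.size());
    for (std::size_t v = 0; v < positions.size(); ++v)
    {
        Vec3 n = accumulated[key(positions[v])];
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.f) { n.x /= len; n.y /= len; n.z /= len; }
        normals[v] = n;
    }
    return normals;
}
```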

Even though the code is simple, manually generating normals can be a bit of a sad and tedious process. It should only ever be done when you either have full smoothing group information from the source object, or if you procedurally generated the whole geometry in code (e.g. terrain), or if the model did not come with any normals to start with (ew!). I have to say again, can't really recommend this.

Here's a story why: (might go wonderfully off-topic in your case, but this is just to really explain why)

In the simplest of cases, imagine a cube. All of its edges are hard edges, meaning that if you take any corner, its vertex is specified three times in the buffer, once for each normal direction of the cube. Now, if you simply go and generate smooth-edged normals for the whole cube, you'll have each corner normal point diagonally out of the cube, which is very much unintended.
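You can see that diagonal drop straight out of the arithmetic - at a corner the three face normals are the coordinate axes, and their normalized sum is (1,1,1)/sqrt(3). A throwaway sketch:

```cpp
#include <cmath>

// Averaging a cube corner's three face normals (the coordinate axes)
// and normalizing yields the diagonal (1,1,1)/sqrt(3).
void cornerNormal(float out[3])
{
    // sum of (1,0,0), (0,1,0) and (0,0,1)
    float sum[3] = { 1.f, 1.f, 1.f };
    float len = std::sqrt(sum[0] * sum[0] + sum[1] * sum[1] + sum[2] * sum[2]);
    for (int i = 0; i < 3; ++i)
        out[i] = sum[i] / len;
}
```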

Now, you're smart and do the edge-angle-threshold kind of check to generate your automatic smoothing group information and remove the above problem with the algorithm. But soon you find that your code breaks when your artists used discontinuous UV mappings for some odd reason, and your vertex merge step just goes and fuses together everything that's close enough, losing the UV information.

So, quickly you fix that issue and make discontinuous UVs (and other discontinuous vertex attributes) go into separate smoothing groups. But before you even have the fix in, you realize that lots of automatic UV unwrappers do produce discontinuities, so you realize you have to just duplicate the vertices but put them in the same smoothing group anyway. Fine.

Then, you find that one of your artists just had to have the disco ball effect in the game (see the first page in this tutorial), and you realize you have no way of making the automatic edge threshold detector pick that up, and since you now have the "discontinuous UV => not a smoothing group edge" feature in, the artist can't use discontinuous UVs to signal hard edges. So, you're smart again and go "hey, let's make this an option in the exporter", and in the end you have a configurable threshold value in the exporter, or a boolean flag "[x] Hard edges on UV discontinuities", or worse, both. But having a global option will not even allow you to combine both techniques, so you're limited there.

All of this manual tuning was supposed to make your pipeline more robust, since, you know, automatically generating your content means more power and flexibility, right? But instead, you now have two more need-to-remember flags, and the next summer the new intern/junior artist comes to you complaining about why the tools are broken and his character looks like it's from 1990 or SNES era. Good grief.

Instead, the best option is not to waste time with this (unless you have one of the cases explained above), and to spend your time to implement an exporter that properly reads the normal data as well. The normals are supposed to belong to the artists, and when something looks wrong in the game, it should be you the programmer going to the artists yelling "this looks wrong, fix it!" and not the other way around (a sign of a good tool is when you can always blame the artists ;).

Exporter stages like double-checking the unitarity of normals and consistent winding orders (and that normals match the winding order) are of course good for robustness. Also, make the tool yell out to the artist whenever their data is degenerate in some way - that'll make them learn.

(No, seriously, I love our artists in house <3 ).
Wow, that's a mother of a post. I'm only replying to let you know I've seen it and have the decency to say thank you! No doubt I'll have more questions once I've read through it all. Thanks so much. I can see you know your subject back to front, inside out, so I'm going to just do whatever you say basically. I will probably finish this exercise purely for academic reasons.

Sorry for the bad explanation. I would really need to post a full diagnosis of what I'm doing for it to make sense. When I've read your post I'll reply properly. Thanks again ;o)
Quote:Original post by clb
(a sign of a good tool is when you can always blame the artists ;)


OT: Quote of the day =)

This topic is closed to new replies.
