What is a good average vertex-per-face count?

6 comments, last by Ravyne 9 years, 3 months ago

Hello

I'm currently investigating optimizations for a game project. The vertex processing is fairly expensive. One thing that struck me is that the average vertex-per-face count seems rather high: it's currently 1.2. Theoretically, a mesh could average around 0.5 if each vertex were shared between six triangles, since each triangle has 3 vertices (1/6 * 3 = 0.5). Of course this isn't achievable in real life due to many factors such as hard edges, UV mapping etc.
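For reference, here's roughly how the ratio can be computed for a standard indexed triangle list (a minimal Python sketch; the function name is just illustrative):

```python
def verts_per_face(num_vertices, triangle_indices):
    """Average unique vertices per triangle for an indexed mesh."""
    num_faces = len(triangle_indices) // 3
    return num_vertices / num_faces

# A large, closed, grid-like mesh approaches 0.5: each vertex is shared
# by roughly 6 triangles, and each triangle has 3 corners, so 3/6 = 0.5.
# A single isolated triangle is the worst case: 3 vertices, 1 face.
print(verts_per_face(3, [0, 1, 2]))  # 3.0
```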

I tried to search for this online and on this forum but didn't find any good resources, so I'm asking anyone with an opinion on this subject. As a rule of thumb:

What would you think is a good scene average?

What is a good value for a character mesh?

What is a good value for a props mesh?

As an artist, besides keeping the vertex count down in low-poly modeling, are you applying any particular techniques to keep the vertex-per-face count down, e.g. UV mapping in certain ways?

Maybe someone knows of a webpage covering this issue as well?

Thanks a lot!


The vertex processing is fairly expensive.

The relation between the number of vertices and the number of faces doesn't really matter a lot; only the final vertex count does. By final vertex I mean after splitting vertices according to UVs, color, normals, tangent space etc., which can't be easily estimated in a modelling tool. The relation of vertices to faces depends more on the model topology, material variance etc. A simple sphere has a different ratio than a tree with leaves.
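To illustrate the "final vertex" point: after splitting, the GPU sees one vertex per unique combination of attributes at each triangle corner. A minimal sketch (Python; the data layout and function name are just for illustration, assuming each corner carries a position, UV, and normal):

```python
def count_final_vertices(corners):
    """corners: one (position, uv, normal) tuple per triangle corner.
    After splitting, one GPU vertex exists per unique combination."""
    return len(set(corners))

# Two triangles sharing an edge, identical attributes on the shared corners:
n = (0, 0, 1)
tri_a = [((0, 0, 0), (0, 0), n), ((1, 0, 0), (1, 0), n), ((0, 1, 0), (0, 1), n)]
tri_b = [((1, 0, 0), (1, 0), n), ((1, 1, 0), (1, 1), n), ((0, 1, 0), (0, 1), n)]
print(count_final_vertices(tri_a + tri_b))  # 4 -- the shared-edge corners merge

# A hard edge (different normal on one triangle) forces a split:
tri_b_hard = [(p, uv, (1, 0, 0)) for (p, uv, _) in tri_b]
print(count_final_vertices(tri_a + tri_b_hard))  # 6 -- nothing merges
```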

But to be honest, no artist would try to model in a way that reduces the number of shared vertices significantly, though they will try to optimize when possible. And having 1.2 vertices per triangle, with a worst case of 3, sounds neither bad nor good.

What would you think is a good scene average?

What is a good value for a character mesh?

What is a good value for a props mesh?

You can't really tell.

There are a lot of factors that count more than the number of vertices. E.g. most modern video cards will be able to render props with 500-1000 vertices faster than you can feed the rendering API, but this changes completely once the engine uses batching, instancing or a modern API (with less overhead). On last-gen hardware, cars in racing games sometimes had more than 60k vertices, characters 10k, a zombie in L4D2 ~2k (?). Is your model animated? How many bones are used? How many materials? How many textures?

Some hints:

1. If you work in a project context, you will be told the limits and requirements.

2. If you want to create some game art to show off, it is more important to show good skills and creativity: a good use of texture space, a sparse use of bones, a good mesh topology, avoidance of unnecessary surfaces etc.

3. If you want to write your own game and want to pin down your art requirements, then you need to create a (technical) art concept in the first place. What will the environment look like? Which art style will be used? Which target platform? What about the dynamic part of the scene (4 moving characters vs. 100 zombies)? What will the camera perspective be? Do you need higher-quality models for cut-scenes? Do a lot of prototyping.

Yeah, as said above, vertices per face is not an interesting metric. The total number of vertices and the total number of triangles are more important. If an artist needs, e.g., a hard edge on a seam to deliver some effect, the artist will need a hard edge on a seam and there's no way around it. Perhaps an opportunity to improve vertices per face might occur if, for some odd reason, two faces that share a vertex should have the same normal direction at the vertex but somehow ended up with normals that are almost identical but not quite, and as a result the vertex got (unintentionally) duplicated for the two adjacent faces. Although I imagine such mistakes would be quite rare in practice.
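One way to catch those accidental near-duplicate splits in an asset pipeline is to quantize the attributes before welding, so corners whose normals differ only by float noise hash to the same vertex. A hypothetical sketch (Python; the function name and the 3-decimal tolerance are just illustrative assumptions):

```python
def weld_key(position, normal, decimals=3):
    """Quantize attributes so nearly-identical normals weld together."""
    return (position, tuple(round(c, decimals) for c in normal))

# Two corners at the same position whose normals differ only by float
# noise collapse to one key after quantization:
a = weld_key((1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
b = weld_key((1.0, 0.0, 0.0), (1e-7, 0.0, 0.9999999))
print(a == b)  # True -- one final vertex instead of two
```

An intentional hard edge, with genuinely different normals, would still produce distinct keys and stay split.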

On the GPU there's this thing called the post-transform vertex cache. Before the vertex shader processes a new vertex, the GPU first checks whether it's in the cache already. It's much faster to fetch a vertex from the cache than to run the vertex shader. So if there's a lot of sharing of vertices between the faces, more vertices are fetched from the cache and the GPU works faster. So I disagree!

I don't think anyone is saying that the number of shared vertices doesn't have an impact; I think they're saying that it's not a factor you can control for -- you can't go to your artists and say something like "I need you guys to make sure you share more vertices" and expect anything other than an incredulous stare. The sharing factor is essentially a function of the subject matter and the level of detail, assuming the artists (or model-processing tools) aren't doing anything incompetent.

As a thought experiment, consider a simple cube: 12 faces and 8 vertices is 0.66... verts/face. You can get a much lower ratio by adding a vertex to the center of each side of the cube: 24 faces and 14 vertices is 0.583... verts/face. So you can achieve more sharing of vertices in a cube, but only by adding vertices, which is bad. Going the other direction, if you take away even one vertex from the original cube, it's no longer a cube. Thus the conclusion: more sharing is just more sharing; it does not produce an optimal model. 0.66... is the optimal sharing ratio for a cube -- other kinds of shapes have different optimal ratios.
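Working the cube numbers through (a triangulated cube has 6 quads x 2 = 12 triangles; adding a centre vertex per face gives 6 x 4 = 24 triangles):

```python
# Triangulated cube: 8 vertices, 12 triangles.
cube = 8 / 12
# Centre vertex added to each of the 6 faces: 14 vertices, 24 triangles.
subdivided = 14 / 24
print(round(cube, 3), round(subdivided, 3))  # 0.667 0.583
```

The ratio drops, yet the subdivided cube is strictly more work to render: 6 extra vertices and 12 extra triangles for the same shape.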

My gut instinct tells me that for any given subject matter (person, machine, etc.) and given level of detail (that is, the total vertex count), the law of averages says that the model will converge towards the ratio that's optimal, given those parameters. Again, assuming that the artist/processing tools aren't incompetent.

As to the cache, yes, you want to make effective use of it, but you need to understand how it works -- it doesn't remember every vertex that's been transformed anywhere in the model. It's been a long time since I've investigated, but last I remember the cache was only 16 verts deep. You don't benefit from the cache at all if you come back to a vertex 17 indices later. While it is the case that more shared vertices are likely to exercise the cache more, that doesn't necessarily translate to better overall performance, because you might be sharing more vertices at the cost of simply having more vertices, each of which has to be processed at least once. That's why the general approach to optimizing a mesh is to reduce vertex count instead of trying to increase sharing.
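That eviction behavior is easy to see with a toy simulation. A minimal Python sketch, assuming a 16-deep FIFO cache (real GPU caches vary in size and policy, so treat this as illustrative only):

```python
from collections import deque

def cache_miss_rate(indices, cache_size=16):
    """Simulate a FIFO post-transform vertex cache; return the fraction
    of index fetches that miss (i.e. re-run the vertex shader)."""
    cache = deque(maxlen=cache_size)  # maxlen gives FIFO eviction
    misses = 0
    for idx in indices:
        if idx in cache:
            continue  # hit: reuse the already-transformed vertex
        misses += 1
        cache.append(idx)
    return misses / len(indices)

# Revisiting a vertex within 16 indices hits; after eviction it misses:
print(cache_miss_rate([0, 1, 2, 0]))            # 0.75
print(cache_miss_rate(list(range(17)) + [0]))   # 1.0 -- 0 was evicted
```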


Sure, I'm fully aware of the limited size of the cache, and you can also run methods in code during asset loading/conversion that attempt to optimize the mesh for cache hits. We currently miss the cache 10% of the time, by the way, which doesn't seem too bad.

Of course, only concentrating on the vertex-per-face ratio would be stupid; I don't disagree with that, and it's well demonstrated in your example. However, I don't see a problem in considering it as well. I fully understand that different kinds of meshes will have different averages. On the other hand, if it turns out that all or many of the objects in a scene have an unusually high ratio, this would indicate that things can be optimized.

With this thread I was mainly interested in hearing people's opinions on this matter -- whether people have rough ideas of where the numbers should be, e.g. you usually have a budget for how many verts a main character should have, and in case the real number diverges considerably from it, the asset gets iterated on. For example, when I sorted our meshes by ratio, some high-poly meshes had over 2.5; to me that number is way higher than what you should usually expect. From what I've gathered from the replies so far, it has not been a concern of theirs. If you're modeling in a way where the ratio is naturally pretty low, that is not an issue, of course.

As a complete guess, I would say that 1 new vert per face is a fairly common value for average art.

As above, trying to get more bang out of the post-transform cache is usually a task for the engine/tools programmers, not something to bug artists about.
That said, on older consoles this was such a huge performance concern that when I worked on the Wii, our tools showed the artists a visualisation of how their faces had been auto-stripified (converted from a triangle soup to a tri-strip) by the tools, and they often did iterate on models to optimize the strip lengths.
That's not a practice I've seen since then though!

Think of it like this -- your goal in making greater use of the vertex cache is to avoid paying the cost of another vertex, right? You can also avoid the cost of another vertex simply by not adding another vertex to the model -- in fact, fewer vertices are better because it increases the relative occupancy of useful verts in the cache. Getting the most visual fidelity for the fewest verts is the first-order optimization, it'll give you the most bang for your buck.

The vertex cache is there to optimize the GPU's execution of whatever mesh it's handed. It's good to consider cache behavior, but it's a second-order optimization at best. You're just so much more likely to be hosed by having too many verts, or state changes, or API overhead, than by poor vertex cache utilization, and the workflow to exercise any control over it is so time-consuming, that it's not really worth thinking about.

Your tools should try to emit good patches/strips, and your artists should be vaguely aware of things they shouldn't do because it will prevent the tools from doing a good job.

If all you're really asking for is a means to identify outliers to investigate for iteration, you already have it: it's the model whose vert/face ratio is unusually high compared to similar models.


