# Efficient Art Production: Theory and Practice

By Andrew Maximov | Published Jan 23 2014 05:42 PM in Visual Arts
Peer Reviewed by (riuthamus, Hodgman, jbadams)

Tags: art, technical, efficiency, vertex, draw call, batch, UV, normal, theory, clean up, materials, smoothing groups, hard edges, fill rate, pipeline, triangulation, teamwork

Title image: Making sure your assets don't stink :) (An update of my 2009 article).

A video game artist's work is all about efficiently producing incredible-looking assets. In this article we won't talk about what makes an asset look good, but rather about what makes it technically efficient.

From here the article splits into two parts. The first is about the things you need to know to produce an efficient asset. The second is about the things you need to do to make sure the asset you've produced is efficient. Let's go!

# Part 1: Theory

## Brain Cells Do Not Recover

Even though most of the information to follow is about making your assets more engine-friendly, please don't forget that this is just an insight into how things work, a guideline for which side to approach your work from. Saving your own or your teammates' time is also extremely important. An extra hundred tris won't make the FPS drop through the floor, while work that goes smoothly makes for a happier team that can produce a lot more, a lot faster. Don't turn work into a struggle for anyone, including yourself. Always be mindful of the needs of the people who work with the assets you produce.

## Dimension 1: Vertex

Think back to geometry for a second. A single dot is just a dot: the simplest element of space. Moving up to 2-dimensional space, we can still operate with dots, but take two of them and we can define a line. A line is a building block of 2-dimensional space, and on closer inspection it is simply an endless number of dots placed alongside each other according to a certain rule (a linear function). Now move up a level again. In 3-dimensional space we can operate with both dots (vertices) and lines, but add one more dot to the two that defined a line and we can define a face. That face is a building block of 3-dimensional space, forming the shapes we can look at from different angles.

I'm pretty sure most of you are used to receiving a triangle count as the main guideline for creating a 3D model, and I think the triangle being the building block of 3-dimensional space has something to do with it. :)

But that's the human way of thinking. We humans also operate in a decimal numeral system, but hardware processors don't: it's just 0 and 1, binary, the most basic number representation there is. For a processor to execute anything at all, you have to break the task into the smallest, simplest operations it can solve consecutively. So to display 3D graphics you also have to get down to the basics. Even though a triangle is the building block of 3-dimensional space, it is still composed of 3 lines, which in turn are defined by 3 vertices. So it's not really tris that you are saving, but vertices.

The lower the tri count, the fewer vertices there are, right? Totally. But unfortunately, the number of tris is not the only thing affecting your vert count. There are also some underlying processes that are less obvious.
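To make the vertex sharing concrete, here is a minimal sketch (plain Python, made-up data) of how an indexed mesh stores a quad: four shared vertices plus six indices describing two triangles, instead of six standalone corner vertices.

```python
# A quad as an indexed mesh: the two triangles share an edge,
# so the edge's two vertices are stored only once.
vertices = [
    (0.0, 0.0, 0.0),  # 0: bottom-left
    (1.0, 0.0, 0.0),  # 1: bottom-right
    (1.0, 1.0, 0.0),  # 2: top-right
    (0.0, 1.0, 0.0),  # 3: top-left
]

# Each triple of indices is one triangle.
indices = [0, 1, 2,   # first triangle
           0, 2, 3]   # second triangle

triangle_count = len(indices) // 3
print(triangle_count)  # 2 triangles...
print(len(vertices))   # ...but only 4 stored vertices, not 6
```

This is why the vertex count, not the triangle count, is what the hardware ultimately pays for.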

A 3D model is stored in memory as a number of vertex-structure-based objects. "Structures", in object-oriented programming terms (figuratively speaking), are predefined groups of different types of data and functions composed together to represent a single entity. There can be thousands of instances of such entities, all sharing the same variable types and functions, just with different values stored in them. Such instances are called "objects". Here's a simplified example of what a vertex structure could look like:

```c
struct Vertex
{
    float position[3];  /* vertex coordinates */
    float color[4];     /* vertex color */
    float normal[3];    /* vertex normal */
    float uv1[2];       /* first UV set */
    float uv2[2];       /* second UV set */
    /* ... */
};
```


If you think about it, it's really obvious that vertex structures should only contain necessary data. Anything redundant becomes a great waste of memory once your scenes hit a couple dozen million tris.

That's why a single vertex structure only holds one set of each type of data. What does that mean for artists? It means a vertex can't have 2 different UV positions in the same UV set, or 2 normals, or 2 material IDs. But we've all seen how smoothing groups work, or applied multiple materials to an object, and that didn't seem to increase the vert count. That's only in your modeling package. Your engine treats the vertex data differently:

The easiest and most rational way to add that extra attribute to a vertex is to simply create another vertex in the exact same position.

Simply put, every time you set another smoothing group for a selection of polys or make a hard edge in Maya, invisibly to you, the number of border vertices doubles. The same goes for every UV seam you create, and for every additional material you apply to your model.

UDK used to automatically compare the number of imported vertices to the number generated upon import, and warn you if they differed by more than 25 percent. Whether it's 25, 50 or a gazillion percent doesn't really matter that much if your game runs at frame rate, but knowing this stuff might help you get there. Just don't be surprised if your actual vert count is 3 times what you thought it was after you set all the edges to hard and break/detach all your UV verts.
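The effect is easy to simulate. Below is a small sketch (plain Python, hypothetical helper name) that deduplicates per-corner data the way an exporter would: a quad with one smooth shell exports 4 vertices, but adding a UV seam along the shared edge pushes it to 6.

```python
def gpu_vertex_count(faces):
    """Count the vertices an engine would actually store:
    one vertex per unique (position, normal, uv) combination."""
    unique_corners = set()
    for face in faces:
        for corner in face:
            unique_corners.add(corner)
    return len(unique_corners)

# A quad made of two triangles. Corners are (position_id, normal, uv).
n = (0.0, 0.0, 1.0)  # one shared normal: everything is "soft"
smooth_one_shell = [
    [(0, n, (0.0, 0.0)), (1, n, (1.0, 0.0)), (2, n, (1.0, 1.0))],
    [(0, n, (0.0, 0.0)), (2, n, (1.0, 1.0)), (3, n, (0.0, 1.0))],
]

# Same quad, but the second triangle lives in a separate UV shell,
# so positions 0 and 2 get different UVs along the seam.
with_uv_seam = [
    [(0, n, (0.0, 0.0)), (1, n, (1.0, 0.0)), (2, n, (1.0, 1.0))],
    [(0, n, (0.5, 0.0)), (2, n, (0.5, 1.0)), (3, n, (0.0, 1.0))],
]

print(gpu_vertex_count(smooth_one_shell))  # 4
print(gpu_vertex_count(with_uv_seam))      # 6: the seam doubled 2 verts
```

Hard edges and extra material IDs split vertices by exactly the same mechanism: any corner whose attribute set differs becomes a new vertex.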

## Connecting the dots

This small chapter concerns the stuff that keeps those vertices together: the edges. The way they form triangles matters to an artist who wants to produce efficient assets, and not only because edges define shape: they also define how fast your triangles are rendered, in a pretty non-trivial way.

How would you render a pixel that sits right on an edge shared by 2 triangles? You would render the pixel twice, once for each triangle, and then blend the results. That leads to a pretty interesting conclusion: the tighter the edge density, the more re-rendered pixels you get, and that means a bigger render time. This issue should hardly affect the way you model, but knowing about it can come in handy in some specific cases.

Triangulation is a perfect example of such a case. It's a well-known issue that thin tris aren't all that good to render. But when triangulating, if you've made one triangle thinner, you've made another one wider. Imagine zooming out from a uniformly triangulated model: the smaller the object becomes on screen, the tighter the edge density and the bigger the chance of re-rendering the same pixels.

But if you forgo uniform triangulation and instead try to give every triangle the largest area possible (letting it cover more pixels), you end up with triangles of consecutively decreasing area. Then, when you zoom out again, the areas of higher edge density are limited to a much smaller number of on-screen pixels, and the smaller the object becomes on screen, the fewer potentially redrawn pixels it has. You could also work this the other way around and start by making the triangle edges as short as possible. Although trivial, it's an interesting way to inform some of your decisions while modeling. Make sure to check out some statistics on the subject; it's pretty fascinating.
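One way to quantify the "short edges" idea is total edge length: of two triangulations of the same polygon, the one with shorter interior edges concentrates less screen space along triangle borders. A small sketch (plain Python, regular hexagon, hypothetical helper names), assuming total diagonal length is a fair proxy:

```python
import math

def hexagon(radius=1.0):
    """Vertices of a regular hexagon on a circle of the given radius."""
    return [(radius * math.cos(math.radians(60 * k)),
             radius * math.sin(math.radians(60 * k))) for k in range(6)]

def diagonal_length(vertices, diagonals):
    """Total length of the interior edges a triangulation adds."""
    return sum(math.dist(vertices[a], vertices[b]) for a, b in diagonals)

verts = hexagon()

# Fan triangulation from vertex 0: long, skinny triangles.
fan = [(0, 2), (0, 3), (0, 4)]

# Alternating triangulation: fatter triangles, shorter diagonals.
zigzag = [(0, 2), (2, 4), (4, 0)]

print(diagonal_length(verts, fan))     # ~5.46
print(diagonal_length(verts, zigzag))  # ~5.20: shorter edges overall
```

Both triangulations produce four triangles from the same six vertices; only the edge layout differs.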

## Eating in portions is better for your health

Just as your engine draws an object triangle by triangle, it draws the whole scene object by object. For your object to be rendered, a draw call must be issued. While the CPU gathers information and prepares batches to send to the GPU, the GPU renders. What's important for us here is that if the CPU can't supply the GPU with the next batch by the time it's finished with the current one, the GPU has nothing to do. This means rendering an object with a small number of tris actually isn't all that efficient: you'll spend more time preparing for the render than on the render itself, and waste precious milliseconds your graphics card could be spending working its magic.

A frame from NVidia's 2005(?) GDC presentation

The number of tris a GPU can render while the next batch is being prepared varies per engine and GPU, but I'd say objects of up to 700 tris probably wouldn't benefit much from further optimization.

Defining such a number for your project would be a great help for your art team. It could save a lot of production time and even some render time, and it'll serve as a guideline for artists in production situations.
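The break-even point can be sketched with a toy cost model. All numbers here are purely hypothetical; real per-call overhead and GPU throughput vary wildly by engine and hardware:

```python
CPU_MS_PER_DRAW_CALL = 0.01   # hypothetical CPU cost to prepare one batch
GPU_TRIS_PER_MS = 50_000      # hypothetical GPU triangle throughput

def frame_cost_ms(objects):
    """Each draw call costs at least the CPU prep time: a tiny object
    leaves the GPU idle while the CPU prepares the next batch."""
    return sum(max(CPU_MS_PER_DRAW_CALL, tris / GPU_TRIS_PER_MS)
               for tris in objects)

# Break-even: below this tri count, CPU overhead dominates the call.
break_even = CPU_MS_PER_DRAW_CALL * GPU_TRIS_PER_MS
print(break_even)  # 500 tris under these made-up numbers

# Two 300-tri props as separate draw calls vs. merged into one mesh:
print(frame_cost_ms([300, 300]))  # 0.02 ms: both calls are CPU-bound
print(frame_cost_ms([600]))       # 0.012 ms: one call, GPU-bound
```

The model is crude, but it captures why merging tiny objects helps: the merged mesh keeps the GPU busy instead of stalling on call overhead twice.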

You'd want a less detailed model only when there's really no point in making it more complex and you'd have to spend extra time on things no one will ever notice. Luckily, this works the other way around too: you wouldn't want to make your model lower-poly than this threshold unless you have specific reasons. Plus, the reserve of tris could be well spent shaving off those invisible vertices mentioned earlier. For example, you can add some chamfered edges and fearlessly assign one smoothing group to the whole object (make all the edges soft). It may sound weird, but having smoother, more tessellated models can sometimes actually help performance.

If you'd like your game to be more efficient, try to avoid making very low-polygon objects into single independent assets. If you are making a tavern scene, you really don't want every fork, knife and dish hand-placed in the game editor. You'd rather combine them into sets, or even combine them with a table. Yes, you'd have less variety, but done right, no one will notice.

But this by no means implies you should run around applying TurboSmooth to everything. There are things to watch out for, like stencil shadows for example. Plus, some engines combine multiple objects into a single batch, so it's always best to talk to your programmers first.

After batching, another very important thing that can make or break your performance is the culling system your engine uses. If your engine doesn't cull objects outside your frustum, you're doing a lot of unnecessary, invisible rendering. If you double your field of view, you've most likely doubled the number of objects you have to render each frame. Finally, if your engine doesn't cull objects that are obstructed by other objects, then technically you're rendering a lot more than you could be. So it's not only about tri counts, but about being selective in what to render.
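Frustum culling's payoff is easy to see with a toy count. The sketch below (plain Python, deliberately contrived scene) places one object per degree around the camera and counts how many fall inside the horizontal field of view; doubling the FOV doubles the visible set exactly in this idealized layout:

```python
def visible_count(fov_degrees, object_angles):
    """Count objects whose direction lies inside the horizontal FOV."""
    half = fov_degrees / 2
    return sum(1 for a in object_angles if -half <= a < half)

# One object in every direction around the camera, one per degree.
scene = range(-180, 180)

print(visible_count(60, scene))   # 60 objects survive culling
print(visible_count(120, scene))  # 120: double the FOV, double the work
print(len(scene))                 # 360 objects total without culling
```

Occlusion culling would cut the visible set down further; the point is simply that render cost scales with what survives culling, not with what exists in the level.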

## Vertex VS Pixel

If you ask an artist what the main change in game art production has been over the last 10 years, I'm pretty sure the most common answer would be the introduction of per-texel shading and the use of multiple textures to simulate different optical qualities of a single surface. Sure, polycounts have grown, animation rigs now have more bones, and procedurally generated physical movement is more widespread. But normal and specular maps are the ones contributing the biggest visual difference. And that difference comes at a price: in modern engines, most of the render time is spent processing and applying all those endless maps based on the direction of the incoming lights and the camera's position.

From the viewpoint of an artist who strives to produce efficient art, this means the following: optimizing your materials is much more fruitful than optimizing vertex counts. Adding an extra 10, 20 or even 500 tris isn't nearly as taxing on performance as applying another material to an object. Shaving hundreds of tris off your model will hardly ever bring a bigger win than deciding your object could do without an opacity map, or a glow map, or bump offset, or even a specular map. You can shave 2-3 million triangles off a level just to gain around 2-3 fps. It's not the raw tri count that affects performance the most, but the number of draw calls and the shader and lighting complexity. Then there are vertex transformation costs when you have really complex rigs or a lot of physically controlled objects.

Shader blending and lighting modes also have a lot to do with performance. Alpha-blended materials cause a lot more stress than opaque ones. Vertex lighting is faster and cheaper because you're going to have vertex colors set on your vertices anyway. And if you're using deferred rendering, you don't even have to worry about that, because your engine calculates lighting on the final frame (which has a constant number of pixels) rather than for every object.
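The deferred advantage can be sketched with another toy model. The numbers are invented; the point is only that forward shading pays the lighting cost per shaded surface pixel (including overdraw) per light, while deferred pays it once per screen pixel:

```python
SCREEN_PIXELS = 1_920 * 1_080

def forward_cost(lights, overdraw):
    """Every shaded surface pixel is lit once per light."""
    return SCREEN_PIXELS * overdraw * lights

def deferred_cost(lights, overdraw):
    """The G-buffer fill pays for overdraw once; lighting then runs
    on the final frame's constant number of pixels."""
    return SCREEN_PIXELS * overdraw + SCREEN_PIXELS * lights

# With 8 lights and 2x overdraw, deferred does far less shading work:
print(forward_cost(lights=8, overdraw=2))   # 33,177,600 lit pixels
print(deferred_cost(lights=8, overdraw=2))  # 20,736,000
```

Real renderers are far more nuanced (shadowing, light culling, bandwidth costs of the G-buffer), but the scaling behavior is the reason deferred engines shrug off large light counts.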

And finally: post-processing. Doing too many operations on your final rendered pixels can also slow your game down significantly.

## Things Differ (Communication is King)

As with everything in life, there's no universal recipe: things differ. The best thing you can do is figure out what your specific case looks like. Get all the information you can from the people responsible: no one knows your engine better than the programmers. They know a lot that could be useful for artists, but sometimes, for lack of dialogue, that information stays with them. Miscommunication leads to problems that could have been easily avoided, to unnecessary work, and to truckloads of time that could have been spent much more wisely. Speak up: you're all making one game, and your success depends on how well you cooperate. Asking has never hurt anyone, and it's actually the best way to get an answer ;)

Dalai Lama once said:

“Learn your rules diligently, so you will know where to break them.”

And I can do nothing but agree. Obeying the rules all the time is the best way to never do anything original. All rules and restrictions have solid arguments behind them, and they fit some general conditions. But conditions vary. Look closer, and every other asset could be an exception to some extent. Ideally, when facing a tricky situation, artists should be able to make decisions on their own, sometimes even break the rules, if they know the project will benefit and that breaking them won't hurt anything. But if you don't know the facts behind the rules, I doubt you'll ever dare to break them. So I seriously encourage you to take an interest in your work. You're making games, not just art.

# Part 2: Practice

I hope this wall of text made some sense to you. All this information on how stuff works is nice to know, but it's not exactly what you'd use on a day-to-day basis. As an artist, I'd love to have a place where all the "hows" are clearly stacked, without any distracting information, with the "whys" serving as a reference you can turn to in case something becomes unclear.

Now let's imagine you're finally done with an asset and want to make sure things are clean and engine-friendly. Here's the list of things to check, one by one:

## - Deleted History, Frozen Transformations/Reset XForm, Collapsed Stack

Transformation information stored in a model can prevent it from being displayed correctly, making all further checks useless. It's also simply unacceptable for import into some engines, and even if it does import, the object's orientation and normal directions could be messed up.

In Maya, don't forget to select your object.

## - Inverted Normals

Mirroring (scaling by a negative number), and a ton of other operations actually, can turn your vertex normals inside out. You need the right settings in your modeling application to spot such problems. In 3Ds Max, go to object properties and turn “Backface Cull” on, then examine your mesh.

In Maya, just disable “Double Sided Lighting” in the Lighting tab (if it's missing, hit “Shift+M”), then make sure “Backface Culling” is disabled in the Shading tab. Then, when you view your model shaded, all the places with inverted normals will appear black.

## - Mesh splits/ Open Edges

It sometimes happens that while working we forget to weld some vertices, or accidentally break/split some. Not only can this cause lighting and smoothing issues, it's also a waste of memory and pretty much a sign of sloppy work. You wouldn't want that.

Open edges are an issue you want to think twice about, not only because in some cases they put additional stress on dynamic lighting computations, but because they seriously reduce the reusability of your asset. Simply closing the gap and finding a place on your texture to lay the new shell onto is usually the preferable option.

To detect both of these issues in 3Ds Max, simply choose border selection mode (“3” by default) and hit select all (“Ctrl+A” by default).

In Maya, you can use a handy tool called “Custom Polygon Display”: choose the “Highlight: Borders” radio button and apply it to your object.

## - Multiple edges/ Double faces

This double stuff is a nasty bugger, since it's almost impossible to spot unless you know how. And sometimes, when you modify things, you can be very surprised by them not behaving the way they should.

I can hardly remember having them in Max, but just to be sure I always apply an STL Check modifier: tick the appropriate radio button and check the “Check” checkbox.

In Maya, the “Cleanup” tool is very useful: just check “Nonmanifold geometry” and “Lamina faces” and hit Apply.

## - Smoothing groups/soft-hard edges

Here you want as few smoothing groups/hard edges as possible. Consider making everything smooth and just adding some extra chamfers where bad lighting issues start to appear.

There's one more issue to watch out for, more in Maya than in 3Ds Max (since Max uses the smoothing group concept): edges on planar surfaces appear smooth even when they are not. To see which edges are actually unsmoothed, the “Custom Polygon Display” tool comes in handy again: just click the “Soft/Hard” radio button right alongside “Edges:”.

## - UV splits

You want your UVs to have the fewest seams possible, while staying nice to work with. No need to go over the top fighting distortion here; just keep it clean and logical.

Broken/split vertices are a thing to watch out for too. 3Ds Max indicates them with a different color inside the “Edit UVWs” window.

In Maya's “UV Texture Editor” window, there's a Highlight Edges button that simply checks the “Highlight: Texture Borders” checkbox in the “Custom Polygon Display” tool for you.

## - Triangulation

When checking triangulation, first of all make sure all the triangles accentuate the shape you're trying to convey rather than contradict it. Then give it a quick glance to check that the triangulation is efficient.

Also: some engines have their own triangulation algorithms and will re-triangulate a model on import, with no concern for how you thought your triangulation looked. In trickier places this can lead to a messy result, so take care to investigate how your engine works and connect the vertices by hand if necessary. By the way, Maya more or less helps you find such places if you check “Concave faces” in “Cleanup Options”. In 3Ds Max you'll just have to keep an eye out yourself.

## - Grid Alignment/ Modularity/Pivot point placement

Since the last generation of video games, graphics production costs have increased significantly, so modularity and extensive reusability are now very common. Ease of implementation and combination with different assets can save a lot of time, maybe not even yours, so don't make your level designers hate you: think about it.

## - Material Optimization

Evaluate your textures and materials again, since they're probably the biggest source of optimization. Maybe the gloss map doesn't deliver at all and specular does great on its own? Maybe the asset would still hold up with a specular map at half the resolution? Or maybe you can reuse the diffuse as the specular, since it's just a small background asset? Maybe that additional tileable normal isn't necessary at all? Or maybe you could go with a grayscale spec and use the spare channels for something else?
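That last idea, channel packing, can be sketched like this (plain Python over nested lists, hypothetical helper name; in production you'd do this in your texturing tool or with an image library):

```python
def pack_rgb(spec_gray, gloss_gray, ao_gray):
    """Pack three grayscale maps into the R, G and B channels
    of a single texture, freeing two whole texture slots."""
    return [[(s, g, a) for s, g, a in zip(srow, grow, arow)]
            for srow, grow, arow in zip(spec_gray, gloss_gray, ao_gray)]

# Three tiny 2x2 grayscale maps (values 0-255):
spec  = [[255, 128], [64, 0]]
gloss = [[10, 20], [30, 40]]
ao    = [[200, 200], [200, 200]]

packed = pack_rgb(spec, gloss, ao)
print(packed[0][0])  # (255, 10, 200): one texel carries all three maps
```

The shader then reads each quality from its channel, and the material samples one texture instead of three.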

## - Lightmapping possibility

If your engine supports lightmapping, make sure you have a spare set of uniquely unwrapped UVs that fully meets your engine's requirements.

# Afterword

Please remember: no matter how technical and optimized your model is, it's meant to look beautiful first of all. No optimization can excuse an ugly model. Optimization is not what artists do. Artists do art, and that's what you should concentrate on. The beauty of technicalities is that they can be precise to a pretty large extent. That means you can write them down, memorize them, and not bother thinking about them again for quite a while, just remember them instead. But it's art where you have to evaluate and make millions of decisions every single second. All this text is not that important, and that's exactly why it is written down; really important things aren't so easily expressed in words. And I hope that, if you don't have to bother thinking about all the tech stuff at least for some time, you'll concentrate on things much more important, and much prettier.

Cheers,
Andrew

Andrew Maximov - Senior Artist
http://www.artisaverb.info/
After the first year of getting my degree in International Relations and Economy I dropped out of University to make games.
5 years, 3 countries and 10 projects later my work has been featured a number of times on the front pages of some of the biggest game art communities in the world, written about in Game Developers Magazine and awarded the grand prix by Montreal International Game Summit Art Gallery. I gave about a dozen talks on beauty and challenges of video game art production, speaking for universities and academies around the world including the Game Developers Conference and Game Developers Association in San Francisco and National Animation & Design Center in Montreal.
Right now, I lend my expertise to a highly secret project as a Senior Artist at KIXEYE in San Francisco and am always happy to help out the community.

> Even though most of the information to follow is about making your assets more engine friendly, please don't forget that this is just an insight into how things work, a guideline to know which side to approach your work from. Saving you or your teammates time is also extremely important. An extra hundred of tris won't make FPS drop through the floor. Feeling work go smooth would make for a happier team that is able to produce a lot more a lot faster. Don't turn work into struggle for anyone including yourself. Always take concern in needs of people working with the assets you produce.

I couldn't have said it better myself. I've said this before: many people aspire to be great 3D modelers but, when put to purpose, fail to produce something of worth because of a real lack of understanding of engines and of how an asset must work in accordance with them. Each engine has different requirements, and those requirements very much define the parameters for your pipeline. It is hard to do, but I would urge all artists to spend some serious time learning the different engines and trying to plan all their assets around them. It will save numerous reworks and thus hours on your end. Try to speak with your designers to ensure you know what will be required of you. At one point I had gone through my main character model but realized I needed to add another 10 bones. This caused 2 days of rework and animation tweaking due to not planning for those bones in the original project.

> Connecting the dots
>
> This small chapter here concerns the stuff that keeps those vertices together – the edges. The way they form triangles is important for an artist, who wants to produce efficient assets. And not only because they define shape, but because they also define how fast your triangles are rendered in a pretty non-trivial way.
>
> How would you render a pixel if it's right on the edge that 2 triangles share? You would render the pixel twice, for both triangles and then blend the results. And that leads us to a pretty interesting concept, that the tighter edge density, the more rerendered pixels you'll get and that means bigger render time. This issue should hardly affect the way you model, but knowing about it could come in handy in some other specific cases.
>
> Triangulation would be a perfect example of such a case. It's a pretty known issue, that thin tris aren't all that good to render. But talking about triangulation, if you've made one triangle thinner - you made another one wider. Imagine if we zoom out from a uniformly triangulated model: the smaller the object becomes on screen, the tighter the edge density and the bigger the chance of rerendering the same pixels.

Great advice here as well. When you're done with your model, go over each and every line and see where you can reduce geo. I rethink most of my work over and over to ensure the best possible layout. This not only makes for a clean model, it helps reduce issues with seams and UV mapping.

> \- Deleted History, Frozen Transformations/Reset XForm, Collapsed Stack
>
> Transformation information stored in a model could prevent it from being displayed correctly, making all further checks useless. Plus it's simply unacceptable for import into some engines. And even if it does import, the object's orientation and normal direction could be messed up.

This, this, this, and then again THIS! Seriously... 100% so true.

FINAL THOUGHTS:

You have summed up a lot of what I feel should be good for a beginning artist. Great points and great thoughts. I rather enjoyed reading this article and would love to see/read more.

> How would you render a pixel if it's right on the edge that 2 triangles share? You would render the pixel twice, for both triangles and then blend the results.

I do not believe this to be the case.

One important concept in hardware and software rasterizer design is "fill rules" or "rasterization rules," which include designing your rasterizer to never leave out a pixel and never draw a pixel twice along triangle edges.

This means that only the top and left facing edge pixels of a triangle are rendered. Thus, no matter how small the triangles, and no matter their topology, you'll simply never have a pixel that is drawn twice for two triangles with a shared edge.

Here are some references:

http://fgiesen.wordpress.com/2011/07/06/a-trip-through-the-graphics-pipeline-2011-part-6/

> [...] which is fill rules; you need to have tie-breaking rules to ensure that for any pair of triangles sharing an edge, no pixel near that edge will ever be skipped or rasterized twice. D3D and OpenGL both use the so-called “top-left” fill rule; the details are explained in the respective manuals.

http://graphics-software-engineer.blogspot.de/2012/04/rasterization-rules.html

> A triangle rasterization rule (in Direct3D, OpenGL, GDI) defines how vector data is mapped into raster data. The raster data is snapped to integer locations that are then culled and clipped, and per-pixel attributes are interpolated before being passed to a pixel shader. Top-left rule ensures that adjacent triangles are drawn once.

http://msdn.microsoft.com/en-us/library/windows/desktop/cc627092%28v=vs.85%29.aspx

> Any pixel center which falls inside a triangle is drawn; a pixel is assumed to be inside if it passes the top-left rule. The top-left rule is that a pixel center is defined to lie inside of a triangle if it lies on the top edge or the left edge of a triangle. [...] The top-left rule ensures that adjacent triangles are drawn once.
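The quoted rule can be demonstrated with a tiny software rasterizer. This is a minimal sketch of the top-left fill rule (y grows downward, clockwise winding, hypothetical helper names), not how any particular GPU implements it:

```python
def orient2d(a, b, p):
    """Signed area test: > 0 when p is on the interior side of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def is_top_left(a, b):
    """Top edge: horizontal, pointing right. Left edge: pointing up.
    (Conventions for y-down screen space with clockwise winding.)"""
    return (a[1] == b[1] and b[0] > a[0]) or b[1] < a[1]

def covered_pixels(tri, width, height):
    """Pixel centers covered by a triangle under the top-left rule."""
    pixels = set()
    edges = [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)
            inside = True
            for a, b in edges:
                w = orient2d(a, b, p)
                # On-edge pixels belong to the triangle only if the
                # edge is a top or left edge: this breaks every tie.
                if w < 0 or (w == 0 and not is_top_left(a, b)):
                    inside = False
                    break
            if inside:
                pixels.add((x, y))
    return pixels

# Two triangles sharing the diagonal of a 4x4 pixel square:
t1 = [(0, 0), (4, 0), (4, 4)]
t2 = [(0, 0), (4, 4), (0, 4)]

p1, p2 = covered_pixels(t1, 4, 4), covered_pixels(t2, 4, 4)
print(len(p1), len(p2))  # 10 6
print(p1 & p2)           # set(): no pixel is drawn twice
print(len(p1 | p2))      # 16: and no pixel is skipped
```

The pixel centers lying exactly on the shared diagonal go to the triangle whose version of that edge is a left edge, so the square's 16 pixels are each rasterized exactly once.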

Hey there guys, thank you very much for the comments!

riuthamus, thank you very much man. More is coming

Cozzie, thank you buddy, much appreciated!

Great find, Cygon! This is very interesting. I'm glad gamedev.net has so many programming-savvy folks.

I've mailed Emil Persson, the author of the post I'm referencing, and I'm going to go through all the links you've shared, because it seems like a great read and might inform some ideas/explanations. The paper and his post date all the way back to 2009, so I'm wondering if things might have changed since then.

Still, the advice on triangulation doesn't seem harmful to me; if anything, it still feels like it could be doing some good, just for the wrong reasons. It would be interesting to get to the bottom of this, though. Thank you very much once again!

@AndrewMaximov: Your advice is sound, and Emil is right too: longer edges == lower performance (and, for extremely skinny triangles, even precision issues in certain algorithms).

Only the reason is slightly different. The "quads" in Emil's article are blocks of 2x2 pixels by which graphics hardware steps through polygons, two scanlines at a time (this allows early rejection of entire blocks if obscured, helps align memory accesses, and probably some other things). So the more screen space is touched by polygon borders, the more quads have to be evaluated 2, 3 or even 4 times in the worst case.

Depending on the hardware's design, this also means a rectangle from (2, 2)-(10, 10) is usually rendered faster than one from (3, 3)-(11, 11): even though it's the same number of pixels, nine more 2x2 pixel blocks ("quads") have to be processed.
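The quad-counting arithmetic can be checked directly. This sketch assumes half-open pixel rectangles, i.e. (2, 2)-(10, 10) covers pixels 2 through 9 on each axis:

```python
def quads_touched(x0, y0, x1, y1):
    """Number of 2x2 pixel blocks overlapped by the half-open
    pixel rectangle [x0, x1) x [y0, y1)."""
    quads_x = (x1 + 1) // 2 - x0 // 2
    quads_y = (y1 + 1) // 2 - y0 // 2
    return quads_x * quads_y

print(quads_touched(2, 2, 10, 10))  # 16: the rectangle is 2-aligned
print(quads_touched(3, 3, 11, 11))  # 25: same 8x8 pixels, 9 extra quads
```

Shifting the same 8x8 rectangle off the 2-pixel grid makes it straddle an extra row and column of quads on every side it overhangs.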

But one can hardly optimize for such things, apart from maybe making edges as short as possible (which I find difficult too, since one usually builds models from 4-sided polygons without knowing in which direction they'll be split). My own real-world reason for subdividing cleanly is texturing: having extremely skinny triangles makes it hard to tweak texture coordinates, since you can hardly see the difference.

