AndrewMaximov

  1. Physically Based Shading For Artists

    Thank you very much, Azaral! You should start with node-based shader creation tools like the ones in UDK or Unity (via plugin). It's really not that hard to get into, but it really opens your mind to the possibilities of shader work :) There are also quite a few useful DVDs from Eat3D and 3D Motive. Best of luck! Andrew
  2. Physically Based Shading For Artists

    Hello Ladies and Gentlemen! I'm happy to present a quick yet thorough introduction to the world of next-gen physically based shading, made specifically for artists. My only hope in doing this is that more artists get to worry less about technical issues. Believe me, core artistic values will still be key in the coming generation of computer graphics, so just listen to this and forget about it until you get an actual tool to test all your new knowledge on. Concentrate on colors, lighting and composition instead, and when the time comes you'll harness all the next-gen awesomeness to do something truly outstanding! Regards, Andrew [embedded video] P.S. As a bonus, I'm giving away a next-gen asset with some hints on how to author the maps: [embedded video]
  3. Ditching Diffuse Maps

    Thank you very much, JJD. Well, if there's anything this experiment proves, it's that you can at least be a bit of both ;) Programming is great fun too, though. I like it as a break from art. Riuthamus, ask away here, man. I'll do my best to answer. Thank you, Cozzie. Good luck with your engine!
  4. Ditching Diffuse Maps

    One of the most amazing things that the current generation brought with it, for me as an artist, was the concept of shaders. No longer did we describe surfaces merely by flat images that already incorporated the majority of the lighting. Our materials became vastly richer, with much more dynamic per-pixel lighting that, hands down, had the biggest impact on the visual definition of the current generation of real-time graphics. But the programmable pixel and vertex pipelines were capable of much more than just calculating normal map and specular contributions. That wide array of things you could do with a pixel was now at the fingertips of a much broader audience, including the visual content creators themselves, who finally got a chance to become the architects of their own tech. From the animated wet surfaces of a Modern Warfare ship level to the heightmap-based vertex and up-aligned blending popularized by Uncharted 2, shaders further drove the visual splendor that is this generation of games. But such advancements came at a price.

Technological: the number of textures needed to define a single surface grew exponentially, as did the amount of video memory needed for their allocation.

Production: the amount and sophistication of content skyrocketed. Budgets are through the roof in this generation of games, and team sizes in some cases are plain ridiculous, coming close to a thousand people. Of course it's not entirely the fault of shaders, but art asset production is still one of the most expensive parts of your budget.

Now, as an industry and, I guess, as a consumerist society, we are always about more stuff. The number one thing your average Joe Gamer seems to want from every new game is better graphics. Most people naturally see that as more polygons and crisper textures, since those seem like the things that drive the visual quality of games.
And they certainly do, but unfortunately they have a pretty remote impact on the visual pleasure players experience with our games. When I come back to the old games I used to enjoy 10 or even 5 years ago, I'm amazed at just how crude the technology was back then: the resolutions, the polycounts, the effects. Yet somehow I, and the millions of people who played those games, managed to enjoy them on all levels, including the visual. How so? Some might say that people are always dazzled by the top technology of their time. But I tell you this: people are dazzled by Beauty. As an artist I've always been fascinated by the concept of Beauty, and I study it with everything I do. I've been kindly invited by some of the top universities and art academies of the country to share my findings on that particular subject, so you could say some of my ideas about Beauty were "ok". Now, in this article I'm not going to discuss what Beauty is, but I can definitely tell you what it isn't: Beauty never lies in the details.

Still frame from my "Analyzing Beauty" lecture; image by Zhu Haibo

I love this image because it illustrates the point so vividly. It is a quick sketch with very little in terms of detail, yet it makes it painfully obvious why this is a sight worth seeing. The amazing lights and colors take your breath away, completely disregarding the fact that there isn't a remotely detailed object in the whole image. The visual world around us is infinitely complex. All representational visual art is merely an approximation, composed with the limited resources we have at our disposal. We can never have enough resources to recreate every single process that shapes the world around us, so distilling what makes reality feel real and beautiful is key to balancing the quality of your art against the amount of time required to produce it.
Another amazing example would be Robh Ruppel and the way he breaks images down into simple shapes and gradients to eventually arrive at an almost photorealistic image that actually reeks of "simplicity" once you take a closer look.

Painting by Robh Ruppel

Now, with this logic in mind and a little fetish for smart tech, I was always fascinated by the concept of procedural materials. In my own work I kept noticing little things here and there, like people ignoring that wood, plastic and metal could share the same base diffuse texture as long as you sell their specular properties and wear and tear correctly. Or that the diffuse of my tileable textures could at times be just a kind of surface noise, if it weren't for the AO on top. Somewhere along the way I stumbled upon another interesting piece of shader tech: Gradient Mapping. All it does is take a grayscale heightmap you feed it and paint it with colors you assign to different pixel heights (brightness values). For example, here all pixels with brightness from 0.0 to 0.33 will gradually transition from red to green, 0.33-0.66 will transition from green to blue, and blue will eventually transition to yellow. You've seen this technology used in Left 4 Dead 2 by Valve, where it allows them to pack a number of blood splatters and details onto a single zombie texture, as well as to recolor them at runtime.

Screenshot from Valve's "Left 4 Dead 2"

Procedural Environment

So, with another personal project I set out to do, I really wanted to push the concept and see how far you can go with "procedural" materials even on current-day tech. After months of hard work, here it is (be sure to watch in HD): [embedded video]

Not a single surface here uses a dedicated RGB diffuse texture. In fact, I wanted to take it as far as inputting only one texture per type of surface. Of course there were some other little textures buried in the shader functions, but they were no more than small grayscale masks stacked together.
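To make the gradient-mapping idea concrete, here is a minimal Python sketch of the lookup it performs per pixel. The stop positions and colors are the red/green/blue/yellow example from the text; everything else - the function names and the plain-tuple color format - is purely illustrative, not how any particular engine implements it:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB colors."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def gradient_map(value, stops):
    """Map a grayscale value (0..1) to a color via a list of
    (position, rgb) stops, like a gradient-map shader node does."""
    if value <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if value <= p1:
            t = (value - p0) / (p1 - p0)
            return lerp(c0, c1, t)
    return stops[-1][1]

# The example from the text: red -> green -> blue -> yellow
stops = [(0.00, (1.0, 0.0, 0.0)),   # red
         (0.33, (0.0, 1.0, 0.0)),   # green
         (0.66, (0.0, 0.0, 1.0)),   # blue
         (1.00, (1.0, 1.0, 0.0))]   # yellow

print(gradient_map(0.0, stops))    # pure red
print(gradient_map(0.33, stops))   # pure green
```

Swapping the `stops` list per material instance is exactly the "recolor at runtime" trick: the grayscale texture never changes, only a handful of shader constants do.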
But let's take it from the top. The main idea of this procedural approach is this: separating Surface Volume from Surface Detail. As simple as that. Generally, rocks in the same area will have a similar geological origin, thus requiring a similar diffuse rock texture component. All objects in the same environment usually accumulate identical types of dirt. Objects made from the same materials will generally wear, tear and decay similarly. Objects in a damp environment will start growing a similar type of moss, etc., etc. What differs is how this dirt, moss, wear and tear accumulate on the surface, as well as the diffuse texture pattern. Now, it may look like a lot of textures, but don't forget that all those grayscale ones are always stacked into the R, G and B channels of a single DXT1 texture and are reused everywhere. Let's take a look at the memory usage in the worst-case scenario, when we have just 2 materials to compare:

Optimizing Storage of Heightmaps

Now, whether we import our heightmaps as the alpha channel of a DXT5 texture or as a separate grayscale texture, we'll still end up spending another 128kb for a 512x512 map. We could stack 3 different heightmaps into a single DXT1 texture and save a third of those 128kb, but there is a better way. A normal map's blue channel hardly stores any vital information, so why don't we put it to meaningful use? You don't even need to recreate the blue channel in-shader - merely replacing it with a neutral normal color works fine in most situations. All in all you'll spend 2 additional pixel shader instructions, which is a puny price to pay for cutting the memory footprint in half. And this is exactly why, further on, you're going to see Normal Maps and Height Maps listed as one single entity.

Damage

Now imagine you wanted to vertex blend damage onto your material, as a lot of games do these days.
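The blue-channel trick above can be sketched in a few lines, assuming a standard tangent-space normal map whose channels store 0..1 values remapped to -1..1. The function name and layout here are mine, purely for illustration; it shows both the cheap "neutral blue" substitution from the text and the exact Z reconstruction you could do instead:

```python
import math

def unpack_normal(r, g, b_height, reconstruct=True):
    """The combo texture stores tangent-space X/Y in R/G and the
    heightmap in B. Recover a usable normal either by substituting
    a neutral blue (cheapest) or by reconstructing Z exactly from
    the fact that the normal has unit length."""
    x = r * 2.0 - 1.0            # expand 0..1 -> -1..1
    y = g * 2.0 - 1.0
    if reconstruct:
        z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    else:
        z = 1.0                  # neutral blue: fine for mild normals
    height = b_height            # the freeloading heightmap
    return (x, y, z), height

# A flat pixel (r = g = 0.5) carrying a height of 0.75 in blue:
n, h = unpack_normal(0.5, 0.5, 0.75)
print(n, h)   # (0.0, 0.0, 1.0) 0.75
```

The exact reconstruction costs a couple more instructions than the neutral substitute, which is the trade-off the text mentions.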
Here's how it goes: with this tech you get your damage smartly blended only where it could exist in real life - on the most protruding parts of your surface volume. On top of a 2x memory gain, you get the opportunity to dynamically tweak the tiling, intensity and available range of your damage, which, by the way, will always be taken into account in every kind of heightmap-based blending further on. And you can change it per material instance, creating exactly the type of damage you need.

Multitude

Now imagine that, as usually happens, we need a bunch of similar surfaces for our level. Almost a 2x difference. Want to count what the difference would be if we also did a damage pass for each of those textures? I took the liberty of doing that for you: 488kb versus 1576kb. Now how do you like that? A reasonable question now would be: just how many different surfaces could be considered similar enough to produce acceptable results if we just keep swapping the combo normal and height map? Turns out: almost all of them. How is that possible? In the beginning we said that the surface volume and the diffuse texture pattern were integral parts of any material. But the big thing is, they do not have to be separate entities. You don't have to strictly feed heightmaps to gradient mapping. In fact, feel free to forget the term heightmap for now, because from now on it'll be part of a broader term:

Gradient Map

The Gradient Map is your main diffuse component, so make it work as such. Blend your depth info with AO and every texture pattern you might need. Paint in all the details and accents, or tweak surface values just like you would with a regular texture. If you're working with heavily photo-sourced textures it's even easier, because you can generate a normal map and a heightmap from your diffuse. Then you blend your grayscale diffuse with your heightmap to create a Gradient Map, thus keeping both your surface depth and surface detail info.
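The arithmetic behind comparisons like this is easy to reproduce. Here is a rough Python sketch, ignoring mipmaps and using the standard DXT1/DXT5 rates of 4 and 8 bits per pixel. The texture sizes and the exact material layouts are assumptions for illustration, not the article's actual measurements (which is why the numbers don't match its 488kb/1576kb figures):

```python
def dxt_kb(size, fmt):
    """Approximate VRAM footprint of a square compressed texture in
    kilobytes, ignoring mipmaps. DXT1 = 4 bits/px, DXT5 = 8 bits/px."""
    bits = {"DXT1": 4, "DXT5": 8}[fmt]
    return size * size * bits // 8 // 1024

def traditional(n_materials, size=512):
    """Each similar surface carries its own RGB diffuse map (DXT1),
    plus one shared normal map (DXT1)."""
    return n_materials * dxt_kb(size, "DXT1") + dxt_kb(size, "DXT1")

def procedural(n_materials, size=512):
    """One combo normal+gradient texture (DXT1, with the height
    riding in the blue channel) shared by all n materials; colors
    are shader constants per instance, so they cost no texture RAM."""
    return dxt_kb(size, "DXT1")

for n in (2, 4, 8):
    print(n, "materials:", traditional(n), "kb vs", procedural(n), "kb")
```

The gap widens linearly with the number of similar surfaces, which is the whole point of the "Multitude" argument.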
If you think about it, surface detail is still depth, just on a much smaller scale, so the Gradient Mapping function handles it just as well. Think of it as evaporating water from juice or soup to create a concentrate: colors are the water we can mix back in at runtime. The Gradient Map works as well as the Height Map for blending, so no worries there. Another amazing, unexpected use is as an Opacity Mask - for foliage, for example. You just have to make sure that your background is completely black, and that your gradient has its levels pushed up so that no actual pixel is black either. Then you just clip this opacity mask at zero brightness and that's it. Another amazing fact is that gradient mapping works perfectly well even if you use your usual diffuse turned grayscale. Here's a comparison of a material from Epic and the same material with the diffuse map desaturated and put through the GM function. Now, does this difference really warrant a whole diffuse texture for this and every similar object? The silver lining here is that you can actually use your diffuse textures as Gradient Maps without any modification, right here, right now, significantly trimming your texture footprint. Extremely comfy and easy for new-tech penetration.

Now it's time for a little pros and cons, which should cover the questions I'd be asking right now if I were you:

- Procedural Colors in Tileable Textures vs. Procedural Colors in Uniquely Mapped Textures

The point I was most worried about as an artist was color. How much do we need? To my surprise, 4 colors are more than enough to color a Gradient Map! I even had to create a lighter version of the Gradient Mapping function that just blends between two colors and has no texture overlay. Now, I've obviously used it mostly for environment textures, which are generally tiled, thus requiring a certain uniformity from their colors to make the tiling seem unapparent - and that actually works greatly to the advantage of gradient mapping.
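The opacity-mask trick above boils down to a single comparison per pixel: the background is pure black, the gradient is levelled so no real pixel is black, and "is it above the clip value?" doubles as the alpha test. A tiny sketch (the threshold and pixel values here are made up for illustration):

```python
def opacity_from_gradient(gmap_pixel, clip=0.0):
    """Foliage trick from the text: with a pure-black background and
    a levelled gradient that contains no black pixels, a simple
    greater-than test against the clip value yields the alpha mask."""
    return 1.0 if gmap_pixel > clip else 0.0

# Background pixels are 0.0; leaf pixels were levelled to >= 0.1:
row = [0.0, 0.0, 0.1, 0.8, 0.35, 0.0]
print([opacity_from_gradient(p) for p in row])
```

One texture therefore serves as height, diffuse and opacity at once, which is exactly the kind of consolidation the article is after.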
If you work with tiled textures a lot, you definitely want to try this out. And if you're not working with tiling textures a lot - how on earth do your games even work? It's also important to note, from an artistic standpoint, that good lighting, fog and post-processing greatly influence colors, usually creating the broadest and most important strokes. Your scene is hardly ever supposed to be about every little piece screaming for attention in a different color. Uniformity is good in a lot of ways, and it is definitely not something to be fighting most of the time. Now, you don't have to take my word for what the textures should look like. To make it fair, let's analyze a real-life example and see what makes a high-quality modern-day texture:

Textures from Naughty Dog's Uncharted 3, by Melissa Altobello

These textures are made in the good old RGB-diffuse-map fashion, yet you can see that all that grunge and damage is really just an additional layer on top of the base textures. And if you strip that away, the textures have a lot of shades of pretty similar colors - it's the overall color tone and the brightness of each pixel that describe them. I hope the previous examples have convinced you that "procedural" materials could do all of that - red bricks/white bricks, dirty/clean, damaged - in a matter of a few button clicks, saving you a lot of production time and memory as a bonus. By now you're probably thinking, "OK, but what about uniquely mapped stuff?!" Do not despair. Valve has used it to create vast varieties of undead hordes for Left 4 Dead 2, and so can you.

Viewport screenshot of a "Left 4 Dead 2" character by Valve

You'll hardly want to Gradient Map your main characters, but there's still a lot of mileage you can get out of it. Read all about it here. The main hurdle is, of course, color variety. Whenever you need a lot of color in a single texture, Gradient Mapping probably won't suit your needs.
But that's a good thing - you have a choice of which technology to apply to get the biggest bang for your buck. Gradient Mapping and Diffuse Mapping were never meant to be mutually exclusive.

- Relinquishing Control vs. Unexpected Variety

Another tricky thing for me as an artist was the fear of losing complete and utter control. Now, I'm the first guy to be plain anal about every little piece of my artwork; I'll be carefully planting old pieces of chewing gum and cigarette butts on my textures in places most people won't even bother to look. Yet throughout my career I've constantly had to teach myself to choose my battles wisely. I don't believe in things existing for their own sake. Anything is only as good as how it performs its purpose - no more, no less. And the purpose of details is to be just sufficient not to break the illusion of the imposed reality, as much as beginning artists think detail is all there is. I was, and probably still am, a guy who loves his details, yet I had to admit that most players will never notice a difference between Gradient Mapping and Diffuse Mapping - just like all the professional artists I've shown this environment to. Only further down the road did I notice just how much flexibility I could get from this system. From plants to rocks to bark, it's a matter of swapping the only texture and tweaking the colors, diffuse patterns, damage and specular values. One second you use this texture as an orange canyon wall; the next it's already blue and part of a cave. Your bark is brown, but just turn it green, tile it more, and there you have a vine texture. Need more cold color in your shadows? Just make it appear in the cavities of your surfaces to amplify the effect. Want it dirty? Just push a button and decide how much. Mossy, sandy - all just a matter of a button push.
Procedural damage was another unexpectedly awesome thing, since I could reuse a single asset with different damage tiling and intensity and create a whole lot more variety than I could ever imagine doing it the old way. I could even paint my cracks green and invert their intensity to create an illusion of vines overgrowing assets further in the distance. As an artist, I've come to grips with this workflow. I can't imagine not being able to tweak any object's color, diffuse noise scale or dirt at my smallest whim. I love it now. It's like working with bigger Lego pieces. We no longer model every part of our levels individually; rather, we create a set of modular meshes to work with. Then why on earth shouldn't we do the same with our materials and textures?!

- Extra Processing Power vs. Freed-up VRam

While "procedural" texturing technology frees up a whole lot of VRam, it also requires additional processing power. And, as always with software optimization, it's a question of what you've got to spare.

Screenshot from "The Last of Us" by Naughty Dog

If we look at something like the PS3, we'll find just 256mb of VRam, and it shows. Even the most gorgeous games sometimes put blurry textures right in your face. Notice the dramatic texel density difference between the characters and the background truck in the screenshot above. Yet the PS3's Cell has SPUs (six of them available to games) that could technically be used to alleviate the issue by implementing Gradient Mapping shaders, cutting the diffuse texture footprint in half or even by 3/4 and allowing texel density to be selectively increased on some surfaces. The Xbox 360 has a unified RAM/VRam pool, yet it's still half the amount of memory of an iPad. With the Desert environment I didn't use diffuse textures at all - just a couple of masks that are insignificant in terms of the whole level's memory footprint. So it would be fairly accurate to say that I cut my texture memory expenses in half by relying on this technology.
It's also worth noting that to create damage I use in-shader normal map generation from a grayscale mask, which is somewhat of a pricey operation (though there are plenty of possible workarounds). Yet on a modern-day PC, the environment you've seen has no trouble running at 60+ FPS. If we can trust UDK's instruction counts for custom nodes, it would take us just 14 instructions to replace a diffuse map with a gradient map. I have fully functional shaders that, at 73 instructions, provide full diffuse, specular, masked opacity and normal functionality with only one texture sampled. The most complex version of the shader is 153 instructions and features vertex-paintable, heightmap-based sand blending, as well as 2 types of procedural "smart" damage generation that are also both vertex-paintable. These instruction counts are at the very least comparable to those of Unreal Engine games' materials with similar functionality. That said, I'm not a graphics programmer by any means, and UDK's instruction counter does lie from time to time. Right now some proper tests are being conducted, and so far I'm afraid I can't tell you much more, though I promise to update this with more info the second it appears.

- Production Costs

This one has no cons. Procedural materials save a lot of production time and, subsequently, a ton of money. If I had to create every single diffuse map by hand, I can assure you the environment you've seen would've been much smaller, or would have taken much longer to produce. It's funny: when explaining this to a more business-savvy person, the first question I got was, "So how many people can you replace?" And this is definitely not about replacing people. It's about how much more, and how much faster, you can produce. There is always a lack of time in our industry, and having a chance to free some of it up for more important things is an amazing opportunity that yields better games and, subsequently, more profit.
Outro

Now there's no question left for me: the notion that diffuse textures are indispensable the way they are couldn't be further from the truth. A Gradient Map carries enough information to make your brain perceive surfaces as completely believable. And that's all there is to it. So if there's somewhere we can trim fat, it's in the diffuse textures - both in terms of workload and technical constraints, and without sacrificing visual quality. Now, I know our industry is all about more stuff. The next generation is just around the corner, and it will have to blow something like Battlefield 3 out of the water if Sony and Microsoft are going to convince their audience to make the pricey upgrade. But as much as I look forward to quadrupled texture sizes and polygon budgets, the industry could well collapse under its own weight. Nowadays, in the world of big games, good-yet-not-great 70%-Metacritic games just do not pay off any more, and companies are uber-reluctant to invest money in something that, in their eyes, doesn't absolutely guarantee a return. Now consider investing in a big next-gen project, which could very well cost double today's AAA game, for a platform that has zero market penetration. When can you really expect at least 3 million units shipped there, to at least make it worth your while? The risks could just keep rising in proportion to graphical fidelity. Now, there is a cushion in the form of assets being downsized for today's games - going higher-rez could merely be a matter of not resizing - but higher resolutions are still going to require more work. If we as an industry want to stay competitive, we should not only advance the quality of the product we produce, but the quality of production itself. The industry should slowly move towards things becoming more "procedural".
In fact, it already does: we've switched from hand-animating dudes being blown away by a shotgun to simulating it; we have specific tools to build trees, roads, terrain and LoDs; we no longer model every single piece of our levels, but rather create a set of highly modular Lego pieces to build our levels with; gameplay scripting in UDK is now visual, node-based editing - no more writing code! All of this has saved our industry years of man-hours and millions of dollars in production costs. And I believe this is something we should be doing with materials: give our artists bigger Lego pieces so they can dedicate themselves to the bigger picture with no real loss in detail. I would love to see next-gen engine creators take that into account: a Normal + Gradient Map RGB combo, built-in gradient mapping functionality, or even a high-level procedural material editor with a built-in variety of diffuse patterns, specular presets, and dirt, damage and wear types, where users just input their Gradient Map, tweak a few handles and - voila! Procedural wet effects, or water rolling down oblique surfaces; overgrown with moss, dusty, sandy, burned; procedural polished metal, car paint, glass, water, etc. - all with one button push. And with deferred lighting becoming commonplace, imagine using Screen Space AO to mask tiled dirt so it appears only in the cavities, corners and intersections of flagged static objects. Or using vertex normals to procedurally create a mask for sharp corners that you multiply with a wear map to reveal the metal underneath worn painted surfaces. Or even using pixel normals from your scene's normal pass, combined with a highly contrasted depth pass, to try to figure out where those sharp edges are. There are millions of ideas, and some of them could save you months of work and hundreds of megabytes of VRam.
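One of those ideas - masking tiled dirt by screen-space AO so it only shows up in cavities - can be sketched per-pixel like this. The AO convention assumed here is 1.0 = fully unoccluded, 0.0 = deep cavity, and all names and values are illustrative rather than from any shipping engine:

```python
def dirt_in_cavities(base_rgb, dirt_rgb, tiled_dirt_mask, ssao):
    """Modulate a tiled dirt mask by (1 - AO) so dirt accumulates
    only in the cavities, corners and intersections that the
    screen-space AO pass picks out, then blend towards the dirt
    color by that amount."""
    amount = tiled_dirt_mask * (1.0 - ssao)
    return tuple(b + (d - b) * amount for b, d in zip(base_rgb, dirt_rgb))

base = (0.6, 0.6, 0.6)   # clean surface color
dirt = (0.2, 0.15, 0.1)  # grime color
print(dirt_in_cavities(base, dirt, 1.0, 1.0))  # fully lit: stays clean
print(dirt_in_cavities(base, dirt, 1.0, 0.0))  # deep cavity: full dirt
```

Because the dirt mask tiles independently of the object's UVs, the same tiny mask texture serves every flagged static object in the scene.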
Technology is meant to be a tool for achieving artistic results, so instead of making us struggle to keep up with the crazy amount of fidelity we have to put in, let's make it help us concentrate on adding details and creating real beauty where they matter most. Thank you very much, and keep it pretty.

Andrew Maximov, 2012

P.S. There's one last thing I wanted to share with you guys... but I can't seem to remember what it is... ...oh yeah, THE MATERIALS! Click here to grab them! :) These are all kinds of "procedural" materials for you to check out, as well as a couple of example textures and meshes. There's also a .PSD that makes for very smooth gradient map production, as it allows you to preview your gradient mapping, normal, specular, damage and diffuse pattern influence right in Photoshop! It's like painting your gradient map directly in UDK, where you can immediately see the end result! I've made a little video that will hopefully make things more visual for you: [embedded video]
  5. Ditching Diffuse Maps

    Thank you very much, Richard - the link is updated. Sorry about that!
  6. Efficient Art Production: Theory and Practice

    Hey there guys, thank you very much for the comments! riuthamus, thank you very much, man. More is coming. Cozzie, thank you, buddy - much appreciated! Great find, Cygon! This is very interesting. I'm glad gamedev.net has so many programming-savvy folks. I've mailed Emil Persson - the author of the post I'm referencing - and I'm going to go through all the links you've shared, 'cause it seems like a great read and it might inform some ideas/explanations. The paper and his post date all the way back to 2009, so I'm wondering if things might've changed since then. Still, the advice on triangulation doesn't seem harmful to me, and if anything it still feels like it could be doing some good, just for the wrong reasons. It would be interesting to get to the bottom of this, though. Thank you very much once again!
  7. Title image: Making sure your assets don't stink :) (an update of my 2009 article). A video game artist's work is all about efficiently producing incredible-looking assets. In this article we won't talk about what makes an asset look good, but rather about what makes it technically efficient. From here the article splits into two parts: the first is about the things you need to know to produce an efficient asset; the second is about the things you need to do to make sure the asset you've produced is efficient. Let's go!

Part 1: Theory

Brain Cells Do Not Recover

Even though most of the information to follow is about making your assets more engine-friendly, please don't forget that this is just an insight into how things work - a guideline for which side to approach your work from. Saving your own or your teammates' time is also extremely important. An extra hundred tris won't make the FPS drop through the floor, while work going smoothly makes for a happier team that can produce a lot more, a lot faster. Don't turn work into a struggle for anyone, including yourself. Always be mindful of the needs of the people working with the assets you produce.

Dimension 1: Vertex

Remember geometry for a second. When we have a dot, well, we have a dot. A dot is the unit of 1-dimensional space. If we move up to 2-dimensional space, we can operate with dots there too, but if we take two of them, we can define a line. A line is a building block of 2-dimensional space. And if you take a closer look, a line is simply an endless number of dots placed alongside each other according to a certain rule (a linear function). Now let's move up a level again. In 3-dimensional space we can operate with both dots (or vertices) and lines. But if we add one more dot to the two that defined a line, we can define a face.
And that face is a building block of 3-dimensional space, forming the shapes we are able to look at from different angles. I'm pretty sure most of you are used to receiving a triangle count as the main guideline for creating a 3D model, and I think the fact that it's a building block of 3-dimensional space has something to do with it :) But that's the human way of thinking. We humans also operate in a decimal numeral system, but hardware processors don't. It's just 0 and 1 - binary - the most basic number representation system. For a processor to execute anything at all, you have to break it into the smallest, simplest operations it can solve consecutively. So to display 3D graphics you also have to get down to basics. Even though a triangle is a building block of 3-dimensional space, it is still composed of 3 lines, which in turn are defined by 3 vertices. So basically, it's not tris you are saving, but vertices. Though the lower the tri count, the fewer vertices there are, right? Totally. But unfortunately, the number of tris is not the only thing affecting your vert count; there are also some underlying processes that are less obvious. A 3D model is stored in memory as a number of vertex-structure-based objects. "Structures", speaking an object-oriented programming language (figuratively), are predefined groups of different types of data and functions composed together to represent a single entity. There can be thousands of instances of such entities, which all share the same variable types and functions - just with different values stored in them. Such instances are called "objects". Here's a simplified example of how a vertex structure could look:

struct Vertex {
    float3 position;   // vertex coordinates
    float4 color;      // vertex color
    float3 normal;     // vertex normal
    float2 uv1;        // first UV set coordinates
    float2 uv2;        // second UV set coordinates
    ...
};

If you think about it, it's really obvious that vertex structures should only contain necessary data.
Anything redundant becomes a great waste of memory once your scenes hit a couple dozen million tris. That's why a single vertex structure only holds one set of each type of data. What does this mean for artists? It means a vertex can't have 2 different UV positions in the same UV set, or 2 normals, or 2 material IDs. But we've all seen how smoothing groups work, or applied multiple materials to objects, and that didn't seem to increase the vert count. That's only in your modeling package; your engine treats the vertex data differently. The easiest and most rational way to add an extra attribute to a vertex is to simply create another vertex in the exact same position. Simply put, every time you set another smoothing group for a selection of polys, or make a hard edge in Maya, the number of border vertices doubles, invisibly to you. The same goes for every UV seam you create, and for every additional material you apply to your model. UDK used to automatically compare the number of imported vertices versus the number generated upon asset import, and warn you if the numbers differed by more than 25 percent. Now, be it 25, 50 or a gazillion percent, it doesn't matter all that much if your game runs at frame rate - but knowing this stuff might help you get there. Just don't be surprised if your actual vert count is 3 times what you thought it was when you set all the edges to hard and break/detach all your UV verts.

Connecting the Dots

This small chapter concerns the stuff that keeps those vertices together: the edges. The way they form triangles matters to an artist who wants to produce efficient assets - not only because they define shape, but because they also define how fast your triangles are rendered, in a pretty non-trivial way. How would you render a pixel that sits right on the edge shared by 2 triangles? You would render the pixel twice, once for each triangle, and then blend the results.
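The vertex-splitting rule from a little earlier - one GPU vertex per unique combination of position, normal and UV - can be demonstrated with a cube. The index encoding below is a toy scheme I made up for the sketch (each corner is a (position_id, normal_id, uv_id) tuple), not any engine's actual format:

```python
def gpu_vertex_count(faces):
    """Each face lists its corners as (position_id, normal_id, uv_id)
    tuples. The GPU needs one vertex per *unique* combination, which
    is why hard edges and UV seams silently multiply your vert count."""
    return len({corner for face in faces for corner in face})

QUADS = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
         (2, 3, 7, 6), (0, 3, 7, 4), (1, 2, 6, 5)]

# All edges soft: every corner reuses one averaged normal per position.
soft = [[(p, 0, p) for p in quad] for quad in QUADS]

# All edges hard: each face gets its own normal id, so every corner
# becomes a distinct GPU vertex.
hard = [[(p, n, p) for p in quad] for n, quad in enumerate(QUADS)]

print(gpu_vertex_count(soft))  # 8
print(gpu_vertex_count(hard))  # 24
```

Eight positions balloon to 24 GPU vertices the moment every edge is hard - the 3x surprise the text warns about.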
And that leads us to a pretty interesting concept: the tighter the edge density, the more re-rendered pixels you'll get, and that means a bigger render time. This issue should hardly affect the way you model, but knowing about it could come in handy in some specific cases. Triangulation would be a perfect example of such a case. It's a pretty well-known issue that thin tris aren't all that good to render. But talking about triangulation, if you've made one triangle thinner - you've made another one wider. Imagine we zoom out from a uniformly triangulated model: the smaller the object becomes on screen, the tighter the edge density and the bigger the chance of re-rendering the same pixels. But if you neglect uniform triangulation and instead make every triangle cover the largest area possible (thus making it encompass more pixels), you'd end up with triangles of consecutively decreasing area. Then, once you zoom out again, the areas with higher edge density would be limited to a much smaller number of on-screen pixels. And the smaller the object becomes on screen, the fewer potentially redrawn pixels it'll have. You could also try to work this the other way around, and start by making the triangle edges as short as possible. Although trivial, it's an interesting way to inform some of your decisions while modeling. Make sure to check out some statistics on the subject - pretty fascinating.

Eating in portions is better for your health

Exactly the way your engine draws your object triangle by triangle, it draws the whole scene object by object. In order for your object to be rendered, a draw call must be issued. While the CPU gathers information and prepares batches to be sent to the GPU, the GPU renders stuff. What's important for us here is that if the CPU is unable to supply the GPU with the next batch by the time it's finished with the current one, the GPU has nothing to do.
This means that rendering an object with a small number of tris actually isn't all that efficient. You'll spend more time preparing for the render than on the render itself, and waste the precious milliseconds your graphics card could be working its magic.

A frame from NVidia's 2005(?) GDC presentation

The number of tris a GPU can render until the next batch is ready to be submitted varies per engine and GPU. But I'd say objects up to 700 tris probably wouldn't benefit that much from further optimization. Defining such a number for your project would be a great help for your art team. It could save a lot of production and even some render time. Plus it'll serve as a guideline for artists to go by in production situations. You'd want to have a less detailed model only when there's really no point in making it more complex, and you'd have to spend some extra time on things no one will ever notice. And that luckily works the other way around - you wouldn't want to make your model lower-poly than this, unless you have your specific reasons. Plus the reserve of tris you have could be well spent on shaving off those invisible vertices mentioned earlier. For example, you can add some chamfered edges and fearlessly assign one smoothing group to the whole object (make all the edges soft). It may sound weird, but having smoother, more tessellated models sometimes could actually help performance.

If you'd like your game to be more efficient, try to avoid making very low-polygonal objects a single independent asset. If you are making a tavern scene, you really don't want every fork, knife and dish hand-placed in the game editor. You'd rather combine them into sets, or even combine them with a table. Yeah, you would have less variety, but when done right no one will notice. But this in no case means that you should run around applying Turbosmooth to everything. There are some things to watch out for, like stencil shadows for example.
Plus some engines combine multiple objects into a single batch, so it's always best to talk to your programmers first. After batching, another very important thing that can make or break your performance is the culling system your engine uses. If your engine doesn't cull the objects outside your frustum, you're doing a lot of unnecessary and invisible rendering. If you doubled your Field Of View, you've most likely doubled the number of objects you'll have to render in your frame. Finally, if your engine doesn't cull objects that are obstructed by other objects, then technically you're rendering a lot more than you could've. So it's not only about the tri counts, but about being selective in what to render.

Vertex VS Pixel

If you ask an artist what the main change in game art production has been over the last 10 years, I'm pretty sure the most common answer would be the introduction of per-texel shading and the use of multiple textures to simulate different optical qualities of a single surface. Sure, polycounts have grown, animation rigs now have more bones, and procedurally generated physical movement is more widespread. But normal and spec maps are the ones contributing the most visual difference. And this difference comes at a price: in modern day engines, most of the render time is spent processing and applying all those endless maps based on the direction of the incoming lights and the camera's position.

Complex/Simple Shaders in UDK

From the viewpoint of an artist who strives to produce effective art, this means the following: optimizing your materials is much more fruitful than optimizing vertex counts. Adding an extra 10, 20 or even 500 tris isn't nearly as stressful for performance as applying another material to an object. Shaving hundreds of tris off your model would hardly ever bring a bigger bang than deciding that your object could do without an opacity map, or glow map, or bump offset, or even a specular map.
You can shave 2-3 million triangles off a level just to gain around 2-3 fps. It's not the raw tri count that affects performance the most, but rather the number of draw calls and shader and lighting complexity. Then there are vertex transformation costs when you have some really complex rigs or a lot of physically controlled objects. Shader blending and lighting modes also have a lot to do with performance. Alpha-blended materials cause a lot more stress than opaque ones. Vertex-lit is faster and cheaper because you're going to have vertex colors set on your vertices anyway. And if you're deferred, you don't even have to worry about that, 'cause your engine is going to calculate lighting on the final frame (which has a constant number of pixels) rather than for every object. And finally: post-processing. Doing too many operations on your final rendered pixels could also slow your game down significantly.

Things Differ (Communication is King)

As with everything in life, there's no universal recipe - things differ. And the best thing you can do is figure out what your specific case looks like. Get all the information you can from the people responsible. No one knows your engine better than the programmers. They know a lot of stuff that could be useful for artists, but sometimes, due to lack of dialogue, this information remains with them. Miscommunication may lead to problems that could've been easily avoided, or be the reason you've done a lot of unnecessary work or wasted a truckload of time that could've been spent much more wisely. Speak - you're all making one game after all, and your success depends on how well you're able to cooperate. Asking has never hurt anyone, and it's actually the best way to get an answer ;)

The Dalai Lama once said: "Learn your rules diligently, so you would know where to break them." And I can do nothing but agree with him. Obeying rules all the time is the best way to never do anything original.
All rules and restrictions have some solid arguments to back them up, and fit some general conditions. But conditions vary. If you take a closer look, every other asset could be an exception to some extent. And, ideally, having faced some tricky situation, artists should be able to make decisions on their own - sometimes even break the rules, if they know that the project will benefit from it and that breaking them wouldn't hurt anything. But if you don't know the facts behind the rules, I doubt you would ever go breaking them. So I seriously encourage you to take interest in your work. You're making games, not just art.

Part 2: Practice

[attachment=19472:Things-You'dWant-To-Do_Web.jpg]

I hope this wall of text up here made some sense to you guys. All this information on how stuff works is really nice to know, but it's not exactly what you would use on a day-to-day basis. As an artist, I'd love to have a place where all the "hows" are clearly stacked, without any other distracting information. And the "whys" section would serve as a reference you can turn to in case something becomes unclear. Now let's imagine you're finally done with an asset. You'd want to make sure things are clean and engine-friendly. Here's the list of things to check, in order:

- Deleted History, Frozen Transformations/Reset XForm, Collapsed Stack

Transformation information stored in a model could prevent it from being displayed correctly, making all further checks useless. Plus it's simply unacceptable for import into some engines. And even if it does import, the object's orientation and normal direction could be messed up. In Maya, don't forget to select your object.

- Inverted Normals

While mirroring (scaling by a negative number), or performing a ton of other operations actually, your vertex normals could get turned inside out. You should have the right settings set in your modeling application in order to spot such problems.
In 3Ds Max you can go to object properties and turn "Backface cull" on, then examine your mesh. In Maya you could just disable "Double Sided" lighting in the lighting tab (if it's missing, hit "shift+m"), then make sure that in the Shading tab "Backface Culling" is disabled. Then, if you check out your model with shading, all the places with inverted normals will be black.

- Mesh splits/Open Edges

It sometimes happens that while working we forget to weld some vertices, or accidentally break/split some. Not only could this cause some lighting and smoothing issues, but it's also a waste of memory and pretty much a sign of sloppy work. You wouldn't want that. Open edges are an issue you want to think twice about. And not only because in some cases they could be an additional stress when computing dynamic lighting, but because they seriously reduce the reusability of your asset. If you simply close the gap and find a place on your texture you can throw this new shell onto, that would still be preferable. To detect both those issues in 3Ds Max, simply choose border selection mode ("3" by default) and hit select all ("ctrl + a" by default). In Maya you could use a handy tool called "Custom Polygon Display". Choose the "Highlight: Borders" radio button and apply it to your object.

- Multiple edges/Double faces

This double stuff is a nasty bugger, since it's almost impossible to spot unless you know how. And sometimes when you modify stuff you can get very surprised by things behaving not the way they should. I can hardly remember having them in Max, but just to be sure I always apply an STL Check modifier. (Tick the appropriate radio button and check the "Check" checkbox.) In Maya, the "Cleanup" tool is very useful. Just check "Nonmanifold geometry" and "Lamina faces" and hit apply.

- Smoothing groups/Soft-hard edges

For this one you'd want to have as few smoothing groups/hard edges as possible.
You might consider making it all smooth and just adding some extra chamfers where bad lighting issues start to appear. Plus there's one more issue to watch out for - more in Maya than in 3Ds Max, though, since Max utilizes the Smoothing Group concept: edges on planar surfaces will appear smooth even if they are not. To see which edges are actually unsmoothed, the "Custom Polygon Display" tool comes in handy again. Just click the "Soft/Hard" round button right alongside "Edges:".

- UV splits

You would like your UVs to have the fewest seams possible, as long as the layout stays nice to work with. No need to go over the top with distortion here - just keep it clean and logical. Broken/split vertices are a thing to watch out for too. 3Ds Max indicates them with a different color inside the "Edit UVWs" window, while in Maya's "UV Texture Editor" window you have the Highlight Edges button, which simply checks the "Highlight: Texture Borders" checkbox in the "Custom Polygon Display" tool for you.

- Triangulation

While checking triangulation, first of all make sure that all the triangles accentuate the shape you're trying to convey, rather than contradict it. Then take a quick glance to check whether the triangulation is efficient. Plus: some engines have their own triangulation algorithms and will triangulate a model on import themselves, with no concern for how you thought your triangulation looked. In trickier places this could lead to a messy result, so please take caution, investigate how your engine works, and connect the vertices by hand if necessary. Btw, Maya more or less helps you find such places if you check "Concave faces" in "Cleanup Options". In 3Ds Max you'll just have to keep an eye out yourself.

- Grid Alignment/Modularity/Pivot point placement

Since the last generation of videogames, graphics production costs have increased significantly, so modularity and extensive reusability are now a very common thing.
Ease of implementation and combination with different assets could save a lot of time - maybe not even yours - so don't make your level designers hate you: think about it.

- Material Optimization

Evaluate your textures and materials again, since they're probably the biggest source of optimization. Maybe the gloss map doesn't deliver at all and the specular is doing great on its own? Maybe if you used a specular map at half the resolution, the asset would still hold up? Or maybe you can use the diffuse for specular, since it's just a small background asset? Maybe that additional tileable normal isn't necessary at all? Or maybe you could go with a grayscale spec and use the spare channels for something else?

- Lightmapping possibility

If your engine supports lightmapping, make sure you have a spare set of uniquely unwrapped UVs that fully meets your engine's requirements.

Afterword

Please remember: no matter how technical and optimized your model is, it's meant to look beautiful first of all. No optimization can be an excuse for an ugly model. Optimization is not what artists do. Artists do art. And that's what you should concentrate on. The beauty of technicalities is that they can be precise to a pretty big extent. This means you can write them down, memorize them, and not bother thinking about them again for quite a while, just remembering instead. But it's art where you have to evaluate and make millions of decisions every single second. All this text is not important, and that is exactly why it is written down. Really important things aren't all that easy to express with words. And I hope that maybe, if you don't have to bother thinking about all the tech stuff at least for some time, you'll concentrate on stuff much more important - and prettier, I hope.

Cheers,
Andrew