Creating Efficient Next Gen Environment Art

Published January 02, 2008 by Tim McGrath, posted by Myopic Rhino

Introduction

As an industry, we are now deeply entrenched in a new generation of console development. What seemed unknown and even scary only a few years ago is now part of most developers' everyday work. In the end, most people's fears about the new breed of games requiring hundreds more artists, drastic increases in budgets and longer development times turned out to be unjustified. Many high quality titles have been made on time by small- to average-sized teams of experienced staff.

With that said, creating artwork for these new machines has never been more problematic. Many artists, especially those coming from the familiarity of the previous generation, have had to learn a whole new set of tools and work practices.

Creating efficient, high quality environment art can be done with the right amount of planning and forethought. I have identified, based on my experience developing several titles for the 360 and PS3, key 'problem areas' commonly found in next-gen art pipelines. This article outlines these areas and offers solutions, hopefully helping other artists identify problems earlier rather than later.

Conceptualizing is key

More than ever, it is important to get as much visual pre-production done as possible. Next Gen artwork takes a lot longer to produce, so it's important everyone agrees on a direction before moving forward. Concept art or geometry paint-overs (where an artist paints over a screen-grab of designer geometry) will reduce the risks of large revisits later in the project. They will also help the artist with color coordination, lighting and composition.

Having designers pair up with an artist can prove especially useful, with the artist helping the designer with layout and making sure abstract shapes can be easily translated into fitting game objects and the designer helping the artist keep within certain boundaries and rules essential to the gameplay. Forming a good relationship between art and design will lay the foundation for productive times ahead, or at least less stress on both fronts.

Concept artists can also do a lot of the time consuming ground work - a good concept should not only contain sketches or artwork, but also breakdowns of ideal textures and materials. Color swatches and textures pulled from the internet can be laid out next to the concept, giving the artist a quick and easy guide to the sort of materials the concept artist was envisaging. (This also allows the concept artist to create rougher sketches and color comps.)

Creating the Building Blocks

Level designers need to start blocking out level layouts and collision geometry as early as possible. An artist can help out with this process by creating a library of simple textures and materials (grass, concrete, metal etc.) and by assisting with any tricky geometry.

If a designer can nail a level's basic gameplay, cameras and character navigation based on a set of building blocks, it will mean less back and forth with artists later on. This was important last generation but is doubly important this time around. A lot of projects get held up by major changes to layout after art has been created, and these changes often involve a lot more work than they used to. Such changes are usually unavoidable; nevertheless, the goal should be to limit them as much as possible.

Artists can spend this design phase of development working on standalone assets such as props and textures which are independent of level layout.

Modeling And Polycounts

The new generation of consoles can definitely push a lot more polys around, but this isn't an excuse to use more of them everywhere. In general, medium to low polycounts in-game will suffice in the majority of cases. Of course, outside of the game engine, modeling high poly meshes for detailed objects and buildings will always give you a better basis for outputting normal, AO and specular maps.

Try to use the 'silhouette' rule when modeling in-game background geometry and architecture. Essentially, this means saving your polygons for the outer edges, where the geometry's silhouette might be visible (against the sky, for instance). A good example would be an arched bridge, doorway or curved roof. Putting polygons into these important curves will give the impression of a high-poly scene.

High poly counts don't always look good: it's not necessary to use geometry to model everything out. Polygon edges can't be mipped like a good texture map, so modeling out a small window ledge or the edges of window panes might seem like a good idea, but often will cause shimmering visual artifacts from a distance, especially if a realtime lighting solution is being used. On the other hand, modeling out such fine detail in a 3D package to create normal maps can work nicely, if time allows.

Using a 'render to texture' function will help improve the look of stand-alone objects. Use this function within the 3D package to render ambient occlusion and any other renderable features back onto the diffuse map, giving your objects a realistic look which is difficult to match by hand. An ambient occlusion render can also be brought into a paint package as a new layer and used to represent dirt, rust or grime in the corners of the object.
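
As a rough illustration of that bake step, the sketch below multiplies an ambient occlusion render into a diffuse map using Python and Pillow. The file names are placeholders, and your 3D package's own render-to-texture tools will normally do this for you.

```python
# Multiply a baked ambient-occlusion render into a diffuse map.
# A minimal sketch; 'diffuse.png' and 'ao.png' are placeholder file names.
from PIL import Image, ImageChops

diffuse = Image.open("diffuse.png").convert("RGB")
# The AO render comes out of the renderer as greyscale; match the diffuse size.
ao = Image.open("ao.png").convert("RGB").resize(diffuse.size)

# Multiply blend: white AO leaves the diffuse untouched, dark AO darkens it.
baked = ImageChops.multiply(diffuse, ao)
baked.save("diffuse_baked.png")
```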

Texturing

Memory, or the lack of it, means textures still pose a problem on next-gen consoles. In fact, in many cases they are more of a hit, performance wise, than polygons. The key, then, is deciding which textures get priority in terms of resolution.

Textures can be too high resolution. Even on a high definition display, 1024x1024 or 2048x2048 textures will rarely appear close enough to the camera to show all their pixels at a 1:1 ratio. Such textures look no better at a distance, and as such should be avoided in most cases.
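
To put rough numbers on that memory cost, here is a quick back-of-the-envelope calculator. The bytes-per-pixel figures are ballpark values for common compressed formats, not exact figures for any particular console.

```python
# Rough texture memory estimate, including the full mip chain.
# Bytes-per-pixel values are ballpark figures for common formats.
BYTES_PER_PIXEL = {"RGBA8": 4.0, "DXT1": 0.5, "DXT5": 1.0}

def texture_bytes(width, height, fmt="DXT1", mips=True):
    total, w, h = 0, width, height
    while True:
        total += max(1, w) * max(1, h) * BYTES_PER_PIXEL[fmt]
        if not mips or (w <= 1 and h <= 1):
            break
        w, h = w // 2, h // 2
    return int(total)

for size in (512, 1024, 2048):
    mb = texture_bytes(size, size, "DXT5") / (1024 * 1024)
    print(f"{size}x{size} DXT5 with mips: {mb:.2f} MB")
```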

Signs and other textures involving text should always get priority for resolution. Having high res signage around a level is important - sharp, crisp text on posters and logos is a great way of creating a high-res look to the world.

Maintaining a consistent pixel density across your world is also important, with HD displays showing every stretched and inefficiently mapped polygon more clearly than ever. Keeping textures at a low or medium resolution across the board looks better than a jarring mixture of high res and low res maps next to each other.
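
A simple way to keep density consistent is to measure it: texels per world unit for each surface. The sketch below shows the arithmetic; the target value and wall dimensions are purely illustrative.

```python
# Quick texel-density check: texture pixels per world-space metre.
# Keeping this number consistent across surfaces avoids jarring jumps
# between sharp and blurry maps. The target value is an example, not a rule.

def texel_density(texture_size, uv_span, world_size_m):
    """texture_size: map width in pixels; uv_span: UV distance covered
    by the surface (0..1); world_size_m: surface length in metres."""
    return (texture_size * uv_span) / world_size_m

TARGET = 256.0  # example target: 256 texels per metre

# A hypothetical 4 m wall mapped across the full width of a 1024 map.
wall = texel_density(1024, 1.0, 4.0)
print(f"wall: {wall:.0f} texels/m ({wall / TARGET:.2f}x of target)")
```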

If your world requires a lot of low res textures, blending a second pass of a tightly tileable detail map works wonders. Grime maps can always go fairly low. Most specular and normal maps can be reduced to 50% of the diffuse map's size without too much of a problem; similarly, it's possible to achieve great results with high res normal maps over low res diffuse maps.
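
In the engine that detail pass happens in the material shader, but as a rough illustration of the maths, the sketch below bakes a tiling, grey-centred detail map over a low res base using a 'multiply by two' blend. The file names, tile count and power-of-two sizes are assumptions for the example.

```python
# Bake a tiling detail pass over a low-res base map (offline approximation
# of a runtime detail blend). Power-of-two sizes are assumed so the tiles
# divide evenly; 'base.png' and 'detail.png' are placeholder names.
import numpy as np
from PIL import Image

base_img = Image.open("base.png").convert("RGB")
w, h = base_img.size
base = np.asarray(base_img, dtype=np.float32) / 255.0

tiles = 4  # repeat the detail map 4x4 across the surface
small = Image.open("detail.png").convert("L").resize((w // tiles, h // tiles))
detail = np.tile(np.asarray(small, dtype=np.float32) / 255.0, (tiles, tiles))

# Modulate-2x blend: a mid-grey detail texel leaves the base brightness unchanged.
result = np.clip(base * detail[..., None] * 2.0, 0.0, 1.0)
Image.fromarray((result * 255).astype(np.uint8)).save("base_detailed.png")
```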

Specular maps are important! Make sure you dedicate enough time to these important textures, which are often the deciding factor in presenting a convincing material to the player. Don't make the mistake of simply producing a black and white version of the diffuse; similarly, a lack of color in the spec map will give surfaces a more plastic look. Add a reflective color where necessary for a convincing effect.
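
To see why a tinted specular color reads less like plastic, here is a minimal Blinn-Phong specular term evaluated for a single pixel. The vectors and colors are illustrative examples, not values from any particular engine.

```python
# A minimal Blinn-Phong specular term for one pixel, comparing a grey
# specular colour with a tinted one. All inputs are made-up examples.
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def blinn_phong_spec(normal, light_dir, view_dir, spec_color, shininess=32.0):
    h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))  # half vector
    n_dot_h = max(0.0, sum(n * hc for n, hc in zip(normal, h)))
    intensity = n_dot_h ** shininess
    return tuple(c * intensity for c in spec_color)

n = normalize((0.0, 0.0, 1.0))
l = normalize((0.3, 0.2, 1.0))
v = normalize((0.0, 0.0, 1.0))

print("plastic-looking:", blinn_phong_spec(n, l, v, (0.5, 0.5, 0.5)))    # grey spec
print("copper-looking: ", blinn_phong_spec(n, l, v, (0.95, 0.64, 0.54)))  # tinted spec
```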

Make a shared company texture library, similar to a shader library. It's not productive having five artists all making grass textures, or concrete normal maps. Get into the mindset of sharing and re-using assets. If you work as part of a larger publisher, try to get an internal art database going. Sharing assets between studios can save weeks to months of art time.

Normal Mapping

This generation's buzzword. It's vital to establish rules early on about the level of detail on each surface. Make sure not to waste valuable time creating normal maps for background scenery or less important areas that might not be the main focus of the level.

Depending on the budget, game type and art direction, you may be creating each and every object using high poly modeling and a digital sculpting tool like ZBrush or Mudbox, or you'll be using a faster solution, applying normal 'detail' maps across your texture surfaces with a plug-in such as Nvidia's Photoshop filter.

For surface or texture detail, many games use a mixture of both, and if you have a tight schedule, this is where a set of guidelines can help.

Work out the rules. If a surface's detail is only 5mm deep, you might decide to generate normal maps for anything up to this depth using a filter. This can be used for simple things such as fine hairline cracks, pitted cement, lightly recessed brick and so forth. If its detail exceeds 5mm, for example a craggy rock with a lot of chipped edges and some deep crevices, it would make more sense to model the object high poly, add the detail via ZBrush and create normal maps for a low poly base object.

Be careful using normal map filters. They can save a lot of time and work wonders in the right circumstances, but they can also produce some horrible looking textures. Shadows and specular highlights will be interpreted as depth and ridges, respectively, so take care over which textures are run through a filter. A 3D package will always give better results, due to its ability to bake correct 3D information into the normal map. Try creating surface cracks and pits using a 2D filter and overlaying them with your exported 3D normal map for an added layer of detail.
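
For reference, this is roughly what such a 2D filter does under the hood: derive slopes from a greyscale height image and pack them into a tangent-space normal map. The file name and strength value below are placeholders. Combining the result with a baked map is usually a matter of adding the red and green channels and renormalizing.

```python
# Generate a tangent-space normal map from a greyscale height map,
# approximating what a 2D normal-map filter does. 'height.png' is a placeholder.
import numpy as np
from PIL import Image

height = np.asarray(Image.open("height.png").convert("L"), dtype=np.float32) / 255.0

strength = 2.0  # exaggerates the slopes; tune to taste
# Central-difference gradients give slope in X and Y (wrapping so the map tiles).
dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * strength
dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * strength

# Build per-pixel normals (-dx, -dy, 1) and normalise them.
nz = np.ones_like(height)
length = np.sqrt(dx * dx + dy * dy + nz * nz)
normal = np.stack((-dx, -dy, nz), axis=-1) / length[..., None]

# Pack from [-1, 1] into [0, 255] RGB, the usual normal-map encoding.
rgb = ((normal * 0.5 + 0.5) * 255).astype(np.uint8)
Image.fromarray(rgb).save("normal_from_height.png")
```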

Remember, normal maps are very expensive in most engines. In a lot of cases (especially flat surfaces) a good specular map will suffice and will be a lot cheaper, allowing you to do more with the really important spots.

Shaders

Shaders are one of the most important aspects of this generation's visuals. Due to their complex nature and their potential for racking up instructions in untrained hands, it makes sense to leave most of the development to the technical artists. The technical artists should be experienced at using as few instructions as possible while maintaining the desired look, reserving the more complex shaders for cinematic sequences and other special one-off moments.

Many games nowadays use a single 'super-shader' capable of creating most necessary materials. An alternative system is to create a set of highly optimized base shaders grouped by instruction cost (low, medium and high), each exposing an interface of controls and scalar parameters for modification. With each artist using shaders parented to the technical artist's originals, optimizations and changes can be made across whole levels later, without the need to go in and change everything by hand.
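
The parenting idea is easier to see in code. The sketch below is not engine or shader code, just an illustration of how instances inheriting from a base shader let a single change propagate; all names and parameters are made up for the example.

```python
# A sketch of 'parented' materials: artists instance a technical artist's base
# shader and override only a few parameters, so a change to the base
# propagates everywhere. Names and parameters are illustrative only.

class BaseShader:
    def __init__(self, name, cost, **defaults):
        self.name = name          # e.g. "env_low", "env_medium", "env_high"
        self.cost = cost          # rough instruction-cost bucket
        self.defaults = defaults  # parameters every instance inherits

class MaterialInstance:
    def __init__(self, parent, **overrides):
        self.parent = parent
        self.overrides = overrides

    def resolved(self):
        # Parent defaults first, instance overrides on top.
        return {**self.parent.defaults, **self.overrides}

env_medium = BaseShader("env_medium", cost="medium",
                        spec_power=16.0, detail_tiling=4.0, use_normal_map=True)

rusty_metal = MaterialInstance(env_medium, spec_power=32.0)
old_concrete = MaterialInstance(env_medium, use_normal_map=False)

# A later optimisation pass only needs to touch the base shader:
env_medium.defaults["detail_tiling"] = 2.0
print(rusty_metal.resolved())
print(old_concrete.resolved())
```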

Regardless of the methods the artists use to create their own shaders, make sure to set up an easily accessible library on the internal network where the whole art team can share and access each other's materials. In the long run, this will not only avoid lots of unnecessary duplicates, but will also help maintain a cohesive art style.

Lighting

Lighting is, in my opinion, the most important aspect of next gen art. How a scene is lit controls its mood, scale and atmosphere. The extra memory and power afforded to us by next gen consoles open up a new world of lighting choices.

There are various ways to light levels, dependent on the style. In the previous generation, 95% of all games used the vertex coloring method. This is an outdated system now, but it can still be used in conjunction with other techniques mentioned below to provide a great look.

The ideal solution is to light using lightmaps. Lightmaps give you the flexibility to use advanced static lighting calculations across your world which could never be achieved in realtime. They also do away with the laborious task of creating floating shadow 'overlays' (and the draw time they use) and the unnecessary subdivision of level geometry we saw last generation. Some engines already have built-in support for creating lightmaps, but it is likely you'll end up lighting in an external 3D package and exporting the lightmaps for use in the engine. This method is generally preferable anyway, as a 3D package will always offer more tools and flexibility.

If you're lightmapping a level, you'll need to unwrap all the geometry on a separate UV channel. Letting the 3D package unwrap for you is usually sufficient, if not the most optimal, method. If using a separate engine, you'll need to design an efficient shader for blending the lightmaps with the diffuse channel while maintaining the correct levels. Consider additional, cheaper shaders which work as emissive only, ideal for background scenery.
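
The core of that blend is a multiply between the diffuse map (sampled on the first UV set) and the lightmap (sampled on the second), usually with a scale factor to restore the correct levels. The sketch below shows the per-pixel arithmetic; the half-intensity encoding is one common convention, not a requirement.

```python
# Per-pixel sketch of a lightmap blend: diffuse sampled on UV set 0,
# lightmap on UV set 1, multiplied together with a scale factor.
# The values below are examples only.

def shade_lightmapped(diffuse_rgb, lightmap_rgb, lightmap_scale=2.0):
    """lightmap_scale compensates for lightmaps stored at half intensity
    so values above 1.0 can still be encoded."""
    return tuple(min(1.0, d * l * lightmap_scale)
                 for d, l in zip(diffuse_rgb, lightmap_rgb))

# A warm diffuse texel lit by a cool, half-intensity lightmap texel.
print(shade_lightmapped((0.8, 0.6, 0.4), (0.3, 0.35, 0.45)))
```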

Ambient occlusion is extremely important, adding realism by calculating soft shadows in corners, under objects and at points of contact. As a rough approximation of global illumination, it gives you a comparable look with drastically reduced render times.

Work out your system for generating ambient occlusion solutions. If lighting within a proprietary engine, you may find it's not featured as part of the lighting tools. (It can be faked, but this is very time consuming.) Many 3D packages now have advanced renderers featuring ambient occlusion solutions as standard. Do some experimenting with these built-in tools, as you may find they are overly complex or that render times take too long. If you're going to be lighting a city block, for instance, you're going to want a fast renderer. Luckily, plenty of third-party render systems exist, and it will be worth your while to test a few, as they will all vary in their results, ease of use and speed.
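
For the curious, this is essentially what those renderers compute at every surface point: fire rays over the hemisphere and measure how many are blocked. The toy scene below (a single sphere hovering over a ground point) is purely illustrative and nothing like how a production renderer is implemented.

```python
# A minimal Monte Carlo ambient-occlusion estimate for one surface point:
# fire random rays over the hemisphere and count how many hit an occluder.
import math
import random

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere intersection test (True on any forward hit).
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    return disc > 0.0 and (-b + math.sqrt(disc)) > 0.0

def hemisphere_dir(normal):
    # Uniform direction on the hemisphere around 'normal' (rejection sampling).
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        l = math.sqrt(sum(c * c for c in d))
        if 0.0 < l <= 1.0:
            d = tuple(c / l for c in d)
            if sum(a * b for a, b in zip(d, normal)) > 0.0:
                return d

def ambient_occlusion(point, normal, occluders, samples=256):
    hits = 0
    for _ in range(samples):
        d = hemisphere_dir(normal)
        if any(ray_hits_sphere(point, d, c, r) for c, r in occluders):
            hits += 1
    return 1.0 - hits / samples  # 1.0 = fully open, 0.0 = fully occluded

surface_point = (0.0, 0.0, 0.0)
surface_normal = (0.0, 0.0, 1.0)
occluders = [((0.0, 0.0, 1.5), 1.0)]  # one sphere hovering above the point
print(f"AO: {ambient_occlusion(surface_point, surface_normal, occluders):.2f}")
```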

It used to be extremely difficult to light outdoor, daylit scenes, but nowadays this is a viable option. One direct light source means the whole world can be either lit using lightmaps or, in some cases, dynamically lit. An ideal solution would be to dynamically light your world, with an ambient occlusion lightmap pass on the geometry. Be careful lighting interiors using lightmaps - due to the size of lightmaps, you'll invariably experience a lot of banding and compression artifacts, especially if using a lot of colored lighting. See if you can achieve a balance of non-overlapping dynamic light sources and lightmapped shadows and ambient light. If you're using a dynamic lighting solution, make sure there is a faked filler light in the occluded areas, to pop out the normal maps on textures.
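
The reason a faked fill light helps is that a flat ambient term ignores the normal map, while a hemisphere-style fill still varies with the per-pixel normal. The sketch below compares a flat and a perturbed normal for a shadowed texel; all vectors and colors are illustrative examples.

```python
# Why a faked fill light helps in occluded areas: the hemisphere-style fill
# term below still responds to the (normal-mapped) surface normal, so bumps
# remain visible even where the direct sun is fully shadowed.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lit(normal, sun_dir, in_shadow, sky=(0.25, 0.3, 0.4), ground=(0.1, 0.08, 0.06)):
    # Direct sunlight: zero wherever the surface is shadowed.
    direct = 0.0 if in_shadow else max(0.0, dot(normal, sun_dir))
    # Hemisphere fill: blend sky and ground colour by the normal's 'up-ness'.
    up = normal[2] * 0.5 + 0.5
    fill = tuple(g + (s - g) * up for s, g in zip(sky, ground))
    return tuple(min(1.0, direct + f) for f in fill)

sun = (0.4, 0.2, 0.89)         # roughly unit-length sun direction
flat = (0.0, 0.0, 1.0)
bumped = (0.35, 0.0, 0.94)     # a texel perturbed by the normal map

print("in shadow, flat normal:  ", lit(flat, sun, in_shadow=True))
print("in shadow, bumped normal:", lit(bumped, sun, in_shadow=True))
```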

Putting it all together

Each project is unique, but in every project we are striving for the same goal - to realize our initial visions and still remain on schedule. We are at a point where we can finally add the level of detail and creativity many of us have always wanted, but only by planning and adhering to an efficient art pipeline can we seriously devote the necessary time and resources to do so.

It's vital to plan ahead and learn from experience; hopefully this article has helped forward the notion that high quality next gen artwork can be made under budget and on time.

Tim McGrath has worked in the games industry as an artist for over 11 years. He is currently Senior Environment Artist at Midway Studios Los Angeles.
