Just how alright would I be if I were to skip normal-mapping?


I don't really get the use of normal-mapping. Graphics cards have triangle throughput, fillrate, and math power.

Normal-mapping uses fillrate in the form of an extra texture lookup (right?), and even more math power, to save triangle output.

The problem is, math power and fillrate are much more precious on a graphics card than triangle output. You'll hit limits on them faster.
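For concreteness, the extra per-pixel work being traded here is roughly one texture fetch plus a decode and a rotation into world space. Here's a CPU-side C++ sketch of it (just illustrative; on the GPU this lives in the fragment shader, and all the names are made up):

```cpp
// Rough CPU-side sketch of the extra per-pixel work normal mapping adds.
// Illustrative only: Vec3, sampleRGB, shadeNormal, etc. are made-up names.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Pretend texture fetch: returns an RGB texel in [0,1].
Vec3 sampleRGB(const float* texture, int width, int x, int y) {
    const float* t = texture + (y * width + x) * 3;
    return { t[0], t[1], t[2] };
}

// The "extra math": decode the stored normal and rotate it from tangent
// space into world space with the interpolated tangent frame (T, B, N).
Vec3 shadeNormal(const float* normalMap, int width, int x, int y,
                 Vec3 T, Vec3 B, Vec3 N) {
    Vec3 texel = sampleRGB(normalMap, width, x, y);   // 1 texture lookup
    Vec3 n = { texel.x * 2.0f - 1.0f,                 // unpack [0,1] -> [-1,1]
               texel.y * 2.0f - 1.0f,
               texel.z * 2.0f - 1.0f };
    return normalize({                                // TBN * n (3x3 mat-vec)
        T.x * n.x + B.x * n.y + N.x * n.z,
        T.y * n.x + B.y * n.y + N.y * n.z,
        T.z * n.x + B.z * n.y + N.z * n.z });
}
```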

I've seen slides showing a 20k model using a 1-million-texel normal map, being 10x faster than a 1-million-poly model with no normal map. The only problem with this test is that a 20k model with a 1-million-texel normal map is going to end up looking like a 60k-500k model, not a 1-million-poly model.


Fillrate and memory bandwidth are not quite the same thing. Normalmaps don't really hit fillrate outside of a deferred or light prepass render, just memory bandwidth. However, their access patterns are fairly predictable, and scale better (assuming mipmaps) than random vertex access.

Given that every card imaginable these days shades and rasterizes in units larger than a pixel, 2x2 quads at the least, and far larger in practice, any ALU gains you get by not bothering with normalmaps will be consumed by small triangle overhead. 1x1 pixel triangles will generally compute as 2x2 quads, or worse, and throw away most of the results, so pixel for pixel they're 4-16x more expensive than a more reasonably sized triangle.
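To put rough numbers on that, here's a back-of-the-envelope C++ sketch of the 2x2-quad overhead (the constants are assumptions, not measurements):

```cpp
// GPUs shade fragments in 2x2 quads, so a triangle covering a single pixel
// still launches (at least) 4 fragment-shader invocations and discards 3.
#include <cmath>
#include <cstdio>

// One large triangle covering `pixels` pixels: nearly every quad is full,
// with some partial quads along the edges (~proportional to the perimeter).
double invocationsOneBigTriangle(double pixels) {
    double edgeQuads = 3.0 * std::sqrt(pixels); // crude perimeter estimate
    return pixels + edgeQuads * 2.0;            // partially filled edge quads
}

// The same area drawn as 1x1-pixel triangles: every pixel drags in a full quad.
double invocationsTinyTriangles(double pixels) {
    return pixels * 4.0;
}

int main() {
    double area = 1000000.0; // a megapixel of coverage
    printf("one big triangle : ~%.0f invocations\n", invocationsOneBigTriangle(area));
    printf("1x1 px triangles : ~%.0f invocations (%.1fx more)\n",
           invocationsTinyTriangles(area),
           invocationsTinyTriangles(area) / invocationsOneBigTriangle(area));
}
```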

Lastly, mipmaps provide a more automatic method of LOD. With discrete triangles, lighting, texturing, etc. will almost certainly break down into a flickery, sparkly mess as the triangles shrink to sub-pixel size.

^ what Reaper said. Sub-pixel triangles are very inefficient for most GPUs, and will lead to aliasing hell since you can't pre-filter them. Normal maps can also be block-compressed, and are better suited to streaming due to having built-in LOD thanks to mipmaps.
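For reference, this is roughly what mip selection does, which is what gives normal maps their pre-filtering and built-in LOD (a simplified C++ sketch of the standard footprint formula; the names are made up):

```cpp
// Shrinking geometry samples a pre-averaged mip level instead of randomly
// hitting individual texels, which is why mipmapped normal maps don't sparkle.
#include <algorithm>
#include <cmath>
#include <cstdio>

// du/dx, dv/dx, du/dy, dv/dy: how far the UVs move per screen pixel,
// already scaled into texels (i.e. multiplied by the texture size).
float mipLevel(float dudx, float dvdx, float dudy, float dvdy) {
    float footprintX = std::sqrt(dudx * dudx + dvdx * dvdx);
    float footprintY = std::sqrt(dudy * dudy + dvdy * dvdy);
    float footprint  = std::max(footprintX, footprintY);
    return std::max(0.0f, std::log2(footprint)); // level 0 = full resolution
}

int main() {
    // Up close: ~1 texel per pixel -> level 0 (full detail).
    // Far away: ~16 texels per pixel -> level 4, a pre-filtered average.
    printf("near: mip %.1f\n", mipLevel(1.0f, 0.0f, 0.0f, 1.0f));
    printf("far : mip %.1f\n", mipLevel(16.0f, 0.0f, 0.0f, 16.0f));
}
```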

EDIT: I should also add that if you start to dramatically increase vertex density, the cost of per-vertex operations like skinning and blendshapes will increase proportionally.
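A quick sketch of why that scales with vertex count: a linear-blend-skinning loop does a fixed amount of matrix work per vertex, so tripling the vertices triples the work regardless of how the pixels get shaded (illustrative C++, all types and names assumed):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };          // column-major, assumed

Vec3 transformPoint(const Mat4& M, const Vec3& p) {
    return { M.m[0]*p.x + M.m[4]*p.y + M.m[8] *p.z + M.m[12],
             M.m[1]*p.x + M.m[5]*p.y + M.m[9] *p.z + M.m[13],
             M.m[2]*p.x + M.m[6]*p.y + M.m[10]*p.z + M.m[14] };
}

struct SkinnedVertex {
    Vec3  position;
    int   bone[4];    // bone indices
    float weight[4];  // blend weights, sum to 1
};

// O(vertexCount): per-vertex cost, independent of screen coverage.
void skin(const std::vector<SkinnedVertex>& in,
          const std::vector<Mat4>& bonePalette,
          std::vector<Vec3>& out) {
    out.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        Vec3 p = {0, 0, 0};
        for (int j = 0; j < 4; ++j) {
            Vec3 t = transformPoint(bonePalette[in[i].bone[j]], in[i].position);
            p.x += t.x * in[i].weight[j];
            p.y += t.y * in[i].weight[j];
            p.z += t.z * in[i].weight[j];
        }
        out[i] = p;
    }
}
```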

Thanks guys. You saved me from the mistake of not using normal-mapping!

It's also important to point out that one really useful property of normal maps in modern games is detail maps. The Forgelight engine (the Planetside 2 engine), and probably others, has a normal-map atlas and uses a UV channel for detail normals on a lot of its meshes. When you get close to a model, the pixel shader looks up the detail normal in the atlas and maps it along the object, creating much finer detail, like being able to see stitching on cloth and fine surface patterns when zooming in, rather than pixelation. You can't really get that detail without really pushing the tessellation shaders.
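Here's a rough sketch of that kind of detail-normal combine. This uses a simple additive ("UDN"-style) blend of two tangent-space normals; the actual engine may do it differently, and the names here are made up:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Both inputs are tangent-space normals already unpacked to [-1,1].
// The detail normal would typically be sampled with heavily tiled UVs
// (e.g. detailUV = baseUV * 16) so it only shows up close to the surface.
Vec3 blendDetailNormal(Vec3 base, Vec3 detail) {
    return normalize({ base.x + detail.x,   // add the detail perturbation in XY
                       base.y + detail.y,
                       base.z });           // keep the base Z
}
```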

It's also not always 20k vs 1000k; there are other kinds of geometry too, such as a 4-vertex model representing a flat floor. So 4 vs 1000k can be a big win.


I think you'll burn in hell when choosing not to normalMap :P

But more seriously, the needs depend on the situation I guess. Pumping an extra 10k triangles into a character model instead of using normalMaps may also get you where you want, without bringing the video card to its knees. But think about the environment. How many tris would a level cost if every brick wall was modeled like, well, a real brick wall? The polycount would explode with every extra square meter you add.
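Some quick numbers to back that up; the per-brick triangle count is just an assumption for illustration:

```cpp
// Modelled brick geometry scales with wall area; a flat wall with a
// normal map does not.
#include <cstdio>

int main() {
    const int bricksPerSquareMetre = 60;   // roughly standard brick size, assumed
    const int trianglesPerBrick    = 200;  // enough for bevels and mortar, assumed
    for (int area = 10; area <= 1000; area *= 10) {
        long long modelled = (long long)area * bricksPerSquareMetre * trianglesPerBrick;
        printf("%4d m^2 of wall: ~%9lld tris modelled vs 2 tris + normal map\n",
               area, modelled);
    }
}
```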

You also see more and more "detailNormalMaps": a secondary (frequently repeating / tiled) normalMap to simulate the micro-structure of a certain material. Cotton, bumpy skin, leather, rough concrete speckles, wood grain, et cetera. Even if the video card would laugh about it, the artist that has to model your stuff won't! NormalMaps can often be recycled for various cases, and in some cases it's enough to "cheat" by converting a greyscale photo into a normalMap. Production-wise that's easier than modeling each and every detail.
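That greyscale-photo trick is basically treating the image as a height field and differentiating it. A minimal C++ sketch, assuming clamped borders and an arbitrary strength factor:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// height: w*h greyscale values in [0,1]; returns w*h*3 bytes (an RGB normal map).
std::vector<uint8_t> heightToNormalMap(const std::vector<float>& height,
                                       int w, int h, float strength = 2.0f) {
    auto at = [&](int x, int y) {               // clamp lookups at the borders
        x = x < 0 ? 0 : (x >= w ? w - 1 : x);
        y = y < 0 ? 0 : (y >= h ? h - 1 : y);
        return height[y * w + x];
    };
    std::vector<uint8_t> out(w * h * 3);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float dx = (at(x + 1, y) - at(x - 1, y)) * strength;  // central differences
            float dy = (at(x, y + 1) - at(x, y - 1)) * strength;
            float len = std::sqrt(dx * dx + dy * dy + 1.0f);
            float nx = -dx / len, ny = -dy / len, nz = 1.0f / len;
            uint8_t* p = &out[(y * w + x) * 3];
            p[0] = uint8_t((nx * 0.5f + 0.5f) * 255.0f);          // pack [-1,1] -> [0,255]
            p[1] = uint8_t((ny * 0.5f + 0.5f) * 255.0f);
            p[2] = uint8_t((nz * 0.5f + 0.5f) * 255.0f);
        }
    return out;
}
```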

Then again, if you won't see the surfaces from nearby, there is less need for a normalMap of course. When peeking around in other games' texture packs, you may notice that ceiling textures are often a bit simpler, without normalMaps. Reason? You're not looking at the ceiling all day, are you?

I think/hope that rendering will move more towards displacement mapping, or whatever it's called these days. So (nearby) geometry would get tessellated into much smaller patches, and have its vertices offset by a "bump-" or "heightMap" kind of thing. That may kill normalMaps one day, although you'd still need that extra texture of course.
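For what it's worth, the displacement idea boils down to something like this after tessellation (a CPU-side C++ sketch; on hardware it would sit in the domain / tessellation evaluation shader, and all the names are illustrative):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Vertex { Vec3 position; Vec3 normal; float u, v; };

// heightMap: w*h values in [0,1]; `scale` is the maximum displacement.
// Each (tessellated) vertex is pushed along its normal by the sampled height.
void displace(std::vector<Vertex>& verts,
              const std::vector<float>& heightMap, int w, int h, float scale) {
    for (Vertex& vtx : verts) {
        int x = int(vtx.u * (w - 1));          // nearest-texel lookup for brevity
        int y = int(vtx.v * (h - 1));
        float height = heightMap[y * w + x];
        vtx.position.x += vtx.normal.x * height * scale;
        vtx.position.y += vtx.normal.y * height * scale;
        vtx.position.z += vtx.normal.z * height * scale;
    }
}
```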

