High poly or normal mapped?

Basically I'm wondering if I should just make higher poly models with no shaders or make lower polygon models with normal map shaders. Which would perform better?
Normal mapped low polygon models should render much faster than equivalent high poly models. That is the entire point of normal maps: they let you 'fake' a high polygon mesh without the expense of transforming and rasterizing all of the extra polygons.

Also, as a minor note: you don't really use high poly models with "no shaders". Those have a shader just like the normal-mapped model, and it will look 90% the same. The only difference is that the normal-mapped model pulls its normals from a texture and performs lighting in the fragment shader, while the other model gets its normals from the vertices and performs lighting in the vertex shader.
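To make that distinction concrete, here is a minimal C++ sketch (not real shader code; every name is made up for this example) of the two paths. The Lambert term is identical in both; the only question is whether the normal comes from vertex data, with the lit result interpolated, or from a normal-map texel, with lighting evaluated per pixel:

#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b)             { return a.x*b.x + a.y*b.y + a.z*b.z; }
float lambert(Vec3 n, Vec3 l)         { return std::max(0.0f, dot(n, l)); }
float lerpf(float a, float b, float t){ return a + (b - a) * t; }

// High-poly / vertex-lit path: light each vertex, then interpolate the lit
// values across the triangle (shown here for just two vertices).
float vertexLitPixel(Vec3 n0, Vec3 n1, Vec3 lightDir, float t)
{
    return lerpf(lambert(n0, lightDir), lambert(n1, lightDir), t);
}

// Normal-mapped path: fetch a normal per pixel (here just passed in after
// sampling/unpacking) and evaluate the same lighting in the fragment stage.
float pixelLitPixel(Vec3 sampledNormal, Vec3 lightDir)
{
    return lambert(sampledNormal, lightDir);
}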
My Projects:
Portfolio Map for Android - Free Visual Portfolio Tracker
Electron Flux for Android - Free Puzzle/Logic Game
Normal mapping is definitely "standard" these days ;)

Another thing to keep in mind though is memory usage.
Non-normal mapped mesh:
Vertex layout:
  position (3 x float)
  colour   (3 x float)
  normal   (3 x float)
  UV       (2 x float)
  Total = 44 bytes per vertex
Textures:
  512x512 diffuse (RGB8)
  Total = 786432 bytes

Normal mapped mesh:
Vertex layout:
  position (3 x float)
  colour   (3 x float)
  normal   (3 x float)
  binormal (3 x float)
  tangent  (3 x float)
  UV       (2 x float)
  Total = 68 bytes per vertex
Textures:
  512x512 diffuse (RGB8)
  512x512 normal map (RGB8)
  Total = 1572864 bytes
If a 512 normal map takes 786432 bytes, and a vertex from a non-normal mapped mesh takes 44 bytes, that means that for the same memory cost as an n-map, you could've had an extra 17,000 verts instead.
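As a quick sanity check on those figures, here is a small self-contained C++ sketch that just reproduces the arithmetic above; the layouts and the 512x512 RGB8 texture size are taken from the post, and nothing here is engine code:

#include <cstdio>

int main()
{
    const int bytesPerFloat = 4;

    // Non-normal-mapped vertex: position + colour + normal + UV
    const int plainVertex = (3 + 3 + 3 + 2) * bytesPerFloat;           // 44 bytes

    // Normal-mapped vertex: adds binormal + tangent
    const int nmapVertex  = (3 + 3 + 3 + 3 + 3 + 2) * bytesPerFloat;   // 68 bytes

    // 512x512 RGB8 texture: 3 bytes per texel
    const int texture512  = 512 * 512 * 3;                             // 786432 bytes

    std::printf("plain vertex    : %d bytes\n", plainVertex);
    std::printf("n-mapped vertex : %d bytes\n", nmapVertex);
    std::printf("512 RGB8 map    : %d bytes\n", texture512);
    std::printf("extra verts for one map's memory: %d\n", texture512 / plainVertex); // ~17873
    return 0;
}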
Quote: If a 512 normal map takes 786432 bytes, and a vertex from a non-normal mapped mesh takes 44 bytes, that means that for the same memory cost as an n-map, you could've had an extra 17,000 verts instead.


Ah. But the normal map simulates 262,144 vertices (one per texel of the 512x512 map) without the performance hit. It's a trade-off, of course, but with the average video card having at least 512 MB of video RAM, memory consumption is a minor concern.

No, I am not a professional programmer. I'm just a hobbyist having fun...

Quote: Original post by Hodgman


Another thing to keep in mind though is memory usage.

...

Normal mapped mesh:
Vertex layout:
position(3 x float)
colour (3 x float)
normal (3 x float)
binormal(3 x float)
tangent (3 x float)
UV (2 x float)
Total = 68 bytes per vertex

...



Under Direct3D the equivalent vertex structure would be something like:

position (3 x float)
colour   (1 dword)   // if needed
normal   (3 x float)
tangent  (3 x float)
UV       (2 x float)
Total = 44-48 bytes per vertex

The bitangent can be computed in the shader (see the sketch below). This is just to point out that the vertex structure doesn't have to be as heavy as presented.
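A minimal sketch of that reconstruction, written as plain C++ rather than HLSL since the shader version is the same few operations. The Vec3 type and the 'handedness' parameter are assumptions for this example; the sign is typically stored in a fourth tangent component when UVs can be mirrored:

struct Vec3 { float x, y, z; };

Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y*b.z - a.z*b.y,
             a.z*b.x - a.x*b.z,
             a.x*b.y - a.y*b.x };
}

// Rebuild the bitangent from the normal and tangent instead of storing it.
Vec3 computeBitangent(Vec3 normal, Vec3 tangent, float handedness /* +1 or -1 */)
{
    Vec3 b = cross(normal, tangent);
    return { b.x * handedness, b.y * handedness, b.z * handedness };
}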

Anyway, in my opinion, high-poly models and normal maps aren't really exclusive. Normal mapping is a technique that gives more detail to surfaces, and in certain cases it can make low-poly objects look smoother. At the surface level, normal mapping can create the impression of details that would be almost impossible to build out of polygons alone (at least if you want a reasonable polygon count).

So, make your models as high-poly as required and profile. GPUs can easily push millions of normal-mapped polygons. Also, you can use LOD techniques for distant models.

Cheers!
Quote: Original post by Hodgman
Normal mapping is definitely "standard" these days ;)

Another thing to keep in mind though is memory usage.
...
If a 512 normal map takes 786432 bytes, and a vertex from a non-normal mapped mesh takes 44 bytes, that means that for the same memory cost as an n-map, you could've had an extra 17,000 verts instead.


But normal maps can often be reused for different models (or tiled on the same model), without requiring more memory.
Quote: Original post by kauna
...
the bitangent can be computed in shader. Just to point out that the vertex structure doesn't have to be as heavy as presented.
...
Cheers!

Both the tangent and the bitangent can also be computed in the geometry shader, since it sees all three vertices of the triangle (see the sketch below). I also think that, depending on the quality requirements, the normal maps could be compressed significantly. So design the rendering part with some flexibility, then test and profile your game with various settings.
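For reference, here is the per-triangle derivation such a geometry shader could perform, sketched in plain C++. Vec2/Vec3 and the function name are made up for this example, and a real implementation would also orthonormalise the result against the vertex normal:

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

void triangleTangentBasis(Vec3 p0, Vec3 p1, Vec3 p2,
                          Vec2 uv0, Vec2 uv1, Vec2 uv2,
                          Vec3& tangent, Vec3& bitangent)
{
    // Position and UV deltas along the two triangle edges.
    Vec3  e1  = sub(p1, p0),    e2  = sub(p2, p0);
    float du1 = uv1.x - uv0.x,  dv1 = uv1.y - uv0.y;
    float du2 = uv2.x - uv0.x,  dv2 = uv2.y - uv0.y;

    // Solve the 2x2 system mapping UV deltas onto the position deltas.
    // (Assumes the UVs are not degenerate; real code should guard this divide.)
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    tangent   = scale(sub(scale(e1, dv2), scale(e2, dv1)), r);
    bitangent = scale(sub(scale(e2, du1), scale(e1, du2)), r);
}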

Quote: Both the tangent and the bitangent can also be computed in the geometry shader.


I am still not aware of all the potential of geometry shaders. I am hoping that some day I'll have the chance to look into them. Thank you for pointing this out.

Best regards!
With D3D11, tessellation is one of the big new features, and some of the tech demos do get close to rendering per-pixel triangles. Of course, you'd want an LOD system.
-----Quat
Quote: Original post by maspeir
Quote: If a 512 normal map takes 786432 bytes, and a vertex from a non-normal mapped mesh takes 44 bytes, that means that for the same memory cost as an n-map, you could've had an extra 17,000 verts instead.


Ah. But the normal map simulates 262,144 vertices without the performance hit. It's a trade off, of course, but with the average video card having at least 512 MB of video RAM, memory consumption is a minor concern.


I beg to differ. Texture sampling hurts performance much more than having more vertex data. The cost also goes up significantly with trilinear and anisotropic filtering. It's made even worse by the fact that the texture fetch will stall the pipeline, as the shader must immediately perform a MAD to remap the value from [0, 1] to [-1, 1] and then perform a matrix multiply on it to take the normal from tangent space to world space.

Of course, you can transform the light from world space into tangent space in the vertex shader to get rid of that little matrix multiply, but you can end up with some terribly disgusting artifacts.
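For clarity, the two operations described above look roughly like this. It is a plain C++ sketch standing in for the fragment-shader math, with all names made up for this example; the TBN vectors are assumed to be the interpolated tangent, bitangent and normal:

struct Vec3 { float x, y, z; };

// Step 1: the MAD that remaps a stored normal from [0, 1] back to [-1, 1].
Vec3 unpackNormal(Vec3 sampledRGB)
{
    return { sampledRGB.x * 2.0f - 1.0f,
             sampledRGB.y * 2.0f - 1.0f,
             sampledRGB.z * 2.0f - 1.0f };
}

// Step 2: the matrix multiply that takes the tangent-space normal to world
// space, written out as a linear combination of the TBN basis vectors.
Vec3 tangentToWorld(Vec3 n, Vec3 tangent, Vec3 bitangent, Vec3 normal)
{
    return { n.x * tangent.x + n.y * bitangent.x + n.z * normal.x,
             n.x * tangent.y + n.y * bitangent.y + n.z * normal.y,
             n.x * tangent.z + n.y * bitangent.z + n.z * normal.z };
}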

