#### Archived

This topic is now archived and is closed to further replies.

# Establishing desired polygon count


## Recommended Posts

Lately, I've been asked by a 3D modeler how many polygons I want certain 3D models for my game to be. Of course, the answer is a function of the minimal target graphics hardware (as many polygons as it can handle). My question is: how do you figure that poly count out? Trial and error? Exact methods (knowing the capabilities of the card and summing the number of polygons in a scene)? Any answer based on experience would be appreciated.

##### Share on other sites
Well, it depends on your OpenGL skills.

Someone posted a comparison of several drawing methods in OpenGL, from vertex arrays to display lists to vertex buffers, and the results really encouraged me. I only use standard vertex arrays at the moment and get really high fps even on high-polygon scenes.

Doom 3, I think, uses 10,000 triangles per player model, and with drawing extensions like VBO, 10,000 triangles really shouldn't be a problem.

##### Share on other sites
Well, one way of establishing the average number of tris per object:

- you define the minimum requirements for the video card
- you distribute the maximum triangle count between the scene and the objects within it
- you define the minimum frame rate
- you estimate how many objects you expect to be visible on average

Now: numTriPerObject = ((triThroughput / minFrameRate) - numSceneTris) / numObjectsPerScene.

For instance, take a GF2/Radeon 7200: you know it pushes around 4 million textured triangles per second. Say I expect to need 50K tris for my static geometry, roughly 10 objects are visible on average, and 30fps is the minimum acceptable frame rate.

This results in:
numTriPerObject = ((4,000,000 / 30) - 50,000) / 10 = around 8,333 tris per object.

That's a lot, even on such mediocre hardware.
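As a quick sanity check, the budget formula from this post can be written out directly. The function name and the Python form are mine; the input figures are the ones quoted above for a GF2/Radeon 7200, not measured values:

```python
def tris_per_object(tri_throughput, min_fps, scene_tris, objects_per_scene):
    """Per-object triangle budget from a raw throughput estimate."""
    frame_budget = tri_throughput / min_fps   # tris you can afford per frame
    remaining = frame_budget - scene_tris     # what's left after static geometry
    return remaining / objects_per_scene

# GF2/Radeon 7200: ~4M textured tris/sec, 30fps floor,
# 50K static scene tris, ~10 visible objects
budget = tris_per_object(4_000_000, 30, 50_000, 10)
print(round(budget))  # ≈ 8333
```

Note this treats raw throughput as the only limit; in practice fill rate, state changes, and CPU work eat into the budget too.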

edit: tpyo

[edited by - Countach on July 7, 2003 10:57:13 AM]

##### Share on other sites
That's what I was thinking about.
The only missing piece for me is: how do you know the video card's throughput?
I saw vertices/sec on the nVidia site, but no textured triangles per second.

##### Share on other sites
Rule of thumb for this kind of thing:

Go out and buy the worst 3D card you want to support. (It should set you back less than $30.)

Write a quick test app to generate a level/creature with x number of textured faces in your engine's format. (It doesn't need to make sense or look good, just have the proper number of verts/faces.) Check your frame rate with the level/creature. Continue reducing the number of verts/faces until performance is acceptable on your minimum hardware.

Alternative: Grab models of varying poly counts from Polycount at PlanetQuake, convert them to your engine's format, and test them.

Cut the number of verts/faces in half, and tell your artist THAT amount.

Why half? It is much easier to go back during the polish phase and add verts/faces to make things look better when you have a buffer than to reduce poly counts after the fact because you didn't leave a buffer and your performance suffers.
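The halving procedure above can be sketched as a loop. This is only an illustration: `measured_fps` here is a toy stand-in (frame rate limited purely by triangle throughput), whereas in a real test you would render the generated mesh on the minimum-spec card and read the frame rate back. The function names and numbers are mine, not from the post:

```python
def measured_fps(face_count, throughput=4_000_000):
    # Toy model standing in for a real benchmark run on the target card.
    return throughput / max(face_count, 1)

def find_face_budget(start_faces, target_fps=30):
    faces = start_faces
    while measured_fps(faces) < target_fps:
        faces //= 2          # keep reducing until performance is acceptable
    return faces // 2        # then halve once more as the polish-phase buffer

print(find_face_budget(1_000_000))  # → 62500 under this toy model
```

The final halving implements the "tell your artist half" advice, leaving headroom to add detail later.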

##### Share on other sites
1) If you can, get the artist to model everything with something like patches or NURBS. Most exporters collapse these into triangle meshes. That way, you can determine an initial maximum based on target hardware specifications and on what other people's engines achieve doing similar things.
More importantly, if you discover that vertex throughput/tri setup isn't a bottleneck for you, the artist can simply bump up the tessellation slider to easily add loads more detail if your engine does better than you expected.

2) A high polygon count really only helps the silhouette and vertex lighting. Per-pixel effects are where it's at now: in particular bump mapping, detail mapping, per-pixel lighting, etc. Those will get you much, much more detail per GPU cycle. The quality of your lighting makes the biggest difference!

3) AFAIK it's the **SOURCE** models in games like Doom 3 that are in the region of 10,000 polygons each. The detail from those polygons is then baked into normal maps, and the polygon count of the actual in-game mesh is reduced to around 2,000-4,000. This has become quite a common technique now; Google for things like PolyBump (Crytek's version) etc.

4) Try to work out where your bottlenecks are likely to be. If you have lots of frame-buffer-blended stuff (alpha, glow, etc.), then you can expect fill rate to be a bottleneck long before polygon/vertex throughput is. If you have lots of dynamic vertex changes (i.e. you can't use static VBs/display lists), then the amount of data you're shoving over the bus and handling with the CPU might reduce the overall poly count.

5) For old hardware, I'd say don't go above 20,000 polygons per frame. That was the rough limit I set for Pac-Man: Adventures in Time, which ran at a _playable_ rate on cards like the old 2Mb Matrox Mystiques.

For a minimum of a GeForce 256, I'd set a limit of around 100,000 polygons (maybe a few more) per frame if 60Hz is the target rate and you want lots of nice stuff going on.

Personally, I'd get the artist to limit the "real" polys per character to somewhere between 2,000 and 6,000 unless absolutely necessary, but I would let them build a high-detail version that collapses into a bump map for the low-detail version, as well as a few levels of multipass/multitexture.

--
Simon O'Connor
ex-Creative Asylum
Programmer &
Microsoft MVP

##### Share on other sites
Alright, thanks for the info.
I've got some technique studying to do now.
