You can calculate the number of polygons your computer or a system can handle

Shane C


Graphics cards have a spec called triangles/clock. The number of triangles a graphics card can handle per second is the triangles/clock multiplied by the core clock speed of the graphics processor.

I think my GTX 570 card at stock clock can handle up to 732 million polygons per second, or about 12 million polygons per frame at 60 frames per second.
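As a sketch of that arithmetic (the 1 triangle/clock rate and the 732 MHz stock clock are the figures assumed in this post, not guaranteed specs; the function name is just for illustration):

```python
# Rough polygon-budget estimate:
#   triangles per second = triangles/clock * core clock (Hz)
#   triangles per frame  = triangles per second / target fps

def triangle_budget(tris_per_clock, core_clock_hz, fps=60):
    """Return (triangles per second, triangles per frame at the given fps)."""
    per_second = tris_per_clock * core_clock_hz
    per_frame = per_second // fps
    return per_second, per_frame

# GTX 570 at stock clock, assuming 1 triangle per clock:
per_sec, per_frame = triangle_budget(tris_per_clock=1, core_clock_hz=732_000_000)
print(per_sec)    # 732000000 triangles per second
print(per_frame)  # 12200000 triangles per frame at 60 fps
```

Plugging in another card or console's numbers (like the Wii U's below) works the same way.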

While it can handle 12 million polygons at 60 frames per second, I really shouldn't go that high. I have heard that each render pass you do causes the graphics card to process the model's geometry over again, so a 20k model might effectively cost 40-60k polygons.
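To illustrate that overhead (the 2-3x pass multiplier is this post's rough estimate, not a measured figure, and the helper name is hypothetical):

```python
def effective_polys(model_polys, passes):
    """Estimate the effective polygon cost of a model rendered in multiple passes.

    Assumes each pass reprocesses the full geometry, so cost scales
    linearly with the number of passes (the rough model used in this post).
    """
    return model_polys * passes

print(effective_polys(20_000, 2))  # 40000 - a 20k model drawn in 2 passes
print(effective_polys(20_000, 3))  # 60000 - the same model in 3 passes
```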

The Wii U I plan on developing for can handle about 9 million polygons per frame at 60 frames per second, if I have figured correctly, but I plan on using 1.1 million polygons total.

You can usually search for how many triangles/clock your graphics card, or a system like the Wii U, can handle, but results are limited, so searches can be hard. If anyone wants to know what their card or a system can handle, let me know the graphics card or system and I will try to tell you what it can do.

Low-polygon development is best, but there are situations and developers that call for a large number of polygons.

1 Comment


Yep, those figures sound about right, but are really kind of meaningless in some sense because what they really mean is "this is how many triangles per second you can push onto the screen with a single pass, no shading, no postprocessing, no nothing". In practice you want to stay well below the theoretical maximum if you're doing anything beyond rasterizing white polygons (iirc the original Crysis on "ultra" settings hovered around 2-3 million triangles per frame, which is a good point of reference in my opinion).


Arguably, though, poly count is not as relevant as it used to be: I am told most games nowadays are limited by either the processor or the pixel shading stage, and many tricks to fake high-resolution models have been developed. Still, knowing the limits of the graphics card, even as a rough estimate, is important.
