What's a good FPS on a GeForce 6600 GT?

Posted by xsirxx


Many of you may have seen my last post about deferred lighting speed problems. Now I'm pretty much wondering what's an acceptable number of triangles to push per frame or per second. Is 100,000 triangles per second a good amount to aim for? Is that low? Thanks, any input would be helpful. Brad

It all depends on many, many factors.

Personally, my map editor gets 60 million triangles per second, but that's not doing anything except rendering a subsection of the map (even without quad-tree culling).

Well, what's something good to aim for, using, say, only two shaders?

Also, 60 million per second? Going by the frame rate I get rendering a single box, I only manage 850,000 triangles per second, and going by the frame rate with no triangles at all, I'd get 1.4 million.

Then I would suggest you take a look at your entire pipeline, or at the way you calculate that number. Debug mode with nvPerfHUD and a lot of texturing, including multi-pass layering, gets me over 4 MTri/sec on a BFG 6600GT OC.

Realistically, you should be able to process 60-100k triangles per frame at high frame rates (60fps).

That works out to somewhere around 3.6 MTri/sec. But this isn't 6600GT-specific; it's just what today's games are pretty much required to do. Oh, and I'm not counting what's CPU-bound (physics, sound, etc.)

Quote:
Original post by sordid
Then I would suggest you take a look at your entire pipeline, or at the way you calculate that number. Debug mode with nvPerfHUD and a lot of texturing, including multi-pass layering, gets me over 4 MTri/sec on a BFG 6600GT OC.

Realistically, you should be able to process 60-100k triangles per frame at high frame rates (60fps).

That works out to somewhere around 3.6 MTri/sec. But this isn't 6600GT-specific; it's just what today's games are pretty much required to do. Oh, and I'm not counting what's CPU-bound (physics, sound, etc.)


I use the latest nvPerfHUD and it tells me I'm pushing 768 triangles/frame with deferred lighting on, at 40fps...

I know this is bad, but I can't profile unless I'm in debug mode, and there are NO slowdowns except for my std::vectors. When I switch to release mode I get no speed increase. I can't figure out where I'm getting killed.
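For what it's worth, debug-mode STL gets dramatically slower when containers are built and torn down every frame. A minimal sketch of the usual workaround - keep one persistent vector and reuse its capacity (QuadTreeNode and the function names are placeholders for illustration, not the actual code from this thread):

#include <cstddef>
#include <vector>

// Placeholder node type; stands in for whatever the real quadtree uses.
struct QuadTreeNode
{
    QuadTreeNode* children[4];
    bool          visible;
};

// Persistent scratch list: clear() resets the size but keeps the allocated
// capacity, so after the first few frames no heap allocation happens here
// at all. Debug-mode STL is far happier with this than with a vector
// constructed from scratch every frame.
static std::vector<QuadTreeNode*> s_visibleNodes;

void BuildVisibleList(QuadTreeNode* node)
{
    if (node == NULL || !node->visible)
        return;
    s_visibleNodes.push_back(node);
    for (std::size_t i = 0; i < 4; ++i)
        BuildVisibleList(node->children[i]);
}

void BeginFrame(QuadTreeNode* root)
{
    s_visibleNodes.clear();   // does not free memory, just resets the size
    BuildVisibleList(root);
}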

I don't know this card but it sounds pretty fast. Given that I can get ~2M tri/s on a GeForce2 MX (four generations behind your card) with physics, quadtree visibility calculations, etc., it seems you should be able to get a lot more speed. I just tripled my rendering performance by redoing how the terrain rendering is done. I reckon if I can get decent fps with 30-50K triangles on screen, you should easily be able to do 250K. But whether there's any point to that is a whole other question - what is it you're working on here?

I was just doing 60k x 60fps = 3.6MTri, and 100k x 60fps = 6MTri. Realistically you want the triangle rate to be much higher than that so you have room to breathe with CPU-bound operations.


xsirxx, nvPerfHUD and a code profiler together should tell you exactly where your problem is: whether it's a GPU bottleneck, a texture bottleneck, or a CPU bottleneck somewhere.
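A crude first split before even reaching for a profiler: time the CPU submission work and the Present call separately. If most of the frame lands in Present, the driver is blocking on the GPU and you're GPU-bound; if it lands in the submission half, the problem is on the CPU side. A rough sketch (the two stage functions are stubs standing in for the real loop, and driver frame buffering makes this a heuristic, not an exact measurement):

#include <windows.h>
#include <cstdio>

// Stubs standing in for the real game loop stages.
void UpdateAndSubmitScene() { /* cull, fill buffers, DrawIndexedPrimitive... */ }
void PresentFrame()         { Sleep(1); /* IDirect3DDevice9::Present in the real app */ }

int main()
{
    LARGE_INTEGER freq, t0, t1, t2;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    UpdateAndSubmitScene();          // CPU-side work
    QueryPerformanceCounter(&t1);
    PresentFrame();                  // blocks if the GPU is behind
    QueryPerformanceCounter(&t2);

    const double submitMs  = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;
    const double presentMs = 1000.0 * (t2.QuadPart - t1.QuadPart) / freq.QuadPart;
    std::printf("submit: %.2f ms, present/wait: %.2f ms\n", submitMs, presentMs);
    return 0;
}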

I thought it would too. Basically, all I'm doing is creating a pointer vector each render, which in turn creates an index buffer through my quadtree. I can cut everything else out and just render that; when I do, I get upwards of 850K triangles/sec. That still seems slow, and that's ALL I'm doing: running my quadtree to create a pointer std::vector, FILLING an index buffer, then rendering. Other games run fine. Any ideas, or is this a bad approach? I can try to eliminate the quadtree to *check* if that's the problem. Other than all my time going into vector operations (in debug), the rest goes into filling and emptying what's in my pointer vector.
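If the per-frame index buffer fill were the problem, the standard Direct3D 9 pattern is a dynamic buffer locked with D3DLOCK_DISCARD, which hands back fresh memory instead of stalling on the copy the GPU is still drawing from. A rough sketch (g_indexBuffer and the device pointer are placeholder names, not from the actual code):

#include <d3d9.h>
#include <cstring>

IDirect3DIndexBuffer9* g_indexBuffer = NULL;
const UINT kMaxIndices = 65536;

bool CreateDynamicIB(IDirect3DDevice9* device)
{
    // Dynamic + write-only is the hint the driver needs to rename the
    // buffer under the hood; dynamic buffers must live in D3DPOOL_DEFAULT.
    return SUCCEEDED(device->CreateIndexBuffer(
        kMaxIndices * sizeof(WORD),
        D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
        D3DFMT_INDEX16,
        D3DPOOL_DEFAULT,
        &g_indexBuffer, NULL));
}

bool FillIB(const WORD* indices, UINT count)
{
    void* dst = NULL;
    // DISCARD tells the driver we don't care about the old contents,
    // so the lock doesn't wait for the GPU to finish with them.
    if (FAILED(g_indexBuffer->Lock(0, count * sizeof(WORD),
                                   &dst, D3DLOCK_DISCARD)))
        return false;
    std::memcpy(dst, indices, count * sizeof(WORD));
    g_indexBuffer->Unlock();
    return true;
}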

UPDATE: I have temporarily removed the per-frame quadtree fill, so the buffer is filled once and left alone. The framerate didn't rise at all - it moved at most 4fps... Debug and retail were both the same.

[Edited by - xsirxx on June 8, 2005 4:43:00 AM]

I've got a 6800 Ultra and my FPS is ~210 when I render ~2700 triangles in 5 passes using per-pixel lights (ps 1.1). Do you think that's slow?

It's impossible to say. That's such a small number of polygons that the CPU-side work is probably the limiting factor. In theory the card could do maybe 1000fps, but realistically, who cares what it's doing until you have a full scene and it drops below 60?!
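To put a rough number on it: with multipass lighting every pass resubmits the geometry, so the card is really chewing through triangles x passes x fps. Plugging in the figures quoted above (just the numbers from this thread, not a new measurement):

#include <cstdio>

int main()
{
    const int trianglesPerPass = 2700;  // figures quoted above
    const int passes           = 5;
    const int fps              = 210;

    // Each lighting pass resubmits the mesh, so the effective rate is
    // triangles * passes * fps.
    const long long trisPerSecond =
        static_cast<long long>(trianglesPerPass) * passes * fps;

    std::printf("%lld triangles/sec (~%.1f MTri/sec)\n",
                trisPerSecond, trisPerSecond / 1.0e6);
    return 0;
}

That's only ~2.8 MTri/sec submitted - nowhere near the card's peak, which is why per-pass and CPU-side overhead dominate at this scene size.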
