
How to know if the fps is acceptable for the video card?


I know this is nearly impossible to answer because every codebase is different, but can anyone give an estimate? I have an NVIDIA 8600 GTS, and I loaded 1000 MD2 models totalling 355,000 triangles. It crawls at 6 FPS. How can I tell whether that is acceptable for the video card? By the way, I'm implementing a VBO for my MD2 class, but I'm having trouble with the triangle fans and strips. If anyone can give me tips on how to fix it, please tell me. Thanks.
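For reference, a minimal sketch of the kind of VBO setup I mean (names like createVbo are illustrative; assumes the buffer-object entry points are available, e.g. through GLEW, since plain GL/gl.h doesn't expose them on Windows):

    #include <GL/gl.h>

    // Upload vertex data once; 'vertices' holds vertexCount * 3 floats (x, y, z).
    GLuint createVbo(const GLfloat* vertices, int vertexCount)
    {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
                     vertices, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        return vbo;
    }

    // At draw time: source the vertex array from the VBO instead of client memory.
    void drawVbo(GLuint vbo, int vertexCount)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (const void*)0);   // offset 0 into the VBO
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }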

I'm not sure what you mean by "acceptable for the video card". A video card is willing to accept a wide range of framerates. 6 FPS is probably not acceptable for the user, though, so you'll need to fix that. [smile]

If you're not using VBOs, what are you using to render? Please tell me it's not glBegin/glEnd.

What I mean is: is that 6 FPS the maximum the video card can render at that triangle count?

Right now it's glBegin/glEnd, because I followed the tutorial here. I know how to use glDrawArrays, but the problem is the glcommands in the MD2 file. I don't know whether I should use a for loop with an if/else on each glcommand and call glDrawArrays depending on the result (strip or fan). The problem is that I'd be calling glDrawArrays many times. Is that fine?
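Something like this is what I have in mind (a rough sketch; drawGLCommands and frameVerts are illustrative names). MD2 glcommands are a stream of ints: a count (positive = triangle strip, negative = triangle fan, zero = end of stream), followed by |count| records of { float s, float t, int vertexIndex }:

    #include <vector>
    #include <GL/gl.h>

    void drawGLCommands(const int* cmd, const float (*frameVerts)[3])
    {
        std::vector<float> pos, uv;   // scratch arrays rebuilt per command
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        while (int count = *cmd++) {
            GLenum mode = GL_TRIANGLE_STRIP;
            if (count < 0) { mode = GL_TRIANGLE_FAN; count = -count; }

            pos.clear(); uv.clear();
            for (int i = 0; i < count; ++i, cmd += 3) {
                const float* st = reinterpret_cast<const float*>(cmd); // s, t are stored as floats
                const int idx = cmd[2];                                // index into the frame's vertices
                uv.push_back(st[0]); uv.push_back(st[1]);
                pos.push_back(frameVerts[idx][0]);
                pos.push_back(frameVerts[idx][1]);
                pos.push_back(frameVerts[idx][2]);
            }

            glVertexPointer(3, GL_FLOAT, 0, &pos[0]);
            glTexCoordPointer(2, GL_FLOAT, 0, &uv[0]);
            glDrawArrays(mode, 0, count);   // one small draw per strip/fan
        }
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    }

Note this still issues many tiny draw calls, one per strip or fan, which is the concern I have.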

One thing to remember is that when hardware companies benchmark triangle counts, they usually run degenerate triangles, so it's not a very good measure to hold your program up against.

Quote:
Original post by DarkBalls
What I mean is: is that 6 FPS the maximum the video card can render at that triangle count?


That's totally irrelevant data. No matter what you do, you'll never get anywhere near the theoretical maximum frame rate for a given triangle count, because of texturing, lighting, post-effects, game-logic cost, and so on. It's not even worth knowing what that number is; it's meaningless for practical purposes.

The method is: the game needs to run at 30 FPS at minimum, and at 60 FPS if you can get it there. If your game is above that, you're fine. If it's below that, you need to optimize your code.

Normally you want vsync on anyway to avoid tearing, and few people can notice the difference between 60 FPS and anything higher.
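If it helps, here's one way to measure frame time against those budgets (a minimal sketch using std::chrono; purely illustrative, any high-resolution timer works):

    #include <chrono>
    #include <cstdio>

    // Compare each frame's cost against the budget:
    // 33.3 ms for 30 FPS, 16.7 ms for 60 FPS.
    void frameLoop(bool& running)
    {
        using Clock = std::chrono::steady_clock;
        while (running) {
            const Clock::time_point t0 = Clock::now();
            // ... update and render one frame, then swap buffers ...
            const double ms =
                std::chrono::duration<double, std::milli>(Clock::now() - t0).count();
            std::printf("frame: %.2f ms (%.1f FPS)\n", ms, 1000.0 / ms);
        }
    }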

-me

Quote:
Original post by DarkBalls
What I mean is: is that 6 FPS the maximum the video card can render at that triangle count?
Short answer: No.
Quote:
Right now it's glBegin/glEnd

And that's what's slowing you down. It's not the graphics card at all; it's the CPU overhead of all those glVertex calls.
Quote:
I don't know whether I should use a for loop with an if/else on each glcommand and call glDrawArrays depending on the result (strip or fan). The problem is that I'd be calling glDrawArrays many times. Is that fine?
These days it's not generally worthwhile to use triangle strips/fans. Just render the model as a single triangle list.
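The conversion is mechanical, if it helps (a minimal sketch; appendStrip/appendFan are illustrative names): a strip of n vertices yields n-2 triangles with alternating winding, and a fan yields n-2 triangles that all share the fan's first vertex.

    #include <vector>

    // Append a triangle strip's indices to a flat GL_TRIANGLES index list,
    // flipping winding on every other triangle to keep the facing consistent.
    void appendStrip(const std::vector<unsigned>& strip, std::vector<unsigned>& out)
    {
        for (size_t i = 2; i < strip.size(); ++i) {
            if (i % 2 == 0) { out.push_back(strip[i - 2]); out.push_back(strip[i - 1]); }
            else            { out.push_back(strip[i - 1]); out.push_back(strip[i - 2]); }
            out.push_back(strip[i]);
        }
    }

    // Append a triangle fan: every triangle shares the fan's first vertex.
    void appendFan(const std::vector<unsigned>& fan, std::vector<unsigned>& out)
    {
        for (size_t i = 2; i < fan.size(); ++i) {
            out.push_back(fan[0]);
            out.push_back(fan[i - 1]);
            out.push_back(fan[i]);
        }
    }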

What about preprocessing the whole model? Walk through each frame and split it into two arrays: one containing the data for the triangle strip parts, the other containing the data for the triangle fan parts.

Oops. I thought each card had a specific triangles-per-second rating.

Quote:
These days it's not generally worthwhile to use triangle strips/fans. Just render the model as a single triangle list.


So that means I just use GL_TRIANGLES in glDrawArrays? I tried doing that, but the result isn't right. Do I need to compute everything and rearrange the data to make it work? I really don't have a clue.

@baw

Yeah, that's what I'm planning to do now. I'll put it in the LoadMD2 function and call glDrawArrays twice in the render: once for the strips and once for the fans. Hope it works.

Quote:
Original post by DarkBalls
Do I need to compute everything and rearrange the data to make it work?
Yes, you'll need to take your strips and fans and split them into triangles. Moreover, if you're concerned about performance, you shouldn't be using glDrawArrays, but rather glDrawElements (that is, drawing indexed primitives).
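For illustration, the indexed draw looks something like this once you have a flat triangle list (a sketch; the array names are assumptions, not your actual code):

    #include <GL/gl.h>

    // Indexed drawing: vertices are stored once, triangles reference them by
    // index, and the whole model goes out in a single call.
    void drawIndexed(const GLfloat* positions, const GLuint* indices, GLsizei indexCount)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, positions);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
    }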

Quote:
Original post by Sneftel
Yes, you'll need to take your strips and fans and split them into triangles.

That's not necessary in the case of MD2. The strip/fan drawing commands are just additional information holding the indices of the vertices.

Quote:
Original post by baw
Quote:
Original post by Sneftel
Yes, you'll need to take your strips and fans and split them into triangles.

That's not necessary in the case of MD2. The strip/fan drawing commands are just additional information holding the indices of the vertices.

It's certainly necessary if you want to draw with a single indexed draw call, as I was suggesting:
Quote:
These days it's not generally worthwhile to use triangle strips/fans. Just render the model as a single triangle list.

Triangle strips and fans are no longer the optimization they were back when the MD2 format was created. These days vertex caches are typically big enough that vertices rarely have to be recomputed, and other bottlenecks have come to the forefront, such as changing the material. In short: reduce the number of draw calls and state changes as much as possible, and your card will be far happier.

Are you culling offscreen models? You can also batch using a shader. And as you cull the scene, are you building a list of on-screen entities that you can sort by material, to minimize changes to your texture state and vertex buffers?
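Something like this is what I mean by sorting (a rough sketch; DrawItem and the draw callback are hypothetical):

    #include <algorithm>
    #include <vector>
    #include <GL/gl.h>

    // Hypothetical render-queue entry: after culling, sort the visible items
    // by texture so consecutive draws share state.
    struct DrawItem {
        GLuint textureId;
        void (*draw)();   // hypothetical per-item draw callback
    };

    void drawSorted(std::vector<DrawItem>& items)
    {
        std::sort(items.begin(), items.end(),
                  [](const DrawItem& a, const DrawItem& b) { return a.textureId < b.textureId; });

        GLuint bound = 0;
        for (const DrawItem& it : items) {
            if (it.textureId != bound) {   // bind only when the material changes
                glBindTexture(GL_TEXTURE_2D, it.textureId);
                bound = it.textureId;
            }
            it.draw();
        }
    }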

I would also recommend abstracting away OpenGL-specific code and making your rendering code API-independent. My reason: NVIDIA has a wonderful tool called NVPerfHUD, but it only works with Direct3D. You can single-step through rendering a frame, see how much time each draw call takes (it groups them by actual primitives drawn), and watch GPU and CPU idle time and all state changes. It's a wonderful tool, and it helped me identify what to optimise in my framework (which was GPU-limited), speeding up the frame rate around 3x.

To give you a rough idea: with that scene you should easily reach 150+ FPS on an 8600, assuming you don't have intensive shaders or heavy CPU-side work.

Y.
