Running out of ways of getting faster FPS

From what I can work out, I have an Intel Extreme 82865G card at work and a pretty hot PC (2.7GHz, 1GB RAM) - I get 32 FPS in the places where I get 16 at home. I can't work on the game at work though, as it's a personal project!

I'll have to try the Octree thing tonight.



Do you use quads or quad/triangle strips? Triangle strips need the least AGP bandwidth, so use them if you aren't already.

Then try backface culling, so that polygons that face away from the camera don't get drawn. Using vertex dot products you can test the angle between the poly's normal vector and the vector from the camera to the poly. If the dot product is positive (the angle is under 90 degrees), the poly is facing away -> don't draw it.
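Something like this, just as a rough sketch (the Vec3 type and function names here are mine, not from your code):

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns true if the polygon faces away from the camera and can be skipped.
// 'vertex' is any point on the polygon, 'normal' is its face normal.
bool isBackFacing(const Vec3& cameraPos, const Vec3& vertex, const Vec3& normal)
{
    // Vector from the camera to the polygon.
    Vec3 toPoly = { vertex.x - cameraPos.x,
                    vertex.y - cameraPos.y,
                    vertex.z - cameraPos.z };

    // Positive dot product -> the normal points away from the camera.
    return dot(toPoly, normal) > 0.0f;
}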

glCullFace() (together with glEnable(GL_CULL_FACE)) can be used for backface culling too, but that happens after the vertex data has been transferred to the GPU, so it won't help with AGP bandwidth issues.

Hope this helps,
Richardo
Quote: Original post by RichardoX
Do you use quads or quad/triangle strips? Triangle strips need the least AGP bandwidth, so use them if you aren't already.

He uses display lists, so that won't help much with bandwidth. However, drawing will be much faster using triangle strips, especially if each terrain patch is optimized into a single strip (see the sketch below).
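For example, a single row of a height-mapped grid can go out as one strip, roughly like this (heightAt() is a hypothetical accessor for your heightmap):

#include <GL/gl.h>

// Hypothetical accessor: terrain height at grid cell (x, z).
extern float heightAt(int x, int z);

// Draws one row of the grid as a single triangle strip:
// two vertices per column instead of four per quad.
void drawTerrainRow(int z, int width)
{
    glBegin(GL_TRIANGLE_STRIP);
    for (int x = 0; x < width; ++x)
    {
        glVertex3f((float)x, heightAt(x, z),     (float)z);
        glVertex3f((float)x, heightAt(x, z + 1), (float)(z + 1));
    }
    glEnd();
}

Each extra column then costs two vertices instead of four.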

Quote: Original post by RichardoX
Then try backface culling, so that polygons that face away from the camera don't get drawn. Using vertex dot products you can test the angle between the poly's normal vector and the vector from the camera to the poly. If the dot product is positive (the angle is under 90 degrees), the poly is facing away -> don't draw it.

glCullFace() (together with glEnable(GL_CULL_FACE)) can be used for backface culling too, but that happens after the vertex data has been transferred to the GPU, so it won't help with AGP bandwidth issues.


Backface culling won't help much if the terrain is flat or near-flat; in fact it could hurt performance. (If you're using fragment programs/pixel shaders, though, it will almost certainly give you a speed increase, as the cost of culling is easily outweighed by the cost of unnecessary fragment operations.)

If there are many back-facing triangles it might be worth enabling; either way, you can always just try it.
Can you post a screenshot of your program, or even an executable showing the problem? It doesn't sound like you're doing anything too taxing on the graphics card. However, you could be doing several things wrong in the rest of the program. For example, not using a quadtree could be a problem, if you're frustum-culling each chunk of terrain. Do you remember the last change you made before noticing low FPS?
Well, I stuck loads of trace statements in, timed with GetTickCount(), which is a rudimentary Windows timer.
I found the following:

1. Terrain, even at its slowest, only took 16ms to draw. I could speed this up more, so I will.

2. Bitmapped fonts took a few ms per frame just to write a few things like bullets remaining, direction, position and health, which surprised me.

3. The big slow-ups were in calling the DISPLAY LISTS.
I'm hoping this is because the timer is too crude, although it does all seem to add up: if I total all the times I get about 16 FPS.

E.g. drawing a rather ordinary "machine gun nest" took 16ms if it was in the frustum - but sometimes it took virtually zero ms.
I took the time just before the display list call and just after - sometimes it was 0ms and sometimes 16ms.

I think it's the timer that's wrong, as whatever I measure comes out as either 16ms or nothing at all.

Does anyone know of any better timers?
Uhm, do you have a variable holding the total elapsed time and divide the number of frames by that?
If so, don't include the display list compilation. Compilation is indeed slow, but it should be done only once, at init time, and has nothing to do with the actual FPS during play (see the sketch below).
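In other words, the structure should look roughly like this (terrainList is just an illustrative name):

#include <GL/gl.h>

GLuint terrainList = 0;

// Init time: compile the geometry into a display list ONCE.
// This is the slow part, but it never runs during the game loop.
void buildTerrainList()
{
    terrainList = glGenLists(1);
    glNewList(terrainList, GL_COMPILE);
    // ... all the glBegin/glVertex/glEnd calls for the terrain ...
    glEndList();
}

// Per frame: just replay the list, which is cheap.
void drawTerrain()
{
    glCallList(terrainList);
}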

Anyway, GetTickCount has something like a 50 msec resolution (if I'm not mistaken), maybe a little lower depending on your OS.

Search for QueryPerformanceCounter, or just "performance counter". This is (if I remember correctly) a high-precision timer which is perfect for your purposes.
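Basic usage looks like this:

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second

    QueryPerformanceCounter(&start);
    // ... code to time goes here ...
    QueryPerformanceCounter(&end);

    double ms = (end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}

That should easily resolve sub-millisecond intervals, so you'll see whether the display list calls really take 16ms or whether that was just GetTickCount rounding.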
Are you using glCallList() for drawing bitmap fonts?

In my experience that eats up a lot of performance.
Yes, I'm using glCallList for fonts.
I've never seen another way of doing bitmap fonts though.
I'd say they used a lot of time, but my timer was so crude that it only ever said a task took 16ms or 0ms, regardless of what was going on.

I'm going to have to try a better timer tonight.

In the meantime, I tried the following last night:

1. Reduced texture size (i.e. 256 instead of 512) - little or no difference.
2. Used constants where possible - no difference.
3. Changed doubles to floats - no difference.
4. Checked with Task Manager: it says the game is running in 6MB of memory but at 100% CPU - so something's going on!


cheers

Firstly, as mentioned again and again, a spatial tree should improve performance because of the nature of trees: if a node is not visible, neither are its children, so many nodes can be discarded quickly (a minimal sketch follows).
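Roughly like this for a quadtree (the Node layout and the isVisible() frustum test are placeholders for whatever you end up using):

// Hypothetical quadtree node over the terrain (use 8 children for an octree).
struct Node
{
    float boundsMin[3], boundsMax[3];   // axis-aligned bounding box
    Node* children[4];                  // all null for a leaf
    void  drawGeometry();               // draws this leaf's chunk
};

// Assumed frustum test against an AABB; false means fully outside.
bool isVisible(const float mn[3], const float mx[3]);

void drawTree(Node* node)
{
    if (!node || !isVisible(node->boundsMin, node->boundsMax))
        return;   // this node and every child are culled with one test

    if (!node->children[0])             // leaf: draw its chunk
        node->drawGeometry();
    else
        for (int i = 0; i < 4; ++i)
            drawTree(node->children[i]);
}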
Next, the font rendering: that method is indeed very slow. I've seen it kill performance so many times I've lost count.
I prefer to use a single texture that holds all the glyphs, something like a 256*256 or 512*512 texture, and then use texture offsets for each letter (one textured quad per character).
Here's a little example: http://xout.blackened-interactive.com/dump/new/fonts.jpg
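The drawing side is then just one textured quad per character, something like this (assuming a 16x16 grid of glyphs in ASCII order, with the atlas texture already bound; flip the V coordinate if your image loader stores rows the other way up):

#include <GL/gl.h>

void drawText(const char* text, float x, float y, float size)
{
    const float cell = 1.0f / 16.0f;   // size of one glyph in UV space

    glBegin(GL_QUADS);
    for (; *text; ++text, x += size)
    {
        unsigned char c = (unsigned char)*text;
        float u = (c % 16) * cell;     // column in the atlas
        float v = (c / 16) * cell;     // row in the atlas

        glTexCoord2f(u,        v + cell); glVertex2f(x,        y);
        glTexCoord2f(u + cell, v + cell); glVertex2f(x + size, y);
        glTexCoord2f(u + cell, v);        glVertex2f(x + size, y + size);
        glTexCoord2f(u,        v);        glVertex2f(x,        y + size);
    }
    glEnd();
}

One glBegin/glEnd pair per string instead of one display list call per character.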
Profiling... yeah, I prefer dynamic profiling: an in-game GUI display of time slices really shows how the rendering is going.
Action plan:

1. Octrees.
2. Change fonts.
3. Get a proper timer in use to see where the time is being lost.

Can't do any of these until tonight though!

cheers

This topic is closed to new replies.
