# Fill Rate Testing


## Recommended Posts

Just a quick question: at the moment I'm trying to write a simple benchmarking program for a project. One of the things I'm trying to test is the fill rate. I've put something together, but I have a few queries because it's coming up with some pretty unrealistic results (or maybe it's normal for a graphics card to operate far below its theoretical throughput?). Anyway, here is some simple pseudo-code so you get an idea of what I'm doing; please point out anything that looks wrong.

Each cycle I simply clear the frame and z-buffer, then draw a polygon that covers the whole screen. The following is evaluated whenever the timer reaches one second:

- H = height of the screen
- W = width of the screen
- FPS = frames per second
- P = the number of pixels that passed z-testing, i.e. are visible on-screen (obtained via a D3DQUERYTYPE_OCCLUSION query)
- Fill rate (in MPixels/sec) = ( ( H * W ) + P ) / 1000000

I can't think of what I could be missing, but this code usually kicks out about 200 MPixels/sec on my graphics card (a GeForce 5700; I've checked and its maximum throughput is 1700 MPixels/sec). Any thoughts? :)
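For concreteness, the calculation described in the post can be sketched as a small helper. The function and parameter names are my own, not from the post, and the formula simply mirrors the one above (whether that formula is actually right is what the thread is debating):

```cpp
#include <cstdint>

// Sketch of the fill-rate calculation described above (names are mine).
// width, height  : back-buffer dimensions.
// pixelsPassedZ  : pixel count reported by the D3DQUERYTYPE_OCCLUSION query.
// Assumes both counters cover exactly one second of rendering.
double FillRateMPixels(std::uint32_t width, std::uint32_t height,
                       std::uint64_t pixelsPassedZ)
{
    // ((H * W) + P) / 1,000,000, per the post.
    return (static_cast<double>(width) * height + pixelsPassedZ) / 1.0e6;
}
```

With no occlusion-query pixels at all, a 1280x1024 buffer alone contributes about 1.31 MPixels to the numerator, which hints at how dominant P is in this formula.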

##### Share on other sites
Check out this page. It outlines another equation for calculating fill rate that I think would be more accurate than (H * W) + P.

That would be cool to put the app online, and then have people compare different cards.

##### Share on other sites
Thanks circlesoft, that article was interesting. The equation they use seems very similar to mine, except they apply an approximate 'depth complexity' as a coefficient at the end. From what I can tell, the P in my equation acts in a very similar way, but I will look into it some more.

I have run my program full-screen (1280 x 1024) and it comes up with much more realistic figures, around 450-550 MPixels/sec. Judging by what that article says, it seems it's not unusual for graphics cards to fall short of their maximum throughput in real applications. When I finish my program I will upload it and see what you guys think :)
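The depth-complexity variant described above (the source thread only says the article multiplies by an approximate depth-complexity coefficient, so the exact form here is my reading of that) can be sketched as:

```cpp
#include <cstdint>

// Sketch of a depth-complexity-based fill-rate estimate (my reconstruction
// of the article's approach, as described in the thread):
//   fill rate (MPixels/sec) = W * H * depthComplexity * FPS / 1e6
// depthComplexity is an approximate average overdraw factor, e.g. 2.5 means
// each screen pixel is shaded about 2.5 times per frame on average.
double FillRateWithDepthComplexity(std::uint32_t width, std::uint32_t height,
                                   double depthComplexity, double fps)
{
    return static_cast<double>(width) * height * depthComplexity * fps / 1.0e6;
}
```

The role P plays in the original formula (counting every pixel that passed z-testing) is folded here into the single average coefficient instead.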

##### Share on other sites
Never take the marketing speak as "law" [wink] They often quote a theoretical maximum based on perfect input and perfect internal utilization. Real-world performance, even good real-world performance, won't usually get close to it. The same thing often happens with quoted vertex/triangle throughputs.

One thing to try is to use TL quads (pretransformed, screen-space quads) to deliver textures to the rasterizer; that way you probably don't need to compensate for overdraw and depth complexity. I seem to remember this is how the 3DMark programs judge fill rate.
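A pretransformed ("TL") full-screen quad along those lines might look like the sketch below. In real D3D9 code the vertex format would be declared with D3DFVF_XYZRHW | D3DFVF_TEX1 and drawn as a two-triangle strip; this standalone version just mirrors that layout without pulling in the D3D headers, and the half-pixel offset is the usual D3D9 fix-up for aligning texels with pixels on screen-aligned quads:

```cpp
// Sketch of a pretransformed ("TL") full-screen quad, as suggested above.
// Mirrors the D3DFVF_XYZRHW | D3DFVF_TEX1 layout without the D3D headers.
struct TLVertex
{
    float x, y, z, rhw;  // screen-space position; rhw = 1/w (1.0 for TL verts)
    float u, v;          // texture coordinates
};

// Fill out[4] with a triangle-strip quad covering a width x height back
// buffer. The -0.5f offset aligns texel centres with pixel centres, the
// standard D3D9 fix-up for screen-aligned textured quads.
void BuildFullScreenQuad(TLVertex out[4], float width, float height)
{
    const float l = -0.5f,         t = -0.5f;
    const float r = width - 0.5f,  b = height - 0.5f;
    out[0] = { l, t, 0.0f, 1.0f, 0.0f, 0.0f };  // top-left
    out[1] = { r, t, 0.0f, 1.0f, 1.0f, 0.0f };  // top-right
    out[2] = { l, b, 0.0f, 1.0f, 0.0f, 1.0f };  // bottom-left
    out[3] = { r, b, 0.0f, 1.0f, 1.0f, 1.0f };  // bottom-right
}
```

Because the vertices are already in screen space, the vertex pipeline is effectively bypassed and the measurement isolates the rasterizer, which is the point of the suggestion.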

Also, a lot of GPUs will probably get better performance by using all of their available pixel pipelines. Usually this is of most benefit for multi-texture effects and blending operations. I don't honestly know, but it's possible that for single texturing the driver will duplicate the texture N times to make use of all the pipes, or (worst case) it'll leave idle any pipes you don't explicitly make use of...

Another thing - make sure you "warm up" before taking measurements. Run at least a second's worth of frames through the GPU so that textures and caches are in their optimal states, or you could end up timing how long it takes to move resources around [smile]
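The warm-up idea can be sketched as a tiny measurement harness: render and discard some frames first, then count only the frames that follow. The function and parameter names here are mine, and `frame` stands in for whatever per-frame render call the benchmark uses:

```cpp
#include <cstdint>

// Sketch of the warm-up advice above (names are mine). Runs `frame` for
// warmupFrames iterations without counting them, so texture uploads and
// cache effects don't pollute the measurement, then counts timedFrames
// iterations that the caller would actually time.
template <typename FrameFn>
std::uint64_t MeasureFrames(FrameFn frame,
                            std::uint64_t warmupFrames,
                            std::uint64_t timedFrames)
{
    for (std::uint64_t i = 0; i < warmupFrames; ++i)
        frame();                      // discarded: fills caches, uploads textures

    std::uint64_t counted = 0;
    for (std::uint64_t i = 0; i < timedFrames; ++i)
    {
        frame();                      // only these frames enter the measurement
        ++counted;
    }
    return counted;
}
```

In a real benchmark the caller would read the clock (and the occlusion query) only around the second loop, so the reported fill rate reflects steady-state throughput rather than first-use resource shuffling.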

hth
Jack
