CreateQuery problems/Pixel Fill Rate

Hi. I'm using Visual Studio 2005 (C++) and DirectX 9.0c (June 2006). My main aim is to find the pixel fill rate. I've found this structure:

typedef struct D3DDEVINFO_D3D9BANDWIDTHTIMINGS {
    FLOAT MaxBandwidthUtilized;
    FLOAT FrontEndUploadMemoryUtilizedPercent;
    FLOAT VertexRateUtilizedPercent;
    FLOAT TriangleSetupRateUtilizedPercent;
    FLOAT FillRateUtilizedPercent;
} D3DDEVINFO_D3D9BANDWIDTHTIMINGS, *LPD3DDEVINFO_D3D9BANDWIDTHTIMINGS;

I can use FillRateUtilizedPercent to find the pixel fill rate, BUT to use this structure I need to call CreateQuery:

hr = d3ddev->CreateQuery(D3DQUERYTYPE_BANDWIDTHTIMINGS, NULL);

When I call this function, it keeps returning D3DERR_NOTAVAILABLE instead of D3D_OK. There are various other query types that can be used with CreateQuery and I checked all of them; some work and some don't:

D3DQUERYTYPE_VCACHE            works
D3DQUERYTYPE_RESOURCEMANAGER   NOT WORKING
D3DQUERYTYPE_VERTEXSTATS       NOT WORKING
D3DQUERYTYPE_EVENT             works
D3DQUERYTYPE_OCCLUSION         works
D3DQUERYTYPE_TIMESTAMP         works
D3DQUERYTYPE_TIMESTAMPDISJOINT works
D3DQUERYTYPE_TIMESTAMPFREQ     works
D3DQUERYTYPE_PIPELINETIMINGS   NOT WORKING
D3DQUERYTYPE_INTERFACETIMINGS  NOT WORKING
D3DQUERYTYPE_VERTEXTIMINGS     NOT WORKING
D3DQUERYTYPE_PIXELTIMINGS      NOT WORKING
D3DQUERYTYPE_BANDWIDTHTIMINGS  NOT WORKING
D3DQUERYTYPE_CACHEUTILIZATION  NOT WORKING

Can anyone please tell me why some of these work and some don't, and how to make the others work, or suggest some other way to calculate the pixel fill rate? Thanks.
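For reference, this is roughly how I'm checking which query types are supported (a simplified sketch, not my exact code; passing NULL as the second parameter of CreateQuery just asks whether the query type is available, without creating a query object):

// Probe driver support for each query type: D3D_OK means supported,
// D3DERR_NOTAVAILABLE means the driver does not expose it.
D3DQUERYTYPE typesToCheck[] =
{
    D3DQUERYTYPE_VCACHE,
    D3DQUERYTYPE_OCCLUSION,
    D3DQUERYTYPE_PIXELTIMINGS,
    D3DQUERYTYPE_BANDWIDTHTIMINGS,
    D3DQUERYTYPE_CACHEUTILIZATION
};

for (int i = 0; i < sizeof(typesToCheck) / sizeof(typesToCheck[0]); ++i)
{
    HRESULT hr = d3ddev->CreateQuery(typesToCheck[i], NULL);
    // hr == D3D_OK              -> works
    // hr == D3DERR_NOTAVAILABLE -> not working
}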
Query support is not guaranteed; it is up to the driver which query types it exposes.

You can try to get the fill rate indirectly: if you know how many pixels are drawn, you can measure how long it takes to draw them. But you need to be careful, because the GPU works asynchronously. The SDK contains an article about this problem.
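A rough sketch of that idea, using the timestamp queries your driver reports as working (an untested outline only - you would want proper error checking, and render_scene is just a placeholder for whatever drawing you want to measure):

// Time a block of rendering on the GPU with timestamp queries.
IDirect3DQuery9 *pTsStart = NULL, *pTsEnd = NULL, *pTsFreq = NULL, *pTsDisjoint = NULL;
d3ddev->CreateQuery(D3DQUERYTYPE_TIMESTAMP,         &pTsStart);
d3ddev->CreateQuery(D3DQUERYTYPE_TIMESTAMP,         &pTsEnd);
d3ddev->CreateQuery(D3DQUERYTYPE_TIMESTAMPFREQ,     &pTsFreq);
d3ddev->CreateQuery(D3DQUERYTYPE_TIMESTAMPDISJOINT, &pTsDisjoint);

pTsDisjoint->Issue(D3DISSUE_BEGIN);
pTsStart->Issue(D3DISSUE_END);      // timestamp queries are issued with END only
render_scene();                     // placeholder: draw the pixels you want to measure
pTsEnd->Issue(D3DISSUE_END);
pTsFreq->Issue(D3DISSUE_END);
pTsDisjoint->Issue(D3DISSUE_END);

UINT64 tStart = 0, tEnd = 0, freq = 0;
BOOL   disjoint = TRUE;
// Spinning like this stalls the pipeline - acceptable for a measurement pass.
while (S_FALSE == pTsDisjoint->GetData(&disjoint, sizeof(disjoint), D3DGETDATA_FLUSH));
pTsStart->GetData(&tStart, sizeof(tStart), D3DGETDATA_FLUSH);
pTsEnd->GetData(&tEnd, sizeof(tEnd), D3DGETDATA_FLUSH);
pTsFreq->GetData(&freq, sizeof(freq), D3DGETDATA_FLUSH);

if (!disjoint && freq > 0)
{
    double gpuSeconds = double(tEnd - tStart) / double(freq);
    // pixels drawn / gpuSeconds gives an approximate fill rate
}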

Another way would be to use the performance tools from the GPU manufacturers, like NVShaderPerf or SpeedMonkey.
Thanks for the reply. I found the number of pixels being drawn on the screen and added up their sum over one second.
I've used the following code:

IDirect3DQuery9* pBWQuery = NULL;
DWORD numberOfPixelsDrawn = 0;

d3ddev->CreateQuery(D3DQUERYTYPE_OCCLUSION, &pBWQuery);

pBWQuery->Issue(D3DISSUE_BEGIN);
render_frame(hWnd);
pBWQuery->Issue(D3DISSUE_END);

// spin until the result is available (stalls the CPU)
while (S_FALSE == pBWQuery->GetData(&numberOfPixelsDrawn, sizeof(DWORD), D3DGETDATA_FLUSH));


The number of pixels is returned in numberOfPixelsDrawn, and I'm getting an average pixel fill rate of 2173229.0 per second.
I've got an NVIDIA 7600GS 256MB card, and the scene is a rotating plane (airplane2.x from the DirectX SDK).
Is this the correct way to do it, and is there any way to verify that this is the correct value?
Thanks.
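The per-second accumulation I'm doing looks roughly like this (simplified; GetTickCount and the variable names here are just placeholders for what I actually use):

// Add up the occlusion results of every frame rendered during one second
// and treat the total as an approximate pixels-per-second figure.
static DWORD  startTime        = GetTickCount();
static double pixelsThisSecond = 0.0;

pixelsThisSecond += numberOfPixelsDrawn;     // result of the occlusion query above

DWORD now = GetTickCount();
if (now - startTime >= 1000)
{
    double pixelFillRate = pixelsThisSecond * 1000.0 / (now - startTime);
    // this is the value that averages out to about 2173229.0 on my card
    pixelsThisSecond = 0.0;
    startTime = now;
}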
Unfortunately there are a bunch of problems.

1. The number returned from D3DQUERYTYPE_OCCLUSION differs depending on which vendor made your graphics card, and can even differ across drivers from the same vendor. ATI returns what the D3D spec says, but NVIDIA returns a different value (the one from the OpenGL spec IIRC). Furthermore, different revisions of the drivers return values conforming to different specs. And that's just the big 2 vendors, for some of the others I wouldn't trust the value I got back to make any real sense on its own.

TIP: the best and most compatible way to use D3D occlusion queries is to treat the values returned as relative. When your application starts up, draw a single quad that exactly fills the whole screen, do an occlusion query for that, and store the value (Total). Then when you use occlusion queries normally after that, divide the value returned by the stored total so that you always get "what percentage of this draw is visible?" instead of some arbitrary number.
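In code, that calibration step might look something like this (a rough sketch; DrawFullScreenQuad is a placeholder for however you draw a quad covering the whole render target):

// One-off calibration: see what value the driver reports for a quad that
// exactly fills the screen, then use it as the 100% reference.
IDirect3DQuery9* pCalibQuery = NULL;
DWORD totalScreenValue = 0;

d3ddev->CreateQuery(D3DQUERYTYPE_OCCLUSION, &pCalibQuery);
pCalibQuery->Issue(D3DISSUE_BEGIN);
DrawFullScreenQuad(d3ddev);                  // placeholder
pCalibQuery->Issue(D3DISSUE_END);
while (S_FALSE == pCalibQuery->GetData(&totalScreenValue, sizeof(DWORD), D3DGETDATA_FLUSH));

// Later, for any normal occlusion query result drawValue:
// float visiblePercent = 100.0f * float(drawValue) / float(totalScreenValue);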


2. Actually related to #1 (and part of the reason for the difference between vendors), remember to take anti-aliasing into account - supersampling isn't the same as multisampling, and the number and type of samples have a direct effect on fill rate.
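If nothing else, you can check what your render target is actually set to (a sketch, assuming the back buffer is bound as render target 0):

// Inspect the multisample settings of the current render target.
IDirect3DSurface9* pRT = NULL;
d3ddev->GetRenderTarget(0, &pRT);

D3DSURFACE_DESC desc;
pRT->GetDesc(&desc);
// desc.MultiSampleType and desc.MultiSampleQuality tell you how many samples
// are written per pixel - factor that into any fill rate estimate.
pRT->Release();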


3. Occlusion queries [only reliably] tell you how many pixels are visible on the screen (or at least some value relative to that), not necessarily how many were actually rendered.

Say I render a single 10x10 quad at location 200,200 on the screen, then do an occlusion query which gives me a return value of 1593 (remember, the return value can be complete nonsense taken on its own! ;o).

Now assume Z testing/Z buffering is disabled and I draw a single (silly) mesh that has 100 10x10 quads all at location 200,200 on the screen and do an occlusion query. The value I get back is 1593 again, but the GPU has rasterized 100x more pixels.


4. How much real fill rate is used by a polygon depends on a lot more than how many of its pixels made it to the screen! Enabling more texture channels uses more of it, Gouraud vs flat shading can affect it, whether or not the frame buffer blend is a read-modify-write will affect it, how many pixel shader instructions are used will affect it, etc...


5. If you need accurate numbers (to write a benchmarking program for example), speak to the hardware vendors directly. Many GPUs have hardware counters that would allow them to support things such as D3DQUERYTYPE_BANDWIDTHTIMINGS internally, but the hardware vendors have decided not to expose those to D3D in their drivers (it's a low priority since most people in the world don't need that functionality). For NVIDIA hardware, get NVPerfHUD (that uses the hardware counters). The IHVs may know of a 'special' way to get at the counters directly if it's essential for your app to know.


6. What is it you're really trying to achieve? Do you need an absolutely accurate value for the amount of fillrate consumed by rendering? Or do you just need some sort of relative value ("this mesh used 0.01% of the available fillrate")?

[Edited by - S1CA on March 10, 2007 12:41:16 PM]

Simon O'Connor | Technical Director (Newcastle) Lockwood Publishing | LinkedIn | Personal site

Thanks S1CA for the reply. Can you please clear up the following:

1) You mentioned that NVIDIA drivers return a different value. What do you mean by this? Are they not returning the number of pixels on the screen? (I'm using NVIDIA drivers.)
If they are not returning the number of pixels drawn, then what are they returning?

2) I don't really understand why you said to calculate a relative value (drawing a big quad and dividing the values). Can you please explain?

3) I'm currently working on my final year engineering project, which is a GPU benchmarking tool.
I'll be running the same set of scenes for benchmarking the graphics card (so all that Z-buffering and anti-aliasing is taken care of, though not in the most ethical way ;-)).
I had taken a look at NVPerfHUD, but due to time constraints I left it.

