NEvOl

Measuring rendering time on the graphics card


Recommended Posts

How do I measure the rendering time on the graphics card? Is there an interface that lets me do that?


Check this out. In particular, the D3D11_QUERY_TIMESTAMP_DISJOINT and D3D11_QUERY_TIMESTAMP ones, which allow you to measure GPU time accurately.
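
Roughly, the pattern is: bracket the work with the disjoint query, issue one timestamp query before and one after, and only read the results back once everything has been ended. A minimal sketch (error handling omitted; the variable names are just illustrative, and g_pDevice / g_pDeviceContext / Render are your existing device, immediate context and draw code):

// Sketch: timing Render() with D3D11 timestamp queries.
D3D11_QUERY_DESC disjointDesc = { D3D11_QUERY_TIMESTAMP_DISJOINT, 0 };
D3D11_QUERY_DESC timestampDesc = { D3D11_QUERY_TIMESTAMP, 0 };

ID3D11Query *pDisjoint = NULL, *pStart = NULL, *pEnd = NULL;
g_pDevice->CreateQuery(&disjointDesc, &pDisjoint);
g_pDevice->CreateQuery(&timestampDesc, &pStart);
g_pDevice->CreateQuery(&timestampDesc, &pEnd);

g_pDeviceContext->Begin(pDisjoint);   // the disjoint query brackets both timestamps
g_pDeviceContext->End(pStart);        // timestamp queries use End() only, never Begin()

Render();                             // the work to be timed

g_pDeviceContext->End(pEnd);
g_pDeviceContext->End(pDisjoint);

// GetData returns S_FALSE while the results are still pending on the GPU.
UINT64 tStart = 0, tEnd = 0;
D3D11_QUERY_DATA_TIMESTAMP_DISJOINT disjointData;
while (g_pDeviceContext->GetData(pDisjoint, &disjointData, sizeof(disjointData), 0) == S_FALSE) {}
while (g_pDeviceContext->GetData(pStart, &tStart, sizeof(tStart), 0) == S_FALSE) {}
while (g_pDeviceContext->GetData(pEnd, &tEnd, sizeof(tEnd), 0) == S_FALSE) {}

if (!disjointData.Disjoint)           // discard the result if the counter was disjoint
{
    double gpuMs = 1000.0 * double(tEnd - tStart) / double(disjointData.Frequency);
}

Note that polling right after End() stalls the CPU until the GPU catches up; in a real application you would keep a small ring of queries and read the results a frame or two later.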


// Disjoint query, meant to bracket the timestamps
ID3D11Query *pQuery_TD;
D3D11_QUERY_DESC qd_td;
ZeroMemory(&qd_td, sizeof(qd_td));
qd_td.MiscFlags = 0;
qd_td.Query = D3D11_QUERY_TIMESTAMP_DISJOINT;

UINT64 dataQuery_t = 0;

size_t size = sizeof(long double);

HRESULT hr;

if (FAILED(g_pDevice->CreateQuery(&qd_td, &pQuery_TD)))
{
    MessageBox(NULL, L"CreateQuery_TD - Failed", NULL, NULL);
}

g_pDeviceContext->Begin(pQuery_TD);

// Timestamp query
ID3D11Query *pQuery_T;
D3D11_QUERY_DESC qd_t;
ZeroMemory(&qd_t, sizeof(qd_t));
qd_t.MiscFlags = 0;
qd_t.Query = D3D11_QUERY_TIMESTAMP;

if (FAILED(g_pDevice->CreateQuery(&qd_t, &pQuery_T)))
{
    MessageBox(NULL, L"CreateQuery_T - Failed", NULL, NULL);
}

// Try to read the timestamp back before rendering
while (S_OK == g_pDeviceContext->GetData(pQuery_T, &dataQuery_t, sizeof(UINT64), 0))
{
}

Render();

// Try to read the timestamp back after rendering
while (S_OK == g_pDeviceContext->GetData(pQuery_T, &dataQuery_t, sizeof(UINT64), 0))
{
}

g_pDeviceContext->End(pQuery_TD);

// Try to read the disjoint query result
while (S_OK == g_pDeviceContext->GetData(pQuery_TD, &dataQuery_t, sizeof(UINT64), 0))
{
}
The data is never returned :(

It's a bit involved to get this right (beginner topic?), but you can save yourself the trouble: MJP made a blog post, source included. Works great.

If you want to time compute shaders it's even more delicate. Using the above code without further care, I got either disjoints or silly numbers. MJP once made a note in the DX forum (maybe you can find that post): you need GPU sync points, IIRC e.g. a CPU read-back (I guess CopyResource and Map of something the compute shader spat out) or something similar. That, in turn, will likely bias your timings.
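
For reference, the kind of sync point meant there is roughly this (a sketch; pComputeOutput is a placeholder for whatever buffer your compute shader wrote, and the staging copy plus Map forces the CPU to wait until the GPU has finished):

// Sketch: CPU read-back used as a GPU sync point (pComputeOutput is illustrative).
D3D11_BUFFER_DESC stagingDesc;
pComputeOutput->GetDesc(&stagingDesc);
stagingDesc.Usage = D3D11_USAGE_STAGING;
stagingDesc.BindFlags = 0;
stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
stagingDesc.MiscFlags = 0;

ID3D11Buffer *pStaging = NULL;
g_pDevice->CreateBuffer(&stagingDesc, NULL, &pStaging);

g_pDeviceContext->CopyResource(pStaging, pComputeOutput);   // queued on the GPU

D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(g_pDeviceContext->Map(pStaging, 0, D3D11_MAP_READ, 0, &mapped)))
{
    // Map() blocks until the copy (and all GPU work before it) has completed,
    // so any timestamp queries ended earlier are resolved by this point.
    g_pDeviceContext->Unmap(pStaging, 0);
}
pStaging->Release();

In practice you would create the staging buffer once and reuse it, and keep in mind that this very read-back is part of what skews the measurement.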
