Wuszt

Member
  1. @Adam Miles As I said in the first post, I pasted simplified code. In my real code I do multi-buffering, but it makes things even worse: the GPU time appears to be increased by the CPU time spent up to Swapchain::Present in the next frame. Thanks for the advice, guys. I tried out flushing as Adam suggested and it works (see the sketch after this list)! I'm also going to try an external profiler to compare the results. Right now it looks stable enough. PS: Nice to meet you, @MJP. I'm a big fan of your AA articles; I learned a lot from them!
  2. Thanks for the explanation! What I'm trying to do is a tiny engine for comparing some AA algorithms, so I would appreciate at least stable measurements. I've seen other engines report the time spent on the GPU and the CPU separately per frame and I wanted to do the same, so I based my implementation on this article. I'm surprised that no one has noticed similar issues. Your explanation makes me sad, because I thought the problem was only related to the swapchain's Present and wouldn't matter if I only measured a specific algorithm. But if I understand you correctly, it can...
  3. Hello, my simplified code looks like this:

     // Queries initialization (not shown):
     // disjoint0 is a D3D11_QUERY_TIMESTAMP_DISJOINT query
     // queryStart & queryEnd are D3D11_QUERY_TIMESTAMP queries
     while (true)
     {
         m_d3DeviceContext->Begin(disjoint0);

         // Updating scene
         // Drawing scene

         m_d3DeviceContext->End(queryStart);

         Sleep(10); // First sleep

         m_swapChain->Present(0, 0);

         m_d3DeviceContext->End(queryEnd);
         m_d3DeviceContext->End(disjoint0);

         Sleep(10); // Second sleep

         while (m_d3DeviceContext->GetData(disjoint0, NULL, 0, 0) == S_FALSE);

         D3D11_QUERY_DATA_TIMESTAMP_DISJOINT tsDisjoint;
         m_d3DeviceContext->GetData(disjoint0, &tsDisjoint, sizeof(tsDisjoint), 0);
         if (tsDisjoint.Disjoint)
             continue;

         UINT64 frameStart, frameEnd;
         m_d3DeviceContext->GetData(queryStart, &frameStart, sizeof(UINT64), 0);
         m_d3DeviceContext->GetData(queryEnd, &frameEnd, sizeof(UINT64), 0);

         double time = 1000.0 * (frameEnd - frameStart) / (double)tsDisjoint.Frequency;
         DebugLog::Log(time);
     }

     The first sleep does not affect the GPU time at all (which is desirable, obviously), but the second one does. After a lot of experiments it looks like sleeps before the swapchain Present are ignored by the GPU, but for some reason any sleep between the Present and reading the data from the disjoint query increases the reported GPU time by its value. Moving the queries' End calls around makes no difference. Why do I care about that sleep? In my real code I read the data for frame n-1 while recording frame n (a buffered variant of this is sketched after this list), so my GPU time results for frame n-1 are increased by the time needed to evaluate frame n. Why is this happening, and what can I do to prevent it?
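For reference, a minimal sketch of the flush-based fix mentioned in the first reply, applied to the loop from post 3. It assumes the same members and queries used there (m_d3DeviceContext, m_swapChain, disjoint0, queryStart, queryEnd); the exact placement of the Flush call is an assumption, not something quoted from the thread. The idea is that ID3D11DeviceContext::Flush() submits the commands buffered so far, including the timestamp End() writes, to the GPU right away, so CPU-side delays before the readback no longer leak into the measured interval.

    m_d3DeviceContext->Begin(disjoint0);

    // Updating / drawing the scene goes here.

    m_d3DeviceContext->End(queryStart);

    m_swapChain->Present(0, 0);

    m_d3DeviceContext->End(queryEnd);
    m_d3DeviceContext->End(disjoint0);

    // Submit everything queued so far; otherwise the End() commands can sit in the
    // driver's command buffer until a later submission, and any CPU-side delay before
    // that submission shows up in the reported GPU time.
    m_d3DeviceContext->Flush();

    // The readback (GetData on disjoint0, queryStart and queryEnd) stays exactly as
    // in post 3, with the Sleep calls removed.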
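Post 3 also mentions reading frame n-1's data while recording frame n. A common way to do that without stalling in GetData is to cycle several sets of queries and only read back the oldest one each frame. The sketch below only illustrates that pattern and is not code from the thread: FrameQueries, kBufferedFrames, CreateFrameQueries, MeasureGpuFrame and drawScene are hypothetical names; only the D3D11 calls themselves are real API.

    #include <d3d11.h>

    static const int kBufferedFrames = 3;

    struct FrameQueries
    {
        ID3D11Query* disjoint = nullptr;
        ID3D11Query* start = nullptr;
        ID3D11Query* end = nullptr;
    };

    // Create one set of timestamp queries per in-flight frame.
    void CreateFrameQueries(ID3D11Device* device, FrameQueries queries[kBufferedFrames])
    {
        for (int i = 0; i < kBufferedFrames; ++i)
        {
            D3D11_QUERY_DESC desc = {};
            desc.Query = D3D11_QUERY_TIMESTAMP_DISJOINT;
            device->CreateQuery(&desc, &queries[i].disjoint);
            desc.Query = D3D11_QUERY_TIMESTAMP;
            device->CreateQuery(&desc, &queries[i].start);
            device->CreateQuery(&desc, &queries[i].end);
        }
    }

    // Records this frame's timestamps and returns the GPU time (ms) of the frame
    // submitted kBufferedFrames - 1 frames ago, or a negative value if no result is
    // available yet or the interval was disjoint.
    double MeasureGpuFrame(ID3D11DeviceContext* context,
                           FrameQueries queries[kBufferedFrames],
                           UINT64 frameIndex,
                           void (*drawScene)())
    {
        FrameQueries& current = queries[frameIndex % kBufferedFrames];

        context->Begin(current.disjoint);
        context->End(current.start);

        drawScene(); // the GPU work being measured; Present happens outside this function

        context->End(current.end);
        context->End(current.disjoint);
        context->Flush(); // submit now so CPU-side delays don't leak into the interval

        if (frameIndex < kBufferedFrames)
            return -1.0; // nothing old enough to read back yet

        // Read the oldest slot: it was written kBufferedFrames - 1 frames ago and will
        // be reused next frame, so its results are normally ready and this rarely stalls.
        FrameQueries& oldest = queries[(frameIndex + 1) % kBufferedFrames];

        D3D11_QUERY_DATA_TIMESTAMP_DISJOINT disjointData;
        while (context->GetData(oldest.disjoint, &disjointData, sizeof(disjointData), 0) == S_FALSE) {}
        if (disjointData.Disjoint)
            return -1.0;

        UINT64 startTicks = 0, endTicks = 0;
        context->GetData(oldest.start, &startTicks, sizeof(startTicks), 0);
        context->GetData(oldest.end, &endTicks, sizeof(endTicks), 0);

        return 1000.0 * double(endTicks - startTicks) / double(disjointData.Frequency);
    }

Raising kBufferedFrames trades extra latency on the result for a lower chance that the GetData wait ever stalls.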