sesime

OpenGL How to get instruction execution time in OpenGL?


Recommended Posts

How can I get instruction execution time in OpenGL? I wrote an OpenGL program and want to read back some texture data from video memory to system memory. I've heard this is very costly, so I decided to measure the readback time. I simply used GetTickCount(), like this:

    DWORD timestart = GetTickCount();
    ReadBack();   // OpenGL readback
    DWORD timecost = GetTickCount() - timestart;

But timecost came out as 0, meaning no time was consumed, which puzzled me. I've heard this might be because OpenGL instructions execute asynchronously. Is that true? How can I measure the time cost precisely?
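A minimal sketch of a more reliable CPU-side measurement. Note that GetTickCount() typically has only about 10-16 ms of resolution, so a fast call can easily read as 0 ms; QueryPerformanceCounter() is much finer-grained, and a glFinish() before starting the timer keeps previously queued GL work out of the measurement. ReadBack() here is just a stand-in for whatever readback call you use, not a real OpenGL function:

    #include <windows.h>
    #include <GL/gl.h>
    #include <stdio.h>

    void ReadBack(void);   // stand-in for your readback (e.g. glReadPixels)

    void TimeReadBack(void)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        glFinish();                        // finish previously queued GL work
        QueryPerformanceCounter(&t0);

        ReadBack();
        glFinish();                        // readback is synchronous, but this
                                           // guards against driver buffering

        QueryPerformanceCounter(&t1);
        printf("readback took %.3f ms\n",
               1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                      / (double)freq.QuadPart);
    }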

  1. Read this.


  2. Download, install and use a profiler, which is a program designed for measuring how much time your code takes.

That's not really a solution to his problem. You can't profile graphics API calls the same way you would profile a program running on the CPU. GPU calls are highly asynchronous and often exhibit highly unintuitive performance behaviour.

Profiling GPU code is not trivial at all. You have to use special-purpose profilers, such as NVPerfKit. And even with these tools, extracting and interpreting the profiling information with respect to, e.g., shader code can be very difficult due to the massively parallel nature of a GPU.
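For what it's worth, on drivers that expose the ARB_timer_query extension (core in OpenGL 3.3) you can ask the GPU itself how long a span of commands took, which sidesteps the asynchrony problem entirely. A minimal sketch, assuming the query entry points have been loaded through something like GLEW and that ReadBack() stands in for the actual readback call:

    #include <GL/glew.h>   // assumed extension loader
    #include <stdio.h>

    void ReadBack(void);   // stand-in for the actual readback call

    void GpuTimeReadBack(void)
    {
        GLuint query;
        glGenQueries(1, &query);

        glBeginQuery(GL_TIME_ELAPSED, query);
        ReadBack();
        glEndQuery(GL_TIME_ELAPSED);

        // Blocks until the GPU has finished the timed span, then
        // returns the elapsed time in nanoseconds.
        GLuint64 ns = 0;
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);
        printf("GPU time: %.3f ms\n", ns / 1.0e6);

        glDeleteQueries(1, &query);
    }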

A simple approach that can give more or less adequate results is to measure over a longer interval, both with and without the ReadBack() call, then take the difference and divide by the number of frames, rather than timing a single call.
This will ideally average out most of the asynchronous behaviour of the GPU calls and give you a rough estimate of the cost, but it's far from perfect.
Even averaged over a longer time, small variations in cache hits, task scheduling, etc. can make it hard to see any difference if the call is quite fast.
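A sketch of that averaging approach, assuming a render loop you control (RenderFrame() here is hypothetical): time a large number of frames with the readback enabled, time the same number without it, and subtract the per-frame averages.

    #include <windows.h>

    void RenderFrame(int doReadBack);   // hypothetical per-frame function

    // Average frame time in milliseconds over 'frames' iterations.
    double AverageFrameMs(int doReadBack, int frames)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);

        for (int i = 0; i < frames; ++i)
            RenderFrame(doReadBack);

        QueryPerformanceCounter(&t1);
        return 1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                      / (double)freq.QuadPart / (double)frames;
    }

    // Estimated cost of the readback per frame:
    //   double cost = AverageFrameMs(1, 1000) - AverageFrameMs(0, 1000);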

I don't know what the OP means by readback: glReadPixels, glGetTexImage, or something else.

Either call will cost some milliseconds and is measurable. It is also SYNCHRONOUS, which means that when the call returns, your buffer is filled.
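For example, a sketch of timing a glReadPixels() call directly (the framebuffer size and format here are placeholders):

    #include <windows.h>
    #include <GL/gl.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Time a synchronous glReadPixels readback of a w x h RGBA region.
    void TimeReadPixels(int w, int h)
    {
        unsigned char *pixels = (unsigned char *)malloc((size_t)w * h * 4);

        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        glFinish();                      // exclude previously queued work
        QueryPerformanceCounter(&t0);

        // Synchronous: when this returns, 'pixels' is filled.
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        QueryPerformanceCounter(&t1);
        printf("glReadPixels: %.3f ms\n",
               1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                      / (double)freq.QuadPart);
        free(pixels);
    }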
