Hi,
I'm trying to make a simple benchmark application in which I can measure the time needed to transfer data to and from the GPU. In my benchmark application I use the following code to send the data:
timer.startTimer();
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, texSize, texSize,
                GL_RGBA, GL_FLOAT, data);
timer.stopTimer();
For reading the data back I use:
timer.startTimer();
glReadPixels(0, 0, texSize, texSize, GL_RGBA, GL_FLOAT, result);
timer.stopTimer();
I've placed this code in a loop that iterates 10 times, so that I can calculate an average afterwards. Now, the strange part is that during the first iteration both sending and reading back take much longer than during the remaining iterations. I'm wondering what causes this. Does it have something to do with caching, maybe? If so, how can I prevent it so that I measure the real transfer times?
I hope someone can clear this up for me. Thanks in advance.