So I made my own little way of watching the render speed and game speed for my game, but I am not sure whether it's accurate or the best option.
Here is my code:
GameLoop()
{
    double dTickTime      = glfwGetTime();
    double dFrameTickTime = glfwGetTime();
    double dFrame  = 0;
    double dSecond = 1;
    double dTicks  = 60;

    while( m_bRunning && glfwGetWindowParam( GLFW_OPENED ) )
    {
        //Update scene
        if( ( glfwGetTime() - dTickTime ) >= ( dSecond / dTicks ) )
        {
            if( dFrame < dTicks )
            {
                ++dFrame;
            }
            else
            {
                m_dGameSpeed   = dTicks / ( glfwGetTime() - dFrameTickTime );
                dFrame         = 0;
                dFrameTickTime = glfwGetTime();
            }

            if( m_vStepFunc )
            {
                m_vStepFunc( dTickTime );
            }
            dTickTime = glfwGetTime();
        }

        //Render scene
    }
}
I use two timers. One times each individual tick (1/60th of a second), and the other captures the average game speed over a period of one second. m_vStepFunc( double dTickStartTime ) is a little hook that is called every tick. Right now it doesn't do much other than output the render speed and game speed.
The rendering runs as fast as possible and is not limited to the tick speed.
I am having a few issues. The first is that with only one instance being rendered (4 vertices and a texture), my game speed falls about 2-3 ticks short of my 60-per-second target. That isn't a big issue at the moment, but it may matter later. I think it's just due to inaccuracy in GLFW's timer.
My second issue is that once I bump the instance count up to 10k (40k total vertices and a shared texture), my game speed and render speed drop significantly. Game speed drops from 57-58 ticks per second to 42-43, and render speed drops even harder, from 6k-7k FPS to 125-130 FPS. Here is my render code:
double dStartTime = glfwGetTime();

glClear( GL_COLOR_BUFFER_BIT );
for( unsigned int i = 0; i < m_vInstanceList.size(); ++i )
{
    glPushMatrix();
    glBindBuffer( GL_ARRAY_BUFFER, m_vObjectList.at( m_vInstanceList.at( i )->m_uiObject )->m_uiTextureBuffer[0] );
    glEnableClientState( GL_VERTEX_ARRAY );
    glVertexPointer( 3, GL_FLOAT, 0, (void*)0 );
    glActiveTexture( m_vObjectList.at( m_vInstanceList.at( i )->m_uiObject )->m_uiTexture );
    glBindBuffer( GL_ARRAY_BUFFER, m_vObjectList.at( m_vInstanceList.at( i )->m_uiObject )->m_uiTextureBuffer[1] );
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );
    glTexCoordPointer( 2, GL_FLOAT, 0, (void*)0 );
    glTranslated( m_vInstanceList.at( i )->m_iPosition[0], m_vInstanceList.at( i )->m_iPosition[1], 0 );
    glDrawArrays( GL_QUADS, 0, 4 );
    glDisableClientState( GL_VERTEX_ARRAY );
    glDisableClientState( GL_TEXTURE_COORD_ARRAY );
    glPopMatrix();
}
glfwSwapBuffers();

double dEndTime = glfwGetTime();
m_dFPS = ( 1 / ( dEndTime - dStartTime ) );
I'd imagine that on my computer, which is considerably powerful, I should be able to push 200k+ vertices minimum at 60 FPS, especially considering I am only handling the rendering of instances and nothing else at all.