Managing Game Time (Ticks) and Speed Issues

Started by
9 comments, last by Strewya 10 years, 8 months ago

So I made my own little way of watching the render speed and game speed for my game, but I am not sure if it's accurate or the best option.

Here is my code:


void GameLoop()
{
    double dTickTime = glfwGetTime();      // start of the most recent tick
    double dFrameTickTime = glfwGetTime(); // start of the current one-second window
    double dFrame = 0;                     // ticks counted in this window
    double dSecond = 1;
    double dTicks = 60;                    // target ticks per second

    while( m_bRunning && glfwGetWindowParam( GLFW_OPENED ) )
    {
        //Update scene
        if( ( glfwGetTime() - dTickTime ) >= ( dSecond / dTicks ) )
        {
            if( dFrame < dTicks )
            {
                ++dFrame;
            }
            else
            {
                m_dGameSpeed = dTicks / ( glfwGetTime() - dFrameTickTime );
                dFrame = 0;
                dFrameTickTime = glfwGetTime();
            }

            if( m_vStepFunc )
            {
                m_vStepFunc( dTickTime );
            }

            dTickTime = glfwGetTime();
        }

        //Render scene
    }
}

I use two timers. One is for each individual tick (1/60th of a second), and one captures the average game speed over a period of one second. m_vStepFunc( double dTickStartTime ) is a little hook that is called every tick. Right now it doesn't do much other than output the render speed and game speed.
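In outline, the tick check behaves like this headless sketch (FixedStepSim and advanceClock are made-up names, and a fake clock stands in for glfwGetTime()):

```cpp
#include <cassert>

// Headless sketch of the tick check above. FixedStepSim and advanceClock are
// illustrative names; "now" is a fake clock standing in for glfwGetTime().
struct FixedStepSim {
    double now = 0.0;        // fake clock, in seconds
    double lastTick = 0.0;   // plays the role of dTickTime
    double tickLength = 1.0 / 60.0;
    int tickCount = 0;

    // One pass of the loop body: advance the clock, run an update if a full
    // tick has elapsed, then reset the tick timer to "now" (as the code does).
    void advanceClock(double dt) {
        now += dt;
        if (now - lastTick >= tickLength) {
            ++tickCount;       // m_vStepFunc would be called here
            lastTick = now;    // overshoot past the boundary is discarded
        }
    }
};
```

Running this for one simulated second of 1 ms loop iterations produces about 58 ticks rather than 60: because lastTick is reset to "now", the overshoot past each 1/60 s boundary is thrown away and the error accumulates. Accumulating instead (lastTick += tickLength) keeps the long-run rate at 60, which may account for part of a 2-3 tick shortfall.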

The rendering runs as fast as possible and is not limited to the tick speed.

I am having a few issues. The first is that with only one instance being rendered (4 vertices and a texture), my game speed falls about 2-3 ticks short of the 60 per second I'm aiming for. That isn't a big issue at the moment, but it may matter later. I think it's just due to inaccuracy in GLFW's timer.

My second issue is that once I bump the instance count up to 10k (40k total vertices and a shared texture), my game speed and render speed drop significantly. Game speed drops from 57-58 ticks per second to 42-43 ticks per second, and my render speed drops even worse, from 6k-7k FPS to 125-130 FPS. Here is my render code:


// Runs once per frame; the bind/draw section below sits inside a loop over
// m_vInstanceList (index i).
double dStartTime = glfwGetTime();

glClear( GL_COLOR_BUFFER_BIT );

glPushMatrix();

glBindBuffer( GL_ARRAY_BUFFER, m_vObjectList.at( m_vInstanceList.at( i )->m_uiObject )->m_uiTextureBuffer[0] );
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer( 3, GL_FLOAT, 0, (void*)0 );

glBindTexture( GL_TEXTURE_2D, m_vObjectList.at( m_vInstanceList.at( i )->m_uiObject )->m_uiTexture );
glBindBuffer( GL_ARRAY_BUFFER, m_vObjectList.at( m_vInstanceList.at( i )->m_uiObject )->m_uiTextureBuffer[1] );
glEnableClientState( GL_TEXTURE_COORD_ARRAY );
glTexCoordPointer( 2, GL_FLOAT, 0, (void*)0 );

glTranslated( m_vInstanceList.at( i )->m_iPosition[0], m_vInstanceList.at( i )->m_iPosition[1], 0 );

glDrawArrays( GL_QUADS, 0, 4 );

glDisableClientState( GL_VERTEX_ARRAY );
glDisableClientState( GL_TEXTURE_COORD_ARRAY );

glPopMatrix();

glfwSwapBuffers();

double dEndTime = glfwGetTime();
m_dFPS = ( 1 / ( dEndTime - dStartTime ) );

I'd imagine that on my computer, which is fairly powerful, I should be able to get at least 200k+ vertices at 60 FPS, especially considering I am only handling the rendering of instances and nothing else at all.


Your code is a little strange, but it seems to work; nothing jumps out as wrong.

Off-topic note: it is a little strange to render as frequently as possible but lock updating at 60 Hz. If nothing moved, why render again? The generated image will be the same. Don't get me wrong, games do this all the time, but usually in a slightly more complicated way. For example: if updating happens at 30 Hz and rendering happens at 60 Hz, then the renderer can linearly interpolate between (properly buffered) frame data generated by the updater to give the illusion of the entire game running at 60 fps.
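A minimal sketch of that interpolation (State and the field names are illustrative; alpha is how far the renderer is between two update ticks):

```cpp
#include <cassert>

// Render-side interpolation between two buffered update states. State and
// its fields are illustrative names; alpha in [0, 1] says how far the
// renderer currently is between the previous and the latest update tick.
struct State { double x, y; };

State interpolate(const State& prev, const State& curr, double alpha) {
    return { prev.x + (curr.x - prev.x) * alpha,
             prev.y + (curr.y - prev.y) * alpha };
}

// Each render frame: alpha = (renderTime - lastUpdateTime) / updateInterval;
// then draw everything at interpolate(prevState, currState, alpha).
```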

Back to the problem at hand...

My second issue is that once I bump the instance count up to 10k (40k total vertices and a shared texture), my game speed and render speed drop significantly. Game speed drops from 57-58 ticks per second to 42-43 ticks per second, and my render speed drops even worse, from 6k-7k FPS to 125-130 FPS.

This is a valid observation (something is amiss), but using fps to measure relative changes in performance can be misleading. A change from 57-58 fps to 42-43 fps means each frame is taking approximately 6.1 milliseconds longer to finish. A change from 6k-7k fps to 125-130 fps means each frame is taking approximately 7.6 milliseconds longer to finish. Not as drastic as it first seems. Think of fps measurements as velocities: they are inversely proportional to time. If you have a race between two cars, it doesn't make a lot of sense to say car A finished the race at 60 mph and car B finished at 75 mph. What you really want to know is the time it took each car to finish the race.
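To make the arithmetic concrete (frameTimeDeltaMs is just an illustrative helper):

```cpp
#include <cassert>
#include <cmath>

// Frame-time change implied by an fps change: frame time is 1/fps, so the
// difference is 1/fpsAfter - 1/fpsBefore, converted here to milliseconds.
double frameTimeDeltaMs(double fpsBefore, double fpsAfter) {
    return (1.0 / fpsAfter - 1.0 / fpsBefore) * 1000.0;
}
```

Using the midpoints of the ranges above, frameTimeDeltaMs(57.5, 42.5) comes out to about 6.1 ms and frameTimeDeltaMs(6500, 127.5) to about 7.7 ms, in line with the figures quoted.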

I'd imagine that on my computer, which is fairly powerful, I should be able to get at least 200k+ vertices at 60 FPS, especially considering I am only handling the rendering of instances and nothing else at all.

Agreed, but I'm not sure why your performance is tanking. Hopefully someone who knows a bit more about OpenGL can point out a blatant performance problem.

Sorry if my code is a bit wonky. I am taking my own approach to how to do things without stripping code from tutorials. Also, as a beginner I find the interpolation a bit confusing, but I have read about it, and I may do some homework on it and hopefully implement it in the future.

I've managed to get a small boost to game speed and render speed by this:


// Before was just glClear( GL_COLOR_BUFFER_BIT );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT );

I believe I am just enabling/disabling things in OpenGL incorrectly. I'm also looking for silly mistakes, such as accidentally calling taxing OpenGL operations elsewhere in the rendering, but I haven't stumbled upon anything thus far.

You're not going to get a whole bunch of vertices drawn if you only draw a few with every draw call.

Well, I've managed to make some improvement: about an 8 millisecond improvement running with 50k instances. My revised code:


DrawingFunction()
{
    size_t iInstanceListSize = m_vInstanceList.size();

    glEnableClientState( GL_VERTEX_ARRAY );
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );

    for( size_t i = 0; i < iInstanceListSize; ++i )
    {
        Instance* pInstance = m_vInstanceList[i];
        Object* pObject = m_vObjectList[pInstance->m_uiObject];

        // Start of the block that was repeated 50k times in the one-instance test
        glPushMatrix();

        glBindBuffer( GL_ARRAY_BUFFER, pObject->m_uiTextureBuffer[0] );
        glVertexPointer( 3, GL_FLOAT, 0, (void*)0 );

        glBindTexture( GL_TEXTURE_2D, pObject->m_uiTexture );
        glBindBuffer( GL_ARRAY_BUFFER, pObject->m_uiTextureBuffer[1] );
        glTexCoordPointer( 2, GL_FLOAT, 0, (void*)0 );

        glTranslated( pInstance->m_iPosition[0], pInstance->m_iPosition[1], 0 );

        glDrawArrays( GL_QUADS, 0, 4 );

        glPopMatrix();
        // End of repeated block

    }

    glDisableClientState( GL_VERTEX_ARRAY );
    glDisableClientState( GL_TEXTURE_COORD_ARRAY );
}

I did a test where, instead of drawing 50k instances, I drew one instance 50k times, which improved render speed (27-29 fps to 36-38 fps); I marked the repeated block in the code above. Even just the pure OpenGL functions are slow, and I'm a bit at a loss.

Ok, you moved some stuff out of the loop and it saved some time. What else can move out?

Does everything need its own matrix? Any objects you can pack together into the same buffer will save you time. What do you plan for these instances to be, anyway? Tiles?
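For instance, all the quads can be baked into one array and drawn with a single call, instead of one glDrawArrays plus a matrix push/pop per instance. A rough sketch, with a hypothetical Sprite type standing in for your Instance/Object pair and positions baked into the vertices instead of glTranslated:

```cpp
#include <vector>
#include <cassert>

// Hypothetical sprite: centre position plus half-size. Your Instance/Object
// types would map onto something like this.
struct Sprite { float x, y, half; };

// Bake every quad's position directly into one big vertex array, so the whole
// batch can go out in a single draw call instead of one call per instance.
std::vector<float> buildBatchedQuads(const std::vector<Sprite>& sprites) {
    std::vector<float> verts;
    verts.reserve(sprites.size() * 4 * 3);   // 4 corners, xyz each
    for (const Sprite& s : sprites) {
        const float corners[4][2] = {
            { s.x - s.half, s.y - s.half }, { s.x + s.half, s.y - s.half },
            { s.x + s.half, s.y + s.half }, { s.x - s.half, s.y + s.half } };
        for (const auto& c : corners) {
            verts.push_back(c[0]);
            verts.push_back(c[1]);
            verts.push_back(0.0f);           // z
        }
    }
    return verts;
}
```

After uploading the array with glBufferData, the whole batch draws as one glDrawArrays( GL_QUADS, 0, 4 * spriteCount ). The trade-off is re-filling the buffer when sprites move, but one upload plus one draw call is generally far cheaper than tens of thousands of separate calls.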

It's a 2D game with an object-and-instance kind of system. You have objects which define things such as players, enemies, guns, bullets, etc., but they are not part of the game. You have to create instances of an object to place it into the game and have it drawn or used. Each instance is unique, with its own coordinates and other things still to be implemented, such as animation.

I'm not really sure if I'm taking the right approach.

It's a 2D game with an object-and-instance kind of system. You have objects which define things such as players, enemies, guns, bullets, etc., but they are not part of the game. You have to create instances of an object to place it into the game and have it drawn or used. Each instance is unique, with its own coordinates and other things still to be implemented, such as animation.

I'm not really sure if I'm taking the right approach.

Well, 50,000 sounds like a heck of a lot of things to have on the screen at the same time. I'm sure it is possible, but it sounds like it'll take some work. Honestly, I'd delay that goal until I'd fleshed out more of the rest of my game.

You may want to search for OpenGL instancing and see what's out there. One caveat: it's a relatively recent feature. I see you're using the matrix stack and quads; I don't think either is really used with modern OpenGL, but I'm no expert. What version of OpenGL do you use?
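For reference, instanced drawing (GL 3.3+) uploads one quad plus a per-instance offset array, then renders everything with a single glDrawArraysInstanced call. A sketch of building the offset data; the GL calls are left as comments since they need a live context, and the data layout here is only an assumption:

```cpp
#include <vector>
#include <utility>
#include <cassert>

// Flatten per-instance 2D positions into one float array. With instancing you
// upload this buffer once, mark its attribute with
//     glVertexAttribDivisor(offsetLocation, 1);   // advance once per instance
// and then replace the 50k individual draws with a single
//     glDrawArraysInstanced(GL_TRIANGLES, 0, 6, instanceCount);
std::vector<float> buildInstanceOffsets(
        const std::vector<std::pair<float, float>>& positions) {
    std::vector<float> offsets;
    offsets.reserve(positions.size() * 2);
    for (const auto& p : positions) {
        offsets.push_back(p.first);    // x offset for this instance
        offsets.push_back(p.second);   // y offset for this instance
    }
    return offsets;
}
```

In the vertex shader, the per-instance offset attribute is simply added to the quad's base vertex position, so no matrix stack or glTranslated is needed at all.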

I did a test where, instead of drawing 50k instances, I drew one instance 50k times, which improved render speed (27-29 fps to 36-38 fps). Even just the pure OpenGL functions are slow, and I'm a bit at a loss.

50k instances?

50,000 unique objects on screen at once, each with its own mesh and texture: that is quite a lot.

Are your objects REALLY tiny? Or do you just have an incredibly high amount of overdraw? Or are you drawing things that are off the screen?

About the only thing that should render that many objects would be particle systems, and those should not be handled as regular game objects.

I think your next big improvement will be an aggressive culling algorithm.
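Even a simple rectangle-overlap test goes a long way in 2D (Rect and overlaps are illustrative names):

```cpp
#include <cassert>

// Minimal rectangle-vs-rectangle overlap test for screen culling.
// x, y is the lower-left corner; w, h the extents. Illustrative names.
struct Rect { float x, y, w, h; };

bool overlaps(const Rect& a, const Rect& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}
```

Before drawing, skip any instance whose bounding Rect fails overlaps() against the screen rectangle; combined with batching, only the visible quads ever get appended to the vertex buffer.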

I'm stress testing.

All instances are from one object, and thus all instances share one texture. It's a 2D image of a 64x64 butterfly rendered to a quad, so four vertices per instance. I haven't implemented optimizations such as not drawing what is off screen, but every instance is currently on the screen (none are outside and would need this optimization).

You are right that it is very unlikely there will be 50k unique instances on the screen at once. I'd be amazed if someone could manage even 1k (right now that runs at about 57 ticks per second and renders at 1500-1600 frames per second) without a huge mess. As I said, I am stress testing and making sure this is all optimized as well as possible.

It might be silly, but I just want to ensure that I have everything optimized as well as it can be, in case at some point I do in fact need 10k+ instances on screen.

