
Why is the performance of a simple 2D program so low?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

6 replies to this topic

#1 Cocular   Members   -  Reputation: 101


Posted 21 March 2012 - 09:25 AM

I'm a complete beginner to OpenGL, and I'm learning it from the online resource http://www.arcsynthesis.org/gltut/.

The example is a simple 2D triangle spinning around a point. I attach its main code below.

My problem is that such a simple program runs slowly. It does not run very smoothly, and I can see a few obvious lags. The lag happens much more often when I enable v-sync (100% CPU usage on a laptop is annoying, so I'd rather keep it on). I also tried sleeping for a while before calling glutPostRedisplay, but that seems even laggier than v-sync.

My question is: how can I make this program run more smoothly? I never feel this kind of lag when I play games written in OpenGL. Also, is there a convenient way to limit the FPS?

#define ARRAY_COUNT( array ) (sizeof( array ) / (sizeof( array[0] ) * (sizeof( array ) != sizeof(void*) || sizeof( array[0] ) <= sizeof(void*))))

GLuint theProgram;

void InitializeProgram()
{
    std::vector<GLuint> shaderList;

    shaderList.push_back(Framework::LoadShader(GL_VERTEX_SHADER, "standard.vert"));
    shaderList.push_back(Framework::LoadShader(GL_FRAGMENT_SHADER, "standard.frag"));

    theProgram = Framework::CreateProgram(shaderList);
}

const float vertexPositions[] = {
    0.25f, 0.25f, 0.0f, 1.0f,
    0.25f, -0.25f, 0.0f, 1.0f,
    -0.25f, -0.25f, 0.0f, 1.0f,
};

GLuint positionBufferObject;
GLuint vao;


void InitializeVertexBuffer()
{
    glGenBuffers(1, &positionBufferObject);

    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexPositions), vertexPositions, GL_STREAM_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

//Called after the window and OpenGL are initialized. Called exactly once, before the main loop.
void init()
{
    InitializeProgram();
    InitializeVertexBuffer();

    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
}


void ComputePositionOffsets(float &fXOffset, float &fYOffset)
{
    const float fLoopDuration = 1.0f;
    const float fScale = 3.14159f * 2.0f / fLoopDuration;

    float fElapsedTime = glutGet(GLUT_ELAPSED_TIME) / 1000.0f;

    float fCurrTimeThroughLoop = fmodf(fElapsedTime, fLoopDuration);

    fXOffset = cosf(fCurrTimeThroughLoop * fScale) * 0.5f;
    fYOffset = sinf(fCurrTimeThroughLoop * fScale) * 0.5f;
}

void AdjustVertexData(float fXOffset, float fYOffset)
{
    std::vector<float> fNewData(ARRAY_COUNT(vertexPositions));
    memcpy(&fNewData[0], vertexPositions, sizeof(vertexPositions));

    for(int iVertex = 0; iVertex < ARRAY_COUNT(vertexPositions); iVertex += 4)
    {
        fNewData[iVertex] += fXOffset;
        fNewData[iVertex + 1] += fYOffset;
    }

    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertexPositions), &fNewData[0]);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

void display()
{
    float fXOffset = 0.0f, fYOffset = 0.0f;
    ComputePositionOffsets(fXOffset, fYOffset);
    AdjustVertexData(fXOffset, fYOffset);

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(theProgram);

    glBindBuffer(GL_ARRAY_BUFFER, positionBufferObject);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);

    glDrawArrays(GL_TRIANGLES, 0, 3);

    glDisableVertexAttribArray(0);
    glUseProgram(0);

    glutSwapBuffers();
    glutPostRedisplay();
}

Standard.vert
#version 330

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = position;
}

Standard.frag
#version 330

out vec4 outputColor;

void main()
{
    outputColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
}


BTW, is http://www.arcsynthesis.org/gltut/ a good choice for a beginner learning OpenGL?


#2 MarkS   Prime Members   -  Reputation: 882


Posted 22 March 2012 - 05:09 AM

You are doing all of your calculations on the CPU (slow), using std::vector (also somewhat slow) for a static array (unnecessary) and then moving your vertex data across your system bus (from the CPU to the GPU) each frame (very slow).

If your array isn't dynamic, skip std::vector and use a standard array. Send your vertex data to the GPU once, then either transform it in the vertex shader or use glRotate each frame.
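To sketch what that suggestion looks like against the tutorial's GLSL 3.3 shaders (the uniform name `offset` is my own, not from the tutorial), the per-frame offset moves into the vertex shader:

```glsl
#version 330

layout(location = 0) in vec4 position;

// The per-frame offset now arrives as a uniform instead of being
// rewritten into the vertex buffer on the CPU every frame.
uniform vec2 offset;

void main()
{
    gl_Position = position + vec4(offset, 0.0, 0.0);
}
```

On the C++ side, AdjustVertexData and its glBufferSubData upload go away entirely: look up the location once with glGetUniformLocation(theProgram, "offset"), then call glUniform2f(offsetLocation, fXOffset, fYOffset) after glUseProgram in display(). The buffer can then be created with GL_STATIC_DRAW, since its contents never change.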

#3 mhagain   Crossbones+   -  Reputation: 7812


Posted 22 March 2012 - 08:52 AM

For 3 verts in a single triangle, doing things on the CPU should be effectively "free" - or at the very least have overhead so low that it's not even measurable. For sure you should move them to the GPU (put them in your vertex shader) but in this case they're not going to be the cause of the symptoms you describe.

Can you post your main function? And in particular elaborate a little on your sleep call - sleep calls are a very bad way of limiting framerate and are not suitable for this kind of use case. They are suitable for reducing battery usage, but you need a fairly light touch with them.
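For reference, one way to pace frames without an arbitrary sleep is to measure how much of the frame budget remains and wait only for that. This is a generic C++11 sketch (the class name and target rate are made up for illustration; in a GLUT program, glutTimerFunc is the more idiomatic tool):

```cpp
#include <chrono>
#include <thread>

// Minimal frame pacer: sleeps only for whatever is left of the frame
// budget, rather than a fixed, guessed amount.
class FramePacer {
public:
    using Clock = std::chrono::steady_clock;

    explicit FramePacer(double targetFps)
        : frameBudget(1.0 / targetFps), lastFrame(Clock::now()) {}

    // Call once per frame, after swapping buffers.
    void pace() {
        std::chrono::duration<double> elapsed = Clock::now() - lastFrame;
        double remaining = frameBudget - elapsed.count();
        if (remaining > 0.0)
            std::this_thread::sleep_for(
                std::chrono::duration<double>(remaining));
        lastFrame = Clock::now();
    }

private:
    double frameBudget;          // seconds per frame
    Clock::time_point lastFrame; // end of the previous frame
};
```

With something like this, display() would end with glutSwapBuffers(), then pace(), then glutPostRedisplay(). Note that sleep granularity on many systems is coarse (often 1 to 15 ms), which is one reason naive sleep-based limiters still stutter.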

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#4 MarkS   Prime Members   -  Reputation: 882


Posted 22 March 2012 - 04:47 PM

For 3 verts in a single triangle, doing things on the CPU should be effectively "free" - or at the very least have overhead so low that it's not even measurable. For sure you should move them to the GPU (put them in your vertex shader) but in this case they're not going to be the cause of the symptoms you describe.


I agree. Since he is learning, I felt it was best to steer him away from this practice. It may be free now, but it won't be with a model that has far more vertices.

I didn't look at the book he linked to, but if that code is straight from that book, it should raise flags.

#5 mhagain   Crossbones+   -  Reputation: 7812


Posted 22 March 2012 - 07:34 PM

I agree. Since he is learning I felt it was best to steer him away from this practice. It may be free now, but it wont be with a much higher vertex model.


Good point, I definitely agree there.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#6 SimonForsman   Crossbones+   -  Reputation: 6036


Posted 23 March 2012 - 05:06 PM


For 3 verts in a single triangle, doing things on the CPU should be effectively "free" - or at the very least have overhead so low that it's not even measurable. For sure you should move them to the GPU (put them in your vertex shader) but in this case they're not going to be the cause of the symptoms you describe.


I agree. Since he is learning I felt it was best to steer him away from this practice. It may be free now, but it wont be with a much higher vertex model.

I didn't look at the book he linked to, but if that code is straight from that book, it should raise flags.


It is straight from that site, and it goes over the proper way to do things later on. I think it uses that method to keep the vertex shader simple for the initial example, to avoid pushing too much information on the reader at once. I haven't actually looked through it all properly, though, so I can't vouch for its quality, and it does seem a bit odd to start off by showing a poor way to do things.
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

#7 Cocular   Members   -  Reputation: 101


Posted 24 March 2012 - 11:13 AM

Thanks, everyone. :)

I understand that moving the triangle directly on the CPU is not a good approach. But the strange thing is that I can't reproduce the performance problem on the same computer. I can only conclude that it was my imagination. (Maybe my computer was in power-saving mode?)

Another problem: when I run two OpenGL applications with v-sync enabled, the FPS drops to 30 and the applications become a little laggy. Is there a solution to this?



