Vertex buffer object issue


Hello, I'm having an issue with a VBO and can't figure out how to solve it; I couldn't find an answer on Google either.

I wrote the VBO code and applied it, but nothing shows up. Even when I copy example code from Google, nothing renders. No errors, nothing. Here's the render code:


static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glPushMatrix();

    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, uiVBO[0]);
    glVertexAttribPointer(
        0,                  // The attribute we want to configure
        3,                  // size
        GL_FLOAT,           // type
        GL_FALSE,           // normalized?
        0,                  // stride
        0                   // array buffer offset
    );
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glBindBuffer(GL_ARRAY_BUFFER, uiVBO[1]);
    glVertexAttribPointer(
        0,                  // The attribute we want to configure
        3,                  // size
        GL_FLOAT,           // type
        GL_FALSE,           // normalized?
        0,                  // stride
        0                   // array buffer offset
    );
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glPopMatrix();
    glutSwapBuffers();
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitWindowSize(640,480);
    glutInitWindowPosition(10,10);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);

    glutCreateWindow("Test");

    glewInit();

    // Setup triangle vertices
    fTriangle[0] = -0.4f; fTriangle[1] = 0.1f; fTriangle[2] = 0.0f;
    fTriangle[3] = 0.4f; fTriangle[4] = 0.1f; fTriangle[5] = 0.0f;
    fTriangle[6] = 0.0f; fTriangle[7] = 0.7f; fTriangle[8] = 0.0f;

    // Setup quad vertices

    fQuad[0] = -0.2f; fQuad[1] = -0.1f; fQuad[2] = 0.0f;
    fQuad[3] = -0.2f; fQuad[4] = -0.6f; fQuad[5] = 0.0f;
    fQuad[6] = 0.2f; fQuad[7] = -0.1f; fQuad[8] = 0.0f;
    fQuad[9] = 0.2f; fQuad[10] = -0.6f; fQuad[11] = 0.0f;

    glGenBuffers(2, uiVBO);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, uiVBO[0]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, 9*sizeof(float), fTriangle, GL_STATIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, uiVBO[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, 12*sizeof(float), fQuad, GL_STATIC_DRAW);

    glutReshapeFunc(resize);
    glutDisplayFunc(display);
    glutKeyboardFunc(key);
    glutMouseFunc(mouse);
    glutMotionFunc(motion);
    glutIdleFunc(idle);

    glClearColor(102.0/255.0, 255.0/255.0, 255.0/255.0, 1.0);
    glutMainLoop();

    return EXIT_SUCCESS;
}
Edited by povilaslt2

No errors, nothing.

That might be due to your lack of error checking ;)

 

It seems like you're new to OpenGL, so I would highly recommend learning from this website: http://www.arcsynthesis.org/gltut/ It also uses glut so you shouldn't need to change anything with your current setup to dive right in.

 

Regarding your code, there's quite a lot wrong with it, so I'm not surprised it doesn't work.

Edited by Xycaleth

Sorry, I posted the wrong code. This is the right one, but it has the same effect :X

Edited by povilaslt2


You're creating a GL context with depth but you're not clearing the depth buffer each frame.  First thing you should do is change this:

 

glClear(GL_COLOR_BUFFER_BIT);

 

to this:

 

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

 

You've also got no modelview and projection matrices set - I'm not sure if this is intentional or not?  Likewise I'm not sure if all of the other things you're missing are intentional or not.

 

Can you please confirm whether this code worked before with glBegin/glEnd calls, and whether all the other missing pieces are intentional?

Thanks for the answer. My code works perfectly with glBegin/glEnd. Also, I'm using a gluOrtho2D projection; is that compatible with VBOs?

static void resize(int width, int height)
{
    //const float ar = (float) width / (float) height;

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D (0.0, (GLfloat) width, 0.0, (GLfloat) height);
    glMatrixMode(GL_MODELVIEW);
}
Edited by povilaslt2


Which version of OpenGL are you using? It looks like you're using compatibility mode if you're using OpenGL 3. If your context is using OpenGL 3.3+, then you'll have to use something known as a vertex array object (VAO).

 

I'd also suggest adding the following line right after you call glewInit():

glViewport(0, 0, 640, 480);

Some desktop implementations of OpenGL don't need this, but I've found that some do. Here's some replacement code to try out. Put this at the top of your main file:

#include <iostream>
#include <string>


// PUT ALL OF THIS ABOVE YOUR main() FUNCTION:

// VAO/VBO/shader program handles
GLuint vao = 0;
GLuint vbo = 0;
GLuint program = 0;

// basic vertex shader source
std::string vertSource =
"#version 330 core\n"
"layout(location = 0) in vec3 in_position;"
"void main()"
"{"
"	gl_Position = vec4(in_position.xyz, 1.0);"
"}";

// basic fragment shader source
std::string fragSource =
"#version 330 core\n"
"out vec4 out_color;"
"void main()"
"{"
"	out_color = vec4(1.0, 1.0, 1.0, 1.0);" // color the fragment white
"}";
This code contains variables used for your VAO and VBO handles, and the source code that gets compiled to produce your shader.

 

Comment out this code:

// Setup triangle vertices
    fTriangle[0] = -0.4f; fTriangle[1] = 0.1f; fTriangle[2] = 0.0f;
    fTriangle[3] = 0.4f; fTriangle[4] = 0.1f; fTriangle[5] = 0.0f;
    fTriangle[6] = 0.0f; fTriangle[7] = 0.7f; fTriangle[8] = 0.0f;

    // Setup quad vertices

    fQuad[0] = -0.2f; fQuad[1] = -0.1f; fQuad[2] = 0.0f;
    fQuad[3] = -0.2f; fQuad[4] = -0.6f; fQuad[5] = 0.0f;
    fQuad[6] = 0.2f; fQuad[7] = -0.1f; fQuad[8] = 0.0f;
    fQuad[9] = 0.2f; fQuad[10] = -0.6f; fQuad[11] = 0.0f;

    glGenBuffers(2, uiVBO);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, uiVBO[0]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, 9*sizeof(float), fTriangle, GL_STATIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, uiVBO[1]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, 12*sizeof(float), fQuad, GL_STATIC_DRAW);

Then, put this in place of the code you just commented out:

// vertices that make up a triangle in clip space
float triangle[9] = {
	 0.0f, 1.0f, 0.0f, // top vertex
	-1.0f, 0.0f, 0.0f, // lower-left vertex
	 1.0f, 0.0f, 0.0f, // lower-right vertex
};

// generate and bind vertex array object (VAO)
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// generate, bind and setup vertex buffer object (VBO)
glGenBuffers(1, &vbo); // find a free vbo ID
glBindBuffer(GL_ARRAY_BUFFER, vbo); // binding for the first time actually allocates the buffer
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 9, triangle, GL_STATIC_DRAW); // fill the buffer with data
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);

// create the vertex shader and compile it
GLuint vertShader = glCreateShader(GL_VERTEX_SHADER);
const char *vertStr = vertSource.c_str();
glShaderSource(vertShader, 1, &vertStr, NULL);
glCompileShader(vertShader);

// display the compile log
GLint logLength = 0;
glGetShaderiv(vertShader, GL_INFO_LOG_LENGTH, &logLength);
if(logLength > 0)
{
	char *log = new char[logLength];
	glGetShaderInfoLog(vertShader, logLength, &logLength, log);
	std::cout << "Vertex Shader Compile Log:\n" << log << std::endl;
	delete [] log;
}
	
// create the fragment shader and compile it
GLuint fragShader = glCreateShader(GL_FRAGMENT_SHADER);
const char *fragStr = fragSource.c_str();
glShaderSource(fragShader, 1, &fragStr, NULL);
glCompileShader(fragShader);

// display the compile log
glGetShaderiv(fragShader, GL_INFO_LOG_LENGTH, &logLength);
if(logLength > 0)
{
	char *log = new char[logLength];
	glGetShaderInfoLog(fragShader, logLength, &logLength, log);
	std::cout << "Fragment Shader Compile Log:\n" << log << std::endl;
	delete [] log;
}

// create the shader program, and attach the shaders
program = glCreateProgram();
glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
glLinkProgram(program);

// display the program's link log
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &logLength);
if(logLength > 0)
{
	char *log = new char[logLength];
	glGetProgramInfoLog(program, logLength, &logLength, log);
	std::cout << "Program Link Log:\n" << log << std::endl;
	delete [] log;
}

// delete the shaders
glDeleteShader(vertShader);
glDeleteShader(fragShader);

// use the newly-created shader, and set the clear color
glUseProgram(program);
glClearColor(0.25f, 0.25f, 0.25f, 1.0f);

There's quite a bit going on here, but this is the bare minimum you need to get something drawing on screen in OpenGL 3.3+ (assuming that's your minimum target). I wasn't able to check whether this code compiles as I'm currently at work. Try it out, and get back to us! I'd also suggest looking at this tutorial; conveniently, it does almost exactly what the code I prepared above does.

Thank you, it all works! But when I disable the shaders, it doesn't. Does that mean I always have to use shaders?


Yes, you'll need to use shaders. If you created your OpenGL 3.3+ context with the compatibility profile, you could still use the fixed-function pipeline and OpenGL's built-in (now deprecated) matrix stack, but that isn't recommended.

Keep in mind that gl* calls aren't free. Each one is a command handed to the driver; most commands are buffered and executed asynchronously on the GPU, but some force the CPU to stall until the GPU catches up. That CPU-GPU communication is the real bottleneck: even though the GPU can crunch matrices much faster than the CPU, shipping that work back and forth usually isn't worthwhile.

It's more efficient to write your own matrix library on the CPU side and learn GLSL. You can pre-multiply your matrices on the CPU before rendering (adding logic to your objects so you only recalculate the matrix combinations you actually need), then send the results to your shader as uniforms instead of recomputing the same matrix per-vertex. It's not as difficult as it sounds, and in the end you'll be able to write effects you couldn't possibly achieve on fixed-function hardware.

Edited by Vincent_M

Share this post


Link to post
Share on other sites
