tre

OpenGL Working with OGL3.x-


Hi!
I've been setting up a new context for some time now, and I finally seem to be getting to grips with it. However, I can't render anything in my window, which is quite crippling...

I'm loading an OBJ into vectors and then load that data into a couple of VBOs for texcoords, vertex positions, and normals, plus a vector for indices.
I know for a fact that this code works under lower contexts; I just can't get it to work in the new one.

I'm thinking it's got something to do with using my own matrices and how the shaders are set up... but I can't figure it out.

My render function:
void cOpenGLContext::renderScene(void) {
    glViewport(0, 0, windowWidth, windowHeight); // set the viewport to fill the entire window
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); // clear required buffers

    viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f)); // view matrix: translate back 5 units
    modelMatrix = glm::scale(glm::mat4(1.0f), glm::vec3(0.5f)); // model matrix: halve the size of the model

    shader->bind();
    int projectionMatrixLocation = glGetUniformLocation(shader->id(), "projectionMatrix"); // location of the projection matrix in the shader
    int viewMatrixLocation = glGetUniformLocation(shader->id(), "viewMatrix"); // location of the view matrix
    int modelMatrixLocation = glGetUniformLocation(shader->id(), "modelMatrix"); // location of the model matrix

    glUniformMatrix4fv(projectionMatrixLocation, 1, GL_FALSE, &projectionMatrix[0][0]); // send the projection matrix to the shader
    glUniformMatrix4fv(viewMatrixLocation, 1, GL_FALSE, &viewMatrix[0][0]); // send the view matrix
    glUniformMatrix4fv(modelMatrixLocation, 1, GL_FALSE, &modelMatrix[0][0]); // send the model matrix

    // rest of render function here
    // VBO [start]
    // drawing
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // vertex buffer
    glBindBuffer(GL_ARRAY_BUFFER, earth.vBufferObject);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    // texture buffer
    glBindBuffer(GL_ARRAY_BUFFER, earth.tBufferObject);
    glTexCoordPointer(3, GL_FLOAT, 0, 0);

    // normal buffer
    glBindBuffer(GL_ARRAY_BUFFER, earth.nBufferObject);
    glNormalPointer(GL_FLOAT, 0, 0);

    // index buffer
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, earth.iBufferObject);

    // draws it all
    glDrawElements(GL_TRIANGLES, earth.vFaces.size(), GL_UNSIGNED_INT, 0);

    // unbinds the buffers
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    // drawing end
    shader->unbind();

    SwapBuffers(hdc); // swap buffers so we can see our rendering
}





My shaders

// vert
#version 150 core

in vec3 in_Position;
in vec3 in_Color;
out vec3 pass_Color;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

void main(void) {
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);
    pass_Color = in_Color;
}



// frag
#version 150 core

in vec3 pass_Color;
out vec4 out_Color;

void main(void) {
    out_Color = vec4(pass_Color, 1.0);
}





Can anyone see what I'm doing wrong? I'm going blind, staring at this code.

Thanks!
Marcus

It looks like you need to study the 3.x specification a bit more closely. OpenGL 3 deprecates a lot of functionality; just skimming your code, I can see plenty of deprecated calls: glEnableClientState, glVertexPointer, etc. You need to learn how to use vertex attribute arrays.

If you are running an OpenGL 3.2+ core context, those functions simply will not work at all. Running 3.1 (or a compatibility profile) should still expose the deprecated functionality.
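To make that concrete, here's a rough sketch of the attribute-array equivalent of the client-state calls in the render function above. It reuses the earth.vBufferObject, earth.iBufferObject, and shader->id() names from the original post; posLoc is a placeholder of mine, and the attribute name in_Position comes from the vertex shader. This fragment needs a live GL context, so treat it as a sketch, not drop-in code:

```cpp
// Query the location of the attribute declared in the vertex shader.
GLint posLoc = glGetAttribLocation(shader->id(), "in_Position");

// Replaces glEnableClientState(GL_VERTEX_ARRAY) + glVertexPointer.
glBindBuffer(GL_ARRAY_BUFFER, earth.vBufferObject);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);

// ...same pattern for the normal and texcoord buffers, then draw:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, earth.iBufferObject);
glDrawElements(GL_TRIANGLES, earth.vFaces.size(), GL_UNSIGNED_INT, 0);

glDisableVertexAttribArray(posLoc);
```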

Hi, TheBuzzSaw.
Do you mean something like:

// function for setting up the buffers
glGenBuffers(1, &vBufferObject);
glBindBuffer(GL_ARRAY_BUFFER, vBufferObject);
glBufferData(GL_ARRAY_BUFFER, ((int)(vertices.size()) * 3 * sizeof(float)), &(vertices.at(0)), GL_STATIC_DRAW);
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);

// ...

// draw function
glBindVertexArray(earth.vBufferObject);
glDrawArrays(GL_TRIANGLES, 0, 3);


Thanks for your previous answer.

Yes, you are on the right track. I've not gotten into the habit of using vertex array objects yet, but vertex attribute arrays and vertex buffer objects should pretty much drive your entire program. ;)

Hm hm. Okay. I've got stuff to think about then. Crap. But thanks for your answer :)

If anyone else has anything to add, please do.

Thanks!
Marcus

Quote:
Original post by Trefall
Here's how I use VAO and VBO: Cube.cpp


Thanks for posting that. That's how I'm doing it right now. But I'm still not getting anything on the screen. Just my regular background.

// vao, vbo, ibo setup
vao = 0;

// vertex buffer
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vBufferObject);
glBindBuffer(GL_ARRAY_BUFFER, vBufferObject);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*vertices.size(), NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(float)*vertices.size(), &vertices[0]);


//.....


// render function
glBindVertexArray(earth.vao);
glDrawElements(GL_TRIANGLES, earth.vFaces.size(), GL_UNSIGNED_INT, 0);
glBindVertexArray(0);


Thanks for the help, guys.

I assume you also bind and buffer your index list? :) Use this snippet to see if OpenGL is throwing any errors internally:


{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        std::cout << "OpenGL prog error: " << gluErrorString(err) << std::endl;
    std::cout.flush();
}
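One caveat worth adding to that snippet: glGetError returns only one error flag per call, and several can be queued, so it's safer to drain them in a loop. This variant (a sketch of mine, still requiring a GL context) prints the raw enum in hex instead of relying on gluErrorString:

```cpp
// Drain every pending error flag, not just the first one.
for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
    std::cout << "OpenGL error: 0x" << std::hex << err << std::dec << std::endl;
```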

You don't wanna be using GLU in OpenGL 3.

I would recommend trying to draw without VAOs for testing. Just see if you can bind a buffer and draw it.
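In that spirit, here's a hypothetical stripped-down test draw with no VAO: one buffer, one attribute. The tri and posLoc names are mine; shader and in_Position come from the earlier posts. Note that a strict 3.2+ core context still requires *some* VAO to be bound, so this test really applies to 3.1 or compatibility contexts:

```cpp
// Three vertices of a single triangle.
GLfloat tri[] = { -0.5f, -0.5f, 0.0f,
                   0.5f, -0.5f, 0.0f,
                   0.0f,  0.5f, 0.0f };

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);

GLint posLoc = glGetAttribLocation(shader->id(), "in_Position");
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);

glDrawArrays(GL_TRIANGLES, 0, 3); // if a triangle appears, the VAO path is the problem
```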

You may also want to calculate your modelview * projection matrix on the CPU and upload it once to the GPU, instead of doing the multiplication per-vertex. The model matrix still has to be passed up separately for each unique object.

The only access you have to the GL default attributes is through the gl_* attributes in your shader. User-defined attributes have to be bound and specified by the user; it doesn't happen automagically. See the GLSL spec for reference. With that said, your shader is transforming some unknown attribute, since the default vertex attribute that glVertexPointer feeds is gl_Vertex, which is not valid when #version 150 is defined.
