OpenGL Not Rendering Models

Hello,

I have an issue with an "engine" that I "wrote": the 3D models (.obj) are not rendered, and this only happens on nVidia graphics cards. I have only tried a GT 220 and a GT 430; on an ATI video card the application runs with no problems, and the shaders compile correctly. The OpenGL context is definitely initializing, because I can see the glClear color.
If you can give me any information about this problem, or if you have encountered it before, please help.

Thank You!
It is hard to tell what the problem is from text only. Could you please provide some comparison screenshots? Have you checked for OpenGL errors? There are always differences between ATI and nVidia GPUs, and it takes some time to fix all the issues that only show up on one platform.
After checking for OpenGL errors, I managed to pinpoint the problem to this call:
[source lang="cpp"]glVertexAttribPointer
(
0,        // attribute index
3,        // components per vertex (x, y, z)
GL_FLOAT, // component type
GL_FALSE, // not normalized
0,        // stride (0 = tightly packed)
(void*)0  // offset into the bound buffer
);[/source]
which sets up the vertex attribute pointer for the buffer. It just reports "invalid operation".
1. Check that you don't call this between a call to glBegin and glEnd.

2. Check for errors both before and after this call, because the error you read may have been set by an earlier OpenGL API call.
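A simple error-check helper along those lines might look like this (a minimal sketch; the function name and message format are just for illustration):

[source lang="cpp"]#include <cstdio>
// plus your usual OpenGL headers

// Drain every pending OpenGL error and report where it was detected.
void checkGLErrors(const char* where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "OpenGL error 0x%04X at %s\n", err, where);
}

// Usage: checkGLErrors("before glVertexAttribPointer");[/source]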
This function is called in the rendering loop, and I have two error checks before and after the function.

[source lang="java"]CHECKOGLERRORS;
glVertexAttribPointer
( ... );
CHECKOGLERRORS;[/source]
I am not sure what the last parameter should do and how I could change it because I think that's the problem.
The last parameter, unless I'm sadly mistaken, is a pointer to the beginning of the data. If the data lives in a buffer, it gets treated as an offset rather than a direct pointer to memory; (void*)0 simply tells it to start at the beginning of the buffer, so if you have a vertex struct, the very first entry in the struct should be this attribute. Typically when I get an error there (and I only use nVidia cards), it's because I've failed to enable that vertex array OR because I've passed an invalid value (like 8 instead of 3 floats). Be sure also that your shader program uses a vec3 where you pass it those 3 floats.

EDIT: Just thought of this too: make sure that the layout of your shader program matches the order of the attribute arrays. Thus, attribute 0 must be a vec3 or another data type holding 3 floats.
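For example, a setup along these lines (the shader source and names here are only an illustration, not your actual code) keeps attribute 0 and the shader's vec3 input in agreement:

[source lang="cpp"]// Vertex shader: the input at location 0 is declared as a vec3.
const char* vertexShaderSource =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";

// C++ side: attribute 0 is fed three floats per vertex,
// matching the vec3 declared above.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);[/source]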
After browsing some forums I found out that nVidia's newer drivers enforce the use of a VAO together with VBOs in OpenGL 3.x. The problem is that I don't know what a VAO is or where I should use it. I am already using VBOs, so if you can point me in the right direction on how to use a VAO I would appreciate it very much.
That's probably what your issue is. Just call glGenVertexArrays to create a VAO, and then glBindVertexArray to bind it before you do your glBindBuffer, glEnableVertexAttribArray and glVertexAttribPointer calls (do this once). From that point on, all you have to do is bind the vertex array in your render loop, and the rest of the state (which was saved by making those calls with a VAO bound) is set up automatically.
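A minimal sketch of that setup might look like this (assuming you already have a GL 3.x context and a compiled shader program; vertices and vertexCount are placeholder names for your own data):

[source lang="cpp"]GLuint vao, vbo;

// One-time setup: create and bind the VAO, then record the
// buffer binding and attribute layout into it.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 3 * vertexCount, vertices, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

glBindVertexArray(0); // done recording state

// Render loop: binding the VAO restores all of that state.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);[/source]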
The reason vertex array objects were introduced is to reduce the call traffic when binding your attribute and index buffers: you set up a VAO once, bind all the buffers the way you want for that VAO, and then just switch between different VAOs. I don't get why it was made mandatory; there's no "default VAO" you could just use if you only want to render a single set of geometry, but that's the way it is.

Koehler already told you what functions to use and what to do, I just wanted to clear up what VAOs are around for :)
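To make the switching point concrete, drawing several meshes then boils down to something like this (terrainVAO, characterVAO and the vertex counts are made-up names for the example):

[source lang="cpp"]// No per-mesh attribute setup in the render loop;
// binding a different VAO swaps in all of its recorded state.
glBindVertexArray(terrainVAO);
glDrawArrays(GL_TRIANGLES, 0, terrainVertexCount);

glBindVertexArray(characterVAO);
glDrawArrays(GL_TRIANGLES, 0, characterVertexCount);[/source]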
Thank you all for your help, creating a Vertex Array Object fixed the problem. It's actually sad that I had to spend so much time figuring out what the problem was.

