MiguelMartin

OpenGL Vertex Buffer Objects, what on earth am I doing wrong?


Well, it seems like I can't do anything right. I've tried for hours to get this working, but I just can't seem to do it. Basically, I'm getting 'EXC_BAD_ACCESS' (a null pointer dereference, I believe — the same thing happens on Windows with VC++) in this call:


glDrawArrays(GL_QUADS, 0, 4);


Now, I'm not sure why exactly this isn't working, since I generate a buffer, update it, and then render it. I also tried getting VBOs to work with GLUT and plain C functions instead of what I'm using now (SFML 2.0 plus the library I'm writing). It worked with GLUT, but it won't work in the C++ version. I've been comparing the two to see what I've done differently, but everything looks just about the same (except that the C++ version checks whether texture coordinates are required, and that branch isn't even hit).

Here's my code — can someone PLEASE explain what I have done wrong?
P.S. I am doing all of this after creating the window and OpenGL context (which SFML 2.0 handles). And sorry if the indentation is wrong/messed up, Xcode is weird o:



// Initializing the VBO

/////////////////////////////////////////////////////////
/// Adds a Renderable2DObject that the Renderer will render.
/// \param renderableObject The Renderable2DObject you wish to add.
/////////////////////////////////////////////////////////
void OGLRenderer::addObjectToRenderList(const Renderable2D& renderableObject)
{
    // Create a VBO for a specific object.
    GLuint sizeOfBuffer = 0; // The size of the buffer.
    GLuint nameOfBuffer = 0; // The name of the buffer.

    // If it is a Renderable2DRect...
    if(typeid(renderableObject) == typeid(Renderable2DRect*))
    {
        sizeOfBuffer = 4 * sizeof(OGLVertex);
    }

    glGenBuffers(sizeOfBuffer, &nameOfBuffer); // Generate the buffer.

    glBindBuffer(GL_ARRAY_BUFFER, nameOfBuffer); // Bind to the buffer.
    // Tell OpenGL how we're going to manage the data, but don't upload anything at this time.
    glBufferData(GL_ARRAY_BUFFER,
                 sizeOfBuffer, // The size of the buffer
                 0,            // The actual data (none yet)
                 GL_STREAM_DRAW);

    renderList.insert(RenderListPair(nameOfBuffer, &renderableObject)); // Add it to the rendering list.
}

// Updating the VBO

/////////////////////////////////////////////////////////
/// Updates the VBO information for a Renderable2D object.
/// \param rect The Renderable2DRect you wish to update information for.
/////////////////////////////////////////////////////////
void OGLRenderer::updateVboInformationForRect(const Renderable2DRect& rect)
{
    std::vector<OGLVertex> vertices; // The vertices of the object.

    const Colour& colour = rect.getTintColour();
    // The size of the final rectangle: if cropping is in use, the cropping
    // dimensions; otherwise, the size of the rectangle.
    const DimensionFloat& size = rect.doesUseCropping() ? rect.getSubRectDimensions() : rect.getSize();

    vertices.resize(4); // Resize to 4 vertices.

    // Set the positions up.
    vertices[0].position.x = 0;
    vertices[0].position.y = 0;

    vertices[1].position.x = size.width;
    vertices[1].position.y = 0;

    vertices[2].position.x = size.width;
    vertices[2].position.y = size.height;

    vertices[3].position.x = 0;
    vertices[3].position.y = size.height;

    // If Texture2D mapping is enabled and there is a texture...
    if(isRenderingOptionEnabled(RenderingOptions::Texture2DMapping) &&
       rect.getTexture() != 0)
    {
        // Set the texture coordinates up.
        Rect2DFloat textureCoords = calcTextCoords(rect);

        vertices[0].textureCoords.x = textureCoords.position.x;
        vertices[0].textureCoords.y = textureCoords.position.y;

        vertices[1].textureCoords.x = textureCoords.size.width;
        vertices[1].textureCoords.y = textureCoords.position.y;

        vertices[2].textureCoords.x = textureCoords.size.width;
        vertices[2].textureCoords.y = textureCoords.size.height;

        vertices[3].textureCoords.x = textureCoords.position.x;
        vertices[3].textureCoords.y = textureCoords.size.height;
    }

    // Set the colour of the object.
    for(int i = 0; i < 4; ++i)
    {
        vertices[i].colour = colour;
    }

    // Now send this information to OpenGL.
    glBufferSubData(GL_ARRAY_BUFFER, 0, 4 * sizeof(OGLVertex), &vertices[0]);
}


// Rendering the VBO

/////////////////////////////////////////////////////////
/// Renders a Renderable2DRect.
/// \param rect The Renderable2DRect you wish to render.
/////////////////////////////////////////////////////////
void OGLRenderer::renderRect(const Renderable2DRect& rect)
{
    // Update the information.
    updateVboInformationForRect(rect);

    bool isTexturingEnabled = isRenderingOptionEnabled(RenderingOptions::Texture2DMapping); // Whether texturing is enabled.
    bool shouldUseTextures = isTexturingEnabled && (rect.getTexture() != 0); // Whether the renderer should use texturing.

    // Enable the client states (for VBOs), getting ready to draw.
    glEnableClientState(GL_VERTEX_ARRAY); // Enable vertex arrays for the VBOs.
    glVertexPointer(3, GL_FLOAT, sizeof(OGLVertex), BUFFER_OFFSET(0));

    glEnableClientState(GL_COLOR_ARRAY); // Enable colour arrays for the VBOs.
    glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(OGLVertex), BUFFER_OFFSET(sizeof(Vector3<GLfloat>)));

    if(shouldUseTextures)
    {
        // Enable texturing.
        glEnableClientState(GL_TEXTURE_2D_ARRAY_EXT);
        glTexCoordPointer(2, GL_FLOAT, sizeof(OGLVertex), BUFFER_OFFSET(sizeof(Vector3<GLfloat>) + sizeof(Colour)));

        // Bind to the actual texture.
        glBindTexture(GL_TEXTURE_2D, rect.getTexture()->getID());
    }

    glDrawArrays(GL_QUADS, 0, 4); // Draw the Renderable2DRect.

    // Now disable them.
    glDisable(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);

    // If texturing was enabled, disable it.
    if(shouldUseTextures)
    {
        glDisableClientState(GL_TEXTURE_2D_ARRAY_EXT);
    }
}


Sorry again if it isn't indented properly o.O. Oh, and here's that OGLVertex:


///////////////////////////////////////////////////////////////////////////
/// \struct OGLVertex
/// \brief A Vertex data structure.
///
/// A Vertex data structure that holds everything that is required for one Vertex.
/// It is mainly used to upload data to VRAM, using OpenGL.
///
/// \author Miguel Martin.
///////////////////////////////////////////////////////////////////////////
struct OGLVertex
{
public:
    /////////////////////////////////////////////////////////
    /// The Position of the OGLVertex.
    /////////////////////////////////////////////////////////
    Vector3<GLfloat> position;

    /////////////////////////////////////////////////////////
    /// The Colour of the OGLVertex.
    /////////////////////////////////////////////////////////
    Colour colour;

    /////////////////////////////////////////////////////////
    /// The Texture Coordinates of the OGLVertex.
    /////////////////////////////////////////////////////////
    Vector2<GLfloat> textureCoords;

    /////////////////////////////////////////////////////////
    /// Extra padding to round off the size of the Vertex to 32 bytes.
    /////////////////////////////////////////////////////////
    GLfloat padding[2];
};

I don't see anything obvious in your OpenGL code, although I didn't do a thorough check. What caught my eye, though, was this (cut down to the essential parts):
[source]
void OGLRenderer::addObjectToRenderList(const Renderable2D& renderableObject)
{
    GLuint sizeOfBuffer = 0;

    if(typeid(renderableObject) == typeid(Renderable2DRect*))
    {
        sizeOfBuffer = 4 * sizeof(OGLVertex);
    }
[/source]
If the dynamic type of renderableObject is indeed Renderable2DRect, which I assume is what you intend in this case, then the type ids still won't match: the left-hand side is of type Renderable2DRect and the right-hand side is Renderable2DRect *. Observe the reference type vs. the pointer type.

The result is that sizeOfBuffer remains zero, and the vertex buffer is thus created with zero bytes. Basically, you're drawing four vertices from a vertex buffer containing zero vertices.

What is this?
glEnableClientState(GL_TEXTURE_2D_ARRAY_EXT);

I suggest using glGetError() a lot when you are debugging.

OP, it might help to use an OpenGL debugging tool like gDEBugger.

Set the program to break on glDrawArrays, and at the breakpoint use the tool to look inside the VBOs and make sure everything looks okay.

Usually when I get an OpenGL error like the one you describe, it's because the coordinates were uploaded to the VBO incorrectly, or weren't uploaded at all.


glGenBuffers(sizeOfBuffer, &nameOfBuffer); // Generate the buffer


The first parameter of glGenBuffers is the number of buffers to generate, not the size of a buffer — pass 1 instead of sizeOfBuffer. The size of the buffer is determined when you call glBufferData, which is when the buffer's storage is actually allocated.

Thanks a lot — it doesn't seem to complain any more :D. Oh, and V-man, my Mac OpenGL headers didn't seem to declare GL_TEXTURE_2D_ARRAY, but they did declare GL_TEXTURE_2D_EXT. Anyhow, I wasn't using textures, so that wasn't the problem. Thanks again :).
