

Member Since 27 Nov 2007
Offline Last Active Feb 23 2015 01:16 AM

Topics I've Started

OpenGL uninitialized buffer warning for iPhone

19 February 2015 - 07:26 PM

I've been racking my brain trying to figure this out for some time now, and it's clear that I need some outside help.


I have some VBOs/IBOs that I render through the following method. Batch objects just group data based on common textures:

- (void)render {
    //  if the buffer is empty, don't waste time
    if ([indexBuffer getCurrentSize] <= 0)
        return;

    //  binding shared buffer objects
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, [indexBuffer getID]);
    glBindBuffer(GL_ARRAY_BUFFER, [vertexBuffer getID]);
    glVertexPointer(3, GL_FLOAT, sizeof(struct Vertex), (char *)NULL + 0);
    glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), (char *)NULL + 12);
    glNormalPointer(GL_FLOAT, sizeof(struct Vertex), (char *)NULL + 20);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(struct Vertex), (char *)NULL + 32);

    DynamicArray *renderBatches = [indexBuffer getRenderBatches];
    RenderBatch *batch;

    for (int i = 0; i < [indexBuffer getActiveBatchCount]; i++) {
        batch = (RenderBatch *)[renderBatches get:i];
        [BufferManager setTextureID:[batch getTextureID]];
        //  render the subset batch of geometry
        glDrawElements(GL_TRIANGLES, [batch getIndexCount], GL_UNSIGNED_SHORT, (GLvoid *)[batch getIndexBatchOffset]);
    }

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

IBOs are set up via:

- (void)build {
    glGenBuffers(1, &bufferID);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferID);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, bufferSize, NULL, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    curBufferSize = bufferSize;
}

While VBOs are set up via:

- (void)build {
    glGenBuffers(1, &bufferID);
    glBindBuffer(GL_ARRAY_BUFFER, bufferID);
    glBufferData(GL_ARRAY_BUFFER, bufferSize, NULL, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    curBufferSize = bufferSize;
}

IBOs get data with:

- (void)updateData:(GraphicElement *)ge_ {
    RenderBatch *batch;
    GLuint textureID;
    //  updating the index offset of the graphic element
    [ge_ setIndexOffset:curBufferSize / sizeof(unsigned short)];
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferID);
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, curBufferSize, [ge_ getIndexCount] * sizeof(unsigned short), [ge_ getIndices]);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    //  working with render batches
    textureID = [[[ge_ getElement] getParent] getOpenGLTextureID];
    if (activeBatchCount == 0) {
        //  pseudocode: this is the first graphic element, so create the first batch
    } else {
        //  pseudocode: find a batch with a matching texture ID, or create a new batch
    }
    curBufferSize += [ge_ getIndexCount] * sizeof(unsigned short);
}

VBOs get data from:

- (void)updateData:(GraphicElement *)ge_ {
    glBindBuffer(GL_ARRAY_BUFFER, bufferID);
    glBufferSubData(GL_ARRAY_BUFFER, curBufferSize, [ge_ getVertexCount] * sizeof(struct Vertex), [ge_ getVertices]);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    curBufferSize += [ge_ getVertexCount] * sizeof(struct Vertex);
}

Now, for the actual problem: visually, there are no issues at all. The game renders perfectly and is 100% stable. However, when running the OpenGL ES Analyzer, I get thousands of the following warning:

"Your application made a drawing call using a buffer that contains uninitialized data. If a draw call uses uninitialized data, the rendering results are incorrect and unpredictable. One way to fix this issue is to provide the buffer data to any BufferData calls instead of a NULL pointer."

Example of Responsible Command column:

I'm working with OpenGL ES 1.1 (I plan on moving on to 2.0 or 3.0 for my next game) and the OpenGL instrument was running on an iPad Air with iOS 7.x.


If anyone has any insight into how to correct these warnings, that would be amazingly helpful!

Pandemic 2.5 on App Store for iPad / iPhone

02 May 2012 - 07:10 PM

Posted Image

Pandemic 2.5, an expanded and refined version of my popular flash game Pandemic 2, has hit the App Store today for iPads, iPhones and iPod Touches.

So far the response has been great, but since this is my first commercial effort, I'm attempting to branch out and reach as many people as possible without spending money I don't have on marketing.

Posted Image

Brief feature highlights:
  • Create your very own custom disease and watch as it spreads across the world through the human population.
  • Combine real symptoms to produce the most infectious and deadly disease the world has ever seen.
  • Select from Viral, Bacterial or Parasitic disease classes based on your play-style.
  • Outmaneuver governments, health organizations and doctors as the human world tries to prevent the spread of your disease through vaccine research, quarantines, body disposal, martial laws and more.
  • Be opportunistic: take advantage of natural disasters whenever and wherever they may strike.
  • Unlock achievements and compare high scores with full Game Center support.
  • Purchase once for just $0.99, and enjoy the game on any of your devices.
More info and screens available on my site: www.darkrealmstudios.com
Link to Pandemic 2.5 in the App Store: http://itunes.apple....37492?ls=1&mt=8

And a special thank you to GameDev and its members for their insight and help!

Odd texture filtering issue

24 September 2011 - 01:00 PM

I'm approaching completion on my first OpenGL ES game (yay!), but just recently ran into an odd issue with texture filtering. I was trying to improve the game's performance when linear filtering suddenly broke. The texture-creation code was this:

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

But now, I'm having to set the texture parameters when binding textures and rendering geometry. Is this normal, and was it just a fluke that it was working before?

Quick confirmation on Triangle Strips / Degenerate Triangles

11 August 2011 - 03:22 PM

In order to squeeze some increased performance out of iOS devices, I'm working on converting my geometry into indexed triangle strips. I found a program (htgen) that works with Wavefront OBJ files and produces a list of indices forming triangle strips. Brilliant, I thought! But when I tried rendering the newly generated data, the end result looked even worse than this:

Posted Image

Initially, I knew that I had to connect strips with degenerate triangles; however, I had no idea that the number of degenerate triangles between strips was not constant.

After spending hours reading up on triangle strips, I finally discovered that the winding of a succeeding strip can be broken if the incorrect number of degenerate triangles is inserted between two strips. I spent a few more hours writing out basic triangle strips over pages and pages of paper and figured out (or so I thought?) exactly how many degenerate triangles are needed, depending on the number of the previous strip's last triangle.

So please correct me if I'm wrong, but:
1) If I have 4 triangles, defined as the indices 0 1 2 3 4 5, OpenGL will draw 4 triangles in the following order: 0 1 2, 2 1 3, 2 3 4, 4 3 5 (which let's assume is CCW for this example).

2) When joining two strips, normally only 2 extra indices need to be inserted, generating 4 degenerate triangles. So, building on the previous strip: if I wanted to join a new strip consisting of 3 new triangles, defined as a strip of 6 7 8 9 10 (triangles 6 7 8, 8 7 9, 8 9 10), the final joined list of indices would look like 0 1 2 3 4 5 [5 6] 6 7 8 9 10, where the bracketed numbers are the extra inserted indices creating 4 new degenerate triangles.

3) If joining two strips where the first triangle of the new strip is odd-numbered (as in: the first strip had 3 triangles, numbered 0, 1 and 2, and the starting triangle of the 2nd strip is numbered 3), three additional indices must be inserted between the two strips, generating a total of 5 degenerate triangles, in order for the winding of the 2nd strip to remain CCW.

Now, based on what I just outlined above, I fixed up my code to insert the correct number of degenerate triangles between strips and got the above image, which, while an improvement over the original, is still in no way correct (the object should look like Canada and the US). So now I'm stuck, and I have to assume that, if my understanding above is correct, the htgen program is not generating strips with consistent winding orders (the objects render perfectly fine if I turn off culling).

If that's the case, it looks like I'll have to try generating strips myself, because I literally cannot find another suitable application that will generate triangle strips. So, in the interest of saving myself even more headache and time spent writing my own stripping tool, I wanted to confirm that my understanding of the subject is indeed correct.

Possibly corrupt VBO/IBO?

09 August 2011 - 10:26 PM

I'm currently attempting to get vertex buffer objects to work with interleaved data for indexed triangles.

I've manually confirmed that both the index data and the actual vertex data are correct prior to being loaded into their respective buffer objects. The array holding the index data is simply an array of unsigned shorts. The vertex array is an array of floats with the following format (each bracket symbolizing a float):

[vect x][vect y][vect z] [texture u][texture v] [norm x][norm y][norm z] [r g b a]

In total, each vertex in the array is a series of 9 floats. I have the following offsets and strides:

newMesh.vertexOffset = 0;
newMesh.vertexStride = sizeof(GLfloat) * 6;
newMesh.uvOffset = sizeof(GLfloat) * 3;
newMesh.uvStride = sizeof(GLfloat) * 7;
newMesh.normalOffset = sizeof(GLfloat) * 5;
newMesh.normalStride = sizeof(GLfloat) * 6;
newMesh.colorOffset = sizeof(GLfloat) * 8;
newMesh.colorStride = sizeof(GLfloat) * 8;

Here is where and how I create the buffer objects. I was originally using glMapBufferOES(), but replaced it with a simpler approach in an attempt to find what is wrong. The arrays being passed to the buffer objects are allocated dynamically, but I'm unsure whether I should free them after handing them off.

glGenBuffers(1, &vertexBufferID);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
glBufferData(GL_ARRAY_BUFFER, vertexBufferSize, vertexBufferList, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

//creating index buffer object
glGenBuffers(1, &indexBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexBufferSize, indexBufferList, GL_STATIC_DRAW);

The buffer sizes have been checked and rechecked and are correct. Here is how I actually go about attempting to render the content. It's probably safe to ignore the texture code block, but I've included it just in case:


glPushMatrix();
glTranslatef(tempMeshInstance.x, tempMeshInstance.y, tempMeshInstance.z);
glRotatef(tempMeshInstance.rotX, 1.0f, 0.0f, 0.0f);
glRotatef(tempMeshInstance.rotY, 0.0f, 1.0f, 0.0f);
glRotatef(tempMeshInstance.rotZ, 0.0f, 0.0f, 1.0f);
glScalef(tempMeshInstance.scaleX, tempMeshInstance.scaleY, tempMeshInstance.scaleZ);

//binding the index buffer object
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, tempMeshInstance.meshPtr.indexBufferID);
//binding the vertex buffer object
glBindBuffer(GL_ARRAY_BUFFER, tempMeshInstance.meshPtr.vertexBufferID);
//passing the mesh instance data to OpenGL
glMaterialfv(GL_BACK, GL_AMBIENT_AND_DIFFUSE, tempMeshInstance.material);
glVertexPointer(3, GL_FLOAT, tempMeshInstance.meshPtr.vertexStride, ((char *)NULL + (tempMeshInstance.meshPtr.vertexOffset)));
glTexCoordPointer(2, GL_FLOAT, tempMeshInstance.meshPtr.uvStride, ((char *)NULL + (tempMeshInstance.meshPtr.uvOffset)));
glNormalPointer(GL_FLOAT, tempMeshInstance.meshPtr.normalStride, ((char *)NULL + (tempMeshInstance.meshPtr.normalOffset)));
glColorPointer(4, GL_UNSIGNED_BYTE, tempMeshInstance.meshPtr.colorStride, ((char *)NULL + (tempMeshInstance.meshPtr.colorOffset)));

//working with textures
if (tempMeshInstance.meshPtr.textureID != prevTextureID && tempMeshInstance.meshPtr.textureID != 0) {
	glBindTexture(GL_TEXTURE_2D, tempMeshInstance.meshPtr.textureID);
	//if 2d texturing is disabled, re-enable it
	if (!glIsEnabled(GL_TEXTURE_2D))
		glEnable(GL_TEXTURE_2D);
} else if (tempMeshInstance.meshPtr.textureID == 0 && glIsEnabled(GL_TEXTURE_2D)) {
	glDisable(GL_TEXTURE_2D);
}
prevTextureID = tempMeshInstance.meshPtr.textureID;

//rendering the mesh instance
glDrawElements(tempMeshInstance.meshPtr.renderFormat, tempMeshInstance.meshPtr.indices, GL_UNSIGNED_SHORT, ((char *)NULL + (0)));
//popping the matrix off the stack
glPopMatrix();

//continuing through the list
gameObjs = gameObjs.nextNode;
glBindBuffer(GL_ARRAY_BUFFER, 0);

And this is the lovely rendering I'm rewarded with:

Posted Image

It looks nothing like the actual object. In summary, I don't know whether the buffer objects are somehow becoming corrupted, whether I'm using them incorrectly (strides and offsets?), or whether I'm creating them incorrectly, but something is clearly wrong.

Any clues or hints as to what may be causing this would be more than welcome!