# glDrawElements/Arrays Crash

## Recommended Posts

dopplex    164
EDIT: Still having the issue; see the most recent post for updated troubleshooting.

I'm banging my head against a wall here. As far as I can tell, everything is properly set up (all the proper data in the VBO and IBO, not trying to draw too many primitives, etc.), and yet glDrawElements keeps crashing with an "Access Violation reading location (some memory location)". I'm checking glGetError in an OCD manner and turning nothing up prior to the crash. I also mapped the VBO and IBO and dumped their contents to disk before the glDrawElements call, and the contents are exactly what I would expect, so the underlying data seems fine.

One other note: the call works with a number of indices up to around 10,000 or so - I'm trying to draw 240,000 of them. I checked GL_MAX_ELEMENTS_VERTICES and GL_MAX_ELEMENTS_INDICES, and it seems that shouldn't be a problem. Since I also checked my VBO and it does hold 240,000 pairs of floats, along with 240,000 indices, I would think that ought to be fine as well.

Here's the code where I set up the VBO and IBO:
```cpp
void initFSPointQuad(unsigned int width, unsigned int height, unsigned int &VBO_ID, unsigned int &IBO_ID)
{
    float *tempVertices;
    unsigned int *inds;
    unsigned int numElem = 2; // floats stored per vertex (a UV pair)
    unsigned int IBOSize = height * width * sizeof(unsigned int);
    unsigned int VBOSize = numElem * height * width * sizeof(float);
    tempVertices = (float*)malloc(VBOSize);
    inds = (unsigned int*)malloc(IBOSize);
    float xDiff = 0.5f / width;
    float yDiff = 0.5f / height;

    // populate VBO data with appropriate UV coords for each pixel
    for (unsigned int i = 0; i < width; i++)
    {
        for (unsigned int j = 0; j < height; j++)
        {
            if (numElem * (i * height + j) * sizeof(float) > VBOSize)
            {
                cerr << "Trying to write to a bad location!" << endl;
            }
            tempVertices[numElem * (i * height + j)]     = (float) i / width  + xDiff;
            tempVertices[numElem * (i * height + j) + 1] = (float) j / height + yDiff;

            inds[i * height + j] = i * height + j;
        }
    }

    GLuint tIBO, tVBO;
    GLERRCHECK;
    glGenBuffersARB(1, &tVBO);
    GLERRCHECK;
    glGenBuffersARB(1, &tIBO);
    GLERRCHECK;
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, tVBO);
    GLERRCHECK;
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, VBOSize, tempVertices, GL_STATIC_DRAW);
    GLERRCHECK;

    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, tIBO);
    GLERRCHECK;
    glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, IBOSize, inds, GL_STATIC_DRAW);
    GLERRCHECK;

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
    GLERRCHECK;
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
    GLERRCHECK;

    free(tempVertices);
    free(inds);
    VBO_ID = tVBO;
    IBO_ID = tIBO;
}
```
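For what it's worth, access violations inside glDrawElements are most often caused by an index that points past the end of the vertex data. A quick CPU-side check of the index buffer (just a sketch; `indicesInRange` is a throwaway helper, not part of the code above) can rule that out before the draw call:

```cpp
#include <cassert>
#include <vector>

// Hypothetical helper: verify every index refers to a vertex that
// actually exists in the VBO. Out-of-range indices are the classic
// cause of access violations inside glDrawElements.
bool indicesInRange(const std::vector<unsigned int>& inds, unsigned int vertexCount)
{
    for (unsigned int idx : inds)
        if (idx >= vertexCount)
            return false;
    return true;
}
```

Here every index is written as `inds[i * height + j] = i * height + j`, so the largest value is `width * height - 1`, which is in range for a VBO holding `width * height` vertices.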


And here's the draw call (with lots of extra debugging code commented out, generated in the process of trying to figure out what's going on here):
```cpp
void drawFSPointQuad(void* args)
{
    float xBase = 0.5f / (float)QuadArgs->width;
    float yBase = 0.5f / (float)QuadArgs->height;
    float xDiff = 2.0f * xBase;
    float yDiff = 2.0f * yBase;

    GLint maxVertElem;
    GLint maxIndElem;
    glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxVertElem);
    glGetIntegerv(GL_MAX_ELEMENTS_INDICES, &maxIndElem);

    glClear(GL_COLOR_BUFFER_BIT);
    GLERRCHECK;
    glPointSize(1.0f);
    GLERRCHECK;
    glEnableClientState(GL_VERTEX_ARRAY);
    GLERRCHECK;

    //unsigned int * testInds = (unsigned int*)malloc(nElemDraw * sizeof(unsigned int));
    //for (unsigned int i = 0; i < nElemDraw; i++) testInds[i] = i;

    glVertexPointer(2, GL_FLOAT, 0, 0);
    GLERRCHECK;
    /*unsigned int numChunks = 100;
    unsigned int chunkSize = nElemDraw / numChunks;
    unsigned int i = 0;
    while (i + chunkSize < nElemDraw)
    {
        glDrawRangeElements(GL_POINTS, i, i + chunkSize, chunkSize, GL_UNSIGNED_INT, 0);
        i += chunkSize;
        GLERRCHECK;
    }

    chunkSize = nElemDraw - i;
    glDrawRangeElements(GL_POINTS, i, i + chunkSize, chunkSize, GL_UNSIGNED_INT, 0);
    */
    glDrawElements(GL_POINTS, nElemDraw, GL_UNSIGNED_INT, 0);
    GLERRCHECK;
    glBindBuffer(GL_ARRAY_BUFFER_ARB, 0);
    GLERRCHECK;
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
    GLERRCHECK;
    glDisableClientState(GL_VERTEX_ARRAY);
    GLERRCHECK;
}
```
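If chunking ever becomes necessary, the bookkeeping from the commented-out loop can be worked out separately from GL. A sketch (`makeChunks` is a throwaway helper; each `(first, count)` pair would feed a call like `glDrawArrays(GL_POINTS, first, count)`):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical helper: split n elements into (first, count) ranges
// of at most chunkSize elements, covering 0..n-1 exactly once.
std::vector<std::pair<unsigned int, unsigned int>> makeChunks(unsigned int n, unsigned int chunkSize)
{
    std::vector<std::pair<unsigned int, unsigned int>> chunks;
    for (unsigned int first = 0; first < n; first += chunkSize)
    {
        // The last chunk may be smaller than chunkSize.
        unsigned int count = (first + chunkSize <= n) ? chunkSize : n - first;
        chunks.push_back({first, count});
    }
    return chunks;
}
```

Note that a chunked glDrawRangeElements loop would also need to advance the indices pointer by `first * sizeof(GLuint)` each iteration; passing 0 every time re-reads the first chunk of the index buffer.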


At this point I feel like I must be missing something blindingly obvious (Am I using indices wrong for GL_POINTS? After all, I'm a bit confused as to why I'd need indices at all rather than just a VBO. Documentation seems to indicate that I'm doing it the right way, though.). Can anyone help? [Edited by - dopplex on February 15, 2010 3:21:20 PM]

##### Share on other sites
swiftcoder    18437
Quote:
 Original post by dopplex: At this point I feel like I must be missing something blindingly obvious (Am I using indices wrong for GL_POINTS? After all, I'm a bit confused as to why I'd need indices at all rather than just a VBO. Documentation seems to indicate that I'm doing it the right way, though.). Can anyone help?
You don't need indices for GL_POINTS. Just use glDrawArrays() to render from the VBO without indices.

On the flip side, indices shouldn't actually hurt (apart from chewing up some extra bandwidth), and I don't see anything immediately wrong with your code...

##### Share on other sites
dopplex    164
I didn't know about glDrawArrays - let me give it a shot and see if the code happens to work with it.

(brief pause)

Well, it appears to work now! Thanks a lot.

Still no clue what was going wrong, though I guess it doesn't matter that much!

Edit: May have spoken too soon... It went crashless for a bit, but then abruptly started crashing again after I made a change to the shader it was using (the shader compiled and validated). If it's shader related, I was looking in the completely wrong direction. I can't seem to get it back to a state where it doesn't crash, either.

[Edited by - dopplex on February 14, 2010 6:04:08 PM]

##### Share on other sites
swiftcoder    18437
What graphics card/OpenGL driver/OS are you running?

##### Share on other sites
dopplex    164
Windows 7 (64-bit), Radeon 5970, OpenGL driver version 6.14.10.9232

Extra note: Immediate mode seems to be working (the shader is doing some weird stuff, but in theory that's a separate issue to debug), so I'm not sure the shader is actually responsible for the crashes.

glDrawArrays is working if I cut down the number of elements to under 10k or so.

##### Share on other sites
swiftcoder    18437
Quote:
 Original post by dopplex: Windows 7 (64-bit), Radeon 5970, OpenGL driver version 6.14.10.9232
I don't know if the version numbers equate exactly, but that looks ancient. My Radeon 4870 is running on driver version 8.690.0.0.

Recommend you try installing the latest Catalyst drivers from AMD's site.
Quote:
 glDrawArrays is working if I cut down the number of elements to under 10k or so.
I routinely render 300,000+ vertices in a single glDrawArrays() call - unless your drivers are bugged, that shouldn't be a problem.

##### Share on other sites
dopplex    164
It is a recent one - the number I gave was the OpenGL driver version number (it's different from the version number of the overall Catalyst package). That's Catalyst 9.12. Sorry for the confusion! (I'll check for updates anyway, just in case.)

I'm pretty sure it has to be something I'm causing. I'm just somewhat at a loss for what it might be.

##### Share on other sites
dopplex    164
I'm still having this problem.

Here's what I've done so far (in addition to what I originally listed):

1. Drivers are now fully up to date (they weren't THAT out of date before...).
2. I've now added the following code chunk before the glDrawArrays call to make sure that no client states were set that I wasn't providing data for:

```cpp
glEnableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);
glDisableClientState(GL_EDGE_FLAG_ARRAY);
glDisableClientState(GL_FOG_COORD_ARRAY);
glDisableClientState(GL_SECONDARY_COLOR_ARRAY);
```

Those are all the possible client states that I'm seeing on the doc page... but it doesn't help.

I've also tried both regular client-side memory and VBOs. No change (and I've verified that both held the proper data by mapping them and writing the contents to file).

I have found one thing that causes it not to crash: using glDrawArrays with no shader bound.

I think I may have a vague idea of what may be causing this now, but I'm going to see if I can verify it. (I'm hoping I'm not right, since I'm not entirely sure how to deal with the problem if I am!)

##### Share on other sites
I haven't looked in detail, but I've got a feeling you are writing past the end of the array with this line:

```cpp
tempVertices[numElem * (i * height + j) + 1] = (float) j / height + yDiff;
```

I think the issues might be down to corrupting memory rather than GL.

P.S. Also, what is numElem for?
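For reference, the arithmetic behind that suspicion can be checked in isolation (a sketch with throwaway helper names; `numElem` appears to be the number of floats stored per vertex, 2 for a UV pair):

```cpp
#include <cassert>
#include <cstddef>

// Highest float slot the nested loop writes: the "+ 1" component of
// the final texel, i.e. numElem * (width*height - 1) + 1.
std::size_t highestSlotWritten(std::size_t width, std::size_t height, std::size_t numElem)
{
    return numElem * (width * height - 1) + 1;
}

// Float slots actually allocated: VBOSize / sizeof(float).
std::size_t slotsAllocated(std::size_t width, std::size_t height, std::size_t numElem)
{
    return numElem * width * height;
}
```

With numElem == 2, the last slot written is `2*width*height - 1`, which is exactly the final element of a `2*width*height` allocation, so that particular line stays in bounds.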