Why doesn't glDrawElements work on Nvidia?

I'm trying to use glDrawElements, and it works fine on my ATI 9800 Pro, my friend's X300, and some rubbish Intel card at uni, but when I test my program on Nvidia cards it crashes. I tried it on two different GeForce FX 5200s and a Quadro4 380 XGL. Has anyone else had problems with this? I don't see why it shouldn't work, as it's not an extension specific to ATI. Thanks.
---When I'm in command, every mission's a suicide mission!
Post the relevant code that you're using.
If at first you don't succeed, redefine success.
Works here on a GF2, with both VAs and VBOs.

VA:
    if( vertex_array )
    {
        glEnableClientState( GL_VERTEX_ARRAY );
        glVertexPointer( 3, GL_FLOAT, 0, cast(void *)vertex_array );
    }
    if( normal_array )
    {
        glEnableClientState( GL_NORMAL_ARRAY );
        glNormalPointer( GL_FLOAT, 0, cast(void *)normal_array );
    }
    if( texcoord_array )
    {
        glEnableClientState( GL_TEXTURE_COORD_ARRAY );
        glTexCoordPointer( 2, GL_FLOAT, 0, cast(void *)texcoord_array );
    }

    glDrawElements( primitivetype, index_array.length, GL_UNSIGNED_SHORT, cast(void*)index_array );

    glDisableClientState( GL_VERTEX_ARRAY );
    glDisableClientState( GL_NORMAL_ARRAY );
    glDisableClientState( GL_TEXTURE_COORD_ARRAY );


VBO:
    if( vertex_buffer )
    {
        glEnableClientState( GL_VERTEX_ARRAY );
        glBindBufferARB( GL_ARRAY_BUFFER_ARB, vertex_buffer );
        glVertexPointer( 3, GL_FLOAT, 0, null );
    }
    if( normal_buffer )
    {
        glEnableClientState( GL_NORMAL_ARRAY );
        glBindBufferARB( GL_ARRAY_BUFFER_ARB, normal_buffer );
        glNormalPointer( GL_FLOAT, 0, null );
    }
    if( texcoord_buffer )
    {
        glEnableClientState( GL_TEXTURE_COORD_ARRAY );
        glBindBufferARB( GL_ARRAY_BUFFER_ARB, texcoord_buffer );
        glTexCoordPointer( 2, GL_FLOAT, 0, null );
    }

    glBindBufferARB( GL_ELEMENT_ARRAY_BUFFER_ARB, index_buffer );
    glDrawElements( primitivetype, index_count, GL_UNSIGNED_SHORT, null );

    glDisableClientState( GL_VERTEX_ARRAY );
    glDisableClientState( GL_NORMAL_ARRAY );
    glDisableClientState( GL_TEXTURE_COORD_ARRAY );
Here's my code:

void C3dMesh::draw(Cmaterial *mat)
{
    if (texCoord)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mat->tex);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoord);
    }

    glMaterialfv(GL_FRONT, GL_AMBIENT,   mat->ambient);
    glMaterialfv(GL_FRONT, GL_DIFFUSE,   mat->diffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR,  mat->specular);
    glMaterialf (GL_FRONT, GL_SHININESS, mat->shininess);

    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);
    glDrawElements(GL_TRIANGLES, numFaces * 3, GL_UNSIGNED_INT, indices);

    if (texCoord)
    {
        glDisable(GL_TEXTURE_2D);
    }
}


I'm in the process of converting to VBOs and just about to test them... I'll get back to you on that.
---When I'm in command, every mission's a suicide mission!
Are you enabling the respective client states needed for vertex array operation? You don't seem to be in that code. Also, are there actually numFaces * 3 indices in the array? If not, it'll cause a crash.
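
For reference, a minimal sketch of the usual pattern, using the names from your code (adapt it for whichever arrays you actually supply):

    // Enable exactly the arrays you supply -- no more, no less.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);

    // The count must not exceed the number of indices actually in the array.
    glDrawElements(GL_TRIANGLES, numFaces * 3, GL_UNSIGNED_INT, indices);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);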
If at first you don't succeed, redefine success.
Yeah, I enable all the arrays beforehand, and there are the right number of indices. Like I said, it works fine on most cards, just not Nvidia.

I got a VBO working on Nvidia, so I'm just in the process of converting everything to VBOs now, which is also giving me more hassle. Sigh...
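
In case it helps anyone later, the upload step I'm using looks roughly like this. It's only a sketch: it assumes the ARB entry points are already loaded through your extension mechanism, and numVerts is just my vertex count.

    GLuint vboVertex;
    glGenBuffersARB(1, &vboVertex);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboVertex);
    // Upload once; GL_STATIC_DRAW_ARB because the mesh data never changes.
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, numVerts * 3 * sizeof(GLfloat),
                    vertices, GL_STATIC_DRAW_ARB);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);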
---When I'm in command, every mission's a suicide mission!
glDrawElements() is an absolutely vital API call and a core part of any existing 3D rendering engine. The chances of it exhibiting such a hard bug in such a simple setup are very, very slim; in fact, approaching the impossible. It would have been discovered a long time ago.

The chance that this is a bug in your code is something like 99.99%. Nvidia's drivers are known to be far less tolerant of invalid input than ATI's. If you reference an out-of-range index on Nvidia, for example, you will get a crash with almost 100% certainty, while ATI's drivers often simply ignore it.

So I'd recommend checking the data you feed to the API.
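
For example, a quick sanity check on the index data before the draw call (a sketch; the names are placeholders for whatever your mesh class uses):

    #include <cassert>

    // Every index must address a valid vertex. An index >= numVerts makes the
    // driver read past the end of the vertex array: ATI's driver often
    // survives that, Nvidia's usually crashes.
    void validateIndices(const unsigned int *indices, unsigned int numIndices,
                         unsigned int numVerts)
    {
        for (unsigned int i = 0; i < numIndices; ++i)
            assert(indices[i] < numVerts);
    }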
*kisses his ATI card*

Wow, and I was gonna switch to Nvidia... my VBO MD2 code would be so screwed if it were on an Nvidia card (as it still doesn't work, haha).

-Dan
You have something else enabled.

E.g., perhaps this is somewhere in your code:

    glEnableClientState( GL_COLOR_ARRAY );

If you then call

    glTexCoordPointer(2, GL_FLOAT, 0, texCoord);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);

it will crash in nvoglnt.dll <- this is what you see.

Probably it's another texcoord array that you have enabled (to see what's enabled at the time, use a GL debugger or GLIntercept).
Failing that, check that the arrays are large enough.
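
If you don't have a GL debugger handy, you can also dump the relevant state yourself right before the draw call, something like this (a sketch using glIsEnabled):

    #include <cstdio>

    // Print which client-side arrays are currently enabled.
    // Anything unexpected here is a prime crash suspect.
    printf("VERTEX:   %d\n", glIsEnabled(GL_VERTEX_ARRAY));
    printf("NORMAL:   %d\n", glIsEnabled(GL_NORMAL_ARRAY));
    printf("COLOR:    %d\n", glIsEnabled(GL_COLOR_ARRAY));
    printf("TEXCOORD: %d\n", glIsEnabled(GL_TEXTURE_COORD_ARRAY));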
Thanks zedzeek and Yann. I'd fixed it by the time I read your posts, but you confirmed what I had found and was confused about.

I've converted it to VBOs now, but I was still having some (i.e. loads of) problems. Then I noticed that the texture coordinate array was still enabled when I wasn't using it (just like you thought, zedzeek). Also, I was testing it on my machine (ATI) and it was running fine but crashing on my friend's laptop (Nvidia), so thanks for clearing that up for me, Yann.

I can see I'm going to have to be much more careful testing my stuff now, but at least I know where to look first when I get problems. And to think, I used to love Nvidia!

In case anyone is interested, here's my new code; my frame rate went from ~400 to ~550:

void C3dMesh::draw(Cmaterial *mat)
{
    if (texCoord)
    {
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mat->tex);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboTexCoord);
        glTexCoordPointer(2, GL_FLOAT, 0, (char *)NULL);
    }
    else
    {
        glDisable(GL_TEXTURE_2D);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY); // YOU BUGGER!!!!
    }

    glMaterialfv(GL_FRONT, GL_AMBIENT,   mat->ambient);
    glMaterialfv(GL_FRONT, GL_DIFFUSE,   mat->diffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR,  mat->specular);
    glMaterialf (GL_FRONT, GL_SHININESS, mat->shininess);

    /*
    // Old VA code
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);
    glDrawElements(GL_TRIANGLES, numFaces * 3, GL_UNSIGNED_INT, indices);
    */

    // New VBOs --- VBOs --- VBOs --- VBOs --- VBOs --- VBOs ---
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboVertex);
    glVertexPointer(3, GL_FLOAT, 0, (char *)NULL);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboNormal);
    glNormalPointer(GL_FLOAT, 0, (char *)NULL);

    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, vboIndices);
    glDrawElements(GL_TRIANGLES, numFaces * 3, GL_UNSIGNED_INT, (char *)NULL);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
}


Thanks again you guys.
---When I'm in command, every mission's a suicide mission!
