Why doesn't glDrawElements work on Nvidia?

I'm trying to use glDrawElements. It works fine on my ATI 9800 Pro, my friend's X300, and some rubbish Intel card at uni, but when I test my program on Nvidia cards it crashes. I tried it on two different GeForce FX 5200s and a Quadro4 380 XGL. Has anyone else had problems with this? I don't see why it shouldn't work, since it's not an ATI-specific extension. Thanks.

Works here on a GeForce 2, with both plain vertex arrays (VA) and VBOs.

VA:

if( vertex_array )
{
    glEnableClientState( GL_VERTEX_ARRAY );
    glVertexPointer( 3, GL_FLOAT, 0, cast(void *)vertex_array );
}
if( normal_array )
{
    glEnableClientState( GL_NORMAL_ARRAY );
    glNormalPointer( GL_FLOAT, 0, cast(void *)normal_array );
}
if( texcoord_array )
{
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );
    glTexCoordPointer( 2, GL_FLOAT, 0, cast(void *)texcoord_array );
}

glDrawElements( primitivetype, index_array.length, GL_UNSIGNED_SHORT, cast(void *)index_array );

glDisableClientState( GL_VERTEX_ARRAY );
glDisableClientState( GL_NORMAL_ARRAY );
glDisableClientState( GL_TEXTURE_COORD_ARRAY );



VBO:

if( vertex_buffer )
{
    glEnableClientState( GL_VERTEX_ARRAY );
    glBindBufferARB( GL_ARRAY_BUFFER_ARB, vertex_buffer );
    glVertexPointer( 3, GL_FLOAT, 0, null );
}
if( normal_buffer )
{
    glEnableClientState( GL_NORMAL_ARRAY );
    glBindBufferARB( GL_ARRAY_BUFFER_ARB, normal_buffer );
    glNormalPointer( GL_FLOAT, 0, null );
}
if( texcoord_buffer )
{
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );
    glBindBufferARB( GL_ARRAY_BUFFER_ARB, texcoord_buffer );
    glTexCoordPointer( 2, GL_FLOAT, 0, null );
}

glBindBufferARB( GL_ELEMENT_ARRAY_BUFFER_ARB, index_buffer );
glDrawElements( primitivetype, index_count, GL_UNSIGNED_SHORT, null );

glDisableClientState( GL_VERTEX_ARRAY );
glDisableClientState( GL_NORMAL_ARRAY );
glDisableClientState( GL_TEXTURE_COORD_ARRAY );


Here's my code:


void C3dMesh::draw(Cmaterial *mat)
{
    if (texCoord)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mat->tex);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoord);
    }

    glMaterialfv(GL_FRONT, GL_AMBIENT, mat->ambient);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, mat->diffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat->specular);
    glMaterialf (GL_FRONT, GL_SHININESS, mat->shininess);

    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);

    glDrawElements(GL_TRIANGLES, numFaces * 3, GL_UNSIGNED_INT, indices);

    if (texCoord)
    {
        glDisable(GL_TEXTURE_2D);
    }
}



I'm in the process of converting to VBOs and am just about to test them... I'll get back to you on that.

Yeah, I enable all the arrays beforehand, and there are the right number of indices. Like I said, it works fine on most cards, just not Nvidia.

I got a VBO working on Nvidia, so I'm now in the process of converting everything to VBOs, which is also giving me more hassle. Sigh...

glDrawElements() is an absolutely vital API call and a core part of any existing 3D rendering engine. The chances of it exhibiting such a hard bug in such a simple setup are very, very slim; in fact, approaching the impossible. It would have been discovered a long time ago.

The chance that this is a bug in your code is something like 99.99%. Nvidia drivers are known to be much less tolerant of invalid input than ATI's. If you reference an out-of-range index on Nvidia, for example, you will get a crash with almost 100% certainty, while ATI drivers often simply ignore it.

So I'd recommend checking the data you feed to the API.
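
In case it helps, here is a minimal sketch of that kind of sanity check, assuming a hypothetical setup where the indices live in a std::vector<GLuint> and you know the vertex count. It just asserts that every index is in range before the draw call, since an out-of-range index is exactly the sort of bad input that a strict driver turns into a crash:

#include <vector>
#include <cassert>
#include <GL/gl.h>

// Hypothetical helper: validate the index data before handing it to
// glDrawElements. An out-of-range index makes the driver read past the
// end of the vertex arrays.
void drawElementsChecked(GLenum mode,
                         const std::vector<GLuint> &indices,
                         GLuint vertexCount)
{
    for (std::size_t i = 0; i < indices.size(); ++i)
        assert(indices[i] < vertexCount && "index out of range");

    if (!indices.empty())
        glDrawElements(mode, (GLsizei)indices.size(),
                       GL_UNSIGNED_INT, &indices[0]);
}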

You have something else enabled.

For example, perhaps this is somewhere in your code:

glEnableClientState( GL_COLOR_ARRAY );

If you then call

glTexCoordPointer(2, GL_FLOAT, 0, texCoord);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals);

it will crash in nvoglnt.dll, which is what you're seeing.

Most likely it's another texcoord array that you have enabled (to see what's enabled at the time, use gDEBugger or GLIntercept). Failing that, check that the arrays are large enough.
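
If you don't have one of those tools handy, a quick-and-dirty alternative is to ask OpenGL itself what's enabled right before the draw call. A minimal sketch (just throwaway debug code, the function name is mine):

#include <cstdio>
#include <GL/gl.h>

// Debug helper: print which client-side arrays are currently enabled,
// called right before the glDrawElements that crashes.
void dumpClientState()
{
    printf("GL_VERTEX_ARRAY        : %d\n", (int)glIsEnabled(GL_VERTEX_ARRAY));
    printf("GL_NORMAL_ARRAY        : %d\n", (int)glIsEnabled(GL_NORMAL_ARRAY));
    printf("GL_COLOR_ARRAY         : %d\n", (int)glIsEnabled(GL_COLOR_ARRAY));
    printf("GL_TEXTURE_COORD_ARRAY : %d\n", (int)glIsEnabled(GL_TEXTURE_COORD_ARRAY));
}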

Thanks zedzeek and Yann. I'd fixed it by the time I read your posts, but you confirmed what I had found and was confused about.

I've converted it to VBOs now, but I was still having some (i.e. loads of) problems. Then I noticed that the texture coordinate array was still enabled when I wasn't using it (just like you thought, zedzeek). Also, I was testing on my machine (ATI) and it was running fine but crashing on my friend's laptop (Nvidia), so thanks for clearing that up for me, Yann.

I can see I'm going to have to be much more careful testing my stuff now, but at least I know where to look first when I get problems. And to think, I used to love Nvidia!

In case anyone is interested, here's my new code. My frame rate went from ~400 to ~550.


void C3dMesh::draw(Cmaterial *mat)
{
    if (texCoord)
    {
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, mat->tex);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboTexCoord);
        glTexCoordPointer(2, GL_FLOAT, 0, (char *)NULL);
    }
    else
    {
        glDisable(GL_TEXTURE_2D);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY); // YOU BUGGER!!!!
    }

    glMaterialfv(GL_FRONT, GL_AMBIENT, mat->ambient);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, mat->diffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat->specular);
    glMaterialf (GL_FRONT, GL_SHININESS, mat->shininess);

    /* // Old VA code
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glNormalPointer(GL_FLOAT, 0, normals);

    glDrawElements(GL_TRIANGLES, numFaces * 3, GL_UNSIGNED_INT, indices);
    */

    // New VBO code
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboVertex);
    glVertexPointer(3, GL_FLOAT, 0, (char *)NULL);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboNormal);
    glNormalPointer(GL_FLOAT, 0, (char *)NULL);

    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, vboIndices);

    glDrawElements(GL_TRIANGLES, numFaces * 3, GL_UNSIGNED_INT, (char *)NULL);

    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
}



Thanks again you guys.

Quote:
Original post by Ademan555
*kisses his ATI card*

Wow, and I was gonna switch to Nvidia... my VBO MD2 code would be so screwed if it were on an Nvidia card (as it still doesn't work, haha).

-Dan
Quite the opposite: it's a very, very bad idea to develop on a machine that tolerates errors.

I have a friend who tests all his production GL code on a 3Dlabs Wildcat, because the 3Dlabs drivers throw every single possible error and warning in your API code back at you, and afterwards his code is always rock solid, no matter what card you run it on.

I like drivers that tolerate errors, but it's a really bad idea when developing. I have two cards, an Nvidia Ti 4200 and a Radeon 9500 Pro; I code on one and test on the other. On any given test run I can find 5-6 potential crashes, and sometimes (not often) a situation where the code will only run on the development card.
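
For what it's worth, one cheap habit that helps regardless of which card you develop on is wrapping GL calls in a glGetError check during debug builds. It won't catch everything (bad pointers and out-of-range indices are undefined behaviour rather than GL errors), but it flags a lot of sloppy input early. A rough sketch of such a macro (my own, not anything from a library):

#include <cstdio>
#include <GL/gl.h>

// Debug macro: run a GL call, then report any error glGetError flags,
// along with the call text and source location.
#define GL_CHECK(call)                                                  \
    do {                                                                \
        call;                                                           \
        GLenum err = glGetError();                                      \
        if (err != GL_NO_ERROR)                                         \
            fprintf(stderr, "%s failed at %s:%d (error 0x%04X)\n",      \
                    #call, __FILE__, __LINE__, err);                    \
    } while (0)

// Usage:
// GL_CHECK(glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0));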
