[solved] nVidia's GLSL implementation?

Started by V3rt3x; 7 comments, last by V3rt3x 18 years, 8 months ago
Hello, I'm getting strange behaviour with GLSL on NVIDIA cards (I haven't tried on ATI cards since I don't have one). It crashes for no apparent reason: for example, I add 4 blank lines in my shader, and it causes a segmentation fault at compile time! o_O I comment out a line, it crashes... unless I remove it, or add blank lines >_<

Another (concrete) example: I want to implement linear interpolation between two frames of an MD2 model in hardware. I send the vertices of the two frames to GLSL, one array via glVertexPointer, the other via glVertexAttribPointer. Here is the vertex shader:
uniform float fInterp;

attribute vec3 secondVertex;


void main()
{
  vec4 v1 = gl_Vertex;
  vec4 v2 = vec4(secondVertex,1.0);
  vec4 itp = mix( v1, v2, fInterp );

  gl_Position = gl_ModelViewProjectionMatrix * itp;
}

At run time, it seems that secondVertex is always 0! To ensure the data was really sent to OpenGL, I changed my shader to this:
uniform float fInterp;

attribute vec3 secondVertex;

void main()
{
  gl_Position = gl_ModelViewProjectionMatrix * vec4(secondVertex,1.0);
}

This one worked well! I found that if I use the gl_Vertex variable, for example by declaring a dummy temporary variable assigned the gl_Vertex value (vec4 v1 = gl_Vertex;), it breaks the shader: secondVertex becomes null! It makes no sense!!! Why would using a built-in attribute break the others? I also tried this code, sending the two arrays via glVertexAttribPointer:
uniform float fInterp;

attribute vec3 firstVertex;
attribute vec3 secondVertex;


void main()
{
  vec4 v1 = vec4(firstVertex,1.0);
  vec4 v2 = vec4(secondVertex,1.0);

  vec4 itp = mix( v1, v2, fInterp );

  gl_Position = gl_ModelViewProjectionMatrix * itp;
}

It gives the same result as using gl_Vertex: secondVertex is null... Finally, I got my linear interpolation working by passing my second vertex array via... glTexCoordPointer... *vomit*
uniform float fInterp;

attribute vec3 firstVertex;
attribute vec3 secondVertex;

void main()
{
  vec4 v1 = gl_Vertex;
  vec4 v2 = gl_MultiTexCoord1;
  vec4 itp = mix( v1, v2, fInterp );

  gl_Position = gl_ModelViewProjectionMatrix * itp;
}

I tried multiple attribute locations (6, 7, 3, 4, 0); it doesn't change the result. I have seen some GLSL demos (from NVIDIA) running fine and using only glVertexAttrib to send data. I modified one of them to use glVertex, glNormal, glTexCoord and glVertexAttrib together (it was not via vertex arrays), and it still worked well... Where's the problem in my code? Is NVIDIA's GLSL implementation really so poor (crashing over a blank line)? [Edited by - V3rt3x on August 5, 2005 4:22:24 AM]
It sounds like a problem with NVIDIA's parser, unfortunately (why isn't anyone using 3DLabs' excellent compiler frontend?). Have you tried getting newer drivers (beta drivers maybe?), or even going back to older drivers, to see if that changes anything?

You can always try running the code through GLSLValidate and see if there are any syntax problems NVIDIA's parser doesn't find. That's what I have to do most of the time, since I develop on a 6600 at home and have to show off code at uni on an ATI card (I've shown up with broken code way too many times).

[Edited by - rollo on August 4, 2005 3:45:09 PM]
For my own edification, could you post the rendering code you are using with each shader?
@rollo: I have tested my shaders with glslparser; no error was reported. I use recent drivers (I'm running Linux).

@_the_phantom_: Here are my rendering functions:

The main display func:
void Display( void )
{
  // Clean window
  glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
  glLoadIdentity();

  // some code...

  glUseProgram( lerpProg );
  glBindAttribLocationARB( lerpProg, 5, "firstVertex" );
  glBindAttribLocationARB( lerpProg, 6, "secondVertex" );
  checkOpenGLErrors( "display" );

  // Draw objects
  cyberpunk.DrawObjectItp( bAnimated );
  weapon.DrawObjectItp( bAnimated );

  glUseProgram( 0 );
  glDisable( GL_LIGHTING );
  glDisable( GL_TEXTURE_2D );
}


The mesh “setupVertexArraysItp” func:
void Mesh::setupVertexArraysItp( int frameA, int frameB, float interp )
{
  _itpFrame.vertexArray  = _frames[ frameA ].vertexArray;
  _itpFrame.normalArray  = _frames[ frameA ].normalArray;
  _itpFrame2.vertexArray = _frames[ frameB ].vertexArray;
  _itpFrame2.normalArray = _frames[ frameB ].normalArray;

  // Look up the uniform location and set the interpolation factor
  GLint interpLoc = glGetUniformLocation( lerpProg, "fInterp" );
  glUniform1f( interpLoc, interp );
}


The mesh rendering func:
void Mesh::DrawModelItpWithVertexArrays( void )
{
  glEnableClientState( GL_VERTEX_ARRAY );
  glEnableClientState( GL_NORMAL_ARRAY );
  glEnableClientState( GL_TEXTURE_COORD_ARRAY );
  glEnableVertexAttribArray( 5 );
  glEnableVertexAttribArray( 6 );

  // Upload model data to OpenGL
  glVertexPointer( 3, GL_FLOAT, 0, _itpFrame.vertexArray );
  glNormalPointer( GL_FLOAT, 0, _itpFrame.normalArray );
  glClientActiveTexture( GL_TEXTURE0 );
  glTexCoordPointer( 2, GL_FLOAT, 0, _texCoordArray );
  glClientActiveTexture( GL_TEXTURE1 );
  glTexCoordPointer( 3, GL_FLOAT, 0, _itpFrame2.vertexArray );
  glVertexAttribPointer( 5, 3, GL_FLOAT, GL_FALSE, 0, _itpFrame.vertexArray );
  glVertexAttribPointer( 6, 3, GL_FLOAT, GL_FALSE, 0, _itpFrame2.vertexArray );

  // Bind to model's texture
  glBindTexture( GL_TEXTURE_2D, _texId );

  // Draw the model
  glDrawElements( GL_TRIANGLES, _numTris * 3, GL_UNSIGNED_INT, _vertIndices );

  glDisableClientState( GL_VERTEX_ARRAY );
  glDisableClientState( GL_NORMAL_ARRAY );
  glDisableClientState( GL_TEXTURE_COORD_ARRAY );
  glDisableVertexAttribArray( 5 );
  glDisableVertexAttribArray( 6 );
}


You can download the demo at http://tfc.duke.free.fr/old/models/md2opti.zip
The Display() function is in Main.cpp; the other two shown above are in Md2.cpp. Shader-loading code is in Shaders.h/.cpp.
Hmmmm, two points:

1) You don't need to constantly requery the shader for the uniform; in fact, this is a pretty slow operation. Ask once and cache the result (see the sketch after these points).
2) When drawing, there MUST be data sent to attribute 0 in order for vertex submission to be completed; as such, anything bound to attribute zero must also be the last data sent.
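
For point 1, a minimal sketch of the caching idea, reusing the lerpProg and fInterp names from the code above (the InitShader function and the interpLoc cache variable are made up for illustration):

// Hypothetical cache for the uniform location: queried once after linking,
// then reused every frame.
GLint interpLoc = -1;

void InitShader( void )
{
  glLinkProgram( lerpProg );
  // Ask once and cache the result
  interpLoc = glGetUniformLocation( lerpProg, "fInterp" );
}

void Mesh::setupVertexArraysItp( int frameA, int frameB, float interp )
{
  // ... set up the frame arrays as before ...
  // No per-frame glGetUniformLocation() call needed here any more
  glUniform1f( interpLoc, interp );
}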

Also, NV has some rules about how attributes are mapped; I'll see if I can dig up the document again...

edit: I downloaded the code hoping to see an exe... however... errm... lacking... and I don't have time to sort out a project etc.
Screenshots are always handy for highlighting a problem, however...
There was a Linux executable though :)

The demo worked fine for me; does that code break for you? I ran it on a GeForce 6600 with driver version 76.67.
In the Orange Book, page 96, it says that the gl_Vertex attribute, like attribute 0, signals the end of the vertex. Since I send the first frame's vertices through the gl_Vertex attribute, I think that's correct, no?

And yes, I forgot to mention it, but the demo works fine: it ships with the working shader. You just have to replace lerp.vert's code with one of the listings I posted in the original post to see the problem... I have a GeForce FX 5500 and 76.64 drivers. If I have time tomorrow, I'll upgrade my drivers and build a Windows executable. Now it's time to sleep for me :)
I changed the lerp.vert code and I get the same error as you. :/

EDIT: I thought you must be doing something fishy with binding your attribute locations, and I was right.

You can't use glBindAttribLocation after your shader has been linked; it has no effect then. See the docs here:
http://developer.3dlabs.com/documents/GLmanpages/glBindAttribLocation.htm

This means that firstVertex/secondVertex were automatically bound to some other indices, which is why it didn't work. I recommend querying the attribute locations after linking instead of setting them yourself (with glGetAttribLocation). That's what I do in my code and it seems to work fine.
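
A minimal sketch of both fixes, using the names from the demo code above (exact placement in the shader-loading code is an assumption):

// Option 1: bind the attribute locations BEFORE linking; the bindings
// only take effect when glLinkProgram is called.
glBindAttribLocation( lerpProg, 5, "firstVertex" );
glBindAttribLocation( lerpProg, 6, "secondVertex" );
glLinkProgram( lerpProg );

// Option 2: let the linker pick the locations, query them after linking,
// and use the returned indices in glEnableVertexAttribArray /
// glVertexAttribPointer instead of the hard-coded 5 and 6.
glLinkProgram( lerpProg );
GLint firstLoc  = glGetAttribLocation( lerpProg, "firstVertex" );
GLint secondLoc = glGetAttribLocation( lerpProg, "secondVertex" );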
You're right rollo! Now it works! Thanks a lot!

