glCreateProgram/glCreateProgramObjectARB not working



#1 RIZAX   Members   -  Reputation: 127


Posted 06 May 2008 - 03:26 PM

I have the oddest problem: would anyone know why both glCreateProgram and glCreateProgramObjectARB refuse to return a valid value on non-NVIDIA hardware? I have tried my shader on 5 machines (2 NVIDIA, 2 ATI, 1 Intel) and only the NVIDIA ones work. The program compiles/links fine on all platforms, but on the non-NVIDIA platforms, when I try to create a program instance I get nothing but invalid values back.

Platforms:
Dual 7800GT = success
Single 8800GT = success
X1600 = fail
9800 XT = fail
Integrated Intel (some chipset - didn't get which one, but it was a Dell with a Centrino in it) = fail

Thanks for any feedback.


#2 MARS_999   Members   -  Reputation: 1299


Posted 06 May 2008 - 08:33 PM

What does the error log say? Checking for errors is a start.

#3 samoth   Crossbones+   -  Reputation: 5140


Posted 06 May 2008 - 08:53 PM

"Works on nVidia but not on all others" is usually due to illegal GLSL code which is using something that's legal in Cg. The nVidia compiler is permissive with those things, as it compiles via Cg anyway. The others aren't and fail (being "permissive" isn't necessarily good in this case, however).

For example, using the "half" datatype or assigning an int to a float (GLSL does no implicit casts) are things to look for.
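For illustration, something along these lines (a hypothetical snippet, not the OP's shader) will typically compile on NVIDIA's Cg-based compiler but be rejected by stricter GLSL implementations:

// Illustrative fragment shader source with two common offenders:
const char* badFragSrc =
    "void main()                                                                  \n"
    "{                                                                            \n"
    "    half h = 0.5;    // 'half' is a Cg/NVIDIA type, not standard GLSL        \n"
    "    float f = 1;     // int literal assigned to a float: GLSL has no implicit cast \n"
    "    gl_FragColor = vec4(f);                                                  \n"
    "}                                                                            \n";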

#4 RIZAX   Members   -  Reputation: 127


Posted 06 May 2008 - 11:15 PM

Thanks for the replies,

The error code is 0x501 when I try to execute either, so that's GL_INVALID_VALUE.

I understand that my GLSL might not be the greatest (actually it isn't, given that the frag shader is one giant main), and I will look at cleaning it up, but first I need to be able to create a program instance, which is where the error comes from.

The error happens in my loader class, in the constructor of all places.

The constructor code is (typing this at work, so bear with me if there's some syntax issue):

Shader()
{
    GLenum error;
    program = glCreateProgram();
    if ((error = glGetError()) != GL_NO_ERROR)
        exit(error);
}



The error is always 0x501 on non-NVIDIA machines. BTW, I'm using GLEW for the extensions, in case that has a bearing on the problem.

[Edited by - RIZAX on May 7, 2008 6:15:24 AM]

#5 MARS_999   Members   -  Reputation: 1299


Posted 07 May 2008 - 05:08 AM

You should try this:

std::cout << gluErrorString(glGetError());

and see what is reported. Also, if you haven't already, download RenderMonkey and dump your shader code into it to see if you have errors. It will tell you if you do.
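If it helps, a tiny helper along those lines (just a sketch; gluErrorString needs GLU):

#include <iostream>
#include <GL/glu.h>

// Print a readable description of the most recent GL error, if any.
void checkGL(const char* where)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        std::cout << where << ": " << gluErrorString(err) << std::endl;
}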

#6 phantom   Moderators   -  Reputation: 7923


Posted 07 May 2008 - 05:33 AM

It's all very well you people saying 'check the GLSL' but it's the program object creation which is failing; we haven't even got to the source code yet.

@OP

The only thing I can say is to check that you have a valid context, as the code posted looks fine to me.

#7 V-man   Members   -  Reputation: 805


Posted 07 May 2008 - 07:09 AM

He probably has a class, and this is his constructor:


Shader()
{
    GLenum error;
    program = glCreateProgram();
    if ((error = glGetError()) != GL_NO_ERROR)
        exit(error);
}



so the constructor gets called while there is no GL context or even a window. It's the typical C++/Java/other OO language problem.
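A minimal sketch of one way around it (names here are illustrative, not the OP's code): keep the constructor free of GL calls and create the program object only once the window/context exists.

#include <GL/glew.h>
#include <GL/glut.h>

class Shader
{
public:
    Shader() : program(0) { }                      // no GL calls here
    void init() { program = glCreateProgram(); }   // call only after a context exists
private:
    GLuint program;
};

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("context first");   // the GL context is created here
    glewInit();                          // load the GL 2.0 entry points

    Shader shader;
    shader.init();                       // safe: glCreateProgram now has a context to work with
    return 0;
}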

#8 RIZAX   Members   -  Reputation: 127


Posted 07 May 2008 - 12:14 PM

Everything should be up and running. To verify everything, I put a small main together just to see what was happening:


int main(int argc, char** argv)
{
    GLuint program;
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutInitWindowPosition(400, 100);
    glutInitWindowSize(640, 480);
    glutCreateWindow("ExtensionTest");

    GLenum err = glewInit();
    if (err == GLEW_OK)
        logmsg("GLEW WORKS!!!");
    else
    {
        logmsg("GLEW NO INIT\n");
        exit(1);
    }

    if (GLEW_VERSION_2_0)
        logmsg("Opengl 2.0 is supported");
    else
    {
        logmsg("Opengl 2.0 not supported");
        exit(1);
    }

    program = glCreateProgram();
    logmsg("program: %u", program);
    return 0;
}




which produces the output:


GLEW WORKS!!!
Opengl 2.0 is supported
program: 2147483649




The odd thing that I just started to notice is that the value program gets is always the same on every platform that's not NVIDIA ... the plot thickens...

#9 MARS_999   Members   -  Reputation: 1299


Posted 07 May 2008 - 01:18 PM

It must have a valid context; he said the code works fine on NVIDIA cards. If you don't have a valid context, nothing should work on any vendor, not just one. Unless I'm not reading it clearly and he hasn't actually gotten it to work on NVIDIA hardware...

[Edited by - MARS_999 on May 7, 2008 10:18:26 PM]

#10 RIZAX   Members   -  Reputation: 127


Posted 07 May 2008 - 01:22 PM

Quote:
Original post by MARS_999
It must have a valid context; he said the code works fine on NVIDIA cards. If you don't have a valid context, nothing will work on any vendor, not just one. Unless I'm not reading it clearly and he hasn't actually gotten it to work on NVIDIA hardware...


You're right, everything works like a charm on my 8800/7800 ... it dies everywhere else.



#11 phantom   Moderators   -  Reputation: 7923


Posted 07 May 2008 - 01:43 PM

Just because the context is valid on one configuration doesn't mean it is valid on all configurations; thus it is my go-to question.

So, the questions remain:
Does the GLUT code you posted 'work' on all your test systems?
If not, what glGetError() values do you get back?
Does the returned value really change? (Forget the meaning of the value returned by glCreateProgram(); there is no standard for it and it could mean anything depending on the driver.)
What does a program such as realtech VR's OpenGL Caps Viewer tell you about the broken target systems?

#12 MARS_999   Members   -  Reputation: 1299


Posted 07 May 2008 - 04:33 PM

OK, I am going to ask the obvious: do you have these functions called somewhere?

glCreateShader
glShaderSource
glCompileShader
glAttachShader
glLinkProgram

Better yet, dump some more code, the code that deals with the GLSL setup. BTW, did you try your shader code in RenderMonkey yet, just to make sure the shader source itself works? The usual GL 2.0 sequence is sketched below for reference.
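(A rough sketch only; vertSrc/fragSrc stand in for your shader source strings, and the compile/link checks are mostly trimmed.)

const GLchar* vertSrc = "";   // your vertex shader text goes here
const GLchar* fragSrc = "";   // your fragment shader text goes here

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertSrc, NULL);
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragSrc, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);

GLint linked = 0;
glGetProgramiv(prog, GL_LINK_STATUS, &linked);   // make sure the link actually succeeded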



#13 RIZAX   Members   -  Reputation: 127


Posted 07 May 2008 - 05:59 PM

To Phantom:

That GLUT code works fine on all my systems as far as compiling, linking, and running goes. I added the glGetError after the program object creation and it's returning 0, so I started looking elsewhere. As for the GL caps of the target systems, GPU Caps Viewer is reporting 102 extensions available, which include the ARB extensions for the fragment and vertex shaders. So, after double-checking everything and setting everything to use the ARB extensions, I can get everything to run up to glShaderSourceARB(), where I receive a 0x500 for GL_INVALID_ENUM, which for the moment only happens on the vertex shaders (I've run 3 fragment shaders through, and it only happens when I try to run the 2 vertex shaders through).

I'll try figuring out some more stuff tomorrow ...

Thanks for all the input, guys.

#14 phantom   Moderators   -  Reputation: 7923


Posted 07 May 2008 - 09:48 PM

Quote:
Original post by MARS_999
OK, I am going to ask the obvious: do you have these functions called somewhere?

glCreateShader
glShaderSource
glCompileShader
glAttachShader
glLinkProgram



Unimportant.

If the problem is with glCreateProgram(), as he mentioned, then none of those have even come into play yet. You can't attach and link a program if you can't create a program object in the first place.



#15 phantom   Moderators   -  Reputation: 7923


Posted 08 May 2008 - 05:16 AM

Meant to come back to this earlier but work and fire alarms have stopped me..

Quote:
Original post by RIZAX
To Phantom:

That GLUT code works fine on all my systems as far as compiling, linking, and running goes. I added the glGetError after the program object creation and it's returning 0, so I started looking elsewhere. As for the GL caps of the target systems, GPU Caps Viewer is reporting 102 extensions available, which include the ARB extensions for the fragment and vertex shaders. So, after double-checking everything and setting everything to use the ARB extensions, I can get everything to run up to glShaderSourceARB(), where I receive a 0x500 for GL_INVALID_ENUM, which for the moment only happens on the vertex shaders (I've run 3 fragment shaders through, and it only happens when I try to run the 2 vertex shaders through).

I'll try figuring out some more stuff tomorrow ...

Thanks for all the input, guys.


OK, so the start of your reply seems to indicate that GLSL works fine, which means the problem lies with the setup of your application.

The bit of code you pasted earlier looks fine, which means something isn't being set up sanely before that code is called; that's the only way it's likely to fail. As previously mentioned, the context is the main suspect, but it could be something else; without more code it's hard to tell.

But it seems you've got past that now? Getting an error of GL_INVALID_ENUM is a little odd, as it's not an error you'd expect from glShaderSourceARB(), mostly because it doesn't take an enum. Posting source might help there as well.
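For what it's worth, a typical glShaderSourceARB call looks roughly like this (a sketch, with a stand-in shader string). Also remember that glGetError() returns the oldest error recorded since it was last called, so check it immediately before the call to be sure the GL_INVALID_ENUM really comes from glShaderSourceARB.

const GLcharARB* src = "void main() { gl_FragColor = vec4(1.0); }";    // stand-in shader text
GLhandleARB shader = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
glShaderSourceARB(shader, 1, &src, NULL);   // count, array of strings, lengths (NULL = null-terminated)
glCompileShaderARB(shader);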

Anyways, let us know if you figure it out [smile]


#16 RIZAX   Members   -  Reputation: 127


Posted 10 May 2008 - 03:40 AM

UPDATE - RESOLVED!!!

After reading about the difference in how the shaders compile, I re-verified everything yet again. I also went back to the straight 2.0 spec in my loader class and verified that everything was loading correctly. The last error I was reporting traced back to a compile error in my fragment shader, so I went line by line through the fragment shader and found something very, very odd.

I have a varying normal that I pass into the fragment shader from the vertex shader, which I use to compute light intensities. Whenever I try to use it directly in a dot product operation, it fails with a compile error, but if I set a local variable equal to it and then use that, everything works fine. Aside from that little tidbit, it's running on my X1600 now.

Again, thanks for all the input, guys.

#17 MARS_999   Members   -  Reputation: 1299


Posted 10 May 2008 - 07:37 AM

Glad to hear you figured it out. The only thing is, this could have been avoided if you had used RenderMonkey as I pointed out earlier. RM is a nice app for checking your shader code in the future, before you beat your head against the wall in your own code.

Happy coding.

#18 RIZAX   Members   -  Reputation: 127


Posted 10 May 2008 - 09:39 AM

Yeah, I grabbed RM and FX Composer. As I'm still new to shaders, I'm still trying to figure out the two apps so I can decide which one I want to use.

Also, I figured out why the normal thing was producing an odd problem, and naturally it's a type-mismatch thing: int vs. float. It produces a warning in the compiler that is allowed to pass on NVIDIA hardware, but the others complain about it. Also, the logging information that the shader compiler on ATI hardware produces is rather lacking ... all I get is random characters out, whereas the NVIDIA stuff produces full strings describing what the problem is. I honestly don't know how all the artists deal with these flimsy standards...
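For reference, I'm pulling the compile log roughly like this (typing from memory, so treat it as a sketch; 'shader' is the shader object in question):

char log[4096] = { 0 };
GLsizei len = 0;
glGetShaderInfoLog(shader, sizeof(log), &len, log);   // core 2.0 entry point
if (len > 0)
    logmsg("compile log: %s", log);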


#19 MARS_999   Members   -  Reputation: 1299


Posted 10 May 2008 - 02:07 PM

IMO NVIDIA's OpenGL support is better than ATI's, but this will start a flame war. I have used both, and right now ATI's extension support for newer features (e.g. DX10-level ones) isn't there yet, while NVIDIA's is. That is why I use NVIDIA hardware for coding now. Do you mean the float vs. int thing, like you were putting 1.0f vs. 1.0?

#20 RIZAX   Members   -  Reputation: 127


Posted 10 May 2008 - 04:30 PM

No, I was taking a sum that was a float and dividing it by gl_MaxLights, which is defined as an int. When compiling on NVIDIA hardware you get warnings about a type mismatch, but on ATI hardware it completely fails to compile, and of course it then won't link. At first I thought it had something to do with the normal calculation, which was inside the same code block, but it was really the division.
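Roughly what it boiled down to (an illustrative snippet, not my actual shader):

// gl_MaxLights is an int, so dividing a float by it relies on an implicit
// int-to-float conversion that strict GLSL compilers reject.
const char* fragSnippet =
    "float sum = 0.0;                                                              \n"
    "// ... accumulate light contributions into sum ...                            \n"
    "// float avg = sum / gl_MaxLights;         // warning on NVIDIA, error on ATI \n"
    "float avg = sum / float(gl_MaxLights);     // explicit conversion compiles on both \n";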



