OpenGL glUniform*f seems to... not work.


I'm having the weirdest problem. It seems like glUniform*f just isn't working right. This is reproducible on multiple systems, all using Quadros and the latest nVidia drivers, so I feel like it must be something I'm doing wrong. I just can't figure out what that something is. To make matters more fun, the same code works for int uniforms with only the slight modifications you'd expect. Here's the relevant code:
char *source[1] = {
    "uniform float mine; void main() { gl_FragColor = vec4(mine, 0.0, 0.0, 0.0); }"
};
char c[1024] = "";

int p = glCreateProgram();
printf("glerror=%d\n", glGetError());

/* shader creation, glShaderSource, glCompileShader, glAttachShader,
   and glLinkProgram happen here; elided in this excerpt */

int len;
glGetProgramInfoLog(p, 1023, &len, c);
printf("program info:\n%s\nend of program info.\n", c);
printf("isProgram(%d)=%s\n", p, glIsProgram(p) ? "true" : "false");
glUseProgram(p);

int cp = -1;
glGetIntegerv(GL_CURRENT_PROGRAM, &cp);
printf("current program=%d\n", cp);

printf("program handle=%d\n", p);

int loc = glGetUniformLocation(p, "mine");
printf("uniform location=%d\n", loc);
glUniform1f(loc, 0.7f);

float retval;
glGetUniformfv(p, loc, &retval);
printf("getUniformfv(%d, %d)=%f\n", p, loc, retval);

// Draw something...

Here's what that code prints:
glerror=0
program info:

end of program info.
isProgram(1)=true
current program=1
program handle=1
uniform location=0
getUniformfv(1, 0)=36893488147419103232.000000

That... isn't right. 36893488147419103232 is exactly 2^65 (0x2_0000_0000_0000_0000), which might be a coincidence, or might be a hint at some deeper bit-shuffling problem. If I change the glUniform1f value from 0.7f to 0.5f, everything is the same except that the glGetUniformfv call now prints 0.000000, which is still not 0.5.

Relevantly, I've run this in glsldevil, and its trace shows glUniform1f executing with the messed-up values. I'm not sure where it injects itself into the GL pipeline, though, so I'm not sure what that indicates. And, again, when I use integer uniforms, all of this works fine. Which is fine, if I only ever want to pass my parameters as ints. Help? The actual complete test code follows.
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glut.h>
#include <stdio.h>

int p;

void draw() {
    glUseProgram(p);
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLE_STRIP); {
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 0.8f);
    } glEnd();
    glutSwapBuffers();
}

char *shaders[1] = {
    "uniform float mine; uniform int imine; void main() { gl_FragColor = vec4(mine, 0.0, 0.0, 1.0); }"
};

int main( int argc, char *argv[] ) {
    char c[1024];
    c[0] = 0;

    glutInitWindowSize( 400, 400 );
    glutInit( &argc, argv );
    glutInitDisplayMode( GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH );
    glutCreateWindow( "OpenGL Demonstration 12" );

    p = glCreateProgram();
    printf("glerror=%d\n", glGetError());

    /* shader setup (mangled in the archived post; reconstructed, since the
       printed output shows the program did compile and link) */
    int s = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(s, 1, (const GLchar **) shaders, NULL);
    glCompileShader(s);
    glAttachShader(p, s);
    glLinkProgram(p);

    int len;
    glGetProgramInfoLog(p, 1023, &len, c);
    printf("program info:\n%s\nend of program info.\n", c);
    printf("isProgram(%d)=%s\n", p, glIsProgram(p) ? "true" : "false");
    glUseProgram(p);

    int cp = -1;
    glGetIntegerv(GL_CURRENT_PROGRAM, &cp);
    printf("current program=%d\n", cp);

    printf("program handle=%d\n", p);

    int loc = glGetUniformLocation(p, "mine");
    //int loc = glGetUniformLocation(p, "imine");
    printf("uniform location=%d\n", loc);
    glUniform1f(loc, 0.7f);
    //glUniform1i(loc, 100);

    float retval;
    glGetUniformfv(p, loc, &retval);
    printf("getUniformfv(%d, %d)=%f\n", p, loc, retval);

    glMatrixMode(GL_PROJECTION);
    glOrtho(0.0, 1.0, 0.0, 1.0, 0.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

    glutDisplayFunc(draw);
    glutMainLoop();
    return 0;
}


I've just tried your example program under linux with the 64 bit 100.14.03 nvidia drivers with an 8600 GTS card and it works OK. (I had to change the type of the shaders global to be const GLchar *).

The output was:

glerror=0
program info:

end of program info.
isProgram(1)=true
current program=1
program handle=1
uniform location=0
getUniformfv(1, 0)=0.700000

It looks like it might be a driver problem. It might be worth trying an earlier version just in case that makes any difference.

That's comforting and also spooky; I'd expect a lot more people to be hit by a driver issue like this. (I'm on 64-bit Linux too, with nVidia driver 100.14.23. I was on the stable 100.24.11 before, and it exhibited the same behavior.)

I'll try some older drivers, and if they work, I'll file a bug with nVidia.
