
d0Sm4o20

Member Since 13 Feb 2011
Offline Last Active Apr 22 2012 07:23 PM

Topics I've Started

[SOLVED] GLSL subroutine not getting used

08 April 2012 - 03:57 PM

I'm using a Gaussian blur fragment shader. To keep things concise, I thought I'd include two subroutines in it: one that selects the horizontal texture-coordinate offsets, and another that selects the vertical ones. That way I only have one Gaussian blur shader to manage.

Here is the code for my shader. The {{NAME}} bits are template placeholders that I substitute in at shader compile time:

	#version 420
	
	subroutine vec2 sample_coord_type(int i);
	subroutine uniform sample_coord_type sample_coord;
	
	in vec2 texcoord;
	out vec3 color;
	
	uniform sampler2D tex;
	uniform int texture_size;
	
	const float offsets[{{NUM_SAMPLES}}] = float[]({{SAMPLE_OFFSETS}});
	const float weights[{{NUM_SAMPLES}}] = float[]({{SAMPLE_WEIGHTS}});
	
	
	subroutine(sample_coord_type) vec2 vertical_coord(int i) {
		return vec2(0.0, offsets[i] / texture_size);
	}

	subroutine(sample_coord_type) vec2 horizontal_coord(int i) {
		//return vec2(offsets[i] / texture_size, 0.0);
		return vec2(0.0, 0.0); // just for testing if this subroutine gets used
	}

	void main(void) {
		color = vec3(0.0);
		
		for (int i=0; i<{{NUM_SAMPLES}}; i++) {
			color += texture(tex, texcoord + sample_coord(i)).rgb * weights[i];
			color += texture(tex, texcoord - sample_coord(i)).rgb * weights[i];
		}
	}

Here is my code for selecting the subroutine:

	blur_program->start();
	blur_program->set_subroutine("sample_coord", "vertical_coord", GL_FRAGMENT_SHADER);
	blur_program->set_int("texture_size", width);
	blur_program->set_texture("tex", *deferred_output);
	blur_program->draw(); // draws a quad for the fragment shader to run on

and:

	void ShaderProgram::set_subroutine(constr name, constr routine, GLenum target) {
		GLuint routine_index = glGetSubroutineIndex(id, target, routine.c_str());
		GLuint uniform_index = glGetSubroutineUniformLocation(id, target, name.c_str());
		glUniformSubroutinesuiv(target, 1, &routine_index);

		// debugging
		int num_subs;
		glGetActiveSubroutineUniformiv(id, target, uniform_index, GL_NUM_COMPATIBLE_SUBROUTINES, &num_subs);
		std::cout << uniform_index << " " << routine_index << " " << num_subs << "\n";
	}


I've checked for errors, and there are none. When I pass in `vertical_coord` as the routine to use, my scene is blurred vertically, as it should be. The `routine_index` value is also 1 (which is odd, since the `vertical_coord` subroutine is listed first in the shader code, but no matter; maybe the compiler reorders things).

However, when I pass in `horizontal_coord`, my scene is STILL blurred vertically, even though the value of `routine_index` is 0, suggesting that a different subroutine is being used. Yet the `horizontal_coord` subroutine explicitly does not blur.

What's more, whichever subroutine comes first in the shader is the one the shader uses permanently. Right now `vertical_coord` comes first, so the shader always blurs vertically. If I put `horizontal_coord` first, the scene is unblurred, as expected, but then I cannot select the `vertical_coord` subroutine!

Also, the value of `num_subs` is 2, suggesting that there are 2 subroutines compatible with my `sample_coord` subroutine uniform.

Just to reiterate: all of my return values are fine, and glGetError() reports no errors.

Any ideas?

Bizarre GLSL link error

03 December 2011 - 09:04 PM

Hi guys, this is driving me crazy. I have one vertex shader and one fragment shader linked into one program (so far, so good :)

However, linking fails, *but only if I make one very specific code alteration*. Here's what *works*:

fragment:

#version 410

in vec3 vnormal;
in vec3 vlight[10];

out vec4 color;
uniform vec4 fill_color;

void main(void) {
	color = fill_color;
}


vertex:

#version 410

layout(location = 0) in vec4 position;
layout(location = 1) in vec3 normal;

layout(std140) uniform matrices_block {
	mat4 projection;
	mat4 modelview;
	mat3 normal;
} mat;

out vec3 vnormal;
out vec3 vlight[10];

uniform int num_lights;

void main(void) {
	vnormal = mat.normal * normal;
	vec4 vertex_eye = mat.modelview * position;
	
	int j = 0;
	for (int i=0; i<num_lights; i++) {
		j = 0;
		vlight[j] = vec3(0.0);
	}
	
	gl_Position = mat.projection * mat.modelview * position;
}


This works fine. Compiles, runs, etc. I know it looks a little crazy that I'm using "j" to index vlight, and setting j=0 every time in the loop, but hear me out for a second. Here's what doesn't work in the vertex shader:

If I move this:

		j = 0;
		vlight[j] = vec3(0.0);


To this:

		vlight[j] = vec3(0.0);
		j = 0;


The shader programs then refuse to link. What's worse, there's no output from glGetShaderInfoLog, so I can't tell what's going wrong. This behavior is weird enough that it's making me question my drivers. If it's any help, my graphics card is an Nvidia GeForce GTX 485M with driver 280.13.

Anyone have any insight on what I can poke at next? Thanks!

FBO texture attachment only being rendered to once?

13 February 2011 - 11:38 PM

Hi guys,

I have a texture attached to an FBO that I'm rendering to, and then sending that texture to a shader. It all works fine and well... for one frame, the first frame. Then the texture is always black. It seems like the FBO texture can only be drawn to once(!?) in my code; afterwards it is erased and only black is drawn to it. It's really driving me nuts. I feel like I'm missing something completely obvious. Here is my main draw function:

void draw() {
	printf("Beginning of draw\n");
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

	// clear the FBO texture and set the draw color
	glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
	glColor4f(1.0f, 1.0f, 1.0f, 1.0f);

	// draw a square where the mouse is
	float x, y, w, h;
	w = 100.0f;
	h = 100.0f;
	x = mouse.x;
	y = mouse.y;
	glRectf(x, y, x+w, y+h);

	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);


	// generate mipmaps for the FBO texture
	glBindTexture(GL_TEXTURE_2D, fbo_tex);
	glGenerateMipmapEXT(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, 0);


	// this should all apply to the actual screen now
	glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
	glColor3f(1.0f, 1.0f, 1.0f);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, fbo_tex);


	// draw a quad which we'll screen align in a vertex shader, and then paint
	// with the FBO texture in the fragment shader
	glUseProgram(blur_program);
	glRecti(0, 0, 1, 1);
	glFlush();
	glUseProgram(0);

	printf("End of draw\n");
	SDL_Delay(1000);
}


This is the vertex shader that aligns that second glRecti call to the screen:

void main(void) {
	gl_Position = gl_Vertex * 2.0 - 1.0;
	gl_TexCoord[0] = gl_Vertex;
}

And the fragment shader:


uniform sampler2D fbo;

void main(void) {
	gl_FragColor = vec4(texture2D(fbo, gl_TexCoord[0].st));
}

Like I said, it seems to work for only *one frame*: I can see the 100x100 white square for one second, thanks to the SDL_Delay(1000), so it is getting drawn to the FBO texture and the shaders are processing everything correctly. It's just that every time draw() is called *after the first time*, the FBO texture is painted with black.

I've changed the clear color on the FBO to red, and indeed the screen is rendered red instead of black, showing that the FBO texture is still being painted, even when the square for some reason is not. I've also tried attaching an arbitrary image as the texture getting fed into my shader, and that works on every frame, showing that the error isn't in my shaders. I'm really at a loss here...why isn't that white 100x100 square being drawn on every draw() call?

Any ideas? Thanks in advance!
