problems with multiple sampler2D and FBO [solved]

Hi, I'm trying to create a simple deferred renderer and I've encountered strange behaviour in the second (lighting) pass. I read from two samplers in the shader (GLSL) and the values retrieved from them appear to be the same, even though the bound textures are different. I've double-checked almost everything and still haven't found the source of the problem. I must be missing something fundamental. Hmm. Anyway, here's the relevant code (I wrote a small GLUT app that reproduces the problem on my machine). Redraw function:

  // set up perspective projection (omitted)

  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo1);
  GLenum buffers[] = {GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT};
  glDrawBuffers(2, buffers);
  glUseProgram(program1);

  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

  glutSolidTeapot(5.0f);

  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
  glUseProgram(program2);

  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

  // set up ortho projection (omitted)

  // bind g-buffers as textures, loc1 and loc2 are locations of uniform variables
  // and are retrieved after the program2 is linked
  if(loc1 != -1) {
    glActiveTexture(GL_TEXTURE0 + loc1);
    glBindTexture(GL_TEXTURE_2D, buf[0]);
  }
  if(loc2 != -1) {
    glActiveTexture(GL_TEXTURE0 + loc2);
    glBindTexture(GL_TEXTURE_2D, buf[1]);
  }

  glRectf(0.0f, 0.0f, 640.0f, 480.0f);

  // flush and swap buffers (omitted)


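For reference, loc1 and loc2 are retrieved once after program2 is linked, roughly like this (only the relevant lines):

  // after glLinkProgram(program2):
  loc1 = glGetUniformLocation(program2, "tr_Buffer0");
  loc2 = glGetUniformLocation(program2, "tr_Buffer1");
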

FBO creation:

  // 640x480 are the viewport dimensions

  glGenFramebuffersEXT(1, &fbo1);

  glGenTextures(3, buf);

  // diffuse
  glBindTexture(GL_TEXTURE_2D, buf[0]);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0,
    GL_RGBA, GL_UNSIGNED_BYTE, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

  // normal
  glBindTexture(GL_TEXTURE_2D, buf[1]);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0,
    GL_RGBA, GL_UNSIGNED_BYTE, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

  // depth
  glBindTexture(GL_TEXTURE_2D, buf[2]);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, 640,
    480, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

  // attach G-Buffers to FBO
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo1);

  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
    GL_TEXTURE_2D, buf[0], 0);

  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
    GL_TEXTURE_2D, buf[1], 0);

  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
    GL_TEXTURE_2D, buf[2], 0);

  if(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    cout << "Framebuffer incomplete!" << endl;


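For context, program1 (the G-buffer pass) isn't shown above; its fragment shader writes the two attachments along these lines (a simplified sketch, not the exact code; v_normal is an assumed varying from the vertex shader):

varying vec3 v_normal;  // view-space normal passed from the vertex shader (assumed name)

void main()
{
  // color attachment 0: diffuse color, attachment 1: normal packed into [0,1]
  gl_FragData[0] = vec4(gl_FrontMaterial.diffuse.rgb, 1.0);
  gl_FragData[1] = vec4(normalize(v_normal) * 0.5 + 0.5, 1.0);
}
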

And here's the problematic shader:

uniform sampler2D tr_Buffer0;  // loc1
uniform sampler2D tr_Buffer1;  // loc2

void main()
{
  vec3 diff = texture2D(tr_Buffer0, gl_TexCoord[0].xy).rgb;
  vec3 normal = texture2D(tr_Buffer1, gl_TexCoord[0].xy).rgb;

  // forces both samplers to be used (and not optimized out)
  // runtime assertion
  if(normal == diff)
    discard;

  normal = normalize((normal * 2.0) - 1.0);

  vec3 light = normalize(vec3(-1.0,1.0,0.0));

  float dotNL = dot(normal, light);

  dotNL = max(0.0, dotNL);
  dotNL = min(1.0, dotNL);

  gl_FragColor.rgb = diff * dotNL;
}



The problem is that I cannot get the diffuse lighting to work: as soon as I use more than one sampler in the shader, both return the same value (the diffuse color, in this case). If I use just one sampler (e.g. gl_FragColor.rgb = diff; so the other sampler gets optimized out), everything is correct. Also, if I render the G-buffer textures onto a quad using the fixed pipeline, both the diffuse and the normal texture look fine, so the problem must lie somewhere in the second pass. I'm working on a GeForce 8600GT with the latest nVidia drivers and I get the same results on both Windows and Linux, so it's probably an error on my part. Any help will be appreciated. [Edited by - moribundus on May 10, 2008 12:15:18 PM]
The problem, I think, is in how you are doing the texture binding. To be precise, the uniform for a sampler is an integer whose value is the index of the texture unit it samples from; you set it with glUniform1i (while the program is in use, which it already is in your redraw function). The uniform's location itself is not a texture unit. So what you should do is:

if(loc1 != -1) {
  glUniform1i(loc1, 0);
  glActiveTexture(GL_TEXTURE0 + 0);
  glBindTexture(GL_TEXTURE_2D, buf[0]);
}
if(loc2 != -1) {
  glUniform1i(loc2, 1);
  glActiveTexture(GL_TEXTURE0 + 1);
  glBindTexture(GL_TEXTURE_2D, buf[1]);
}
Ha! That's it. Thanks a lot, it works like a miracle now.

The most embarrassing thing is that my older texture code works correctly and it seems that I just forgot how. The error on my part was treating the sampler's uniform location like a vertex attribute location, which of course didn't yield correct results :)
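For anyone who finds this later, here is the distinction spelled out as a sketch (someAttrib is just a made-up attribute name for contrast):

  // sampler uniform: the location only identifies the uniform; the value
  // set with glUniform1i is the texture *unit* the sampler reads from
  glUseProgram(program2);
  GLint loc = glGetUniformLocation(program2, "tr_Buffer0");
  if(loc != -1)
    glUniform1i(loc, 0);                   // sample from texture unit 0
  glActiveTexture(GL_TEXTURE0);            // select unit 0...
  glBindTexture(GL_TEXTURE_2D, buf[0]);    // ...and bind the diffuse buffer to it

  // vertex attribute: here the location itself is the index used directly
  // with glEnableVertexAttribArray / glVertexAttribPointer
  GLint attr = glGetAttribLocation(program2, "someAttrib");  // made-up name
  if(attr != -1)
    glEnableVertexAttribArray(attr);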

