Sirisian

OpenGL FBO with differing texture sizes? (glViewportArray?)


// edit: Apparently, when the attachments have different sizes, rendering is clipped to the smallest attachment for all of them, so what I was trying isn't possible.
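(For anyone who hits the same thing: the usual workaround seems to be to give each texture size its own FBO and render in separate passes, each with a viewport matching its attachments. A rough, untested sketch, reusing the handle names from the code below; blurFBO is a new handle that isn't in my code:)

// Separate FBO for the half-size glow texture (modelBlurTexture would be
// detached from modelFBO in this setup).
GLuint blurFBO;
glGenFramebuffers(1, &blurFBO);
glBindFramebuffer(GL_FRAMEBUFFER, blurFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, modelBlurTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Pass 1: full-size model texture.
glBindFramebuffer(GL_FRAMEBUFFER, modelFBO);
glViewport(0, 0, width, height);
// ... draw the model ...

// Pass 2: half-size glow texture, viewport matched to its size.
glBindFramebuffer(GL_FRAMEBUFFER, blurFBO);
glViewport(0, 0, width / 2, height / 2);
// ... draw the glowing parts ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);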

I've been learning FBOs and wanted to try using framebuffer textures of different sizes. In my test I created a texture at the resolution of the screen to store the parts of my model that didn't glow, and a second texture at half the width and height of the screen for the part of the model that did glow. The test was that I'd have a single shader output to both textures. This works when both textures are the same width and height, but if I change the second (glow) texture to half the width and height, suddenly my viewport is cut in half even when rendering just the first, non-glow texture.

So the working code is as follows:

// Model Texture
glGenTextures(1, &modelTexture);
glBindTexture(GL_TEXTURE_2D, modelTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
// Blur Texture
glGenTextures(1, &modelBlurTexture);
glBindTexture(GL_TEXTURE_2D, modelBlurTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
// Depth buffer
glGenRenderbuffers(1, &modelDepthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, modelDepthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
// Model FBO
glGenFramebuffers(1, &modelFBO);
glBindFramebuffer(GL_FRAMEBUFFER, modelFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, modelDepthRenderBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, modelTexture, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, modelBlurTexture, 0);
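// Note: with core-profile FBOs, attachments of different sizes can still be
// framebuffer-complete; rendering is just clipped to the intersection of the
// attachment sizes, so the completeness check below won't flag a size mismatch.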
GLenum modelTextureFBOStatus;
if ((modelTextureFBOStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER)) != GL_FRAMEBUFFER_COMPLETE)
{
std::cerr << "glCheckFramebufferStatus: error " << modelTextureFBOStatus << std::endl;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);


And the render code is just:

// Draw the model to the FBO
glUseProgram(modelShader.Program);
GLfloat viewports[] = { 0.0f, 0.0f, (GLfloat)width, (GLfloat)height, 0.0f, 0.0f, (GLfloat)width, (GLfloat)height };
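// Note: viewport indices other than 0 only take effect when a geometry shader
// writes gl_ViewportIndex; without one, viewport 0 is used for every primitive,
// regardless of which draw buffer a fragment lands in.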
glViewportArrayv(0, 2, viewports);
glBindFramebuffer(GL_FRAMEBUFFER, modelFBO);
GLenum modelBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, modelBuffers);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindVertexArray(model.GPU.VAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, noiseTexture);
glUniformMatrix4fv(modelShader.MVP.Model, 1, true, model.ModelMatrix);
glUniformMatrix4fv(modelShader.MVP.View, 1, true, view);
glUniform1f(modelShader.Distance, distance);
glDrawElements(GL_TRIANGLES, model.CPU.IBO.size(), GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
glBindVertexArray(0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

Notice I have glViewportArrayv set so the first viewport is (0, 0, width, height), mapping to (left, bottom, width, height). Am I using that correctly? According to the documentation: http://www.opengl.or...ewportArray.xml it says I am, meaning the normalized device coordinates (-1, -1) map to the window coordinates (0, 0) and (1, 1) maps to (width, height), which seems right.
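To spell out that mapping (this is just the standard viewport transform from the spec; the helper functions are illustrative, not GL calls):

// Standard viewport transform: maps NDC in [-1, 1] into the window
// rectangle (x, y, w, h).
float ndcToWindowX(float xNdc, float x, float w) { return (xNdc + 1.0f) * (w / 2.0f) + x; }
float ndcToWindowY(float yNdc, float y, float h) { return (yNdc + 1.0f) * (h / 2.0f) + y; }
// For viewport (0, 0, width, height):
//   ndcToWindowX(-1.0f, 0.0f, width) == 0 and ndcToWindowX(1.0f, 0.0f, width) == width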

The above produces this:
[image: nclogo7.png]

But if I change the modelBlurTexture allocation to half the width and height:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width / 2, height / 2, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);

suddenly the modelTexture's results change when I run the program:
[image: nclogo8.png]

I can't figure out why that is happening. I'd appreciate it if someone could explain what I'm doing wrong. I imagine I've just misunderstood how FBOs and viewport arrays work together to allow different texture sizes. (If it isn't clear, I'm using OpenGL 4.2.)
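In case it helps reproduce this, here's a quick sanity check (standard GL calls; assumes the texture handles from the setup code above) to confirm the sizes the driver actually sees for each attachment:

// Read back the level-0 dimensions of the half-size attachment.
GLint w = 0, h = 0;
glBindTexture(GL_TEXTURE_2D, modelBlurTexture);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
glBindTexture(GL_TEXTURE_2D, 0);
std::cerr << "modelBlurTexture: " << w << " x " << h << std::endl;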
