# OpenGL Rendering to texture/FBO issue

## Recommended Posts

Hi everybody,

just registered here after being stuck for hours. I'm fairly new to OpenGL programming, though I've written some simple programs and shaders before.

I'm sure somebody around here can tell me what's going wrong.

What I'm trying to achieve here is simple multi-pass shadow mapping:
#1 Render the light's z-buffer to a texture
#2 Render the shadows to another texture (viewport resolution)
(#3 Do something with this texture; not implemented at the moment)
#4 Render the scene and apply the shadows from the texture rendered in step #2

For reference, that's what the scene looks like without shadows:

When I display the result of step #2 in the viewport I get, what I expect - the calculated shadows, which are correctly occluded by the geometry:

Now when I simply display the texture rendered in step #2 in the fragment shader of step #4 I get this:

As you can see, this is basically the identical image, [b]but[/b] the first and the third cylinder are suddenly missing, even though I didn't change anything. So obviously something different gets rendered to the texture than when I render the same thing to the viewport (maybe some depth/drawing-order issue?).

So I guess there must be something wrong with how I set up the FBO or the texture, or with some state I set before drawing the scene.

Here I generate the texture used in step #2 and bind it to the FBO:

[Code]
{

// generate and bind the rawShadowTexture

// set the parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

// texture size = viewport size
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, (int) observerWidth, (int) observerHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);

// generate and bind the rawShadowsFBO

// set the rawShadowTexture as render target

// switch back to window-system-provided framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

}[/Code]
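In case it helps, here is what the elided generate/bind/attach steps amount to. This is just a minimal sketch, assuming [font=courier new,courier,monospace]rawShadowTexture[/font] and [font=courier new,courier,monospace]rawShadowsFBO[/font] are [font=courier new,courier,monospace]GLuint[/font] handles and a GL 3.x context is current:

[code]
// generate and bind the color texture
glGenTextures(1, &rawShadowTexture);
glBindTexture(GL_TEXTURE_2D, rawShadowTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// texture size = viewport size; last argument 0 (NULL) allocates
// uninitialized storage
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, (int)observerWidth,
             (int)observerHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);

// generate the FBO and attach the texture as color attachment 0
glGenFramebuffers(1, &rawShadowsFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, rawShadowsFBO);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, rawShadowTexture, 0);

// always worth verifying completeness before rendering to it
if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "rawShadowsFBO incomplete\n");

// switch back to the window-system-provided framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
[/code]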

Here's the shadow calculating part (step #2):

[Code]
{
//set the viewport
glViewport(0, 0, (int) observerWidth, (int)observerHeight);

glClearColor(0,0,0,1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

//set the matrices
M3DMatrix44f viewMat;
M3DMatrix44f eyeToLightView;

glClear(GL_DEPTH_BUFFER_BIT);

// bind the previously rendered depthmap

// set the Model View Projection Matrix for the shader
// projection matrix

//calc the light's position, at the origin of the light-view matrix
M3DMatrix44f invLight;
m3dInvertMatrix44(invLight,lightView);
M3DVector4f lp;
m3dTransformVector4(lp, lightPos, invLight); // note: the output vector comes first in GLTools

// set the light's parameters

//draw the model
modelBatch.Draw();

//(de)activate stuff
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
modelViewMatrix.PopMatrix();

}[/Code]

And here's the final step #4, at the moment the shader used here just displays the rendered texture in step #2:

[Code]
{
//set the viewport
glViewport(0, 0, (int) observerWidth, (int)observerHeight);

glClearColor(0,0,0,1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

//set the projection matrix
viewFrustum.SetPerspective(45.0f,aspect,0.1f,100);

//switch cameras
if(!toggleView)
modelViewMatrix.PushMatrix(observerFrame);
else
modelViewMatrix.PushMatrix(lightFrame);

// set the matrix uniforms (the *Loc uniform locations are placeholders)
glUniformMatrix4fv(mvMatrixLoc, 1, GL_FALSE, transformPipeline.GetModelViewMatrix());
// set the projection matrix
glUniformMatrix4fv(projMatrixLoc, 1, GL_FALSE, transformPipeline.GetProjectionMatrix());

// set the normal matrix (3x3)
glUniformMatrix3fv(normalMatrixLoc, 1, GL_FALSE, transformPipeline.GetNormalMatrix());

//light position
M3DMatrix44f invLight;
m3dInvertMatrix44(invLight,lightView);
M3DVector4f lp;
m3dTransformVector4(lp, lightPos, invLight); // note: the output vector comes first in GLTools
//light parameters

//draw the model
modelBatch.Draw();

glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);

modelViewMatrix.PopMatrix();
}
[/Code]

Anybody have an idea what goes wrong when rendering to the texture? I'd be really grateful for any hints!

##### Share on other sites
You don't have a depth buffer in your FBO, so the geometry just appears in the order it's drawn. You need to have a depth buffer in step 2 too since you're trying to draw all the geometry again, and you obviously need the closest depth there.

I get the feeling you're trying to do a screen-space blur of the shadow map to get smooth shadows, correct? This won't look very good and will be quite inaccurate, especially at steep angles, since the blur is uniform. You'll also possibly get light bleeding where you're not supposed to unless you take the depth of neighboring pixels into account. Anyway, it's far from optimal. The easiest way to do soft shadows is PCF, which basically samples lots of nearby depth values from the shadow map and checks how many of them pass. That causes problems with self-shadowing though, so you need a high depth bias, which in turn pretty much removes self-shadowing. If you're feeling really adventurous, look up variance shadow maps.

##### Share on other sites
Hey agentd,

I suspected something like that, but didn't know I had to explicitly create a depth attachment for the FBO. So I tried to do my homework and checked several (very similar) tutorials about it, but after implementing the following code my texture just stays black (the rest of the code didn't change):
[code]
{

// generate and bind the rawShadowTexture

// set the parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

// texture size = viewport size
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, (int) observerWidth, (int) observerHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);

// unbind texture
glBindTexture(GL_TEXTURE_2D, 0);

// generate and bind depth buffer
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, (int) observerWidth, (int) observerHeight);
//unbind depth buffer
glBindRenderbuffer(GL_RENDERBUFFER, 0);

// generate and bind the rawShadowsFBO

// attach the color texture to the frame buffer
// attach depth buffer

// switch back to window-system-provided framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
[/code]

If I don't attach the depth renderbuffer ([font=courier new,courier,monospace]glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rawShadowsDepthBuffer);[/font]) I get the same result as before. So I'm afraid my depth renderbuffer is set up incorrectly, though I'm unable to see the issue, as I can't spot any difference from the three tutorials I checked on the matter.
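Spelled out, the generate/bind/attach steps I'm describing amount to something like this. Again just a sketch, assuming [font=courier new,courier,monospace]rawShadowsDepthBuffer[/font] is a [font=courier new,courier,monospace]GLuint[/font] handle; I've also written the explicit sized format [font=courier new,courier,monospace]GL_DEPTH_COMPONENT24[/font] instead of the base [font=courier new,courier,monospace]GL_DEPTH_COMPONENT[/font]:

[code]
// generate and bind the depth renderbuffer, then allocate its storage
glGenRenderbuffers(1, &rawShadowsDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, rawShadowsDepthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                      (int)observerWidth, (int)observerHeight);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

// with the FBO bound, attach both the color texture and the depth buffer
glBindFramebuffer(GL_FRAMEBUFFER, rawShadowsFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, rawShadowTexture, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, rawShadowsDepthBuffer);

// verify completeness before rendering to it
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete\n");

glBindFramebuffer(GL_FRAMEBUFFER, 0);
[/code]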

That's the thing which sometimes makes OpenGL coding hard to enjoy for a beginner like me: it's really hard to debug. Feels like a big black box.

Your assumption was right. I'm trying to achieve something along those lines, though I don't plan to apply a uniform blur. I have an idea to compensate for the viewing-angle-dependent issue you mentioned. I've no idea whether it will work out as planned, but that doesn't really matter, as I'm having fun playing around with stuff like this.

I did implement both PCF and PCSS shadow mapping a while ago (the latter with a rotated Poisson disc):

So as I mentioned before, my goal here is not really just implementing a shadow algorithm, but more playing around with my own ideas (as badly as they might turn out ;) ). I also looked into VSM back when I did the other shadow-mapping techniques, but felt it had too many limitations.

##### Share on other sites
I think the rendering is working fine, but you're not clearing the FBO, causing the depth test to fail after the first frame. Make sure that all FBOs are cleared properly, both the shadow map and the shadow post-processing buffer. I see, for example, that you call glViewport(), then glClear(), and THEN glBindFramebuffer(). I don't think you wanted to clear a shadow-map-sized part of the screen, right? =S
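In other words, each render-to-texture pass should bind its target before touching viewport or clear state. Roughly, using the names from the posted code:

[code]
// correct per-pass order: bind the target FBO FIRST ...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, rawShadowsFBO);

// ... THEN set the viewport and clear, so the FBO's color and depth
// buffers (not the default framebuffer's) are cleared every frame
glViewport(0, 0, (int)observerWidth, (int)observerHeight);
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// ... draw the pass ...

// finally switch back to the window-system framebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
[/code]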

##### Share on other sites
Of course :facepalm: I was clearing before binding the buffer. Just had to swap those two calls in the code and now everything works fine.

Thanks again!
