## Recommended Posts

Seroja    280
Hi! I'm working on a graphics project and wanted to add simple shadows to it. I tried shadow mapping as described in the Shadow Mapping Tutorial, but it didn't quite work. I'm having two separate problems.

1) This seems to be an ATi-only problem. As soon as I enable shadowing (i.e. render to a depth map and use it), performance drops to about 3-4 FPS. Otherwise it's smooth (no FPS counter, but at least 30 FPS, since there is no lag). The funny thing is that the same happens with the tutorial's own example - a trivial scene at 5 FPS. This was all on my X800 GTO2. I then tested on an nVidia 6600GT, and it works just fine: the tutorial's example at 100 FPS, and mine quite smooth. I tried different texture filtering (nearest/linear/mipmap), different Z-buffer depths (16/24/32) and different depth map depths (16/24/32/default), and nothing helped - the same 3-4 FPS. It seems like copying to the depth map takes a huge amount of time. Here's what I use now:
// Initialize depth texture (my texture loader, basically sets the passed params)
if (!glTex::UseTex(NULL, DepthSize, DepthSize, TexDepth, GL_DEPTH_COMPONENT, GL_LINEAR_MIPMAP_LINEAR, GL_CLAMP_TO_EDGE)) return false;
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA);

// Update depth texture
glBindTexture(GL_TEXTURE_2D, TexDepth);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, DepthSize, DepthSize);


I should note that I'm not using pbuffers/FBOs - only glCopyTexSubImage2D. I know that's inefficient, but FBOs are only available on new hardware, and pbuffers aren't easy to work with. This shouldn't be the problem, though, since I'm also using cube mapping, and that takes 6 copies per frame. If cube mapping works fast, why doesn't shadow mapping? (Just in case, I updated the drivers - no change.) Did anyone have this problem before? The only similar performance drop I've seen was when I used NPOT textures with mipmaps on ATi (again, nVidia didn't seem to care). Could this be remedied by using pbuffers? If anyone has an ATi card, please download the tutorial's demo and tell me what FPS you get (it could be down to my computer/driver/card combo).

2) This problem is unrelated, and is actually about the algorithm. In the tutorial, only back faces are rendered to the depth map. I can't do that, since I have non-closed (flat) objects in my scene. So instead I render the whole scene into the depth map and use polygon offset. When rendering the final image, I first tried drawing bright first and shadowed second, but eventually settled on the opposite order (with a depth test of GL_LESS). This is because I have alpha blending in the scene (the grass), and the original order produced weird bright spots where the bright pass didn't overwrite the dark one but blended with it. Here's how I render:
// Snap shadow map
SnapDepth(TexDepth);

// Go to reflective object position and snap cubemap
SnapCubemap(TexCubemap);

// Draw objects (bright light)
glAlphaFunc(GL_GEQUAL, 0.01f);
glEnable(GL_ALPHA_TEST);
DrawObjects();
glDisable(GL_ALPHA_TEST);

// Draw objects (dim light)
glDepthFunc(GL_LESS);
DrawObjects(3);
glDepthFunc(GL_LEQUAL);


Shadowed first, then bright almost works. The remaining problem can be seen in the following screenshot: the first image is the camera view, the second is the light view. The object is a concave mirror. In the camera view you can see a weird bright strip between two shadowed parts - between the shadow from the lighting model (the side facing away from the light) and the shadow from shadow mapping (self-shadowing). Reversing the bright/dark render order didn't change anything about that strip.

I guess the reason for it is either imprecision (the depth map is 512*512; maybe 2K*2K could fix it, but that's too big) or the polygon offset (which moves the shadow back a bit). I can't remove the polygon offset, because then the whole scene would be Z-fighting. Any ideas how to fix it while leaving the mirror completely flat? (One solution is making it a shape with volume, but that's not what I want.)

P.S. The shadow looks rather ugly - is there any way to improve it without shaders? (OpenGL 1.4 only.) I know nVidia has some kind of PCF built in, enabled through the GL_LINEAR filter, but it doesn't seem to work on ATi. Thanks in advance!
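For reference, here's roughly what my depth pass looks like (a sketch of what SnapDepth does - the polygon offset values are just examples, not necessarily what I use):

// Depth-map pass: render the whole scene into the depth buffer, pushed back a bit
glViewport(0, 0, DepthSize, DepthSize);
glClear(GL_DEPTH_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // depth only, no color writes
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f);                          // example values, tuned against shadow acne
DrawObjects();                                        // whole scene - flat objects have no back faces to cull
glDisable(GL_POLYGON_OFFSET_FILL);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glBindTexture(GL_TEXTURE_2D, TexDepth);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, DepthSize, DepthSize);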

Seroja    280
Anyone?

Even if you can't help me solve the problems, I'd appreciate it if you downloaded the tutorial demo and told me what FPS you get on what hardware. At least I'd know whether it's an ATi issue or something else. Surely I'm not the only one here with an ATi card :-)

deathkrush    350
The FPS drop on the ATI card could be caused by unsupported extensions. If something isn't supported by the hardware, the OpenGL driver can silently switch to software rendering. Try using gDEBugger to figure out what's going on. glGetError could also be useful.

EDIT:
Try replacing glCopyTexSubImage2D with glCopyTexImage2D.
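For example (a minimal sketch - glGetError returns one queued error at a time, so drain them in a loop after the suspect call):

#include <stdio.h>

// Call right after the glCopy to see if the driver is flagging anything
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR)
    fprintf(stderr, "GL error: 0x%04X\n", err);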

sto8qc    100
Just tried the tutorial's executable - I get about 240 FPS with my Mobility Radeon 9600 (64 MB VRAM).

Imgelling    222
I am getting 480 FPS on the demo using a GeForce 7600. Sorry, it's not an ATI, but I wanted to check out the demo.

deathkrush    350
Another thought: what's the FPS on the ATI card when glCopyTexSubImage2D is commented out?

Seroja    280
I tried the following:

1) glCopyTexSubImage2D
2) glCopyTexImage2D

Performance is roughly the same - 3-4 FPS. Commenting these out effectively disables shadows, but performance goes up to a reasonable level again.

Software rendering seems like a plausible explanation for this behaviour (though what exactly is there to emulate in software - array copying?).

I'll try gDEBugger (never used it) and see what happens.

Though if it works fine on an ATi 9600, that makes it more likely the problem is my computer rather than the code. Or maybe it's a driver bug in the X??? series...

deathkrush    350
Quote:
Original post by Seroja: Performance is roughly the same - 3-4 FPS. Commenting these out effectively disables shadows, but performance goes up to a reasonable level again.

Sounds like the glCopy family of functions is implemented slowly in the X800 driver. Perhaps it's doing something stupid like:

glFinish();
memcpy(...);

One way to make it run faster on the X800 is to use an FBO (framebuffer object). With an FBO there is no need for a glCopy at all - you can render to the texture directly.
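Roughly like this with EXT_framebuffer_object (a sketch, assuming the extension entry points are loaded and TexDepth is your depth texture from earlier):

// Create an FBO with the depth texture attached - the light pass renders straight into it
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, TexDepth, 0);
glDrawBuffer(GL_NONE);  // no color attachment, so disable color output
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    ; // incomplete - fall back to the glCopy path

// ... render the light view here ...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  // back to the window; TexDepth is ready to use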

Seroja    280
Quote:
Original post by deathkrush: One way to make it run faster on the X800 is to use an FBO (framebuffer object). With an FBO there is no need for a glCopy at all - you can render to the texture directly.

I wish I could. It's an academic project, and we only have Intel's embedded chipsets (915G). Those support OpenGL 1.4, so that's what I'm targeting. I'll try the code on that chipset tomorrow, but for now I've been working at home, keeping an eye on the extensions I use.

FBOs haven't made it into any OpenGL standard (yet?), and they aren't even ARB, only EXT. I also haven't seen them supported on any embedded chipset - only nVidia and ATi seem to support them.

If it doesn't work on the Intels, I'll either fall back to planar shadows or go for pbuffers (both alternatives sound bad, especially since I might waste time on pbuffers only to find that performance didn't improve).

glCopy could indeed be implemented badly, but:
1) It's one copy of a 512*512 depth texture per frame - why would it decrease performance 3x or even 4x?
2) I use the same glCopy for cube mapping, which is six 256*256 RGB(A) textures, and even increasing that to six 512*512 textures didn't hurt performance as much.

I used gDEBugger - 200k calls per frame :S The code doesn't seem that long, and I use display lists (I have a skybox, 3 spheres, a grass grid of about 6*6, plus cube mapping and shadow mapping). OK, the spheres are well-tessellated, but I still doubt I need VBOs for such a small amount of geometry. Anyway, using the draw-command removal feature I brought the count down to 5k, and FPS went up by 1 - which means the enormous number of GL calls isn't the problem. Besides, a previous version made 90k calls and worked fine.

It seems like either I have something weird on my computer, or I've discovered a render path the driver developers never thought of. Actually, I haven't found shadow mapping done with glCopy anywhere except that tutorial - others use cube shadow maps / pbuffers / FBOs / etc. Maybe that's a subtle hint that I'm not using the hardware the way it should be used...

deathkrush    350
Quote:
Original post by Seroja: glCopy could indeed be implemented badly, but: 1) It's one copy of a 512*512 depth texture per frame - why would it decrease performance 3x or even 4x?

If the OpenGL driver is doing something silly like using the CPU to copy the data, that would kill performance, because accessing VRAM from the CPU is very slow on many graphics cards. Think 10 MB/s instead of 10 GB/s!

Quote:
Original post by Seroja: 2) I use the same glCopy for cube mapping, which is six 256*256 RGB(A) textures, and even increasing that to six 512*512 textures didn't hurt performance as much.

Maybe the OpenGL driver supports copying an RGBA texture on the GPU but falls back to the 10 MB/s path for depth textures? In that case you could lie to the driver: tell it it's an RGBA texture so it does the fast copy, then change it back to a depth texture. You can use a PBO (pixel buffer object) for that.
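Something along these lines with ARB_pixel_buffer_object (an untested sketch - whether the driver keeps this round-trip on the GPU is exactly the open question):

// Stage the depth buffer through a PBO instead of a direct glCopy
GLuint pbo;
glGenBuffersARB(1, &pbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, DepthSize * DepthSize * 4, NULL, GL_STREAM_COPY_ARB);

// Read the depth buffer into the PBO (offset 0 - no CPU pointer involved)
glReadPixels(0, 0, DepthSize, DepthSize, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

// Upload from the PBO into the depth texture
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
glBindTexture(GL_TEXTURE_2D, TexDepth);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, DepthSize, DepthSize,
                GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);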

Seroja    280
Quote:
Original post by deathkrush: Maybe the OpenGL driver supports copying an RGBA texture on the GPU but falls back to the 10 MB/s path for depth textures? In that case you could lie to the driver [...]. You can use a PBO (pixel buffer object) for that.

OK, now that makes sense. But I don't think I can fool the driver that easily. FBOs are out of my reach, so PBOs definitely are (not that I know how to use them anyway). Or is it possible with something simpler, like pbuffers?

Otherwise, I don't know how to do it. If I tell it the texture is RGBA, it will copy black (since I disable color writes during the depth-map creation pass). If I glRead the depth, I'll get the same performance drop.

Maybe there is a way to convert depth (the Z-buffer value) to color... Actually, why not? It would be quite cumbersome with the way my code is set up, but the depth pass could have color writes enabled and everything else disabled, except for an eye-linear texture that outputs a value according to the distance from the eye (which is the Z value). Would glCopy be able to copy RGBA into a depth texture? I.e., what would the following call do if a depth texture is bound?
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, DepthSize, DepthSize, 0);

OK, I actually tried that while writing the post :-)
I didn't do the eye-linear thing, but I bound the scene rendered from the light's view as a texture. The shadow became colorized :D
Actually, I think it ended up as some interesting projective texturing. Unfortunately, the R-coordinate comparison won't work this way (an RGBA texture has nothing for the r coordinate to compare against). Speed did go up to a reasonable level, but this isn't shadow mapping (at least not without a shader) - unless I can convince OpenGL to use s or t as if they were the r coordinate, which I have no idea how to do.
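For the record, the eye-linear encoding I mean would be set up roughly like this (a sketch - TexRamp is a hypothetical 1D luminance ramp from 0 to 1 and ZFar a hypothetical far-plane constant; the eye plane must be specified with an identity modelview so it stays in eye space):

// Encode eye-space depth as intensity via a 1D ramp texture
static const GLfloat sPlane[4] = { 0.0f, 0.0f, -1.0f / ZFar, 0.0f };  // s = -z_eye / zFar
glBindTexture(GL_TEXTURE_1D, TexRamp);
glEnable(GL_TEXTURE_1D);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGenfv(GL_S, GL_EYE_PLANE, sPlane);  // transformed by the inverse modelview at call time
glEnable(GL_TEXTURE_GEN_S);
// Draw the light view with color writes on; the framebuffer then holds depth as
// intensity, which a plain GL_RGB glCopyTexImage2D can grab at full speed.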

I still have doubts about this being a driver bug. Somehow, almost everything that didn't work for me in OpenGL ended up being my own bug.

deathkrush    350
Quote:
Original post by Seroja: OK, now that makes sense. But I don't think I can fool the driver that easily. FBOs are out of my reach, so PBOs definitely are (not that I know how to use them anyway). Or is it possible with something simpler, like pbuffers?

Is ARB_render_texture supported? If so, it can be used in combination with pbuffers to achieve the same thing as an FBO.
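The bind side looks roughly like this on Windows (a sketch - creating the pbuffer with the right WGL_TEXTURE_* attributes is the verbose part and is omitted; hPbuffer and TexColor are assumed to exist already, and note the base extension only exposes color buffers, not depth):

// WGL_ARB_render_texture: use the pbuffer's color buffer as a texture without copying
glBindTexture(GL_TEXTURE_2D, TexColor);
wglBindTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);   // pbuffer contents become the texture image
// ... draw using the texture ...
wglReleaseTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);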

Seroja    280
Render texture isn't supported, so I can only use glCopy.

Tested it today on the Intel, and it works fine :D
So the fault lies either with my computer or with ATi.

Positive: the Intel (which turned out to be a 945G, not a 915G), in addition to shadowing properly, also seems to support PCF. I used GL_LINEAR_MIPMAP_LINEAR, turned on blending, changed the drawing order to dim first, then bright, and the shadows indeed became quite smooth (see the sketch below). Without blending there were "halos" around the shadows (where the alpha is neither 1 nor 0). If Intel can do it and nVidia can do it, why can't ATi?
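Concretely, the combination that worked is roughly this (a sketch - the blend function and depth func are my guesses at reproducing what I have; with GL_DEPTH_TEXTURE_MODE set to GL_ALPHA in my init code, the PCF compare result lands in alpha):

// Linear filtering on the depth texture lets the hardware average compare results (PCF)
glBindTexture(GL_TEXTURE_2D, TexDepth);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Dim pass first, then blend the bright pass on top, weighted by the shadow alpha
DrawObjects(3);                                    // dim light
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthFunc(GL_EQUAL);                             // re-draw only the same surfaces
DrawObjects();                                     // bright light, faded in by PCF alpha
glDepthFunc(GL_LEQUAL);
glDisable(GL_BLEND);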

Negative: the grass started getting "animated" - i.e. I can see the triangles that make up the grass, and the texture moves across them. I distorted the texture coordinates and didn't supply a proper Q coordinate, I know, but on nVidia and ATi the behavior is at least stable (when the grass plane moves, the texture sticks to it). Oh, and Intel's driver doesn't know that the fog enable is part of the enable bits, so I had to change PushAttrib to all bits to make it work.

Thanks for the help - I'm pretty much OK with the situation now. I'd still like someone with an X800 (or similar) to tell me if they see the same low FPS.

Edit: just a quick question that came to mind - assuming the GPU has PCF, if a fragment has an alpha of 0.3 and PCF says 50% coverage (0.5 alpha), what will the final alpha of the fragment be? 0.15, 0.5 or something else?

[Edited by - Seroja on August 6, 2007 1:50:31 PM]

nullsquared    126
Quote:
Original post by Seroja: I'd still like someone with an X800 (or similar) to tell me if they see the same low FPS.

I can't contribute anything other than the fact that my X850XT runs that demo at a constant 4FPS [sad].

Seroja    280
Quote:
Original post by agi_shi: I can't contribute anything other than the fact that my X850XT runs that demo at a constant 4FPS [sad].

Thanks. That makes it very likely this is an X??? / X8?? series bug. Is there any point in notifying ATi about it? I mean, glCopy is old and isn't widely used (especially on a card with FBOs), and the X800 isn't a new card they really care about. If yes, does anyone know how / where?

nullsquared    126
Quote:
Original post by Seroja: Thanks. That makes it very likely this is an X??? / X8?? series bug. Is there any point in notifying ATi about it? [...] If yes, does anyone know how / where?

TBH, I don't see any point. They probably know about it already (well, they're the ones who wrote the drivers [lol])... and besides, these cards are already two generations behind [wink].

Speaking of drivers... try out the Omega drivers. They're made "for" games. Maybe they fix this issue?

Seroja    280
Quote:
Original post by agi_shi: TBH, I don't see any point. They probably know about it already... Speaking of drivers... try out the Omega drivers. They're made "for" games. Maybe they fix this issue?

I *was* on Omega before I hit this bug. Then I decided to update the drivers, got Omega's latest, and it didn't work for some reason. So currently I'm on the latest Catalyst.

Which means this is a "serious" bug - one that has been in the drivers for some time. That's probably because the technique isn't used in games and is therefore considered low priority (or maybe they never even noticed it).

