# OpenGL Problems with camera movement and rotation.

## Recommended Posts

Wodzu    122
Hi guys. I am trying to learn how to operate my camera in OpenGL. However, I think I have problems with understanding translations in local/global coordinate systems. I want to move my camera freely around a cube which is located at (0, 0, -5). However, instead of the camera moving, the scene looks as if the cube is moved to the origin of the global coordinate system. Also, the rotation of my camera doesn't look "natural" to me; something is wrong there as well. Here is the crucial part of my code:

```pascal
procedure ReSizeGLScene(Width, Height: Integer); cdecl;
begin
  if Height = 0 then
    Height := 1;
  glViewport(0, 0, Width, Height);
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity;
  gluPerspective(45, Width / Height, 0.1, 1000);
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity;
end;

procedure GLKeyboard(Key: Byte; X, Y: Longint); cdecl;
begin
  if Key = 27 then
    Halt(0);
  case Key of
    97:  CameraPosition.Z := CameraPosition.Z + 1;
    122: CameraPosition.Z := CameraPosition.Z - 1;
  end;
end;

procedure GLSpecialKeyboard(Key: Longint; X, Y: Longint); cdecl;
begin
  case Key of
    GLUT_KEY_LEFT:  CameraAngle.Y := CameraAngle.Y - 1;
    GLUT_KEY_RIGHT: CameraAngle.Y := CameraAngle.Y + 1;
    GLUT_KEY_DOWN:  CameraAngle.X := CameraAngle.X + 1;
    GLUT_KEY_UP:    CameraAngle.X := CameraAngle.X - 1;
  end;
end;

procedure DrawGLScene; cdecl;
begin
  glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT);
  glLoadIdentity;
  glRotatef(CameraAngle.Y, 0, 1, 0);
  glRotatef(CameraAngle.X, 1, 0, 0);
  glTranslatef(CameraPosition.X, CameraPosition.Y, CameraPosition.Z);
  glutWireCube(1);
  glutSwapBuffers;
end;
```

Also, the full working program can be downloaded here: http://www.speedyshare.com/files/21386187/MyCamera.zip

The effect I want to achieve is to point my camera towards some point in the scene and move the camera towards that point. Thanks for your time.

##### Share on other sites
coderx75    435
If you want to look at a specific object/position, you can just use the gluLookAt() function. However, if you're trying to freely rotate your camera, a rotation matrix based on the camera's rotation won't do. You must use the inverse of the camera's rotation matrix.

EDIT: If the inverted matrix is your solution, let me know. It's a bit of a pain in the ass and I've got a working implementation here somewhere that I can post.

##### Share on other sites
Wodzu    122

I want to move my camera freely, like in a 3D space game.

Could you give me an example of how to use such an inverted matrix?

First I need to create a rotation matrix, umm... from my angles?
Then I need to invert it?
And then I need to multiply the current view matrix by that inverted matrix?

EDIT: Yes, that is exactly what I need, thank you for your help. But will I be able to understand your implementation? I mean, I need to learn how to use such a matrix to achieve a given result.

[Edited by - Wodzu on March 12, 2010 8:40:14 AM]

##### Share on other sites
coderx75    435
I'm probably jumping ahead and should be careful not to make this more confusing than it really is. It's been a while since I've had to delve into OpenGL so I need to get my bearings.

Okay, I see you're using a glRotatef() call for each axis. This is a pretty straightforward method, so you can scratch my earlier comment about inverted matrices. This method (for me, anyway) tends to require some trial and error. For instance (and this is going on memory alone), you may need to negate the coordinates before translating:

```pascal
glTranslatef(-CameraPosition.X, -CameraPosition.Y, -CameraPosition.Z);
```

instead of:

```pascal
glTranslatef(CameraPosition.X, CameraPosition.Y, CameraPosition.Z);
```

If you think about this logically, the objects in your screen space appear to move in the opposite direction of the camera movement. Looking out of a train window (left) as the train moves forward (right), the train station appears to move opposite (left).

Also, I don't see where your cube's position is being set, and it appears that the cube is at the origin. To shift the position:

```pascal
glPushMatrix();                      // preserve the camera rotation matrix
glTranslatef(0.0, 0.0, -5.0);        // shift the cube position
glMultMatrixf(Obj->GetRotation());   // if you need to rotate the cube, do it here
glutWireCube(1);                     // same as before
glPopMatrix();                       // retrieve the camera rotation matrix
```

Hope this helps!

##### Share on other sites
coderx75    435
In case you need it at any time in the future, here's an implementation of a matrix inversion function in BASIC:

```basic
SUB TPLInvertMatrix (Dest AS SINGLE PTR, Source AS SINGLE PTR)
	DIM X AS INTEGER
	DIM Y AS INTEGER
	DIM Index AS INTEGER
	DIM Minor(11) AS SINGLE
	DIM Adjoint(11) AS SINGLE
	DIM AS SINGLE Determinant = Source[0] * (Source[5] * Source[10] - Source[9] * Source[6]) - _
		Source[4] * (Source[1] * Source[10] - Source[9] * Source[2]) + _
		Source[8] * (Source[1] * Source[6] - Source[5] * Source[2])
	DIM AS SINGLE DetRec = 1.0 / Determinant  'Determinant reciprocal

	'Calculate minors of source matrix
	Minor(0) = Source[5] * Source[10] - Source[9] * Source[6]
	Minor(1) = Source[4] * Source[10] - Source[8] * Source[6]
	Minor(2) = Source[4] * Source[9] - Source[8] * Source[5]
	Minor(4) = Source[1] * Source[10] - Source[9] * Source[2]
	Minor(5) = Source[0] * Source[10] - Source[8] * Source[2]
	Minor(6) = Source[0] * Source[9] - Source[8] * Source[1]
	Minor(8) = Source[1] * Source[6] - Source[5] * Source[2]
	Minor(9) = Source[0] * Source[6] - Source[4] * Source[2]
	Minor(10) = Source[0] * Source[5] - Source[4] * Source[1]

	'Calculate cofactors and adjoint in one shot
	Adjoint(0) = Minor(0)
	Adjoint(1) = -Minor(4)
	Adjoint(2) = Minor(8)
	Adjoint(4) = -Minor(1)
	Adjoint(5) = Minor(5)
	Adjoint(6) = -Minor(9)
	Adjoint(8) = Minor(2)
	Adjoint(9) = -Minor(6)
	Adjoint(10) = Minor(10)

	'Finally, we find the inverse by dividing the adjoint by
	'the determinant |A|.  Since you can't divide a matrix,
	'we simply multiply each value by the reciprocal.
	FOR Y = 0 TO 2
		FOR X = 0 TO 2
			Index = Y * 4 + X
			Dest[Index] = DetRec * Adjoint(Index)
		NEXT X
	NEXT Y

	'Last column can simply be copied
	Dest[3] = Source[3]
	Dest[7] = Source[7]
	Dest[11] = Source[11]
END SUB
```

This inverts ODE (physics) matrices, so the format is different from OpenGL's. Here's another function that performs the conversion:

```basic
SUB RenderConvertODEMatrix (Source AS dReal PTR, Dest AS GLFloat PTR)
	Dest[0] = Source[0]:  Dest[1] = Source[4]:  Dest[2] = Source[8]:   Dest[3] = 0
	Dest[4] = Source[1]:  Dest[5] = Source[5]:  Dest[6] = Source[9]:   Dest[7] = 0
	Dest[8] = Source[2]:  Dest[9] = Source[6]:  Dest[10] = Source[10]: Dest[11] = 0
	Dest[12] = 0:         Dest[13] = 0:         Dest[14] = 0:          Dest[15] = 1
END SUB
```
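For reference, the conversion above is essentially a transpose of the 3x3 rotation block: ODE stores the rotation row-major with a 4-float row stride, while OpenGL expects a column-major 4x4. Here is a minimal sketch of the same index mapping in Python (the function name is mine, just for illustration):

```python
def ode_to_opengl(src):
    """Map a 12-float row-major ODE rotation matrix (4-wide row stride)
    into a 16-float column-major OpenGL matrix, mirroring the BASIC sub."""
    dst = [0.0] * 16
    for row in range(3):
        for col in range(3):
            # ODE index: src[row*4 + col]; OpenGL index: dst[col*4 + row]
            dst[col * 4 + row] = src[row * 4 + col]
    dst[15] = 1.0  # homogeneous corner; translation column stays zero
    return dst
```

Writing the loop this way makes it clear that only the rotation block is transposed; the translation column and bottom row are fixed up separately, just as in the BASIC version.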

##### Share on other sites
Wodzu    122
Thank you coderx75 for the answers.

However, you said that I do not need the inverted matrix, but then you suggest using glMultMatrixf(). So I am totally lost now...

##### Share on other sites
karwosts    840

So when dealing with cameras and objects, you have two matrices that you need to worry about. One is the Model Matrix, and the other is the View matrix. These are often combined together to get the "ModelView" matrix that you will often hear of.

When you define a mesh, all of the coordinates will typically be in "Object Space". This defines the location of vertices relative to the origin of your object. The object space coordinates have no information about where the object is in the "World" (or Global Coordinate System). If you have a cube at 0,0,0 and you translate the cube to another position, the object space coordinates are always the same, because they only define vertices relative to the object.

So if we want to move our cube to a new location, we need a way to move each vertex from "Object Space" into "World Space". This is done via a Model Matrix. If you want to translate your object 5 units to the right, then you define a transformation matrix that encodes a translation to the right by five units. This matrix is 'M', our model matrix. So now if we have a point 'p' in object space, and we want to move it into world space as 'P', we transform it with the model matrix like so: P = Mp. Your model matrix can contain as many translations, rotations, and scalings as you need. If you want to translate, then rotate, then scale your model via matrices To (T object), Ro, and So, then your model matrix is constructed as M = To*Ro*So.
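To make the P = Mp idea concrete, here is a small numeric sketch (in Python rather than the thread's Pascal; the helper names are mine). It builds translate/rotate/scale matrices with the same conventions as glTranslatef/glRotatef/glScalef, composes M = To*Ro*So, and pushes an object-space point through it:

```python
import math

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_y(deg):
    # Same convention as glRotatef(deg, 0, 1, 0)
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, p):
    v = [p[0], p[1], p[2], 1.0]
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(3)]

# M = To * Ro * So: the rightmost matrix (scale) applies to the point first
M = matmul(translate(5, 0, 0), matmul(rotate_y(90), scale(2, 2, 2)))
p = (1.0, 0.0, 0.0)   # object-space point
P = transform(M, p)   # world space: scaled to (2,0,0), yawed to (0,0,-2), moved to (5,0,-2)
```

The rightmost matrix (So) touches the point first, which is why the last OpenGL call you issue is the first transform applied to the object.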

But what about the view matrix? Because there is no camera construct in OpenGL, we must transform our vertices again into a new space called "Eye Space". The viewport in opengl always looks out from 0,0,0 in the negative z direction, so we must "transform the entire world" so that it looks accurate from that space.

As coderx was describing with the train analogy, you move your view in openGL by moving the entire world in the opposite direction. When you turn your head to the right, this is exactly the same thing as if the entire world is rotating to the left. When you move your eyes up, it is also as if the entire world is moving down. So however we want to move our "camera", we must transform the world by the inverse of this movement.

So we need to come up with a matrix V (view matrix), that transforms our world coordinates into eye coordinates. Using our original point p and model matrix M, the equation now looks like this: P = VMp, where P is now in eye space.

Now remember that V is supposed to be the inverse of the camera movement that we want. Lets say what we really want to do is move our camera up and then rotate it 45 degrees downward to get a birds eye view of our world. If we treat the camera like an object, then we want to transform our camera by a translation (Tc) and a rotation (Rc): C = Tc*Rc.

Now transform matrix C will take an object and translate it up and rotate it. But what we really want is the inverse of C, which will be our view matrix. inverse(C) = C' = V

Now to find C' (also known as V), you can either invert the matrix C using matrix inversion methods, or you can just compute it from the original transformations. Because of the properties of matrices, this holds true:

C' = (TcRc)' = Rc'Tc'

Where Tc and Rc are our camera transforms. However Tc' and Rc' are very easy to calculate. The inverse of a translation is just a translation in the opposite direction, and the inverse of a rotation is just a rotation in the opposite direction.
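This inverse rule is easy to verify numerically. A quick sketch (Python, plain nested lists; the helper names are mine): build C = Tc*Rc, build V = Rc'*Tc' from the negated transforms, and check that V*C is the identity:

```python
import math

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_y(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def is_identity(m, eps=1e-9):
    return all(abs(m[i][j] - (1.0 if i == j else 0.0)) < eps
               for i in range(4) for j in range(4))

# Camera transform C = Tc * Rc: move up 3 units, then yaw 45 degrees
C = matmul(translate(0, 3, 0), rotate_y(45))

# View matrix V = Rc' * Tc': opposite rotation first, then opposite translation
V = matmul(rotate_y(-45), translate(0, -3, 0))

# V * C collapses to the identity, confirming V = inverse(C)
print(is_identity(matmul(V, C)))
```

Note that the order reverses as well as the signs: undoing "translate then rotate" means "un-rotate then un-translate".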

So now you can transform your original vertex into eye space like so:

P = Rc' * Tc' * To * Ro * So * p

or

P = VMp

You can either construct this by building the "VM" matrix yourself, or you can build it with opengl transform functions. For example:

```
glLoadIdentity();                        // Modelview matrix now identity

// Set up the camera
glRotatef(negative camera rotation);     // Modelview matrix now = Rc'
glTranslatef(negative camera movement);  // Rc' * Tc'

// Set up the model matrix
glTranslatef(object translation);        // Rc' * Tc' * To
glRotatef(object rotation);              // Rc' * Tc' * To * Ro
glScalef(object scale);                  // Rc' * Tc' * To * Ro * So

// Now send your object-space vertices, which are transformed into eye space
glVertex(p);                             // P = Rc' * Tc' * To * Ro * So * p
```

And that concludes the basics of opengl cameras :)

I know it is confusing at first, but after you work at it for a while it will make more sense.

##### Share on other sites
Wodzu    122
Thank you karwosts for your explanation and your time.

You explained it very nicely, even nicer than the OpenGL book itself ;)

There is one strictly mathematical thing which I do not know how to calculate.
I have some ideas but I do not want to reinvent the wheel.

Let's assume that I've rotated my camera view by three angles, so I now have a new vector pointing into space. I would like to move my camera along this vector by some unit distance, so I need to know how much I must translate in X, Y and Z.

I know how to do this in 2D space, but I do not know how to do it in 3D, at least not in an easy way.

The idea which I have is this:

I have the old vector and 3 angles. I need to rotate this vector and calculate its new coordinates. When I have the new coordinates, I normalize the vector and multiply it by the unit distance. Then I add this value to the camera position.

But this is a lot of work, and I am redoing something which is already done by OpenGL.

How can I do it in a simpler way? The ideal way would be to know only how much I need to translate, without calculating the new vector myself.

Regards.

##### Share on other sites
karwosts    840
Actually this information is stored inside the model matrix for you and easy to pull out. When you look at the actual elements of the matrix, this is what they represent:

```
 0  4  8 12       Rx Ux Ox Px
 1  5  9 13   =   Ry Uy Oy Py
 2  6 10 14       Rz Uz Oz Pz
 3  7 11 15        0  0  0  1
```

So elements 0, 1, 2 are the Right vector (Rx, Ry, Rz); 4, 5, 6 are the Up vector; and 8, 9, 10 are the Out vector (the direction). Elements 12, 13, 14 contain your translation. So whenever you perform the matrix math yourself, you already have your out vector. If you want to use the OpenGL transformations instead of performing the matrix ops yourself (the preferred way is to do it yourself, but that is more advanced), you can download the matrix with glGetFloatv and examine its elements.
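As a concrete sketch of pulling the Out vector from such a matrix (Python, using a flat 16-element column-major list like the one glGetFloatv would fill; the helper names are mine):

```python
import math

def rotate_y(deg):
    """Yaw matrix as a flat, column-major list of 16 floats, the layout
    OpenGL uses for glGetFloatv(GL_MODELVIEW_MATRIX, ...)."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [c,   0.0, -s,  0.0,   # elements 0-3:   Right vector
            0.0, 1.0, 0.0, 0.0,   # elements 4-7:   Up vector
            s,   0.0, c,   0.0,   # elements 8-11:  Out vector
            0.0, 0.0, 0.0, 1.0]   # elements 12-15: position

def move_along_out(pos, m, distance):
    # Elements 8, 9, 10 hold the Out (direction) vector
    out = (m[8], m[9], m[10])
    length = math.sqrt(sum(v * v for v in out))
    return tuple(p + distance * v / length for p, v in zip(pos, out))

m = rotate_y(90)                                   # camera yawed 90 degrees
new_pos = move_along_out((0.0, 0.0, 0.0), m, 2.0)  # step 2 units along Out
```

Whether "forward" is +Out or -Out depends on your conventions (OpenGL's camera looks down -Z), so a sign flip on `distance` covers both cases.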

##### Share on other sites
Wodzu    122
Thank you karwosts :)

I now have a working camera, however it was made by trial and error, and I feel that I only half understand how it works :|

I know that the order in which the commands are issued is crucial, due to the matrix multiplication.
However, I cannot find the logic in it. I wanted to compare the two simplest cases to observe how the rotation occurs.

Here are the cases:

CASE 1:

```pascal
gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0);
glRotatef(90, 1, 0, 0);
glRotatef(90, 0, 1, 0);
glutWireCube(1);
```

CASE 2:

```pascal
gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0);
glRotatef(90, 0, 1, 0);
glRotatef(90, 1, 0, 0);
glutWireCube(1);
```

So I only switched the order of rotation commands.

What I find illogical and impossible to understand (after rotating this damn cube for hours ;)) is this:

In the first case, thinking in terms of the grand fixed origin, the commands are issued in reversed order, so:

1. Cube is drawn.
2. Cube is rotated around OY counterclockwise by 90 degrees.
3. Cube is rotated around OX counterclockwise by 90 degrees.
4. Cube is translated -5 units in the Z direction from the origin.

Am I thinking correctly?

But the same thinking fails me in case 2; here it is:

1. Cube is drawn.
2. Cube is rotated around OX counterclockwise by 90 degrees.
3. Cube is rotated around OY counterclockwise by 90 degrees.

However, the effect is different from what I expected! It looks like step 2 (rotation around OX) also rotated the OY axis by 90 degrees! But in the first case, rotation around OY did not rotate the OX axis. This is the thing I do not understand.

Why, in the first case, has the coordinate system not been rotated with the object, while in the second case it has?

I cannot see the logic here. Either in both cases the coordinate systems should be rotated with the object, or they should stay fixed.

I am lost... :|

##### Share on other sites
karwosts    840
Quote:
 In first case when I am thinking in terms of grand fixed origin the commands are issued in the reversed order, so:
 1. Cube is drawn.
 2. Cube is rotated around OY counterclockwise by 90 degrees.
 3. Cube is rotated around OX counterclockwise by 90 degrees.
 4. Cube is translated -5 units in Z direction from the origin.
 Am I thinking correct?

Sounds right to me. Your second case should work equally well, and I'm not sure I really understand what is wrong.

Forgive me if I'm missing something, but how can you tell what is happening by rotating a cube by 90 degrees? If I take a cube and rotate it by 90 degrees, doesn't it look exactly the same? I think you need to render some kind of object that is visually unique from all sides so you can tell what is happening. Either that, or you can just draw some axes on your cube:

```pascal
glLoadIdentity;
gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0);
glRotatef(90, 1, 0, 0);
glRotatef(90, 0, 1, 0);
glutWireCube(1);
glBegin(GL_LINES);
  glColor3f(1, 0, 0); glVertex3f(0, 0, 0); glVertex3f(2, 0, 0);
  glColor3f(0, 1, 0); glVertex3f(0, 0, 0); glVertex3f(0, 2, 0);
  glColor3f(0, 0, 1); glVertex3f(0, 0, 0); glVertex3f(0, 0, 2);
glEnd();
```

If that's still not working you can post some images and I can maybe understand better.

##### Share on other sites
thomasfn1    111
I had more or less the same problem. Here was my solution (in C++):

```cpp
glLoadIdentity();
glRotatef(-camPitch, 1, 0, 0);
glRotatef(-camYaw, 0, 1, 0);
glTranslatef(-camX, -camY, -camZ);
```

The cam values are handled in a Lua script in my implementation, which has dodgy angle calculations, so I'm not sure if camPitch and camYaw follow the standard form, but a little tinkering with minus signs should fix it.

Edit:
You'd then render all your stuff in world coords (so your cube at 0, 0, 5) after that.

##### Share on other sites
Wodzu    122
First of all, thank you karwosts for your time and for helping me out.

Yes, you are absolutely right; in the example I gave, one could not say how the cube was rotated. I just modified my working example, maybe unnecessarily, and I confused you.

I am drawing the axes as you suggested, so we now have:

RED: X-axis
GREEN: Y-axis
BLUE: Z-axis

In case 1 I first rotate the cube 90 degrees around OY, and after that the Z-axis is in the place of the X-axis and the X-axis is in the place of the Z-axis. So the X-axis has been rotated as well. Then I rotate around the X-axis; however, the X-axis is now in the position of the Z-axis, but the cube rotates as if the X-axis were in its original position!

In case 2 I perform the same rotation around OY, and the axes are rotated in the same way (the X-axis is in the Z-axis position). But then, when I rotate around the X-axis, the rotation occurs in a different way! Now the cube rotates around the X-axis as if it were on the Z-axis!

I cannot understand that.

Here are images with my commentary; I hope this will now be easier to understand.

Here is the starting position for both cases:

Rotation 90 counterclockwise around OY gives this result for both cases:

As we can see, the X-axis is now in the place of the Z-axis. Also, the Z-axis should be on the left side of the cube; I don't understand why it is on the right side.

Now I perform the rotation around the X-axis, which is hidden (it is in the place of the Z-axis).

But instead of a local coordinate system rotation (X has been rotated), the image is now rotated around the BLUE axis of the world coordinate system.
So the question is: why has the cube been rotated around the BLUE axis?

Now let's compare it with the case 2 rotation:

Now the cube has been rotated around the RED axis instead of the BLUE axis (as in case 1).

Why is there a difference? I cannot understand the inconsistency in these rotations. Either both examples should rotate around the local coordinate system or around the world coordinate system, but they behave differently, and ONLY the order of rotations has been changed.

Here is the link to the working examples:

http://www.speedyshare.com/files/21431956/Rotation.zip

thomasfn1: Yes, this solves the problem (I found that solution on the NeHe tutorials page), but I would like to understand this rotation thing.

##### Share on other sites
karwosts    840
I think I must have confused you talking about applying operations in reverse order.

Quote:
 In case 1 I am rotating the cube firstly 90 degrees around OY, and after that the Z-axis is in the place of the X-axis and the X-axis is in the place of the Z-axis. So the X-axis has been rotated as well. Then I am rotating around the X-axis, however the X-axis is now in the position of the Z-axis, but the cube is rotating as if the X-axis were in the original position!

Quote:
 CASE 1:
```pascal
glLoadIdentity;
gluLookAt(0, 0, 5, 0, 0, 0, 0, 1, 0);
glRotatef(90, 1, 0, 0);
glRotatef(90, 0, 1, 0);
glutWireCube(1);
```

Every time you call a "gl{MatrixOp}f" command, it operates on the coordinate axes that have already been transformed by all of the previous operations.

So when you call gluLookAt (essentially glTranslate), you first translate the cube on its local coordinate system. Then when you call glRotate on X, you rotate the cube on its local X axis. Finally calling glRotate on the Y axis rotates it on the local Y axis (which is now parallel to the global Z axis as you already rotated the object around X).
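Since each gl call multiplies the current matrix on the right, case 1's code produces Rx*Ry while case 2 produces Ry*Rx, and those matrices are genuinely different. A quick numeric check (Python, 3x3 rotation matrices only; the helper names are mine):

```python
import math

def rot_x(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

p = (0.0, 1.0, 0.0)  # a point on the cube's +Y face

# CASE 1: glRotatef X then glRotatef Y => matrix Rx * Ry, so Ry hits the point first
case1 = apply(matmul(rot_x(90), rot_y(90)), p)
# CASE 2: glRotatef Y then glRotatef X => matrix Ry * Rx, so Rx hits the point first
case2 = apply(matmul(rot_y(90), rot_x(90)), p)

print(case1, case2)  # the two orders land the point on different axes
```

The point ends up on +Z in one case and on +X in the other, which is exactly the asymmetry being described in the thread: the second rotation happens in a frame that the first rotation has already turned.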

Quote:
 Rotation 90 counterclockwise around OY gives this result for both cases.
 ...
 Also the Z-axis should be on the left side of the cube, I don't understand why it is on the right side.

No this is correct. The +Z is towards the camera by definition. So if you rotate Y 90 degrees CCW then it will be pointed to the right, while X is pointed out.

I think I would suggest that you just spend some more time playing with it, possibly searching the internet for articles and more explanations.

I don't mind helping, it helps me too to verbalize these concepts and think about them, even after I think I understand it can still be confusing sometimes. I'm just afraid I've reached the limit of how I can explain it.

Best of luck!

##### Share on other sites
Wodzu    122
I think I've finally understood this. Somewhere along the way I confused the thing which is drawn on the screen with the actual code execution, and this gave me all the trouble.

Thank you for devoting your time in helping me on this. :)

