
OpenGL Camera and Viewing Transformations


uzipaz    240
First of all, I am aware that there is no such entity called a camera in OpenGL. By default, the eye is located at the origin, looking down the negative z-axis, with the up vector along the positive y-axis (please correct me if I am wrong here or anywhere).

So here's the deal: I am trying to make a class that will handle the view of the scene using keyboard input. Just like in an aircraft, there is thrust that moves the camera forward (as if we were sitting in a cockpit). Let us say I am using the W and S keys to control the thrust of this hypothetical camera, the A and D keys to control my heading (the direction I am looking), the Up and Down keys to control pitch, and the Left and Right keys to control roll.

If you have ever played Descent, there are actually ten ways to control the ship: thrust forward and backward, slide left and right, slide up and down, pitch up and down, and roll left and right.

This is what I am trying to do here: by using basic OpenGL transformation functions, I want to build my own camera class that lets me control the view of the scene easily, exactly the way I want to view it.

Here is the code so far that I was able to do:
[CODE]
#ifndef CAMERA_H_INCLUDED
#define CAMERA_H_INCLUDED

#include <SDL.h>
#include <GL/glew.h>
#include <math.h>
#include "Misc.h"

const GLfloat INCREMENT_HEADING = 0.1f;
const GLfloat INCREMENT_PITCH   = 0.1f;
const GLfloat INCREMENT_ROLL    = 0.1f;
const GLfloat INCREMENT_FORWARD = 1.0f;

// KeyState bits, from least to most significant:
// w, s, a, d, up, down, left, right
class EyeCam
{
private:
    GLfloat Angle_Heading; // yaw, in degrees
    GLfloat Angle_Pitch;   // in degrees
    GLfloat Angle_Roll;    // in degrees
    Uint8 KeyState;        // bitfield of currently held keys

public:
    GLfloat eyeX, eyeY, eyeZ; // eye position in world space

    EyeCam(GLfloat eyeX, GLfloat eyeY, GLfloat eyeZ,
           GLfloat aimX, GLfloat aimY, GLfloat aimZ);
    void onEvent(SDL_Event* event);
    void update();
};

#endif // CAMERA_H_INCLUDED
[/CODE]

[CODE]

#include "EyeCam.h"
#include <iostream>
EyeCam::EyeCam(GLfloat eyeX, GLfloat eyeY, GLfloat eyeZ, GLfloat aimX, GLfloat aimY, GLfloat aimZ)
{
this->eyeX = eyeX;
this->eyeY = eyeY;
this->eyeZ = eyeZ;
Angle_Heading = Misc::toDegrees(atan2f((eyeX - aimX), (eyeZ - aimZ)));
GLfloat base = sqrt(pow(eyeZ-aimZ, 2) + pow(eyeX - aimX, 2));
Angle_Pitch = Misc::toDegrees(atan2f( -(eyeY - aimY), base));
//Angle_Roll = 0.0;
KeyState = 0x00;
}


void EyeCam::onEvent(SDL_Event* event)
{
if (event->type == SDL_KEYDOWN)
{
if (event->key.keysym.sym == SDLK_RETURN)
{
system("cls");
std::cout << "X: " << eyeX << '\n';
std::cout << "Y: " << eyeY << '\n';
std::cout << "Z: " << eyeZ << "\n\n";
std::cout << "Heading: " << Angle_Heading << '\n';
std::cout << "Pitch: " << Angle_Pitch << '\n';
//std::cout << "Roll: " << Angle_Roll << '\n';
}
if (event->key.keysym.sym == SDLK_w)
KeyState = KeyState | 0x01;
if (event->key.keysym.sym == SDLK_s)
KeyState = KeyState | 0x02;
if (event->key.keysym.sym == SDLK_a)
KeyState = KeyState | 0x04;
if (event->key.keysym.sym == SDLK_d)
KeyState = KeyState | 0x08;
if (event->key.keysym.sym == SDLK_UP)
KeyState = KeyState | 0x10;
if (event->key.keysym.sym == SDLK_DOWN)
KeyState = KeyState | 0x20;
if (event->key.keysym.sym == SDLK_LEFT)
KeyState = KeyState | 0x40;
if (event->key.keysym.sym == SDLK_RIGHT)
KeyState = KeyState | 0x80;
}
else if (event->type == SDL_KEYUP)
{
if (event->key.keysym.sym == SDLK_w)
KeyState = KeyState & 0xFE;
if (event->key.keysym.sym == SDLK_s)
KeyState = KeyState & 0xFD;
if (event->key.keysym.sym == SDLK_a)
KeyState = KeyState & 0xFB;
if (event->key.keysym.sym == SDLK_d)
KeyState = KeyState & 0xF7;
if (event->key.keysym.sym == SDLK_UP)
KeyState = KeyState & 0xEF;
if (event->key.keysym.sym == SDLK_DOWN)
KeyState = KeyState & 0xDF;
if (event->key.keysym.sym == SDLK_LEFT)
KeyState = KeyState & 0xBF;
if (event->key.keysym.sym == SDLK_RIGHT)
KeyState = KeyState & 0x7F;
}
}
void EyeCam::update()
{
if (KeyState & 0x01) // w
{
eyeX = eyeX - INCREMENT_FORWARD * sin(Misc::toRadians(Angle_Heading));
eyeY = eyeY + INCREMENT_FORWARD * sin(Misc::toRadians(Angle_Pitch));
eyeZ = eyeZ - INCREMENT_FORWARD * cos(Misc::toRadians(Angle_Heading));
}
if (KeyState & 0x02) // s
{
eyeX = eyeX + INCREMENT_FORWARD * sin(Misc::toRadians(Angle_Heading));
eyeY = eyeY - INCREMENT_FORWARD * sin(Misc::toRadians(Angle_Pitch));
eyeZ = eyeZ + INCREMENT_FORWARD * cos(Misc::toRadians(Angle_Heading));
}
if (KeyState & 0x04) // a
{
Angle_Heading = Angle_Heading + INCREMENT_HEADING;
}
if (KeyState & 0x08) // d
{

Angle_Heading = Angle_Heading - INCREMENT_HEADING;
}
if (KeyState & 0x10) // Up
{
Angle_Pitch = Angle_Pitch + INCREMENT_PITCH;
}
if (KeyState & 0x20)
{
Angle_Pitch = Angle_Pitch - INCREMENT_PITCH;
}
if (KeyState & 0x40) // Left
{
Angle_Roll = Angle_Roll + INCREMENT_ROLL;
}
if (KeyState & 0x80) // Right
{
Angle_Roll = Angle_Roll - INCREMENT_ROLL;
}
glRotatef(-Angle_Pitch, 1.0, 0, 0);
glRotatef(-Angle_Heading, 0, 1.0, 0);
glTranslatef(-eyeX, -eyeY, -eyeZ);
glRotatef(-Angle_Roll, 0.0, 0.0, 1.0);
}

[/CODE]

The constructor of the class takes the same arguments as the gluLookAt() function, except for the up vector (I have not implemented that in my class yet): the position of the eye in x, y, z coordinates and a point of aim in x, y, z coordinates.

In the constructor, I calculate two angles, the heading and the pitch, using spherical trigonometry, and then use these angles to rotate the view accordingly. For example, with eye = (0, 0, 5) and aim = (0, 0, 0), both angles come out as zero, which matches the default view down the negative z-axis. Please note that I am not using the gluLookAt() function at all in my class.

Then, using keyboard commands, I simply change the angles to adjust my aim and recalculate the eye coordinates to adjust my position.

Now, here is the problem I am having. If I am originally looking down the negative z-axis with up along the positive y-axis, then I know that to adjust my heading I have to rotate the scene around the y-axis, to adjust pitch I have to rotate around the x-axis, and to adjust roll I have to rotate around the z-axis. Hence, I am using the equation of a circle with the heading angle to calculate eyeX and eyeZ, and with the pitch angle to calculate eyeY. Unfortunately, this only works as long as I stay in the XZ plane; once I leave that plane, I no longer get the desired results.
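To make that concrete, this is the forward direction my update() effectively moves along when I apply thrust (just restating the code above):
[CODE]
// Forward direction derived from the two angles (as used in update()):
GLfloat dirX = -sin(Misc::toRadians(Angle_Heading));
GLfloat dirY =  sin(Misc::toRadians(Angle_Pitch));
GLfloat dirZ = -cos(Misc::toRadians(Angle_Heading));
// Thrust then moves the eye by INCREMENT_FORWARD along this vector; note it
// is only a unit direction while the camera stays level in the XZ plane.
[/CODE]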

For example, if at the start I roll 90 degrees, so that my up is now the negative x-axis, then to pitch up relative to the camera I have to rotate around the y-axis, whereas originally, with zero roll, I was rotating around the x-axis. Similarly, while I am looking down the negative z-axis, roll works perfectly fine, because I am rotating around the z-axis; however, if I turn the camera to aim along the positive x-axis, roll no longer works, and in that case I have to rotate around the negative x-axis to make it work. I was able to solve this particular problem by changing the glRotatef call for roll

from:
glRotatef(Angle_Roll, 0, 0, 1);
to:
glRotatef(Angle_Roll, sin(toRadians(Angle_Heading)), 0, cos(toRadians(Angle_Heading)));

but if I am looking down the negative or positive y-axis, this again will not work as desired, and if I then try to change the heading, it automatically affects the roll transformation as well.

I hope I am not confusing you too much with my description; please correct me if I am wrong in my assumptions.

So, the bottom-line question is: what do I need to do to make this camera work in every orientation, so that it pitches, rolls, and turns correctly relative to its own orientation?

I am not very skilled at mathematics, but I have studied trigonometry and the equations of curves and shapes. Please let me know if I need to cover more mathematics in order to implement what I want.

Any input will be appreciated. Thanks.

Ignifex    520
If you are up for it, I recommend storing and manipulating your camera's orientation as a [url="http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation"]quaternion[/url]; there is a small sketch after the list below. Especially since you are also implementing roll, this will save you quite a bit of trouble. There are a few benefits to using quaternions:
- You gain the ability to interpolate between two orientations, which is very valuable if, for instance, you want to move your camera along a spline.
- By not having separate angles, there are no "duplicate" poses (look straight up, and rotating by yaw or by roll amounts to the same thing).
- Concatenating rotations (say you want a camera fixed to an object, and you rotate the object) becomes a lot easier.
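To make the idea concrete, here is a minimal sketch of quaternion rotation (my own naming, angles in radians, not taken from any particular library):
[CODE]
#include <cmath>

struct Quat { float w, x, y, z; };

// Quaternion for a rotation of 'angle' radians about a unit axis (ax, ay, az).
Quat fromAxisAngle(float ax, float ay, float az, float angle)
{
    float s = std::sin(angle * 0.5f);
    Quat q = { std::cos(angle * 0.5f), ax * s, ay * s, az * s };
    return q;
}

// Hamilton product: the combined rotation applies b first, then a.
Quat mul(const Quat& a, const Quat& b)
{
    Quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

// Pitch about the camera's own local x-axis. Multiplying on the right
// applies the new rotation in the camera's local frame, so it behaves
// correctly no matter how the camera is already oriented.
void pitchLocal(Quat& orientation, float angle)
{
    orientation = mul(orientation, fromAxisAngle(1.0f, 0.0f, 0.0f, angle));
}
[/CODE]
Heading, pitch and roll all become the same operation with a different axis, and the "which world axis do I rotate around now?" problem goes away.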

Regardless of which method of orientation you use, a camera has some position and orientation in space. Combining them gives you the camera's transformation. To apply this to the scene, you need to apply the inverse of this transformation to the whole scene.
First imagine performing several rotations and translations on your camera. Then, as you "undo" each of those in the inverse order, while also applying them to your scene, you end up with your camera back where it was, but with your scene having moved back along with it. This is exactly what you want to achieve.

Let's take Euler angles as an example. In matrix form, you can view it like this:
Camera Transform = Txyz * Ry * Rp * Rr
where Txyz is the translation, and Ry, Rp, Rr denote the rotations for yaw, pitch and roll. The inverse of this is the following:
Inverse Transform = Rr[sup]-1[/sup] * Rp[sup]-1[/sup] * Ry[sup]-1[/sup] * Txyz[sup]-1[/sup].
Apply that to your scene, and you are done.
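In your legacy GL code, that ordering would look roughly like this (a sketch using your variable names; each call applies one inverse factor to the scene):
[CODE]
// Inverse Transform = Rr^-1 * Rp^-1 * Ry^-1 * Txyz^-1, applied to the scene:
glRotatef(-Angle_Roll,    0.0f, 0.0f, 1.0f); // Rr^-1
glRotatef(-Angle_Pitch,   1.0f, 0.0f, 0.0f); // Rp^-1
glRotatef(-Angle_Heading, 0.0f, 1.0f, 0.0f); // Ry^-1
glTranslatef(-eyeX, -eyeY, -eyeZ);           // Txyz^-1
[/CODE]
Note that the translation comes last here, not between the rotations as in your update().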

haegarr    7372
IMHO one has to understand the concept of spaces. Statements like "looking along the negative z-axis", "rotating around the y-axis", and "you'll end up with your camera back where it was" are somewhat meaningless without knowing the reference frame in use. Sometimes it becomes clear from the context, but often enough it doesn't.

E.g. when one applies the said inverse camera transformation (a.k.a. the view transformation) to the entirety of objects, including the camera, the scene doesn't really change. Instead, one switches over to another reference system (usually called view space) in which the camera is located at the origin and oriented normally. (Whether looking is done along the negative z-axis in view space or something else belongs to the projection matrix and clipping.)

When one considers the concept of spaces, the question of when to apply which transformation at what position in the pipeline becomes clear.
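To put it as a formula: if C is the matrix that places the camera in world space (its position and orientation), then the view transformation is V = C[sup]-1[/sup], and a point given in world space is mapped into view space as p' = V * p.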

[quote name='Ignifex' timestamp='1345366628' post='4971044']
If you are up for it, I recommend storing and manipulating your camera's orientation as a [url="http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation"]quaternion[/url]. ...
[/quote]
Only unit quaternions represent pure rotations, to be precise. This is important insofar as one needs to normalize quaternions from time to time (not as often as rotation matrices need to be normalized, I agree).

[quote name='Ignifex' timestamp='1345366628' post='4971044']
- By not having separate angles, there are no "duplicate" poses (look up, and either rotate by yaw or roll).
[/quote]
A unit quaternion has 2 representations for each orientation, namely q and -q, so it isn't unique either. Especially when doing interpolation, one has the choice of following the short or the long arc (ignoring pathological cases here).

[quote name='Ignifex' timestamp='1345366628' post='4971044']
- Concatenating rotations (say you want a camera fixed to an object, and you rotate the object) becomes a lot easier.
[/quote]
Why this? Concatenation means multiplying quaternions on the one hand or matrices on the other. Spaces and order play their role regardless of whether one uses quaternions or matrices.

IMHO especially the example "you want a camera fixed to an object" is less intuitive when using quaternions, because "fixed to an object" implies a fixed position, too. This can be expressed as a single homogeneous matrix and a simple matrix product.
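As a sketch: if M[sub]obj[/sub] is the matrix placing the object in the world and M[sub]off[/sub] is the camera's fixed offset relative to that object, then the camera's world matrix is C = M[sub]obj[/sub] * M[sub]off[/sub], and the view matrix is V = C[sup]-1[/sup] = M[sub]off[/sub][sup]-1[/sup] * M[sub]obj[/sub][sup]-1[/sup]; one matrix product, regardless of the rotation representation used internally.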

Ignifex    520
I agree that the concept of spaces is essential for a good understanding of how transformations work, especially in the case of view transformations, where there are quite a few distinct spaces to understand (object space, world space, view/eye space, clip space).
IMO, there are many ways to view transformations. As you progress in your understanding of how they work, your view will likely change.
You may well be right that one should start by understanding spaces. How about reading something like [url="http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter04.html"]this[/url]?

My earlier post was mainly an attempt to intuitively explain some of the core mechanics of transformations, without diving too deep into the maths behind them.
[quote name='haegarr' timestamp='1345371483' post='4971060']
Why this? Concatenation means to multiply quaternions on the one or matrices on the other hand. Spaces and order play their role regardless whether one uses quaternions or matrices.
[/quote]
This was mainly a comparison between quaternions and Euler angles. Finding the Euler angles of a concatenation of two rotations is awkward and inefficient. I couldn't think of a proper use case where concatenating the Euler angles themselves, instead of the matrices derived from them, makes sense though.

Euler angles are good for letting a user specify an angle, but are far less useful when applying those angles as transformations.
Matrices are the default approach for applying transformations.
Quaternions are useful for manipulating rotations, such as interpolation.
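As a small illustration of that split, reusing the Quat helpers from my sketch above: let the user edit Euler angles, but convert them to a quaternion the moment the rotation needs to be manipulated (again my own naming, angles in radians):
[CODE]
// Build a quaternion from user-facing Euler angles, in the same
// Ry * Rp * Rr (yaw, pitch, roll) order as the matrix example earlier.
Quat fromEuler(float yaw, float pitch, float roll)
{
    Quat qy = fromAxisAngle(0.0f, 1.0f, 0.0f, yaw);
    Quat qp = fromAxisAngle(1.0f, 0.0f, 0.0f, pitch);
    Quat qr = fromAxisAngle(0.0f, 0.0f, 1.0f, roll);
    return mul(mul(qy, qp), qr);
}
[/CODE]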

uzipaz    240
@Ignifex

As you suggested, I will take a look at quaternions; I had never heard the term before.

As far as the camera and inverse transformations are concerned, that is what I am already trying to do here; the knowledge I have of OpenGL and viewing transformations comes from the red book.

From what I understand, there is no such thing as a 'camera' in the OpenGL library. If you want to view an object that is drawn and centered at the origin, you can either use gluLookAt(0, 0, +z, 0, 0, 0, 0, 1, 0) or instead use glTranslatef(0, 0, -z); both will have the same effect (for this case at least).
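In code, what I mean is that these two are interchangeable for this particular setup:
[CODE]
// Looking at the origin from (0, 0, z), with up along +y:
gluLookAt(0.0, 0.0, z, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
// ...has the same effect as simply pushing the scene away:
glTranslatef(0.0f, 0.0f, -z);
[/CODE]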

From what I have studied and understand, the name "LookAt" is almost false advertising. You are not actually looking somewhere in the scene; the function just applies inverse transformations to the whole scene, so you think you are moving a camera through the scene when you are actually just transforming the scene. But then again, I might be incorrect here, and it's a completely different debate.

In my class, I am doing a similar thing: taking vector input in Euclidean space, finding the appropriate rotation angles for heading and pitch, and then applying the inverse of these as rotations about the axes, and it works fine, just as I please.

The problem I am having, if you take a look at my code, is that as long as the initial arguments place this hypothetical camera in the XZ plane, the heading works fine. However, if I pitch or roll to 180 degrees and then apply my heading calculation, it seems to work in reverse: if I give the input to turn left, it turns right, and vice versa, because I am still rotating around the positive y-axis.

Please note that when I was trying to implement this class, I was not thinking in terms of matrices; I was thinking in terms of the transformation commands (glRotatef and glTranslatef).

