# OpenGL Rotating Camera

## Recommended Posts

Hello, I'm playing around with the concept of a camera that orbits a target (i.e. the gaze direction always points at the target, and the camera is rotated by confining the eye coordinates to a sphere around the target). To implement this I'm using OpenGL; more specifically, I'm trying to use the gluLookAt function and modify the eye, 'up', and gaze coordinates to achieve the orbiting effect. There have been a lot of problems so far and I was hoping to get some help here :)

What I do is: when there is a mouse movement, I translate it into how many degrees the camera should rotate around the x, y, and z axes. Then I rotate the camera: I translate the target and eye so that the target is at the origin, convert the Cartesian coordinates of the camera eye to spherical coordinates, add the desired angles of rotation to theta and phi, and convert back to Cartesian coordinates. The relevant code is below.

Function to rotate the camera:
void rotateCamera(GLfloat x, GLfloat y, GLfloat z)
{
    // Translate eye so that target is at origin
    cam.eye[ 0 ] -= cam.gaze[ 0 ];
    cam.eye[ 1 ] -= cam.gaze[ 1 ];
    cam.eye[ 2 ] -= cam.gaze[ 2 ];

    // Spherical coordinates
    double theta = 0;
    if(cam.eye[ 0 ] != 0)
    {
        theta = atan(cam.eye[ 1 ] / cam.eye[ 0 ]);  // NOTE: plain atan cannot tell opposite quadrants apart
    }
    // x-coordinate is zero: the eye sits on the y-axis, so set theta directly
    else if(cam.eye[ 1 ] > 0)
    {
        theta = M_PI / 2.0;
    }
    else if(cam.eye[ 1 ] < 0)
    {
        theta = -M_PI / 2.0;
    }

    double phi = acos(cam.eye[ 2 ] / cam.radius);

    // Rotate around y-axis (azimuth), converting degrees to radians
    theta += x * M_PI / 180.0;

    // Rotate around x-axis (inclination)
    phi += y * M_PI / 180.0;

    cam.eye[ 0 ] = cam.radius * cos(theta) * sin(phi);
    cam.eye[ 1 ] = cam.radius * sin(theta) * sin(phi);
    cam.eye[ 2 ] = cam.radius * cos(phi);

    // Re-calculate 'up' vector for camera
    GLfloat tmpVec[] = { cam.eye[ 0 ], cam.eye[ 1 ], cam.eye[ 2 ] };
    GLfloat yVec[]   = { 0.0f, 1.0f, 0.0f };
    GLfloat tmpLen = sqrt(tmpVec[ 0 ] * tmpVec[ 0 ] + tmpVec[ 1 ] * tmpVec[ 1 ] + tmpVec[ 2 ] * tmpVec[ 2 ]);

    tmpVec[ 0 ] /= tmpLen;
    tmpVec[ 1 ] /= tmpLen;
    tmpVec[ 2 ] /= tmpLen;

    // xprod1 = tmpVec x yVec (the camera's sideways axis)
    GLfloat xprod1[] = { tmpVec[ 1 ] * yVec[ 2 ] - tmpVec[ 2 ] * yVec[ 1 ],
                         tmpVec[ 2 ] * yVec[ 0 ] - tmpVec[ 0 ] * yVec[ 2 ],
                         tmpVec[ 0 ] * yVec[ 1 ] - tmpVec[ 1 ] * yVec[ 0 ] };

    // up = -(tmpVec x xprod1) == xprod1 x tmpVec; each component is
    // parenthesized so the -1 negates the whole cross-product term
    cam.up[ 0 ] = (tmpVec[ 1 ] * xprod1[ 2 ] - tmpVec[ 2 ] * xprod1[ 1 ]) * -1.0f;
    cam.up[ 1 ] = (tmpVec[ 2 ] * xprod1[ 0 ] - tmpVec[ 0 ] * xprod1[ 2 ]) * -1.0f;
    cam.up[ 2 ] = (tmpVec[ 0 ] * xprod1[ 1 ] - tmpVec[ 1 ] * xprod1[ 0 ]) * -1.0f;

    tmpLen = sqrt(cam.up[ 0 ] * cam.up[ 0 ] + cam.up[ 1 ] * cam.up[ 1 ] + cam.up[ 2 ] * cam.up[ 2 ]);
    cam.up[ 0 ] /= tmpLen;
    cam.up[ 1 ] /= tmpLen;
    cam.up[ 2 ] /= tmpLen;

    // Translate eye back
    cam.eye[ 0 ] += cam.gaze[ 0 ];
    cam.eye[ 1 ] += cam.gaze[ 1 ];
    cam.eye[ 2 ] += cam.gaze[ 2 ];
}
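
Aside: the x == 0 special case above disappears entirely if the conversion uses atan2, which resolves the quadrant and handles a zero x-coordinate by itself (plain atan cannot distinguish opposite quadrants). A minimal sketch, assuming the same cam struct as above; the helper name eyeToSpherical is made up for illustration:

#include <math.h>

// Convert the (already translated) eye position to spherical coordinates.
// Assumes cam.radius is the current eye-to-target distance.
void eyeToSpherical(double* theta, double* phi)
{
    *theta = atan2(cam.eye[ 1 ], cam.eye[ 0 ]);   // azimuth in (-pi, pi]
    *phi   = acos(cam.eye[ 2 ] / cam.radius);     // inclination in [0, pi]
}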


And here is how I translate mouse movements into rotations:
void mouseMove(int x, int y)
{
    if(mouse.leftButton)
    {
        GLfloat degX = mouse.xPos - (float) x;  // How much we rotate around the y-axis
        GLfloat degY = mouse.yPos - (float) y;  // How much we rotate around the x-axis

        rotateCamera(-1.0f * degX, degY, 0.0f);

        mouse.xPos = (x > 0 ? x : 0.0);
        mouse.yPos = (y > 0 ? y : 0.0);
    }
}


At first I always used (0.0f, 1.0f, 0.0f) for the 'up' vector, since I wanted the camera's up direction to stay parallel to the positive y-axis, but I suspected that might be the source of my problem, so I added all the code after the comment 'Re-calculate 'up' vector for camera' to try to fix it. Now I suspect the culprit might be my mouseMove function and how it translates mouse movements into rotations.

##### Share on other sites
YellowMaple,

I'm familiar with what you're trying to do; however, I don't usually implement it the way you do.

First I take the difference in the mouse coordinates (current minus previous) and turn it into the angles the camera should change around the x and y axes. This is based on the screen size and some magic numbers, which I tune until moving the mouse from the left side of the screen to the right rotates the camera somewhere between 360 and 720 degrees (use whatever feels best to you).

Movement along the y-axis is treated as pitch (up/down) and movement along the x-axis as yaw (side to side). Then I perform the following calculations to orbit the camera around its target:

void Camera::Orbit( float32 yaw, float32 pitch )
{
    // Create a rotation matrix that rotates about the y-axis
    Matrix4 rotationY;
    rotationY.BuildRotationY( yaw );

    // Get the direction vector
    Vector3 dir = m_Target - m_Eye;

    // Normalize the direction vector and cross it with the UP vector in
    // order to get the "right" vector, which is just a vector pointing
    // out of my right side.  This will be used as the arbitrary axis to
    // pitch on.  If we don't do this, then when we rotate around our
    // object and then try to pitch, our pitch direction will be reversed
    Vector3 right = dir.Normalize().Cross( m_Up );
    right.NormalizeInPlace();

    // Create a rotation matrix about the arbitrary axis created above, by
    // the angle passed in from the calling function
    Matrix4 rotationX;
    rotationX.BuildAxisAngle( right, pitch );

    // Create a composite transformation matrix
    Matrix4 transform = rotationY * rotationX;

    // Transform the direction vector into a new direction
    dir = transform.Transform3( dir );

    // Subtract the new direction from the target to get the new eye position
    m_Eye = m_Target - dir;

    // Re-build the camera matrix from the new eye, target, and up vectors,
    // which represents the view transform
    UpdateTransform();
}
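
To connect this to the mouse handling, here is a minimal sketch of the delta-to-angle scaling described above; the callback name, the g_Camera object, and the 0.5 degrees-per-pixel factor are all assumptions for illustration:

// Hypothetical glue: scale raw pixel deltas into yaw/pitch angles and
// feed them to Camera::Orbit. DEG_PER_PIXEL is one of the "magic
// numbers" mentioned above; tune it until the rotation feels right.
Camera g_Camera;                        // an instance of the Camera class above
static int g_LastX = 0, g_LastY = 0;    // previous mouse position

void onMouseMove(int x, int y)
{
    const float DEG_PER_PIXEL = 0.5f;   // sensitivity, tuned by eye
    float yaw   = (x - g_LastX) * DEG_PER_PIXEL;
    float pitch = (y - g_LastY) * DEG_PER_PIXEL;

    g_Camera.Orbit( yaw, pitch );       // orbit the camera around its target

    g_LastX = x;                        // remember position for the next delta
    g_LastY = y;
}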

I hope this code helps you out. Please let me know if you have any specific questions on the implementation of anything above.

Cheers!

##### Share on other sites
Thanks for your help! After some contemplation, I realized that spherical coordinates are actually more complicated than they first sound. I went with rotation matrices as you did, and it's working much more smoothly now. Thanks!

##### Share on other sites
I'm still having a couple of problems (although fewer than I had with spherical coordinates). It seems that if the camera eye is near or on the plane defined by (1, 0, -1), (-1, 0, 1), (0, 1, 0), it has trouble rotating for some reason. Also, on one side of that same plane, moving the mouse up points the camera down, while on the other side, moving the mouse up points the camera up.

Any ideas? I've used the same method jwalsh mentioned, using this as a guide to rotating around an arbitrary axis.

I hope that made sense :p Let me know if I need to clarify anything. Thanks!

The function below rotates the point 'point' around the axis 'axis' by 'angle' degrees:
// Rotates 'point' about 'axis' (assumed unit length) by 'angle' degrees.
// Standard construction: rotate the axis onto the z-axis, rotate about z,
// then undo the alignment.
void rotateMatrixArbitrary(GLfloat angle, GLfloat* axis, GLfloat* point)
{
  GLfloat a = axis[ 0 ], b = axis[ 1 ], c = axis[ 2 ];
  // Length of the axis projected onto the yz-plane
  GLfloat d = sqrt(b * b + c * c);

  MATRIX4 m1, m2, m3, mi1, mi2;
  initMatrix(&m1); initMatrix(&m2); initMatrix(&m3);
  initMatrix(&mi1); initMatrix(&mi2);

  if(d != 0.0)
  {
    // m1 rotates about the x-axis so the axis lands in the xz-plane.
    // The signed components must be used here (cos = c/d, sin = b/d);
    // taking fabs() of them breaks the rotation whenever b or c is negative
    m1.matrix[ 1 ][ 1 ] = c / d;
    m1.matrix[ 1 ][ 2 ] = -b / d;
    m1.matrix[ 2 ][ 1 ] = b / d;
    m1.matrix[ 2 ][ 2 ] = c / d;

    // mi1 is the inverse (transpose) of m1
    mi1.matrix[ 1 ][ 1 ] = c / d;
    mi1.matrix[ 1 ][ 2 ] = b / d;
    mi1.matrix[ 2 ][ 1 ] = -b / d;
    mi1.matrix[ 2 ][ 2 ] = c / d;
  }

  // After m1 the axis is (a, 0, d), so rotate about the y-axis with
  // cos = d and sin = -a to bring it onto the z-axis
  m2.matrix[ 0 ][ 0 ] = d;
  m2.matrix[ 0 ][ 2 ] = -a;
  m2.matrix[ 2 ][ 0 ] = a;
  m2.matrix[ 2 ][ 2 ] = d;

  // mi2 is the inverse (transpose) of m2
  mi2.matrix[ 0 ][ 0 ] = d;
  mi2.matrix[ 0 ][ 2 ] = a;
  mi2.matrix[ 2 ][ 0 ] = -a;
  mi2.matrix[ 2 ][ 2 ] = d;

  // Rotation about the z-axis by the requested angle
  rotateMatrixZ(angle, &m3);

  // Apply: align the axis with z, rotate, undo the alignment
  matrixMult(m1, point);
  matrixMult(m2, point);
  matrixMult(m3, point);
  matrixMult(mi2, point);
  matrixMult(mi1, point);
}

This is the function that rotates the camera eye based on mouse movements. It rotates around the point specified by cam.gaze:
void rotateCamera(GLfloat x, GLfloat y)
{
  // Translate eye so that target is at origin
  cam.eye[ 0 ] -= cam.gaze[ 0 ];
  cam.eye[ 1 ] -= cam.gaze[ 1 ];
  cam.eye[ 2 ] -= cam.gaze[ 2 ];

  MATRIX4 rotateY;
  rotateMatrixY(y, &rotateY);

  GLfloat dir[ 3 ] = { cam.eye[ 0 ], cam.eye[ 1 ], cam.eye[ 2 ] };
  normalize(dir);

  GLfloat rightVec[ 3 ];
  xprod(dir, cam.up, rightVec);
  normalize(rightVec);

  rotateMatrixArbitrary(x, rightVec, cam.eye);
  matrixMult(rotateY, cam.eye);

  // Translate eye back
  cam.eye[ 0 ] += cam.gaze[ 0 ];
  cam.eye[ 1 ] += cam.gaze[ 1 ];
  cam.eye[ 2 ] += cam.gaze[ 2 ];
}
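
One more note: Rodrigues' rotation formula rotates a point about an arbitrary unit axis directly, with no intermediate alignment matrices and therefore no sign or special-case pitfalls of the kind described above. A self-contained sketch (the function name and free-function layout are my own, not from this thread):

#include <math.h>

// Rotate 'p' (3 floats, modified in place) about the unit axis 'u' by
// 'angleDeg' degrees, using Rodrigues' formula:
//   p' = p*cosA + (u x p)*sinA + u*(u . p)*(1 - cosA)
void rotateRodrigues(float angleDeg, const float* u, float* p)
{
    const float PI = 3.14159265358979f;
    float A = angleDeg * PI / 180.0f;
    float c = cosf(A), s = sinf(A);

    float dot = u[0]*p[0] + u[1]*p[1] + u[2]*p[2];
    float cr[3] = { u[1]*p[2] - u[2]*p[1],   // u x p
                    u[2]*p[0] - u[0]*p[2],
                    u[0]*p[1] - u[1]*p[0] };

    for(int i = 0; i < 3; ++i)
        p[i] = p[i]*c + cr[i]*s + u[i]*dot*(1.0f - c);
}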
