OpenGL Rotating Camera

YellowMaple    174
Hello, I'm playing around with the concept of a camera that orbits a target (i.e. the gaze direction always points at the target, and the camera is rotated by confining the eye coordinates to a sphere around the target). To implement this I'm using OpenGL; more specifically, I'm using the gluLookAt function and modifying the eye, 'up', and gaze coordinates to achieve the orbiting effect. There have been a lot of problems so far and I was hoping to get some help here :)

What I do is, when there is a mouse movement, I translate it into how many degrees the camera should rotate around the x, y, and z axes. Then I try to rotate the camera: I translate the target and eye of the camera so that the target is at the origin, convert the Cartesian coordinates of the camera 'eye' to spherical coordinates, add the desired angles of rotation to theta and phi, and convert back to Cartesian coordinates. The relevant code is below.

Function to rotate the camera:
void rotateCamera(GLfloat x, GLfloat y, GLfloat z)
{
  // Translate eye so that target is at origin
  cam.eye[ 0 ] -= cam.gaze[ 0 ];
  cam.eye[ 1 ] -= cam.gaze[ 1 ];
  cam.eye[ 2 ] -= cam.gaze[ 2 ];
  
  // Spherical coordinates: atan2 handles all four quadrants,
  // including the x == 0 case, so no special-casing is needed
  double theta = atan2(cam.eye[ 1 ], cam.eye[ 0 ]);
  double phi = acos(cam.eye[ 2 ] / cam.radius);
  
  // Rotate around y-axis
  theta += DEG_TO_RAD(y);
  
  // Rotate around x-axis
  phi += DEG_TO_RAD(x);

  cam.eye[ 0 ] = cam.radius * cos(theta) * sin(phi);
  cam.eye[ 1 ] = cam.radius * sin(theta) * sin(phi);
  cam.eye[ 2 ] = cam.radius * cos(phi);
  
  // Re-calculate 'up' vector for camera
  GLfloat tmpVec[] = {cam.eye[ 0 ], cam.eye[ 1 ], cam.eye[ 2 ]};
  GLfloat yVec[] = { 0.0f, 1.0f, 0.0f};
  GLfloat tmpLen = sqrt(tmpVec[ 0 ] * tmpVec[ 0 ] + tmpVec[ 1 ] * tmpVec[ 1 ] + tmpVec[ 2 ] * tmpVec[ 2 ]);
  
  tmpVec[ 0 ] /= tmpLen;
  tmpVec[ 1 ] /= tmpLen;
  tmpVec[ 2 ] /= tmpLen;
  
  GLfloat xprod1[] = {tmpVec[ 1 ] * yVec[ 2 ] - tmpVec[ 2 ] * yVec[ 1 ],
					  tmpVec[ 2 ] * yVec[ 0 ] - tmpVec[ 0 ] * yVec[ 2 ],
					  tmpVec[ 0 ] * yVec[ 1 ] - tmpVec[ 1 ] * yVec[ 0 ]};

  // Negate the whole cross product (parenthesized so the -1 doesn't
  // bind to only the second term)
  cam.up[ 0 ] = -(tmpVec[ 1 ] * xprod1[ 2 ] - tmpVec[ 2 ] * xprod1[ 1 ]);
  cam.up[ 1 ] = -(tmpVec[ 2 ] * xprod1[ 0 ] - tmpVec[ 0 ] * xprod1[ 2 ]);
  cam.up[ 2 ] = -(tmpVec[ 0 ] * xprod1[ 1 ] - tmpVec[ 1 ] * xprod1[ 0 ]);
  
  tmpLen = sqrt(cam.up[ 0 ] * cam.up[ 0 ] + cam.up[ 1 ] * cam.up[ 1 ] + cam.up[ 2 ] * cam.up[ 2 ]);
  cam.up[ 0 ] /= tmpLen;
  cam.up[ 1 ] /= tmpLen;
  cam.up[ 2 ] /= tmpLen;

  // Translate eye back
  cam.eye[ 0 ] += cam.gaze[ 0 ];
  cam.eye[ 1 ] += cam.gaze[ 1 ];
  cam.eye[ 2 ] += cam.gaze[ 2 ];
}

and translating mouse movements to rotations:
void mouseMove(int x, int y)
{
  if(mouse.leftButton)
  {
    GLfloat degX = mouse.xPos - (float) x;  // How much we rotate around the y-axis
    GLfloat degY = mouse.yPos - (float) y;  // How much we rotate around the x-axis

    rotateCamera(-1.0f * degX, degY, 0.0f);

    mouse.xPos = (x > 0 ? x : 0.0);
    mouse.yPos = (y > 0 ? y : 0.0);
  }
}

At first, for the 'up' vector, I was always using (0.0f, 1.0f, 0.0f), because I always wanted the camera's up direction to be parallel with the positive y-axis. I suspected that might be the source of my problem, so I added all the code after the comment 'Re-calculate 'up' vector for camera' to try to fix it. Now I suspect the problem might be in my mouseMove function and how it translates mouse movements into rotations.

JWalsh    498
YellowMaple,

I'm familiar with what you're trying to do; however, I don't usually implement it the way you do.

First, I take the difference in the mouse coordinates (current minus previous) and determine the angle the camera should change around the x and y axes. This is based on the screen size and some magic numbers, tuned until moving the mouse from the left side of the screen to the right rotates the camera somewhere between 360 and 720 degrees (use whatever looks best to you).

Movement along the y-axis is treated as pitch (up/down movement) and movement along the x-axis as yaw (side-to-side movement). Then I perform the following calculations to orbit the camera around its target:


void Camera::Orbit( float32 yaw, float32 pitch )
{
    // Create a rotation matrix that rotates about the y-axis
    Matrix4 rotationY;
    rotationY.BuildRotationY( yaw );

    // Get the direction vector
    Vector3 dir = m_Target - m_Eye;

    // Normalize the direction vector and cross it with the UP vector in
    // order to get the "right" vector, which is just a vector pointing
    // out of my right side. This will be used as the arbitrary axis to
    // pitch around. If we don't do this, then when we rotate around our
    // object and then try to pitch, our pitch direction will be reversed.
    Vector3 right = dir.Normalize().Cross( m_Up );
    right.NormalizeInPlace();

    // Create a rotation matrix about the arbitrary axis created above, by
    // the angle passed in from the calling function
    Matrix4 rotationX;
    rotationX.BuildAxisAngle( right, pitch );

    // Create a composite transformation matrix
    Matrix4 transform = rotationY * rotationX;

    // Transform the direction vector into a new direction
    dir = transform.Transform3( dir );

    // Subtract the new direction from the target to get the new eye position
    m_Eye = m_Target - dir;

    // Re-build the camera matrix from the new eye, target, and up vectors,
    // which represents the view transform
    UpdateTransform();
}



I hope this code helps you out. Please let me know if you have any specific questions on the implementation of anything above.

Cheers!

YellowMaple    174
Thanks for your help! After some contemplation, I realized that spherical coordinates are actually more complicated than they first sound. I went with rotation matrices, as you did, and it's working much more smoothly now. Thanks!

YellowMaple    174
I'm still having a couple of problems (although fewer than I had with spherical coordinates). It seems that if the camera eye is near or on the plane defined by the points (1, 0, -1), (-1, 0, 1), and (0, 1, 0), it has trouble rotating for some reason. Also, on one side of that same plane, moving the mouse up points the camera down, while on the other side, moving the mouse up points the camera up.

Any ideas? I've used the same method jwalsh mentioned, and have used this as a guide to rotating around an arbitrary axis.

I hope that made sense :p Let me know if I need to clarify anything. Thanks!

The function below rotates the point 'point' around the axis 'axis' specified by 'angle' in degrees:

void rotateMatrixArbitrary(GLfloat angle, GLfloat* axis, GLfloat* point)
{
  GLfloat yzProj[ 3 ] = { 0.0f, axis[ 1 ], axis[ 2 ] };
  GLfloat xzProj[ 3 ] = { axis[ 0 ], 0.0f, axis[ 2 ] };
  GLfloat d = sqrt(yzProj[ 1 ] * yzProj[ 1 ] + yzProj[ 2 ] * yzProj[ 2 ]);

  MATRIX4 m1, m2, m3, mi1, mi2, tmp;

  initMatrix(&m1); initMatrix(&m2); initMatrix(&m3);
  initMatrix(&mi1); initMatrix(&mi2);
  initMatrix(&tmp);

  // m1 rotates the axis into the xz-plane; mi1 is its inverse
  if(d != 0.0)
  {
    m1.matrix[ 1 ][ 1 ] = fabs(yzProj[ 2 ]) / d;
    m1.matrix[ 1 ][ 2 ] = -1.0f * fabs(yzProj[ 1 ]) / d;
    m1.matrix[ 2 ][ 1 ] = fabs(yzProj[ 1 ]) / d;
    m1.matrix[ 2 ][ 2 ] = fabs(yzProj[ 2 ]) / d;

    mi1.matrix[ 1 ][ 1 ] = fabs(yzProj[ 2 ]) / d;
    mi1.matrix[ 1 ][ 2 ] = fabs(yzProj[ 1 ]) / d;
    mi1.matrix[ 2 ][ 1 ] = -1.0f * fabs(yzProj[ 1 ]) / d;
    mi1.matrix[ 2 ][ 2 ] = fabs(yzProj[ 2 ]) / d;
  }

  d = sqrt(xzProj[ 0 ] * xzProj[ 0 ] + xzProj[ 2 ] * xzProj[ 2 ]);

  // m2 rotates the axis onto the z-axis; mi2 is its inverse
  if(d != 0.0)
  {
    m2.matrix[ 0 ][ 0 ] = fabs(xzProj[ 2 ]) / d;
    m2.matrix[ 0 ][ 2 ] = fabs(xzProj[ 0 ]) / d;
    m2.matrix[ 2 ][ 0 ] = -1.0f * fabs(xzProj[ 0 ]) / d;
    m2.matrix[ 2 ][ 2 ] = fabs(xzProj[ 2 ]) / d;

    mi2.matrix[ 0 ][ 0 ] = fabs(xzProj[ 2 ]) / d;
    mi2.matrix[ 0 ][ 2 ] = -1.0f * fabs(xzProj[ 0 ]) / d;
    mi2.matrix[ 2 ][ 0 ] = fabs(xzProj[ 0 ]) / d;
    mi2.matrix[ 2 ][ 2 ] = fabs(xzProj[ 2 ]) / d;
  }

  rotateMatrixZ(angle, &m3);

  // Apply rotations
  matrixMult(m1, point);
  matrixMult(m2, point);
  matrixMult(m3, point);
  matrixMult(mi2, point);
  matrixMult(mi1, point);
}



This is the function that rotates the camera eye based on mouse movements. It rotates around the point specified by cam.gaze:

void rotateCamera(GLfloat x, GLfloat y)
{
  // Translate eye so that target is at origin
  cam.eye[ 0 ] -= cam.gaze[ 0 ];
  cam.eye[ 1 ] -= cam.gaze[ 1 ];
  cam.eye[ 2 ] -= cam.gaze[ 2 ];

  MATRIX4 rotateY;

  rotateMatrixY(y, &rotateY);

  GLfloat dir[ 3 ] = { cam.eye[ 0 ], cam.eye[ 1 ], cam.eye[ 2 ] };
  normalize(dir);

  GLfloat rightVec[ 3 ];
  xprod(dir, cam.up, rightVec);
  normalize(rightVec);

  rotateMatrixArbitrary(x, rightVec, cam.eye);
  matrixMult(rotateY, cam.eye);

  // Translate eye back
  cam.eye[ 0 ] += cam.gaze[ 0 ];
  cam.eye[ 1 ] += cam.gaze[ 1 ];
  cam.eye[ 2 ] += cam.gaze[ 2 ];
}

