# Smooth rotation and movement of camera


## Recommended Posts

Hello. I've implemented an FPS camera and, to keep things simple for now, I control everything with the keyboard. My camera can do yaw and pitch rotations (with pitch limited to ±90 degrees), and I can move it up/down, left/right, and forward/backward.

To do the rotation I store the camera target in spherical coordinates, modify those spherical coordinates, convert them to Cartesian coordinates, and re-calculate the camera matrix. To move the camera around I simply modify the camera position and re-calculate the camera matrix.

So far everything works as I want. But whenever I rotate my camera (each key press changes the latitude/longitude of my camera target by one unit of angle) or move it around, I get jerky movements; they are nowhere close to being smooth. If I press and hold a key that keeps rotating the camera, I want that rotation to be smooth, and the same goes for holding a key that moves the camera in some direction. What exactly should I be looking into to achieve this?

##### Share on other sites
Anyone? Apparently controlling the camera's rotation with mouse movement would make the rotations smoother, but I can't see why that solves the problem while using keyboard buttons does not. If anyone can explain that, it would be nice.

Furthermore, even if I implemented mouse look, it wouldn't fix the jerky movement when I move the camera itself around the scene.

##### Share on other sites
Perhaps your problem is caused by the step sizes you are taking when accumulating the new angles to the camera orientation? To get smoother rotations, apply the rotation in smaller steps, or decouple the increments and maintain a "currently rendered" orientation and a "desired target" orientation, between which you interpolate smoothly.

Seeing some code samples would probably help here.

##### Share on other sites
What you are looking for is interpolation. Basically, you use an elapsed-time value to interpolate between the camera's previous position/orientation and the camera's next position/orientation, and calculate the view matrices from the interpolated values.

Consider that at time t=0 the camera is at (0,0), and it moves at 1 unit per second. Say that at t=0 you receive a keystroke to move the camera, so you translate it by one unit. However, this translation is applied instantaneously: one moment the camera is at (0,0) and the next it is suddenly at (0,1). This causes a visible jump.

The thing is, in that 1-second span of time, the screen refreshed 60 times or more (depending on frame rate). For 60 frames it drew the view from the camera at (0,0), then suddenly switched to drawing the scene from the camera at (0,1), causing a visual jump. What you could do instead is track the frame time at which the scene is drawn, and use that frame time to interpolate between (0,0) and (0,1), so that the intermediate frames show the camera at positions in between (0,0) and (0,1).

You might read the old standby, Fix Your Timestep! or Javier Arevalo's Main Loop with Fixed Time Steps at the old Flipcode site for more information on how this works.

##### Share on other sites

Perhaps your problem is caused by the step sizes you are taking when accumulating the new angles to the camera orientation? To get smoother rotations, apply the rotation in smaller steps, or decouple the increments and maintain a "currently rendered" orientation and a "desired target" orientation, between which you interpolate smoothly.

Seeing some code samples would probably help here.

Wouldn't applying smaller steps to my rotation cause the camera to rotate more slowly, though? Or am I misunderstanding something here? I'll post the relevant code here:

Function to create camera matrix
```cpp
void createCameraMatrix()
{
    glm::vec3 camTargetCart = sphericalToCartesian(cameraTarget); //Convert the camera target's spherical coords. to cartesian
    glm::vec3 lookDir = glm::normalize(camTargetCart); //Look direction
    glm::vec3 upDir = glm::normalize(glm::vec3(0.0f, 1.0f, 0.0f)); //Up direction of camera, aligned with world's y-axis at beginning
    glm::vec3 rightDir = glm::normalize(glm::cross(lookDir, upDir)); //Calculate remaining axis of camera
    glm::vec3 perpUpDir = glm::cross(rightDir, lookDir); //Re-calculate up direction

    //Create camera matrix
    rotMat[0] = glm::vec4(rightDir, 0.0f);
    rotMat[1] = glm::vec4(perpUpDir, 0.0f);
    rotMat[2] = glm::vec4(-lookDir, 0.0f);
    rotMat = glm::transpose(rotMat);
    transMat[3] = glm::vec4(-cameraPos, 1.0f);
    finalMatrix = rotMat * transMat;

    //Give value to uniform in shader
    glUseProgram(theProgram);
    glUniformMatrix4fv(locCameraTest, 1, GL_FALSE, glm::value_ptr(finalMatrix));
    glUseProgram(0);
}
```

Some examples of my functions that are called when a keystroke is received
```cpp
void rotCamHorizontal(GLfloat angle)
{
    cameraTarget.y = cameraTarget.y + angle; //Add angle units to camera target (in spherical coords.)
    createCameraMatrix(); //Re-calculate camera matrix to account for modified camera target
}

void rotCamVertical(GLfloat angle)
{
    cameraTarget.z = cameraTarget.z + angle; //Add angle units to camera target (in spherical coords.)
    cameraTarget.z = glm::clamp(cameraTarget.z, -90.0f, 90.0f); //Clamp the pitch between +- 90 degrees
    createCameraMatrix(); //Re-calculate camera matrix to account for modified camera target
}

void moveCamForward(GLfloat moveSpeed)
{
    cameraPos = cameraPos + (sphericalToCartesian(cameraTarget) * moveSpeed);
    createCameraMatrix();
}

void moveCamRight(GLfloat moveSpeed)
{
    cameraPos.x = cameraPos.x + (rotMat[0].x * moveSpeed); //rotMat[0].x corresponds to (row, column) = (1, 1)
    cameraPos.y = cameraPos.y + (rotMat[1].x * moveSpeed); //rotMat[1].x corresponds to (row, column) = (1, 2)
    cameraPos.z = cameraPos.z + (rotMat[2].x * moveSpeed); //rotMat[2].x corresponds to (row, column) = (1, 3)
    createCameraMatrix();
}
```

Function where movement/rotation functions are called.
```cpp
void keyboard(unsigned char key, int x, int y)
{
    GLfloat angle = 1.0f;
    GLfloat moveSpeed = 1.0f;

    switch (key)
    {
        //Move forward
        case 'w':
            moveCamForward(moveSpeed);
            break;
        //Rotate right (yaw)
        case 'd':
            rotCamHorizontal(-angle);
            break;
        //Rotate up (pitch)
        case 'q':
            rotCamVertical(angle);
            break;
        //Strafe right
        case 'x':
            moveCamRight(moveSpeed);
            break;
    }
}
```

I'd also like to point out that I only initialize the camera matrix once while my program is running (even when it is idle and doing nothing). The only time I re-calculate it is, as shown above, when either the camera position or the camera target changes. I'm not sure if this matters though. Also, thanks FLeBlanc for the suggestion, I will definitely look into it. As requested, the code is above for people to see; any suggestions are appreciated. Edited by Suen

##### Share on other sites
As suggested, look into interpolating your camera position and rotation. That is the way to achieve the smaller steps the first poster recommended, but without slowing the speed down.

##### Share on other sites

As suggested, look into interpolating your camera position and rotation. That is the way to achieve the smaller steps the first poster recommended, but without slowing the speed down.

Will do. I haven't yet looked very closely at the links FLeBlanc posted (since it's pretty late over here), but from a quick read they seem to describe how to do the interpolation.

From what I understand so far, linear interpolation should be sufficient for smooth camera movement, while quaternion slerp should handle smooth camera rotation. Either way, I will look into this more when I wake up. Thanks for the suggestions. Edited by Suen

##### Share on other sites
Well, having read into this (and thanks to the answers above) I understand why the animation is stuttering, but I'm still having a somewhat hard time with it. I'm not sure whether I should continue with this in this section of the forum (since the code is OpenGL-specific), but I'll give it a try.

I checked the link provided by FLeBlanc (Fix Your Timestep!) and also took a look at the source code provided there. I pretty much did the same:

I created an interpolation function. I then changed one of my camera movement functions to store the old and new positions and use them, together with what I assume is the elapsed-time value, as arguments to the interpolation function. I then create the view matrix from the interpolated camera position. See below:

```cpp
glm::vec3 interpolate(const glm::vec3 &start, const glm::vec3 &end, float alpha)
{
    glm::vec3 interp;
    interp.x = end.x * alpha + start.x * (1 - alpha);
    interp.y = end.y * alpha + start.y * (1 - alpha);
    interp.z = end.z * alpha + start.z * (1 - alpha);
    return interp;
}

void moveCamLeft(GLfloat moveSpeed)
{
    glm::vec3 startPos(cameraPos);
    glm::vec3 endPos;
    endPos.x = cameraPos.x - rotMat[0].x;
    endPos.y = cameraPos.y - rotMat[1].x;
    endPos.z = cameraPos.z - rotMat[2].x;
    cameraPos = interpolate(startPos, endPos, timeValue);
    createCameraMatrix();
}
```

Now, as far as I've understood from the posts here and from the links provided: if I want a smooth movement from one position to another, I need to draw all the values between the two positions, with each value corresponding to a frame being drawn. For example, at 60 fps the scene would be drawn from 60 different positions, and when the next frame is drawn I am at the final value. To do this, I understood it as using a time-based value for my interpolation, varying from 0 to 1 (0 corresponding to the start position and 1 to the end position). But I am still quite confused... exactly what am I supposed to keep time of? The amount of time it takes to draw a frame? Something else? I am also confused about when I should start measuring the time and in exactly which function I should do so.

I feel this should be quite easy to understand (conceptually it is). I'm probably making it harder than it actually is. Edited by Suen

##### Share on other sites
You measure the time in your main loop. Look at the loop from "Fix Your Timestep!" again:

```cpp
double t = 0.0;
const double dt = 0.01;

double currentTime = hires_time_in_seconds();
double accumulator = 0.0;

State previousState;
State currentState;

while (!quit)
{
    double newTime = hires_time_in_seconds();
    double frameTime = newTime - currentTime;
    if (frameTime > 0.25)
        frameTime = 0.25; // note: max frame time to avoid spiral of death
    currentTime = newTime;

    accumulator += frameTime;

    while (accumulator >= dt)
    {
        previousState = currentState;
        integrate(currentState, t, dt);
        t += dt;
        accumulator -= dt;
    }

    const double alpha = accumulator / dt;
    State state = currentState * alpha + previousState * (1.0 - alpha);

    render(state);
}
```

Here, currentState and previousState represent the set of transforms for the camera and all objects in the world at times t(n) and t(n-1), where n is the current logic step. accumulator accumulates the advancement of time and also tracks how far into the current logic step we are. On each iteration of the loop, the current time is compared to the time stored on the previous iteration, and the difference is added to accumulator. The logic-update portion then iterates on accumulator: as long as accumulator holds at least one full logic step (dt, or 0.01 in this example), a physics step is performed (objects are moved, etc...). As soon as accumulator drops below dt, we know we are caught up on logic updates and can proceed to rendering. The remaining value of accumulator tells us how far into the next physics step we are, so accumulator / dt can be used as the timeValue to interpolate the transforms.

##### Share on other sites
Using interpolation to smooth from the previous position to the intended position will slow the camera down. Perhaps what you are looking for is input smoothing: averaging the recent input deltas and then applying the averaged delta to the camera.

The time delta is needed to convert a velocity-in-seconds value into a velocity-in-frames value. When using mouse input, I divide the mouse delta by dt to get a pixels-per-second value, buffer that over 15 frames, and average the last 15 frames' worth of velocities. Then I multiply by dt again to turn it back into a per-frame value.

The side effect of this is that the camera will carry a bit of inertia after the mouse stops moving, but that tends to feel better than a harsh stop anyway.
