Smooth rotation and movement of camera

Sorry for the slow reply, my network has been down since I wrote my last reply and it's only barely stable now, but here goes.

Thanks for the suggestion DigitalFragment, I've never heard of that approach before but it seems to make sense. However, I'm now unsure which way to go. Like you, I thought at first that a smooth interpolation between the previous position and the intended one would slow the camera down, but as suggested above (and in a few other places I've been looking into) this does not seem to be the case. So... while I'm actually still confused and have several questions to ask (about both your approach and the one suggested by FLeBlanc), I would appreciate some opinions on which approach I should take.

Finally I want to say that I "solved" my problem. I was playing around with registering multiple keyboard inputs in GLUT. Basically I have two callback functions which are called whenever a button is pressed/released. I then have a third function which describes what each keyboard button I want to use does (this is where I call my camera movement/rotation functions). I call this third function in my display method (the method responsible for all the rendering), so it runs once per loop, i.e. every time a new frame is drawn. This results in both smooth translation and rotation, as I wanted. Compare that to what I had before: a keyboard function (see above) which, instead of being called every frame, only got called when I pushed a button. The result was that when the next frame was drawn after the keyboard input, with a modified view matrix, I would see the camera move, but with the "stuttering" effect I wanted to solve.
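In rough outline it looks something like this (a simplified sketch rather than my actual code; cameraZ just stands in for my real camera state):

#include <GL/glut.h>

// Sketch of the flag-based input handling described above (simplified).
bool  keyStates[256] = { false };   // one flag per ASCII key
float cameraZ        = 0.0f;        // stand-in for the real camera state

void keyDown(unsigned char key, int, int) { keyStates[key] = true;  }
void keyUp  (unsigned char key, int, int) { keyStates[key] = false; }

// Called once per frame from display(), so a held key moves the camera every frame.
void handleKeys()
{
    if (keyStates['w']) cameraZ -= 0.1f;   // fixed step per frame (frame-rate dependent!)
    if (keyStates['s']) cameraZ += 0.1f;
}

void display()
{
    handleKeys();                 // apply input for this frame
    // ... build the view matrix from the camera state and render the scene ...
    glutSwapBuffers();
    glutPostRedisplay();          // ask GLUT to draw the next frame right away
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("camera test");
    glutDisplayFunc(display);
    glutKeyboardFunc(keyDown);
    glutKeyboardUpFunc(keyUp);
    glutMainLoop();
}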

Now here's the thing... I don't have the slightest clue as to why this second approach works. So no, I really haven't solved the problem: I have a working solution without really understanding what's going on behind the scenes. Or have I perhaps confused everyone with what I intended to do? Well, now that I think of it, it's not really a working solution at all. Right now it works fine because my FPS is locked to around 60 due to VSync being turned on, but refresh rates obviously vary from one PC to another, so if I use this solution on a PC with better/worse performance (or with VSync turned off) I would end up with faster/slower movement. I'm still curious why it works... for 60 FPS at least.

Anyhow, I'm still at step one, so before I go further: which approach is recommended? Smooth interpolation, or what DigitalFragment suggested? Would smooth interpolation slow the camera down, as DigitalFragment said and as I thought from the beginning?
Smooth interpolation doesn't exactly slow the camera down, except that it always falls short of the target:
If you want to rotate from 30 degrees to 40 degrees in one frame, but are interpolating, then you aren't going to hit 40; you are going to hit somewhere > 30 and < 40.
On the next frame, if you have stopped rotating, the interpolation doesn't continue on to 40, because you will now be interpolating from the previous value towards a target that is no different from the previous value. It does work to solve the issue, but it results in a 'sudden termination' when you stop the rotation.
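A rough sketch of what I mean (made-up numbers and a made-up smoothing constant, just to illustrate):

#include <cstdio>

// Illustrative sketch: smoothing toward a target angle by moving a fraction
// of the remaining distance each frame.
int main()
{
    float currentAngle    = 30.0f;  // what the camera actually uses
    float targetAngle     = 40.0f;  // where the input says it should be
    const float smoothing = 0.2f;   // made-up tuning constant, 0..1

    for (int frame = 0; frame < 10; ++frame)
    {
        currentAngle += (targetAngle - currentAngle) * smoothing;
        std::printf("frame %d: %.3f\n", frame, currentAngle);
    }
    // Prints 32.000, 33.600, 34.880, ... : the angle approaches 40 but always
    // falls short, and the motion dies off abruptly once the target stops moving.
}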

The other trick causes smoothing both when the rotation starts and when the rotation ends. As such, it causes the same lag initially, but while continuing to hold the rotate button, that lag disappears.

I'd suggest trying both and seeing which feels better for your input system. It largely depends on what sort of game you are making, the choice here is really a design issue not an implementation one.
Since we've gone a bit into interpolation already I'll go with it first, but I definitely want to try out the way DigitalFragment suggested; hopefully it won't be hard to implement.

FLeBlanc, I do have several questions I want to ask about the particular implementation you posted there from "Fix Your Timestep!". I've read through it about 3-4 times now and still have some problems grasping everything going on there (I understand it a bit better now, but still mostly only bits here and there). Before that, though, I think I really need to clarify (for my own sake) what exactly is going wrong in my implementation, or else I won't understand why the methods you've suggested work. Obviously you and others have explained what's going on, but I still feel slightly shaky about it. I'll try to simplify my explanation.

I have a callback function called 'display' which is where I draw all my stuff (I assume THIS is what is usually referred to as my game logic, please correct me if I'm wrong here). Normally this function, due to the nature of GLUT, would only be called during certain events (resizing the window, minimizing it, etc.), but what I've done is put a command at the very end of the function which causes it to be called again. This essentially means that the display function is called X times per second; in other words, X frames are drawn each second. Since I have VSync turned on, X ends up locked to around 60 (assume exactly 60 for simplicity). So 60 frames per second are drawn, 60 FPS. So far so fine, nothing special here.

Now I have a callback function which is called whenever a keyboard button I've specified is pushed. In my implementation this moves/rotates the camera a certain number of (x,y,z) units. This is where the movement starts looking rough, as mentioned before, and it's here that I get slightly confused. What EXACTLY is happening the very moment I push the button? Is the call/registration of my keystroke put in some queue which is processed after a certain amount of time? For example, is this what happens (assuming we have 60 FPS):

Display //Render scene at t=0 ms
Display //Render scene at t=16 ms
Display //Render scene at t=32 ms
.....
Display //Render scene at t=1000 ms
Keystroke //Make change to my camera at t=1000+x ms

Or is something else happening? Again what exactly is happening the very moment I push it?

I'm going to explain it the way I understood it from FLeBlanc's first reply: I push the button at t=0, but by the time this push has registered a second has elapsed (how long does it take to register a keystroke so that the camera change takes effect? is it strictly a second?). During this second, 60 frames were drawn from the camera's original viewpoint. Then come whatever changes the keystroke made, and the next frame is drawn from the camera's new viewpoint. This causes a sudden change, a jump as described by FLeBlanc in the first reply, because none of the intermediate values between the original and new camera viewpoints are ever shown. Please correct me here if I'm thinking wrongly about what happens when I push a button.

Now comes the next thing I am not quite getting. As I said in an earlier post, I made three functions. Two of these are callback functions which basically set a flag recording whether a key is being pressed or not. The third function has the code that updates my camera position/orientation, and this third function is called in my display function, in other words it is also called 60 times per second. The only difference is that nothing happens if no key is pressed. If I press and hold a key, then once the keystroke has been registered the third function updates my camera position/orientation EACH frame by (x,y,z) units. Since I am ALSO drawing the updated camera each frame, that is why I get smooth motion. I know this is frame-rate dependent, but disregard that for a moment. Am I thinking about this correctly? Please confirm this as well.

This is quite a long post and I apologize for that, but I really feel that if I can't properly grasp the problem I'm having, I'm just going to have a harder time understanding why a suggested method works.
I suspect that what might be going wrong in your code is key repeat. It's been years and years since I bothered with Glut, but if I remember correctly by default it repeats keys. This means that it will send a sequence of keydown/keyup callbacks timed according to the timing of the system key repeat rate. Typically with games, you want to use glutIgnoreKeyRepeat so that keys won't be repeated and the callback for glutKeyboardFunc will store state for the key in a table. The callback for glutKeyboardUpFunc will clear that state. Then, in your update function, you query the state of this key table and move the camera accordingly.
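Something roughly like this (just a sketch, untested; keyTable and the function names are made up):

#include <GL/glut.h>

// Sketch of the key-table approach (call setupInput() after glutCreateWindow()).
static bool keyTable[256] = { false };

static void keyDown(unsigned char key, int, int) { keyTable[key] = true;  }
static void keyUp  (unsigned char key, int, int) { keyTable[key] = false; }

void setupInput()
{
    glutIgnoreKeyRepeat(1);     // suppress OS key repeat: one down and one up per physical press
    glutKeyboardFunc(keyDown);  // sets the flag for a key
    glutKeyboardUpFunc(keyUp);  // clears it again on release
}

// In the update function, query the table and move the camera for keys that are held, e.g.:
//   if (keyTable['w']) { /* move the camera forward by speed * dt */ }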

Also, typically, your display function, where you "draw your stuff" is not what is meant by your game logic. Game logic is all the moving of bullets, the walking of enemies, the physics updates, etc.... A function called display should be just that: something that draws things.

In the fixed timestep loops that FLeBlanc and others are talking about, display is called in the outer, or main, part of the loop. You can see that in the loop code posted above as the function render. All it does is draw the scene, using the interpolated transformation state, as many times per second as the available time allows once the logic updating is done. The actual physics and logic updates, though, take place in the function integrate, which is nested inside an inner loop that is there to make sure that physics updates take priority over screen rendering if and only if the simulation "falls behind". Otherwise, the advancement of the accumulator and the associated conditionals exist merely to ensure that integrate is called in a timely and precise fashion.
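In outline, that kind of loop looks something like this (a generic, simplified sketch with a toy "simulation", not the exact code from the article):

#include <chrono>
#include <cstdio>
#include <thread>

// Generic fixed-timestep sketch: logic advances in exact dt steps inside the
// inner loop, while the outer loop "renders" (here, prints) as often as it can.
static double now()
{
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}

int main()
{
    const double dt = 1.0 / 60.0;     // fixed logic step: 60 updates per second

    double position    = 0.0;         // toy simulation state
    double velocity    = 1.0;         // units per second
    double currentTime = now();
    double accumulator = 0.0;

    for (int frame = 0; frame < 120; ++frame)     // stands in for the main/render loop
    {
        double newTime   = now();
        double frameTime = newTime - currentTime; // wall-clock time the last frame took
        currentTime      = newTime;
        accumulator     += frameTime;

        // Inner loop: run as many fixed steps as the elapsed time demands,
        // so the logic keeps up even if rendering falls behind.
        while (accumulator >= dt)
        {
            position    += velocity * dt;         // "integrate" by exactly dt
            accumulator -= dt;
        }

        double alpha = accumulator / dt;          // leftover fraction of a step, for interpolation
        std::printf("frame %3d: pos %.3f (alpha %.2f)\n", frame, position, alpha);

        std::this_thread::sleep_for(std::chrono::milliseconds(5)); // pretend rendering takes time
    }
}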

Now, it is important to note that Glut runs its own internal loop, implemented in glutMainLoop. If you wish to continue using Glut, you would be well advised to seek out FAQs or other sources of information specific to Glut, since you would need to work within Glut's own loop and framework of callbacks in order to implement the presented fixed time stepping, perhaps by using glutTimerFunc to time the calling of your physics and logic update, and glutIdleFunc to call glutPostRedisplay. Note that I don't use Glut anymore and never used it for any serious projects even when I did, so this might be completely wrong information.
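If you do go that route, the setup might look roughly like this (untested, so take it with a grain of salt; updateLogic is a made-up name):

#include <GL/glut.h>

// Rough sketch: a timer-driven logic update plus an idle callback that keeps redrawing.
void updateLogic(int /*value*/)
{
    // ... advance camera / game logic by a fixed step here ...
    glutTimerFunc(16, updateLogic, 0);   // re-arm: roughly 60 logic updates per second
}

void idle()
{
    glutPostRedisplay();                 // request a redraw between logic updates
}

void display()
{
    // ... render the scene from the current camera state ...
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("timer-driven updates");
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutTimerFunc(16, updateLogic, 0);   // start the fixed-rate update chain
    glutMainLoop();
}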

(by the way, mentioning that you were using Glut would have helped immensely, and probably would have led to a solution much sooner; something like that would be considered important information for us to know.)
JTippets, thanks for the suggestions and sorry for the late reply (I've been busy these days, though I've also been taking a closer look at the code discussed here). I've been wondering about one thing today with regard to my problem. I'm starting to get the whole picture (code-wise; most of you have explained it pretty well here, and I'm still trying to wrap my head around it). I'd like to post my understanding of it just to see if I got it right, but I'm in a bit of a hurry at the moment. Instead I just want to ask a really short question for the time being.

As you and the rest of the people in this thread know, I only want to control my camera, but I want it to be frame-rate independent. I just want to move the camera, which is quite simple compared to performing some kind of more advanced physical simulation. So I thought about it and... wouldn't it be enough to calculate the time it takes to get from one frame update to the next and multiply that time by some camera speed? I do understand that this is not the best solution in general, because it gives a different delta time for each frame update, and a physics simulation might not cope with a wide range of dt's. What you want instead in that case is a fixed dt, to make things easier and more predictable for the physical aspects of the program.

But my purpose is rather simple: only moving/rotating the camera from pos/orient. 1 to pos/orient. 2 and no more. What I thought I could do is what I did earlier: call the function which controls the game logic every frame. If a key is pressed, this function performs an update, and this update is based on the difference in time between two frames. Basically the code would be something like this:


cameraPos.x = cameraPos.x + (rotMat[0].x*moveSpeed*timeInterval);
cameraPos.y = cameraPos.y + (rotMat[1].x*moveSpeed*timeInterval);
cameraPos.z = cameraPos.z + (rotMat[2].x*moveSpeed*timeInterval);


where timeInterval = currentTime - prevTime and moveSpeed is some camera speed. rotMat[0/1/2] describes in which direction to move the camera, using the camera's local axes. Wouldn't this be sufficient for my purpose?
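For completeness, the way I was thinking of computing timeInterval (a rough sketch using GLUT's millisecond clock, with the same names as in my snippet above but simplified):

#include <GL/glut.h>

// Sketch: measuring the per-frame time with GLUT and applying it to the camera.
struct Vec3 { float x, y, z; };

Vec3  cameraPos = { 0.0f, 0.0f, 5.0f };
Vec3  rotMat[3] = { {1,0,0}, {0,1,0}, {0,0,1} };   // camera's local axes (identity here)
float moveSpeed = 2.0f;                            // units per second
int   prevTime  = 0;

// Called once per frame (e.g. from display()).
void updateCamera()
{
    int   currentTime  = glutGet(GLUT_ELAPSED_TIME);          // milliseconds since glutInit
    float timeInterval = (currentTime - prevTime) / 1000.0f;  // seconds since the last frame
    prevTime = currentTime;

    cameraPos.x += rotMat[0].x * moveSpeed * timeInterval;
    cameraPos.y += rotMat[1].x * moveSpeed * timeInterval;
    cameraPos.z += rotMat[2].x * moveSpeed * timeInterval;
}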
For simple camera motion, it is fine to just use dt*speed, rather than a fixed step. Where a fixed step really becomes important is a) physics simulations, that can become unstable with high values of dt, and b) multiplayer, where slight mathematical error occurring on each machine can cause a de-sync.

[quote]For simple camera motion, it is fine to just use dt*speed, rather than a fixed step. Where a fixed step really becomes important is a) physics simulations, that can become unstable with high values of dt, and b) multiplayer, where slight mathematical error occurring on each machine can cause a de-sync.[/quote]

Yes, this is what I suspected as well. I just tried implementing the code I posted in my previous reply. I tried it with VSync enabled and disabled and it seems to work fine. One thing I've been wondering after looking through several resources: people say dt*speed is enough for frame-rate independent movement (well, for a simple purpose like this). My corresponding code, as far as I understand, would simply be this (assuming one coordinate to keep it short):


cameraPos.x = cameraPos.x + (rotMat[0].x*timeInterval);


rotMat[0].x would be the actual displacement, while timeInterval is the time it takes to draw one frame, as mentioned before. But running it like that was way too slow, so I had to add a weighting factor (moveSpeed in my previous reply). Am I misunderstanding something here?

[quote]I suspect that what might be going wrong in your code is key repeat. It's been years and years since I bothered with Glut, but if I remember correctly by default it repeats keys. This means that it will send a sequence of keydown/keyup callbacks timed according to the timing of the system key repeat rate. Typically with games, you want to use glutIgnoreKeyRepeat so that keys won't be repeated and the callback for glutKeyboardFunc will store state for the key in a table. The callback for glutKeyboardUpFunc will clear that state. Then, in your update function, you query the state of this key table and move the camera accordingly.[/quote]

I checked some sources and read about this as well. Supposedly in GLUT (or freeglut to be more precise), if you press a button and keep holding it you get a repeating pattern; your keydown/keyup callbacks are repeatedly called, as you wrote. Out of curiosity I wanted to confirm this, so I simply placed code to print a message in both callbacks to see if those messages would be shown repeatedly as I held a defined key down. The result was that the keydown callback was called repeatedly, while the keyup callback wasn't called until I released the button. Quite different from what I expected and from what I'd read about the callbacks repeating. Well, as you said, you haven't used GLUT for years, so you might not care about this, but I thought it was interesting to note anyway :)

