lem

Scriptable camera for vertical shoot-em-up


Right now I am implementing a camera system that should move through my vertical shooter scene (like Ikaruga). The engine is OpenGL/SDL. A scriptable camera is the approach I want to use, but I don't have much experience with it. My idea is that I have certain control points in a script and interpolate through them. If there are 5 control points, I would advance through them each tick by the frame time delta. So the length of my level is defined by the number of control points. On the other hand, I also need to interpolate the view (no specific idea yet; maybe a position ahead of the player ship). I don't think this is a good way to do it; maybe someone has a nice idea to help me out. The view interpolation also gives me a headache. Thanks

If I may explain the problem in more detail:

I'm using a spline function where I put in a time float and get a vector back.

There are two ways I could run the camera path:

1. Define all control points, and on each tick run through them, incrementing the time float by the frame time delta. But then I don't know the total time for the path, so I can't interpolate the view vector.

2. I define a total time, let's say 10000 milliseconds, and divide by it, so I get the fraction of elapsed time to feed into the spline function (100 milliseconds = 0.01). But then I had problems adding the frame time delta, since the accumulated time would become bigger than the initial 10000 milliseconds.
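
In code, what I mean by option 2 is roughly this (just a sketch; the names are made up):

float elapsedMs = 0.0f;           // time since the path started
const float totalMs = 10000.0f;   // total path duration

// every frame:
elapsedMs += frameDeltaMs;        // frame time delta from the timer
float t = elapsedMs / totalMs;    // 0.0 at start, 1.0 at 10000 ms
// here is my problem: elapsedMs can grow past totalMs, so t > 1.0
cameraPos = spline(t);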

Am I thinking completely wrong? Please, any suggestions or comments...

Quote:
Original post by lem
But then I had problems adding the frame time delta, since it would be bigger than the initial 10000 milliseconds.


I don't think I'm understanding this correctly. Is your fps-delta larger than 10 seconds? That would be really bad fps. Or are you talking about the time taken between two camera points being too large, and resulting in incorrect extrapolation?

I just recently implemented this myself, and went about it in almost the exact same way as you describe. I generate camera position and view points, and then interpolate between them using a bicubic function that takes the timedelta as weight.

You need three things (see the sketch after this list):

- Define a path as a series of control points and the time at which the camera should pass through that point. Camera speed is then determined by the time difference between control points.

- Set up an interpolation function that accepts a time value and returns the interpolated camera position and view direction. Each frame, pass the current time value (current frame time - time when path was started) and update the camera with the returned position/view details.

- Decide how you want to handle "out of bounds" behavior, which is what happens when your path takes 10 seconds to traverse but your frame is rendered at 10.1 seconds. The easiest solution is to get the last control point in the sequence and just "stick" at that point.
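
A minimal sketch of all three pieces, using plain linear interpolation for brevity (Vec3 and the ControlPoint layout are just illustrative, not from any particular engine):

struct Vec3 { float x, y, z; };

struct ControlPoint
{
    float time;     // seconds since path start at which to pass through
    Vec3 position;  // where the camera should be at that time
    Vec3 view;      // what it should look at (interpolated the same way)
};

// Returns the camera position for 'now' seconds after the path started.
Vec3 SamplePath(const ControlPoint* points, int count, float now)
{
    // out-of-bounds handling: stick to the endpoints
    if (now <= points[0].time) return points[0].position;
    if (now >= points[count - 1].time) return points[count - 1].position;

    int i = 0;
    while (points[i + 1].time < now) ++i;  // find the segment containing 'now'

    const ControlPoint& a = points[i];
    const ControlPoint& b = points[i + 1];
    float t = (now - a.time) / (b.time - a.time);  // 0..1 within the segment

    Vec3 p;
    p.x = a.position.x * (1.0f - t) + b.position.x * t;
    p.y = a.position.y * (1.0f - t) + b.position.y * t;
    p.z = a.position.z * (1.0f - t) + b.position.z * t;
    return p;
}

Swap the linear blend for your spline function once this works; the time handling stays the same.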

Quote:
Original post by DragonL
I don't think I'm understanding this correctly. Is your fps-delta larger than 10 seconds? That would be really bad fps. Or are you talking about the time taken between two camera points being too large, and resulting in incorrect extrapolation?

I just recently implemented this myself, and went about it in almost the exact same way as you describe. I generate camera position and view points, and then interpolate between them using a bicubic function that takes the timedelta as weight.


Sorry about my bad phrasing... I'm just wondering how I can make the camera movement dependent on frame time, so that on a slow computer it behaves the same (although choppier) as on a fast computer.
I meant: if I say that a spline path with 3 control points should be traversed in 10000 ms, so that a time float of 0.0 for the spline function is 0 ms and 1.0 is 10000 ms, I would have to increment the time float by 0.1 each second. Would that be enough, or do I have to use a frame time delta from a timer?

Thanks guys, you're helping me a lot

Quote:
Original post by DragonL
I generate camera position and view points, and then interpolate between them using a bicubic function that takes the timedelta as weight.

I don't understand how the timedelta is taken as a weight. Can you describe it, please?

Basically, in all of these interpolation algorithms, time can be interpreted as a weight.

Let's assume you have 3 points (A, B, C) and three times (t1, t2, t3) at which the camera should be at the corresponding points. You could then say that you interpolate between these points using a function f(t) that must fulfill the following:
f(t1) = A
f(t2) = B
f(t3) = C

Or you could consider time as a weight and construct a function g(t) which gives you the "weights" of the points at a given time. This function needs to fulfill:
g(t1) gives weight 1 for A, 0 for B, 0 for C
g(t2) gives weight 0 for A, 1 for B, 0 for C
g(t3) gives weight 0 for A, 0 for B, 1 for C

As you can see, the two approaches are pretty much identical, it's just that some algorithms can work better with the first, some with the second.
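
For example, one g(t) that fulfills those three conditions is the quadratic Lagrange basis (just one possible construction, to make it concrete):

weight of A: wA(t) = ((t - t2)(t - t3)) / ((t1 - t2)(t1 - t3))
weight of B: wB(t) = ((t - t1)(t - t3)) / ((t2 - t1)(t2 - t3))
weight of C: wC(t) = ((t - t1)(t - t2)) / ((t3 - t1)(t3 - t2))

f(t) = wA(t)*A + wB(t)*B + wC(t)*C

Plug in t = t1 and you get weights 1, 0, 0; t = t2 gives 0, 1, 0; and t = t3 gives 0, 0, 1, exactly as required.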

I hope I didn't confuse you further... :)

Quote:
Original post by lem
Just wondering how I can make the camera movement dependent on frame time, so that on a slow computer it behaves the same (although choppier) as on a fast computer.


Listen to ApochPiQ, he's spot on. A fairly simple solution could look something like this though:


// weight is initialized to zero
weight += deltatime * 0.1f; // changing 0.1f changes the speed
if (weight > 1.0f)
{
    weight -= 1.0f; // carry the overshoot into the next segment
    i++;            // (in real code, also stop i at the last segment)
}
currentPoint = interpolate(point[i], point[i + 1], weight);

// ...and just for completeness, interpolate() might do this
// (a linear blend of points a and b, returning point p):

p.x = a.x * (1.0f - weight) + b.x * weight;
p.y = a.y * (1.0f - weight) + b.y * weight;
p.z = a.z * (1.0f - weight) + b.z * weight;
return p;

Set up your interpolation as DragonL suggested and you will get linear interpolation, which is a very good start.

When you are comfortable with how it works, you can consider moving on to a more sophisticated interpolation, like Bezier curves (don't be scared of the maths involved; it is really straightforward once you grasp the basics).
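
For reference, a cubic Bezier segment is just four control points blended by polynomial weights; a minimal sketch, assuming a simple Vec3 struct like in the earlier snippets:

struct Vec3 { float x, y, z; };

// Evaluate a cubic Bezier segment at t in [0, 1].
// p0 and p3 are the endpoints; p1 and p2 shape the curve.
Vec3 Bezier(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3, float t)
{
    float u = 1.0f - t;
    float w0 = u * u * u;         // (1-t)^3
    float w1 = 3.0f * u * u * t;  // 3(1-t)^2 * t
    float w2 = 3.0f * u * t * t;  // 3(1-t) * t^2
    float w3 = t * t * t;         // t^3

    Vec3 p;
    p.x = w0 * p0.x + w1 * p1.x + w2 * p2.x + w3 * p3.x;
    p.y = w0 * p0.y + w1 * p1.y + w2 * p2.y + w3 * p3.y;
    p.z = w0 * p0.z + w1 * p1.z + w2 * p2.z + w3 * p3.z;
    return p;
}

At t = 0 the weights are 1, 0, 0, 0 and at t = 1 they are 0, 0, 0, 1, so the curve starts at p0 and ends at p3.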

On a related note, I too have been experimenting with a scripted camera. However, I ran into problems when I wanted my in-game third-person camera to follow a scripted path. That is, it follows a scripted path (using control points) but tries to keep the player in focus at all times by rotating as necessary. What I was thinking was to translate according to the control points but rotate according to the player.

The issue is that because it's interpolating translations, the camera tends to run at different speeds than the player. How do I get around this problem? Or am I doing this in a completely wrong way?

Quote:
Original post by Specchum
The issue is that because it's interpolating translations, the camera tends to run at different speeds than the player. How do I get around this problem?


Why is that a problem - could you clarify a little? You're moving the camera along a scripted path, right? If it gets to the position it's supposed to be at, at the time it's supposed to be there, then it all seems fine and dandy to me... :)

If the interpolation is resulting in unwanted accelerations and slowdowns (a problem I encountered using simple cubic interpolation), you might want to use another type of interpolation, such as Hermite interpolation. Check Paul Bourke's excellent website for more info on that:

http://astronomy.swin.edu.au/~pbourke/other/interpolation/index.html

The thing is that while the camera is following the path, it is also tracking the player. That is, if the player begins running forward, the camera begins interpolating forward along the scripted path. When the player runs backward, the camera begins interpolating backwards along the scripted path. Player runs right, camera rotates right. Player runs left, camera rotates left.

However, when the player starts running and I begin interpolating, the camera doesn't seem to match the speed of the player; it invariably speeds up too much or slows down too much with respect to the player. How do I get around this?

Uh... fix your control points and/or interpolation?

Cameras aren't so magical that we can just say "call DirectXSlowDownCameraMovementALittleBit() and it will fix your problem." There could be literally hundreds of things going on that lead to your perceived problem.

The most useful advice to be offered is to analyze the numbers by hand: compare the speed/movement that you want with what you actually get, then figure out where the discrepancy comes from and address it. That's not something anyone can do from the outside without complete access to your system and code.

Quote:
Original post by ApochPiQ
Cameras aren't so magical that we can just say "call DirectXSlowDownCameraMovementALittleBit() and it will fix your problem."


I figured that was the case. :)

Anyhow, let me clarify that I'm not having a problem with interpolating the camera itself. It interpolates between two points using deltaTime, as usual. My problem lies in an apparent flaw in the logic I'm applying when moving the camera with respect to the player. In pseudocode, I do this:

1) If the Forward key is pressed, move the player forward by x units (i.e. frame-dependent movement, for now).
2) Begin interpolating the camera along its path in the forward direction.
3) Move the camera to the new interpolated point.
4) Repeat the above for backward movement.

The issue is that my camera doesn't remain at a fixed distance from the player, as should be the case for a third-person camera. This, I figure, is because while the player moves by a fixed amount, the camera moves to an interpolated point that is generated independently of the player's position. I guess the right question to ask is: how do I interpolate while maintaining a fixed distance from the player? Or am I missing the point completely?
Easy: interpolate the "ideal" camera position P along the preset control path from point A to point B. Once you know P, draw a vector from the player's position to P. Normalize that vector, scale it by the distance you want the camera to keep from the player, and add it back onto the player's position; that's your final camera position.
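
A quick sketch of that idea (the names here are placeholders, not real engine calls; sqrtf comes from <math.h>):

// idealPos is P, sampled from the scripted path as usual
Vec3 dir;
dir.x = idealPos.x - playerPos.x;  // vector from the player to P
dir.y = idealPos.y - playerPos.y;
dir.z = idealPos.z - playerPos.z;

float len = sqrtf(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
if (len > 0.0001f)  // avoid dividing by zero if P sits on the player
{
    float s = desiredDistance / len;  // normalize, then scale
    cameraPos.x = playerPos.x + dir.x * s;
    cameraPos.y = playerPos.y + dir.y * s;
    cameraPos.z = playerPos.z + dir.z * s;
}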

Careful with objects passing between the camera and player, though; it might look like crap if this accidentally stuffs the camera into a wall or something [wink]
