lem

Scriptable camera for vertical shoot-em-up


Right now I am implementing a camera system that should move through my vertical shooter scene (like Ikaruga). The engine is OpenGL/SDL. A scriptable camera is the approach I want to use, but I don't have much experience with it. My idea is that I define certain control points in a script and interpolate through them. If there are 5 control points, I would advance through them each tick by the frame time delta, so the length of my level is defined by the number of control points. On the other hand, I also need to interpolate the view (no specific idea yet, maybe a position ahead of the player ship). I don't think this is a good way to do it; maybe someone has a nice idea to help me out. The view interpolation also gives me a headache. Thanks

If I may explain the problem in more detail:

I'm using a spline function where I put in a time float and get a vector back.

There are two ways I could run the camera path:

1. Define all control points and, each tick, run through them, incrementing the time float by the frame time delta. But then I don't know the total time for the path, so I can't interpolate the view vector.

2. Define a total time, let's say 10000 milliseconds, and divide by it, so I get the fraction of elapsed time to feed into the spline function (100 milliseconds = 0.01). But then I had problems adding the frame time delta, since the accumulated time could end up bigger than the initial 10000 milliseconds (see the sketch below).
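Roughly what I mean for option 2, as a sketch (spline(), elapsedMs and the other names are just placeholders for whatever I end up using):

// A sketch of option 2: accumulate real time and clamp, so a frame delta
// can never push the parameter past the end of the 10000 ms path.
elapsedMs += deltaMs;                  // add this frame's delta
float t = elapsedMs / 10000.0f;        // 0.0 at the start, 1.0 at 10000 ms
if (t > 1.0f) t = 1.0f;                // stick at the last control point
Vec3 cameraPos = spline(t);            // spline() maps [0, 1] to a position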

Am I thinking completely wrong? Please, any suggestions or comments ...

Quote:
Original post by lem
But then I had problems adding the frame time delta, since the accumulated time could end up bigger than the initial 10000 milliseconds.


I don't think I'm understanding this correctly. Is your fps-delta larger than 10 seconds? That would be really bad fps. Or are you talking about the time taken between two camera points being too large, and resulting in incorrect extrapolation?

I just recently implemented this myself, and went about it in almost the exact same way as you describe. I generate camera position and view points, and then interpolate between them using a bicubic function that takes the timedelta as weight.

You need three things:

- Define a path as a series of control points and the time at which the camera should pass through that point. Camera speed is then determined by the time difference between control points.

- Set up an interpolation function that accepts a time value and returns the interpolated camera position and view direction. Each frame, pass the current time value (current frame time - time when path was started) and update the camera with the returned position/view details.

- Decide how you want to handle "out of bounds" behavior, which is what happens when your path takes 10 seconds to traverse but your frame is rendered at 10.1 seconds. The easiest solution is to get the last control point in the sequence and just "stick" at that point. A minimal sketch of all three pieces follows below.
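To make that concrete, here is a minimal sketch of those three pieces using plain linear interpolation; every name here (ControlPoint, CameraState, evaluatePath) is made up for illustration, not code from this thread:

#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct ControlPoint {
    float timeMs;      // when the camera should pass through this point
    Vec3  position;    // camera position at that time
    Vec3  lookAt;      // what the camera looks at at that time
};

struct CameraState { Vec3 position; Vec3 lookAt; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float u)
{
    return { a.x + (b.x - a.x) * u,
             a.y + (b.y - a.y) * u,
             a.z + (b.z - a.z) * u };
}

// elapsedMs = current frame time - time when the path was started.
CameraState evaluatePath(const std::vector<ControlPoint>& path, float elapsedMs)
{
    // Out-of-bounds handling: stick to the first/last control point.
    if (elapsedMs <= path.front().timeMs)
        return { path.front().position, path.front().lookAt };
    if (elapsedMs >= path.back().timeMs)
        return { path.back().position, path.back().lookAt };

    // Find the segment that contains elapsedMs.
    std::size_t i = 0;
    while (path[i + 1].timeMs < elapsedMs) ++i;

    // Normalised position inside the segment; the time difference between
    // the two control points is what sets the camera speed here.
    float u = (elapsedMs - path[i].timeMs) / (path[i + 1].timeMs - path[i].timeMs);

    return { lerp(path[i].position, path[i + 1].position, u),
             lerp(path[i].lookAt,   path[i + 1].lookAt,   u) };
}

Swapping lerp() for a spline evaluation later doesn't change this structure; only the per-segment interpolation does.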

Quote:
Original post by DragonL
I don't think I'm understanding this correctly. Is your fps-delta larger than 10 seconds? That would be really bad fps. Or are you talking about the time taken between two camera points being too large, and resulting in incorrect extrapolation?

I just recently implemented this myself, and went about it in almost the exact same way as you describe. I generate camera position and view points, and then interpolate between them using a bicubic function that takes the timedelta as weight.


Sorry about my "bad expression"... I'm just wondering how I could make the camera movement dependent on frame time, so that on a slow computer it behaves the same (although choppier) as on a fast computer.
I meant: if the spline path with 3 control points should be run in 10000 ms, so that a time float of 0.0 for the spline function means 0 ms and 1.0 means 10000 ms, I would have to increment the time float by 0.1 each second. Would that be enough, or do I have to use a timer frame delta?

Thanks guys, you're helping me a lot.

Quote:
Original post by DragonL
I generate camera position and view points, and then interpolate between them using a bicubic function that takes the timedelta as weight.

I don't understand how the time delta is taken as a weight. Can you describe it, please?

Basically, in all the interpolation algorithms, time can be interpreted as a weight.

Let's assume you have 3 points (A, B, C) and three times (t1, t2, t3) at which the camera should be at the corresponding points. You could then say that you interpolate between these points using a function f(t) that must fulfill the following:
f(t1) = A
f(t2) = B
f(t3) = C

Or you could consider time as a weight and construct a function g(t) which gives you the "weights" of the points at the given time. This function needs to fulfill:
g(t1) gives weight 1 for A, 0 for B, 0 for C
g(t2) gives weight 0 for A, 1 for B, 0 for C
g(t3) gives weight 0 for A, 0 for B, 1 for C

As you can see, the two approaches are pretty much identical; it's just that some algorithms work better with the first, some with the second.
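As a rough code sketch of the g(t) view, assuming three points and simple piecewise-linear blending (the names are just for illustration):

// Piecewise-linear weights for points A, B, C at times t1 < t2 < t3.
// f(t) is then weights.a * A + weights.b * B + weights.c * C.
struct Weights { float a, b, c; };

Weights g(float t, float t1, float t2, float t3)
{
    Weights w = { 0.0f, 0.0f, 0.0f };
    if (t <= t2) {                          // between A and B
        float u = (t - t1) / (t2 - t1);     // 0 at t1, 1 at t2
        w.a = 1.0f - u;
        w.b = u;
    } else {                                // between B and C
        float u = (t - t2) / (t3 - t2);     // 0 at t2, 1 at t3
        w.b = 1.0f - u;
        w.c = u;
    }
    return w;
}

You can check that g(t1) gives (1, 0, 0), g(t2) gives (0, 1, 0) and g(t3) gives (0, 0, 1), exactly as described above.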

I hope I didn't confuse you further... :)

Quote:
Original post by lem
Just wondering how I could make the camera movement dependent on frame time, so that on a slow computer it behaves the same (although choppier) as on a fast computer.


Listen to ApochPiQ, he's spot on. A fairly simple solution could look something like this though:


// weight is initialized to zero, i is the index of the current control point
weight += deltatime * 0.1f; // changing 0.1f changes the speed
if (weight > 1.0f)
{
    weight = 0.0f;
    i++;                    // move on to the next segment
}
currentPoint = interpolate(point[i], point[i + 1], weight);

// ...and just for completeness, interpolate(a, b, weight) might do this:

p.x = a.x * (1.0f - weight) + b.x * weight;
p.y = a.y * (1.0f - weight) + b.y * weight;
p.z = a.z * (1.0f - weight) + b.z * weight;
return p;
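One small refinement, if it matters: instead of resetting weight to 0.0f when it passes 1.0f, you could subtract 1.0f so the leftover fraction carries over into the next segment; with a plain reset the camera loses a tiny bit of time at every control point.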

Try to set up your interpolation as DragonL suggested; you will get linear interpolation, which is very good for a start.

When you are comfortable with how it works, you can consider going after a more sophisticated interpolation, like Bezier (don't be scared of the maths involved, it is really straightforward once you grasp the basics).
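For reference, evaluating a cubic Bezier segment is only a few multiplications; this is a generic sketch (Vec3 as in the earlier snippets), not code from anyone in this thread:

// Cubic Bezier from p0 to p3; p1 and p2 act as "pull" handles that shape
// the curve but are generally not passed through. t runs from 0 to 1.
Vec3 bezier(const Vec3& p0, const Vec3& p1, const Vec3& p2, const Vec3& p3, float t)
{
    float u = 1.0f - t;
    float b0 = u * u * u;
    float b1 = 3.0f * u * u * t;
    float b2 = 3.0f * u * t * t;
    float b3 = t * t * t;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y,
             b0 * p0.z + b1 * p1.z + b2 * p2.z + b3 * p3.z };
}

Since a Bezier only passes through its end points, a Catmull-Rom spline (which runs through every control point) is another common choice for camera paths.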

On a related note, I too have been experimenting with a scripted camera. However, I ran into problems when I wanted my in-game third-person camera to follow a scripted path. That is, it follows a scripted path (using control points) but tries to keep the player in focus at all times by rotating as necessary. What I was thinking was to translate according to the control points but rotate according to the player.

The issue is that, because it's interpolating translations, the camera tends to run at a different speed than the player. How do I get around this problem? Or am I doing this in a completely wrong way?
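A rough sketch of the idea, with OpenGL's gluLookAt doing the rotation; evaluatePathPosition() and player are placeholders for whatever the engine actually provides:

// Per-frame camera setup (needs <GL/glu.h>).
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Vec3 eye    = evaluatePathPosition(elapsedMs);   // scripted control points
Vec3 target = player.position;                   // keep the player in focus
gluLookAt(eye.x,    eye.y,    eye.z,
          target.x, target.y, target.z,
          0.0f,     1.0f,     0.0f);             // world up vector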
