
My ball's deceleration is too fast: it speeds up nicely, but slows down too fast.



#1 tom_mai78101   Members   -  Reputation: 568

Posted 10 October 2012 - 11:37 AM

It may seem simple enough, but I've realized how tough it is to get right.

Here's my basic logic:

[source lang="java"]
// Some code is called here to update the acceleration values.
public void accelerate() {
    for (int i = 0; i < 2; i++) {
        speed[i] += acceleration[i];
        position[i] += speed[i];
        speed[i] *= 0.1f;
    }
}
// Some code is called here to render everything.
[/source]
You may have noticed that I multiply the speed values by 0.1. The reason is that, due to game logic requirements, the acceleration values can easily ramp the speed up within a fraction of a second. I would like the object to move just as fast while it's accelerating, but to take a while to slow down when it's decelerating.
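To see the problem concretely, here is a minimal standalone sketch (not my actual game code) showing how aggressive a per-frame factor of 0.1 is; the speed collapses by 90% every single frame:

[source lang="java"]
public class DampingDemo {
    public static void main(String[] args) {
        float speed = 10.0f; // starting speed, in units per frame
        for (int frame = 1; frame <= 5; frame++) {
            speed *= 0.1f; // same damping factor as in accelerate()
            System.out.println("frame " + frame + ": speed = " + speed);
        }
        // Prints 1.0, 0.1, 0.01, ... -- the ball is effectively stopped
        // after two or three frames, which is why it slows down too fast.
    }
}
[/source]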

I did think about doing this:

[source lang="java"]
// Some code is called here to update the acceleration values.
public void accelerate() {
    for (int i = 0; i < 2; i++) {
        tempValues[i] += acceleration[i];
        speed[i] += tempValues[i];
        position[i] += speed[i];
        speed[i] *= 0.1f;
    }
}
// Some code is called here to render everything.
[/source]

This effectively makes the acceleration act at a cubic power: by the time the values are integrated down to the position, it creates a rippling effect of decreasing influence that lengthens the time the ball takes to stop when it is slowing down. I could take it even further, like this, if I wanted:

[source lang="java"]
// Some code is called here to update the acceleration values.
public void accelerate() {
    for (int i = 0; i < 2; i++) {
        tempValuesN[i] += acceleration[i];
        tempValuesN1[i] += tempValuesN[i];
        // ...
        // ...
        tempValues3[i] += tempValues4[i];
        tempValues2[i] += tempValues3[i];
        tempValues1[i] += tempValues2[i];
        tempValues[i] += tempValues1[i];
        speed[i] += tempValues[i];
        position[i] += speed[i];
        speed[i] *= 0.1f;
    }
}
// Some code is called here to render everything.
[/source]

But that would be very unrealistic and very unreasonable. Is there any other way to make the deceleration work nicely, behaving like the ball in the game Teeter for Android?

Edited by tom_mai78101, 10 October 2012 - 11:37 AM.



#2 Bacterius   Crossbones+   -  Reputation: 8286

Posted 10 October 2012 - 02:11 PM

Multiply the acceleration and velocity by your timestep, i.e. the inverse of your refresh rate (1.0 / 30.0 or 1.0 / 60.0); it'll be much smoother. Since acceleration and velocity are defined in terms of SI seconds, you need to take into account the number of times you update them every second. And get rid of the 0.1 multiplication, which looks like it's there as a band-aid.
[source lang="java"]
public void accelerate() {
    for (int i = 0; i < 2; i++) {
        speed[i] += acceleration[i] * (1.0 / refreshRate);
        position[i] += speed[i] * (1.0 / refreshRate);
    }
}
[/source]
Look up Euler time integration for the details; you might have to swap the velocity and position updates around, I'm not sure. Also, deceleration is just negative acceleration, which you don't seem to be applying anywhere here. It should work then.

If you need slightly slower deceleration, you can multiply your deceleration by some factor, like 0.5, and the ball will decelerate more slowly (because the velocity will take more time to reach zero).
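If you still want friction-style damping on top of that, a common trick is to make it frame-rate independent by raising the damping factor to the power of the timestep. A minimal sketch, assuming speed, position and acceleration are double[] fields and dt is the elapsed time in seconds (the method name and the damping constant are mine, tune them to taste):

[source lang="java"]
public void integrate(double dt) {
    final double damping = 0.5; // fraction of the speed left after one full second
    for (int i = 0; i < 2; i++) {
        speed[i] += acceleration[i] * dt;  // semi-implicit Euler: velocity first,
        position[i] += speed[i] * dt;      // then position, for better stability
        speed[i] *= Math.pow(damping, dt); // exponential decay, independent of frame rate
    }
}
[/source]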

Edited by Bacterius, 10 October 2012 - 06:19 PM.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#3 tom_mai78101   Members   -  Reputation: 568

Posted 11 October 2012 - 12:29 AM

Is any refresh rate fine? I've never had to set one before.

The acceleration values can easily swing from negative to positive and back to negative within a very short time span, as they're hooked directly to the accelerometer embedded in the device.

#4 Bacterius   Crossbones+   -  Reputation: 8286

Posted 11 October 2012 - 02:07 AM

You need to use your physics refresh rate. Ideally you'd use the elapsed time since the last physics update, but we'll assume it is constant for simplicity. It also depends a lot on what your values are: are the values reported by your accelerometer instantaneous accelerations? Are they in meters per second squared? This all matters if you want an accurate result.
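(For what it's worth, if this is the standard Android accelerometer, the values are reported in m/s² with gravity included. A minimal sketch of reading them through the stock sensor API; the class name and the two-axis copy are purely illustrative:)

[source lang="java"]
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Illustrative only: the standard Android sensor API delivers accelerometer
// readings in m/s^2 (gravity included) through onSensorChanged().
public class AccelerometerReader implements SensorEventListener {
    private final double[] acceleration = new double[2];

    @Override
    public void onSensorChanged(SensorEvent event) {
        acceleration[0] = event.values[0]; // x axis, in m/s^2
        acceleration[1] = event.values[1]; // y axis, in m/s^2
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this example.
    }
}
[/source]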

If the acceleration is not well-behaved, you will need a better time integration scheme. The one you are currently using (Euler, i.e. velocity += acceleration * dt, position += velocity * dt) is numerically unstable and doesn't work well at all when your acceleration changes very quickly (you need an increasingly high refresh rate to keep it accurate). You'll get weird artifacts like a slingshot effect (poor energy conservation), and the simulation will be wrong overall. Look into better methods like Verlet integration or Runge-Kutta.
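For example, here is a minimal position-Verlet sketch (assuming double[] fields position, previousPosition and acceleration, and a fixed timestep dt; the field names are mine). The velocity is implicit in the last two positions, which makes the integrator much more tolerant of rapidly changing acceleration:

[source lang="java"]
public void verletStep(double dt) {
    for (int i = 0; i < 2; i++) {
        double current = position[i];
        // x_new = 2*x - x_prev + a * dt^2 (velocity is never stored explicitly)
        position[i] = 2.0 * position[i] - previousPosition[i]
                    + acceleration[i] * dt * dt;
        previousPosition[i] = current;
    }
}
[/source]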





