# ball-curve intersections


## Recommended Posts

I plan on writing a simple bouncing-ball engine where multiple balls bounce off the environment and off each other. That's no problem if the environment is made of line segments, but I want something smoother, such as Bézier curves. I read up on Bézier curves and found that it isn't possible, or at least not easy to implement, to calculate the intersection between a moving ball and a Bézier curve.
Then I figured that if I could find the intersection between a moving ball and a parabola, I could attach a coordinate system to each parabola and in that way rotate and scale it. That turned out to be too hard for me, too...
Is there any type of curve that is commonly used in this kind of engine?
Btw, I have some kind of fetish for exact calculations, so if there is an exact solution I would prefer it.
But all help and every comment is welcome, of course.

##### Share on other sites
Consider the stationary problem first. The standard parabola is y = a*x^2 and the circle has center (x0,y0) and radius r. The parabola and circle intersect when the distance from (x0,y0) to the parabola is less than or equal to r. The parabola point (x,y) = (x,a*x^2) closest to (x0,y0) has the property that the tangent direction of the graph, (1,2*a*x), is perpendicular to (x-x0,y-y0). This leads to a cubic polynomial in x: (x-x0) + 2*a*x*(a*x^2-y0) = 0.
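As a sketch of this closest-point test (the function name and the use of NumPy's polynomial root finder are my own choices, not from the post):

```python
import numpy as np

def circle_hits_parabola(a, x0, y0, r):
    """Return True if the circle (center (x0, y0), radius r) intersects
    the parabola y = a*x^2, by finding the closest parabola point.

    Closest-point condition: (x - x0) + 2*a*x*(a*x^2 - y0) = 0,
    i.e. 2*a^2*x^3 + (1 - 2*a*y0)*x - x0 = 0, a cubic in x."""
    coeffs = [2.0 * a * a, 0.0, 1.0 - 2.0 * a * y0, -x0]
    roots = np.roots(coeffs)
    # A real cubic always has at least one real root.
    xs = [z.real for z in roots if abs(z.imag) < 1e-7]
    # Minimum squared distance over all critical points of the distance.
    d2 = min((x - x0) ** 2 + (a * x * x - y0) ** 2 for x in xs)
    return bool(d2 <= r * r)
```

For example, with a = 1 and center (0, 2), the nearest parabola points are at x = ±sqrt(1.5), at squared distance 1.75, so a radius of 1.4 intersects while a radius of 1.0 does not.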

For the circle to be tangent to the graph of the parabola, you need f(x) = (x-x0)^2 + (a*x^2 - y0)^2 - r^2 = 0 (circle intersects the graph). Because the circle is tangent, you also know the derivative is zero: 0 = f'(x) = 2*[(x-x0) + 2*a*x*(a*x^2-y0)]. You can use algebraic elimination to solve simultaneously f(x) = 0 and f'(x) = 0 to compute roots x. In the process, you obtain algebraic conditions on x0, y0, and r for when this happens. These conditions give you an "envelope" of the parabola. This consists of two curves, one "outside" the parabola and one "inside" the parabola. Points on the two curves are distance r from the parabola. Points inside the two curves are centers of circles for which the parabola cuts through the circle (intersection but not tangency).

For a center moving with constant linear velocity, (x0+t*u, y0+t*v), you look at the equations above and obtain one involving t linearly and one involving t quadratically. Assuming the circle is initially not intersecting the parabola, you can use algebraic elimination methods to determine the smallest positive time t for which the circle becomes tangent to the parabola (collides with the parabola).
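As a purely numerical (non-exact) sketch of that time-of-impact search, using sampling plus bisection instead of the algebraic elimination the post describes (the names, step counts, and scan interval are my own assumptions):

```python
import numpy as np

def parabola_distance(a, x0, y0):
    """Distance from (x0, y0) to the parabola y = a*x^2 via the closest-point cubic."""
    roots = np.roots([2.0 * a * a, 0.0, 1.0 - 2.0 * a * y0, -x0])
    xs = [z.real for z in roots if abs(z.imag) < 1e-7]
    return min(np.hypot(x - x0, a * x * x - y0) for x in xs)

def time_of_impact(a, x0, y0, u, v, r, t_max=10.0, steps=1000, iters=60):
    """Smallest t in [0, t_max] with distance(center(t), parabola) = r.
    Scans for a sign change of g(t) = distance - r, then refines by bisection.
    Assumes the circle starts separated (g(0) > 0)."""
    g = lambda t: parabola_distance(a, x0 + t * u, y0 + t * v) - r
    dt = t_max / steps
    t_prev, g_prev = 0.0, g(0.0)
    for i in range(1, steps + 1):
        t, gt = i * dt, g(i * dt)
        if g_prev > 0.0 and gt <= 0.0:     # crossed from separated to touching
            lo, hi = t_prev, t
            for _ in range(iters):          # bisection refinement
                mid = 0.5 * (lo + hi)
                if g(mid) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        t_prev, g_prev = t, gt
    return None  # no collision within [0, t_max]
```

For instance, a unit parabola with a circle of radius 0.5 starting at (0, 3) and falling straight down at speed 1 first touches at t = 2.5 (the distance from (0, y0) to y = x^2 is sqrt(y0 - 1/4) for y0 > 1/2, which equals 0.5 when y0 = 0.5).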

Quote:
 Original post by RubyNL: Btw, I have some kind of fetish for exact calculations, so if there is an exact solution I would prefer it.

You might have a fetish, but sadly computers do not. The idea of wanting an "exact solution" is an artifact of a mathematical education system whereby exercises are designed to produce closed-form solutions. In practice, this is not always possible, and even if it were, the closed-form solutions can be problematic when using floating-point arithmetic (computing roots of polynomials, for example).

##### Share on other sites
Wow! Thanks for your reply. I have to take some time to fully understand it; my math and English are only so-so and the combination makes it worse :P.

Quote:
 Original post by Dave Eberly: You might have a fetish, but sadly computers do not. The idea of wanting an "exact solution" is an artifact of a mathematical education system whereby exercises are designed to produce closed-form solutions. In practice, this is not always possible, and even if it were, the closed-form solutions can be problematic when using floating-point arithmetic (computing roots of polynomials, for example).

I know that with today's (and probably the future's) implementation of fixed- and floating-point math, it is impossible to calculate most non-integer values exactly, since they have an infinite number of digits.
The problem I have with approximations is that in some cases an approximation causes trouble when you compare two values that are very close to each other. An approximation could wrongly identify one of the values as bigger than the other, while in fact the other value is bigger. An algorithm that in theory produces exact results does not suffer from this problem. (In theory, or when you calculate exactly, that is, you don't calculate square roots for example, you don't have these problems at all. However, when implemented using floating- or fixed-point arithmetic the problems might still occur, but I assume you would have fewer of them when you use exact algorithms.)
Sorry for the long text (edit: when editing it looks a lot longer ;)), I will respond soon to the actual mathematical stuff in your post (as soon as I understand it / have something to reply).

##### Share on other sites
Quote:
 Original post by RubyNL: The problem I have with approximations is that in some cases an approximation causes trouble when you compare two values that are very close to each other. An approximation could wrongly identify one of the values as bigger than the other, while in fact the other value is bigger. An algorithm that in theory produces exact results does not suffer from this problem. (In theory, or when you calculate exactly, that is, you don't calculate square roots for example, you don't have these problems at all. However, when implemented using floating- or fixed-point arithmetic the problems might still occur, but I assume you would have fewer of them when you use exact algorithms.)

Exact numbers exist, practically speaking, only in mathematics. In the real world you have to live with approximations and errors. Your error is to think the numbers you are obtaining are exact, when they are not. You simply can't compare two values the way you would compare exact results; you also have to consider the possible error and add some bias. Note that this isn't only related to computer programs. Engineers do it all the time.
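A tiny Python illustration of the kind of tolerance ("bias") comparison being described here:

```python
import math

# 0.1 + 0.2 is not exactly 0.3 in binary floating point.
a = 0.1 + 0.2
b = 0.3
print(a == b)        # False: the two doubles differ by about 5.6e-17

# Compare with a tolerance instead of exact equality.
print(math.isclose(a, b, rel_tol=1e-9))  # True
```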

##### Share on other sites
Quote:
 Original post by RubyNL: Wow! Thanks for your reply. I have to take some time to fully understand it; my math and English are only so-so and the combination makes it worse :P.

The squared distance f(x) = (x-x0)^2 + (a*x^2-y0)^2 - r^2 is a degree 4 polynomial in x, so write it as f(x) = a4*x^4+a2*x^2+a1*x+a0 (there is no x^3 term). Then f'(x) = 4*a4*x^3+2*a2*x+a1. For the circle to be tangent to the parabola, you know that f(x) = 0 and f'(x) = 0.

Define g(x) = 4*f(x) - x*f'(x) = b2*x^2+b1*x+b0. You know that g(x) = 0.

Define h(x) = a2*f'(x) - 2*a4*x*g(x) = c2*x^2+c1*x+c0 (the x^3 terms cancel). You know that h(x) = 0.

Define i(x) = c2*g(x) - b2*h(x) = d1*x+d0. You know that i(x) = 0. This lets you solve x = -d0/d1.

Define j(x) = d1*g(x) - b2*x*i(x) = e1*x+e0. You know that j(x) = 0.

Define k(x) = e1*i(x) - d1*j(x) = e1*d0 - e0*d1. You know that k(x) = 0, so the equation e1*d0-e0*d1 = 0 is a "consistency" equation and tells you how x0, y0, r, and a (the circle and parabola parameters) are related when the circle and parabola are tangent.

You can use algebra to derive how d0, d1, e0, and e1 are related to the coefficients a0, a1, a2, and a4.
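The whole elimination chain can be sketched as straight-line code (assuming h(x) = a2*f'(x) - 2*a4*x*g(x), which makes the x^3 terms cancel; the function name is my own):

```python
def tangency_data(a, x0, y0, r):
    """Eliminate down from f(x) = 0, f'(x) = 0 to the candidate contact
    point x = -d0/d1 and the consistency value k = e1*d0 - e0*d1,
    which is zero exactly when the circle is tangent to y = a*x^2."""
    # f(x) = a4*x^4 + a2*x^2 + a1*x + a0 (no x^3 term)
    a4 = a * a
    a2 = 1.0 - 2.0 * a * y0
    a1 = -2.0 * x0
    a0 = x0 * x0 + y0 * y0 - r * r
    # g(x) = 4*f(x) - x*f'(x) = b2*x^2 + b1*x + b0
    b2, b1, b0 = 2.0 * a2, 3.0 * a1, 4.0 * a0
    # h(x) = a2*f'(x) - 2*a4*x*g(x) = c2*x^2 + c1*x + c0
    c2 = -2.0 * a4 * b1
    c1 = 2.0 * a2 * a2 - 2.0 * a4 * b0
    c0 = a2 * a1
    # i(x) = c2*g(x) - b2*h(x) = d1*x + d0
    d1 = c2 * b1 - b2 * c1
    d0 = c2 * b0 - b2 * c0
    # j(x) = d1*g(x) - b2*x*i(x) = e1*x + e0
    e1 = d1 * b1 - b2 * d0
    e0 = d1 * b0
    # k = e1*i(x) - d1*j(x), a constant
    k = e1 * d0 - e0 * d1
    x = -d0 / d1 if d1 != 0.0 else None
    return x, k
```

As a check: with a = 1, the circle of radius sqrt(5) centered at (-1, 2) is tangent to y = x^2 at x = 1 (the center lies along the normal (-2, 1)/sqrt(5) at distance sqrt(5) from (1, 1)), and the chain returns x = 1 with k = 0; shrinking the radius to 2 gives a large nonzero k. Note the chain degenerates (0/0) in symmetric cases like x0 = 0, where a1 = 0 and there are two tangent points.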

Quote:
 I know that with todays (and probably future's) implemention of fixed and floating point math,

There are times when you want exact arithmetic or arbitrary precision arithmetic, but these are performed in software and so are slow. However, I think it is an error to confuse "real numbers" and "floating-point numbers". Consider something as simple as "float oneThird = 1.0f/3.0f;". Is oneThird exactly the real number 1/3? It is not, because you cannot represent 1/3 with a finite number of bits (binary representation is repeating). So you even have to be careful when typing in floating-point constants in a program, because they might themselves be approximated. The IEEE standard requires "correctly rounded" results, so oneThird is computed as if it were infinite precision but rounded to the nearest 32-bit float (rounding based on what mode you have the FPU configured to use).
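The same thing can be seen concretely in Python, whose floats are IEEE doubles (the variable name just mirrors the C example above):

```python
from decimal import Decimal

one_third = 1.0 / 3.0           # correctly rounded to the nearest double
# Decimal(x) shows the exact value stored in the double: it begins with
# many 3s but the tail is not all 3s, because 1/3 has no finite binary form.
print(Decimal(one_third))
# For contrast, a 28-digit decimal approximation, which is also not 1/3:
print(Decimal(1) / Decimal(3))
```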

##### Share on other sites
I like this discussion of exact math vs. math on computers. Dave has strong insights and real experience with this, and his advice is good. My only comment about this matter would be that a goal of any numerical algorithm implemented on a digital computer should be to make the implementation robust, so that it doesn't fail in the face of floating-point precision/roundoff error. The concept of "robust predicates" provides approaches to dealing with this, for example. Jonathan Shewchuk has written about it, and has some code out there that is useful for some applications. Similarly, Christer Ericson has some practical approaches as well, which you can find out about at Essential Math Tutorials or RealTimeCollisionDetection.net (companion site for his book). These things, along with some of Dave's writings, can be very useful tools to help you create robust simulation software.

##### Share on other sites
I feel strongly enough about the topic that I am writing a series of books on it ("Numerical Computing for Games and Science"). The first involves an in-depth discussion of the IEEE 754-2008 Standard and has many examples of (1) where thinking "mathematically" can lead you down the wrong path and (2) where the floating-point results are counterintuitive. Shipping with the book will be a new set of libraries designed for robustness, accuracy, and speed (using SIMD, multicore, GPUs), including support for exact arithmetic (integer, rational), software floating-point for 128/256 bits, arbitrary-precision arithmetic, and interval arithmetic.

Several in-depth applications show what issues can arise and how you have to think "floating-point" during algorithm development. They also show that non-typical approaches to solving problems are called for: when you have a lot of computing power, you tend to do things differently than when you design a sequential algorithm for a CPU.

Even something as simple as computing the average of a large quantity of large floating-point numbers can be problematic. If you simply add all the numbers and divide by the quantity, you run the risk of floating-point overflow. More robust is to use a mipmapping approach (averages of averages) and build a pyramid whose 1x1 final value is the average. When the inputs are finite floating-point numbers, the result is guaranteed to be a finite floating-point number (albeit with round-off errors). The trade-off is that the mipmapping approach uses more cycles (when programming on a single core of a CPU).
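A minimal sketch of the averages-of-averages idea, assuming the input length is a power of two like a mip chain (the function name and the demo values are my own):

```python
import math

def pyramid_average(values):
    """'Mipmap' average: repeatedly average adjacent pairs until one
    value remains. Assumes len(values) is a power of two."""
    vals = list(values)
    assert vals and len(vals) & (len(vals) - 1) == 0, "length must be a power of two"
    while len(vals) > 1:
        # Halve before adding so no intermediate sum can overflow:
        # every level stays between min(values) and max(values).
        vals = [0.5 * vals[i] + 0.5 * vals[i + 1] for i in range(0, len(vals), 2)]
    return vals[0]

huge = [1.7e308] * 8
print(sum(huge) / len(huge))   # inf: the running sum overflows immediately
print(pyramid_average(huge))   # 1.7e308: the pyramid never overflows
```

Note the pairwise sums are scaled before adding; 0.85e308 + 0.85e308 is representable while 1.7e308 + 1.7e308 is not.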

This example came up when numerically solving a 3D parabolic PDE that models combustion of solid fuel (think Navier-Stokes as used in Stam's real-time fluids). The mipmapping approach uses a lot more cycles than the straightforward average, but it was essential for a numerically stable solution. On a 256^3 grid, a 3 GHz core of a CPU took 98 minutes to compute the solution until "blow up". I managed to map all the computations onto the GPU. On an NVIDIA GeForce 9800 GT, the computation time was 48 seconds (although the GPU heated up so much its fan kicked into high speed and sounded like a helicopter). With all this great hardware these days, you can get robustness, accuracy, and speed all at the same time; the computer science joke of "choose 2 out of 3" no longer applies. :)
