When working with geometry, you'll need to do some intersection tests at some point. Sometimes it's directly related to graphics, but sometimes it helps determine other useful things, like optimum paths. This article is meant to give the up-and-coming game developer a few more tools in their computational toolbox.
For this article, I'm assuming you know all about vectors, points, and dot and cross products. We'll cover some quick properties of polynomials, a few facts about basic curves, and then go over intersection tests.
**Note: In an attempt to make this article more accessible to those people uncomfortable with making code from straight math, I've added some coding tips to help you figure out how to use these methods.**
Polynomials
The Stone-Weierstrass theorem states that any continuous function defined on a closed interval can be uniformly approximated as closely as desired with a polynomial function. For graphics and game development, that means that most 2D objects we deal with can be expressed as polynomials. The key to these intersection test methods is using that to our advantage.
Bezout's Theorem
In order to find all our intersection points, we need to know how many intersections there can be between two polynomials. Bezout's theorem states that a planar algebraic curve of degree *m* and a second planar algebraic curve of degree *n* intersect in exactly *mn* points, provided we properly count complex intersections, intersections at infinity, and intersections of multiplicity greater than one. If they intersect in more than that number of points, then they intersect at infinitely many points, which means they share a common component. Bezout's theorem also extends to surfaces: a surface of degree *m* intersects a surface of degree *n* in a curve of degree *mn*, and a space curve of degree *m* intersects a surface of degree *n* in *mn* points.

The key here is knowing how to count intersections. For example, if two curves are tangent at a point, that intersection counts twice. If they also have the same curvature there, it counts three times. Simple intersections (not tangent and not self-intersecting) count once. So how do we count intersections at infinity? Using homogeneous coordinates, of course!

Homogeneous Coordinates

Counting intersections at infinity sounds hard, but we can use *homogeneous coordinates* to do this. Here, we define a point in 3D homogeneous space \((X,Y,W)\) to correspond to the 2D point \((x,y)\) whose coordinates are \((X/W,Y/W)\). This means the 3D homogeneous point \((4,2,2)\) corresponds to the 2D point \((4/2,2/2) = (2,1)\). Going the other way, the 2D point \((3,1)\) becomes the 3D point \((3,1,1)\), since the transformation back to 2D is simply \((3/1,1/1) = (3,1)\). This creates some strange equalities, but you can visualize this 3D-to-2D transformation as projecting the points (and curves) in 3D onto the plane \(w=1\).

This lets us express infinities with finite numbers. Let's say we have the point \((2,3,1)\) in homogeneous coordinates. If we changed the weight coordinate \(W\) so that we had \((2,3,0.5)\), the 2D point would be \((4,6)\). Smaller weights mean larger projected coordinates. As the weight approaches zero, the 2D point gets larger and larger, until at \(W = 0\) the projected point is at infinity. Thus, any homogeneous point \((X,Y,0)\) is a point at infinity.
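In code, the two conversions are a couple of one-liners. Here's a minimal sketch (function names are my own, not from any particular library):

```python
# Convert between 3D homogeneous coordinates (X, Y, W) and 2D points (x, y).

def to_cartesian(X, Y, W):
    """Project homogeneous point (X, Y, W) onto the plane w = 1."""
    if W == 0:
        raise ValueError("(X, Y, 0) is a point at infinity")
    return (X / W, Y / W)

def to_homogeneous(x, y, w=1.0):
    """Lift the 2D point (x, y) to homogeneous coordinates with weight w."""
    return (x * w, y * w, w)

print(to_cartesian(4, 2, 2))    # the article's example: (2.0, 1.0)
print(to_homogeneous(3, 1))     # (3.0, 1.0, 1.0)
print(to_cartesian(2, 3, 0.5))  # halving the weight doubles the point: (4.0, 6.0)
```

Note that `to_cartesian` raises on a zero weight rather than returning a value, since \((X,Y,0)\) has no finite 2D counterpart.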

**Aside: Equations in Homogeneous Form**

Usually, polynomials have terms that differ in their algebraic degrees: some are quadratic, some cubic, some constant, and so on. Such a polynomial takes the following form: \[f(x,y) = \sum_{i+j\le n} a_{ij}x^iy^j = 0 \] However, the same curve can be expressed in homogeneous form by adding a homogenizing variable \(w\): \[f(X,Y,W) = \sum_{i+j+k = n} a_{ij}X^iY^jW^k = 0 \] Here, every term in the polynomial is of degree *n*. For example, the parabola \(x^2 - y = 0\) (degree 2) homogenizes to \(X^2 - YW = 0\), where every term has degree 2.

Equation Types

To use polynomials effectively, we need to be familiar with how they can be expressed. There are basically three types of equations that can be used to describe planar curves: parametric, implicit, and explicit. If you've only had high-school math, you've probably dealt with explicit curves a lot and not so much with the others.

**Parametric:** The curve is specified by a real number \(t\), and each coordinate is given by a function of \(t\), like \(x = x(t)\) and \(y = y(t)\). Rational parametric curves are defined the same way, but each coordinate is divided by another function of \(t\), like \(x = x(t)/w(t)\) and \(y = y(t)/w(t)\).

**Implicit:** This curve takes the form \(f(x,y) = 0\), where any point \((x,y)\) satisfying the equation is on the curve.

**Explicit:** This is really a special case of both the parametric and implicit forms. The explicit form of a curve is the classic \(y=f(x)\) form.

Lines

Let's start with a very basic curve: the line. It's a degree-1 polynomial. There are many definitions of this kind of curve. Some are helpful for games and others...not so much. Some you may have seen in algebra class and others you may be seeing for the first time.
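The three equation types described above (parametric, implicit, explicit) can be illustrated concretely with the unit circle. This is just a sketch; the function names are my own:

```python
import math

def circle_parametric(t):
    # Parametric: x = x(t), y = y(t)
    return (math.cos(t), math.sin(t))

def circle_implicit(x, y):
    # Implicit: f(x, y) = 0 exactly when (x, y) is on the curve
    return x**2 + y**2 - 1

def upper_circle_explicit(x):
    # Explicit: y = f(x). Note it only covers the upper half of the
    # circle -- one reason the explicit form is the least general.
    return math.sqrt(1 - x**2)

x, y = circle_parametric(math.pi / 3)
print(abs(circle_implicit(x, y)) < 1e-12)         # point lies on the curve
print(abs(upper_circle_explicit(x) - y) < 1e-12)  # explicit form agrees here
```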

**Common Forms**

**Slope-intercept:** \( y = mx+b \)

**Point-slope:** \( y - y_0 = m(x-x_0) \)

**Affine:** \( P(t) = [(t_1-t)P_0+(t-t_0)P_1] / (t_1-t_0) \)

**Vector Implicit:** \( (P-P_0)\cdot n = 0 \)

**Parametric:** \( P(t) = (x_0+at,\, y_0+bt) \)

**Algebraic Implicit:** \( ax+by+c=0 \)

In secondary schools, they use the first two forms almost exclusively, but these turn out to be the least helpful for computing. Here, I've tried to order the definitions from least helpful to most helpful for our needs. Sometimes it helps to illustrate how useful these forms can be with a motivating example, which will hopefully connect formulas like the dot product to traditional algebra and polynomials so you can see the relationships.

Motivating Example: Closest Point on 3D Line to a Given Point

In 3D, we can't represent a line as a single implicit equation like \(ax+by+c=0\). It becomes a parametric equation: \[ L(t) = \begin{cases} x=x_0+ut \\ y=y_0+vt \\ z=z_0+wt \end{cases}\] where \(\vec{a}=(u,v,w)\) is the direction vector of the line and \(P_0=(x_0,y_0,z_0)\) is a point on the line. These two things define the line completely.

What we want is an expression for the closest point on that line to another given point, which we'll name \(Q\). If we drew a vector from the closest point on the line to \(Q\), we would see that this vector is perpendicular to \(\vec{a}\); in other words, their dot product is zero. This point is unique on the line, so we should be able to solve for it algebraically. Remember that the general point on the line is \(P=P_0+\vec{a}t\). The dot product equation looks like this: \[ \begin{aligned} \vec{a} \cdot (Q-P) &= 0 \\ (u,v,w) \cdot (dx-ut,\,dy-vt,\,dz-wt) &= 0 \end{aligned} \] where \( (dx,dy,dz) = (x_Q-x_{P_0},\,y_Q-y_{P_0},\,z_Q-z_{P_0}) \). If we expand this out and solve for \(t\), we get the following expression: \[ t = \frac{\vec{a} \cdot (Q-P_0)}{\vec{a} \cdot \vec{a}} \] If we restrict \(\vec{a}\) to be a unit vector, the denominator becomes 1 and vanishes. This gives us a very simple expression for the closest point on the line: \( P = P_0 + \left [ \vec{a} \cdot (Q-P_0) \right ] \vec{a} \).
The parameter multiplied by the vector is simply the dot product of \(\vec{a}\) with the vector from the known point on the line to the given point. This serves to show that the dot product helps us find components of vectors in certain directions: what we get back from the dot product is the component of the vector \((Q-P_0)\) in the direction of \(\vec{a}\). This is consistent with the linear algebra definition.
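The derivation above translates directly into code. Here's a minimal sketch (function and variable names are my own); it uses the general form \( t = \vec{a} \cdot (Q-P_0) / (\vec{a} \cdot \vec{a}) \), so \(\vec{a}\) doesn't need to be a unit vector:

```python
def closest_point_on_line(p0, a, q):
    """Return the point on the line P(t) = p0 + a*t closest to q.

    p0 -- a point on the line
    a  -- direction vector of the line (need not be unit length)
    q  -- the query point
    """
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    d = tuple(qi - pi for qi, pi in zip(q, p0))  # Q - P0
    t = dot(a, d) / dot(a, a)                    # parameter of the closest point
    return tuple(pi + ai * t for pi, ai in zip(p0, a))

# The x-axis point nearest (3, 4, 0); the offset vector (0, 4, 0)
# from the result to Q is perpendicular to the line, as derived above.
p = closest_point_on_line((0, 0, 0), (1, 0, 0), (3, 4, 0))
print(p)  # (3.0, 0.0, 0.0)
```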