Convexity question
This is an N-dimensional question rather than the 3D of most games, but it arises from some 3D issues. I've thought of some heuristics, but not proved that one always works. There is probably a standard answer of which I am too ignorant even to have the right keywords...
I have points P1,...,PM in N-dimensional space, with N large (dozens or hundreds) and M larger, plus another point Q. How do I quickly decide if Q is a convex combination of [some of] the points P1,...,PM?
I do _not_ need the whole convex hull, I just want to know if Q is inside it, and a set of points to use. As a bonus, it would be nice if the coefficients in the convex combination are as equal as the set of points allows.
Any pointers?
hmm... something like GJK / MPR to detect if you are inside the convex hull of a shape?
Dunno how it extends to N dimensions, but it is based on simple vector calculus.
First, subtract P[M] from P[1] through P[M-1] and from Q, say P'[i] = P[i] - P[M] and Q' = Q - P[M]. This gets you from an affine space to a linear space. You now have M-1 vectors in an N-dimensional space, with M-1 >= N. Pick a maximal linearly independent subset B of {P'} ("completing a basis" is the linear algebra term for this). Project each vector in B out of Q'. If the remainder is zero, then Q' is a linear combination of the vectors in B. If the remainder is not zero, then Q is not a combination of the P (convex or otherwise). When Q' is a linear combination, though, the coefficients are not necessarily nonnegative. Oops.
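That projection test can be sketched in a few lines of NumPy (the library is my choice, not from the thread; any least-squares solver works). A zero residual means Q lies in the affine span of the P, which is necessary but, as noted, not sufficient for a convex combination:

```python
import numpy as np

def in_affine_span(P, Q, tol=1e-9):
    """P: (M, N) array of points; Q: (N,) query point.
    True iff Q is some linear combination of the shifted P (affine span)."""
    base = P[-1]
    A = (P[:-1] - base).T          # columns are the shifted vectors P'
    b = Q - base                   # shifted target Q'
    coeffs, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    residual = A @ coeffs - b
    return np.linalg.norm(residual) < tol

# Example: a point in the plane of a triangle passes, one off the plane fails.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(in_affine_span(P, np.array([0.25, 0.25])))   # prints True
```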
Alternatively, let D = sum_{i=1}^{M} c_i*P_i - Q. You want to choose c_i >= 0 with sum_i c_i = 1 so that D = 0. Make this instead a constrained quadratic minimization with linear constraints: minimize |D|^2 subject to c_i >= 0 and sum_i c_i = 1. If this has a solution with minimum D = 0, then Q is a convex combination of the P_i. I'd have to check, but this might be solvable as an LCP.
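One cheap way to attack that quadratic minimization is non-negative least squares (NNLS) — that swaps in a specific solver the post doesn't name, so treat this as a sketch. Appending a heavily weighted row of ones approximately enforces sum c_i = 1, NNLS enforces c_i >= 0, and c_i <= 1 then follows from the other two:

```python
import numpy as np
from scipy.optimize import nnls

def convex_coefficients(P, Q, weight=1e3, tol=1e-6):
    """P: (M, N) array of points, Q: (N,) query point.
    Returns (c, inside): coefficients and whether Q is in the hull."""
    M = P.shape[0]
    A = np.vstack([P.T, weight * np.ones((1, M))])  # N equation rows + sum row
    b = np.concatenate([Q, [weight]])               # target, sum(c) = 1 scaled by weight
    c, _ = nnls(A, b)                               # least squares with c >= 0
    residual = np.linalg.norm(P.T @ c - Q) + abs(c.sum() - 1.0)
    return c, residual < tol
```

If `inside` comes back True, `c` is one usable set of coefficients; the weight and tolerance are ad hoc and may need tuning for badly scaled data.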
As oliii mentioned, you can construct the convex hull of the P_i and test whether Q is inside it. From this, you can compute the c_i.
If you Google for "positive solutions" and "linear systems" you'll see that the problem has been studied before, although I couldn't get a very good link.
If you take a linear combination of P1...PM with coefficients x1...xM, you want to know if there is a setting of x1...xM such that x1+x2+...+xM=1, x1>=0, x2>=0, ..., xM>=0 and x1*P1+x2*P2+...+xM*PM=Q (this last equation is actually N real equations, one per dimension). This can be seen as a linear-programming problem, except you don't have a function to maximize. Anyway, you can use whatever algorithm a linear-programming solver uses to get an initial vertex in the set, or simply give a linear-programming solver an arbitrary function (like 0).
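That LP-feasibility idea can be sketched with SciPy's linprog (the solver is my assumption; any LP phase-1 routine would do), using exactly the arbitrary zero objective mentioned above:

```python
import numpy as np
from scipy.optimize import linprog

def is_convex_combination(P, Q):
    """P: (M, N) array of points, Q: (N,) query point.
    True iff there exist x_i >= 0 with sum x_i = 1 and sum x_i*P_i = Q."""
    M = P.shape[0]
    A_eq = np.vstack([P.T, np.ones((1, M))])   # N equations plus the sum-to-one row
    b_eq = np.concatenate([Q, [1.0]])
    res = linprog(c=np.zeros(M),               # zero objective: feasibility only
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * M, method="highs")
    return res.success
```

When it succeeds, `res.x` gives one valid set of coefficients, though a vertex solution from an LP tends to be sparse rather than balanced.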
This looks to me like a case for linear programming. You only need the first phase of the simplex algorithm, the part that concerns itself with finding a "basic feasible solution". There is, to my knowledge, no guarantee that the algorithm will finish fast enough.
All I can offer for now are two keywords for Wikipedia: the Kirkpatrick-Seidel algorithm and Chan's algorithm for computing the convex hull. If I am not completely asleep today, your first question can be answered reasonably fast. Here's a quick idea and a few questions:
Compute the extremal points {E1...Eh} of {P1,...,Pn,Q} (Q1: is this the same as computing the convex hull?)
Clearly, if Q is not an extremal point, it lies within the convex hull of {P1,..,Pn}, and we are done.
(Q2: Can this be right? This would mean that the problem has a runtime of O(n log h), just like Chan's algorithm.)
The second part, writing Q as a "balanced" convex combination (I made that term up), should be suitable for a least squares algorithm.
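For that "balanced" part: among all feasible coefficient vectors, pick the one closest to uniform (1/M each) in the least-squares sense. A sketch with SciPy's SLSQP solver (the solver choice and formulation are mine, not from the thread):

```python
import numpy as np
from scipy.optimize import minimize

def balanced_coefficients(P, Q):
    """P: (M, N) array of points, Q: (N,) query point.
    Returns the coefficient vector closest to uniform, or None if infeasible."""
    M = P.shape[0]
    x0 = np.full(M, 1.0 / M)                   # start from the uniform weights
    constraints = [
        {"type": "eq", "fun": lambda x: P.T @ x - Q},    # reproduce Q exactly
        {"type": "eq", "fun": lambda x: x.sum() - 1.0},  # weights sum to one
    ]
    res = minimize(lambda x: np.sum((x - 1.0 / M) ** 2), x0,
                   constraints=constraints, bounds=[(0, 1)] * M,
                   method="SLSQP")
    return res.x if res.success else None
```

For the centroid of a symmetric point set this returns the uniform weights themselves; in general it spreads the weight as evenly as the geometry allows.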
Out of curiosity, what was the original 3D problem?
Kirkpatrick-Seidel is for the plane -- very special case -- and Chan (as I understand it) is 2D or 3D. My dimensions are much higher.
Computing the whole convex hull seems overkill, like solving a linear system [A]x = b by finding the whole inverse of matrix [A] and applying it to b: for a single solution, Gaussian elimination is much faster. Finding the extremal points {E1...Eh} of {P1,...,Pn,Q} is computing the vertices of the convex hull: either the same thing, or a bit less, depending on one's format for specifying the hull. (Does one want the edges, faces, hyperfaces, ..., also?)
I just want to know (constructively) if Q is inside.
Agreed, least squares is good for balance.
My original 3D problem is finding a stable and easily-tied set of strings for a tensegrity with given rods.