

Cross Product of Many Vectors


12 replies to this topic

#1 Geometrian   Crossbones+   -  Reputation: 1583


Posted 10 March 2013 - 11:50 AM

Hi,

 

The standard cross product determines a new vector perpendicular to two given vectors.

 

I'm looking for something that determines a new vector mostly perpendicular to a set of roughly coplanar vectors.

 

The handedness of the operation need not be well-defined. Optimal accuracy is secondary to speed.

 

Thanks,

-G


And a Unix user said rm -rf *.* and all was null and void...|There's no place like 127.0.0.1|The Application "Programmer" has unexpectedly quit. An error of type A.M. has occurred.


#2 TheChubu   Crossbones+   -  Reputation: 4583


Posted 10 March 2013 - 12:04 PM

If you know that the vectors are in the same plane, wouldn't just taking the cross product of two of them yield the same result?

If you have two vectors in the same plane and calculate their normal, then adding another vector that lies in the same plane leaves the normal unchanged. If you have a bunch of coplanar vectors, it means most of them are multiples of each other.


"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#3 EWClay   Members   -  Reputation: 659


Posted 10 March 2013 - 02:26 PM

I can think of an algorithm that will get you there. Maybe it could be improved.

Pick two random vectors and take their cross product. Repeat as many times as needed for the desired accuracy, storing the cross products as a set of vectors s.

Pick one of the vectors from s as a starting point n. Calculate:

n' = Normalise(Sum over s (s[i] * dot(n, s[i])))

Repeat the last step, substituting n' for n, until the result is stable.
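A minimal CPU-side sketch of this scheme in C, for illustration only (the vec3 type, the helper functions, and the sample/iteration parameters are my own assumptions; the post gives no implementation):

```c
#include <math.h>
#include <stdlib.h>

typedef struct { float x, y, z; } vec3;

static vec3 cross(vec3 a, vec3 b) {
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 normalize(vec3 a) {
    float len = sqrtf(dot(a, a));
    vec3 r = { a.x / len, a.y / len, a.z / len };
    return r;
}

/* Approximate normal of roughly coplanar vectors v[0..count-1].
   Phase 1: cross random pairs; the results cluster around the normal.
   Phase 2: iterate n' = normalize(sum_i s[i] * dot(n, s[i])), which is
   power iteration on sum_i s[i]*s[i]^T, so n converges to the dominant
   direction of the crosses regardless of their individual signs. */
vec3 approx_normal(const vec3 *v, int count, int samples, int iters) {
    vec3 *s = malloc(samples * sizeof *s);
    for (int k = 0; k < samples; ++k) {
        int i = rand() % count;
        int j = rand() % count;
        if (j == i) j = (j + 1) % count;   /* avoid v x v = 0 */
        s[k] = cross(v[i], v[j]);
    }
    vec3 n = normalize(s[0]);              /* starting point picked from s */
    for (int it = 0; it < iters; ++it) {
        vec3 acc = { 0.0f, 0.0f, 0.0f };
        for (int k = 0; k < samples; ++k) {
            float w = dot(n, s[k]);        /* negative w flips s[k] into line */
            acc.x += s[k].x * w;
            acc.y += s[k].y * w;
            acc.z += s[k].z * w;
        }
        n = normalize(acc);
    }
    free(s);
    return n;
}
```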

#4 Geometrian   Crossbones+   -  Reputation: 1583


Posted 10 March 2013 - 02:39 PM

If you know that the vectors are in the same plane

As I wrote, "roughly coplanar".

 

I can think of an algorithm that will get you there.
. . .

Hmmmm, this is happening on the GPU, so random numbers are kind of a pain. Also, I suspect the convergence won't be fast enough (e.g., for a set of maybe 12 vectors, it might just be faster to brute-force 144 cross products instead of hoping for 1/sqrt(n) convergence to get small enough).



#5 Bacterius   Crossbones+   -  Reputation: 9098


Posted 10 March 2013 - 02:48 PM

Do you have information on the distribution of those "roughly coplanar" vectors? How are they generated? I ask because if you do, you might be able to statistically evaluate the expected "average cross product".


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#6 Geometrian   Crossbones+   -  Reputation: 1583


Posted 10 March 2013 - 02:55 PM

Do you have information on the distribution of those "roughly coplanar" vectors? How are they generated?

Not really. The vectors are connectivity information from a GPU cloth simulation--i.e., vectors on the cloth's surface. Naturally, the surface deforms, so the vectors aren't in general pairwise orthogonal. The idea is to calculate a decent normal for lighting purposes.


And a Unix user said rm -rf *.* and all was null and void...|There's no place like 127.0.0.1|The Application "Programmer" has unexpectedly quit. An error of type A.M. has occurred.

#7 Bacterius   Crossbones+   -  Reputation: 9098


Posted 10 March 2013 - 03:07 PM

Not really. The vectors are connectivity information from a GPU cloth simulation--i.e., vectors on the cloth's surface. Naturally, the surface deforms, so the vectors aren't in general pairwise orthogonal. The idea is to calculate a decent normal for lighting purposes.

 

If the surface is well-behaved enough, you could select a few points on the cloth surface, interpolate the surface based on a spline curve or something, and analytically compute the normal. This should be quite stable after just a few control points, since presumably your cloth surface is smooth, on at least some level of detail.




#8 EWClay   Members   -  Reputation: 659


Posted 10 March 2013 - 03:18 PM

Actually I think you could avoid the random numbers and merge the two phases of my algorithm.

Just replace s[i] with ((n x v[i]) x v[i]), and normalise, where v is the initial set of vectors.

You still need a starting point and some number of iterations.
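Read literally, that substitution turns the update into n' = Normalise(Sum over i (w[i] * dot(n, w[i]))) with w[i] = (n x v[i]) x v[i], rebuilt from the raw vectors each iteration. A hedged sketch of that literal reading, reusing the hypothetical vec3 helpers from the earlier sketch:

```c
/* EWClay's merged variant, read literally: no random pairs and no
   precomputed cross products; each iteration rebuilds
   w[i] = (n x v[i]) x v[i] from the raw vectors and power-iterates.
   Near the true normal, w[i] is roughly -n * |v[i]|^2, so the fixed
   point of the iteration is the common normal direction. */
vec3 approx_normal_merged(const vec3 *v, int count, vec3 start, int iters) {
    vec3 n = normalize(start);             /* caller-supplied starting point */
    for (int it = 0; it < iters; ++it) {
        vec3 acc = { 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < count; ++i) {
            vec3 w = cross(cross(n, v[i]), v[i]);
            float t = dot(n, w);
            acc.x += w.x * t;
            acc.y += w.y * t;
            acc.z += w.z * t;
        }
        n = normalize(acc);
    }
    return n;
}
```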

#9 Brother Bob   Moderators   -  Reputation: 8454


Posted 10 March 2013 - 04:15 PM

I know you said that this will run on the GPU, but I sat down to find an analytical solution to this problem (if nothing else, I was just curious myself) under some definition of an optimal solution. Whether it's a feasible solution depends on what constraints you have on your solution and implementation. I came up with an analytical solution at least, so do what you want with it.

 

  • Let v_n be the n-th vector in your initial set of vectors.
  • Let V = [v_1, v_2, v_3 ... v_n] be the matrix whose columns are the vectors v_n.
  • Let p be the final vector perpendicular to all vectors v_n.

Now, define the problem as finding p such that p is as perpendicular as possible to all vectors v_n. The definition of an optimal solution here is that the sum of the squares of all projections is as small as possible. The trivial solution to this problem is p=0, so constrain p to be a unit vector. Thus, the problem is stated as:

  • minimize norm(V^T*p)^2 over p, subject to norm(p)^2 = 1.

Or equivalently:

  • minimize p^T*V*V^T*p over p, subject to p^T*p = 1.

This is a standard constrained quadratic optimization. Introduce a Lagrange multiplier L to turn the constrained optimization into an unconstrained one:

  • minimize p^T*V*V^T*p - L*(p^T*p - 1) over p.

Solve the optimization by finding where the partial derivative of the function to be minimized, with respect to the variable being optimized, is equal to zero:

  • V*V^T*p_opt - L*p_opt = 0
  • V*V^T*p_opt = L*p_opt

This is a standard eigenvector problem. The optimal solution, p_opt, is the eigenvector of V*V^T with the minimum eigenvalue.

 

The product V*V^T is a 3x3 matrix, assuming your initial vectors v_n are 3-dimensional, so you "only" need to find the eigenvector for the smallest eigenvalue of a quite small (relatively speaking) matrix. Furthermore, the product V*V^T is fairly trivial to compute: it is simply V*V^T = sum(v_n*v_n^T) over all n; that is, the sum of the outer products of all the vectors.

 

Keep in mind that the optimal solution is not unique: if p_opt is a solution, then its negation -p_opt is an equally optimal solution. This is not a problem with my solution, but with your problem. Remember that the cross product is not commutative, so AxB != BxA; in fact, AxB = -BxA. The operands are order-sensitive, but your set of vectors is inherently unordered. Unless you have additional constraints that indicate which direction the final perpendicular vector should point in, you have to deal with this sign ambiguity.
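For concreteness, here is a sketch of the whole recipe in C, reusing the hypothetical vec3 helpers from the earlier sketches. The eigensolver is my own choice, not part of the derivation: since M = V*V^T is symmetric positive semi-definite, trace(M) bounds every eigenvalue from above, so plain power iteration on the shifted matrix trace(M)*I - M converges to the eigenvector of M's smallest eigenvalue:

```c
/* Brother Bob's analytic solution: the normal is the eigenvector of
   M = sum v_n v_n^T with the smallest eigenvalue.  M is 3x3 symmetric
   PSD, so power-iterate on (trace(M)*I - M): the smallest eigenvalue
   of M becomes the largest eigenvalue of the shifted matrix. */
vec3 eigen_normal(const vec3 *v, int count, int iters) {
    float m[3][3] = {{0}};
    for (int i = 0; i < count; ++i) {          /* M = sum of outer products */
        float e[3] = { v[i].x, v[i].y, v[i].z };
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                m[r][c] += e[r] * e[c];
    }
    float t = m[0][0] + m[1][1] + m[2][2];     /* trace(M) >= max eigenvalue */
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            m[r][c] = (r == c ? t : 0.0f) - m[r][c];   /* shift: t*I - M */
    vec3 p = { 1.0f, 1.0f, 1.0f };             /* arbitrary starting vector */
    for (int it = 0; it < iters; ++it) {
        vec3 q = {
            m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z,
            m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z,
            m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z,
        };
        p = normalize(q);
    }
    return p;   /* sign is arbitrary, as noted above */
}
```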



#10 Brother Bob   Moderators   -  Reputation: 8454


Posted 10 March 2013 - 04:35 PM

On second thought, the derivation of an optimal solution could have been much easier, so I'll throw in another point of view just for the sake of it.

 

Given the notation from my last post, treat the vectors v_n as samples of a three-dimensional random process. Its distribution is dictated by the covariance matrix C, which can be estimated as C = V*V^T. You want to find the direction of least energy of the distribution, which is the direction perpendicular to the plane where the random samples lie. As in my previous solution, that is the eigenvector of V*V^T with the smallest eigenvalue.




#11 quasar3d   Members   -  Reputation: 706


Posted 11 March 2013 - 06:21 AM

This is exactly what least squares solves.




#12 Geometrian   Crossbones+   -  Reputation: 1583


Posted 13 March 2013 - 08:16 PM

I know you said that this will run on the GPU, but I sat down to find an analytical solution to this problem (if nothing else, I was just curious myself) under some definition of an optimal solution. Whether it's a feasible solution depends on what constraints you have on your solution and implementation. I came up with an analytical solution at least, so do what you want with it.

This is most excellent. I attempted to implement it. Notice that V*V^T is real and symmetric (since it is a sum of real symmetric matrices), which simplifies the eigenproblem significantly.

Since this is being implemented in OpenCL, which as of version 1.2 doesn't support matrices directly, I had to implement much of the necessary matrix code manually. Even before I finished, though, I noticed it was slowing performance unacceptably; the slowdown wasn't severe, but it was still more than I can afford.

So, thanks for the derivation, but I don't think it's practical on the GPU yet.

If the surface is well-behaved enough, you could select a few points on the cloth surface, interpolate the surface based on a spline curve or something, and analytically compute the normal.

I thought of that first actually, but it's difficult in general since the cloth surface's simulation grid is not necessarily a simple 2D parametrization of the surface.



#13 Dirk Gregorius   Members   -  Reputation: 799


Posted 13 March 2013 - 09:52 PM

You seem to be looking for a best-fit plane through a set of nearly coplanar points. The Newell plane is what you are looking for:

 

This technique, first suggested by Newell (Sutherland et al., 1974), works for concave polygons and polygons containing collinear vertices, as well as for nonplanar polygons, e.g., polygons resulting from perturbed vertex locations...

 

Newell's method may seem inefficient for planar polygons, since it uses all the vertices of a polygon when, in fact, only three points are needed to define a plane. It should be noted, though, that for arbitrary planar polygons, these three points must be chosen very carefully:

  1. Three points uniquely define a plane if and only if they are not collinear; and

  2. if the three points are chosen around a "concave" corner, the normal of the resulting plane will point in the direction opposite to the expected one.

Checking for these properties would reduce the efficiency of the three-point method as well as make its coding rather inelegant. A good strategy may be that of using the three-point method for polygons that are already known to be planar and strictly convex (no collinear vertices), and using Newell's method for the rest.

 

 
Source: Filippo Tampieri. “Newell's Method for Computing the Plane Equation of a Polygon”. In Graphics Gems III, Academic Press, 1992, pp. 231–232.
 
 
Here is a good free reference:

http://cs.haifa.ac.il/~gordon/plane.pdf
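For reference, Newell's method itself is only a few lines: each polygon edge contributes one term per coordinate to the accumulated normal. A sketch in C, reusing the hypothetical vec3 helpers from the earlier sketches:

```c
/* Newell's method: best-fit plane normal of the (possibly nonplanar)
   polygon p[0..count-1], accumulated over every edge (i, i+1 mod count). */
vec3 newell_normal(const vec3 *p, int count) {
    vec3 n = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < count; ++i) {
        vec3 a = p[i];
        vec3 b = p[(i + 1) % count];
        n.x += (a.y - b.y) * (a.z + b.z);
        n.y += (a.z - b.z) * (a.x + b.x);
        n.z += (a.x - b.x) * (a.y + b.y);
    }
    return normalize(n);                   /* sign follows winding order */
}
```

Note that the normal's sign follows the vertex winding, which ties back to the sign ambiguity discussed above.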






