I know you said that this will run on the GPU, but I sat down to find an analytical solution to this problem anyway (if nothing else, I was curious myself), under some definition of an optimal solution. Whether it's a feasible solution depends on what constraints you have on your solution and implementation, but here is the analytical solution at least, so do what you want with it.

- Let **v**_{n} be the *n*:th vector in your initial set of vectors.
- Let **V** = [**v**_{1}, **v**_{2}, **v**_{3} ... **v**_{n}] be the matrix whose columns are the vectors **v**_{n}.
- Let **p** be the final vector, perpendicular to all vectors **v**_{n}.

Now, define the problem as finding **p** such that **p** is as perpendicular as possible to all vectors **v**_{n}. The definition of an optimal solution here is that the sum of the squares of all projections is as small as possible. The trivial solution to this problem is **p**=**0**, so constrain **p** to be a unit vector. Thus, the problem is stated as:

- minimize norm(**V**^{T}***p**)^{2} over **p**, subject to norm(**p**)^{2} = 1.

Or equivalently:

- minimize **p**^{T}***V*****V**^{T}***p** over **p**, subject to **p**^{T}***p** = 1.
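The equivalence of the two formulations is easy to check numerically. A quick NumPy sketch (the vectors here are made-up data, with the **v**_{n} taken as the columns of **V**):

```python
import numpy as np

# Made-up example: five 3-D vectors as the columns of V.
rng = np.random.default_rng(0)
V = rng.standard_normal((3, 5))

# A random unit vector p, satisfying the constraint p^T p = 1.
p = rng.standard_normal(3)
p /= np.linalg.norm(p)

# norm(V^T p)^2: the sum of squared projections of the v_n onto p.
obj1 = np.linalg.norm(V.T @ p) ** 2

# p^T (V V^T) p: the equivalent quadratic form.
obj2 = p @ (V @ V.T) @ p
```

Both expressions evaluate to the same number for any unit vector `p`.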

This is a standard constrained quadratic optimization. Introduce a Lagrange multiplier L to turn the constrained optimization into an unconstrained one:

- minimize **p**^{T}***V*****V**^{T}***p** - L*(**p**^{T}***p** - 1) over **p**.

Solve the optimization by setting the gradient of the function to be minimized, with respect to **p**, equal to zero (a common factor of 2 has been dropped):

**V*****V**^{T}***p**_{opt} - L***p**_{opt} = **0**
**V*****V**^{T}***p**_{opt} = L***p**_{opt}

This is a standard eigenvector problem. The optimal solution, **p**_{opt}, is the eigenvector of **V*****V**^{T} with the minimum eigenvalue.

The product **V*****V**^{T} is a 3x3 matrix, assuming your initial vectors **v**_{n} are 3-dimensional, so you "only" need to find the eigenvector for the smallest eigenvalue of a quite small (relatively speaking) matrix. Furthermore, the product **V*****V**^{T} is fairly trivial to compute; it is simply **V*****V**^{T} = sum(**v**_{n}***v**_{n}^{T}) over all *n*; that is, the sum of the outer products of all the vectors.
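Putting that together, here is a minimal NumPy sketch (the input vectors are made-up data for illustration):

```python
import numpy as np

# Made-up example input: each row is one vector v_n.
vs = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.1],
    [1.0, 1.0, 0.0],
])

# Build the 3x3 matrix M = sum_n v_n v_n^T (= V V^T with v_n as columns).
M = sum(np.outer(v, v) for v in vs)

# eigh handles symmetric matrices and returns eigenvalues in ascending
# order, so the first eigenvector belongs to the smallest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(M)
p_opt = eigvecs[:, 0]  # unit-norm by construction
```

`p_opt` is then the unit vector minimizing the sum of squared projections onto the `vs`.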

Keep in mind here that the optimal solution is not unique. If **p**_{opt} is a solution, then its negation -**p**_{opt} is an equally optimal solution. This is not a problem with my solution, but with your problem. Remember that the cross product is not commutative, so AxB != BxA; in fact, AxB = -BxA. The operands are order-sensitive, but your set of vectors is inherently unordered. Unless you have additional constraints that indicate which direction you want the final perpendicular vector to point in, you have to live with this ambiguity.
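If you do have a preferred half-space (say, from some reference direction such as a surface normal), the ambiguity is easy to resolve with a sign flip. A small sketch; `fix_sign` and `reference` are made-up names, and the choice of convention is yours:

```python
import numpy as np

def fix_sign(p, reference):
    """Flip p so that it points into the same half-space as reference."""
    return -p if np.dot(p, reference) < 0 else p
```

For example, `fix_sign(p_opt, np.array([0.0, 0.0, 1.0]))` picks the solution with a non-negative z-component.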