Complete orthogonal factorization always solvable?

Say I have an arbitrary matrix of any size and shape. Call it A. Now I QR decompose it into A = Q1*R1. Then I QR decompose R1^T into R1^T = Q2*R2 and transpose back: R1 = R2^T*Q2^T. Using that I can form a complete orthogonal factorization of A: A = Q1*R2^T*Q2^T. So far so good.

I want to find the minimum norm ("pseudo inverse") solution of Ax = b, where A is m x n, x is n x 1, and b is m x 1. So I have: Q1*R2^T*Q2^T*x = b. I can bring Q1 over to the right side easily enough. The problem I have is with R2^T. If it's full rank triangular, I can use forward substitution easily enough to solve. If it's diagonal, I can form the pseudo inverse of the diagonal matrix (where if an element is ~0 or ~infinity I just leave it as 0) and then bring it over to the right hand side of the equation. But what if it's a non-full-rank triangular matrix? Something like:

1 0 0
2 3 0
2 3 0
I've been trying to create an example where the above double QR decomposition would produce a rank deficient triangular matrix and would be unsolvable, but I'm not having much luck. So it might be impossible (which is good). Some examples:

0 0
0 0
0 0
The all-0 matrix produces an all-0 triangular matrix. I can technically treat it as a diagonal matrix and just set the right hand side of the equation to all 0s. Another example:

1 1 1
1 2 1
0 0 0
This produces a triangular matrix that looks like:

x 0 0
x x 0
0 0 0
Which is slightly harder, but all I have to do is drop the last column from Q1 and the all-0 row from the triangular matrix, and then drop the last column from the triangular matrix and the last row from Q2^T. But then it starts to feel like I'm special-casing things quite a bit.

So given the orthogonal factorization, what's the most practical (robust and cheap) way to extract the solution? What sort of guarantees do I have about the triangular matrix? Is this a case where I need to do pivoting with the QR decomposition, to be sure that the magnitudes of the diagonal elements of the triangular matrix are strictly decreasing?
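For concreteness, here is a minimal sketch of the double QR construction described above (assuming numpy is available; the test matrix is one of the examples from this post, and the variable names are just illustrative):

import numpy as np

# Double QR: A = Q1*R1, then R1^T = Q2*R2, so A = Q1 * R2^T * Q2^T.
# No column pivoting here, so the middle triangular factor is not
# guaranteed to be full rank.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [0.0, 0.0, 0.0]])    # rank-deficient example from above

Q1, R1 = np.linalg.qr(A)           # A = Q1 @ R1
Q2, R2 = np.linalg.qr(R1.T)        # R1^T = Q2 @ R2, i.e. R1 = R2^T @ Q2^T

print(np.allclose(A, Q1 @ R2.T @ Q2.T))   # True: the factorization holds
print(R2.T)                               # lower triangular middle factor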
Darwinbots - Artificial life simulation
Sounds to me like you are hunting for the Singular Value Decomposition of the matrix:

A = U D V^T

where U and V are orthogonal matrices and D is a diagonal matrix. This factorization exists for all matrices (if A has complex coefficients, then the decomposition is A = U D V^H, with U and V unitary).
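A rough sketch of how that route looks numerically (assuming numpy; the rank tolerance below is just one common convention):

import numpy as np

# Minimum-norm least-squares solution via the SVD: x = V * D^+ * U^T * b,
# where D^+ pseudo-inverts D by zeroing the negligible singular values.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [0.0, 0.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * (s[0] if s.size else 0.0)
s_pinv = np.zeros_like(s)
s_pinv[s > tol] = 1.0 / s[s > tol]
x = Vt.T @ (s_pinv * (U.T @ b))

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True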
Close this Gamedev account, I have outgrown Gamedev.
No, I know about the SVD. It's a type of complete orthogonal factorization, but it's expensive to compute. And you don't really need it if all you want is the minimum norm solution.

See, for example, this note in the LAPACK manual. It suggests that you can find the solution by doing (converting the notation):

A*P = Q1*R2^T*Q2^T

x = P * Q2 * R2^-T * Q1^T * b

That is, you end up inverting the triangular matrix. But I'm not sure that you actually need the explicit inverse; some forward/backward substitution should work just fine, too. Either way, you need to make sure that R2^T is actually invertible.
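Here is a rough sketch of that recipe, assuming scipy's column-pivoted QR (scipy.linalg.qr with pivoting=True) stands in for the LAPACK routine; the function name and the rank tolerance are just illustrative:

import numpy as np
from scipy.linalg import qr, solve_triangular

# First pass: column-pivoted QR, so the numerical rank shows up on R1's diagonal.
# Second pass: QR of the transposed leading r rows gives a square, invertible
# triangular middle factor, which a forward substitution can handle.
def min_norm_solve(A, b, rtol=1e-12):
    m, n = A.shape
    Q1, R1, perm = qr(A, pivoting=True)          # A[:, perm] = Q1 @ R1
    diag = np.abs(np.diag(R1))
    if diag.size == 0 or diag[0] == 0.0:
        return np.zeros(n)                       # A is exactly zero (or empty)
    r = int(np.sum(diag > rtol * diag[0]))       # numerical rank estimate
    # R1[:r, :].T = Q2 @ R2, so R1[:r, :] = R2^T @ Q2^T with R2 (r x r) invertible
    Q2, R2 = np.linalg.qr(R1[:r, :].T)
    c = Q1[:, :r].T @ b                          # project b onto the range of A
    y = solve_triangular(R2.T, c, lower=True)    # forward substitution
    z = Q2 @ y                                   # minimum-norm solution in permuted coordinates
    x = np.zeros(n)
    x[perm] = z                                  # undo the column permutation
    return x

# Sanity check against numpy's minimum-norm least-squares solver
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [0.0, 0.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(min_norm_solve(A, b), np.linalg.lstsq(A, b, rcond=None)[0]))   # True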

In general, the triangular matrix obtained from a complete orthogonal factorization isn't invertible. See, for example:

1 0   0 0   1 0
0 1 * 1 0 * 0 1


But I think that, because a QR factorization was done, the orthogonal matrices might be such that the triangular matrix will always be either invertible or diagonal.

I'm just not sure how to prove it to myself, or what sort of special cases I'd need to consider. I could look at the LAPACK source, but I have a hard time working my way through the logic of linear algebra in source code form, especially with the sort of packing together of matrices I think LAPACK does.
Darwinbots - Artificial life simulation
Quote:Original post by Numsgil
Say I have an arbitrary matrix of any size and shape.


In your discussions, you talk about the inverse of R2^T. I assume you are doing so only to work with examples to generate a rank-deficient A? If A is not square, then R2 is not square, so you have to be careful about using matrix-inverse notation, talking about eigenvalues, and so on.

Quote:
This produces a triangular matrix that looks like:
x 0 0
x x 0
0 0 0


Which is slightly harder but all I have to do is drop the last column from Q1 and the all-0 row from the triangular matrix. And then drop the last column from the triangular matrix and the last row from Q2^T.


I do not understand the "drop" comments. Your triangular matrix, call it L for lower-triangular, is rank-deficient. You are trying to solve L*x = c. The minimum norm solution x requires both |L*x-c| and |x| to be minimized. I do not believe dropping columns and rows will provide you the solution.

Consider an example where L is of the form
x 0 0
x x 0
x x 0

The least-squares formulation is L^T*L*x = L^T*c, which leads to a 2-by-2 system of linear equations in the first two components of x. The solution to this system is not the same as forward solving L*x = c for the first two components.
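A quick numerical illustration of that point (assuming numpy; the matrix is the rank-deficient example from the first post):

import numpy as np

# L has the shape  x 0 0 / x x 0 / x x 0, so L*x = c is generally
# inconsistent, and forward substitution on the first two rows is
# not the least-squares answer.
L = np.array([[1.0, 0.0, 0.0],
              [2.0, 3.0, 0.0],
              [2.0, 3.0, 0.0]])
c = np.array([1.0, 2.0, 5.0])

# Forward substitution using only the first two rows
x_fwd = np.zeros(3)
x_fwd[0] = c[0] / L[0, 0]
x_fwd[1] = (c[1] - L[1, 0] * x_fwd[0]) / L[1, 1]

# Minimum-norm least-squares solution (what we actually want)
x_ls = np.linalg.lstsq(L, c, rcond=None)[0]

print(x_fwd)   # [1. 0. 0.]
print(x_ls)    # approximately [1.  0.5  0.]: it balances the residual of rows 2 and 3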

Quote:Original post by Dave Eberly
Quote:Original post by Numsgil
Say I have an arbitrary matrix of any size and shape.


In your discussions, you talk about the inverse of R2^T. I assume you are doing so only to work with examples to generate a rank-deficient A? If A is not square, then R2 is not square, so you have to be careful about using matrix-inverse notation, talking about eigenvalues, and so on.


That's what the two-pass QR decomposition is about. You decompose A into Q*R, then QR-decompose the transpose of that R matrix again. The resulting triangular matrix is square for any size or shape of A.
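A quick shape check of that claim (assuming numpy's reduced QR):

import numpy as np

# The second-pass triangular factor is min(m, n) x min(m, n),
# whatever the shape of A.
for m, n in [(5, 3), (3, 5), (4, 4)]:
    A = np.random.rand(m, n)
    Q1, R1 = np.linalg.qr(A)      # R1 is min(m, n) x n
    Q2, R2 = np.linalg.qr(R1.T)   # R2 is min(m, n) x min(m, n)
    print(m, n, R2.shape)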

Quote:
Quote:
This produces a triangular matrix that looks like:
x 0 0
x x 0
0 0 0


Which is slightly harder but all I have to do is drop the last column from Q1 and the all-0 row from the triangular matrix. And then drop the last column from the triangular matrix and the last row from Q2^T.


I do not understand the "drop" comments. Your triangular matrix, call it L for lower-triangular, is rank-deficient. You are trying to solve L*x = c. The minimum norm solution x requires both |L*x-c| and |x| to be minimized. I do not believe dropping columns and rows will provide you the solution.

Consider an example where L is of the form
x 0 0
x x 0
x x 0

The least-squares formulation is L^T*L*x = L^T*c, which leads to a 2-by-2 system of linear equations in the first two components of x. The solution to this system is not the same as forward solving L*x = c for the first two components.


Again, the LAPACK manual says you can extract the minimum norm solution the way I describe, at least as far as the long form with the inversion. But I'm not sure that dropping columns like I suggest wouldn't just give me the basic solution instead, which isn't what I want. I have an actual book here in front of me from Bjorck, and I'm still combing through it, but the section on the complete orthogonal factorization is really just a single page and says the same thing as the LAPACK manual. I'm still not sure how to turn it into an actual implementation for the general case.
Darwinbots - Artificial life simulation

