
## Recommended Posts

Hello, I am attempting to numerically "quantify" the concept of performing some fraction of an affine transformation. For example, take a simple 2D rotation. In homogeneous coordinates (I'm using MATLAB notation):

T = [cos(pi/2) -sin(pi/2) 0; sin(pi/2) cos(pi/2) 0; 0 0 1];

Now suppose I would like to "break up" this transformation into two equivalent operations. In this simple case it is quite easy: we know that T rotates by pi/2, so we can simply define

U = [cos(pi/4) -sin(pi/4) 0; sin(pi/4) cos(pi/4) 0; 0 0 1];

Now U rotates by pi/4, and U*U performs the full T.

How about the more generic case? Suppose we have an affine transformation

T = [a b c; d e f; g h 1];

This could represent any sort of operation, not necessarily something simple like a rotation, so there is no way (just by looking at it) to infer a "fraction" of the transformation. (In my case T represents a warping of the image plane performed during image rectification, and I need to perform this rectification to different "degrees", ranging from none to full.) So how can I derive this operation rigorously? I have two ideas at the moment:

1) Nth root of T: We diagonalize T using D = inv(P) * T * P, then compute the Nth root of T using U = P * (D^(1/N)) * inv(P), so that U*U*...*U = T. My only concern is that most of the time T has complex eigenvalues, so I end up with U being complex; just because U^N = T is real doesn't mean that U has to be real. In that case I just defined W = abs(U) and applied W to the image. I tried this out on images and it definitely seems to work, but I am not sure it is mathematically sound.

2) Some sort of pseudo-linear interpolation: Not sure if this one makes any sense at all, but the results look convincing. With I the 3x3 identity, define

U = (1 - k/N)*I + (k/N)*T

where k/N is the "fraction" of the transformation to perform; e.g. to perform "one third" of the transformation we would use U = (1 - 1/3)*I + (1/3)*T.

So, does any of this make sense? And why are both methods giving adequate but (only slightly, which is annoying) different results? Sorry about all the "double quotes"; I hope I was clear with my English.
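[Editor's note: both ideas can be sketched in a few lines. The thread uses MATLAB notation; this is an equivalent NumPy version, with the rotation-by-pi/2 example standing in for a generic T.]

```python
import numpy as np

# Example transform: rotation by pi/2 in homogeneous coordinates.
theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Method 1: N-th root via diagonalization, T = P D inv(P).
N = 2
w, P = np.linalg.eig(T)                    # eigenvalues may be complex
U = P @ np.diag(w ** (1.0 / N)) @ np.linalg.inv(P)

# Method 2: pseudo-linear interpolation between I and T, fraction k/N.
k = 1
V = (1 - k / N) * np.eye(3) + (k / N) * T
```

For this rotation, U composed with itself recovers T, and its imaginary parts are only round-off; but U and V are genuinely different matrices (cos(pi/4) ≈ 0.707 versus the 0.5 that interpolation produces), which is exactly why the two methods give slightly different results.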

##### Share on other sites
I think (1) is the right approach. You should be able to extend your results to some matrices that are not diagonalizable by studying this Wikipedia article. You can then compute the n-th root of M as exp(1/n*log(M)).

I hope that helps.
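[Editor's note: the exp/log recipe above maps directly onto SciPy, which also provides a convenience wrapper; a minimal sketch.]

```python
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

# Rotation by pi/2 in homogeneous coordinates.
theta = np.pi / 2
T = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

n = 4
U = expm(logm(T) / n)                      # n-th root: exp(1/n * log(M))
U2 = fractional_matrix_power(T, 1.0 / n)   # equivalent convenience call
```

Here logm returns the principal logarithm (the rotation generator), so U is the rotation by pi/8 and applying it n times reproduces T.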

##### Share on other sites
Quote:
 Original post by alvaro: You can then compute the n-th root of M as exp(1/n*log(M)).

Although the exp(M) Taylor series converges for all square matrices M, the log(M) Taylor series does not.
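[Editor's note: the convergence caveat is easy to see numerically. The Taylor series log(M) = sum_k (-1)^(k+1) (M-I)^k / k converges only when the spectral radius of M - I is below 1; for a rotation by pi/2 that radius is |i - 1| = sqrt(2), so the partial sums blow up even though the principal logarithm (as computed by scipy.linalg.logm) exists. A sketch:]

```python
import numpy as np
from scipy.linalg import logm

def log_taylor(M, terms=50):
    """Partial sum of log(M) = sum_{k>=1} (-1)^(k+1) (M - I)^k / k."""
    X = M - np.eye(M.shape[0])
    S = np.zeros_like(X)
    P = np.eye(M.shape[0])
    for k in range(1, terms + 1):
        P = P @ X
        S = S + ((-1) ** (k + 1)) * P / k
    return S

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

small = rot(0.1)        # spectral radius of (M - I) is well below 1
large = rot(np.pi / 2)  # spectral radius of (M - I) is sqrt(2) > 1
```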

##### Share on other sites
Great, thanks for the replies!

How about the fact that I get complex matrices as my results?

Do I simply use the modulus of each element? I only did it because it's the only thing I could think of. Not sure if it makes any sense.

##### Share on other sites
Quote:
 Original post by phalaris: Do I simply use the modulus of each element? I only did it because it's the only thing I could think of. Not sure if it makes any sense.

I don't think it does. Think of the 1-dimensional flip f(x) = -x. You can try to do it in two steps using complex numbers: g(x) = i*x. But there is no way you'll get that with real numbers, and taking absolute values won't help you either.

Now that I think about it, depending on exactly what you are trying to achieve, "performing some fraction of an affine transformation" could just be satisfied by interpolating...

Perhaps you could explain in some detail what your problem really is.
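[Editor's note: the 1-D flip example above takes two lines to verify. The complex "half flip" g(x) = i*x composes correctly, but replacing it by its modulus destroys the property.]

```python
import numpy as np

g = np.sqrt(-1 + 0j)   # "half" of f(x) = -x is multiplication by i
twice = g * g          # composing the half-step twice recovers -1
h = abs(g)             # taking the modulus gives 1.0 ...
# ... and h*h = 1, which is not the flip at all.
```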

##### Share on other sites
Quote:
Original post by alvaro
Quote:
 Original post by Dave EberlyAlthough the exp(M) Taylor series converges for all square matrices M, the log(M) Taylor series does not.

Quote:
 Original post by alvaroI think (1) is the right approach. You should be able to extend your results to some matrices that are not diagonalizable by studying this Wikipedia article. You can then compute the n-th root of M as exp(1/n*log(M)).

The "some" that this applies to invariably will not include the ones you want in computer graphics. Even something that appears "simple", such as sqrt(M), does not work for most of the matrices you encounter in computer graphics. (Been there, done that.)
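[Editor's note: a reflection is the standard counterexample. A mirror has a negative eigenvalue, so it admits no real square root; the principal square root (here via scipy.linalg.sqrtm) is necessarily complex, yet still composes back to the original.]

```python
import numpy as np
from scipy.linalg import sqrtm

M = np.diag([-1.0, 1.0])  # reflection across the y-axis
S = sqrtm(M)              # principal square root: complex-valued
```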

##### Share on other sites
So I'm not entirely sure how useful it would be, but you can always perform a singular value decomposition on the matrix.

Decompose the matrix A into U * E * V^T, where U and V are orthogonal (U * U^T = I, V * V^T = I). (For complex matrices, use the conjugate transpose instead of the transpose.)

The matrix E will be real and diagonal and the diagonal elements will be positive or 0, regardless of the matrix A chosen.
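[Editor's note: a minimal sketch of the decomposition with NumPy, on an arbitrary real matrix. One caveat: taking roots of the singular values alone does not give a root of A, because the U and V^T factors do not cancel between repeated applications.]

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.5, 0.0],
              [0.0, 0.0, 1.0]])
U, s, Vt = np.linalg.svd(A)   # A = U @ diag(s) @ Vt
```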

##### Share on other sites
Of course.

I am working on image rectification of stereo pairs. The theory and algorithm are described here:
http://www.maths.tcd.ie/~volkovm/Papers/A%20Compact%20Algorithm%20for%20Rectification%20of%20Stereo%20Pairs.pdf

The end result of the algorithm is an affine transformation describing a warping of the image plane. Applying the obtained transformation performs the warp and rectifies the image.

My wish is to perform this transformation to different degrees, for the purpose of analysis and evaluation.

That's it in a nutshell.

Here is an example of two typical transformation matrices:

TL =

1.1068 -0.0299 -32.6653
0.0973 0.9996 -35.2820
0.0003 0 0.8931

TR =

1.1351 -0.0142 -48.2197
0.0946 0.9985 -31.6560
0.0003 -0.0000 0.8669
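[Editor's note: method 1 can be applied to a matrix like TL directly with SciPy's fractional_matrix_power (the library choice is an assumption; the thread doesn't name one). Stepping the exponent from 0 to 1 gives exactly the "animation of the transformation taking place" described below; TL has no negative real eigenvalues, so any imaginary parts in the intermediate warps are round-off.]

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

TL = np.array([[1.1068, -0.0299, -32.6653],
               [0.0973,  0.9996, -35.2820],
               [0.0003,  0.0,      0.8931]])

N = 10
# W_k = TL^(k/N) approximates "k/N of the transformation".
steps = [fractional_matrix_power(TL, k / N) for k in range(N + 1)]
half = steps[N // 2]   # "half" the warp; applied twice it gives TL
```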

Again, having run more tests I can say that the results do look convincing. Using method 1 with large N, I can almost see an animation of the transformation taking place.

But I do see your point about the "oscillatory" behaviour inherent in performing something like f(x) = -x in two steps.

Thanks again for the insightful feedback,
