Cyclic Tridiagonal Matrix

4 comments, last by superminion 12 years, 8 months ago
Hello! I have a system of linear equations A*x = d, where A is a cyclic tridiagonal matrix. So A looks like this: If matrix A didn't have the elements b1 and c5, I could just use the Thomas algorithm. I recently found out that there is a "cyclic Thomas algorithm" to solve cyclic tridiagonal systems. Unfortunately I couldn't find any information about this cyclic Thomas algorithm! Does anyone know how this algorithm works, or can you point me to some tutorials/PDFs/algorithm descriptions etc.? Thanks!
If you understand how Gaussian elimination works, the Thomas algorithm is simply what you get if you skip all of the operations that you know you don't have to do because of all the zeroes in the matrix. You can easily use the same procedure to get a slightly-less-elegant algorithm for the cyclic case. Give it a try and, if you don't succeed, I'll write some sample code for you.

[Edited by - alvaro on June 1, 2009 8:03:15 PM]
Hello alvaro,

yes, I have understood Gaussian elimination and I just read the whole wiki article about the Thomas algorithm to understand its derivation. You are right, the Thomas algorithm is just a modified Gaussian elimination. If I have understood the Thomas algorithm correctly, then it basically transforms the tridiagonal system:


into this (in this example n=5):

which can now easily be solved with back substitution.
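For concreteness, that forward-elimination/back-substitution can be sketched in code. A minimal sketch in Python/NumPy (my choice of language; the function name and argument layout are mine, not from the thread, and this version leaves the diagonal unnormalized rather than scaling it to 1 as the wiki's presentation does):

```python
import numpy as np

def thomas(sub, diag, sup, d):
    """Thomas algorithm for a tridiagonal system A x = d (no pivoting,
    so A should be well-behaved, e.g. diagonally dominant).
    sub[i] = A[i, i-1] (sub[0] unused), diag[i] = A[i, i],
    sup[i] = A[i, i+1] (sup[n-1] unused)."""
    n = len(d)
    b = np.array(diag, dtype=float)   # working copy of the diagonal
    dd = np.array(d, dtype=float)     # working copy of the right-hand side
    # forward elimination: zero out the subdiagonal
    for i in range(1, n):
        m = sub[i] / b[i - 1]
        b[i] -= m * sup[i - 1]
        dd[i] -= m * dd[i - 1]
    # back substitution
    x = np.empty(n)
    x[-1] = dd[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dd[i] - sup[i] * x[i + 1]) / b[i]
    return x
```

This is O(n) instead of the O(n^3) of full Gaussian elimination, precisely because all the skipped operations would have involved zeroes.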

But my tridiagonal matrix is cyclic:
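(The image from the original post is missing here; from the rest of the thread, the n = 5 cyclic pattern being described is presumably:)

```latex
A = \begin{pmatrix}
a_1 & c_1 & 0   & 0   & b_1 \\
b_2 & a_2 & c_2 & 0   & 0   \\
0   & b_3 & a_3 & c_3 & 0   \\
0   & 0   & b_4 & a_4 & c_4 \\
c_5 & 0   & 0   & b_5 & a_5
\end{pmatrix}
```

with b_1 and c_5 wrapped into the top-right and bottom-left corners.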

I think if I could eliminate c_5 I would be done, since the b_1 is no problem: I would just have to modify the last step of the back substitution where I calculate x_1. Since I would already know x_5, I could solve for x_1. I just don't know how I should handle c_5.

What really frustrates me: at the very end of the wiki article about the Thomas algorithm there is a section "Variants", where they show EXACTLY my cyclic tridiagonal system and point to the so-called "Sherman–Morrison formula". But this formula doesn't help me at all :(
I agree, the Sherman–Morrison formula is probably the best route.

In this case, I would set u = [b1, 0, ..., 0, c5]^T and v = [1, 0, ..., 0, 1]^T, so that A no longer has c5 or b1 (you have to modify other elements of A).

Then you can solve

(A + uv^T)x = d using the Sherman–Morrison formula. Rather than actually computing A^-1, you treat it as an operator on a vector w and compute A^-1*w using the Thomas algorithm. By my count you have to do two Thomas solves (one for A^-1*u and one for A^-1*d).
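This recipe can be sketched as follows (a minimal sketch in Python/NumPy, my language choice; `thomas` and `cyclic_thomas_sm` are my own names, and the indexing assumes b is the subdiagonal with b1 wrapped into the top-right corner and c the superdiagonal with c5 wrapped into the bottom-left corner):

```python
import numpy as np

def thomas(sub, diag, sup, d):
    """Standard Thomas solve for a tridiagonal system (no pivoting).
    sub[i] = A[i, i-1] (sub[0] unused), diag[i] = A[i, i],
    sup[i] = A[i, i+1] (sup[n-1] unused)."""
    n = len(d)
    b = np.array(diag, dtype=float)
    dd = np.array(d, dtype=float)
    for i in range(1, n):                 # forward elimination
        m = sub[i] / b[i - 1]
        b[i] -= m * sup[i - 1]
        dd[i] -= m * dd[i - 1]
    x = np.empty(n)                       # back substitution
    x[-1] = dd[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (dd[i] - sup[i] * x[i + 1]) / b[i]
    return x

def cyclic_thomas_sm(sub, diag, sup, corner_top, corner_bot, d):
    """Cyclic solve via Sherman-Morrison, with corner_top = A[0, n-1]
    (the b1 of the thread) and corner_bot = A[n-1, 0] (the c5)."""
    n = len(d)
    u = np.zeros(n)
    u[0], u[-1] = corner_top, corner_bot
    # With v = e1 + en, A' = A - u v^T is plain tridiagonal: the corner
    # entries cancel, and the first/last diagonal entries become
    # a1 - b1 and a5 - c5.
    diag2 = np.array(diag, dtype=float)
    diag2[0] -= corner_top
    diag2[-1] -= corner_bot
    y = thomas(sub, diag2, sup, d)        # y = A'^{-1} d
    z = thomas(sub, diag2, sup, u)        # z = A'^{-1} u
    # Sherman-Morrison: x = y - (v.y) / (1 + v.z) * z, where v.w = w[0] + w[-1]
    return y - ((y[0] + y[-1]) / (1.0 + z[0] + z[-1])) * z
```

One caveat with this particular choice of u and v: the modified matrix A' can be singular (or badly conditioned) even when A is fine, which is why some presentations introduce a free parameter into u instead of using b1 directly.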
I hadn't understood that section of the Wikipedia page either. Thanks gpu_fem for explaining how to apply the Sherman-Morrison formula in this case.

If you still want to modify the Thomas algorithm for this variant: every time you add row i multiplied by -b[i+1]/a[i] to row i+1, you also need to add row i multiplied by something to the last row, to get rid of the c5. This makes the c5 "move to the right" one step (the place where c5 was is now 0, but the entry to its right now contains a value). When you get to the antepenultimate row, you'll have to be careful to update b5 and a5 accordingly. Of course you'll have to modify d5 every time.
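That bookkeeping could look something like this (a sketch in Python/NumPy under the same sub/diag/sup convention as above; `f` and `last` are my own names for the extra state: `f[i]` is the last-column fill that row i inherits from the b1 corner, and `last` is the bottom row's entry that keeps "moving to the right"):

```python
import numpy as np

def cyclic_thomas_direct(sub, diag, sup, corner_top, corner_bot, d):
    """Thomas-style elimination for a cyclic tridiagonal system, n >= 3.
    sub[i] = A[i, i-1] (sub[0] unused), diag[i] = A[i, i],
    sup[i] = A[i, i+1] (sup[n-1] unused),
    corner_top = A[0, n-1] (the b1), corner_bot = A[n-1, 0] (the c5)."""
    n = len(d)
    a = np.array(sub, dtype=float)
    b = np.array(diag, dtype=float)
    c = np.array(sup, dtype=float)
    dd = np.array(d, dtype=float)
    # f[i]: fill-in that rows 0..n-3 accumulate in the last column,
    # seeded by the top-right corner element.
    f = np.zeros(n)
    f[0] = corner_top
    last = corner_bot          # bottom row's moving entry, starts at column 0
    for i in range(1, n - 1):
        # ordinary Thomas step on row i, also propagating last-column fill
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        dd[i] -= m * dd[i - 1]
        if i == n - 2:
            c[i] -= m * f[i - 1]      # fill merges into the superdiagonal
        else:
            f[i] -= m * f[i - 1]
        # chase the bottom row's entry one column to the right using row i-1
        m2 = last / b[i - 1]
        dd[-1] -= m2 * dd[i - 1]
        b[-1] -= m2 * f[i - 1]        # row i-1's last-column entry
        if i == n - 2:
            a[-1] -= m2 * c[i - 1]    # merges with the bottom subdiagonal
        else:
            last = -m2 * c[i - 1]
    # final ordinary elimination step on the bottom row
    m = a[-1] / b[-2]
    b[-1] -= m * c[-2]
    dd[-1] -= m * dd[-2]
    # back substitution, with the extra last-column terms
    x = np.empty(n)
    x[-1] = dd[-1] / b[-1]
    x[-2] = (dd[-2] - c[-2] * x[-1]) / b[-2]
    for i in range(n - 3, -1, -1):
        x[i] = (dd[i] - c[i] * x[i + 1] - f[i] * x[-1]) / b[i]
    return x
```

It is still O(n), but with roughly twice the work of a plain Thomas sweep, which is the "slightly-less-elegant" trade-off mentioned above.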

Does that help?
Check out the GNU Scientific Library (GSL).
Two functions there come to mind:
gsl_linalg_solve_symm_cyc_tridiag() (symmetric matrices only)
gsl_linalg_solve_cyc_tridiag() (general)
They are described here:
http://www.gnu.org/s...al-Systems.html
(Download, unpack the GSL and look at the file "linalg/tridiag.c")

The symmetric version includes this comment:

/* For description of method see [Engeln-Mullges + Uhlig, p. 96] */

The second, more general version looks like it is using Sherman-Morrison:

/* solve following system w/o the corner elements and then use
   Sherman-Morrison formula to compensate for them */

This code is GPL-licensed, but you can at least see how they implemented it.
Hope that helps someone out there.

This topic is closed to new replies.
