ronm Posted July 9, 2011

Hi there,

I need to compare the variance-covariance (VCV) matrices of two arbitrary random vectors (assume both are multivariate normal with zero location). I am aware of the various matrix norms and first thought of using one of them to compare the two matrices. However, a VCV matrix cannot be an arbitrary general matrix, so I was wondering whether there is a metric suited specifically to comparing VCV matrices, or some special treatment for this purpose. My goal is to properly capture the difference between two VCV matrices. Can somebody guide me?

Thanks,
Dave Eberly Posted July 9, 2011

A covariance matrix has only real-valued, nonnegative eigenvalues (the matrix is symmetric positive semidefinite). The usual comparison is to compute the eigenvalues and the matrix of eigenvectors of each, although what the comparison means for a specific application depends on that application.
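As a small sketch of the eigen-decomposition approach (the two matrices below are made up purely for illustration), NumPy's `eigh` is the right routine for symmetric matrices:

```python
import numpy as np

# Hypothetical example covariance matrices (any symmetric positive
# semidefinite matrices would do).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
B = np.array([[1.5, -0.3],
              [-0.3, 1.2]])

# eigh returns real eigenvalues in ascending order plus orthonormal
# eigenvectors (as columns), which plain eig does not guarantee.
evals_A, evecs_A = np.linalg.eigh(A)
evals_B, evecs_B = np.linalg.eigh(B)

# Covariance matrices are positive semidefinite, so all eigenvalues
# should be >= 0 (up to floating-point noise).
assert (evals_A >= 0).all() and (evals_B >= 0).all()

# One simple comparison: the gap between the sorted eigenvalue spectra.
spectral_gap = np.abs(evals_A - evals_B)
print(spectral_gap)
```

What you then do with the spectra and eigenvector bases (compare principal variances, compare principal axes via angles between eigenvectors, etc.) depends on the application, as Dave says.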
alvaro Posted July 9, 2011

If you think of your covariance matrices as describing multivariate normal distributions, you can compare the distributions themselves. For 1-dimensional distributions the most common tests are Kolmogorov-Smirnov and Cramér-von Mises, and I think their ideas generalize to multiple dimensions (replace the cumulative distribution function with F(t1,t2,...,tn) = P(x1<t1 & x2<t2 & ... & xn<tn)), but I don't know whether the computations can then be performed in a computationally reasonable manner.

What exactly do your matrices represent? What problem are you trying to tackle?
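For the 1-dimensional case alvaro mentions, both tests are available in SciPy; here is a sketch on synthetic samples (the sample sizes and distributions are made up for illustration, and this assumes you have samples rather than just the matrices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 1-D samples from two zero-mean normals with different spreads.
x = rng.normal(0.0, 1.0, size=1000)
y = rng.normal(0.0, 1.5, size=1000)

# Two-sample Kolmogorov-Smirnov test: the statistic is the largest gap
# between the two empirical CDFs; a small p-value suggests the samples
# come from different distributions.
ks_stat, ks_p = stats.ks_2samp(x, y)
print(ks_stat, ks_p)

# Two-sample Cramer-von Mises test (available in SciPy 1.7+), which
# integrates the squared gap between the empirical CDFs instead.
cvm = stats.cramervonmises_2samp(x, y)
print(cvm.statistic, cvm.pvalue)
```

As the post notes, the multi-dimensional generalization is the hard part; these routines only cover the 1-D case.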
Emergent Posted July 10, 2011

Similar to alvaro's suggestion, many people use the [url="http://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence"]Kullback-Leibler[/url] "distance" or "divergence" as a measure of the difference between probability distributions; it is the cost in bits per symbol of assuming the wrong probability distribution when you compress a stream of symbols. A formula for the KL divergence from one Gaussian to another is given [url="http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Kullback.E2.80.93Leibler_divergence"]here[/url].

One caveat: the KL divergence is not symmetric in its arguments and does not satisfy the triangle inequality, so it is not a true metric. Nevertheless, it is very popular in these sorts of situations.
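For the zero-location case in this thread, the Gaussian KL formula simplifies because the mean term drops out; a sketch in NumPy (the two matrices are illustrative placeholders):

```python
import numpy as np

def kl_gaussian_zero_mean(S0, S1):
    """KL divergence D(N(0, S0) || N(0, S1)) between two zero-mean
    multivariate normals, from the closed-form expression
        0.5 * (tr(S1^-1 S0) - k + ln(det S1 / det S0))
    where k is the dimension."""
    k = S0.shape[0]
    S1_inv = np.linalg.inv(S1)
    # slogdet is numerically safer than log(det(...)) for near-singular input.
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) - k + logdet1 - logdet0)

S0 = np.array([[2.0, 0.5],
               [0.5, 1.0]])
S1 = np.array([[1.5, -0.3],
               [-0.3, 1.2]])

print(kl_gaussian_zero_mean(S0, S1))  # note: not symmetric --
print(kl_gaussian_zero_mean(S1, S0))  # generally a different value
```

The asymmetry Emergent warns about is visible directly: swapping the arguments gives a different number. (If a symmetric quantity is wanted, people sometimes average the two directions, but that is a separate choice.)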
ronm Posted July 11, 2011

Thanks all for your replies. My goal is to quantify the difference between two given VCV matrices. Here I can treat those VCV matrices as belonging to two multivariate normal distributions (with the same location parameter), so I think the KL divergence metric should be okay for now. Let me discuss this with my colleagues.

One more question: that metric will essentially give some number (whatever it is). However, is there any criterion or general practice for deciding whether that number is big or small? How much should I tolerate?

I appreciate your suggestions. Thanks,
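There is no universal threshold (as the next reply notes), but one hedged way to get a feel for the scale of the number is to compute the KL divergence between a matrix and progressively scaled copies of itself; the matrix and scale factors below are made up for illustration:

```python
import numpy as np

def kl_zero_mean(S0, S1):
    # Closed-form KL divergence D(N(0,S0) || N(0,S1)) between
    # zero-mean multivariate normals.
    k = S0.shape[0]
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(np.linalg.inv(S1) @ S0) - k + logdet1 - logdet0)

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Identical matrices give 0, and the divergence grows as the
# matrices drift apart -- a crude calibration curve.
for c in (1.0, 1.1, 1.5, 2.0, 4.0):
    print(c, kl_zero_mean(S, c * S))
```

This only calibrates against one kind of perturbation (uniform scaling); what counts as "big" still depends on what the matrices represent.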
alvaro Posted July 11, 2011

Even if we knew what your distributions represent, it would be really hard to give you definite thresholds. But we don't even know that!