I have a linear system like: AX = B
Normally, the condition number is used to measure how stable the solution of such a system will be.
Since X is the only unknown, the condition number would normally be checked for A.
However, in this case A only defines the locations at which the observations in B are made.
So checking the condition number of A and reducing it over multiple observations does not benefit me much, because the values of B observed at those locations might still contribute significant errors to the estimated X.
Instead, if I check the condition number of B and reduce it over multiple observations (by selecting the A that reduces the condition number of B), I can estimate X more reliably.
But the issue is justifying the use of the condition number of B, which is not the matrix being inverted in the above linear system.
Without such a justification, it seems incorrect to evaluate the condition number of B instead of A.
Normally, the condition number is defined as a bound on the ratio of the relative error in X to the relative error in B: ||dX||/||X|| <= cond(A) * ||dB||/||B||.
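This bound can be checked numerically. Below is a minimal NumPy sketch (A and B are random placeholder data, not from my actual system): it solves A X = B, perturbs B slightly, and verifies that the relative error in X stays within cond(A) times the relative error in B.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder system A X = B (square, generic A for illustration only)
A = rng.standard_normal((5, 5))
B = rng.standard_normal(5)
X = np.linalg.solve(A, B)

# Perturb the observations B slightly and re-solve
dB = 1e-8 * rng.standard_normal(5)
X_pert = np.linalg.solve(A, B + dB)

rel_err_X = np.linalg.norm(X_pert - X) / np.linalg.norm(X)
rel_err_B = np.linalg.norm(dB) / np.linalg.norm(B)
kappa = np.linalg.cond(A)  # 2-norm condition number

# Classical perturbation bound: rel_err_X <= kappa * rel_err_B
print(rel_err_X <= kappa * rel_err_B)
```

This only demonstrates the standard bound driven by cond(A); it does not by itself justify minimizing the condition number of B, which is exactly what I am asking about.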
This shows that the errors in X and B are related, so there may be a way to justify minimizing the condition number of B instead of A in this system. I would appreciate any opinions or suggestions on this.