
I have the following overdetermined linear equation system:

$$Ax=b$$

where $A$ is an $n \times k$ matrix, $x$ is $k \times 1$, and $b$ is $n \times 1$, with $n > k$.

The question is how to solve for $x$ and check that the system is consistent, in the sense that when we plug the value of $x$ into the linear system above, every equation is satisfied.

I can separate out $k$ of the linear equations, solve for $x_1$ through $x_k$, and then substitute into the remaining equations to check for consistency.

I'm afraid that this method can be numerically unstable; I would like to implement this on a computer, so I would prefer a solution that is robust. Consider one pitfall of the approach above:

$$A=\begin{bmatrix} 10 & 10 \\ 0 & 0 \\0 & 10 \end{bmatrix}$$

Note that if you separate out rows $1$ and $2$ and try to solve, you may not even be able to: after the zero coefficients are multiplied in, equation $2$ contains no unknown terms at all!
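To see the pitfall numerically, here is a minimal NumPy sketch (the rank check is just one convenient way to detect the failure):

```python
import numpy as np

# The pitfall matrix from the question: rows 1-2 alone form a singular system.
A = np.array([[10.0, 10.0],
              [ 0.0,  0.0],
              [ 0.0, 10.0]])

A12 = A[:2, :]                       # naive choice: first two rows
print(np.linalg.matrix_rank(A12))    # 1 -- singular, cannot be solved uniquely

A13 = A[[0, 2], :]                   # swapping in row 3 fixes it
print(np.linalg.matrix_rank(A13))    # 2 -- invertible
```

This is exactly why Gaussian elimination routines swap rows (partial pivoting), as one of the comments below points out.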

Is there another method?

  • "Consistency" means that the system has a solution. What you are asking is not how to check for consistency, but rather how to check whether a particular $x$ is a solution. What's wrong with plugging it in? It's pretty straightforward, and pretty quick. Commented Dec 15, 2010 at 6:20
  • Why do you not like the approach that you mentioned in your last sentence? Commented Dec 15, 2010 at 6:20
  • @Arturo, I'm afraid that it can be numerically unstable; I would like to implement this on a computer, so I would prefer a solution that is robust. Commented Dec 15, 2010 at 6:54
  • @J.M., see the updated question. Commented Dec 15, 2010 at 7:20
  • You can still swap rows. Gaussian elimination routines generally swap rows for stability purposes anyway. Look up "partial pivoting". Commented Dec 15, 2010 at 9:20

3 Answers


Updated according to comments.

If you are worried about numerical stability, do a QR decomposition of the matrix $A$. Then $A=QR$, where $Q$ is orthogonal and $R$ is upper triangular. You then need to check whether $x$ satisfies the equation

$$Rx=Q^Tb$$

Now, since $R$ is upper triangular and $n>k$, the last $n-k$ rows of $R$ are zero. If $Ax=b$ holds, it follows that the last $n-k$ elements of $Q^Tb$ must be zero as well. If they are not, then the system is inconsistent.

Furthermore, since the system is overdetermined, a solution exists only if $b$ lies in the linear space spanned by the columns of $A$. So the real question is: how do we reliably check whether $b$ is in the column space of $A$?

Update 2

Since $n>k$, the $R$ matrix has the form

\begin{align*} R=\begin{bmatrix} R_1\\ 0 \end{bmatrix} \end{align*} where $R_1$ is a $k\times k$ upper triangular matrix. If a solution exists, then

\begin{align*} Q^Tb=\begin{bmatrix} b_1\\ 0 \end{bmatrix} \end{align*} where $b_1$ is a $k\times 1$ vector. The solution of our system is then $$x=R_1^{-1}b_1$$ (assuming $A$ has full column rank, so that $R_1$ is invertible).
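The recipe above can be sketched with NumPy; the helper name `qr_consistency_check` and the tolerance are illustrative choices, not part of the original answer:

```python
import numpy as np

def qr_consistency_check(A, b, tol=1e-10):
    """QR-based check: solve Ax = b if consistent, otherwise report failure."""
    n, k = A.shape
    Q, R = np.linalg.qr(A, mode="complete")   # Q is n x n, R is n x k
    c = Q.T @ b
    b1, tail = c[:k], c[k:]                   # split off the last n-k entries
    # consistent iff the tail of Q^T b vanishes (up to a relative tolerance)
    consistent = np.linalg.norm(tail) <= tol * max(1.0, np.linalg.norm(b))
    x = np.linalg.solve(R[:k, :], b1) if consistent else None
    return consistent, x

A = np.array([[10.0, 10.0], [0.0, 0.0], [0.0, 10.0]])
b = A @ np.array([1.0, 2.0])                  # guarantees a solution exists
ok, x = qr_consistency_check(A, b)
print(ok, x)                                  # True [1. 2.]

# Perturb b out of the column space: the tail of Q^T b becomes nonzero.
ok2, _ = qr_consistency_check(A, b + np.array([0.0, 1.0, 0.0]))
print(ok2)                                    # False
```

Note that `np.linalg.qr` with `mode="complete"` returns the full $n\times n$ orthogonal $Q$, which is what makes the last $n-k$ entries of $Q^Tb$ available for the check.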

  • mpiktas, that's for least squares. For instance: if you QR decompose the 3-by-2 example of Ngu (call the original matrix $\mathbf A$), and let $\mathbf b=(2\quad 3\quad 4)^T$ and $\mathbf x=\left(0\quad \frac1{10}\right)^T$, your equation doesn't work, even though the $\mathbf x$ I gave minimizes $\|\mathbf A\mathbf x-\mathbf b\|_{\infty}$. Commented Dec 15, 2010 at 9:03
  • @mpiktas, how do we compute the $x$ in this case? Commented Dec 15, 2010 at 9:16
  • Yes, you are right. You cannot even multiply $Q$ by $b$, since the dimensions do not match. The answer I think still lies in a decomposition of $A$, but it should be rephrased better. Commented Dec 15, 2010 at 9:16
  • OK, I fixed the answer. Actually there is a slight typo in the question: $b$ should be an $n\times 1$ vector, not $k\times 1$. Commented Dec 15, 2010 at 9:36
  • @Ngu Soon Hui. When we get to the matrix $R_1$ and vector $b_1$ we get a square linear system. Since the determinant of $R_1$ is nonzero, the solution always exists. If $Q^Tb$ is not of the form I mentioned, then the solution of the original system does not exist. Commented Dec 15, 2010 at 11:43

I am not sure why I cannot comment, but anyway: you can always reduce the augmented matrix $[A \mid b]$ to RREF (reduced row echelon form). The system is consistent exactly when no row of the form $[\,0 \cdots 0 \mid c\,]$ with $c \neq 0$ appears, and the nonzero rows then give $x$ directly.

  • Let's say there is no MATLAB involved? Commented Dec 15, 2010 at 7:43
  • Manually doing an RREF is not that difficult. How else do you invert a matrix? (assuming you do not know LU decomposition) Commented Dec 15, 2010 at 8:03
  • Ah, I get your point. But how does RREF help to find the $x$? Commented Dec 15, 2010 at 8:26
  • Perform a Gauss–Jordan elimination to solve for $x$. Commented Dec 15, 2010 at 16:26
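Since a comment asks how RREF actually finds $x$, here is a minimal self-contained sketch (the `rref` helper is illustrative; exact rational arithmetic via `Fraction` sidesteps floating-point issues for small hand-checked examples):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan with exact rational arithmetic."""
    m = [[Fraction(v) for v in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        # find a row at or below r with a nonzero entry in column c
        pr = next((i for i in range(r, nrows) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]              # row swap (pivoting)
        m[r] = [v / m[r][c] for v in m[r]]     # scale pivot to 1
        for i in range(nrows):
            if i != r and m[i][c] != 0:        # eliminate column c elsewhere
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

# Augmented matrix [A | b] from the question, with b = A @ (1, 2) = (30, 0, 20).
aug = [[10, 10, 30], [0, 0, 0], [0, 10, 20]]
m, pivots = rref(aug)
consistent = (len(aug[0]) - 1) not in pivots   # last column must not be a pivot
print(consistent)   # True
print(m[0], m[1])   # the nonzero rows read off x1 = 1, x2 = 2
```

The consistency criterion is the standard one: the system is solvable iff the right-hand-side column is not a pivot column of the RREF of $[A \mid b]$.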

Consider the related least squares problem: find $x$ minimizing $\|Ax-b\|_2$. In the (unlikely) scenario that the overdetermined system has a solution, we in fact have $\|Ax-b\|_2=0$.

The $x$ solving the least squares problem can be found via the normal equations (equivalently, the Moore–Penrose pseudoinverse, provided $A$ has full column rank):

$$x=(A^TA)^{-1}A^Tb$$

So, one algorithm would be as follows:

1) Solve $(A^TA)x=A^Tb$ via your favorite linear solver.

2) Check whether $Ax-b=0$ (up to a numerical tolerance).
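A minimal NumPy sketch of these two steps, using the example matrix from the question with a consistent right-hand side (the tolerance is an illustrative choice):

```python
import numpy as np

A = np.array([[10.0, 10.0], [0.0, 0.0], [0.0, 10.0]])
b = np.array([30.0, 0.0, 20.0])          # consistent: b = A @ [1, 2]

x = np.linalg.solve(A.T @ A, A.T @ b)    # step 1: normal equations
residual = np.linalg.norm(A @ x - b)     # step 2: residual test
print(x, residual <= 1e-10 * np.linalg.norm(b))   # [1. 2.] True
```

As the comments below note, forming $A^TA$ squares the condition number, so `np.linalg.lstsq` (QR/SVD-based) is the safer choice in practice.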

  • 1. If $\mathbf A$ is rank deficient, the normal equations method fails. 2. I've mentioned the Läuchli example here and in other places on why the normal equations aren't always a good idea. The QR and singular value decompositions are safer alternatives. Commented Dec 16, 2010 at 10:56
  • If you use a Krylov solver like conjugate gradient and the initial guess is not in the null space of $A$, then it should work even with rank-deficient $A$ (I think), though the point on stability is a good one. Commented Dec 16, 2010 at 11:15
  • Krylov methods are appropriate only for large sparse systems. For small dense problems, QR and SVD remain standard. Commented Dec 16, 2010 at 11:49
