
I'm trying to use LinearSolve to solve the linear system m.x == b. For some reason, I get the error message:

Result for LinearSolve of badly conditioned matrix may contain significant numerical errors.

My code works perfectly when m and b are small (for example, when m is 128x128 and b is 128x1). However, when these matrices grow (say, to 256x256 and 256x1), I get the error.

Here is matrix b (256x1):

txt for b

Here is matrix m (256x256). Matrix m is sparse and is the result of subtracting another matrix from the identity matrix:

txt for m

What is going on? I have no idea how to deal with it.

I really appreciate any help you can provide.

  • By the way, only half of the matrix's entries are zeros. That is far too few to profit from sparse-matrix arithmetic. – Commented Aug 24, 2024 at 15:45
  • Questions should be self-contained. Links can rot, and then the question becomes unintelligible. – Commented Aug 24, 2024 at 15:56

3 Answers


The largest eigenvalue of your matrix has magnitude 2.0 and the smallest has magnitude 1.11979*10^-9. Hence the condition number is 2.0/(1.11979*10^-9) ≈ 1.78605*10^9, so your matrix is quite ill-conditioned.

That can have various reasons. For example, you might have used a suboptimal way to encode your linear system. Or maybe your problem is inherently ill-conditioned; I cannot tell, because I do not have the background knowledge.
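For reference, here is one way to compute that condition number directly. This is a minimal sketch, assuming m is the machine-precision 256x256 matrix from the question:

sv = SingularValueList[m, Tolerance -> 0]; (* all singular values, including the tiny ones *)
cond = Max[sv]/Min[sv]
(* a condition number near 10^9 means roughly Log10[cond] ≈ 9 of the ~16
   machine digits can be lost, which is what LinearSolve::luc warns about *)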

  • I do not know if that helps, but if I write the system of equations that results from m.x == b, extract the coefficients using CoefficientArrays[], and then use LinearSolve[], I get the correct answer without error messages. However, the method of creating the system of equations is quite slow, and I am trying to avoid it. Nevertheless, the system can be solved, so I do not understand why the matrix version cannot. – Commented Aug 24, 2024 at 17:33
  • @PedroHenrique "I do not understand why the matrix version cannot" -- What makes you think it did not? – Commented Aug 24, 2024 at 19:55
  • Are the two results different? If so, that also would indicate bad conditioning. – Commented Aug 24, 2024 at 22:00
  • Ill-conditioning has no effect if you use exact arithmetic (but for larger problems, exact arithmetic will be too expensive). Ill-conditioning does not mean that the matrix equation is not solvable. It just means that floating-point errors are likely to accumulate during the computations. This is why Mathematica warns you: the solution might have an error that is acceptable to your application, or not. You have to check the quality of the solution yourself. – Commented Aug 25, 2024 at 13:07
  • Also, it is not uncommon for the condition number to increase with the size of the matrix, so this explains why Mathematica complained only about the larger problems. – Commented Aug 25, 2024 at 16:21
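The "check the quality of the solution yourself" advice above can be made concrete. One sketch, assuming m and b are the machine-precision arrays from the question, is to re-solve at higher precision and compare:

solMachine = Quiet[LinearSolve[m, b], LinearSolve::luc];
solHigh = LinearSolve[SetPrecision[m, 30], SetPrecision[b, 30]];
Max[Abs[solMachine - solHigh]] (* estimates the actual forward error *)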

Do you know about the "PseudoInverse", also called the "Moore–Penrose inverse"? It is based on the "SingularValueDecomposition". If you do not know these terms, look them up.

For your example:

res = PseudoInverse[m] . b; 

Now, to check how accurate this is:

Max[Abs[m . res - b]]
(* 1.97774269073*10^-7 *)

Is this accurate enough for your application?
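If it is not, note that PseudoInverse also accepts a Tolerance option, which drops singular values below the given threshold and thereby regularizes a nearly singular system. A sketch; the threshold 10^-6 here is an illustrative choice, not a recommendation:

res2 = PseudoInverse[m, Tolerance -> 10^-6] . b; (* discards singular values below 10^-6 *)
Max[Abs[m . res2 - b]]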

  • Even on an ill-conditioned problem the residual error Max[Abs[m . res - b]] is usually small: (* exact answer is all ones *) hilb = N@HilbertMatrix[10]; b = Total /@ hilb; res = LinearSolve[hilb, b]; {Max[Abs[hilb . res - b]], Max[Abs[res - 1]]} --> {4.44089*10^-16, 0.000397359}. (Condition number is 3.53541*10^13.) A residual on the order of $10^{-7}$ multiplied by a condition number around $10^{10}$ makes a large-looking bound on the error between res and the exact solution, but it should be judged in comparison with the magnitude of the solution. – Commented Aug 25, 2024 at 19:35

I posed a question in a comment. I do not understand the responding comment. Let me try again.

I asked "Are the two results different?" This was in reference to the original post, where presumably (i) LinearSolve[m, b] was used and gave a warning message, and then (ii) variables were created and the problem was turned into a system of equations, which in turn was converted back to a matrix and vector using CoefficientArrays, after which LinearSolve was used on the new system. I am inferring all this; a well-written post would have such details included.

Here is what happens when I perform these operations.

soln1 = LinearSolve[m, b];
(* LinearSolve::luc: Result for LinearSolve of badly conditioned matrix {{1.,-0.101414,0.,-0.113779,0.,-0.092337,0.,-0.0682501,0.,-0.047823,<<246>>},{-0.109844,1.,-0.122769,0.,-0.0956708,0.,-0.0656789,0.,-0.0417028,0.,<<246>>},<<8>>,<<246>>} may contain significant numerical errors. *)
vars = Array[x, Length[m[[1]]]];
neweqns = m . vars - b;
{b2, m2} = CoefficientArrays[neweqns, vars];
soln2 = LinearSolve[m2, -b2];

No warning message this time. That's a bit of a mystery to me. Anyway, let's compare results.

Max[Abs[soln2 - soln1]]
(* Out[43]= 8.9587*10^-8 *)

So let's break down what we are seeing. First is the claim: "When I increase the size of the matrix it does not solve because of 'bad conditioning'." That's not what I am getting. Specifically, I get a result in soln1 accompanied by a warning message. That is very different from "does not solve". Second, we now have two results that differ by around 10^(-7). This has also been shown or suggested in other responses, and it is quite reasonable: if machine doubles have 16-17 digits of precision, and the condition number is estimated to be in the ballpark of 10^10, then we expect results accurate to seven or so places. The condition number of that matrix, coupled with this pair of results, strongly suggests this is the best you can do.

Also, let me draw attention to the response by @HenrikSchumacher. With one exception, the singular values of this matrix are all between 0.7 and 2 (note: use singular values instead of eigenvalues if the matrix is not real-symmetric or Hermitian). This means there is, to some approximation, a one-dimensional null space. Which in turn means you need to be fairly sure your right-hand-side vector b is, again to reasonable approximation, in the complement of that null space. From these results and some further experiments (not shown) with a randomly generated right-hand side, it seems that it might be.
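To make that last check concrete: the left singular vector belonging to the smallest singular value spans the troublesome direction, and the size of b's component along it tells you how strongly the near-null space is excited. A sketch, assuming m and b are the arrays from the question:

{u, s, v} = SingularValueDecomposition[m];
umin = u[[All, -1]]; (* left singular vector for the smallest singular value *)
Abs[umin . b]/Norm[b]
(* a small ratio means b puts little weight on the ill-conditioned direction *)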

  • As for the mystery, it may be due to sparse arrays, which CoefficientArrays returns: no message for me when I try LinearSolve[SparseArray@m, b]. (Caveat: I used my own ill-conditioned m, since I didn't want to download the files and Import didn't work on the OP's links.) – Commented Aug 25, 2024 at 18:52
  • Thanks @MichaelE2 for the explanation. I did download the matrix and vector and that seemed safe (he said). – Commented Aug 25, 2024 at 18:58
