I posed a question in a comment. I do not understand the responding comment. Let me try again.
I asked "Are the two results different?" This was in reference to the original post where presumably (i) LinearSolve[m,b] was used and gave a warning message, and then (ii) it seems variables were created, this was turned into a system of equations, which in turn was reverted to a matrix and vector using CoefficientArrays, after which LinearSolve was used on the new system. I am inferring all this; a well-written post would have such details included.
Here is what happens when I perform these operations.
soln1 = LinearSolve[m, b];

(* LinearSolve::luc: Result for LinearSolve of badly conditioned matrix {{1.,-0.101414,0.,-0.113779,0.,-0.092337,0.,-0.0682501,0.,-0.047823,<<246>>},{-0.109844,1.,-0.122769,0.,-0.0956708,0.,-0.0656789,0.,-0.0417028,0.,<<246>>},<<8>>,<<246>>} may contain significant numerical errors. *)

vars = Array[x, Length[m[[1]]]];
neweqns = m . vars - b;
{b2, m2} = CoefficientArrays[neweqns, vars];
soln2 = LinearSolve[m2, -b2];
No warning message this time. That's a bit of a mystery to me. Anyway, let's compare results.
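First, a quick sanity check of my own (not part of the original exchange) that the CoefficientArrays round trip really does reproduce the same system; m2 and b2 come back as SparseArray objects, hence the Normal.

(* the rebuilt matrix should match m, and b2 is the constant term of m.vars - b, i.e. -b *)
Max[Abs[Normal[m2] - m]]
Max[Abs[Normal[b2] + b]]

Both should come out as zero, or at worst at the level of machine epsilon. Now the comparison of the two solutions.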
Max[Abs[soln2 - soln1]] (* Out[43]= 8.9587*10^-8 *)
So let's break down what we are seeing. First is the claim: "When I increase the size of the matrix it does not solve because “badly conditioning”." That is not what I am getting. Specifically, I get a result in soln1, accompanied by a warning message. That is very different from "does not solve". Second, we now have two results that differ by around 10^(-7). This has also been shown or suggested in other responses, and it is quite reasonable: if machine doubles carry 16-17 digits of precision and the condition number is estimated to be in the ballpark of 10^10, then (by the usual rule of thumb that one loses roughly Log10 of the condition number in decimal digits) we expect results accurate to seven or so places. The condition number of that matrix, coupled with this pair of results, strongly suggests this is about the best you can do.
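To put a number on that, here is one way to estimate the condition number directly from the singular values (a sketch, using the same matrix m; Tolerance -> 0 keeps even the near-zero singular values that would otherwise be dropped).

(* 2-norm condition number: largest singular value over smallest *)
sv = SingularValueList[m, Tolerance -> 0];
cond = First[sv]/Last[sv]
Log10[cond] (* roughly the number of decimal digits lost to the conditioning *)

If Log10[cond] comes out around 10, a discrepancy near 10^(-7) between two machine-precision solves is just what one should expect.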
Also let me draw attention to the response by @HenrikSchumacher. With one exception, the singular values of this matrix all lie between 0.7 and 2 (note: use singular values rather than eigenvalues when the matrix is not real-symmetric or Hermitian). This means there is, to some approximation, a one-dimensional null space, which in turn means you need to be fairly sure your right-hand-side vector b lies, again to reasonable approximation, in the complement of that null space. From these results, and some further experiments (not shown) with a randomly generated right-hand side, it seems that it might.
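Here is one way to make that last check concrete (a sketch, using the same m and b; uMin is just my name for the left singular vector belonging to the smallest singular value, which is the direction that matters for the right-hand side).

(* the component of b along uMin is the part that gets amplified by 1/(smallest singular value) in the solution *)
{u, s, v} = SingularValueDecomposition[m];
uMin = u[[All, -1]];
Abs[uMin . b]/Norm[b]

If that relative component is tiny, the bad conditioning is largely harmless for this particular b; if it is of order 1, the warning message deserves to be taken seriously.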