I have a system of equations.
$$ \small \begin{aligned} (1 - p_0)u_0 + (q_0 - 1)v_0 + u_1 - v_1 = r_0 - c t_0 \\\\ (1 - p_1)u_1 + (q_1 - 1)v_1 + u_0 - v_0 = r_1 - c t_1 \\\\ (1 - p_2)u_2 + (q_2 - 1)v_2 + u_3 - v_3 + u_4 - v_4 = r_2 - c t_2 \\\\ (1 - p_3)u_3 + (q_3 - 1)v_3 + u_2 - v_2 + u_4 - v_4 = r_3 - c t_3 \\\\ (1 - p_4)u_4 + (q_4 - 1)v_4 + u_2 - v_2 + u_3 - v_3 = r_4 - c t_4 \end{aligned} $$
In matrix form, the coefficient matrix always takes the shape of two block-diagonal matrices placed side by side.
$$ \scriptsize \begin{bmatrix} 1 - p_0 & 1 & 0 & 0 & 0 & q_0 - 1 & -1 & 0 & 0 & 0 \\\\ 1 & 1 - p_1 & 0 & 0 & 0 & -1 & q_1 - 1 & 0 & 0 & 0 \\\\ 0 & 0 & 1 - p_2 & 1 & 1 & 0 & 0 & q_2 - 1 & -1 & -1 \\\\ 0 & 0 & 1 & 1 - p_3 & 1 & 0 & 0 & -1 & q_3 - 1 & -1 \\\\ 0 & 0 & 1 & 1 & 1 - p_4 & 0 & 0 & -1 & -1 & q_4 - 1 \end{bmatrix} \begin{bmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ u_3 \\\\ u_4 \\\\ v_0 \\\\ v_1 \\\\ v_2 \\\\ v_3 \\\\ v_4 \end{bmatrix} =\begin{bmatrix} r_0 - c t_0 \\\\ r_1 - c t_1 \\\\ r_2 - c t_2 \\\\ r_3 - c t_3 \\\\ r_4 - c t_4 \end{bmatrix} $$
This can, of course, be solved to get all $u_i$ and $v_i$.
My question is about the actual math, but for completeness: I am using the Python function scipy.optimize.lsq_linear, which supports bounds, because each unknown $u_i$ and $v_i$ has a lower bound of $0$ and a positive upper bound. I am also aware that the known block-diagonal structure allows the problem to be trivially split into smaller sub-problems, which would be particularly desirable since my real problem has around 80 equations rather than the 5 shown. However, I keep it in this combined form because I also want to calculate the scale factor $c$ as part of one big solution.
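For concreteness, here is a minimal sketch of the kind of call I mean, with random placeholder data standing in for the real $p_i$, $q_i$, $r_i$, $t_i$ and an arbitrary upper bound of 10 on every unknown:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Placeholder data: A stands in for the 5x10 coefficient matrix shown above,
# b for the right-hand side r_i - c*t_i, and ub for the per-variable upper bounds.
rng = np.random.default_rng(0)
A = rng.random((5, 10))
b = rng.random(5)
ub = np.full(10, 10.0)           # each u_i, v_i bounded to [0, 10] here

res = lsq_linear(A, b, bounds=(0.0, ub))
u, v = res.x[:5], res.x[5:]      # unknowns ordered [u_0..u_4, v_0..v_4]
```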
To this end, the problem can be slightly rearranged.
$$ \scriptsize \begin{bmatrix} 1 - p_0 & 1 & 0 & 0 & 0 & q_0 - 1 & -1 & 0 & 0 & 0 & t_0 \\\\ 1 & 1 - p_1 & 0 & 0 & 0 & -1 & q_1 - 1 & 0 & 0 & 0 & t_1 \\\\ 0 & 0 & 1 - p_2 & 1 & 1 & 0 & 0 & q_2 - 1 & -1 & -1 & t_2 \\\\ 0 & 0 & 1 & 1 - p_3 & 1 & 0 & 0 & -1 & q_3 - 1 & -1 & t_3 \\\\ 0 & 0 & 1 & 1 & 1 - p_4 & 0 & 0 & -1 & -1 & q_4 - 1 & t_4 \end{bmatrix} \begin{bmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ u_3 \\\\ u_4 \\\\ v_0 \\\\ v_1 \\\\ v_2 \\\\ v_3 \\\\ v_4 \\\\ c \end{bmatrix} =\begin{bmatrix} r_0 \\\\ r_1 \\\\ r_2 \\\\ r_3 \\\\ r_4 \end{bmatrix} $$
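A sketch of this rearranged system, again with placeholder data, where $c$ becomes the eleventh unknown with its own upper bound $k$:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Placeholder data: A is the 5x10 coefficient matrix, t the column of t_i,
# r the right-hand side, k the positive upper bound on c.
rng = np.random.default_rng(0)
A = rng.random((5, 10))
t = rng.random(5)
r = np.zeros(5)                       # all r_i are 0 on the first iteration
k = 100.0

A_aug = np.hstack([A, t[:, None]])    # extra column of t_i multiplying c
lb = np.zeros(11)
ub = np.append(np.full(10, 10.0), k)  # u_i, v_i in [0, 10]; c in [0, k]

res = lsq_linear(A_aug, r, bounds=(lb, ub))
c = res.x[-1]                         # comes back as 0 when all r_i are 0
```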
In my real problem, all $r_i$ start as $0$ (on the first iteration of my program), which means the trivial solution $c = 0$ is always returned as the least-squares solution: all $t_i$ have no effect and all $r_i$ stay at $0$. What I really want is a way to get the largest $c$, up to its own positive upper bound $k$, while still finding a least-squares solution for everything else. I could add another row to the matrix so that $c$ simply targets its largest possible value $k$, but I am not sure this is a good idea. I think it would dominate the least-squares calculation, because $k$ can be quite large compared to everything else, and I am not sure how I would choose a sensible scaling to achieve the right balance.
$$ \scriptsize \begin{bmatrix} 1 - p_0 & 1 & 0 & 0 & 0 & q_0 - 1 & -1 & 0 & 0 & 0 & t_0 \\\\ 1 & 1 - p_1 & 0 & 0 & 0 & -1 & q_1 - 1 & 0 & 0 & 0 & t_1 \\\\ 0 & 0 & 1 - p_2 & 1 & 1 & 0 & 0 & q_2 - 1 & -1 & -1 & t_2 \\\\ 0 & 0 & 1 & 1 - p_3 & 1 & 0 & 0 & -1 & q_3 - 1 & -1 & t_3 \\\\ 0 & 0 & 1 & 1 & 1 - p_4 & 0 & 0 & -1 & -1 & q_4 - 1 & t_4 \\\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_0 \\\\ u_1 \\\\ u_2 \\\\ u_3 \\\\ u_4 \\\\ v_0 \\\\ v_1 \\\\ v_2 \\\\ v_3 \\\\ v_4 \\\\ c \end{bmatrix} =\begin{bmatrix} r_0 \\\\ r_1 \\\\ r_2 \\\\ r_3 \\\\ r_4 \\\\ k \end{bmatrix} $$
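To make the scaling question concrete, here is a sketch with an explicit weight $w$ on the extra row, so that it reads $w\,c = w\,k$; with $w = 1$ this is exactly the matrix above, and the value of $w$ below is just a placeholder, since choosing it well is exactly what I am unsure about:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Placeholder data as before: [A | t] is the augmented 5x11 matrix, r the
# right-hand side, k the upper bound on c.  The weight w controls how strongly
# c is pulled toward k relative to the residuals of the original equations.
rng = np.random.default_rng(0)
A_aug = np.hstack([rng.random((5, 10)), rng.random((5, 1))])
r = np.zeros(5)
k = 100.0
w = 1e-2                                   # hand-tuned weight (placeholder)

extra_row = np.zeros((1, 11))
extra_row[0, -1] = w                       # last row: w * c = w * k
A_w = np.vstack([A_aug, extra_row])
b_w = np.append(r, w * k)

lb = np.zeros(11)
ub = np.append(np.full(10, 10.0), k)
res = lsq_linear(A_w, b_w, bounds=(lb, ub))
```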
Is there a better way to formulate this type of situation, or is this a known type of problem with a standard solution?