
Could you help me with an idea for solving the following problem? I think the proof involves the positive definiteness of the Gram matrix, but I don't know how.

Consider a system of vectors $e_1, e_2, \ldots, e_n, e_{n+1}$ in some Euclidean space such that the dot product $(e_i, e_j) < 0$ for all $i \neq j$. Prove that any $n$ vectors of the system are linearly independent.
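
For anyone who wants to experiment, here is a small numerical sketch of such a system (assuming Python with numpy; the regular-simplex construction and the names `E`, `G` are just for illustration, not part of the problem), which also checks the claim on one example:

```python
import itertools
import numpy as np

n = 4
# Vertices of a regular simplex: project the standard basis of R^{n+1}
# onto the hyperplane orthogonal to (1, ..., 1).  The n+1 rows of E then
# satisfy (e_i, e_j) = -1/(n+1) < 0 for i != j.
E = np.eye(n + 1) - np.ones((n + 1, n + 1)) / (n + 1)

G = E @ E.T                                      # Gram matrix
assert np.all(G[~np.eye(n + 1, dtype=bool)] < 0)

# Claim to be proved: every n of the n+1 vectors are linearly independent.
for subset in itertools.combinations(range(n + 1), n):
    assert np.linalg.matrix_rank(E[list(subset)]) == n
print("every n-subset is linearly independent")
```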

  • ... is $n$ the dimension of the space? (Commented Oct 26, 2016 at 20:40)

4 Answers


Let us try induction on $n$, supposing that $n$ is the dimension of the space.

  • For $n=2$ we have three vectors $e_1, e_2, e_3$ in a Euclidean space of dimension $2$, such that the pairwise scalar products are negative.

    Let us suppose that $e_1$ and $e_2$ (without loss of generality) are linearly dependent, i.e. that there exist $(\lambda_1, \lambda_2) \neq (0,0)$ such that $$\lambda_1 e_1 + \lambda_2 e_2 = 0$$ If $\lambda_1 = 0$ then $\lambda_2 e_2 = 0$, and since $e_2 \neq 0$ (a vector with a negative scalar product against another vector cannot be zero) this would force $\lambda_2 = 0$; so, up to negating both coefficients, we can suppose that $\lambda_1 > 0$.

    Taking the scalar product of this relation with $e_1$ gives $\lambda_1 (e_1|e_1)+\lambda_2(e_1|e_2) = 0$; since $\lambda_1(e_1|e_1) > 0$ and $(e_1|e_2) < 0$, it follows that $\lambda_2 > 0$.

    Taking the scalar product with $e_3$ then gives $0=\lambda_1(e_1|e_3)+\lambda_2(e_2|e_3)<0$, which is absurd.

    You conclude that $e_1$ and $e_2$ are linearly independent.

  • Let $n>2$, and suppose that the property we want to prove holds at rank $n-1$. Let $e_1, \dots, e_{n+1}$ be vectors satisfying the hypothesis of the question. We can suppose that each of these vectors has unit norm: they are nonzero because their pairwise scalar products are negative, and dividing by the norms does not change the signs of the scalar products.

    Let us take $n$ of these vectors, say $e_1, \dots, e_n$, and suppose they are linearly dependent, i.e. that there exist $(\lambda_1, \dots, \lambda_n) \neq (0, \dots, 0)$ such that $$\sum_{i=1}^n \lambda_i e_i =0$$ Taking the scalar product with $e_n$ and multiplying the resulting identity by $e_n$ gives $$\sum_{i=1}^n \lambda_i (e_i|e_n)e_n =0$$ If we subtract the two previous equalities we obtain $$\sum_{i=1}^n \lambda_i f_i =0$$ where we write $f_i=e_i-(e_i|e_n)e_n$ for the orthogonal projection of $e_i$ onto $e_n^{\perp}$. We notice that $f_n=0$ (since $\|e_n\|^2=1$). This shows that $$\sum_{i=1}^{n-1} \lambda_i f_i =0$$ Let us now compute $(f_i|f_j)$ for $i \neq j$ (and $i,j \neq n$): $$\begin{align}(f_i|f_j)&=(e_i-(e_i|e_n)e_n\,|\,e_j-(e_j|e_n)e_n)\\&=(e_i|e_j)-(e_j|e_n)(e_i|e_n)-(e_i|e_n)(e_n|e_j)+(e_i|e_n)(e_j|e_n)\|e_n\|^2\\&=(e_i|e_j)-(e_i|e_n)(e_j|e_n)<0,\end{align}$$ since $(e_i|e_j)<0$ and $(e_i|e_n)(e_j|e_n)>0$ as a product of two negative numbers. (A numerical sketch of this projection step is given after the proof.)

    Moreover all the $f_i$ ($i \neq n$) are non-zero vectors (a zero $f_i$ would give $(f_i|f_j)=0$, contradicting the strict inequality above) lying in the hyperplane $e_n^{\perp}$, which has dimension $n-1$.

    We now have $n$ vectors satisfying the hypothesis at rank $n-1$, so by the induction hypothesis any $n-1$ of the vectors $(f_i)_{i \in \{1, \dots, n-1, n+1 \}}$ are linearly independent. In particular $f_1, \dots, f_{n-1}$ are independent, and consequently $\lambda_1 = \dots = \lambda_{n-1} = 0$.

    Plugging this back into $\sum_{i=1}^n \lambda_i e_i = 0$ gives $\lambda_n e_n = 0$, hence $\lambda_n = 0$ as well since $e_n \neq 0$. This proves that $(e_1, \dots, e_n)$ are independent, and the property holds at rank $n$.
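
Here is a minimal numerical sketch of the projection step above (assuming Python with numpy; the simplex vectors are just one convenient instance of the hypothesis, not part of the proof):

```python
import numpy as np

n = 4
# Simplex example: n+1 vectors in an n-dimensional space with pairwise
# negative scalar products (rows of E).
E = np.eye(n + 1) - np.ones((n + 1, n + 1)) / (n + 1)
E /= np.linalg.norm(E, axis=1, keepdims=True)    # normalise to unit norm

e_n = E[n - 1]                                   # plays the role of e_n
F = E - np.outer(E @ e_n, e_n)                   # f_i = e_i - (e_i|e_n) e_n
idx = [i for i in range(n + 1) if i != n - 1]    # discard f_n = 0

Gf = F[idx] @ F[idx].T
assert np.all(np.diag(Gf) > 0)                   # all remaining f_i nonzero
assert np.all(Gf[~np.eye(n, dtype=bool)] < 0)    # (f_i|f_j) < 0 for i != j
print("projections keep pairwise negative products")
```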

  • Great answer! Never thought of using induction in a problem of linear algebra. (Commented Jan 9, 2017 at 6:15)
---

The result follows from the following Lemma, which is used in the theory of root systems to study bases of root systems.

Lemma: Let $(V, \langle\,{,}\,\rangle)$ be a Euclidean vector space, let $\ell\in V^*$ be a linear form, and let $A\subseteq V$ be a subset such that:
(1) $\ell(x)>0$ for all $x\in A$.
(2) $\langle x,y\rangle \le 0$ for all $x,y\in A$, $x\neq y$.

Then $A$ is linearly independent.

Apply this Lemma to $\ell = \langle\,\cdot\,,-e_{i_0}\rangle$ and $A = \{e_1,\dotsc,\widehat{e_{i_0}},\dotsc,e_{n+1}\}$ (where $\widehat{e_{i_0}}$ denotes omission) for some fixed $i_0$.

Proof of the Lemma: Let $\sum_{x\in A}\lambda_x x = 0$ be a vanishing (finite) linear combination. Split $A = A_1\sqcup A_2$ according to the signs of the coefficients, so that $\lambda_x\ge0$ for all $x\in A_1$ and $\mu_x := -\lambda_x >0$ for all $x\in A_2$, and consider $$ v:= \sum_{x\in A_1}\lambda_x x = \sum_{x\in A_2}\mu_x x. $$

Then $$ 0\le \langle v,v\rangle = \sum_{x\in A_1}\sum_{y\in A_2}\lambda_x \mu_y \langle x,y\rangle \le 0, $$ where we have used (2) for the last inequality. It follows that $v = 0$. Hence, $$ 0 = \ell(v) = \sum_{x\in A_1}\lambda_x \ell(x) = \sum_{x\in A_2} \mu_x \ell(x). $$

Since (1) says that $\ell(x)>0$ for all $x\in A$, and all the coefficients in these sums are nonnegative, it follows that $\lambda_x = 0$ for all $x\in A_1$ and $-\lambda_x = \mu_x = 0$ for all $x\in A_2$. Thus, $A$ is linearly independent.
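
As a sanity check, here is a small numerical sketch of how the Lemma is applied (assuming Python with numpy; the simplex vectors below are an illustrative choice of the $e_i$, not part of the Lemma):

```python
import numpy as np

n = 4
E = np.eye(n + 1) - np.ones((n + 1, n + 1)) / (n + 1)  # simplex example
A, e_omitted = E[:n], E[n]            # omit e_{n+1}; take l = <., -e_{n+1}>

assert np.all(A @ -e_omitted > 0)     # (1): l(x) > 0 for all x in A
G = A @ A.T
assert np.all(G[~np.eye(n, dtype=bool)] <= 0)  # (2): <x,y> <= 0 for x != y
assert np.linalg.matrix_rank(A) == n  # conclusion: A is linearly independent
print("Lemma hypotheses and conclusion verified on the example")
```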

  • Very nice! +1 (Commented Jan 8, 2017 at 14:25)
  • How do you choose such a linear form? (Commented Jan 8, 2017 at 16:23)
  • @bat_of_doom I already answered how to apply this Lemma, directly below the Lemma! I only had the notation for the scalar product wrong, whence my edit. (Commented Jan 9, 2017 at 5:41)
  • @user218931 Sorry, I didn't understand the notation earlier. $\ell(e_j) = \langle e_j, -e_{i_0}\rangle$, isn't it? (Commented Jan 9, 2017 at 6:14)
  • @bat_of_doom Yes, this is the linear form. The above Lemma came up in the study of root systems. I have never seen it applied to other fields, so I wouldn't say that this Lemma is common. (Commented Jan 9, 2017 at 10:04)
---

Yet another proof. We shall prove the statement for $n\ge1$ only, because the case $n=0$ is vacuously true (the empty set is linearly independent by convention). Suppose on the contrary that $e_1,\ldots,e_n$ are linearly dependent, so that some non-trivial linear combination of them is zero. Among all such linear combinations, pick one with the minimal number of nonzero coefficients. Write it as $$ \sum_{i\in I} p_ie_i-\sum_{j\in J}q_je_j=0,\tag{1} $$ where $I,J$ are two non-overlapping subsets of $\{1,2,\ldots,n\}$ and every $p_i,\,q_j>0$.

The linear combination is non-trivial, so $I$ and $J$ cannot both be empty. We may assume that $I$ is non-empty (negating $(1)$ if necessary). Then $J$ must be non-empty too: otherwise $(1)$ implies that $\sum_{i\in I} p_ie_i=0$, and taking inner products of both sides with $e_{n+1}$ makes the left side strictly negative while the right side is $0$, a contradiction.

Furthermore, $I$ cannot be a singleton: otherwise, writing (say) $p_1e_1=\sum_{j\in J}q_je_j$ and taking inner products of both sides with $e_1$, the left side is positive while the right side is negative, a contradiction.

Thus $I$ has at least two indices. Pick any $i_0\in I$. Let $x=p_{i_0}e_{i_0}$. Then $y=\sum_{i\in I\setminus\{i_0\}} p_ie_i$ is a non-empty sum and $x+y=\sum_{j\in J}q_je_j$. Therefore \begin{align*} \langle x,y\rangle=\sum_{i\in I\setminus\{i_0\}}p_i\langle x,e_i\rangle&\lt0,\\ \langle x,x\rangle + \langle x,y\rangle=\langle x,x+y\rangle =\sum_{j\in J}q_j\langle x,e_j\rangle&\le0,\\ \langle y,x\rangle + \langle y,y\rangle=\langle y,x+y\rangle =\sum_{j\in J}q_j\langle y,e_j\rangle&\le0, \end{align*} and in turn $\langle x,x\rangle \langle y,y\rangle\le |\langle x,y\rangle|^2$. The Cauchy–Schwarz inequality gives the reverse bound, so equality holds there and $x$ and $y$ must be linearly dependent. This yields a non-trivial vanishing linear combination of $\{e_i: i\in I\}$, a set that is strictly smaller than $\{e_k: k\in I\cup J\}$ because $J$ is non-empty. This contradicts the minimality of $(1)$. Hence our initial assumption is wrong and $e_1,\ldots,e_n$ must be linearly independent.
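
The inequality step can be illustrated numerically. The following randomized sketch (assuming Python with numpy; an illustration, not a proof) shows that random non-parallel pairs $x, y$ never satisfy all three conditions at once, precisely because that would force equality in the Cauchy–Schwarz inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
hits = 0
for _ in range(100_000):
    # Random Gaussian pairs in R^3 are almost surely non-parallel.
    x, y = rng.standard_normal((2, 3))
    # <x,y> < 0, <x,x+y> <= 0, <y,x+y> <= 0 together would give
    # |x|^2 |y|^2 <= <x,y>^2, i.e. equality in Cauchy-Schwarz.
    if x @ y < 0 and x @ (x + y) <= 0 and y @ (x + y) <= 0:
        hits += 1
print("non-parallel pairs satisfying all three conditions:", hits)  # 0
```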

  • Thank you for the great answer. How did you come up with such an elegant solution? (Commented Jan 9, 2017 at 6:43)
---

This is overkill, but it has an interesting connection to another area of matrix theory.

Let $E=[e_1|\cdots|e_n|e_{n+1}]$. By the given conditions, $P=E^\top E\in M_{n+1}(\mathbb R)$ is a positive semidefinite irreducible $Z$-matrix. Therefore $P+tI$ is a nonsingular $M$-matrix for every $t>0$, so that $(P+tI)^{-1}\ge0$ entrywise. As $P+tI$ is irreducible, so is $(P+tI)^{-1}$. By the Perron–Frobenius theorem, the largest eigenvalue of $(P+tI)^{-1}$ is simple. Thus the smallest eigenvalue of $P+tI$, and therefore of $P$, is simple too. Hence the rank deficiency of $P$ (the multiplicity of the eigenvalue $0$) is at most $1$ because $P$ is positive semidefinite.

It follows that $\operatorname{rank}(E)=\operatorname{rank}(P)\ge n$. That is, some $n$ vectors of $\{e_1,e_2,\ldots,e_n,e_{n+1}\}$ are linearly independent. Without loss of generality, suppose the subset $\{e_2,\ldots,e_{n+1}\}$ is linearly independent. We want to prove that $\{e_1,e_2,\ldots,e_n\}$ (and similarly every other subset of size $n$ containing $e_1$) is also linearly independent. Suppose, after permuting indices and grouping the coefficients of a linear relation by sign, that $$ \sum_{i=1}^k p_ie_i = \underbrace{\sum_{i=k+1}^n p_ie_i}_{:=\, x}, $$ where $k\ge1$ and each $p_i$ is nonnegative. Taking inner products of both sides with $x$ gives $\langle x,x\rangle=\sum_{i\le k}\sum_{j>k}p_ip_j\langle e_i,e_j\rangle\le0$, so $x$ must be zero. As $e_{k+1},\ldots,e_n$ are linearly independent, $p_i$ must be zero for every $i\ge k+1$. In turn, $\sum_{i=1}^k p_ie_i = 0$. Taking inner products again, this time with $e_{n+1}$, we see that $p_i$ must also be zero for each $i\le k$, since every $\langle e_i,e_{n+1}\rangle<0$. Hence all coefficients are zero and $e_1,\ldots,e_n$ are linearly independent. QED

Our conclusion also translates to the following

Proposition. The rank deficiency of a positive semidefinite irreducible $Z$-matrix is at most $1$. Furthermore, if the matrix has strictly negative off-diagonal entries, then its proper principal submatrices are all nonsingular.
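
Here is a numerical spot check of the Proposition (assuming Python with numpy; the Gram matrix of a regular simplex is used as one instance of such a matrix):

```python
import itertools
import numpy as np

n = 4
E = np.eye(n + 1) - np.ones((n + 1, n + 1)) / (n + 1)
P = E @ E.T                          # PSD irreducible Z-matrix (here I - J/(n+1))

assert np.all(P[~np.eye(n + 1, dtype=bool)] < 0)   # negative off-diagonal
assert np.linalg.matrix_rank(P) == n               # rank deficiency exactly 1

t = 0.1
# Inverse of the nonsingular M-matrix P + tI is entrywise nonnegative.
assert np.all(np.linalg.inv(P + t * np.eye(n + 1)) >= 0)

# Every proper principal submatrix is nonsingular.
for size in range(1, n + 1):
    for idx in itertools.combinations(range(n + 1), size):
        assert abs(np.linalg.det(P[np.ix_(idx, idx)])) > 1e-12
print("Proposition verified on the simplex Gram matrix")
```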

