
Let $V$ be a vector space of finite dimension and let $T,S$ be diagonalizable linear transformations from $V$ to itself. I need to prove that if $TS=ST$, then every eigenspace $V_\lambda$ of $S$ is $T$-invariant and the restriction of $T$ to $V_\lambda$ ($T\colon V_{\lambda}\rightarrow V_{\lambda}$) is diagonalizable. In addition, I need to show that there is a basis $B$ of $V$ such that $[S]_{B}^{B}$ and $[T]_{B}^{B}$ are both diagonal if and only if $TS=ST$.

OK, so first let $v\in V_\lambda$. From $TS=ST$ we get $S(T(v))=T(S(v))=T(\lambda v)=\lambda T(v)$, so $T(v)$ is either zero or an eigenvector of $S$ with eigenvalue $\lambda$; in both cases $T(v)\in V_\lambda$, which is what we want. I want to use this in order to get the remaining claims, I just don't know how. One direction of the "iff" is obvious; the other one is more tricky to me.
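For instance (a concrete check of the invariance claim, with matrices of my own choosing): take $$S=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad T=\begin{pmatrix}2&3\\3&2\end{pmatrix},$$ which commute. The eigenspaces of $S$ are $V_1=\operatorname{span}\{(1,1)\}$ and $V_{-1}=\operatorname{span}\{(1,-1)\}$, and indeed $T(1,1)=(5,5)\in V_1$ and $T(1,-1)=(-1,1)\in V_{-1}$.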

• $0$ is diagonalizable. Commented Aug 8, 2011 at 14:59

3 Answers


This answer is basically the same as Paul Garrett's. First, I'll state the question as follows.

Let $V$ be a finite dimensional vector space over a field $K$, and let $S$ and $T$ be diagonalizable endomorphisms of $V$. We say that $S$ and $T$ are simultaneously diagonalizable if (and only if) there is a basis of $V$ which diagonalizes both. The theorem is

$S$ and $T$ are simultaneously diagonalizable if and only if they commute.

If $S$ and $T$ are simultaneously diagonalizable, they clearly commute. For the converse, I'll just refer to Theorem 5.1 of The minimal polynomial and some applications by Keith Conrad. [Harvey Peng pointed out in a comment that the link to Keith Conrad's text was broken. I hope the link will be restored, but in the meantime here is a link to the Wayback Machine version. Edit: original link just updated.]

EDIT. The key statement to prove the above theorem is Theorem 4.11 of Keith Conrad's text, which says:

Let $A: V \to V$ be a linear operator. Then $A$ is diagonalizable if and only if its minimal polynomial in $F[T]$ splits in $F[T]$ and has distinct roots.

[$F$ is the ground field, $T$ is an indeterminate, and $V$ is finite dimensional.]
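For example (standard illustrations, not taken from Conrad's text): the matrix $A=\begin{pmatrix}1&1\\0&1\end{pmatrix}$ has minimal polynomial $(T-1)^2$, which splits but has a repeated root, and $A$ is not diagonalizable; by contrast, over a field of characteristic $\not=2$, the matrix $A'=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ has minimal polynomial $T^2-1=(T-1)(T+1)$, which splits with distinct roots, and $A'$ is diagonalizable.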

The key point to prove Theorem 4.11 is to check the equality $$V=E_{\lambda_1}+\cdots+E_{\lambda_r},$$ where the $\lambda_i$ are the distinct eigenvalues and the $E_{\lambda_i}$ are the corresponding eigenspaces. One can prove this by using Lagrange's interpolation formula: put $$f:=\sum_{i=1}^r\ \prod_{j\not=i}\ \frac{T-\lambda_j}{\lambda_i-\lambda_j}\ \in F[T]$$ and observe that $f(A)$ is the identity of $V$.
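To spell out the step behind that observation: since $f$ has degree less than $r$ and $f(\lambda_i)=1$ for each $i$, the polynomial $f-1$ vanishes at $r$ distinct points, so $f=1$ in $F[T]$ and hence $f(A)=\mathrm{id}_V$. Now, for $v\in V$, write $$w_i:=\prod_{j\not=i}\ \frac{A-\lambda_j\,\mathrm{id}_V}{\lambda_i-\lambda_j}\,(v),$$ so that $v=f(A)(v)=w_1+\cdots+w_r$. The minimal polynomial of $A$ is $\prod_j(T-\lambda_j)$, so $(A-\lambda_i\,\mathrm{id}_V)(w_i)=0$, that is, $w_i\in E_{\lambda_i}$, which gives the equality above.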

• You've listed a document by Keith Conrad. I believe this one by the same author also has a very simple proof for the question being asked, without reference to a minimal polynomial. I found it helpful since I didn't know what a minimal polynomial was: math.uconn.edu/~kconrad/blurbs/linmultialg/simulcomm.pdf Commented Sep 24, 2014 at 17:26
• @MichaelLevy - The definition of diagonalizable endomorphisms and examples of such are given in the links kconrad.math.uconn.edu/blurbs/linmultialg/minpolyandappns.pdf and kconrad.math.uconn.edu/blurbs/linmultialg/simulcomm.pdf. (Both texts were written by Keith Conrad.) Commented Mar 8, 2022 at 17:33
• @MichaelLevy - Yes, in the context of this problem, the expressions "endomorphism" and "linear operator" are synonymous. Commented Mar 9, 2022 at 0:35
• @MichaelLevy - Answer to Question 1: yes. What is $T$? It is an indeterminate, see en.wikipedia.org/wiki/…. A linear factor in $F[T]$ is a factor of the form $T-a$ with $a\in F$. Example: $T-1$ is a linear factor of $T^2-1$. Commented Mar 9, 2022 at 12:30
• I am sorry, but the link seems broken. Commented Jul 3, 2024 at 12:16

You've proven (from $ST=TS$) that the $\lambda$-eigenspace $V_\lambda$ of $T$ is $S$-stable. The diagonalizability of $S$ on the whole space is equivalent to its minimal polynomial having no repeated factors. Its minimal poly on $V_\lambda$ divides that on the whole space, so is still repeated-factor-free, so $S$ is diagonalizable on that subspace. This gives an induction to prove the existence of a simultaneous basis of eigenvectors. Note that it need not be the case that every eigenvector of $T$ is an eigenvector of $S$, because eigenspaces can be greater-than-one-dimensional.
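For instance (a minimal example of my own): take $T=\mathrm{id}$ and $S=\begin{pmatrix}1&0\\0&2\end{pmatrix}$. They commute and are both diagonalizable, and every nonzero vector is an eigenvector of $T$, yet $(1,1)$ is not an eigenvector of $S$.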

Edit: Thanks Arturo M. Yes, over a not-necessarily algebraically closed field, one must say that "diagonalizable" is equivalent to the minimal polynomial having no repeated factors and splitting into linear factors.
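For example: over $\mathbb{R}$, the rotation matrix $R=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ has minimal polynomial $T^2+1$, which has no repeated factors but does not split into linear factors over $\mathbb{R}$; accordingly, $R$ is diagonalizable over $\mathbb{C}$ but not over $\mathbb{R}$.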

Edit 2: $V_\lambda$ being "S-stable" means that $SV_\lambda\subset V_\lambda$, that is, $Sv\in V_\lambda$ for all $v\in V_\lambda$.


As you proved, each eigenspace of $S$ is invariant (stable) under $T$. Since $S$ is diagonalizable, the domain $V$ of both operators splits as a direct sum of the eigenspaces of $S$. If we now choose a basis by first taking a basis of the first eigenspace of $S$ (corresponding to the eigenvalue $\lambda_1$), then a basis of the second, and so on, we get a basis of $V$ in which the matrix of $S$ is diagonal, and on its diagonal it first has $\lambda_1$ (one or more times, depending on the dimension of the corresponding eigenspace), then $\lambda_2$, etc.

In this same basis, the matrix of $T$ is block diagonal (due to the proven stability of the eigenspaces). Now, its diagonal blocks $B_1, B_2,\dots$ are also diagonalizable: the minimal polynomial of each block divides the minimal polynomial of $T$, which factors into linear factors without multiple roots, so the divisor must have this same property, which is equivalent to diagonalizability.

If we now diagonalize $B_1$ (with a change of basis on the first eigenspace, using matrices $P_1$ and $P_1^{-1}$), $B_2$ (with $P_2$ and $P_2^{-1}$), and so on, and form the block-diagonal matrix $P$ with diagonal blocks $P_1, P_2,\dots$ (so that $P^{-1}$ is block diagonal with blocks $P_1^{-1}, P_2^{-1},\dots$), then in the new basis $T$ becomes diagonal, while the matrix of $S$ stays the same (on each block $S$ acts as the scalar matrix $\lambda_i I$, which commutes with $P_i$), and it is already diagonal.
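As a small worked instance of this construction (numbers of my own choosing): take $$S=\begin{pmatrix}1&0&0\\0&1&0\\0&0&2\end{pmatrix},\qquad T=\begin{pmatrix}1&1&0\\1&1&0\\0&0&3\end{pmatrix},$$ which commute. Here $B_1=\begin{pmatrix}1&1\\1&1\end{pmatrix}$ and $B_2=(3)$, and with $P_1=\begin{pmatrix}1&1\\1&-1\end{pmatrix}$ we get $P=\begin{pmatrix}1&1&0\\1&-1&0\\0&0&1\end{pmatrix}$, so that $P^{-1}TP=\operatorname{diag}(2,0,3)$ while $P^{-1}SP=S=\operatorname{diag}(1,1,2)$.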

This is a somewhat long argument, but in my opinion it is relatively elementary compared to the linked proofs.
