
I'm looking at the last part of Question 31 in the link which states:

Section 3.6 Problem 31 : $\mathbf{M}$ is the space of 3 by 3 matrices. Multiply each matrix $X$ in $\mathbf{M}$ by $$ \begin{bmatrix} 1 & 0 & -1\\ -1 & 1 & 0 \\ 0 & -1 & 1 \\ \end{bmatrix}. \text{Notice}: A \begin{bmatrix} 1\\ 1\\ 1\\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0\\ \end{bmatrix} $$

(We are skipping the previous parts of this question)

$(a)$ find the "nullspace" of that operation $AX$ and $(b)$ find the "column space". Why do the dimensions add to $(n-r)+r=9$?

I see that for part a you can solve for the nullspace of A. It says you're solving for the nullspace of the operation AX but because AX is a linear combination of A, it's my understanding that it doesn't make a difference.

However, for part b I'm pretty lost. Solving for the column space of B definitely will not get $\dim(C(B))=6$ or $\dim(N(B))=3$. Where are these dimensions coming from?

  • Use MathJax to format mathematical symbols. Commented Apr 28, 2017 at 22:42
  • Done, although I don't see that it makes a significant difference in clarity. Commented Apr 28, 2017 at 22:46
  • Your linear mapping is $L_A(X) = AX$, which is a mapping from a vector space of dimension $9$ to itself. You're not solving for the nullspace of $AX$, which is a mapping between two vector spaces of dimension $3$. You can think of $M_{3\times 3}(F)$ (where $F$ is your underlying field; usually, but not always, $\Bbb R$) as being $F^3 \oplus F^3 \oplus F^3$, that is, $3$ copies (the columns) of three-vectors (the entries in each row). Note that for $AX$ to be the $0$-matrix, each column of $AX$ must be the $0$-vector, so that every column of $X$ must lie in the nullspace of $A$. Commented Apr 28, 2017 at 23:08
  • @Zeo Next time you post a question, format all of it into your post rather than sending us a link. This way people won't downvote your post. Commented Apr 28, 2017 at 23:38
  • @Arbuja - ah, now I understand. Thank you very much! Commented Apr 28, 2017 at 23:40

3 Answers


The space of $3\times 3$ real matrices is a 9-dimensional vector space over the reals. The null space of your mapping $X\mapsto AX$ is the subspace of all $3\times 3$ matrices $X$ such that $AX$ is the $3\times 3$ zero matrix.

The hint tells you that $A$ applied to the vector $(1,1,1)$ is the zero vector; in fact, scalar multiples of $(1,1,1)$ are the only vectors that $A$ maps to zero. Using the fact that the columns of the matrix $AX$ are just $A$ times the respective columns of $X$, we conclude that for $AX$ to be the zero matrix, the columns of $X$ must be scalar multiples of $(1,1,1)$. But it doesn't matter which scalar multiple, so you have three degrees of freedom: one multiple for the first column, one for the second, and one for the third. This shows the nullspace of the mapping $X\mapsto AX$ is $3$-dimensional. It follows by the rank-nullity theorem that the column space is $6$-dimensional.
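These dimension counts can also be checked mechanically. In a column-major flattening of $X$, the $9\times 9$ matrix representing $X\mapsto AX$ is block-diagonal with three copies of $A$ (one block per column of $X$), so its rank is $3\cdot\operatorname{rank}(A) = 6$ and its nullity is $3\cdot 1 = 3$. A minimal pure-Python sketch (the `rank` helper is my own, using exact Gaussian elimination over the rationals):

```python
from fractions import Fraction

# The specific matrix A from the problem.
A = [[ 1,  0, -1],
     [-1,  1,  0],
     [ 0, -1,  1]]

def rank(M):
    """Rank via exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# In a column-major flattening of X, the 9x9 matrix of X -> AX is
# block-diagonal with three copies of A (one block per column of X).
L = [[0] * 9 for _ in range(9)]
for b in range(3):
    for i in range(3):
        for j in range(3):
            L[3 * b + i][3 * b + j] = A[i][j]

print(rank(A))  # 2, so the nullspace of A (acting on 3-vectors) is 1-dimensional
print(rank(L))  # 6 = r, so the nullity of X -> AX is 9 - 6 = 3
```

So $(n-r)+r = 3+6 = 9$, matching the exercise.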

(Note carefully that the nullspace of $A$ construed as a mapping from vectors to vectors is not the same thing as the nullspace of $A$ construed as a mapping from matrices to matrices. In the former case $A$ sends a $3\times 1$ vector to a $3\times 1$ vector. In the latter case, $A$ sends a $3\times 3$ matrix to a $3\times 3$ matrix. These are entirely different objects. The former is a linear endomorphism on a $3$-dimensional vector space. The latter is a linear endomorphism on a $9$-dimensional vector space.)

  • Are you saying that a $3 \times 3$ matrix (not its column or null spaces but the matrix itself) is described as $9$-dimensional, rather than having a dimension of $3 \times 3$? When you say "9-dimensional vector space", I understand that to mean a $9 \times 1$ or $1 \times 9$ vector, but if that is the case I don't understand how a $3 \times 3$ matrix is in that space. Commented Apr 29, 2017 at 0:20
  • @Zeo The dimension of a finite-dimensional vector space is the number of vectors in any basis for it. When viewing matrices as elements of a vector space, this corresponds to the number of degrees of freedom that you have in assigning elements of the matrix, just as it does for the row/column vectors that you seem to be more familiar with. It doesn’t say anything about the shape of the array of numbers. This exercise is a good opportunity to stop thinking of “vectors” only as tuples of numbers arranged in a row or column. Commented Apr 29, 2017 at 0:34
  • @Zeo: you seem to be seriously confused about the basic concepts of a vector space and a linear transformation on a vector space. I suggest you study the definitions carefully. Your main error seems to be failing to appreciate that a linear transformation is a function defined on a particular space. The matrix $A$ by itself is not a function, but it can be used to define different functions on different spaces. Commented Apr 29, 2017 at 0:37
  • @symplectomorphic "Your main error seems to be failing to appreciate that a linear transformation is a function defined on a particular space" - I have no doubt that you are correct in that assessment, but we haven't gone over that in my class yet. All I can do is keep going over the definitions I've already got to work with and ask questions to figure out what I don't know yet. Sorry I don't already know the things you want me to know! Commented Apr 29, 2017 at 0:55
  • @amd - ahhh, that is the information I was missing. So if I'm understanding you and symplectomorphic correctly, I think this means that when working with a set of matrices that are the basis of a vector space, the pivots that form the rank are no longer limited to the number of rows in my vectors -- the pivots actually can go up to the number of elements in my matrices (of whatever shape)? Commented Apr 29, 2017 at 1:10

Essentially what they are saying is that any $3\times3$ matrix $B$ of the form $B=AX$ is constrained: since each column of $A$ sums to zero, $(1,1,1)A = 0$, and so the entries of each column of $B$ must also sum to zero. There are nine entries in your matrix; the first two rows can be chosen freely, hence the "column space" being $6$-dimensional, and the last row is then uniquely determined. By the rank-nullity count $9-6=3$, your "null space" is $3$-dimensional.

These spaces are not referring to $B$, but rather to the $9\times9$ matrix that you get by equating the nine elements of $B$ with the nine elements of $AX$.
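A quick sanity check of the constraint (a sketch; the example matrix `X` below is arbitrary, not from the problem): since each column of $A$ sums to zero, $(1,1,1)A = 0$, so every column of $B = AX$ has entries summing to zero. That is one linear condition per column, which cuts the dimension from $9$ down to $6$:

```python
# The specific A from the problem; X is an arbitrary 3x3 example.
A = [[1, 0, -1], [-1, 1, 0], [0, -1, 1]]
X = [[2, 5, -1], [0, 3, 4], [7, 1, 1]]

# B = AX, computed entry by entry.
B = [[sum(A[i][k] * X[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

# Each column of B sums to zero, because (1,1,1)A = 0.
for j in range(3):
    print(sum(B[i][j] for i in range(3)))  # 0 for every column
```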

  • I can see how those dimensions would come from a $9 \times 9$ matrix but not how you're getting that matrix. Sorry, there may just be something really elementary that I am missing here. When I google for "equate elements of two matrices" I get a bunch of questions about how to use Matlab for element comparison, but it doesn't tell me what is being performed here. I understand that $A$, $X$, and $B$ are all $3 \times 3$ matrices. Commented Apr 28, 2017 at 23:38
  • Matrices are a way of encoding a linear system of equations. In this case, the nine equations arise from equating each element. For example, your first equation could be from equating the $(1,1)$ entry of $AX$ with the $(1,1)$ entry of $B$, and so on for the other entries. This will give you nine equations in the nine variables, which is where the $9\times9$ matrix comes from. Commented Apr 29, 2017 at 1:25

People often misapprehend what a vector is: a vector is, quite simply, any element of a vector space.

This raises the question: what is a vector space?

The simple (and sort of inaccurate) answer is: a structure where we can:

$1)$ Add stuff (vectors) together.

$2)$ Scale (intuitively, this means "stretch" or "shrink") vectors by a field element (this scaling is called "scalar multiplication", and the field elements are thus called "scalars").

The more correct (but more formal, and somewhat mystifying) answer is: a vector space is any set, together with an associated field, that satisfies a certain set of axioms (these are usually listed as $10$ to $12$ "rules", such as "associativity and commutativity of vector addition"). The field itself has to satisfy another set of axioms (the "field axioms", which are usually just accepted as "the rules of arithmetic" when one is talking about well-known fields, such as the rational or real numbers). For example, one vector space axiom is:

For any two scalars $a,b$ and any vector $v$:

$(a+b)v = av + bv$ (note that the "$+$" means two different things on each side of the equals sign: the one on the left is the sum of two scalars, the one on the right is the sum of two vectors).

I won't list the vector space axioms here; they can be found in almost any worthwhile text on linear algebra.

So, it turns out that any $3 \times 3$ matrix:

$\begin{bmatrix}a&b&c\\d&e&f\\g&h&j\end{bmatrix}$ can be written as a linear combination of nine basis matrices, like so:

$\begin{bmatrix}a&b&c\\d&e&f\\g&h&j\end{bmatrix} = a\begin{bmatrix}1&0&0\\0&0&0\\0&0&0\end{bmatrix}+b\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}+c\begin{bmatrix}0&0&1\\0&0&0\\0&0&0\end{bmatrix}\\ +d\begin{bmatrix}0&0&0\\1&0&0\\0&0&0\end{bmatrix}+e\begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix}+f\begin{bmatrix}0&0&0\\0&0&1\\0&0&0\end{bmatrix}\\ +g\begin{bmatrix}0&0&0\\0&0&0\\1&0&0\end{bmatrix}+h\begin{bmatrix}0&0&0\\0&0&0\\0&1&0\end{bmatrix}+j\begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix}.$
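This decomposition is easy to verify mechanically; here is a small pure-Python sketch (the helper `E` is my own name for the standard basis matrix with a $1$ in position $(i,j)$ and zeros elsewhere):

```python
# An arbitrary 3x3 matrix, written in terms of the nine standard
# basis matrices E(i, j) (a 1 in position (i, j), zeros elsewhere).
M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

def E(i, j):
    return [[1 if (r, c) == (i, j) else 0 for c in range(3)] for r in range(3)]

# Rebuild M as the sum over i, j of M[i][j] * E(i, j).
S = [[sum(M[i][j] * E(i, j)[r][c] for i in range(3) for j in range(3))
      for c in range(3)] for r in range(3)]

print(S == M)  # True: the nine E(i, j) span the space of 3x3 matrices
```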

To find the nullspace of the linear transformation

$L_A: \mathbf{M} \to \mathbf{M}$, which takes $X$ to the matrix $AX$, we write:

$A = \begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}$ and:

$X = \begin{bmatrix}x_{11}&x_{12}&x_{13}\\x_{21}&x_{22}&x_{23}\\x_{31}&x_{32}&x_{33}\end{bmatrix}$

We must then solve the $9$ linear equations (in the $9$ unknowns $x_{ij}$):

$a_{11}x_{11} + a_{12}x_{21} + a_{13}x_{31} = 0\\ a_{21}x_{11} + a_{22}x_{21} + a_{23}x_{31} = 0\\ a_{31}x_{11} + a_{32}x_{21} + a_{33}x_{31} = 0$


$a_{11}x_{12} + a_{12}x_{22} + a_{13}x_{32} = 0\\ a_{21}x_{12} + a_{22}x_{22} + a_{23}x_{32} = 0\\ a_{31}x_{12} + a_{32}x_{22} + a_{33}x_{32} = 0$


$a_{11}x_{13} + a_{12}x_{23} + a_{13}x_{33} = 0\\ a_{21}x_{13} + a_{22}x_{23} + a_{23}x_{33} = 0\\ a_{31}x_{13} + a_{32}x_{23} + a_{33}x_{33} = 0.$

Note how I have grouped the nine equations. If we write $X_i = \begin{bmatrix}x_{1i}\\x_{2i}\\x_{3i}\end{bmatrix}$ for $i = 1,2,3$

we see that each group of three represents the condition $AX_i = \begin{bmatrix}0\\0\\0\end{bmatrix}$,

that is to say, each $X_i$ is in the nullspace of the matrix $A$ (not of $L_A$).

Now this is true if and only if $X_i = c_i\begin{bmatrix}1\\1\\1\end{bmatrix}$ for some scalars $c_i$ (which may be different for different $i$); that is to say, it is necessary and sufficient to specify just the first row of $X$, since every entry below the first row of a column must equal the entry at the top of that column. It takes $3$ scalars to do so, so the dimension of the nullspace of $L_A$ is $3$; it has the basis:

$\left\{\begin{bmatrix}1&0&0\\1&0&0\\1&0&0\end{bmatrix}, \begin{bmatrix}0&1&0\\0&1&0\\0&1&0\end{bmatrix}, \begin{bmatrix}0&0&1\\0&0&1\\0&0&1\end{bmatrix}\right\}$
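One can verify directly that $A$ sends each of these three basis matrices to the zero matrix (a small pure-Python sketch; `matmul` is my own helper):

```python
A = [[1, 0, -1], [-1, 1, 0], [0, -1, 1]]

# The three proposed basis matrices: each column is a multiple of (1,1,1).
N1 = [[1, 0, 0], [1, 0, 0], [1, 0, 0]]
N2 = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
N3 = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]

def matmul(P, Q):
    """3x3 matrix product, entry by entry."""
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Z = [[0] * 3 for _ in range(3)]
for N in (N1, N2, N3):
    print(matmul(A, N) == Z)  # True: each basis matrix lies in the nullspace of L_A
```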

The computation of the range of $L_A$ is done in essentially the same way. We see that the columns of $AX$ are the $3$-vectors $AX_i$, so, for example, with $i = 1$, we have:

$x_{11} - x_{31} = b_{11}\\ -x_{11} + x_{21} = b_{21}\\ -x_{21} + x_{31} = b_{31}$

so that $b_{11} + b_{21} + b_{31} = 0$.

So specifying two elements of each column of $AX$ determines the third, which means we need $6$ scalars to completely determine $AX$, that is: the range of $L_A$ has dimension $6$.

