
Apologies if this is silly or too simple, but I just realized I can't resolve a confusion over what "adjoint" or "transpose" is supposed to mean. Suppose $V$ is an $n$-dimensional $k$-vector space and $A: V\to V$ a linear transformation. The adjoint of $A$ is the map $A^*: V^* \to V^*$ defined by $A^*(f) := f \circ A$ for every $f\in V^*$. Using the so-called "natural pairing" $\langle x, f\rangle := f(x)$, this can be stated as $\langle A(x), f\rangle = \langle x, A^*(f)\rangle$. Identifying $V$ with the $n$-dimensional column space and $V^*$ with the $n$-dimensional row space, which is canonical, and letting $A_n$ denote the matrix of $A$, we can rewrite the left-hand side to get $\langle A_n x, f\rangle = \langle x, A^*(f)\rangle$.

Now here is the confusion. Since $f\in V^*$ is a row vector, we should have $A^*(f) = fA_n$: right multiplication by the matrix $A_n$, without transposing it. Only if we apply the isomorphism $T: V\to V^*$ sending standard unit column vectors to standard row vectors, $T(e_i) := e_i^T$ for $1\leq i\leq n$ (which is exactly matrix transposition), do we get $T^{-1}(fA_n) = (fA_n)^T = A_n^T f^T$. Only now does the matrix transpose creep in. But $A_n^T f^T$ is not in $V^*$, it is in $V$! So we are actually left-multiplying a column vector in $V$ by $A_n^T$. Why, then, do we take writing $\langle A_n x, f\rangle = \langle x, A_n^T f\rangle$ for granted everywhere, when it should have been $\langle A_n x, f\rangle = \langle x, fA_n\rangle$?
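For what it's worth, a quick sanity check in plain Python (with a hypothetical $2\times 2$ matrix and hypothetical vectors) shows that the two expressions compute the same scalar, so whatever the right convention is, no numerical disagreement is at stake:

```python
# Sanity check: f(A x), (f A) x, and (A^T f^T) . x all give the same number.
# A, x, f below are hypothetical examples, not from any particular source.

def mat_vec(M, v):
    """Matrix (list of rows) times column vector."""
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def vec_mat(f, M):
    """Row vector times matrix."""
    return [sum(fi * c for fi, c in zip(f, col)) for col in zip(*M)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2],
     [3, 4]]
x = [5, 6]   # column vector in V
f = [7, 8]   # row vector in V*

lhs = dot(f, mat_vec(A, x))        # <A x, f> = f (A x)
rhs_row = dot(vec_mat(f, A), x)    # <x, f A>: right multiplication, no transpose
A_T = [list(c) for c in zip(*A)]   # transpose of A
rhs_col = dot(mat_vec(A_T, f), x)  # <x, A^T f^T>: column-vector convention

assert lhs == rhs_row == rhs_col   # all three agree
```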

Hope this makes sense. Thanks in advance.


1 Answer


There are (at least) two different ways we can express the elements of $V^*$ as arrays of numbers.

One of those ways is the one that you seem to be comfortable with: each $f\in V^*$ corresponds to a row vector, such that for all $v\in V$ we have $f(v)=fv$. (The right-hand side is a matrix multiplication of the row vector $f$ and the column vector $v$.) This representation is generally used when we don't have an inner product on $V$, and want to distinguish vectors and dual vectors.
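As a minimal sketch of this convention (plain Python, hypothetical numbers): $f$ is stored as a $1\times n$ row, $v$ as an $n\times 1$ column, and $f(v)$ is the $1\times 1$ matrix product $fv$:

```python
# Row-vector convention: f(v) is the matrix product (row f) x (column v).
# The entries below are hypothetical examples.

f = [2, -1, 3]   # 1x3 row vector representing the functional f
v = [1, 4, 2]    # 3x1 column vector in V

f_of_v = sum(fi * vi for fi, vi in zip(f, v))   # the 1x1 product f v
```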

However, we could just as easily have decided to represent $f\in V^*$ by a column vector, such that $f(v)=f\cdot v$. (This time the right-hand side is the Euclidean dot product.) This representation is usually better when we do have an inner product on $V$, or otherwise want to emphasize the choice of isomorphism between $V$ and $V^*$ by putting the two spaces on the same footing.
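A matching sketch for this second convention (hypothetical numbers again): the same entries are now stored as a column, $f(v)$ becomes the Euclidean dot product, and passing between the two storage formats is exactly transposition:

```python
# Column-vector convention: f is stored as a column and f(v) is the dot
# product f . v. The entries are hypothetical examples.

f_row = [2, -1, 3]     # row-vector representation of f
f_col = f_row[:]       # "transposed": same entries, now read as a column
v = [1, 4, 2]

dot_product = sum(a * b for a, b in zip(f_col, v))          # f . v
row_times_col = sum(a * b for a, b in zip(f_row, v))        # f v as 1x1 matrix

assert dot_product == row_times_col   # the two conventions agree on f(v)
```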

When you see someone write $\langle A_n x, f\rangle=\langle x, A_n^T f\rangle$, they are implicitly using the latter choice, where $V$ and $V^*$ are both represented using column vectors. Ultimately neither choice is right or wrong, and hopefully context will make it clear which representation is being used.
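A quick numerical check of that identity under the column-vector convention (the $2\times 2$ matrix and vectors are hypothetical):

```python
# Check that <A x, f> = <x, A^T f> when f is stored as a column vector.
# A, x, f are hypothetical examples.

def mat_vec(M, v):
    """Matrix (list of rows) times column vector."""
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2],
     [3, 4]]
A_T = [list(col) for col in zip(*A)]   # transpose of A
x = [5, 6]   # vector in V
f = [7, 8]   # functional in V*, stored as a column

lhs = dot(mat_vec(A, x), f)    # <A x, f>
rhs = dot(x, mat_vec(A_T, f))  # <x, A^T f>
assert lhs == rhs
```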

  • I understand that in an inner-product space. But the context here is the "natural pairing" in Banach spaces; I didn't say so explicitly because this pairing still makes sense more generally, without a norm. So you are saying that $V^*$ is usually still treated as a column space, the same as $V$, but should be understood as another copy of $V$? That seems the same as using what I called $T$, a basis-dependent isomorphism $T: V\to V^*$, which is exactly matrix transposition. Commented Aug 24, 2024 at 3:51
  • So we interpret both $v\in V$ and $f\in V^*$ as column vectors, and when we write $\langle x, f\rangle$, its matrix form is $f^T\cdot x$, where $\cdot$ is matrix multiplication, so that all matrix multiplications happen on the left. Is that the right understanding? Commented Aug 24, 2024 at 4:01
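A small check of the reading in the last comment (hypothetical numbers): with $x$ and $f$ both stored as columns, the pairing $\langle x, f\rangle$ is the $1\times 1$ matrix product $f^T x$, so every multiplication acts on the left:

```python
# With both x and f stored as columns, <x, f> is computed as f^T x,
# a 1x1 matrix product, i.e., a scalar. The entries are hypothetical.

x = [5, 6]   # column vector in V
f = [7, 8]   # column-vector representation of the functional f

pairing = sum(fi * xi for fi, xi in zip(f, x))   # f^T x
```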
