Apologies if this is silly or too simple, but I just realized I can't resolve a confusion over what "adjoint" or "transpose" is supposed to mean. Suppose $V$ is an $n$-dimensional $k$-vector space and $A: V\to V$ is a linear transformation. The adjoint of $A$ is defined as the map $A^*: V^* \to V^*$ satisfying $A^*(f) = f \circ A$ for every $f\in V^*$. If we use the so-called "natural pairing" $\langle x, f\rangle := f(x)$, this can be stated as $\langle A(x), f\rangle = \langle x, A^*(f)\rangle$. If we identify $V$ with the $n$-dimensional column space and $V^*$ with the $n$-dimensional row space, which is canonical, and let $A_n$ be the matrix of the transformation $A$, then we can rewrite the left-hand side to get $\langle A_n x, f\rangle = \langle x, A^*(f)\rangle$.
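To make this concrete, here is a quick NumPy sanity check I ran (just my own illustration; `A`, `x`, `f` are arbitrary placeholders, with the pairing computed as the $1\times 1$ product $fx$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # matrix A_n of the transformation A
x = rng.standard_normal((n, 1))   # column vector in V
f = rng.standard_normal((1, n))   # row vector, i.e. a functional in V*

# natural pairing <x, f> := f(x), here the 1x1 product f @ x
lhs = f @ (A @ x)        # <A_n x, f>
rhs = (f @ A) @ x        # <x, A*(f)> with A*(f) = f A_n (right multiplication)
print(np.allclose(lhs, rhs))  # True
```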
Now here is the confusion. Since $f\in V^*$ is a row vector, we should have $A^*(f) = f A_n$, that is, right multiplication by the matrix $A_n$, without transposing it. Only if we use the isomorphism $T: V\to V^*$ sending the standard unit column vectors to the standard unit row vectors, $T(e_i) := e_i^T$ for $1\leq i\leq n$ (which is matrix transposition), do we get $T^{-1}(f A_n) = (f A_n)^T = A_n^T f^T$. Only now does the matrix transpose creep in. But $f^T$ is not in $V^*$; it is in $V$! So we are actually left-multiplying a column vector in $V$ by $A_n^T$. So why do we take writing $\langle A_n x, f\rangle = \langle x, A_n^T f\rangle$ for granted everywhere, when it should have been $\langle A_n x, f\rangle = \langle x, f A_n\rangle$?
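And here is the same bookkeeping numerically (again just my own sketch, same placeholder names as above), showing that $f A_n$ and $A_n^T f^T$ are transposes of each other under $T$, and that both produce the same scalar when paired with $x$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal((n, 1))   # column vector in V
f = rng.standard_normal((1, n))   # row vector in V*

row_result = f @ A                # A*(f) as a row vector: right multiplication
col_result = A.T @ f.T            # the same functional pulled back to V via T^{-1}

# T^{-1}(f A_n) = (f A_n)^T = A_n^T f^T, so the two arrays are transposes
print(np.allclose(row_result.T, col_result))             # True

# and both give the same pairing with x
print(np.allclose((f @ A) @ x, (A.T @ f.T).T @ x))        # True
```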
Hope this makes sense. Thanks in advance.